
fix: api memory improvement #2762

Open
baktun14 wants to merge 2 commits into main from fix/api-memory-leaks

Conversation

@baktun14
Contributor

@baktun14 baktun14 commented Feb 17, 2026

Why

The memory-cache library doesn't enforce any size limit, so cached entries accumulate without bound.

What

Replace memory-cache with a bounded LRUCache.
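For context, here is a minimal, dependency-free sketch of what a bounded LRU cache gives you. The PR itself uses the lru-cache package; the TinyLRU class below is illustrative only:

```typescript
// Illustrative only: a tiny bounded LRU built on Map's insertion order.
// The PR uses the lru-cache package instead of hand-rolling this.
class TinyLRU<V> {
  private map = new Map<string, V>();
  constructor(private max: number) {}

  get(key: string): V | undefined {
    const value = this.map.get(key);
    if (value !== undefined) {
      // Re-insert to mark the entry as most recently used.
      this.map.delete(key);
      this.map.set(key, value);
    }
    return value;
  }

  set(key: string, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.max) {
      // Evict the least recently used entry (first in insertion order).
      this.map.delete(this.map.keys().next().value as string);
    }
  }
}
```

With max: 2, setting a third key evicts whichever existing entry was touched least recently — exactly the bound the unbounded memory-cache lacked.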

Summary by CodeRabbit

  • Improvements

    • Implemented bounded LRU caching strategy across the system for improved memory efficiency and controlled cache eviction.
    • Simplified health check status logic for more predictable behavior.
  • Chores

    • Removed memory-cache dependency and updated cache implementation throughout the system.

@baktun14 baktun14 requested a review from a team as a code owner February 17, 2026 18:13
@baktun14 baktun14 changed the title Fix/api memory leaks fix: api memory improvement Feb 17, 2026
@coderabbitai
Contributor

coderabbitai bot commented Feb 17, 2026

📝 Walkthrough

Walkthrough

This PR migrates the caching infrastructure from the memory-cache library to lru-cache, updating package dependencies, the MemoryCacheEngine implementation, and affected services to use LRU eviction policies with bounded memory limits. A minor health check initialization simplification is also included.

Changes

  • Dependency Update — apps/api/package.json: removed the memory-cache and @types/memory-cache dependencies.
  • Cache Engine Implementation — apps/api/src/caching/memoryCacheEngine.ts, apps/api/src/caching/helpers.ts: replaced memory-cache with LRUCache; updated cache methods (get, set, clear, delete); added a clearByKey alias method; memoizeAsync now uses a bounded LRU cache with max 100 entries.
  • Service Cache Migration — apps/api/src/deployment/services/cached-balance/cached-balance.service.ts: replaced the Map-based cache with LRUCache({ max: 10000 }) for balance caching.
  • Health Check Simplification — apps/api/src/healthz/services/healthz/healthz.service.ts: changed the Healthcheck.isFailed field from boolean | null to boolean with default false; simplified the error-path logic.
  • Test Cache Cleanup — apps/api/test/functional/provider-*.spec.ts: replaced memory-cache clear calls with cacheEngine.clearAllKeyInCache() in test teardown across three test files.
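The bounded memoizeAsync mentioned above can be sketched as follows. This is a hypothetical reconstruction, not the actual helper: the real code uses lru-cache with max: 100, while this sketch caps a plain Map and evicts FIFO rather than strict LRU:

```typescript
// Hypothetical sketch of a bounded async memoizer. The in-flight promise is
// cached, so concurrent calls with the same arguments share one invocation.
function memoizeAsync<A extends unknown[], R>(
  fn: (...args: A) => Promise<R>,
  max = 100
): (...args: A) => Promise<R> {
  const cache = new Map<string, Promise<R>>();
  return (...args: A) => {
    const key = JSON.stringify(args);
    let pending = cache.get(key);
    if (!pending) {
      pending = fn(...args).catch(err => {
        cache.delete(key); // don't cache failures
        throw err;
      });
      cache.set(key, pending);
      if (cache.size > max) {
        // Evict the oldest entry (FIFO here; lru-cache evicts least recently used).
        cache.delete(cache.keys().next().value as string);
      }
    }
    return pending;
  };
}
```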

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 Cache memory takes flight,
LRU bounds our delight,
No more unlimited hoard,
Smart eviction our reward,
Bounded limits, cleaner sight! 🎉

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 0.00%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.
  • Title check — ❓ Inconclusive: the title 'fix: api memory improvement' is vague and doesn't clearly convey the main change, which is replacing memory-cache with LRUCache to address memory leaks. Consider a more specific title like 'fix: replace memory-cache with LRUCache to prevent memory leaks'.

✅ Passed checks (1 passed)

  • Description check — ✅ Passed: the pull request description includes both 'Why' and 'What' sections as required by the template, with clear explanations of the motivation and the solution.


@codecov

codecov bot commented Feb 17, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 53.63%. Comparing base (fd91ace) to head (238392c).
✅ All tests successful. No failed tests found.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2762      +/-   ##
==========================================
- Coverage   53.64%   53.63%   -0.02%     
==========================================
  Files        1019     1019              
  Lines       23591    23578      -13     
  Branches     5754     5752       -2     
==========================================
- Hits        12655    12645      -10     
+ Misses       9531     9528       -3     
  Partials     1405     1405              
Flag Coverage Δ *Carryforward flag
api 76.94% <ø> (+0.01%) ⬆️ Carriedforward from fd91ace
deploy-web 35.92% <ø> (-0.01%) ⬇️
log-collector 75.35% <ø> (ø)
notifications 85.56% <ø> (ø)
provider-console 81.48% <ø> (ø)
provider-proxy 82.41% <ø> (ø)
tx-signer 79.29% <ø> (ø)

*This pull request uses carry forward flags.

Files with missing lines Coverage Δ
apps/api/src/caching/helpers.ts 100.00% <ø> (ø)
apps/api/src/caching/memoryCacheEngine.ts 54.54% <ø> (-0.46%) ⬇️
.../services/cached-balance/cached-balance.service.ts 100.00% <ø> (ø)
...pi/src/healthz/services/healthz/healthz.service.ts 100.00% <ø> (ø)

... and 2 files with indirect coverage changes


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/api/src/caching/memoryCacheEngine.ts`:
- Around line 3-4: Replace the explicit any used in the LRUCache instantiation
with unknown and remove the eslint-disable comment: change the cache declaration
from LRUCache<string, any> to LRUCache<string, unknown> so the MemoryCacheEngine
wrapper’s generic get/set methods maintain type safety without suppressing the
`@typescript-eslint/no-explicit-any` rule (look for the cache variable and
MemoryCacheEngine usages to update).
- Around line 25-26: The TTL unit is mismatched: make storeInCache treat its
duration parameter as seconds (to match cacheResponse callers) and convert to
milliseconds before passing to lru-cache; inside storeInCache(key: string, data:
T, duration?: number) compute const ttl = duration ? duration * 1000 : undefined
and call cache.set(key, data, ttl ? { ttl } : undefined). Update any references
to storeInCache (and leave cacheResponse(seconds) calls unchanged) so callers
can continue passing seconds without breaking behavior.

Comment on lines +3 to +4
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const cache = new LRUCache<string, any>({ max: 500 });
Contributor


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cd apps/api/src/caching && cat -n memoryCacheEngine.ts

Repository: akash-network/console

Length of output: 2034


🏁 Script executed:

cd apps/api/src/caching && rg "cache\.(set|get|del)" -A 2 -B 2

Repository: akash-network/console

Length of output: 1364


🏁 Script executed:

rg "lru-cache" --type ts --type json -i

Repository: akash-network/console

Length of output: 3852


🏁 Script executed:

cat package.json | grep -A 5 '"dependencies"'

Repository: akash-network/console

Length of output: 202


Remove any from cache typing.

The any type at line 4 violates the coding guideline requiring proper TypeScript types. Since the MemoryCacheEngine wrapper is designed to handle values of any type through its generic methods, use unknown instead to maintain type safety while removing the eslint-disable requirement.

Proposed fix
-// eslint-disable-next-line `@typescript-eslint/no-explicit-any`
-const cache = new LRUCache<string, any>({ max: 500 });
+const cache = new LRUCache<string, unknown>({ max: 500 });

Comment on lines 25 to +26
  storeInCache<T>(key: string, data: T, duration?: number) {
-   mcache.put(key, data, duration);
+   cache.set(key, data, duration ? { ttl: duration } : undefined);
Contributor


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# First, examine the memoryCacheEngine.ts file to understand the cache library
head -50 apps/api/src/caching/memoryCacheEngine.ts

Repository: akash-network/console

Length of output: 1180


🏁 Script executed:

#!/bin/bash
# Find all call sites of storeInCache to see what durations are passed
rg -n --type=ts -C 3 "storeInCache\s*\("

Repository: akash-network/console

Length of output: 4405


🏁 Script executed:

#!/bin/bash
# Search for cache library imports and initialization in the caching directory
rg -n "import.*cache|new.*Cache|require.*cache" apps/api/src/caching/ --type=ts

Repository: akash-network/console

Length of output: 624


🏁 Script executed:

#!/bin/bash
# Check package.json for lru-cache version
cat apps/api/package.json | grep -A 2 -B 2 "lru-cache"

Repository: akash-network/console

Length of output: 209


🏁 Script executed:

#!/bin/bash
# Search for any TODOs, comments, or code related to duration/ttl
rg -n "duration|ttl|millisecond|second" apps/api/src/caching/ --type=ts -C 2

Repository: akash-network/console

Length of output: 2270


🏁 Script executed:

#!/bin/bash
# Check if there are any calls to storeInCache with a duration parameter
rg -n "storeInCache.*\d" apps/api/src --type=ts

Repository: akash-network/console

Length of output: 47


🏁 Script executed:

#!/bin/bash
# Check if cacheResponse actually passes the seconds parameter to storeInCache
cat -n apps/api/src/caching/helpers.ts | head -120 | tail -90

Repository: akash-network/console

Length of output: 3900


🏁 Script executed:

#!/bin/bash
# Look at the full cacheResponse function to see if/where duration is passed to storeInCache
sed -n '40,110p' apps/api/src/caching/helpers.ts

Repository: akash-network/console

Length of output: 2683


🌐 Web query:

lru-cache npm package ttl milliseconds or seconds

💡 Result:

For the popular lru-cache npm package, ttl is in milliseconds (must be a positive integer if provided). The README/docs describe it as “Time to live in milliseconds,” and related options like ttlResolution are also in ms. [1], [2]

So:

  • ttl: 1000 = 1 second
  • ttl: 60000 = 60 seconds / 1 minute [1], [2]

Fix TTL unit mismatch: duration parameter must be in milliseconds, not seconds.

The lru-cache library expects TTL in milliseconds, but the duration parameter name and context suggest it should be in seconds. Currently, no callers pass a duration value, so this latent bug hasn't manifested. However, the method signature invites incorrect usage. The cacheResponse function receives a seconds parameter that should be converted to milliseconds before being passed to storeInCache (lines 67, 99), or the parameter should be documented and renamed to clarify it expects milliseconds.

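A minimal sketch of the fix described above, assuming the duration parameter stays in seconds to match cacheResponse callers; the injectable cache parameter is an illustrative stand-in for the module-level LRUCache instance:

```typescript
// Sketch only: convert the seconds-based duration to the milliseconds that
// lru-cache's ttl option expects. `cache` stands in for the module-level instance.
function storeInCache<T>(
  cache: { set(key: string, value: T, opts?: { ttl: number }): void },
  key: string,
  data: T,
  duration?: number // seconds, as cacheResponse callers pass
): void {
  const ttl = duration ? duration * 1000 : undefined; // lru-cache TTLs are in ms
  cache.set(key, data, ttl ? { ttl } : undefined);
}
```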

@ygrishajev
Contributor

although this is not introducing radical changes, I think the cache engine should be reworked as an interface utilising various storages. E.g. in local dev LRUCache is totally fine; in prod we should use a distributed storage like redis.

@stalniy wdyt?
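One possible shape for that rework is sketched below. The interface and class names are hypothetical, and the Redis variant is only noted in comments since its client API depends on the library chosen:

```typescript
// Hypothetical contract: services depend on CacheEngine, and the wiring
// decides the backend (in-memory locally, something distributed in prod).
interface CacheEngine {
  get<T>(key: string): Promise<T | undefined>;
  set<T>(key: string, value: T, ttlSeconds?: number): Promise<void>;
  delete(key: string): Promise<void>;
}

// Local/dev backend. A production RedisCacheEngine would implement the same
// interface on top of a Redis client (GET / SET with an expiry), not shown here.
class InMemoryCacheEngine implements CacheEngine {
  private store = new Map<string, { value: unknown; expiresAt?: number }>();

  async get<T>(key: string): Promise<T | undefined> {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (entry.expiresAt !== undefined && Date.now() > entry.expiresAt) {
      this.store.delete(key); // lazily drop expired entries
      return undefined;
    }
    return entry.value as T;
  }

  async set<T>(key: string, value: T, ttlSeconds?: number): Promise<void> {
    this.store.set(key, {
      value,
      expiresAt: ttlSeconds !== undefined ? Date.now() + ttlSeconds * 1000 : undefined
    });
  }

  async delete(key: string): Promise<void> {
    this.store.delete(key);
  }
}
```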

import { LRUCache } from "lru-cache";

// eslint-disable-next-line @typescript-eslint/no-explicit-any
const cache = new LRUCache<string, any>({ max: 500 });
Contributor


Suggested change
-const cache = new LRUCache<string, any>({ max: 500 });
+export type CacheValue = NonNullable<unknown>;
+const cache = new LRUCache<string, CacheValue>({ max: 500 });
....
+storeInCache<T extends CacheValue>(key: string, data: T, duration?: number) {

if (cachedBody !== undefined) {
return cachedBody as T;
}
return false;
Contributor

@ygrishajev ygrishajev Feb 18, 2026


question: false is a valid value to store. Any idea why we return it as a fallback?
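A small illustration of the problem raised here, with hypothetical names: once false is itself a stored value, a false return can't distinguish a hit from a miss, whereas undefined (which lru-cache returns on a miss) keeps the two cases distinct.

```typescript
// Hypothetical demonstration of the ambiguity, not the actual engine code.
const store = new Map<string, unknown>();

// Mirrors the quoted pattern: falls back to `false` on a miss.
function getAmbiguous<T>(key: string): T | false {
  const cachedBody = store.get(key);
  if (cachedBody !== undefined) {
    return cachedBody as T;
  }
  return false; // a miss looks identical to a cached `false`
}

// Returning `undefined` on a miss keeps hit and miss distinguishable.
function getDistinct<T>(key: string): T | undefined {
  return store.get(key) as T | undefined;
}

store.set("flag", false);
// getAmbiguous("flag") and getAmbiguous("missing") both return false.
// getDistinct("flag") returns false; getDistinct("missing") returns undefined.
```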

@stalniy
Contributor

stalniy commented Feb 18, 2026

although this is not introducing radical changes, I think the cache engine should be reworked as an interface utilising various storages. E.g. in local dev LRUCache is totally fine; in prod we should use a distributed storage like redis.

@stalniy wdyt?

I actually like this one because it's minus 1 dependency. And this is the thing which I wanted to do by myself for a long time :)

I think that we should make a discussion for this and consider other options like https://bentocache.dev/docs/introduction

baktun14 and others added 2 commits February 19, 2026 11:35
The healthz service was returning unhealthy immediately on the first
failed check after startup (when isFailed was null), bypassing the
failure tolerance logic. This could cause liveness probe failures
during pod startup when DB connections aren't yet established.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
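The behavior this commit message describes can be sketched as follows; the class shape, threshold, and method names are assumptions for illustration, not the actual healthz service code:

```typescript
// Illustrative sketch: tolerate transient failures (e.g. DB not yet up at pod
// startup) and only report unhealthy after several consecutive failed checks.
class HealthcheckSketch {
  private isFailed = false;      // was `boolean | null`; now defaults to false
  private consecutiveFailures = 0;

  constructor(private readonly failureTolerance = 3) {}

  // Records one check result and returns whether the service reports healthy.
  record(checkPassed: boolean): boolean {
    if (checkPassed) {
      this.consecutiveFailures = 0;
      this.isFailed = false;
    } else {
      this.consecutiveFailures += 1;
      // With isFailed defaulting to false, the first startup failure no longer
      // short-circuits straight to unhealthy.
      this.isFailed = this.consecutiveFailures >= this.failureTolerance;
    }
    return !this.isFailed;
  }
}
```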
@baktun14 baktun14 force-pushed the fix/api-memory-leaks branch from f9d164b to 238392c Compare February 19, 2026 16:36