fix(jsonrpc): enforce log filter cap and improve match efficiency#71
Conversation
📝 Walkthrough

Adds a configurable max log-filter limit (`node.jsonrpc.maxLogFilterNum`) enforced in `eth_newFilter`, plus efficiency improvements to log-filter matching.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant TronJsonRpc
    participant TronJsonRpcImpl
    participant Config as CommonParameter
    participant FilterRegistry as Filter Registry
    Client->>TronJsonRpc: eth_newFilter(FilterRequest)
    TronJsonRpc->>TronJsonRpcImpl: newFilter(fr)
    TronJsonRpcImpl->>Config: getJsonRpcMaxLogFilterNum()
    Config-->>TronJsonRpcImpl: maxLogFilterNum
    TronJsonRpcImpl->>FilterRegistry: query current filter count
    FilterRegistry-->>TronJsonRpcImpl: currentCount
    alt currentCount >= maxLogFilterNum
        TronJsonRpcImpl-->>TronJsonRpc: throw JsonRpcExceedLimitException
        TronJsonRpc-->>Client: Error (-32005)
    else
        TronJsonRpcImpl->>FilterRegistry: create filter -> store
        FilterRegistry-->>TronJsonRpcImpl: filterId
        TronJsonRpcImpl-->>TronJsonRpc: return filterId
        TronJsonRpc-->>Client: Success
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
2 issues found across 10 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpcImpl.java">
<violation number="1" location="framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpcImpl.java:236">
P0: The `ForkJoinPool` is created on every `handleLogsFilter` call but never shut down. This leaks worker threads on each invocation. Either shut down the pool in a `finally` block after `join()`, or make it a static/instance-level field that is reused across calls.</violation>
<violation number="2" location="framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpcImpl.java:1431">
P2: Inconsistent boundary check: block filter uses `>=` but log filter uses `>`, allowing one extra filter beyond `maxLogFilterNum`. Use `>=` to match the block filter behavior.</violation>
</file>
Force-pushed 20f0cc1 to 9897ea8 ("…e addAll matchedLog instead after")
Actionable comments posted: 4
🧹 Nitpick comments (1)
framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpcImpl.java (1)
251-291: Add a clarifying comment for ConcurrentHashMap iteration and removal.

Line 257's `eventFilterMap.remove(entry.getKey())` during iteration is safe for `ConcurrentHashMap` (weakly consistent iterator), but consider adding a brief inline comment explaining this is intentional and safe, preventing future contributors from mistakenly porting this pattern to a non-concurrent collection.

The concurrent mutation concern at line 290 is mitigated: `LogFilterAndResult.result` is a `LinkedBlockingQueue`, so concurrent `addAll()` calls from `ForkJoinPool` workers and `drainTo()` calls from RPC reader threads are both thread-safe.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpcImpl.java` around lines 251 - 291, The code in processLogFilterEntry calls eventFilterMap.remove(entry.getKey()) while iterating a ConcurrentHashMap; add a concise inline comment next to that remove explaining this is intentional and safe because eventFilterMap is a ConcurrentHashMap with a weakly-consistent iterator (so removals during iteration are allowed), and warn contributors not to copy this pattern to non-concurrent maps; also add a short comment by the getResult().addAll(...) call noting that LogFilterAndResult.getResult() is a LinkedBlockingQueue so concurrent addAll() from ForkJoinPool workers and drainTo() from RPC reader threads is thread-safe.
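The weakly-consistent iteration behavior described above can be demonstrated in isolation. This is a minimal sketch; the map contents and the `pruneNegatives` helper are illustrative, not java-tron code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WeaklyConsistentIteration {
    // ConcurrentHashMap iterators are weakly consistent: removing entries via
    // the map while iterating is allowed and never throws
    // ConcurrentModificationException (a plain HashMap would throw here).
    static int pruneNegatives(ConcurrentHashMap<String, Integer> map) {
        int removed = 0;
        for (Map.Entry<String, Integer> e : map.entrySet()) {
            if (e.getValue() < 0) {
                map.remove(e.getKey()); // safe only for concurrent maps
                removed++;
            }
        }
        return removed;
    }

    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> m = new ConcurrentHashMap<>();
        m.put("a", 1);
        m.put("b", -2);
        m.put("c", -3);
        System.out.println(pruneNegatives(m));
        System.out.println(m.size());
    }
}
```

Swapping `ConcurrentHashMap` for `HashMap` in this sketch makes the loop throw, which is exactly the trap the suggested inline comment guards against.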
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@framework/src/main/java/org/tron/core/config/args/Args.java`:
- Around line 268-271: When reading ConfigKey.NODE_JSONRPC_MAX_LOG_FILTER_NUM in
Args (setting PARAMETER.jsonRpcMaxLogFilterNum), validate the value is a
positive integer (>0) and reject non-positive values; if invalid, log an error
and fail fast (throw an IllegalArgumentException or exit) so the node won't
start with zero/negative maxLogFilterNum which breaks TronJsonRpcImpl's
eth_newFilter logic (see TronJsonRpcImpl.java:1443). Locate the assignment in
Args and add the check immediately after config.getInt(...), referencing
PARAMETER.jsonRpcMaxLogFilterNum for the value and ensuring a clear error
message that includes the invalid value.
In `@framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpcImpl.java`:
- Around line 228-249: The current per-block timing log in handleLogsFilter uses
logger.info which floods INFO logs; change it to logger.debug (or wrap with
logger.isDebugEnabled()) so timing is only emitted at DEBUG level, or
alternatively emit at INFO only when (t2 - t1) exceeds a chosen threshold;
update the call that currently uses logger.info(...) to use logger.debug(...) or
add the conditional check and keep the same message, referencing
handleLogsFilter, logger.info, logger.debug, and the timing variables t1/t2 and
eventFilterMap.size().
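The threshold-gated logging suggested above can be sketched with `java.util.logging` from the JDK. The `SLOW_MS` value and method names are hypothetical assumptions; java-tron itself uses an slf4j-style `logger.debug`:

```java
import java.util.function.Supplier;
import java.util.logging.Level;
import java.util.logging.Logger;

public class TimingLog {
    private static final Logger logger = Logger.getLogger("jsonrpc");
    // Hypothetical threshold (ms); the review leaves the actual value open.
    static final long SLOW_MS = 100;

    // Routine timings go to FINE (debug-equivalent); only slow blocks are
    // promoted to INFO, so the INFO log is no longer flooded per block.
    static Level levelFor(long elapsedMs) {
        return elapsedMs > SLOW_MS ? Level.INFO : Level.FINE;
    }

    static void logTiming(long t1, long t2, int filterCount) {
        long elapsed = t2 - t1;
        // Supplier defers message construction until the level is enabled.
        Supplier<String> msg = () ->
            "handleLogsFilter took " + elapsed + " ms for " + filterCount + " filters";
        logger.log(levelFor(elapsed), msg);
    }

    public static void main(String[] args) {
        logTiming(0, 5, 100);   // FINE: suppressed at the default log level
        logTiming(0, 250, 100); // INFO: emitted
    }
}
```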
- Around line 174-175: The static LOGS_FILTER_POOL is created with new
ForkJoinPool(2) while the PR text said parallelism 3 and it is never shut down,
causing thread leaks; reconcile the intended parallelism (use 3 or make it
configurable using available processors), replace the hardcoded new
ForkJoinPool(2) by the chosen value (or a supplier/config param) and ensure the
pool is properly shutdown in the class close() method alongside sectionExecutor
(call LOGS_FILTER_POOL.shutdown() and await termination or forceShutdown as
appropriate); reference FILTER_PARALLEL_THRESHOLD, LOGS_FILTER_POOL, close(),
and sectionExecutor when making these changes.
- Around line 1443-1446: The size() check on eventFilter2Result and the
subsequent put() can race under concurrent eth_newFilter calls; change the
insertion to an atomic compute/computeIfAbsent or use eventFilter2Result.compute
to both test and insert in one atomic operation (referencing eventFilter2Result,
maxLogFilterNum, and the eth_newFilter insertion logic) so that you only
create/accept a new filter if the total stays <= maxLogFilterNum; alternatively,
perform the put and then immediately re-check the map size and remove/rollback
the newly inserted entry if the cap is exceeded to enforce the limit atomically.
---
Nitpick comments:
In `@framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpcImpl.java`:
- Around line 251-291: The code in processLogFilterEntry calls
eventFilterMap.remove(entry.getKey()) while iterating a ConcurrentHashMap; add a
concise inline comment next to that remove explaining this is intentional and
safe because eventFilterMap is a ConcurrentHashMap with a weakly-consistent
iterator (so removals during iteration are allowed), and warn contributors not
to copy this pattern to non-concurrent maps; also add a short comment by the
getResult().addAll(...) call noting that LogFilterAndResult.getResult() is a
LinkedBlockingQueue so concurrent addAll() from ForkJoinPool workers and
drainTo() from RPC reader threads is thread-safe.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 2f546df6-e48d-4a43-8ae7-6129380594c1
📒 Files selected for processing (11)
- common/src/main/java/org/tron/common/parameter/CommonParameter.java
- framework/src/main/java/org/tron/core/config/args/Args.java
- framework/src/main/java/org/tron/core/config/args/ConfigKey.java
- framework/src/main/java/org/tron/core/db/Manager.java
- framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpc.java
- framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpcImpl.java
- framework/src/main/java/org/tron/core/services/jsonrpc/filters/LogMatch.java
- framework/src/main/resources/config.conf
- framework/src/test/resources/config-localtest.conf
- framework/src/test/resources/config-test-mainnet.conf
- framework/src/test/resources/config-test.conf
```java
if (config.hasPath(ConfigKey.NODE_JSONRPC_MAX_LOG_FILTER_NUM)) {
  PARAMETER.jsonRpcMaxLogFilterNum =
      config.getInt(ConfigKey.NODE_JSONRPC_MAX_LOG_FILTER_NUM);
}
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Description: Inspect all uses of the new JSON-RPC log filter cap and related tests.
# Expectation: Enforcement assumes a strictly positive cap, and tests cover zero/negative config values if allowed.
rg -n -C4 '\bjsonRpcMaxLogFilterNum\b|getJsonRpcMaxLogFilterNum|NODE_JSONRPC_MAX_LOG_FILTER_NUM|eth_newFilter|newFilter\s*\(' --glob '*.java'
```

Repository: 317787106/java-tron
Length of output: 17055

🏁 Script executed:

```bash
#!/bin/bash
# Find where maxLogFilterNum is compared/used in enforcement logic
rg -n -B2 -A2 'maxLogFilterNum\s*[<>]=?\s*|<\s*maxLogFilterNum|>\s*maxLogFilterNum' --glob '*.java'
```

Repository: 317787106/java-tron
Length of output: 45

🏁 Script executed:

```bash
#!/bin/bash
# Broader search for all usages of maxLogFilterNum variable
rg -n 'maxLogFilterNum' --glob '*.java' -A3 -B1
```

Repository: 317787106/java-tron
Length of output: 2332

🏁 Script executed:

```bash
#!/bin/bash
# Verify the exact context around lines 268-271 in Args.java
sed -n '260,280p' framework/src/main/java/org/tron/core/config/args/Args.java
```

Repository: 317787106/java-tron
Length of output: 789
Add validation for maxLogFilterNum to reject non-positive values.
Line 270 accepts zero or negative values, which would break all `eth_newFilter` calls due to the check at TronJsonRpcImpl.java:1443 (`if (eventFilter2Result.size() >= maxLogFilterNum)`). Validate that the parameter is positive when loading the config to fail fast.
Proposed validation

```diff
 if (config.hasPath(ConfigKey.NODE_JSONRPC_MAX_LOG_FILTER_NUM)) {
-  PARAMETER.jsonRpcMaxLogFilterNum =
-      config.getInt(ConfigKey.NODE_JSONRPC_MAX_LOG_FILTER_NUM);
+  int maxLogFilterNum = config.getInt(ConfigKey.NODE_JSONRPC_MAX_LOG_FILTER_NUM);
+  if (maxLogFilterNum <= 0) {
+    throw new IllegalArgumentException(
+        ConfigKey.NODE_JSONRPC_MAX_LOG_FILTER_NUM + " must be greater than 0");
+  }
+  PARAMETER.jsonRpcMaxLogFilterNum = maxLogFilterNum;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@framework/src/main/java/org/tron/core/config/args/Args.java` around lines 268
- 271, When reading ConfigKey.NODE_JSONRPC_MAX_LOG_FILTER_NUM in Args (setting
PARAMETER.jsonRpcMaxLogFilterNum), validate the value is a positive integer (>0)
and reject non-positive values; if invalid, log an error and fail fast (throw an
IllegalArgumentException or exit) so the node won't start with zero/negative
maxLogFilterNum which breaks TronJsonRpcImpl's eth_newFilter logic (see
TronJsonRpcImpl.java:1443). Locate the assignment in Args and add the check
immediately after config.getInt(...), referencing
PARAMETER.jsonRpcMaxLogFilterNum for the value and ensuring a clear error
message that includes the invalid value.
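The fail-fast validation proposed above can be isolated into a standalone sketch. The `requirePositive` helper is hypothetical; the real check would live inline in `Args.java` after `config.getInt(...)`:

```java
public class ConfigValidation {
    // Hypothetical helper mirroring the proposed Args.java check: reject
    // non-positive values at startup so the node fails fast with a clear
    // message instead of breaking every eth_newFilter call later.
    static int requirePositive(String key, int value) {
        if (value <= 0) {
            throw new IllegalArgumentException(
                key + " must be greater than 0, got: " + value);
        }
        return value;
    }

    public static void main(String[] args) {
        // Valid value passes through unchanged.
        System.out.println(requirePositive("node.jsonrpc.maxLogFilterNum", 20000));
        // Invalid value aborts with the offending value in the message.
        try {
            requirePositive("node.jsonrpc.maxLogFilterNum", 0);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```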
```java
if (eventFilter2Result.size() >= maxLogFilterNum) {
  throw new JsonRpcExceedLimitException(
      "exceed max log filters: " + maxLogFilterNum + ", try again later");
}
```
TOCTOU race: size check is not atomic with insert.
The `size()` check on line 1443 and the `put()` on line 1450 are not performed atomically on the `ConcurrentHashMap`. Under concurrent `eth_newFilter` requests, multiple callers can observe `size() < maxLogFilterNum` simultaneously and all proceed to insert, exceeding the configured cap. Given the default cap is 20,000 this is a soft-limit abuse concern rather than a correctness bug, but it defeats the purpose of the cap under burst load.
Consider using compute/computeIfAbsent pattern, or re-check size after put and rollback if exceeded:
🛡️ Example hardening

```diff
-    if (eventFilter2Result.size() >= maxLogFilterNum) {
-      throw new JsonRpcExceedLimitException(
-          "exceed max log filters: " + maxLogFilterNum + ", try again later");
-    }
     long currentMaxFullNum = wallet.getNowBlock().getBlockHeader().getRawData().getNumber();
     LogFilterAndResult logFilterAndResult = new LogFilterAndResult(fr, currentMaxFullNum, wallet);
     String filterID = generateFilterId();
-    eventFilter2Result.put(filterID, logFilterAndResult);
+    if (eventFilter2Result.size() >= maxLogFilterNum) {
+      throw new JsonRpcExceedLimitException(
+          "exceed max log filters: " + maxLogFilterNum + ", try again later");
+    }
+    eventFilter2Result.put(filterID, logFilterAndResult);
+    if (eventFilter2Result.size() > maxLogFilterNum) {
+      eventFilter2Result.remove(filterID);
+      throw new JsonRpcExceedLimitException(
+          "exceed max log filters: " + maxLogFilterNum + ", try again later");
+    }
     return ByteArray.toJsonHex(filterID);
```

The same concern applies to newBlockFilter (lines 1465-1468), which is preexisting and out of scope but worth noting for a follow-up.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpcImpl.java`
around lines 1443 - 1446, The size() check on eventFilter2Result and the
subsequent put() can race under concurrent eth_newFilter calls; change the
insertion to an atomic compute/computeIfAbsent or use eventFilter2Result.compute
to both test and insert in one atomic operation (referencing eventFilter2Result,
maxLogFilterNum, and the eth_newFilter insertion logic) so that you only
create/accept a new filter if the total stays <= maxLogFilterNum; alternatively,
perform the put and then immediately re-check the map size and remove/rollback
the newly inserted entry if the cap is exceeded to enforce the limit atomically.
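The put-then-recheck-rollback variant can be reduced to a small self-contained sketch. `tryRegister`, the map, and the cap parameter are stand-ins for `eventFilter2Result` and `maxLogFilterNum`, not actual java-tron code:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class FilterCap {
    // Insert first, then re-check and roll back if the cap was exceeded.
    // Two racing callers may both pass the fast-path size() check, but any
    // caller whose insert pushes the map past the cap removes its own entry
    // again, so the map never settles above the cap.
    static boolean tryRegister(ConcurrentMap<String, Object> filters, int cap,
                               String filterId, Object filter) {
        if (filters.size() >= cap) {
            return false; // fast path: already full
        }
        filters.put(filterId, filter);
        if (filters.size() > cap) {
            filters.remove(filterId); // rollback under race
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        ConcurrentMap<String, Object> filters = new ConcurrentHashMap<>();
        System.out.println(tryRegister(filters, 2, "a", new Object()));
        System.out.println(tryRegister(filters, 2, "b", new Object()));
        System.out.println(tryRegister(filters, 2, "c", new Object()));
    }
}
```

Note the rollback enforces "never stays above the cap" rather than exact admission order; under a race the map can transiently hold cap + 1 entries before the losing caller removes its own, which matches the soft-limit intent described in the review.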
```java
private static final String ERROR_SELECTOR = "08c379a0"; // Function selector for Error(string)
private static final int FILTER_PARALLEL_THRESHOLD = 10000;
private static final ForkJoinPool LOGS_FILTER_POOL = new ForkJoinPool();
```
[SHOULD] Static LOGS_FILTER_POOL shut down by instance close() breaks reuse
`LOGS_FILTER_POOL` is `private static final`, but `close()` (line 1598) calls `shutdownAndAwaitTermination` on it. If any instance of `TronJsonRpcImpl` is closed (Spring `@DirtiesContext` integration tests, or any future lifecycle that recreates the bean), subsequent `handleLogsFilter` calls that hit the parallel branch will throw `RejectedExecutionException`: the class-level pool is permanently dead after the first close. Separately, `shutdownAndAwaitTermination(LOGS_FILTER_POOL, "")` passes an empty name, so the log line becomes `Pool  shutdown...` (two spaces), which is useless for operators.
Suggestion: Either make the pool an instance field (constructed in the constructor, torn down in close()), or keep it static and stop shutting it down from close() — use daemon worker threads so the JVM can exit cleanly. Pass a meaningful name (e.g. "jsonrpc-logs-filter-pool") either way.
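A runnable sketch of the static daemon-pool option follows; the pool size, thread-name prefix, and the `parallelSum` task are illustrative assumptions, not java-tron's actual values:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.ForkJoinWorkerThread;
import java.util.stream.IntStream;

public class DaemonPool {
    // Static pool with named daemon workers: instance close() never shuts it
    // down, and daemon threads let the JVM exit without an explicit shutdown.
    static final ForkJoinPool LOGS_FILTER_POOL = new ForkJoinPool(
        2,
        pool -> {
            ForkJoinWorkerThread w =
                ForkJoinPool.defaultForkJoinWorkerThreadFactory.newThread(pool);
            w.setName("jsonrpc-logs-filter-" + w.getPoolIndex());
            w.setDaemon(true); // JVM can exit cleanly
            return w;
        },
        null,   // default uncaught-exception handling
        false); // default (LIFO) work-queue mode

    // Illustrative parallel task submitted to the shared pool.
    static int parallelSum(int n) {
        return LOGS_FILTER_POOL.submit(
            () -> IntStream.rangeClosed(1, n).parallel().sum()).join();
    }

    public static void main(String[] args) {
        System.out.println(parallelSum(10));
    }
}
```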
⚠️ Caution: Some comments are outside the diff and can't be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
framework/src/test/java/org/tron/core/jsonrpc/WalletCursorTest.java (1)
88-94: ⚠️ Potential issue | 🟡 Minor

`close()` is unreachable on the happy path of this test; move it to `finally`.

This test asserts that `buildTransaction` throws because the cursor is `SOLIDITY` (see lines 92-93). When the test passes as intended, line 90 is never executed, so the `TronJsonRpcImpl` instance, and the static `LOGS_FILTER_POOL` it owns, is never cleaned up. If the test ever regresses to a non-throwing path, `close()` will run and (as noted on TronJsonRpcImpl.java line 175) shut down the static pool for the rest of the JVM.

Move the cleanup into a `finally` block so it's unconditional, and co-locate it with the instance lifecycle:

🧹 Proposed fix

```diff
 TronJsonRpcImpl tronJsonRpc = new TronJsonRpcImpl(nodeInfoService, wallet, dbManager);
 try {
   tronJsonRpc.buildTransaction(buildArguments);
-  tronJsonRpc.close();
 } catch (Exception e) {
   Assert.assertEquals("the method buildTransaction does not exist/is not available in "
       + "SOLIDITY", e.getMessage());
+} finally {
+  try {
+    tronJsonRpc.close();
+  } catch (IOException ioe) {
+    // ignore on teardown
+  }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@framework/src/test/java/org/tron/core/jsonrpc/WalletCursorTest.java` around lines 88 - 94, The test currently calls tronJsonRpc.close() only in the try path so the TronJsonRpcImpl instance and its static LOGS_FILTER_POOL may not be cleaned up when buildTransaction throws; move the close() call into a finally block and ensure the tronJsonRpc variable is declared/initialized outside the try so it can be closed unconditionally (i.e., surround the try/catch with try { ... } catch(...) { ... } finally { tronJsonRpc.close(); }) referencing buildTransaction and close on the TronJsonRpcImpl instance used in WalletCursorTest.
♻️ Duplicate comments (1)
framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpcImpl.java (1)
175: ⚠️ Potential issue | 🟠 Major

Static `LOGS_FILTER_POOL` permanently dies after first `close()`.

`LOGS_FILTER_POOL` is `private static final` (line 175) but `close()` shuts it down on every instance tear-down (line 1598). After the first instance is closed, the class-scoped pool is terminated for the lifetime of the JVM, and any subsequent `TronJsonRpcImpl` that hits the parallel branch in `handleLogsFilter` will throw `RejectedExecutionException` at `LOGS_FILTER_POOL.submit(...)`.

This is now actively triggerable: `WalletCursorTest.testEnableInFullNode` and `testDisableInSolidity` have been updated in this PR to call `tronJsonRpc.close()`, so test suites that construct multiple instances in the same JVM (Spring `@DirtiesContext`, multiple `@Test` methods, or any lifecycle that recreates the bean) will leave the class in a broken state.

Pick one of the two ownership models:

Option A: keep the pool static; stop shutting it down from instance `close()`

```diff
-  private static final ForkJoinPool LOGS_FILTER_POOL = new ForkJoinPool(2);
+  private static final ForkJoinPool LOGS_FILTER_POOL = new ForkJoinPool(
+      2,
+      pool -> {
+        ForkJoinWorkerThread w = ForkJoinPool.defaultForkJoinWorkerThreadFactory.newThread(pool);
+        w.setName("jsonrpc-logs-filter-" + w.getPoolIndex());
+        w.setDaemon(true);
+        return w;
+      },
+      null,
+      false);

   public void close() throws IOException {
-    ExecutorServiceManager.shutdownAndAwaitTermination(LOGS_FILTER_POOL, "logs-filter-pool");
     logElementCache.invalidateAll();
     blockHashCache.invalidateAll();
     ExecutorServiceManager.shutdownAndAwaitTermination(sectionExecutor, esName);
   }
```

Option B: make the pool an instance field tied to the bean lifecycle

```diff
-  private static final ForkJoinPool LOGS_FILTER_POOL = new ForkJoinPool(2);
+  private final ForkJoinPool logsFilterPool = new ForkJoinPool(2);
```

Then update `handleLogsFilter`/`processLogFilterEntry` to be instance methods (or pass the pool in), and keep the existing shutdown in `close()`. Note this also requires rethinking the `static` callers in `Manager`/event dispatch.

Option A is the minimal fix and matches the intent that `handleLogsFilter` is a static class-level service.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpcImpl.java` at line 175, The static LOGS_FILTER_POOL is being shut down in the instance close() causing RejectedExecutionException for subsequent instances; to fix, adopt Option A: treat LOGS_FILTER_POOL as a static shared resource and remove (or guard against) any shutdown/close logic from TronJsonRpcImpl.close() that calls LOGS_FILTER_POOL.shutdown()/shutdownNow()/close; ensure handleLogsFilter and processLogFilterEntry continue to submit to LOGS_FILTER_POOL as a static pool and verify no other code paths attempt to terminate that static pool.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@framework/src/test/java/org/tron/core/jsonrpc/WalletCursorTest.java`:
- Around line 88-94: The test currently calls tronJsonRpc.close() only in the
try path so the TronJsonRpcImpl instance and its static LOGS_FILTER_POOL may not
be cleaned up when buildTransaction throws; move the close() call into a finally
block and ensure the tronJsonRpc variable is declared/initialized outside the
try so it can be closed unconditionally (i.e., surround the try/catch with try {
... } catch(...) { ... } finally { tronJsonRpc.close(); }) referencing
buildTransaction and close on the TronJsonRpcImpl instance used in
WalletCursorTest.
---
Duplicate comments:
In `@framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpcImpl.java`:
- Line 175: The static LOGS_FILTER_POOL is being shut down in the instance
close() causing RejectedExecutionException for subsequent instances; to fix,
adopt Option A: treat LOGS_FILTER_POOL as a static shared resource and remove
(or guard against) any shutdown/close logic from TronJsonRpcImpl.close() that
calls LOGS_FILTER_POOL.shutdown()/shutdownNow()/close; ensure handleLogsFilter
and processLogFilterEntry continue to submit to LOGS_FILTER_POOL as a static
pool and verify no other code paths attempt to terminate that static pool.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f848f185-10c5-4ee0-9a9f-05c3d8e47094
📒 Files selected for processing (2)
- framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpcImpl.java
- framework/src/test/java/org/tron/core/jsonrpc/WalletCursorTest.java
1 issue found across 1 file (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="framework/src/main/java/org/tron/core/db/Manager.java">
<violation number="1">
P1: EventBloomException is no longer included in the fork-apply catch, so EventBloom failures skip the switch-back rollback path and can leave fork switching in a partially applied state.</violation>
</file>
What does this PR do?

This PR improves robustness and performance of the JSON-RPC log filter subsystem (`eth_newFilter`/`eth_getLogs`). Closes tronprotocol#6510

Changes:

Enforce a configurable cap on active `eth_newFilter` filters
- New `node.jsonrpc.maxLogFilterNum` config item (default: 20,000).
- When the cap is reached, `eth_newFilter` returns `JsonRpcExceedLimitException` (code -32005) instead of silently growing without bound.

Optimize `handleLogsFilter` for large filter maps
- Sequential `forEach` when the filter map size ≤ 10,000 (common case).
- `ForkJoinPool(2)` with `parallelStream` only when the filter count exceeds the threshold, avoiding excessive thread creation.
- Batch results with `addAll`, reducing lock contention on the shared result collection.
- Per-block timing log (`logger.debug`) for observability.

Fix over-limit check in `LogMatch.matchBlockOneByOne`
- Check `size + matchedLog.size() > MAX_RESULT` before calling `addAll`, preventing the result list from silently exceeding `MAX_RESULT`.

Why are these changes required?
- Calling `eth_newFilter` repeatedly can exhaust heap memory on the node.
- `handleLogsFilter` iterated over the filter map with an `Iterator` and called `result.add()` per element; under high filter count this is slow and contention-prone.
- The `MAX_RESULT` check was evaluated after `addAll`, meaning results could briefly exceed the limit.

This PR has been tested by:
Configuration