
fix(jsonrpc): enforce log filter cap and improve match efficiency#71

Open
317787106 wants to merge 13 commits into develop from hotfix/fix_newFilter

Conversation

Owner

@317787106 317787106 commented Apr 16, 2026

What does this PR do?

This PR improves the robustness and performance of the JSON-RPC log filter subsystem (eth_newFilter / eth_getLogs). Closes tronprotocol#6510

Changes:

  1. Enforce a configurable cap on active eth_newFilter filters

    • Adds node.jsonrpc.maxLogFilterNum config item (default: 20,000).
    • When the active filter count reaches the limit, eth_newFilter fails with JsonRpcExceedLimitException (code -32005) instead of letting the filter map grow without bound.
  2. Optimize handleLogsFilter for large filter maps

    • Uses serial forEach when the filter map size ≤ 10,000 (common case).
    • Uses a bounded ForkJoinPool(2) with parallelStream only when filter count exceeds the threshold, avoiding excessive thread creation.
    • Collects matched elements into a local list before addAll, reducing lock contention on the shared result collection.
    • Adds timing log (logger.debug) for observability.
  3. Fix over-limit check in LogMatch.matchBlockOneByOne

    • Checks size + matchedLog.size() > MAX_RESULT before calling addAll, preventing the result list from silently exceeding MAX_RESULT.
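The threshold-based dispatch described in item 2 could look roughly like the sketch below. This is not the actual java-tron code: FILTER_PARALLEL_THRESHOLD, the string-keyed map, and the matches helper are illustrative stand-ins for the real filter map and bloom/topic matching.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ForkJoinPool;

public class LogFilterDispatchSketch {

  // Illustrative threshold from the PR description.
  static final int FILTER_PARALLEL_THRESHOLD = 10_000;
  // Bounded pool so parallelStream work does not flood the JVM common pool.
  static final ForkJoinPool LOGS_FILTER_POOL = new ForkJoinPool(2);

  static boolean matches(String filter) {
    return filter.startsWith("match"); // stand-in for real log matching
  }

  static List<String> handleLogsFilter(Map<String, String> filterMap) {
    List<String> result = new ArrayList<>();
    if (filterMap.size() <= FILTER_PARALLEL_THRESHOLD) {
      // Common case: plain serial iteration, no pool overhead.
      filterMap.forEach((id, f) -> {
        if (matches(f)) {
          result.add(id);
        }
      });
    } else {
      // Rare case: run the parallel stream inside the bounded pool. Work
      // submitted to a ForkJoinPool stays on that pool's workers instead of
      // ForkJoinPool.commonPool().
      LOGS_FILTER_POOL.submit(() ->
          filterMap.entrySet().parallelStream().forEach(e -> {
            // Collect locally, then a single bulk addAll on the shared list,
            // mirroring the PR's lock-contention reduction.
            List<String> local = new ArrayList<>();
            if (matches(e.getValue())) {
              local.add(e.getKey());
            }
            synchronized (result) {
              result.addAll(local);
            }
          })).join();
    }
    return result;
  }
}
```

Submitting the lambda to a dedicated pool (rather than calling parallelStream directly) is what keeps the parallelism bounded at 2 workers regardless of CPU count.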

Why are these changes required?

  • Without a filter count cap, a client calling eth_newFilter repeatedly can exhaust heap memory on the node.
  • The previous handleLogsFilter iterated over the filter map with an Iterator and called result.add() per element; under high filter count this is slow and contention-prone.
  • The MAX_RESULT check was evaluated after addAll, meaning results could briefly exceed the limit.
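The pre-addAll bound from change 3 can be shown in a self-contained sketch. The constant and helper name here are illustrative; the real limit is LogBlockQuery.MAX_RESULT inside LogMatch.matchBlockOneByOne.

```java
import java.util.List;

public class MaxResultCheckSketch {

  static final int MAX_RESULT = 10_000; // illustrative stand-in for LogBlockQuery.MAX_RESULT

  // Check BEFORE addAll so the result list can never silently exceed MAX_RESULT.
  static boolean appendBounded(List<String> result, List<String> matchedLog) {
    if (result.size() + matchedLog.size() > MAX_RESULT) {
      return false; // the caller raises the over-limit error here
    }
    result.addAll(matchedLog);
    return true;
  }
}
```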

This PR has been tested by:

  • Unit Tests
  • Manual Testing

Configuration

node {
  jsonrpc {
    # Allowed maximum number for newFilter (eth_newFilter)
    maxLogFilterNum = 20000
  }
}


coderabbitai Bot commented Apr 16, 2026

📝 Walkthrough

Adds a configurable max log-filter limit (jsonRpcMaxLogFilterNum, default 20000), enforces it in newFilter, parallelizes log-filter matching for large subscription sets, tightens incremental result-size checks, and expands exception handling for event-bloom errors during fork reorgs.

Changes

Cohort / File(s) / Summary:

  • Configuration Definition & Keys — common/src/main/java/org/tron/common/parameter/CommonParameter.java, framework/src/main/java/org/tron/core/config/args/ConfigKey.java: Introduce jsonRpcMaxLogFilterNum = 20000 and the corresponding NODE_JSONRPC_MAX_LOG_FILTER_NUM config key.
  • Configuration Loading — framework/src/main/java/org/tron/core/config/args/Args.java: Load NODE_JSONRPC_MAX_LOG_FILTER_NUM into the runtime PARAMETER during config application.
  • Config Files — framework/src/main/resources/config.conf, framework/src/test/resources/config-*.conf: Add a maxLogFilterNum = 20000 entry/comment under node.jsonrpc in the default and test configs.
  • JSON-RPC: newFilter & Filtering Pipeline — framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpc.java, framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpcImpl.java: newFilter now enforces the max log-filter count and throws JsonRpcExceedLimitException; handleLogsFilter is refactored to use processLogFilterEntry, logs timing/size, and conditionally parallelizes via a ForkJoinPool(2) when the map size exceeds a threshold. close() now shuts down the new pool.
  • Filter Matching Limits — framework/src/main/java/org/tron/core/services/jsonrpc/filters/LogMatch.java: Enforce LogBlockQuery.MAX_RESULT incrementally by checking combined sizes before appending matched results.
  • Fork Reorg Handling — framework/src/main/java/org/tron/core/db/Manager.java: Include EventBloomException alongside BadBlockException in fork-branch applyBlock exception handling to trigger the existing revert logic.
  • Tests — framework/src/test/java/org/tron/core/jsonrpc/WalletCursorTest.java: Ensure TronJsonRpcImpl.close() is explicitly called in two tests after normal execution.

Sequence Diagram

sequenceDiagram
    participant Client
    participant TronJsonRpc
    participant TronJsonRpcImpl
    participant Config as CommonParameter
    participant FilterRegistry as Filter Registry

    Client->>TronJsonRpc: eth_newFilter(FilterRequest)
    TronJsonRpc->>TronJsonRpcImpl: newFilter(fr)
    TronJsonRpcImpl->>Config: getJsonRpcMaxLogFilterNum()
    Config-->>TronJsonRpcImpl: maxLogFilterNum
    TronJsonRpcImpl->>FilterRegistry: query current filter count
    FilterRegistry-->>TronJsonRpcImpl: currentCount
    alt currentCount >= maxLogFilterNum
        TronJsonRpcImpl-->>TronJsonRpc: throw JsonRpcExceedLimitException
        TronJsonRpc-->>Client: Error (-32005)
    else
        TronJsonRpcImpl->>FilterRegistry: create filter -> store
        FilterRegistry-->>TronJsonRpcImpl: filterId
        TronJsonRpcImpl-->>TronJsonRpc: return filterId
        TronJsonRpc-->>Client: Success
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐇 I nibble configs, set a gentle bound,
Twenty thousand filters — tight and sound.
I split the hops to run in parallel lanes,
Keep order whole, avoid tangled chains.
A rabbit cheers — no more slow event pains! ✨

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 14.29%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (4 passed)

  • Linked Issues check — ✅ Passed: the PR fulfills the core coding objectives from #6510: adds a configurable log filter cap (node.jsonrpc.maxLogFilterNum), implements parallel matching with ForkJoinPool(2), enforces subscription limits with JsonRpcExceedLimitException, and fixes exception handling.
  • Out of Scope Changes check — ✅ Passed: all changes align with the stated objectives. Configuration additions, filter processing optimizations, exception handling fixes, and test updates are in-scope adjustments to support the core feature.
  • Description Check — ✅ Passed: check skipped; CodeRabbit's high-level summary is enabled.
  • Title check — ✅ Passed: the title accurately captures the main objectives: enforcing a log filter cap (maxLogFilterNum limit) and improving match efficiency (parallelization and optimizations).



@cubic-dev-ai cubic-dev-ai Bot left a comment


2 issues found across 10 files

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpcImpl.java">

<violation number="1" location="framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpcImpl.java:236">
P0: The `ForkJoinPool` is created on every `handleLogsFilter` call but never shut down. This leaks worker threads on each invocation. Either shut down the pool in a `finally` block after `join()`, or make it a static/instance-level field that is reused across calls.</violation>

<violation number="2" location="framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpcImpl.java:1431">
P2: Inconsistent boundary check: block filter uses `>=` but log filter uses `>`, allowing one extra filter beyond `maxLogFilterNum`. Use `>=` to match the block filter behavior.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

Comment thread framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpcImpl.java Outdated
@317787106 317787106 changed the base branch from release_v4.8.1 to develop April 23, 2026 08:31
@317787106 317787106 force-pushed the hotfix/fix_newFilter branch from 20f0cc1 to 9897ea8 Compare April 23, 2026 08:31

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (1)
framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpcImpl.java (1)

251-291: Add a clarifying comment for ConcurrentHashMap iteration and removal.

Line 257's eventFilterMap.remove(entry.getKey()) during iteration is safe for ConcurrentHashMap (weakly consistent iterator), but consider adding a brief inline comment explaining this is intentional and safe—preventing future contributors from mistakenly porting this pattern to a non-concurrent collection.

The concurrent mutation concern at line 290 is mitigated: LogFilterAndResult.result is a LinkedBlockingQueue, so concurrent addAll() calls from ForkJoinPool workers and drainTo() calls from RPC reader threads are both thread-safe.
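That thread-safety claim can be checked with a small self-contained hand-off: concurrent addAll producers (standing in for the ForkJoinPool workers) plus a drainTo consumer on a LinkedBlockingQueue. Names here are illustrative, not java-tron code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueHandoffSketch {

  public static int produceAndDrain() {
    LinkedBlockingQueue<Integer> result = new LinkedBlockingQueue<>();
    // Two "matcher" threads doing bulk addAll, like pool workers would.
    Runnable producer = () -> {
      for (int i = 0; i < 1_000; i++) {
        result.addAll(List.of(i)); // each element add is thread-safe; none are lost
      }
    };
    Thread t1 = new Thread(producer);
    Thread t2 = new Thread(producer);
    t1.start();
    t2.start();
    try {
      t1.join();
      t2.join();
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    // "RPC reader" side: drain everything that was matched.
    List<Integer> drained = new ArrayList<>();
    result.drainTo(drained);
    return drained.size(); // 2000 elements, no loss or duplication
  }
}
```

Note that addAll on a BlockingQueue is not atomic as a batch (it adds element by element), but each add is individually thread-safe, which is all the matcher/reader hand-off relies on.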

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpcImpl.java`
around lines 251 - 291, The code in processLogFilterEntry calls
eventFilterMap.remove(entry.getKey()) while iterating a ConcurrentHashMap; add a
concise inline comment next to that remove explaining this is intentional and
safe because eventFilterMap is a ConcurrentHashMap with a weakly-consistent
iterator (so removals during iteration are allowed), and warn contributors not
to copy this pattern to non-concurrent maps; also add a short comment by the
getResult().addAll(...) call noting that LogFilterAndResult.getResult() is a
LinkedBlockingQueue so concurrent addAll() from ForkJoinPool workers and
drainTo() from RPC reader threads is thread-safe.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@framework/src/main/java/org/tron/core/config/args/Args.java`:
- Around line 268-271: When reading ConfigKey.NODE_JSONRPC_MAX_LOG_FILTER_NUM in
Args (setting PARAMETER.jsonRpcMaxLogFilterNum), validate the value is a
positive integer (>0) and reject non-positive values; if invalid, log an error
and fail fast (throw an IllegalArgumentException or exit) so the node won't
start with zero/negative maxLogFilterNum which breaks TronJsonRpcImpl's
eth_newFilter logic (see TronJsonRpcImpl.java:1443). Locate the assignment in
Args and add the check immediately after config.getInt(...), referencing
PARAMETER.jsonRpcMaxLogFilterNum for the value and ensuring a clear error
message that includes the invalid value.

In `@framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpcImpl.java`:
- Around line 228-249: The current per-block timing log in handleLogsFilter uses
logger.info which floods INFO logs; change it to logger.debug (or wrap with
logger.isDebugEnabled()) so timing is only emitted at DEBUG level, or
alternatively emit at INFO only when (t2 - t1) exceeds a chosen threshold;
update the call that currently uses logger.info(...) to use logger.debug(...) or
add the conditional check and keep the same message, referencing
handleLogsFilter, logger.info, logger.debug, and the timing variables t1/t2 and
eventFilterMap.size().
- Around line 174-175: The static LOGS_FILTER_POOL is created with new
ForkJoinPool(2) while the PR text said parallelism 3 and it is never shut down,
causing thread leaks; reconcile the intended parallelism (use 3 or make it
configurable using available processors), replace the hardcoded new
ForkJoinPool(2) by the chosen value (or a supplier/config param) and ensure the
pool is properly shutdown in the class close() method alongside sectionExecutor
(call LOGS_FILTER_POOL.shutdown() and await termination or forceShutdown as
appropriate); reference FILTER_PARALLEL_THRESHOLD, LOGS_FILTER_POOL, close(),
and sectionExecutor when making these changes.
- Around line 1443-1446: The size() check on eventFilter2Result and the
subsequent put() can race under concurrent eth_newFilter calls; change the
insertion to an atomic compute/computeIfAbsent or use eventFilter2Result.compute
to both test and insert in one atomic operation (referencing eventFilter2Result,
maxLogFilterNum, and the eth_newFilter insertion logic) so that you only
create/accept a new filter if the total stays <= maxLogFilterNum; alternatively,
perform the put and then immediately re-check the map size and remove/rollback
the newly inserted entry if the cap is exceeded to enforce the limit atomically.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 2f546df6-e48d-4a43-8ae7-6129380594c1

📥 Commits

Reviewing files that changed from the base of the PR and between f0a8f0f and 9897ea8.

📒 Files selected for processing (11)
  • common/src/main/java/org/tron/common/parameter/CommonParameter.java
  • framework/src/main/java/org/tron/core/config/args/Args.java
  • framework/src/main/java/org/tron/core/config/args/ConfigKey.java
  • framework/src/main/java/org/tron/core/db/Manager.java
  • framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpc.java
  • framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpcImpl.java
  • framework/src/main/java/org/tron/core/services/jsonrpc/filters/LogMatch.java
  • framework/src/main/resources/config.conf
  • framework/src/test/resources/config-localtest.conf
  • framework/src/test/resources/config-test-mainnet.conf
  • framework/src/test/resources/config-test.conf

Comment on lines +268 to +271
if (config.hasPath(ConfigKey.NODE_JSONRPC_MAX_LOG_FILTER_NUM)) {
  PARAMETER.jsonRpcMaxLogFilterNum =
      config.getInt(ConfigKey.NODE_JSONRPC_MAX_LOG_FILTER_NUM);
}

⚠️ Potential issue | 🟡 Minor



Add validation for maxLogFilterNum to reject non-positive values.

Line 270 accepts zero or negative values, which would break all eth_newFilter calls due to the check at TronJsonRpcImpl.java:1443 (if (eventFilter2Result.size() >= maxLogFilterNum)). Validate that the parameter is positive when loading config to fail fast.

Proposed validation
     if (config.hasPath(ConfigKey.NODE_JSONRPC_MAX_LOG_FILTER_NUM)) {
-      PARAMETER.jsonRpcMaxLogFilterNum =
-          config.getInt(ConfigKey.NODE_JSONRPC_MAX_LOG_FILTER_NUM);
+      int maxLogFilterNum = config.getInt(ConfigKey.NODE_JSONRPC_MAX_LOG_FILTER_NUM);
+      if (maxLogFilterNum <= 0) {
+        throw new IllegalArgumentException(
+            ConfigKey.NODE_JSONRPC_MAX_LOG_FILTER_NUM + " must be greater than 0");
+      }
+      PARAMETER.jsonRpcMaxLogFilterNum = maxLogFilterNum;
     }

Comment on lines +1443 to +1446
if (eventFilter2Result.size() >= maxLogFilterNum) {
  throw new JsonRpcExceedLimitException(
      "exceed max log filters: " + maxLogFilterNum + ", try again later");
}

⚠️ Potential issue | 🟡 Minor

TOCTOU race: size check is not atomic with insert.

size() check on line 1443 and put() on line 1450 are not performed atomically on the ConcurrentHashMap. Under concurrent eth_newFilter requests, multiple callers can observe size() < maxLogFilterNum simultaneously and all proceed to insert, exceeding the configured cap. Given the default cap is 20,000 this is a soft limit abuse concern rather than a correctness bug, but it defeats the purpose of the cap under burst load.

Consider using compute/computeIfAbsent pattern, or re-check size after put and rollback if exceeded:

🛡️ Example hardening
-    if (eventFilter2Result.size() >= maxLogFilterNum) {
-      throw new JsonRpcExceedLimitException(
-          "exceed max log filters: " + maxLogFilterNum + ", try again later");
-    }
     long currentMaxFullNum = wallet.getNowBlock().getBlockHeader().getRawData().getNumber();
     LogFilterAndResult logFilterAndResult = new LogFilterAndResult(fr, currentMaxFullNum, wallet);
     String filterID = generateFilterId();
-    eventFilter2Result.put(filterID, logFilterAndResult);
+    if (eventFilter2Result.size() >= maxLogFilterNum) {
+      throw new JsonRpcExceedLimitException(
+          "exceed max log filters: " + maxLogFilterNum + ", try again later");
+    }
+    eventFilter2Result.put(filterID, logFilterAndResult);
+    if (eventFilter2Result.size() > maxLogFilterNum) {
+      eventFilter2Result.remove(filterID);
+      throw new JsonRpcExceedLimitException(
+          "exceed max log filters: " + maxLogFilterNum + ", try again later");
+    }
     return ByteArray.toJsonHex(filterID);

The same concern applies to newBlockFilter (lines 1465-1468), which is preexisting and out of scope but worth noting for a follow-up.
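One hedged way to close that race without a global lock is to reserve a slot in an atomic counter via compareAndSet before touching the map. This is a sketch under assumed names (the real registry is eventFilter2Result and the real default cap is 20,000), not the PR's implementation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class FilterCapSketch {

  static final int MAX_LOG_FILTER_NUM = 3; // tiny cap for illustration only
  static final Map<String, Object> filters = new ConcurrentHashMap<>();
  static final AtomicInteger count = new AtomicInteger();

  // Reserve a slot atomically BEFORE inserting, so concurrent callers cannot
  // all pass a stale size() check (the TOCTOU window described above).
  static boolean tryRegister(String filterId, Object filter) {
    while (true) {
      int c = count.get();
      if (c >= MAX_LOG_FILTER_NUM) {
        return false; // caller throws JsonRpcExceedLimitException here
      }
      if (count.compareAndSet(c, c + 1)) {
        break; // slot reserved; the cap can no longer be breached by this insert
      }
    }
    filters.put(filterId, filter);
    return true;
  }

  static void unregister(String filterId) {
    if (filters.remove(filterId) != null) {
      count.decrementAndGet();
    }
  }
}
```

The counter must be decremented on every removal path (uninstallFilter, expiry sweep) or the cap will drift; that bookkeeping is the main cost of this approach versus the put-then-rollback variant.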


@317787106 317787106 changed the title hotfix(jsonrpc): restrict the newFilter size fix(jsonrpc): restrict the newFilter size Apr 23, 2026
@317787106 317787106 changed the title fix(jsonrpc): restrict the newFilter size fix(jsonrpc): enforce active log filter cap and fix EventBloomException Apr 23, 2026
Comment thread framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpcImpl.java Outdated

private static final String ERROR_SELECTOR = "08c379a0"; // Function selector for Error(string)
private static final int FILTER_PARALLEL_THRESHOLD = 10000;
private static final ForkJoinPool LOGS_FILTER_POOL = new ForkJoinPool();

[SHOULD] Static LOGS_FILTER_POOL shut down by instance close() breaks reuse

LOGS_FILTER_POOL is private static final, but close() (line 1598) calls shutdownAndAwaitTermination on it. If any instance of TronJsonRpcImpl is closed (Spring @DirtiesContext integration tests, or any future lifecycle that recreates the bean), subsequent handleLogsFilter calls that hit the parallel branch will throw RejectedExecutionException — the class-level pool is permanently dead after the first close. Separately, shutdownAndAwaitTermination(LOGS_FILTER_POOL, "") passes an empty name, so the log line becomes Pool shutdown... (two spaces), which is useless for operators.

Suggestion: Either make the pool an instance field (constructed in the constructor, torn down in close()), or keep it static and stop shutting it down from close() — use daemon worker threads so the JVM can exit cleanly. Pass a meaningful name (e.g. "jsonrpc-logs-filter-pool") either way.

Comment thread framework/src/main/java/org/tron/core/db/Manager.java Outdated

@coderabbitai coderabbitai Bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
framework/src/test/java/org/tron/core/jsonrpc/WalletCursorTest.java (1)

88-94: ⚠️ Potential issue | 🟡 Minor

close() is unreachable on the happy path of this test; move to finally.

This test asserts that buildTransaction throws because the cursor is SOLIDITY (see Line 92-93). When the test passes as intended, line 90 is never executed, so the TronJsonRpcImpl instance — and the static LOGS_FILTER_POOL it owns — is never cleaned up. If the test ever regresses to a non-throwing path, close() will run and (as noted on TronJsonRpcImpl.java Line 175) shut down the static pool for the rest of the JVM.

Move the cleanup into a finally block so it’s unconditional, and co-locate it with the instance lifecycle:

🧹 Proposed fix
     TronJsonRpcImpl tronJsonRpc = new TronJsonRpcImpl(nodeInfoService, wallet, dbManager);
     try {
       tronJsonRpc.buildTransaction(buildArguments);
-      tronJsonRpc.close();
     } catch (Exception e) {
       Assert.assertEquals("the method buildTransaction does not exist/is not available in "
           + "SOLIDITY", e.getMessage());
+    } finally {
+      try {
+        tronJsonRpc.close();
+      } catch (IOException ioe) {
+        // ignore on teardown
+      }
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@framework/src/test/java/org/tron/core/jsonrpc/WalletCursorTest.java` around
lines 88 - 94, The test currently calls tronJsonRpc.close() only in the try path
so the TronJsonRpcImpl instance and its static LOGS_FILTER_POOL may not be
cleaned up when buildTransaction throws; move the close() call into a finally
block and ensure the tronJsonRpc variable is declared/initialized outside the
try so it can be closed unconditionally (i.e., surround the try/catch with try {
... } catch(...) { ... } finally { tronJsonRpc.close(); }) referencing
buildTransaction and close on the TronJsonRpcImpl instance used in
WalletCursorTest.
♻️ Duplicate comments (1)
framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpcImpl.java (1)

175-175: ⚠️ Potential issue | 🟠 Major

Static LOGS_FILTER_POOL permanently dies after first close().

LOGS_FILTER_POOL is private static final (Line 175) but close() shuts it down on every instance tear-down (Line 1598). After the first instance is closed, the class-scoped pool is terminated for the lifetime of the JVM, and any subsequent TronJsonRpcImpl that hits the parallel branch in handleLogsFilter will throw RejectedExecutionException at LOGS_FILTER_POOL.submit(...).

This is now actively triggerable: WalletCursorTest.testEnableInFullNode and testDisableInSolidity have been updated in this PR to call tronJsonRpc.close(), so test suites that construct multiple instances in the same JVM (Spring @DirtiesContext, multiple @Test methods, or any lifecycle that recreates the bean) will leave the class in a broken state.

Pick one of the two ownership models:

Option A — keep pool static; stop shutting it down from instance close()
-  private static final ForkJoinPool LOGS_FILTER_POOL = new ForkJoinPool(2);
+  private static final ForkJoinPool LOGS_FILTER_POOL = new ForkJoinPool(
+      2,
+      pool -> {
+        ForkJoinWorkerThread w = ForkJoinPool.defaultForkJoinWorkerThreadFactory.newThread(pool);
+        w.setName("jsonrpc-logs-filter-" + w.getPoolIndex());
+        w.setDaemon(true);
+        return w;
+      },
+      null,
+      false);
   public void close() throws IOException {
-    ExecutorServiceManager.shutdownAndAwaitTermination(LOGS_FILTER_POOL, "logs-filter-pool");
     logElementCache.invalidateAll();
     blockHashCache.invalidateAll();
     ExecutorServiceManager.shutdownAndAwaitTermination(sectionExecutor, esName);
   }
Option B — make the pool an instance field tied to the bean lifecycle
-  private static final ForkJoinPool LOGS_FILTER_POOL = new ForkJoinPool(2);
+  private final ForkJoinPool logsFilterPool = new ForkJoinPool(2);

Then update handleLogsFilter / processLogFilterEntry to be instance methods (or pass the pool in), and keep the existing shutdown in close(). Note this also requires rethinking the static callers in Manager/event dispatch.

Option A is the minimal fix and matches the intent that handleLogsFilter is a static class-level service.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpcImpl.java`
at line 175, The static LOGS_FILTER_POOL is being shut down in the instance
close() causing RejectedExecutionException for subsequent instances; to fix,
adopt Option A: treat LOGS_FILTER_POOL as a static shared resource and remove
(or guard against) any shutdown/close logic from TronJsonRpcImpl.close() that
calls LOGS_FILTER_POOL.shutdown()/shutdownNow()/close; ensure handleLogsFilter
and processLogFilterEntry continue to submit to LOGS_FILTER_POOL as a static
pool and verify no other code paths attempt to terminate that static pool.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: f848f185-10c5-4ee0-9a9f-05c3d8e47094

📥 Commits

Reviewing files that changed from the base of the PR and between 9897ea8 and 61704aa.

📒 Files selected for processing (2)
  • framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpcImpl.java
  • framework/src/test/java/org/tron/core/jsonrpc/WalletCursorTest.java

@317787106 317787106 changed the title fix(jsonrpc): enforce active log filter cap and fix EventBloomException fix(jsonrpc): enforce active log filter cap and improve match efficiency Apr 24, 2026

@cubic-dev-ai cubic-dev-ai Bot left a comment


1 issue found across 1 file (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="framework/src/main/java/org/tron/core/db/Manager.java">

<violation number="1">
P1: EventBloomException is no longer included in the fork-apply catch, so EventBloom failures skip the switch-back rollback path and can leave fork switching in a partially applied state.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

@317787106 317787106 changed the title fix(jsonrpc): enforce active log filter cap and improve match efficiency fix(jsonrpc): enforce log filter cap and improve match efficiency Apr 25, 2026


Development

Successfully merging this pull request may close these issues.

Parallelizing eth_newFilter event matching

2 participants