
feat(jsonrpc): add resource restrict for jsonrpc#69

Open
317787106 wants to merge 11 commits into develop from hotfix/restrict_jsonrpc_size

Conversation

@317787106
Owner

@317787106 317787106 commented Apr 16, 2026

What does this PR do?

Adds configurable resource limits to the JSON-RPC endpoint to prevent memory exhaustion and abuse from oversized requests or responses. Closes tronprotocol#6632

Changes:

  1. Batch size limit (node.jsonrpc.maxBatchSize, default: 100)

    • Validates the array length of batch JSON-RPC requests before dispatching.
    • Requests exceeding the limit are rejected with error code -32005 (exceed limit).
  2. Response size limit (node.jsonrpc.maxResponseSize, default: 25 MB)

    • Introduces BufferedResponseWrapper: intercepts getOutputStream() and getWriter() writes into an in-memory buffer; throws JsonRpcResponseTooLargeException the moment a write would exceed the configured limit, bounding memory usage to at most maxResponseSize bytes.
    • Introduces CachedBodyRequestWrapper: replays the pre-read request body via both getInputStream() and getReader(), so the body can be inspected before being forwarded to JsonRpcServer.
    • On JsonRpcResponseTooLargeException, the servlet discards the partial buffer and returns error code -32003 (response too large).
  3. Address list limit (node.jsonrpc.maxAddressSize, default: 1000)

    • In LogFilter, validates the address array length in eth_getLogs / eth_newFilter requests.
    • Requests exceeding the limit are rejected with JsonRpcInvalidParamsException.
  4. Structured JSON-RPC error responses

    • writeJsonRpcError uses ObjectMapper to build error responses safely, avoiding JSON injection from error messages.
    • Error codes: -32700 (parse error) is defined by the JSON-RPC 2.0 spec; -32005 (exceed limit) and -32003 (response too large) use the spec's implementation-defined server-error range (-32000 to -32099).
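The batch-size gate in item 1 can be sketched with plain Java; the class and method names below are illustrative stand-ins mirroring the description, not code taken from the PR diff.

```java
// Hypothetical sketch of the batch-size check in item 1 (names assumed).
public class BatchLimit {

  // JSON-RPC error code this PR uses for "exceed limit"
  public static final int EXCEED_LIMIT = -32005;

  // Returns true when a batch of the given size may be dispatched.
  // A non-positive limit disables the check, matching "0 means no limit".
  public static boolean withinBatchLimit(int batchSize, int maxBatchSize) {
    return maxBatchSize <= 0 || batchSize <= maxBatchSize;
  }
}
```

With the default maxBatchSize of 100, a 100-element batch passes and a 101-element batch is rejected with -32005.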

Why are these changes required?

  • Without limits, a client can send an arbitrarily large batch, trigger an expensive query with many addresses, or force the node to serialize a massive response — all of which cause unbounded memory growth.
  • The response buffer caps worst-case allocation to maxResponseSize and fails fast rather than buffering the entire response before checking.

Configuration

node {
  jsonrpc {
    # Max JSON-RPC batch array size; 0 means no limit
    maxBatchSize = 100
    # Max response body bytes; 0 means no limit (25 MB default)
    maxResponseSize = 26214400
    # Max address entries in eth_getLogs / eth_newFilter; 0 means no limit
    maxAddressSize = 1000
  }
}
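The fail-fast behavior behind maxResponseSize can be approximated with JDK types only. BoundedBuffer below is an assumed simplification of BufferedResponseWrapper's internal buffer, and IllegalStateException stands in for JsonRpcResponseTooLargeException; the real class additionally wraps HttpServletResponse.

```java
import java.io.ByteArrayOutputStream;

// Hypothetical stand-in for BufferedResponseWrapper's size-checked buffer.
// Only the limit logic is kept; servlet plumbing is omitted.
public class BoundedBuffer {
  private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
  private final long max; // 0 or negative means unlimited

  public BoundedBuffer(long max) {
    this.max = max;
  }

  // Rejects the write before buffering it, so heap usage is capped at max bytes.
  public void write(byte[] data) {
    if (max > 0 && (long) buffer.size() + data.length > max) {
      // The PR throws JsonRpcResponseTooLargeException here (mapped to -32003).
      throw new IllegalStateException("response exceeds " + max + " bytes");
    }
    buffer.write(data, 0, data.length);
  }

  public int size() {
    return buffer.size();
  }
}
```

The key property is that the check runs before the copy, so a rejected write never allocates past the limit.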

This PR has been tested by:

  • Unit Tests
  • Manual Testing

@coderabbitai

coderabbitai Bot commented Apr 16, 2026

📝 Walkthrough

Walkthrough

Adds configurable JSON‑RPC limits (batch size, response size, address count), new request/response buffering wrappers and an exception for oversized responses, config keys and loading, servlet-level JSON parsing/validation and error mapping, plus address-count validation in LogFilter.

Changes

  • Configuration & Parameters (common/src/main/java/org/tron/common/parameter/CommonParameter.java, framework/src/main/java/org/tron/core/config/args/ConfigKey.java, framework/src/main/resources/config.conf): Introduce jsonRpcMaxBatchSize, jsonRpcMaxResponseSize, jsonRpcMaxAddressSize defaults and config keys (node.jsonrpc.maxBatchSize, node.jsonrpc.maxResponseSize, node.jsonrpc.maxAddressSize), with defaults in config.
  • Config Loading (framework/src/main/java/org/tron/core/config/args/Args.java): Read and apply the three new JSON‑RPC config keys to PARAMETER when present.
  • Exceptions (common/src/main/java/org/tron/core/exception/jsonrpc/JsonRpcResponseTooLargeException.java): Add a new unchecked exception type to signal an oversized JSON‑RPC response.
  • Request/Response Wrappers (framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java, framework/src/main/java/org/tron/core/services/filter/CachedBodyRequestWrapper.java): Add CachedBodyRequestWrapper to replay the buffered request body, and BufferedResponseWrapper to buffer response bytes, enforce the max response size, and defer committing headers/body; overflow throws the new exception.
  • Servlet & Validation (framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java, framework/src/main/java/org/tron/core/services/jsonrpc/filters/LogFilter.java): JsonRpcServlet.doPost now fully reads and parses the body, enforces the batch size limit, wraps request/response with the new wrappers, and maps the oversized-response exception to a JSON‑RPC error; LogFilter validates address list size against the new limit.

Sequence Diagram

sequenceDiagram
    participant Client as Client
    participant Servlet as JsonRpcServlet
    participant ReqWrap as CachedBodyRequestWrapper
    participant RespWrap as BufferedResponseWrapper
    participant Rpc as RpcServer
    participant Filter as LogFilter
    participant Backend as Backend

    Client->>Servlet: POST body
    Servlet->>Servlet: readBody() & parse JSON
    alt root is array and size > maxBatchSize
        Servlet->>Client: JSON‑RPC error (batch too large)
    else
        Servlet->>ReqWrap: wrap request with buffered body
        Servlet->>RespWrap: wrap response (maxResponseSize)
        Servlet->>Rpc: rpcServer.handle(request, response)
        Rpc->>Filter: apply filters (address list)
        alt address list size > maxAddressSize
            Filter->>Rpc: throw JsonRpcInvalidParamsException
        else
            Filter->>Backend: forward request
            Backend-->>Rpc: result
        end
        Rpc->>RespWrap: write response bytes
        alt write exceeds maxResponseSize
            RespWrap-->>Rpc: throw JsonRpcResponseTooLargeException
        else
            RespWrap->>Servlet: buffered response ready
        end
        Servlet->>RespWrap: commitToResponse()
        RespWrap->>Client: final HTTP response
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related issues

  • java-tron#64: Implements the same JSON‑RPC limits (maxBatchSize, maxResponseSize, maxAddressSize) and related buffering/exception handling described in that issue.

Poem

🐰 I buffered the hop, kept payloads light,
Batches pruned tidy, addresses checked right,
When responses balloon, I raise a small cry,
I commit only what fits—no packets gone spry,
Hoppity safeguards under moonlight. 🌿

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 13.79%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
  • Title check (✅ Passed): The title accurately describes the main change: adding configurable resource restrictions (batch size, response size, address size limits) for JSON-RPC endpoints.
  • Linked Issues check (✅ Passed): Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check (✅ Passed): Check skipped because no linked issues were found for this pull request.
  • Description Check (✅ Passed): Check skipped; CodeRabbit’s high-level summary is enabled.



@cubic-dev-ai cubic-dev-ai Bot left a comment


3 issues found across 9 files

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java">

<violation number="1" location="framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java:129">
P2: Race condition after timeout: `future.cancel(true)` only sends an interrupt; the RPC handler may still be running. Since `BufferedResponseWrapper` doesn't override header methods (`setStatus`, `setContentType`, etc.), the handler thread writes headers directly to the underlying `resp` concurrently with `writeJsonRpcError`. This can corrupt the HTTP response. Consider either wrapping header methods in `BufferedResponseWrapper` too, or awaiting actual thread termination before writing the error.</violation>

<violation number="2" location="framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java:155">
P1: No limit on request body size. `readBody` loads the entire input stream into memory before any validation runs. A malicious client can send a multi-gigabyte body to cause OOM. Add a maximum request body size check (e.g., reject if total bytes read exceeds a configured limit).</violation>

<violation number="3" location="framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java:168">
P1: The `message` string is concatenated into the JSON response without JSON-escaping. If `message` contains `"`, `\`, or control characters, the output is malformed JSON. Use the `ObjectMapper` (already available as `MAPPER`) to build the JSON safely, or at minimum escape the string.</violation>
</file>
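Violation 3 above (JSON injection via string concatenation) is easiest to see with a tiny escaper. The sketch below is illustrative only; the PR's actual fix builds the error object with Jackson's ObjectMapper so escaping is handled by the library.

```java
// Minimal illustration of the hazard in violation 3: characters that must be
// escaped before a string can be embedded in a JSON document. Hand-rolled
// escaping like this is a sketch; prefer a JSON library such as Jackson.
public class JsonEscape {
  public static String escape(String s) {
    StringBuilder out = new StringBuilder();
    for (int i = 0; i < s.length(); i++) {
      char c = s.charAt(i);
      switch (c) {
        case '"':  out.append("\\\""); break;
        case '\\': out.append("\\\\"); break;
        case '\n': out.append("\\n");  break;
        case '\r': out.append("\\r");  break;
        case '\t': out.append("\\t");  break;
        default:
          if (c < 0x20) {
            // Remaining control characters must use \ u00XX escapes.
            out.append(String.format("\\u%04x", (int) c));
          } else {
            out.append(c);
          }
      }
    }
    return out.toString();
  }
}
```

Concatenating an unescaped message containing `"` would terminate the JSON string early, which is exactly the malformed-output case the review flags.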

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

Two outdated comment threads on framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java
@317787106 317787106 changed the title from "feat(jsonrpc): add resource restirct for jsonrpc" to "feat(jsonrpc): add resource restrict for jsonrpc" on Apr 19, 2026

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java`:
- Around line 80-105: BufferedResponseWrapper currently overrides
getOutputStream() but not getWriter(), so callers using getWriter() bypass the
buffered size-checked stream; override getWriter() in BufferedResponseWrapper to
return a PrintWriter that wraps an OutputStreamWriter which writes into the same
buffering/size-checking stream used by the existing outputStream field (honoring
actual.getCharacterEncoding() or UTF-8 fallback), ensure the writer delegates
flush/close correctly without directly committing the real response, and reuse
the same buffer/commitToResponse() path so output written via getWriter() is
included in commitToResponse().

In
`@framework/src/main/java/org/tron/core/services/filter/CachedBodyRequestWrapper.java`:
- Around line 21-49: CachedBodyRequestWrapper currently only overrides
getInputStream() to replay the stored byte[] body, so callers using getReader()
will still read the original (consumed) request; add an override for public
BufferedReader getReader() in CachedBodyRequestWrapper that constructs an
InputStreamReader over a new ByteArrayInputStream(body) using
getCharacterEncoding() (falling back to StandardCharsets.UTF_8 if null) and
wraps it in a BufferedReader so all downstream code reading via getReader() gets
the cached body.
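The replay behavior this prompt asks for can be shown without servlet-api on the classpath. CachedBody below is an assumed simplification of CachedBodyRequestWrapper that keeps only the essential idea: every reader is a fresh stream over the same cached bytes.

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.InputStreamReader;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

// Hypothetical simplification of CachedBodyRequestWrapper: the body is read
// once into a byte[], and each call to newReader() replays those same bytes.
public class CachedBody {
  private final byte[] body;

  public CachedBody(byte[] body) {
    this.body = body;
  }

  // Mirrors the suggested getReader() override: a new stream per call,
  // falling back to UTF-8 when no character encoding is set.
  public BufferedReader newReader(String characterEncoding) {
    Charset cs = characterEncoding != null
        ? Charset.forName(characterEncoding)
        : StandardCharsets.UTF_8;
    return new BufferedReader(new InputStreamReader(new ByteArrayInputStream(body), cs));
  }
}
```

Because each reader wraps a fresh ByteArrayInputStream, the body can be inspected by the servlet and then read again by JsonRpcServer without being consumed.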

In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`:
- Line 89: The request body is being capped with
parameter.getJsonRpcMaxResponseSize() in JsonRpcServlet.readBody call which
wrongly couples request-size to response-size; introduce or use a distinct
request-size limit (e.g., parameter.getJsonRpcMaxRequestSize() or a new config
field), change the readBody invocation in JsonRpcServlet to use that request
limit instead of getJsonRpcMaxResponseSize(), leave response-size uses
(getJsonRpcMaxResponseSize()) unchanged, and update configuration defaults/docs
and any parameter parsing to include the new request-size setting (with a
sensible fallback to avoid breaking deployments).
- Around line 33-35: The pre-parse steps readBody(...) and MAPPER.readTree(...)
can throw servlet-level exceptions for oversized requests or malformed JSON;
update the JsonRpcError enum to add PARSE_ERROR(-32700) and wrap the calls to
readBody and MAPPER.readTree in a try-catch that converts size-limit violations
into JsonRpcError.EXCEED_LIMIT (-32005) responses and JSON parse failures into
JsonRpcError.PARSE_ERROR (-32700) responses (keeping existing
RESPONSETOO_LARGE/-32003 where appropriate), ensuring the servlet returns proper
JSON-RPC error objects instead of propagating exceptions.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 920b17a2-c138-473b-b836-b7d711012802

📥 Commits

Reviewing files that changed from the base of the PR and between 2de63bb and 2dad857.

📒 Files selected for processing (9)
  • common/src/main/java/org/tron/common/parameter/CommonParameter.java
  • common/src/main/java/org/tron/core/exception/jsonrpc/JsonRpcResponseTooLargeException.java
  • framework/src/main/java/org/tron/core/config/args/Args.java
  • framework/src/main/java/org/tron/core/config/args/ConfigKey.java
  • framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java
  • framework/src/main/java/org/tron/core/services/filter/CachedBodyRequestWrapper.java
  • framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java
  • framework/src/main/java/org/tron/core/services/jsonrpc/filters/LogFilter.java
  • framework/src/main/resources/config.conf

Comment on lines +80 to +105
@Override
public ServletOutputStream getOutputStream() {
return outputStream;
}

/**
* Suppress forwarding Content-Length to the real response; caller sets it after size check.
*/
@Override
public void setContentLength(int len) {
}

@Override
public void setContentLengthLong(long len) {
}

public void commitToResponse() throws IOException {
if (contentType != null) {
actual.setContentType(contentType);
}
actual.setStatus(status);
byte[] bytes = buffer.toByteArray();
actual.setContentLength(bytes.length);
actual.getOutputStream().write(bytes);
actual.getOutputStream().flush();
}


⚠️ Potential issue | 🟠 Major



Override getWriter() to buffer through the same size-checked stream.

HttpServletResponseWrapper#getWriter() delegates to the real response. If the JSON-RPC handler writes through getWriter(), the body bypasses checkLimit(...), may commit early to the underlying response, and commitToResponse() will not contain that output.

Proposed fix
 import java.io.ByteArrayOutputStream;
 import java.io.IOException;
+import java.io.OutputStreamWriter;
+import java.io.PrintWriter;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
 import javax.servlet.ServletOutputStream;
 import javax.servlet.WriteListener;
@@
   private int status = HttpServletResponse.SC_OK;
   private String contentType;
+  private PrintWriter writer;
+  private boolean outputStreamUsed;
@@
   @Override
   public ServletOutputStream getOutputStream() {
+    if (writer != null) {
+      throw new IllegalStateException("getWriter() has already been called");
+    }
+    outputStreamUsed = true;
     return outputStream;
   }
+
+  @Override
+  public PrintWriter getWriter() {
+    if (outputStreamUsed) {
+      throw new IllegalStateException("getOutputStream() has already been called");
+    }
+    if (writer == null) {
+      Charset charset = getCharacterEncoding() == null
+          ? StandardCharsets.UTF_8
+          : Charset.forName(getCharacterEncoding());
+      writer = new PrintWriter(new OutputStreamWriter(outputStream, charset));
+    }
+    return writer;
+  }
@@
   public void commitToResponse() throws IOException {
+    if (writer != null) {
+      writer.flush();
+    }
     if (contentType != null) {
       actual.setContentType(contentType);
     }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
@Override
public ServletOutputStream getOutputStream() {
return outputStream;
}
/**
* Suppress forwarding Content-Length to the real response; caller sets it after size check.
*/
@Override
public void setContentLength(int len) {
}
@Override
public void setContentLengthLong(long len) {
}
public void commitToResponse() throws IOException {
if (contentType != null) {
actual.setContentType(contentType);
}
actual.setStatus(status);
byte[] bytes = buffer.toByteArray();
actual.setContentLength(bytes.length);
actual.getOutputStream().write(bytes);
actual.getOutputStream().flush();
}
@Override
public ServletOutputStream getOutputStream() {
if (writer != null) {
throw new IllegalStateException("getWriter() has already been called");
}
outputStreamUsed = true;
return outputStream;
}
@Override
public PrintWriter getWriter() {
if (outputStreamUsed) {
throw new IllegalStateException("getOutputStream() has already been called");
}
if (writer == null) {
Charset charset = getCharacterEncoding() == null
? StandardCharsets.UTF_8
: Charset.forName(getCharacterEncoding());
writer = new PrintWriter(new OutputStreamWriter(outputStream, charset));
}
return writer;
}
/**
* Suppress forwarding Content-Length to the real response; caller sets it after size check.
*/
@Override
public void setContentLength(int len) {
}
@Override
public void setContentLengthLong(long len) {
}
public void commitToResponse() throws IOException {
if (writer != null) {
writer.flush();
}
if (contentType != null) {
actual.setContentType(contentType);
}
actual.setStatus(status);
byte[] bytes = buffer.toByteArray();
actual.setContentLength(bytes.length);
actual.getOutputStream().write(bytes);
actual.getOutputStream().flush();
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java`
around lines 80 - 105, BufferedResponseWrapper currently overrides
getOutputStream() but not getWriter(), so callers using getWriter() bypass the
buffered size-checked stream; override getWriter() in BufferedResponseWrapper to
return a PrintWriter that wraps an OutputStreamWriter which writes into the same
buffering/size-checking stream used by the existing outputStream field (honoring
actual.getCharacterEncoding() or UTF-8 fallback), ensure the writer delegates
flush/close correctly without directly committing the real response, and reuse
the same buffer/commitToResponse() path so output written via getWriter() is
included in commitToResponse().

rpcServer.handle(req, resp);
CommonParameter parameter = CommonParameter.getInstance();

byte[] body = readBody(req.getInputStream(), parameter.getJsonRpcMaxResponseSize());


⚠️ Potential issue | 🟠 Major

Don’t reuse jsonRpcMaxResponseSize as the request body cap.

Line 89 couples accepted request size to the response limit used on Lines 100-101. If operators lower response size to protect egress, valid requests with small responses can start failing during input buffering. Use a separate request-body limit, or make the shared limit explicit in config/docs.

🛠️ Proposed direction
-    byte[] body = readBody(req.getInputStream(), parameter.getJsonRpcMaxResponseSize());
+    byte[] body = readBody(req.getInputStream(), parameter.getJsonRpcMaxRequestSize());

Also applies to: 100-101

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`
at line 89, The request body is being capped with
parameter.getJsonRpcMaxResponseSize() in JsonRpcServlet.readBody call which
wrongly couples request-size to response-size; introduce or use a distinct
request-size limit (e.g., parameter.getJsonRpcMaxRequestSize() or a new config
field), change the readBody invocation in JsonRpcServlet to use that request
limit instead of getJsonRpcMaxResponseSize(), leave response-size uses
(getJsonRpcMaxResponseSize()) unchanged, and update configuration defaults/docs
and any parameter parsing to include the new request-size setting (with a
sensible fallback to avoid breaking deployments).


@cubic-dev-ai cubic-dev-ai Bot left a comment


1 issue found across 6 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java">

<violation number="1" location="framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java:104">
P1: Request timeout enforcement was removed; JSON-RPC calls can now run unbounded and tie up servlet threads.</violation>
</file>


resp, parameter.getJsonRpcMaxResponseSize());

try {
rpcServer.handle(cachedReq, bufferedResp);

@cubic-dev-ai cubic-dev-ai Bot Apr 19, 2026


P1: Request timeout enforcement was removed; JSON-RPC calls can now run unbounded and tie up servlet threads.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java, line 104:

<comment>Request timeout enforcement was removed; JSON-RPC calls can now run unbounded and tie up servlet threads.</comment>

<file context>
@@ -109,65 +96,51 @@ protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws I
-      future.get(timeoutSec, TimeUnit.SECONDS);
-    } catch (TimeoutException e) {
-      future.cancel(true);
+      rpcServer.handle(cachedReq, bufferedResp);
+    } catch (JsonRpcResponseTooLargeException e) {
       JsonNode idNode = (!rootNode.isArray()) ? rootNode.get("id") : null;
</file context>


@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (1)
framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java (1)

33-35: ⚠️ Potential issue | 🟠 Major

Restore bounded, JSON-RPC-mapped request parsing.

readBody still buffers the entire request with no cap before batch validation, so a large body can exhaust heap. Also, MAPPER.readTree runs outside the try, so malformed JSON still escapes as a servlet error instead of a JSON-RPC parse error. Use a dedicated request-body limit and map parse failures to -32700. This is the same risk previously flagged around pre-parse handling and request-size limiting.

Suggested direction
+import com.fasterxml.jackson.core.JsonProcessingException;
 ...
   enum JsonRpcError {
+    PARSE_ERROR(-32700),
     EXCEED_LIMIT(-32005),
     RESPONSE_TOO_LARGE(-32003);
-    byte[] body = readBody(req.getInputStream());
-
-    JsonNode rootNode = MAPPER.readTree(body);
+    byte[] body;
+    JsonNode rootNode;
+    try {
+      body = readBody(req.getInputStream(), parameter.getJsonRpcMaxRequestSize());
+      rootNode = MAPPER.readTree(body);
+    } catch (RequestBodyTooLargeException e) {
+      writeJsonRpcError(resp, JsonRpcError.EXCEED_LIMIT, e.getMessage(), null);
+      return;
+    } catch (JsonProcessingException e) {
+      writeJsonRpcError(resp, JsonRpcError.PARSE_ERROR, "Parse error", null);
+      return;
+    }
-  private byte[] readBody(InputStream in) throws IOException {
+  private byte[] readBody(InputStream in, int maxBytes) throws IOException {
     ByteArrayOutputStream buffer = new ByteArrayOutputStream();
     byte[] tmp = new byte[4096];
     int n;
+    int total = 0;
     while ((n = in.read(tmp)) != -1) {
+      total += n;
+      if (maxBytes > 0 && total > maxBytes) {
+        throw new RequestBodyTooLargeException(
+            "Request body exceeds the limit of " + maxBytes);
+      }
       buffer.write(tmp, 0, n);
     }
     return buffer.toByteArray();
   }

Please verify that a separate jsonRpcMaxRequestSize / getJsonRpcMaxRequestSize() is added to config and parameter loading rather than reusing jsonRpcMaxResponseSize.

#!/bin/bash
# Verify JSON-RPC request-size config exists and readBody is bounded.
rg -n -C3 'jsonRpcMax(Request|Response)Size|getJsonRpcMax(Request|Response)Size|readBody\s*\('

Also applies to: 89-91, 116-124

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`
around lines 33 - 35, Add a dedicated request-size cap and map JSON parse
failures to the JSON-RPC parse error (-32700): introduce a config getter
getJsonRpcMaxRequestSize()/jsonRpcMaxRequestSize and use it in readBody to bound
reading the request (prevent full buffering), throw or return Json-RPC error
code -32700 on MAPPER.readTree failures by wrapping the parse call in the
try/catch that yields the JSON-RPC error response, and ensure exceed-limit
conditions use JsonRpcError.EXCEED_LIMIT/RESPONSE_TOO_LARGE as appropriate;
apply the same changes to the other readBody occurrences noted (lines ~89-91,
116-124).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`:
- Around line 33-35: Add a dedicated request-size cap and map JSON parse
failures to the JSON-RPC parse error (-32700): introduce a config getter
getJsonRpcMaxRequestSize()/jsonRpcMaxRequestSize and use it in readBody to bound
reading the request (prevent full buffering), throw or return Json-RPC error
code -32700 on MAPPER.readTree failures by wrapping the parse call in the
try/catch that yields the JSON-RPC error response, and ensure exceed-limit
conditions use JsonRpcError.EXCEED_LIMIT/RESPONSE_TOO_LARGE as appropriate;
apply the same changes to the other readBody occurrences noted (lines ~89-91,
116-124).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: ee275e1b-1c36-4cb8-82d2-17c535ecf953

📥 Commits

Reviewing files that changed from the base of the PR and between 2dad857 and bac4886.

📒 Files selected for processing (1)
  • framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java


@cubic-dev-ai cubic-dev-ai Bot left a comment


1 issue found across 1 file (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java">

<violation number="1" location="framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java:94">
P2: Handle empty-body parsing explicitly: `readTree` may return null, which can trigger a NullPointerException on the next `rootNode` access.</violation>
</file>


JsonNode rootNode;
try {
body = readBody(req.getInputStream());
rootNode = MAPPER.readTree(body);

@cubic-dev-ai cubic-dev-ai Bot Apr 23, 2026


P2: Handle empty-body parsing explicitly: readTree may return null, which can trigger a NullPointerException on the next rootNode access.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java, line 94:

<comment>Handle empty-body parsing explicitly: `readTree` may return null, which can trigger a NullPointerException on the next `rootNode` access.</comment>

<file context>
@@ -86,9 +87,15 @@ public Integer getJsonRpcCode(int httpStatusCode) {
+    JsonNode rootNode;
+    try {
+      body = readBody(req.getInputStream());
+      rootNode = MAPPER.readTree(body);
+    } catch (IOException e) {
+      writeJsonRpcError(resp, JsonRpcError.PARSE_ERROR, "Parse json error", null);
</file context>
Suggested change
rootNode = MAPPER.readTree(body);
rootNode = MAPPER.readTree(body);
if (rootNode == null) {
writeJsonRpcError(resp, JsonRpcError.PARSE_ERROR, "Parse json error", null);
return;
}


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (1)
framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java (1)

110-121: The buffer wrapping design is sound; no collision risk exists — but add a comment explaining the batch trade-off.

The BufferedResponseWrapper contract is correctly implemented: it strictly buffers the response body (to a ByteArrayOutputStream) and only stores header mutations (setStatus, setContentType) locally until commitToResponse(). When JsonRpcResponseTooLargeException is thrown, writeJsonRpcError(resp, ...) writes directly to the unwrapped resp and returns immediately — bufferedResp.commitToResponse() is never called, so there is no collision. The underlying response is still clean (not committed) at that point, making the write safe.

However, for a batched request that overflows, the code intentionally returns a single top-level error with id=null, discarding any partial per-call results already produced. While this is JSON-RPC spec-legal, a comment acknowledging this trade-off (e.g., on line 114–115) would help future maintainers understand the design choice.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`
around lines 110 - 121, Add a clarifying comment in JsonRpcServlet near the
JsonRpcResponseTooLargeException catch (around where rpcServer.handle,
bufferedResp.commitToResponse and writeJsonRpcError are used) that explains the
buffering contract and the intentional trade-off for batched requests:
BufferedResponseWrapper only buffers body and header mutations until
commitToResponse(), so when JsonRpcResponseTooLargeException is thrown the
handler writes a single top-level error (id=null) directly to the unwrapped resp
and returns without calling bufferedResp.commitToResponse(), discarding any
partial per-call results produced for the batch; this documents why a single
error is returned instead of per-call partials.
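The buffering contract described above can be sketched without the servlet types. This is an illustrative stand-in for `BufferedResponseWrapper`, not the actual class: nothing reaches the real output until `commitToResponse()`, and a write that would cross the cap fails before it is buffered, so memory stays bounded.

```java
import java.io.ByteArrayOutputStream;

// Illustrative stand-in for the wrapper's contract (not the real class):
// body bytes accumulate in memory and are forwarded only on commit.
class BufferSketch {
  private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
  private final int maxBytes;

  BufferSketch(int maxBytes) {
    this.maxBytes = maxBytes;
  }

  void write(byte[] data) {
    // Reject before buffering, so memory stays bounded by maxBytes.
    if (maxBytes > 0 && buffer.size() + data.length > maxBytes) {
      throw new IllegalStateException("response exceeds " + maxBytes + " bytes");
    }
    buffer.write(data, 0, data.length);
  }

  void commitToResponse(ByteArrayOutputStream real) {
    // Only here does anything touch the underlying sink; in the real wrapper
    // this is where headers and body are flushed to the servlet response.
    byte[] out = buffer.toByteArray();
    real.write(out, 0, out.length);
  }
}
```

Because an overflow throws before `commitToResponse()` runs, the underlying response is never partially written, which is exactly why the error path can safely write to the unwrapped `resp`.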
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`:
- Around line 92-98: The current try/catch in JsonRpcServlet conflates transport
IO failures from readBody and JSON parsing failures from MAPPER.readTree into a
single JsonRpcError.PARSE_ERROR; change the code to call
readBody(req.getInputStream()) and MAPPER.readTree(body) in sequence and handle
failures separately: catch com.fasterxml.jackson.core.JsonProcessingException
(or JsonParseException) around MAPPER.readTree and call writeJsonRpcError(resp,
JsonRpcError.PARSE_ERROR, "Parse json error", null), but do not map plain
java.io.IOException from readBody to PARSE_ERROR—either let IOExceptions
propagate (so the servlet/container handles/logs them) or handle them as a
distinct transport/internal error and log appropriately; reference readBody,
MAPPER.readTree, writeJsonRpcError, and JsonRpcError.PARSE_ERROR when making the
change.
- Around line 87-131: The readBody(InputStream in) currently reads the entire
request into memory with no cap, enabling an OOM DoS; change it to enforce a
configured request-size cap (add or use
CommonParameter.getJsonRpcMaxRequestSize(), falling back to
getJsonRpcMaxResponseSize() if missing) inside readBody so it stops reading and
signals an error when the limit is exceeded; have doPost catch that condition
and call writeJsonRpcError(resp, JsonRpcError.EXCEED_LIMIT, "Request body
exceeds limit", idNode) (distinct from PARSE_ERROR), and ensure the new logic
references readBody, doPost, CommonParameter, and JsonRpcError.EXCEED_LIMIT so
oversized requests are rejected before buffering the full body.

In `@framework/src/main/resources/config.conf`:
- Around line 380-383: The current behavior rejects batches and responses when
maxBatchSize/maxResponseSize are 0/negative; change checks to treat <= 0 as "no
limit": in JsonRpcServlet.doPost adjust the batch check to enforce only when
parameter.getJsonRpcMaxBatchSize() > 0 (e.g. if
(parameter.getJsonRpcMaxBatchSize() > 0 && rootNode.size() >
parameter.getJsonRpcMaxBatchSize()) ...), and update BufferedResponseWrapper (or
the code that passes maxResponseSize) to only cap/validate response bytes when
maxResponseSize > 0 so 0/negative means unlimited; ensure references to
maxBatchSize and maxResponseSize in parameter and BufferedResponseWrapper follow
this rule.

---

Nitpick comments:
In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`:
- Around line 110-121: Add a clarifying comment in JsonRpcServlet near the
JsonRpcResponseTooLargeException catch (around where rpcServer.handle,
bufferedResp.commitToResponse and writeJsonRpcError are used) that explains the
buffering contract and the intentional trade-off for batched requests:
BufferedResponseWrapper only buffers body and header mutations until
commitToResponse(), so when JsonRpcResponseTooLargeException is thrown the
handler writes a single top-level error (id=null) directly to the unwrapped resp
and returns without calling bufferedResp.commitToResponse(), discarding any
partial per-call results produced for the batch; this documents why a single
error is returned instead of per-call partials.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: f1824724-0ab2-4c30-b121-d0ea656e8945

📥 Commits

Reviewing files that changed from the base of the PR and between bac4886 and 32618c5.

📒 Files selected for processing (5)
  • common/src/main/java/org/tron/common/parameter/CommonParameter.java
  • framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java
  • framework/src/main/java/org/tron/core/services/filter/CachedBodyRequestWrapper.java
  • framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java
  • framework/src/main/resources/config.conf
🚧 Files skipped from review as they are similar to previous changes (2)
  • framework/src/main/java/org/tron/core/services/filter/CachedBodyRequestWrapper.java
  • framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java

Comment on lines 87 to +131
  protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
-    rpcServer.handle(req, resp);
+    CommonParameter parameter = CommonParameter.getInstance();
+
+    byte[] body;
+    JsonNode rootNode;
+    try {
+      body = readBody(req.getInputStream());
+      rootNode = MAPPER.readTree(body);
+    } catch (IOException e) {
+      writeJsonRpcError(resp, JsonRpcError.PARSE_ERROR, "Parse json error", null);
+      return;
+    }
+    if (rootNode.isArray() && rootNode.size() > parameter.getJsonRpcMaxBatchSize()) {
+      writeJsonRpcError(resp, JsonRpcError.EXCEED_LIMIT,
+          "Batch size " + rootNode.size() + " exceeds the limit of "
+              + parameter.getJsonRpcMaxBatchSize(), null);
+      return;
+    }
+
+    CachedBodyRequestWrapper cachedReq = new CachedBodyRequestWrapper(req, body);
+    BufferedResponseWrapper bufferedResp = new BufferedResponseWrapper(
+        resp, parameter.getJsonRpcMaxResponseSize());
+
+    try {
+      rpcServer.handle(cachedReq, bufferedResp);
+    } catch (JsonRpcResponseTooLargeException e) {
+      JsonNode idNode = (!rootNode.isArray()) ? rootNode.get("id") : null;
+      writeJsonRpcError(resp, JsonRpcError.RESPONSE_TOO_LARGE, e.getMessage(), idNode);
+      return;
+    } catch (Exception e) {
+      throw new IOException("RPC execution failed", e);
+    }
+
+    bufferedResp.commitToResponse();
+  }
+
+  private byte[] readBody(InputStream in) throws IOException {
+    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
+    byte[] tmp = new byte[4096];
+    int n;
+    while ((n = in.read(tmp)) != -1) {
+      buffer.write(tmp, 0, n);
+    }
+    return buffer.toByteArray();
+  }

⚠️ Potential issue | 🔴 Critical

Critical: request body is read unbounded — defeats the PR goal and is a memory-exhaustion DoS vector.

readBody(InputStream in) (lines 123–131) copies the entire request stream into a ByteArrayOutputStream with no size cap at all. A single malicious (or buggy) client sending a huge Content-Length or chunked body will force the servlet to allocate the full payload in heap before any validation can run. This is strictly worse than the earlier version, which at least passed a limit, and it directly contradicts the PR objective of “block oversized calls to reduce memory use.”

The previous review asked you to decouple request-size from response-size — the right fix is to introduce a dedicated request-size cap (e.g. jsonRpcMaxRequestSize, falling back to jsonRpcMaxResponseSize for backwards compatibility) and enforce it inside readBody, mapping the overflow to EXCEED_LIMIT (-32005), distinct from PARSE_ERROR (-32700).

🔒️ Proposed fix
-    byte[] body;
-    JsonNode rootNode;
-    try {
-      body = readBody(req.getInputStream());
-      rootNode = MAPPER.readTree(body);
-    } catch (IOException e) {
-      writeJsonRpcError(resp, JsonRpcError.PARSE_ERROR, "Parse json error", null);
-      return;
-    }
+    byte[] body;
+    try {
+      body = readBody(req.getInputStream(), parameter.getJsonRpcMaxRequestSize());
+    } catch (RequestTooLargeException e) {
+      writeJsonRpcError(resp, JsonRpcError.EXCEED_LIMIT,
+          "Request body exceeds the limit of " + parameter.getJsonRpcMaxRequestSize(), null);
+      return;
+    }
+    JsonNode rootNode;
+    try {
+      rootNode = MAPPER.readTree(body);
+    } catch (JsonProcessingException e) {
+      writeJsonRpcError(resp, JsonRpcError.PARSE_ERROR, "Parse json error", null);
+      return;
+    }
@@
-  private byte[] readBody(InputStream in) throws IOException {
-    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
-    byte[] tmp = new byte[4096];
-    int n;
-    while ((n = in.read(tmp)) != -1) {
-      buffer.write(tmp, 0, n);
-    }
-    return buffer.toByteArray();
-  }
+  private byte[] readBody(InputStream in, int maxSize) throws IOException {
+    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
+    byte[] tmp = new byte[4096];
+    int n;
+    int total = 0;
+    while ((n = in.read(tmp)) != -1) {
+      total += n;
+      if (maxSize > 0 && total > maxSize) {
+        throw new RequestTooLargeException("Request body exceeds " + maxSize + " bytes");
+      }
+      buffer.write(tmp, 0, n);
+    }
+    return buffer.toByteArray();
+  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`
around lines 87 - 131, The readBody(InputStream in) currently reads the entire
request into memory with no cap, enabling an OOM DoS; change it to enforce a
configured request-size cap (add or use
CommonParameter.getJsonRpcMaxRequestSize(), falling back to
getJsonRpcMaxResponseSize() if missing) inside readBody so it stops reading and
signals an error when the limit is exceeded; have doPost catch that condition
and call writeJsonRpcError(resp, JsonRpcError.EXCEED_LIMIT, "Request body
exceeds limit", idNode) (distinct from PARSE_ERROR), and ensure the new logic
references readBody, doPost, CommonParameter, and JsonRpcError.EXCEED_LIMIT so
oversized requests are rejected before buffering the full body.

Comment on lines +92 to +98
+    try {
+      body = readBody(req.getInputStream());
+      rootNode = MAPPER.readTree(body);
+    } catch (IOException e) {
+      writeJsonRpcError(resp, JsonRpcError.PARSE_ERROR, "Parse json error", null);
+      return;
+    }

⚠️ Potential issue | 🟡 Minor

Collapsing IOException and JSON parse errors into a single PARSE_ERROR is misleading and hides real failures.

readBody can throw IOException for legitimate transport problems (client aborts, socket reset, read timeout) — none of those are "Parse json error", yet clients would see -32700. MAPPER.readTree throws JsonProcessingException (an IOException subclass), which is the only case that should map to -32700. Split the handling so each failure mode reports an accurate code/message, and consider letting true transport IOExceptions propagate (the container will log them appropriately) rather than masking them as a JSON-RPC parse error. The diff in the critical comment above already splits these; the key point is: don’t let a network blip look like malformed JSON.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`
around lines 92 - 98, The current try/catch in JsonRpcServlet conflates
transport IO failures from readBody and JSON parsing failures from
MAPPER.readTree into a single JsonRpcError.PARSE_ERROR; change the code to call
readBody(req.getInputStream()) and MAPPER.readTree(body) in sequence and handle
failures separately: catch com.fasterxml.jackson.core.JsonProcessingException
(or JsonParseException) around MAPPER.readTree and call writeJsonRpcError(resp,
JsonRpcError.PARSE_ERROR, "Parse json error", null), but do not map plain
java.io.IOException from readBody to PARSE_ERROR—either let IOExceptions
propagate (so the servlet/container handles/logs them) or handle them as a
distinct transport/internal error and log appropriately; reference readBody,
MAPPER.readTree, writeJsonRpcError, and JsonRpcError.PARSE_ERROR when making the
change.
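The requested split hinges on catch ordering, since Jackson's `JsonProcessingException` itself extends `IOException`. A stdlib-only sketch of the shape (here `MalformedJson` stands in for the parser's exception type, and the input check is a deliberately crude placeholder for real JSON parsing):

```java
import java.io.IOException;
import java.io.UncheckedIOException;

// Only the parser's IOException subtype maps to -32700; plain transport
// failures propagate instead of being masked as a JSON-RPC parse error.
class ErrorSplit {
  static final int PARSE_ERROR = -32700;

  static class MalformedJson extends IOException {
  }

  // Hypothetical parse step: anything not starting with '{' or '[' is "malformed".
  static void parse(String body) throws MalformedJson {
    String s = body.trim();
    if (!s.startsWith("{") && !s.startsWith("[")) {
      throw new MalformedJson();
    }
  }

  // Returns 0 on success, -32700 on malformed JSON; transport errors escape.
  static int handle(String body, boolean transportFailed) {
    if (transportFailed) {
      // A client abort or socket reset is not a parse error.
      throw new UncheckedIOException(new IOException("client aborted mid-read"));
    }
    try {
      parse(body);
      return 0;
    } catch (MalformedJson e) { // catch the subtype, not IOException
      return PARSE_ERROR;
    }
  }
}
```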

Comment on lines +380 to +383
# Allowed batch size
maxBatchSize = 100
# Allowed max response byte size
maxResponseSize = 26214400 // 25 MB = 25 * 1024 * 1024 B

⚠️ Potential issue | 🟡 Minor

Document the boundary semantics for maxBatchSize / maxResponseSize.

Sibling keys (maxBlockRange, maxAddressSize, maxSubTopics) document that a value <= 0 disables the limit. The new keys don’t, and the current implementation does not support that semantic either:

  • In JsonRpcServlet.doPost the check is rootNode.size() > parameter.getJsonRpcMaxBatchSize(), so setting maxBatchSize = 0 would reject every batch with size ≥ 1 rather than disable the cap.
  • maxResponseSize is passed verbatim to BufferedResponseWrapper, so a 0/negative value would likely block any response.

Please either (a) align the code with the documented convention (treat <= 0 as “no limit”), or (b) explicitly document the accepted range here so operators don’t foot-gun themselves.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@framework/src/main/resources/config.conf` around lines 380 - 383, The current
behavior rejects batches and responses when maxBatchSize/maxResponseSize are
0/negative; change checks to treat <= 0 as "no limit": in JsonRpcServlet.doPost
adjust the batch check to enforce only when parameter.getJsonRpcMaxBatchSize() >
0 (e.g. if (parameter.getJsonRpcMaxBatchSize() > 0 && rootNode.size() >
parameter.getJsonRpcMaxBatchSize()) ...), and update BufferedResponseWrapper (or
the code that passes maxResponseSize) to only cap/validate response bytes when
maxResponseSize > 0 so 0/negative means unlimited; ensure references to
maxBatchSize and maxResponseSize in parameter and BufferedResponseWrapper follow
this rule.
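The convention the sibling keys document reduces to a single guarded predicate; a minimal sketch (the method name is illustrative, not code from the PR):

```java
// "<= 0 disables the limit": the cap is enforced only when it is positive,
// so maxBatchSize = 0 means unlimited instead of rejecting every batch.
class LimitGuard {
  static boolean exceedsBatchLimit(int batchSize, int maxBatchSize) {
    return maxBatchSize > 0 && batchSize > maxBatchSize;
  }
}
```

The same guard shape applies to the byte check in `BufferedResponseWrapper`, which keeps all three limits consistent.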


@cubic-dev-ai cubic-dev-ai Bot left a comment


1 issue found across 2 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java">

<violation number="1" location="framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java:88">
P2: `getWriter()` uses JVM default charset instead of the response charset, which can corrupt JSON-RPC responses containing non-ASCII characters.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.


+  @Override
+  public PrintWriter getWriter() {
+    return new PrintWriter(outputStream, true);

@cubic-dev-ai cubic-dev-ai Bot Apr 23, 2026


P2: getWriter() uses JVM default charset instead of the response charset, which can corrupt JSON-RPC responses containing non-ASCII characters.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java, line 88:

<comment>`getWriter()` uses JVM default charset instead of the response charset, which can corrupt JSON-RPC responses containing non-ASCII characters.</comment>

<file context>
@@ -82,6 +83,11 @@ public ServletOutputStream getOutputStream() {
 
+  @Override
+  public PrintWriter getWriter() {
+    return new PrintWriter(outputStream, true);
+  }
+
</file context>
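One way to address this is to derive the charset from the response's declared encoding, defaulting to UTF-8. A stdlib sketch (here `encoding` stands in for the wrapper calling `getCharacterEncoding()`, and `buffer` for its internal stream):

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

// Build the PrintWriter over the buffer with an explicit charset rather than
// new PrintWriter(outputStream, true), which encodes with the JVM default.
class CharsetWriter {
  static PrintWriter writerFor(ByteArrayOutputStream buffer, String encoding) {
    Charset cs = (encoding != null) ? Charset.forName(encoding) : StandardCharsets.UTF_8;
    return new PrintWriter(new OutputStreamWriter(buffer, cs), true);
  }
}
```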

      writeJsonRpcError(resp, JsonRpcError.PARSE_ERROR, "Parse error", null);
      return;
    }
    if (rootNode.isArray() && rootNode.size() > parameter.getJsonRpcMaxBatchSize()) {

[MUST] maxBatchSize=0 does not honor the "0 means no limit" contract

The batch-size guard at line 99 reads rootNode.size() > parameter.getJsonRpcMaxBatchSize() without a > 0 precondition. The PR description and the sibling keys maxResponseSize (BufferedResponseWrapper.checkLimit) and maxAddressSize (LogFilter) all state that 0 disables the limit; this branch does the opposite — configuring maxBatchSize = 0 rejects every non-empty batch with -32005. That is a silent operator trap.

Suggestion: add a max > 0 && guard so the three limits follow the same semantics.

      writeJsonRpcError(resp, JsonRpcError.RESPONSE_TOO_LARGE, e.getMessage(), idNode);
      return;
    } catch (Exception e) {
      throw new IOException("RPC execution failed", e);

[SHOULD] Wrapping every handler exception as IOException escapes as HTTP 500

catch (Exception e) throw new IOException("RPC execution failed", e) rethrows all non-JsonRpcResponseTooLargeException runtime errors. The parent RateLimiterServlet.service only catches ServletException | IOException and re-throws, so the servlet container will emit an HTTP 500 with no JSON-RPC body. jsonrpc4j's ErrorResolver would normally have produced a structured error response on HTTP 200 — that contract is now lost, and web3.js/ethers-style clients will interpret 500 as a transport failure and retry.

Suggestion: either drop this catch (let the original IOException path from rpcServer.handle propagate) or log the root cause and emit a -32603 Internal error through writeJsonRpcError so the HTTP 200 + JSON-RPC error contract is preserved.
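The -32603 fallback suggested here has a simple shape. The string building below is purely illustrative of that shape; the servlet's existing `writeJsonRpcError` already constructs error bodies safely via `ObjectMapper` and should be reused for the real fix:

```java
// Shape of a JSON-RPC 2.0 internal-error reply returned over HTTP 200.
// Illustrative only: real code should build this with ObjectMapper, as the
// PR's writeJsonRpcError does, to avoid JSON injection.
class InternalErrorBody {
  static String body(String idJson) {
    return "{\"jsonrpc\":\"2.0\",\"error\":{\"code\":-32603,"
        + "\"message\":\"Internal error\"},\"id\":"
        + (idJson == null ? "null" : idJson) + "}";
  }
}
```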

JsonNode rootNode;
try {
body = readBody(req.getInputStream());
rootNode = MAPPER.readTree(body);

[SHOULD] Empty or whitespace-only body makes readTree return null and NPE on the next line

MAPPER.readTree(body) returns null (not a MissingNode) for a zero-length input, so the subsequent rootNode.isArray() at line 99 throws NullPointerException. The NPE is not caught by the IOException clause at line 95; it bubbles out as an unchecked exception, gets wrapped by the catch at line 116, and the client sees HTTP 500 instead of the structured -32700 Parse error that the parse path was meant to return.

Suggestion: treat rootNode == null || rootNode.isMissingNode() as a parse error and return -32700 via writeJsonRpcError.

    if (rootNode.isArray() && rootNode.size() > parameter.getJsonRpcMaxBatchSize()) {
      writeJsonRpcError(resp, JsonRpcError.EXCEED_LIMIT,
          "Batch size " + rootNode.size() + " exceeds the limit of "
              + parameter.getJsonRpcMaxBatchSize(), null);

[SHOULD] Batch-size rejection returns a single error object, not a batch array

A JSON-RPC 2.0 batch request must be answered with a JSON array of responses. When rootNode.isArray() is true and the size exceeds maxBatchSize, the current code returns a single-object error response, which breaks the batch contract — standard clients (web3j, web3.js, ethers) will fail to parse the response as a batch and either error out or retry the whole batch.

Suggestion: when rejecting an over-size batch, wrap the -32005 error object in a JSON array of length 1 (id=null), or document this deviation prominently in config.conf and the PR description.
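The one-element-array variant looks like this. The string building is an illustrative sketch of the response shape only (the actual servlet builds errors via `ObjectMapper`):

```java
// When the rejected request was a batch, wrap the single -32005 error object
// in a one-element array so batch clients can still parse the reply.
class BatchErrorShape {
  static String exceedLimit(int size, int limit, boolean isBatch) {
    String err = "{\"jsonrpc\":\"2.0\",\"error\":{\"code\":-32005,"
        + "\"message\":\"Batch size " + size + " exceeds the limit of " + limit
        + "\"},\"id\":null}";
    return isBatch ? "[" + err + "]" : err;
  }
}
```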

}
}

@Override

[SHOULD] getStatus() and setHeader("Content-Length", ...) are not intercepted

Header capture only covers setStatus, setContentType, setContentLength(int|long). Two gaps:

  1. getStatus() is not overridden, so callers (logging filter, metrics) read the underlying response status, not the this.status field — they will see SC_OK until commitToResponse() is called.
  2. setHeader("Content-Length", ...) / addHeader(...) pass straight through to the underlying response. jsonrpc4j currently calls setContentLength, but any downstream filter or library upgrade using setHeader would write a Content-Length to the real response before the size check runs.

Suggestion: override getStatus() to return this.status, and intercept setHeader / addHeader for the Content-Length name (case-insensitive).
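A minimal stand-in for the suggested interception, with no servlet types (in the real wrapper, `setHeader` would delegate to `super.setHeader` for non-Content-Length names and `getStatus` would shadow the superclass method):

```java
import java.util.HashMap;
import java.util.Map;

// Status is served from the captured field, and Content-Length header writes
// are deferred (matched case-insensitively) instead of passing through to the
// real response before the size check has run.
class HeaderCapture {
  private int status = 200;
  private final Map<String, String> deferred = new HashMap<>();

  void setStatus(int sc) {
    this.status = sc;
  }

  int getStatus() {
    return status; // callers see the captured status, not the real response's
  }

  // Returns true when the header was deferred rather than passed through.
  boolean setHeader(String name, String value) {
    if ("Content-Length".equalsIgnoreCase(name)) {
      deferred.put("Content-Length", value);
      return true;
    }
    return false; // the real wrapper would delegate to super.setHeader here
  }
}
```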

@@ -0,0 +1,17 @@
package org.tron.core.exception.jsonrpc;

public class JsonRpcResponseTooLargeException extends RuntimeException {

[SHOULD] New exception breaks the project's JsonRpcException family

Every other class in common/src/main/java/org/tron/core/exception/jsonrpc/ extends JsonRpcException (which in turn extends TronException and reports itself). This new class extends RuntimeException instead, so it cannot be wired into @JsonRpcError annotations or the shared JsonRpcErrorResolver if future methods need to surface response-too-large through the normal error path.

Suggestion: extend JsonRpcException, or document in a Javadoc why an unchecked non-TronException base was chosen (e.g. to avoid forcing the wrapper signature to throws).

}

@Override
public ServletInputStream getInputStream() {

[SHOULD] getInputStream and getReader are not mutually exclusive

Servlet spec requires that once one of getInputStream() / getReader() has been used on a request, the other must throw IllegalStateException. This wrapper returns a fresh stream/reader from the cached byte array for both methods, allowing arbitrary interleaving. jsonrpc4j only calls one today, so the bug is latent — but any future filter that reads the body through the other accessor would silently double-read.

Suggestion: track which accessor was used first and throw IllegalStateException on the second, as the spec requires.
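The spec rule is a two-state machine; a sketch of the tracking only, with the accessor bodies elided (in the wrapper, `useInputStream`/`useReader` would guard the real `getInputStream()`/`getReader()` overrides):

```java
// Once one body accessor has been used, the other must throw
// IllegalStateException, per the Servlet specification.
class BodyAccess {
  private enum Mode { NONE, STREAM, READER }

  private Mode mode = Mode.NONE;

  void useInputStream() {
    if (mode == Mode.READER) {
      throw new IllegalStateException("getReader() has already been called");
    }
    mode = Mode.STREAM;
  }

  void useReader() {
    if (mode == Mode.STREAM) {
      throw new IllegalStateException("getInputStream() has already been called");
    }
    mode = Mode.READER;
  }
}
```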

public int jsonRpcMaxBatchSize = 100;
@Getter
@Setter
public int jsonRpcMaxResponseSize = 25 * 1024 * 1024;

[SHOULD] Size-limit fields use int; no range validation on config load

jsonRpcMaxResponseSize and the peer maxBytes parameter in BufferedResponseWrapper are int, capped at 2 GiB. The current 25 MiB default is fine, but reading via config.getInt(...) with no range validation means a negative config value silently becomes a permanent "no limit" state (thanks to the maxBytes > 0 guard) and a value near Integer.MAX_VALUE will put ByteArrayOutputStream near its addressable limit.

Suggestion: either switch the field (and wrapper param) to long plus validate >= 0 during Args.applyConfigParams, or reject out-of-range values with a clear error at startup.
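The fail-fast variant is a one-line check at config-load time. A sketch under the stated assumptions; the method and key names are illustrative, not the actual `Args.applyConfigParams` code:

```java
// Reject an out-of-range size at startup instead of letting a negative value
// silently become a permanent "no limit" state at request time.
class SizeConfig {
  static long validatedSize(long configured, String key) {
    if (configured < 0) {
      throw new IllegalArgumentException(key + " must be >= 0, got " + configured);
    }
    return configured;
  }
}
```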

* only forwarded to the real response via {@link #commitToResponse()}, preventing a timed-out
* handler thread from racing with the timeout error writer.
*/
public class BufferedResponseWrapper extends HttpServletResponseWrapper {

[DISCUSS] Consider Jetty SizeLimitHandler for the response-size cap

Scoped to the response-size concern only — the batch-size and address-size checks must stay at the JSON-RPC layer and are out of scope for this discussion.

The response-size cap this class implements can also be enforced one layer lower. Jetty 9.4 (project is on 9.4.57 per framework/build.gradle:45) ships org.eclipse.jetty.server.handler.SizeLimitHandler, which wraps the output stream, counts bytes on the fly, and aborts the response when the configured cap is reached. Wrapping the ServletContextHandler in FullNodeJsonRpcHttpService.addServlet (line 30) with it would:

  • Drop the full in-memory buffer. Peak memory per connection falls from ~maxResponseSize (the ByteArrayOutputStream in checkLimit) to the Jetty output buffer (~8 KiB), i.e. true streaming instead of buffer-then-commit.
  • Remove this wrapper and its JsonRpcResponseTooLargeException plumbing — the same attacker model (oversized response → bounded memory) is still covered.

Tradeoff: SizeLimitHandler aborts with HttpException.RuntimeException → HTTP 413 plain-text, not a JSON-RPC -32003 structured error. Clients like web3j / ethers would see a transport-level failure rather than a parseable JSON-RPC error object. If that structured-error UX is a product requirement (for example so clients can auto-fallback to a smaller range), keep the current wrapper. If the cap is intended as a safety net where 413 is acceptable, SizeLimitHandler is simpler and more memory-efficient.

No change requested — just worth deciding explicitly which tradeoff this PR is making.



Development

Successfully merging this pull request may close these issues.

[Feature] Introduce resource limits for JSON-RPC (batch size, response size, address size, timeout)

2 participants