refactor(config): replace manual parsing with ConfigBeanFactory binding #6615
vividctrlalt wants to merge 13 commits into tronprotocol:develop from
Conversation
HOCON null values cannot be bound by ConfigBeanFactory to String fields. Use empty string instead, which has the same effect (triggers the automatic IP detection fallback in java-tron). Related: tronprotocol/java-tron#6615
Replace ~850 lines of manual if/hasPath/getXxx blocks in Args.applyConfigParams() with ConfigBeanFactory.create() automatic binding. Each config.conf domain now maps to a typed Java bean class (VmConfig, BlockConfig, CommitteeConfig, MetricsConfig, NodeConfig, EventConfig, StorageConfig, etc.).
- Delete ConfigKey.java (~100 string constants); config.conf is the sole source of truth
- Migrate Storage.java static getters to read from the StorageConfig bean
- Add unit tests for all config bean classes
- Migrate DynamicArgs to use bean binding
Move all default values from scattered bean field initializers into reference.conf, making it the single source of truth for config defaults. Expose config beans as static singletons for convenient access.
- Add comprehensive reference.conf with defaults for all config domains
- Auto-bind discovery, PBFT, and list fields in NodeConfig
- Expose config beans as static singletons (NodeConfig.getInstance() etc.)
- Move postProcess logic into bean classes
- Fix test configs (external.ip = null -> empty string)
- Document manual-read keys with reasons in reference.conf
…Config
Move default/defaultM/defaultL LevelDB option reading into StorageConfig, so Storage no longer touches Config directly.
- Add DbOptionOverride with nullable boxed types for partial overrides
- Fix cacheSize type from int to long to match the LevelDB Options API
- Remove dead externalIp(Config) bridge method
- Remove setIfNeeded and the Config field from Storage
- Replace null values (discovery.external.ip, trustNode) with empty strings before ConfigBeanFactory binding for external config compatibility (system-test uses "external.ip = null", which ConfigBeanFactory cannot bind to String fields; recommend updating system-test to use "" instead)
- Fix floating-point comparison with Double.compare (java:S1244)
- Extract duplicated string literals into constants/variables (java:S1192)
| @@ -79,7 +80,11 @@ private void updateActiveNodes(Config config) {
| }
DynamicArgs: NodeConfig constructed twice per reload — wrong abstraction boundary
private void updateActiveNodes(Config config) {
  NodeConfig nodeConfig = NodeConfig.fromConfig(config); // parse #1
  ...
}
private void updateTrustNodes(Config config) {
  NodeConfig nodeConfig = NodeConfig.fromConfig(config); // parse #2 — same Config object
  ...
}
Beyond the redundant ConfigBeanFactory reflection cost, there is a deeper engineering issue:
Broken single source of truth. Both methods are steps of one atomic reload, yet they produce two separate NodeConfig instances from the same input. The code gives no guarantee that they reflect the same state — any future side effect or environment-sensitive substitution in fromConfig() could cause them to silently diverge. The root cause: Config is the wrong parameter type here. Both methods' actual contract is NodeConfig.
Re: DynamicArgs NodeConfig parsed twice — Fixed. NodeConfig is now parsed once in reload() and passed to both methods.
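The parse-once shape described in the fix can be sketched with stubs. NodeConfigStub and the plain Object config below are stand-ins for the real NodeConfig and Typesafe Config classes, not java-tron's actual DynamicArgs code; the counter only exists to make the single-parse property observable:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Stub standing in for NodeConfig; counts how often fromConfig() runs.
class NodeConfigStub {
    static final AtomicInteger parses = new AtomicInteger();
    static NodeConfigStub fromConfig(Object config) {
        parses.incrementAndGet();
        return new NodeConfigStub();
    }
}

public class ReloadDemo {
    // Both update steps take the already-parsed bean, so within one reload
    // they are guaranteed to observe the same state.
    void updateActiveNodes(NodeConfigStub nodeConfig) { /* ... */ }
    void updateTrustNodes(NodeConfigStub nodeConfig) { /* ... */ }

    void reload(Object config) {
        NodeConfigStub nodeConfig = NodeConfigStub.fromConfig(config); // single parse per reload
        updateActiveNodes(nodeConfig);
        updateTrustNodes(nodeConfig);
    }
}
```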
Unused files — are these related to configuration?
| * All getXxxFromConfig methods now read from StorageConfig bean instead of
| * manual string constants. Signatures preserved for backward compatibility.
| */
Including getDbEngineFromConfig, a total of 12 methods are no longer used and can be removed.
Re: Storage 12 unused methods — Fixed. All removed, confirmed zero callers.
| private static final String PBFT_EXPIRE_NUM_KEY = "pBFTExpireNum";
| private static final String ALLOW_PBFT_KEY = "allowPBFT";
| public static CommitteeConfig fromConfig(Config config) {
Suggest replacing the class-level @Getter/@Setter with per-field annotations in CommitteeConfig.
The previous approach used class-level annotations but silently suppressed them on allowPBFT and pBFTExpireNum via AccessLevel.NONE. This is misleading — a reader assumes all fields are covered until they scan the entire file to find the exceptions, which hurts readability.
Per-field annotations make each field's contract explicit. The two non-standard fields carry no Lombok annotations at all, so their manual accessors immediately signal that special handling is required.
There is no need to rename the configuration parameters solely because of case differences.
Lombok best practice: use class-level annotations only when the policy is uniform across all fields; switch to per-field when exceptions exist.
Suggested concise structure:
// no class-level @Getter/@Setter
@Slf4j
public class CommitteeConfig {
@Getter @Setter private long allowCreationOfContracts = 0;
...
@Getter @Setter private long changedDelegation = 0;
// These two fields, which violate the JavaBean naming convention, are excluded from
// auto-binding and handled manually in fromConfig().
@Getter private long allowPBFT = 0;
@Getter private long pBFTExpireNum = 20;
@Getter @Setter private long allowTvmFreeze = 0;
...
@Getter @Setter private long dynamicEnergyMaxFactor = 0;
// proposalExpireTime is NOT a committee field — it's in block.* and handled by BlockConfig
// Defaults come from reference.conf (loaded globally via Configuration.java)
private static final String PBFT_EXPIRE_NUM_KEY = "pBFTExpireNum";
private static final String ALLOW_PBFT_KEY = "allowPBFT";
public static CommitteeConfig fromConfig(Config config) {
Config section = config.getConfig("committee");
CommitteeConfig cc = ConfigBeanFactory.create(section, CommitteeConfig.class);
// Ensure the manually-named fields get the right values from the original keys
cc.allowPBFT = section.hasPath(ALLOW_PBFT_KEY) ? section.getLong(ALLOW_PBFT_KEY) : 0;
cc.pBFTExpireNum = section.hasPath(PBFT_EXPIRE_NUM_KEY)
? section.getLong(PBFT_EXPIRE_NUM_KEY) : 20;
cc.postProcess();
return cc;
}
private void postProcess() {
...
Re: CommitteeConfig per-field @Getter/@Setter — Good suggestion for readability. However, the root cause is that allowPBFT and pBFTExpireNum violate JavaBean naming convention (consecutive uppercase "PBFT"). Switching annotation style is a cosmetic fix.
Since PBFT is not yet active, I'd suggest addressing this in a future version by renaming the config keys to standard camelCase (allowPbft, pbftExpireNum). That eliminates the need for manual binding entirely, and the annotation style issue goes away with it. The same approach should apply to other beans (NodeConfig, StorageConfig, EventConfig) that have similar AccessLevel.NONE workarounds.
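The naming problem can be seen directly in the JDK's JavaBean machinery, which ConfigBeanFactory builds on. java.beans.Introspector.decapitalize leaves a name that starts with two capitals untouched, so a getPBFTExpireNum accessor yields the property name "PBFTExpireNum", which can never match the config key "pBFTExpireNum" — while the suggested camelCase rename decapitalizes cleanly (a minimal demo, not project code):

```java
import java.beans.Introspector;

public class BeanNameDemo {
    public static void main(String[] args) {
        // Two leading capitals are preserved: the derived property name stays
        // "PBFTExpireNum", which cannot match the config key "pBFTExpireNum".
        System.out.println(Introspector.decapitalize("PBFTExpireNum")); // PBFTExpireNum
        // After a rename to getPbftExpireNum, the derived property name is
        // plain camelCase and binds without manual aliasing.
        System.out.println(Introspector.decapitalize("PbftExpireNum")); // pbftExpireNum
    }
}
```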
| + "committee.allowNewRewardAlgorithm = 1"
| + " or committee.allowNewReward = 1"
| + " or committee.allowTvmVote = 1.");
| }
It should throw TronError instead of IllegalArgumentException here; the IllegalArgumentException appears to be an oversight and may be silently swallowed by an upstream catch block, causing the node to start with an invalid committee configuration instead of exiting immediately.
Agree that TronError is the better choice semantically. However, this PR is a pure refactor — the original code in Args.java used IllegalArgumentException here, so I kept it unchanged to avoid behavioral changes.
Also, in the current call stack (FullNode.main → Args.setParam → applyConfigParams → CommitteeConfig.fromConfig → postProcess), there is no catch(Exception) that would swallow it. The IllegalArgumentException propagates to the UncaughtExceptionHandler and exits the node correctly.
That said, unifying all config validation errors to TronError would be a good follow-up.
| influxdb {
| ip = ""
| port = 8086
| database = ""
The default value "metrics" is missing here; it should match the default in code.
Fixed. Changed to database = "metrics" to match MetricsConfig.InfluxDbConfig default.
Good catch. Both files are indeed unused.
Both are out of scope for this PR (config binding refactor). I'd suggest a separate cleanup PR to remove them along with any other dead config artifacts.
- DynamicArgs: parse NodeConfig once in reload(), pass to both methods
- Storage: remove 12 unused getXxxFromConfig static methods
- reference.conf: fix influxdb database default to "metrics"
| enable = true
| persist = true
| external.ip = null
| external.ip = ""
I noticed the test configuration files changed external.ip from null to "":
- external.ip = null
+ external.ip = ""
Could this affect the external IP detection logic? In the original code, null might trigger auto-detection fallback, while empty string "" could be treated as a valid (but empty) value. Want to confirm this is intentional and won't break IP auto-discovery in tests.
Good question, but verified safe. Both values trigger the same fallback path:
// Args.java:1157
private static void externalIp(NodeConfig nodeConfig) {
String externalIp = nodeConfig.getDiscoveryExternalIp();
if (StringUtils.isEmpty(externalIp)) { // null AND "" are both empty
if (PARAMETER.nodeExternalIp == null) {
PARAMETER.nodeExternalIp = PARAMETER.p2pConfig.getIp(); // auto-detect
...
StringUtils.isEmpty() treats both null and "" as empty, so the auto-detection fallback triggers in both cases. No behavioral difference.
The reason for changing null to "" is that ConfigBeanFactory cannot bind a null value to a String field — it throws ConfigException$Null. Empty string is the only way to express "unset" for a String field under bean binding.
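The equivalence claimed above is small enough to demonstrate standalone. In this sketch, isEmpty mirrors the commons-lang3 StringUtils.isEmpty semantics (true for null or ""), and resolveExternalIp is an illustrative helper standing in for the Args.externalIp fallback, not the real method:

```java
public class ExternalIpFallbackDemo {
    // Mirrors org.apache.commons.lang3.StringUtils.isEmpty: true for null or "".
    static boolean isEmpty(String s) {
        return s == null || s.isEmpty();
    }

    // Stand-in for the fallback: use the auto-detected IP when the configured
    // value is unset, otherwise keep the configured value.
    static String resolveExternalIp(String configured, String autoDetected) {
        return isEmpty(configured) ? autoDetected : configured;
    }

    public static void main(String[] args) {
        // null and "" take the same auto-detection path.
        System.out.println(resolveExternalIp(null, "203.0.113.7"));
        System.out.println(resolveExternalIp("", "203.0.113.7"));
        System.out.println(resolveExternalIp("198.51.100.1", "203.0.113.7"));
    }
}
```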
| private long allowAccountAssetOptimization = 0;
| private long allowAssetOptimization = 0;
| private long allowNewReward = 0;
| private long memoFee = 0;
[Behavioral regression] memoFee clamping is lost.
The original Args.java clamped memoFee to [0, 1_000_000_000]:
// Args.java (develop branch)
if (config.hasPath(ConfigKey.MEMO_FEE)) {
PARAMETER.memoFee = config.getLong(ConfigKey.MEMO_FEE);
if (PARAMETER.memoFee > 1_000_000_000) {
PARAMETER.memoFee = 1_000_000_000;
}
if (PARAMETER.memoFee < 0) {
PARAMETER.memoFee = 0;
}
}
After this PR, postProcess() does not clamp memoFee, so out-of-range values are passed through unchanged. For example:
- committee.memoFee = 5000000000 → develop clamps to 1_000_000_000, this PR keeps 5_000_000_000
- committee.memoFee = -100 → develop clamps to 0, this PR keeps -100
Suggested fix in postProcess():
if (memoFee < 0) { memoFee = 0; }
if (memoFee > 1_000_000_000L) { memoFee = 1_000_000_000L; }
Verified by a regression test against this PR branch — 2 of 5 memoFee boundary tests fail without this clamp.
Great catch — fixed in c38f388. Both clamps restored in CommitteeConfig.postProcess():
if (allowNewReward < 0) { allowNewReward = 0; }
if (allowNewReward > 1) { allowNewReward = 1; }
if (memoFee < 0) { memoFee = 0; }
if (memoFee > 1_000_000_000L) { memoFee = 1_000_000_000L; }
Also added 31 boundary test cases across CommitteeConfigTest, NodeConfigTest, VmConfigTest, and ArgsTest to pin every clamp (below/above/in-range) so this kind of regression cannot happen silently again. Specifically pinned the allowNewReward = 2 + allowOldRewardOpt = 1 case you flagged — verifies the clamp runs before the cross-field check.
Thanks for the rigorous review — the original Args.java had no test coverage for these clamps, so writing the regression test was the only way this could have been caught.
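The below/above/in-range pattern the new boundary tests pin can be sketched with a standalone helper. clampMemoFee here is hypothetical — it mirrors the clamp restored in CommitteeConfig.postProcess(), but is not the project's code:

```java
public class MemoFeeClampDemo {
    static final long MEMO_FEE_MAX = 1_000_000_000L;

    // Hypothetical helper mirroring the restored memoFee clamp: values are
    // forced into [0, 1_000_000_000].
    static long clampMemoFee(long v) {
        if (v < 0) {
            return 0;
        }
        if (v > MEMO_FEE_MAX) {
            return MEMO_FEE_MAX;
        }
        return v;
    }

    public static void main(String[] args) {
        // The three cases each boundary test pins: below, above, in-range.
        System.out.println(clampMemoFee(-100L));          // 0
        System.out.println(clampMemoFee(5_000_000_000L)); // 1000000000
        System.out.println(clampMemoFee(123L));           // 123
    }
}
```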
| private long allowReceiptsMerkleRoot = 0;
| private long allowAccountAssetOptimization = 0;
| private long allowAssetOptimization = 0;
| private long allowNewReward = 0;
[Behavioral regression] allowNewReward clamping is lost — and this changes the cross-field check semantics below.
The original Args.java clamped allowNewReward to [0, 1]:
// Args.java (develop branch)
if (config.hasPath(ConfigKey.ALLOW_NEW_REWARD)) {
PARAMETER.allowNewReward = config.getLong(ConfigKey.ALLOW_NEW_REWARD);
if (PARAMETER.allowNewReward > 1) { PARAMETER.allowNewReward = 1; }
if (PARAMETER.allowNewReward < 0) { PARAMETER.allowNewReward = 0; }
}
This is especially critical because the cross-field validation in postProcess() (line 152) uses allowNewReward != 1:
if (allowOldRewardOpt == 1 && allowNewRewardAlgorithm != 1
&& allowNewReward != 1 && allowTvmVote != 1) {
throw new IllegalArgumentException(...);
}Without clamping, the semantics diverge from develop:
| Config | develop behavior | this PR behavior |
|---|---|---|
| allowNewReward = 2 + allowOldRewardOpt = 1 | Clamped to 1, check passes | Stays 2, check fails, throws exception |
| allowNewReward = 99 | Clamped to 1 | Stays 99 |
| allowNewReward = -5 | Clamped to 0 | Stays -5 |
A user who configured allowNewReward = 2 (intending to enable it) would have their node start fine on develop, but fail to start after this PR.
Suggested fix in postProcess(), before the cross-field check at line 152:
if (allowNewReward < 0) { allowNewReward = 0; }
if (allowNewReward > 1) { allowNewReward = 1; }
Verified by a regression test against this PR branch — the cross-field semantic test fails without this clamp.
Same fix in c38f388 — see the reply on the allowNewReward thread above. Thanks again.
…ary tests
Add missing clamps in CommitteeConfig.postProcess():
- memoFee clamped to [0, 1_000_000_000] (regression from manual parsing)
- allowNewReward clamped to [0, 1] (must run before cross-field check)
Add boundary test coverage for every clamp in CommitteeConfig, NodeConfig, VmConfig, and Args bridge code (fetchBlockTimeout). 31 new test cases pin each clamp's below/above/in-range behavior to prevent silent regression in future refactors. Reported by reviewer kuny0707.
Suggestion: Replace ConfigBeanFactory with custom annotations to eliminate manual fallback code
The current approach uses
These scattered manual fallbacks are easy to miss during maintenance — the memoFee/allowNewReward clamp regression is a case in point. Approach: Custom
| Dimension | ConfigBeanFactory (current) | Annotation approach |
|---|---|---|
| PBFT naming issues | Manual aliasing + manual assignment | @ConfigValue("pBFTExpireNum") — solved |
| "is" prefix / PascalCase | Manual reads | Annotation specifies key directly |
| Java reserved words (native) | withoutPath + manual binding | Annotation specifies key directly |
| fromConfig() boilerplate | One factory method per bean | Unified ConfigBinder.bind() |
| Clamp validation | postProcess() per bean | Declarative @Clamp |
| Cost of adding new config | Add field + possibly modify fromConfig | Add one annotated field |
| Implementation cost | Zero (built-in Typesafe Config API) | ~100-150 lines of binder code |
| External dependencies | None | None |
The key advantage is centralizing key-to-field mappings from scattered fromConfig() code into field-level declarations. Adding a new config parameter becomes a single annotated field — no parsing logic, no risk of missing a clamp. Worth considering as a follow-up iteration.
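The proposal above can be sketched in miniature. Everything here is hypothetical: the @ConfigValue and @Clamp annotation names come from the suggestion's table, and the binder reads from a flat Map rather than a com.typesafe.config.Config section, purely to keep the sketch self-contained:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.Map;

// Hypothetical annotations from the proposal — illustrative, not an existing API.
@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.FIELD)
@interface ConfigValue { String value(); }

@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.FIELD)
@interface Clamp { long min(); long max(); }

class CommitteeBean {
    @ConfigValue("pBFTExpireNum")
    long pbftExpireNum = 20;   // non-JavaBean key named directly, no aliasing

    @ConfigValue("memoFee") @Clamp(min = 0, max = 1_000_000_000L)
    long memoFee = 0;          // clamp declared next to the field, not in postProcess()
}

class ConfigBinder {
    // Minimal binder: copies annotated long fields from the section, applying clamps.
    static <T> T bind(Map<String, Object> section, Class<T> type) throws Exception {
        T bean = type.getDeclaredConstructor().newInstance();
        for (Field f : type.getDeclaredFields()) {
            ConfigValue key = f.getAnnotation(ConfigValue.class);
            if (key == null) {
                continue;                       // unannotated fields keep their defaults
            }
            Object raw = section.get(key.value());
            if (raw == null) {
                continue;                       // absent key -> field default stands
            }
            long v = ((Number) raw).longValue();
            Clamp clamp = f.getAnnotation(Clamp.class);
            if (clamp != null) {
                v = Math.max(clamp.min(), Math.min(clamp.max(), v));
            }
            f.setAccessible(true);
            f.setLong(bean, v);
        }
        return bean;
    }
}
```

A real version would add type dispatch beyond long and walk nested sections, but the core point survives even at this size: the key name and the clamp live on the field, so neither can drift away from it.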
bladehan1 left a comment
Review: default value behavioral changes in reference.conf
| # Maximum percentage of producing block interval (provides time for broadcast etc.)
| blockProducedTimeOut = 75
Default value behavioral change — blockProducedTimeOut: 50 → 75
The old Args.java hardcoded fallback was BLOCK_PRODUCE_TIMEOUT_PERCENT = 50 (used when the key was absent from config). In the old shipped config.conf this key was commented out, so the effective default for all nodes not explicitly setting it was 50.
reference.conf now sets it to 75, which changes block production timing for any node that omits this key in their config.conf.
Suggest changing to 50 to preserve behavioral equivalence, or documenting this as an intentional change in the PR description.
Thanks — fixing both in the same commit. The reference.conf should reflect the code-level default (the value the runtime uses when the key is absent), not whatever was in the shipped sample config.conf. I'll change blockProducedTimeOut back to 50 and txCache.initOptimization back to false.
| txCache.estimatedTransactions = 1000
| # If true, transaction cache initialization will be faster.
| txCache.initOptimization = true
Default value behavioral change — txCache.initOptimization: false → true
The old Storage.java field defaulted to false, and the hasPath && getBoolean pattern also yielded false when the key was absent. The old shipped config.conf had this set to true, so mainnet nodes are unaffected.
However, custom/test configurations that omit this key will now get true instead of false, changing tx cache initialization behavior.
Suggest either:
- Changing to false to match the old code-level default, or
- Documenting this as an intentional improvement (since true is the recommended production setting)
See reply on the blockProducedTimeOut thread — fixing both in the same commit. Thanks for catching these.
Two thoughts:
- StorageConfig: restore try-catch around parseInt/parseLong in readDbOption() to preserve friendly error messages with field name, expected type, and actual value (matches develop Storage.setIfNeeded).
- NodeConfig: align JsonRpcConfig.maxBlockFilterNum default (0 -> 50000) with reference.conf and develop CommonParameter default.
- reference.conf: change DNS maxMergeSize (5 -> 0) and changeThreshold (0.1 -> 0.0) so behavior matches develop when users omit these keys (develop uses hasPath guard, leaving PublishConfig defaults of 0).
- CommitteeConfig: remove redundant alias block + public setters for allowPBFT/pBFTExpireNum. Public setters caused ConfigBeanFactory to derive uppercase property names that don't match config keys, which previously required the alias workaround. Keep only getters + direct field assignment in fromConfig().
Thanks for flagging the bridge layer coverage gap. I actually prototyped 7 bridge integration tests (one per
That said, your point stands — the
| # Example per-database overrides:
| # {
| # name = "account",
| # path = "storage_directory_test",
| # createIfMissing = true,
| # paranoidChecks = true,
| # verifyChecksums = true,
| # compressionType = 1, // compressed with snappy
| # blockSize = 4096, // 4 KB = 4 * 1024 B
| # writeBufferSize = 10485760, // 10 MB = 10 * 1024 * 1024 B
| # cacheSize = 10485760, // 10 MB = 10 * 1024 * 1024 B
| # maxOpenFiles = 100
| # }
| properties = []
Suggest formatting properties like this; otherwise the format is hard to understand:
# Example per-database overrides:
properties = [
# {
# name = "account",
# path = "storage_directory_test",
# createIfMissing = true,
# paranoidChecks = true,
# verifyChecksums = true,
# compressionType = 1, // compressed with snappy
# blockSize = 4096, // 4 KB = 4 * 1024 B
# writeBufferSize = 10485760, // 10 MB = 10 * 1024 * 1024 B
# cacheSize = 10485760, // 10 MB = 10 * 1024 * 1024 B
# maxOpenFiles = 100
# }
]
Good suggestion — the inline field names with unit annotations read much better than the current form.
I'd like to scope this out of #6615 to keep this PR focused on the bean-binding refactor. Plan is to open a dedicated follow-up PR on reference.conf that addresses:
- The properties array example you outlined above (all fields inline, with unit comments)
- Documentation comments for every key in reference.conf — parameter meaning, accepted value range, units, and the effect on node behavior
- Consistent comment style across sections (storage, node, vm, event.subscribe, etc.)
The goal is that anyone reading reference.conf standalone can understand what each key does without cross-referencing Args.java or other source. Will link the new PR here once it's opened.
| activeConnectFactor = 0.1
| connectFactor = 0.6
| disconnectNumberFactor = 0.4
| maxActiveNodesWithSameIp = 2
maxActiveNodesWithSameIp has been replaced by maxConnectionsWithSameIp, should be deleted.
Thanks — fixed in 8a76db8. One nuance I want to flag because it affects back-compat:
The legacy key maxActiveNodesWithSameIp is still read on develop (Args.java:392-399) and takes priority over maxConnectionsWithSameIp when both are set. Users who haven't migrated their config.conf would break if we dropped this path entirely. So the fix is targeted:
- reference.conf — removed the maxActiveNodesWithSameIp = 2 line. Shipping it meant HOCON merges always made section.hasPath(...) true, causing the alias-fallback branch to silently clobber any user-supplied maxConnectionsWithSameIp.
- NodeConfig.java — removed the unused bean field maxActiveNodesWithSameIp (it was only declared to satisfy ConfigBeanFactory, never actually read), but kept the alias read at fromConfig() that peeks at section.hasPath("maxActiveNodesWithSameIp") directly. Users who still set the legacy key in their config.conf get it routed to maxConnectionsWithSameIp, matching develop.
Added regression tests in NodeConfigTest:
- testMaxConnectionsWithSameIpNotOverriddenByReferenceConfAlias — verifies the fix
- testMaxActiveNodesWithSameIpLegacyAliasStillWorks — verifies back-compat
- testLegacyAliasTakesPriorityOverModernKey — verifies develop parity
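The priority rule described above can be sketched over a plain map. The map stands in for the user-supplied config section (which, after the fix, no longer inherits the legacy key from reference.conf); resolveMaxConnectionsWithSameIp is an illustrative helper, not NodeConfig's actual code:

```java
import java.util.Map;

public class LegacyAliasDemo {
    // With the alias removed from reference.conf, containsKey on the legacy key
    // reliably means "the user set it", so it can take priority — matching develop.
    static int resolveMaxConnectionsWithSameIp(Map<String, Integer> userSection, int beanDefault) {
        if (userSection.containsKey("maxActiveNodesWithSameIp")) {
            return userSection.get("maxActiveNodesWithSameIp"); // legacy key wins
        }
        return userSection.getOrDefault("maxConnectionsWithSameIp", beanDefault);
    }
}
```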
| @Setter
| public static class DbSettingsConfig {
| private int levelNumber = 7;
| private int compactThreads = 32;
In version 4.8.1, we init compactThreads like this:
int compactThreads = config.hasPath(prefix + "compactThreads")
    ? config.getInt(prefix + "compactThreads")
    : Math.max(Runtime.getRuntime().availableProcessors(), 1);
But it is hardcoded as 32 here. Suggest use 0.
Thanks — adopted the 0 = auto convention in 8a76db8.
Changes:
- StorageConfig.DbSettingsConfig.compactThreads default: 32 → 0
- reference.conf: compactThreads = 0
- Added DbSettingsConfig.postProcess() that expands 0 → Math.max(Runtime.getRuntime().availableProcessors(), 1), matching develop Args.java:1609-1611
Regression tests in StorageConfigTest:
- testCompactThreadsAutoExpand — asserts 0 expands to availableProcessors
- testCompactThreadsExplicitPreserved — asserts non-zero values pass through
Also took this as a cue to sweep the rest of dbSettings against develop's fallbacks — blocksize, level0FileNumCompactionTrigger, and targetFileSizeBase had drifted too; all are now aligned in the same commit. testDbSettingsDefaults was updated to assert the new aligned values.
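The 0 = auto convention is small enough to sketch in full. This uses plain Math.max for self-containment; the project itself routes the call through its StrictMathWrapper convention, and the helper name here is illustrative:

```java
public class CompactThreadsDemo {
    // 0 means "auto": expand to the processor count at postProcess() time.
    // Explicit non-zero values pass through unchanged.
    static int effectiveCompactThreads(int configured) {
        return configured == 0
            ? Math.max(Runtime.getRuntime().availableProcessors(), 1)
            : configured;
    }
}
```

Deferring the expansion to postProcess() (rather than baking availableProcessors() into reference.conf) matters because the processor count is a property of the host the node runs on, not of the shipped defaults file.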
| shieldedTransInPendingMaxCounts = 10
| # Contract proto validation thread pool
| validContractProto.threads = 2
validContractProto.threads is hardcoded as 2, but in 4.8.1, we use Runtime.getRuntime().availableProcessors(). Suggest use 0 instead.
Thanks — adopted in 8a76db8.
Changes:
- reference.conf: validContractProto.threads = 0
- NodeConfig.ValidContractProtoConfig.threads default: 2 → 0
- NodeConfig.postProcess() expands 0 → Runtime.getRuntime().availableProcessors(), matching develop Args.java:743-746 (also keeps this symmetric with the existing solidity.threads and validateSignThreadNum handling)
Regression tests in NodeConfigTest:
- testValidContractProtoThreadsDefaultAutoExpands — asserts the default expands to availableProcessors
- testValidContractProtoThreadsExplicitPreserved — asserts explicit non-zero values pass through
| if (ec.isEnable() || ec.getVersion() != 0 || !ec.getTopics().isEmpty()
| || StringUtils.isNotEmpty(ec.getPath()) || StringUtils.isNotEmpty(ec.getServer())) {
It’s a bit hard to follow these "||" conditions. Maybe consider splitting them?
Also, if topics is not empty—even when all enable = false—an empty (but non-null) EventPluginConfig will still be created, which triggers the event plugin. In this case, triggering the plugin is actually the unexpected behavior.
Thanks for the review.
1. On readability: agreed, the || chain is hard to follow. Will refactor (extract a helper + simplify the condition) in a follow-up PR.
2. On "triggers the event plugin": traced the call chain — the plugin is not triggered in this case. EventPluginLoader.start() is only invoked from Manager.startEventSubscribing(), and that call is guarded on isEventSubscribe() (= ec.isEnable()):
// Manager.java:564
if (Args.getInstance().isEventSubscribe()) {
startEventSubscribing();
}
So when enable = false, the plugin never starts — regardless of whether eventPluginConfig is null or populated. The only consumer of eventPluginConfig (besides Args) is Manager.java:2151, which is behind this guard.
This also matches develop's behavior: template-based deployments on develop already populate eventPluginConfig with enable=false (since config.conf ships the section), and the plugin still doesn't start.
Planned follow-up (separate PR): tighten applyEventConfig to only build eventPluginConfig when ec.isEnable() is true — makes the "enable=false ⇒ null" semantic strict and cleans up the condition at the same time.
| # Strongly recommend NOT modifying unless you know every item's meaning clearly.
| dbSettings = {
| levelNumber = 7
| compactThreads = 32
Several fallback values here no longer match 's runtime defaults when the keys are omitted. In particular, , , , and used different fallback values before, so this becomes a silent RocksDB tuning change for minimal configs.
Replying on the corrected thread — looks like this one's template placeholders didn't resolve. TL;DR: all four drifted dbSettings keys are aligned with develop in 8a76db8.
| node {
| # Trust node for solidity node
| # trustNode = "ip:port"
| trustNode = "127.0.0.1:50051"
This changes the runtime fallback for node.trustNode. On develop, trustNodeAddr stayed null unless the key was explicitly present. With this default in reference.conf, any trimmed external config that omits node.trustNode will now behave as if a localhost trust node was configured.
Fixed in 8a76db8. reference.conf no longer ships a literal trust-node address; trustNode = "" so the existing Args bridge logic (Args.java:645-648) converts the empty string to null, preserving develop's behavior:
if (StringUtils.isEmpty(PARAMETER.trustNodeAddr)) {
String trustNode = nc.getTrustNode();
PARAMETER.trustNodeAddr = StringUtils.isEmpty(trustNode) ? null : trustNode;
}(Kept an empty-string default rather than removing the key entirely because NodeConfig.trustNode is a declared bean field — ConfigBeanFactory requires the key to be bindable. Empty-string → null happens at the bridge layer.)
Regression test: NodeConfigTest.testTrustNodeNotDefaultedByReferenceConf.
| shieldedTransInPendingMaxCounts = 10
| # Contract proto validation thread pool
| validContractProto.threads = 2
This changes the fallback for node.validContractProto.threads. On develop, an absent key meant availableProcessors(), but this now defaults to 2, which can silently shrink the validation pool on multi-core nodes.
Fixed in 8a76db8 — same fix @317787106 flagged on this line. reference.conf now uses validContractProto.threads = 0; NodeConfig.postProcess() expands 0 → Runtime.getRuntime().availableProcessors(), matching develop Args.java:743-746. Symmetric with the existing solidity.threads and validateSignThreadNum handling in the same postProcess().
Regression tests:
- testValidContractProtoThreadsDefaultAutoExpands — default expands to availableProcessors()
- testValidContractProtoThreadsExplicitPreserved — explicit non-zero passes through
| # Strongly recommend NOT modifying unless you know every item's meaning clearly.
| dbSettings = {
| levelNumber = 7
| compactThreads = 32
Several storage.dbSettings.* fallback values here no longer match develop's runtime defaults when the keys are omitted. In particular, compactThreads, blocksize, level0FileNumCompactionTrigger, and targetFileSizeBase used different fallback values before, so this becomes a silent RocksDB tuning change for minimal configs.
Thanks for catching this — fixed in 8a76db8. Verified each value against develop's Args.initRocksDbSettings (Args.java:1605-1626):
| Key | develop fallback | PR (before) | PR (fixed) |
|---|---|---|---|
| compactThreads | max(availableProcessors, 1) | 32 | 0 (auto-expanded in postProcess) |
| blocksize | 16 | 64 | 16 |
| level0FileNumCompactionTrigger | 2 | 4 | 2 |
| targetFileSizeBase | 64 | 256 | 64 |
The other dbSettings keys (levelNumber, maxBytesForLevelBase, maxBytesForLevelMultiplier, targetFileSizeMultiplier, maxOpenFiles) already matched — no change there.
StorageConfigTest.testDbSettingsDefaults now asserts the aligned values, plus new cases testCompactThreadsAutoExpand / testCompactThreadsExplicitPreserved to catch future drift.
…acks
Address review feedback from 317787106 (2026-04-16) and lxcmyf (2026-04-17) covering silent default-value drift introduced by the new reference.conf. Verified each drift against develop Args.initRocksDbSettings / applyConfigParams runtime fallbacks.
reference.conf:
- storage.dbSettings.compactThreads: 32 -> 0 (0 = auto: max(availableProcessors, 1))
- storage.dbSettings.blocksize: 64 -> 16
- storage.dbSettings.level0FileNumCompactionTrigger: 4 -> 2
- storage.dbSettings.targetFileSizeBase: 256 -> 64
- node.trustNode: "127.0.0.1:50051" -> "" (Args bridge converts empty -> null)
- node.maxActiveNodesWithSameIp: line removed. Shipping the legacy alias in reference.conf caused HOCON merges to always mask user-supplied maxConnectionsWithSameIp via the alias-fallback branch.
- node.validContractProto.threads: 2 -> 0 (0 = auto: availableProcessors)
StorageConfig.java:
- DbSettingsConfig field defaults mirror reference.conf
- DbSettingsConfig.postProcess() expands compactThreads == 0 to max(availableProcessors, 1), matching develop Args.java:1609-1611
NodeConfig.java:
- ValidContractProtoConfig.threads default: 2 -> 0
- postProcess() expands validContractProto.threads == 0 to availableProcessors(), matching develop Args.java:743-746
- Removed unused field maxActiveNodesWithSameIp. The alias read at fromConfig() uses section.hasPath() directly, and removing the bean field lets ConfigBeanFactory stop requiring a reference.conf default for it (which is what caused the alias pollution to begin with).
Tests:
- StorageConfigTest.testDbSettingsDefaults asserts the new fallbacks
- StorageConfigTest: testCompactThreadsAutoExpand / testCompactThreadsExplicitPreserved
- NodeConfigTest: testValidContractProtoThreadsDefaultAutoExpands / testValidContractProtoThreadsExplicitPreserved
- NodeConfigTest: testTrustNodeNotDefaultedByReferenceConf
- NodeConfigTest: testMaxConnectionsWithSameIpNotOverriddenByReferenceConfAlias
- NodeConfigTest: testMaxActiveNodesWithSameIpLegacyAliasStillWorks
- NodeConfigTest: testLegacyAliasTakesPriorityOverModernKey (matches develop Args.java:392-399)
check-math CI job flagged uses of java.lang.Math introduced in 8a76db8 (compactThreads auto-expand logic in StorageConfig.postProcess and the corresponding test assertions). Swap both to StrictMathWrapper.max, which is the project-wide convention enforced by the check-math scanner.
Address PR review from lvs0075 (2026-04-22, reference.conf:218).
reference.conf ships `allowShieldedTransactionApi = true`, so after the HOCON
`withFallback` merge, `section.hasPath("allowShieldedTransactionApi")` is
always true regardless of what the user wrote. That made the legacy-key
compatibility branch in NodeConfig.fromConfig() dead code: a user who wrote
only `node.fullNodeAllowShieldedTransaction = false` in their config.conf
silently got `true`, unintentionally enabling the shielded transaction API.
Fix: replace the unreachable else-if chain with a direct override. The legacy
key is intentionally not defaulted in reference.conf, so `hasPath` on it
reliably means "user supplied it". Same pattern as maxActiveNodesWithSameIp.
Also restores the deprecation warning develop's Args.java emits when a user
still uses the legacy key; this PR dropped it along with the rewrite.
Regression tests:
- testShieldedApiDefaultsToTrueWhenNeitherKeySet
- testShieldedApiModernKeyRespected
- testShieldedApiLegacyKeyRespected (guards the reported bug)
- testShieldedApiLegacyKeyTakesPriorityOverModern (mirrors
testLegacyAliasTakesPriorityOverModernKey)
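The fixed precedence can be distilled into a small stand-alone function. This is a hypothetical sketch, not the in-tree code: in `NodeConfig.fromConfig()` the modern value comes off the merged Config (reference.conf guarantees the key exists) and the legacy value comes from `hasPath()`/`getBoolean()` on the un-defaulted legacy key.

```java
// Hypothetical distillation of the shielded-API precedence fix described
// above. `legacyValue == null` models hasPath(legacyKey) returning false.
public class ShieldedApiPrecedence {
  static boolean resolve(boolean modernValue, Boolean legacyValue) {
    // Start from the modern key, which is always present after the
    // withFallback merge; the old else-if chain could never get past it.
    boolean result = modernValue;
    if (legacyValue != null) {
      // A user-supplied legacy key overrides directly and takes priority
      // over the modern key, matching develop's behavior.
      result = legacyValue;
    }
    return result;
  }

  public static void main(String[] args) {
    System.out.println(resolve(true, null));   // neither user key set: default true
    System.out.println(resolve(false, null));  // modern key respected: false
    System.out.println(resolve(true, false));  // legacy key respected: false (the reported bug)
  }
}
```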
// which ConfigBeanFactory cannot bind to a String field.
// Note: hasPath() returns false for null values, use hasPathOrNull() instead.
section = replaceNullWithEmpty(section, "discovery.external.ip");
section = replaceNullWithEmpty(section, "trustNode");
[SHOULD] The null-value sanitization here only hardcodes two paths (discovery.external.ip and trustNode). Defaulting trustNode = "" in reference.conf reduces exposure, but any user who writes foo = null for another String-typed key (external configs like system-test do this) will still crash ConfigBeanFactory.create. Consider a generic sweep over the section that replaces every hasPathOrNull && !hasPath key with an empty string, so we don't have to add a new line here every time a String field is added.
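For illustration, the suggested generic sweep could look like the following sketch, written against a plain nested-Map view of the config tree rather than the real Typesafe Config API (method names are hypothetical; a real version would recurse over `ConfigObject` and rebuild entries with `Config.withValue`):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the generic null sweep: walk the config tree and
// replace every null leaf with "", instead of hardcoding individual paths.
public class NullSweepSketch {
  @SuppressWarnings("unchecked")
  static Map<String, Object> replaceNullsWithEmpty(Map<String, Object> node) {
    Map<String, Object> out = new LinkedHashMap<>();
    for (Map.Entry<String, Object> e : node.entrySet()) {
      Object v = e.getValue();
      if (v == null) {
        out.put(e.getKey(), "");  // null leaf -> empty string
      } else if (v instanceof Map) {
        // recurse into nested config objects
        out.put(e.getKey(), replaceNullsWithEmpty((Map<String, Object>) v));
      } else {
        out.put(e.getKey(), v);   // keep non-null leaves as-is
      }
    }
    return out;
  }

  public static void main(String[] args) {
    Map<String, Object> discovery = new LinkedHashMap<>();
    discovery.put("external.ip", null);  // what system-test writes
    Map<String, Object> section = new LinkedHashMap<>();
    section.put("discovery", discovery);
    section.put("trustNode", null);
    section.put("listen.port", 18888);
    System.out.println(replaceNullsWithEmpty(section));
  }
}
```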
Thanks, but I'd prefer to keep this explicit rather than generalize:
- Writing `= null` in a config is an anti-pattern — `null` is a Java/HOCON keyword, not a value any typed bean field can consume. External configs shouldn't be doing this.
- With auto-binding, any existing `null` we didn't sanitize fails fast at startup with a clear `ConfigBeanFactory.ValidationFailed` naming the path. Nothing slips through silently.
- Same guarantee covers the future: if a new String field ever gets a `null` from some external config, auto-binding will surface it immediately — no silent breakage possible.

So a preemptive generic sweep isn't needed; we'll just add the explicit line when (and if) a second case actually shows up.
On reflection — the existing replaceNullWithEmpty was itself a temporary workaround added because system-test writes = null, which ConfigBeanFactory cannot bind to a String field. Rather than generalizing the workaround, I'll just remove it and go back to pure auto-binding. Any = null in external configs will then fail fast with a clear ConfigBeanFactory.ValidationFailed naming the path, which is the correct long-term signal that the external config needs to be fixed.
}
if (maxFlush > 500) {
  throw new IllegalArgumentException("MaxFlushCount value must not exceed 500!");
}
[SHOULD] The estimatedTransactions clamp (238-245) and maxFlushCount validation (249-255) still live in the bridge, while NodeConfig / VmConfig / CommitteeConfig all put clamps and validation inside their own bean's postProcess(). Consider moving these into StorageConfig.postProcess() (attached to SnapshotConfig and TxCacheConfig) for architectural consistency — this also lets StorageConfig enforce its invariants when used standalone (e.g. in tests) without relying on the Args bridge.
Agreed — done. Moved the maxFlushCount validation into SnapshotConfig.postProcess() and the estimatedTransactions clamp into TxCacheConfig.postProcess(), called from StorageConfig.fromConfig() alongside the existing dbSettings.postProcess(). Args.applyStorageConfig now just reads already-validated values. Covered by new tests in StorageConfigTest.
Address PR review from lxcmyf (2026-04-22, Args.java:255) — architectural consistency with NodeConfig / VmConfig / CommitteeConfig, where clamps and validation live in the bean's own postProcess().

Before: Args.applyStorageConfig held an inline clamp for estimatedTransactions ([100, 10000]) and an inline range check for maxFlushCount ((0, 500]). This was carried over from develop's static Storage helpers during the bean-binding refactor.

Now:
- SnapshotConfig.postProcess() enforces maxFlushCount in (0, 500], throwing IllegalArgumentException on violation (same messages as develop).
- TxCacheConfig.postProcess() clamps estimatedTransactions to [100, 10000].
- StorageConfig.fromConfig() calls both alongside the existing dbSettings.postProcess().
- Args.applyStorageConfig now simply reads already-validated values.

This also lets StorageConfig enforce its invariants when loaded standalone (e.g. in tests) without depending on the Args bridge.

Tests (StorageConfigTest):
- testSnapshotMaxFlushCountZeroRejected
- testSnapshotMaxFlushCountNegativeRejected
- testSnapshotMaxFlushCountOver500Rejected
- testTxCacheEstimatedClampedBelowMin
- testTxCacheEstimatedClampedAboveMax
- testTxCacheEstimatedWithinRangePreserved
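The moved validation can be sketched as two small static methods. This is a stand-in, not the in-tree code: the real methods live on `SnapshotConfig.postProcess()` and `TxCacheConfig.postProcess()` inside `StorageConfig`, and the lower-bound error message here is illustrative (only the over-500 message appears in the diff above).

```java
// Minimal stand-ins for the postProcess() validation described above.
public class StoragePostProcessSketch {

  // Enforce maxFlushCount in (0, 500]. The >500 message matches the diff;
  // the lower-bound message is an illustrative placeholder.
  static void validateMaxFlushCount(int maxFlush) {
    if (maxFlush <= 0) {
      throw new IllegalArgumentException("MaxFlushCount value must be positive!");
    }
    if (maxFlush > 500) {
      throw new IllegalArgumentException("MaxFlushCount value must not exceed 500!");
    }
  }

  // Clamp estimatedTransactions to [100, 10000].
  static int clampEstimatedTransactions(int estimated) {
    return Math.max(100, Math.min(10000, estimated));
  }

  public static void main(String[] args) {
    System.out.println(clampEstimatedTransactions(50));     // 100: clamped up
    System.out.println(clampEstimatedTransactions(20000));  // 10000: clamped down
    validateMaxFlushCount(500);                             // upper bound accepted
    try {
      validateMaxFlushCount(501);
    } catch (IllegalArgumentException expected) {
      System.out.println(expected.getMessage());
    }
  }
}
```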
Address PR review from lxcmyf (2026-04-22, NodeConfig.java:370). The `replaceNullWithEmpty` helper was originally a compatibility shim for system-test configs that write `discovery.external.ip = null`. Rather than generalizing the shim over all paths (the reviewer's concern), remove it entirely and let ConfigBeanFactory fail fast on HOCON null values. The auto-binding itself is the right safety net: any String-typed field that receives `= null` from an external config surfaces at startup as `ConfigBeanFactory.ValidationFailed` naming the exact path. That's the correct long-term signal that the external config needs to use `""` or remove the key, not to be papered over. No in-repo configs write `= null`; external configs (system-test) should update their own defaults.
What

Replace ~850 lines of manual `if (config.hasPath(KEY)) / getXxx` blocks in `Args.applyConfigParams()` with `ConfigBeanFactory.create()` automatic binding. Each `config.conf` domain now maps to a typed Java bean class.

Delete `ConfigKey.java` (~100 string constants) — `config.conf` becomes the sole source of truth for key names.

Why
Adding a new config parameter previously required editing 3 files (`config.conf`, `ConfigKey.java`, `Args.java`). The manual parsing was error-prone, hard to review, and duplicated every key name as a Java string constant. With bean binding, adding a parameter only requires adding a field to the bean class and a line in `reference.conf`.

Introducing reference.conf
Previously, default values were scattered across three places: bean field initializers, ternary expressions in `Args.java`, and comments in `ConfigKey.java`. Defaults could be inconsistent, and there was no single place to see all config parameters and their default values at a glance.

`reference.conf` is the official recommended practice of the Typesafe Config library (see official docs): libraries and applications declare all config parameters and their defaults in `src/main/resources/reference.conf`, and users only need to override the values they want to change — Typesafe Config merges them automatically. Akka, Play Framework, and Spark all follow this convention.
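For illustration, a minimal sketch of the merge behavior (the keys and defaults below are taken from this PR's reference.conf changes; the full file lives at `common/src/main/resources/reference.conf`):

```
# reference.conf (ships in the jar): declares every key with its default.
node {
  trustNode = ""                    # empty string; the Args bridge maps it to null
  validContractProto.threads = 0    # 0 = auto (availableProcessors)
}

# config.conf (user-supplied): only the values the user wants to change.
node.validContractProto.threads = 4

# After Typesafe Config's automatic withFallback merge:
#   node.trustNode = ""                   (from reference.conf)
#   node.validContractProto.threads = 4   (user override wins)
```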
Benefits:
- Defaults live in one place, eliminating scatter and inconsistency
- `reference.conf` shows the complete list of config parameters, defaults, and comments — no need to dig through Java code
- Each bean field maps directly to the corresponding config key
- A default in `reference.conf` ensures binding never fails due to missing keys, even when users don't configure a value
- New parameters only need a bean field and an entry in `reference.conf` to exist, lowering the learning curve for new contributors

Changes
Commit 1: Core refactor — ConfigBeanFactory binding
- Add typed bean classes: `VmConfig`, `BlockConfig`, `CommitteeConfig`, `MetricsConfig`, `NodeConfig`, `EventConfig`, `StorageConfig`, `GenesisConfig`, `RateLimiterConfig`, `MiscConfig`, `LocalWitnessConfig`
- Shrink `Args.applyConfigParams()` from ~850 lines to ~50 lines of domain binding calls
- Delete `ConfigKey.java` (~100 string constants)
- Migrate `Storage.java` static getters to read from the `StorageConfig` bean

Commit 2: reference.conf as single source of defaults
- Add comprehensive `common/src/main/resources/reference.conf` with defaults for all config domains
- Move default values from bean field initializers into `reference.conf`
- Expose config beans as static singletons (`NodeConfig.getInstance()` etc.)
- Auto-bind discovery, PBFT, and list fields in `NodeConfig`

Commit 3: Storage cleanup
- Move `default`/`defaultM`/`defaultL` LevelDB option reading from `Storage` into `StorageConfig`, so `Storage` no longer touches `Config` directly
- Add `DbOptionOverride` with nullable boxed types for partial override semantics
- Fix `cacheSize` type from `int` to `long` to match the LevelDB `Options` API

Scope
- This PR only changes how `config.conf` values are parsed into Java objects
- It does not remove `CommonParameter` (that is future Phase 2)
- It does not change the `config.conf` format or CLI parameter handling
- It does not touch `CommonParameter.getInstance()` call sites

Future Plan
This refactor is Phase 1 — it only replaces the config reading layer. After bean binding, values are still copied one by one to `CommonParameter`'s flat fields, because 847 call sites across the codebase depend on `CommonParameter.getInstance()`.

The goal of Phase 2 is to remove the `CommonParameter` intermediary:
- Some fields are written by both `config.conf` and CLI arguments (JCommander), converging on `CommonParameter`. The CLI override logic needs to be unified into Typesafe Config's override mechanism (`ConfigFactory.systemProperties()` or custom overrides), eliminating CLI's direct write dependency on `CommonParameter`.
- Replace `CommonParameter.getInstance().getXxx()` calls with direct domain bean access — `NodeConfig.getInstance().getXxx()`, `VmConfig.getInstance().getXxx()`, etc.
- Remove `CommonParameter`: once all call sites are migrated and CLI arguments no longer write directly, remove `CommonParameter` and the bridge-copy code in `Args`.

End state: `config.conf` → `reference.conf` fallback → `ConfigBeanFactory` → domain bean singletons. Fully type-safe, no intermediary layer, no string constants.
Test