feat(dash-spv): hardcoded masternode seed files + weekly auto-refresh #678
Conversation
📝 Walkthrough

Adds a seed-generation workflow and a fetcher CLI; introduces a library embedding/parsing masternode seed files; integrates embedded seeds into SPV peer discovery; updates workspace members and manifests; and adjusts CI to schedule weekly seed updates and exclude the tooling binary from normal CI runs.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    rect rgba(200,200,255,0.5)
    participant CLI as Seed Fetcher CLI
    participant Peer as P2P Peer
    participant Protocol as Dash Protocol Handler
    participant Probe as Probe Module
    participant FS as File System
    end
    CLI->>Peer: TCP connect & Version handshake
    Peer->>Protocol: Exchange Version/Verack/Ping/Pong
    Protocol->>Peer: Send SendHeaders
    Peer->>Protocol: Return Headers (tip)
    CLI->>Peer: Send GetMnListD(base=0, block=tip)
    Peer->>Protocol: Return MnListDiff (masternodes)
    Protocol->>CLI: Provide masternode entries
    alt Probing enabled
        CLI->>Probe: probe_core(ip:port)
        Probe->>Peer: TCP connect + P2P Version
        Probe-->>CLI: Reachability + start_height
        CLI->>Probe: probe_platform(ip, http_port)
        Probe->>Peer: TLS handshake + HTTP GET
        Probe-->>CLI: PlatformStatus (live + SSL)
    end
    CLI->>FS: Write deterministic seed file
    FS-->>CLI: File written
```

```mermaid
sequenceDiagram
    rect rgba(200,255,200,0.5)
    participant Caller as Peer Discovery Caller
    participant Discovery as DnsDiscovery
    participant Embedded as Embedded Seeds
    participant DNS as DNS Resolver
    end
    Caller->>Discovery: discover_peers(network)
    Discovery->>Embedded: addresses(network)
    Embedded-->>Discovery: embedded IPs
    Discovery->>DNS: resolve network.dns_seeds()
    DNS-->>Discovery: resolved IPs (or fail)
    Discovery->>Discovery: union, dedup, sort peers
    Discovery-->>Caller: combined peer list (embedded + DNS)
```
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 4 | ❌ 1
❌ Failed checks (1 inconclusive)
✅ Passed checks (4 passed)
Codecov Report: ❌ Patch coverage is
Additional details and impacted files

```
@@             Coverage Diff             @@
##           v0.42-dev     #678    +/-   ##
=============================================
+ Coverage      69.80%    69.84%   +0.04%
=============================================
  Files            319       320        +1
  Lines          66687     66705       +18
=============================================
+ Hits           46549     46591       +42
+ Misses         20138     20114       -24
```
Brings in PR #678 (hardcoded masternode seed files + weekly auto-refresh, new dash-network-seeds crate, masternode-seeds-fetcher P2P refactor) so feat/platform-wallet2 has the embedded seed infrastructure available while #678 is under review.

Conflict resolution:
- dash-spv/src/network/constants.rs: kept both TESTNET_FIXED_PEERS (from PR #658, already on pw2) and the new MAINNET_P2P_PORT / TESTNET_P2P_PORT constants from #678.
- dash-spv/src/network/discovery.rs: kept both the testnet fixed-peers fallback loop and the richer embedded-seeds-count log message from #678.
- key-wallet-ffi/src/error.rs: incoming #678 rewrote FFIError's impl with set()/clean() methods; preserved the HEAD-side associated functions (success/error/set_error/set_success/free_message) that PR #672 callers depend on, alongside the new methods. Deleted duplicate From<key_wallet::Error> and From<key_wallet_manager::WalletError> impls in favor of #678's newer versions, and extended the latter's match to cover pw2's ApplyChangeSet and Persistence variants.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
This PR has merge conflicts with the base branch. Please rebase or merge the base branch into your branch to resolve them. |
Replaces the small inline `TESTNET_FIXED_PEERS` list with versioned
seed files (`dash-spv/seeds/{mainnet,testnet}.txt`) embedded via
`include_str!`. Peer discovery now treats these as the primary source
and DNS resolution as a best-effort backup — DNS failures are logged
but never prevent bootstrap.
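The primary/backup behavior can be sketched in a few lines of Rust. This is an illustrative stand-in rather than the actual `dash-spv` discovery code: the function name `combine_peers` and the string error type are assumptions, but the invariant matches the description above, embedded seeds always survive a DNS failure.

```rust
use std::collections::BTreeSet;
use std::net::SocketAddr;

/// Merge embedded seed addresses with best-effort DNS results,
/// deduplicating and sorting for deterministic peer ordering.
/// (Illustrative sketch; the real logic lives in
/// dash-spv/src/network/discovery.rs.)
fn combine_peers(
    embedded: &[SocketAddr],
    dns: Result<Vec<SocketAddr>, String>,
) -> Vec<SocketAddr> {
    let mut set: BTreeSet<SocketAddr> = embedded.iter().copied().collect();
    match dns {
        Ok(resolved) => set.extend(resolved),
        // DNS failures are logged but never prevent bootstrap.
        Err(e) => eprintln!("DNS seed resolution failed (non-fatal): {e}"),
    }
    set.into_iter().collect()
}
```

Even when the DNS arm returns `Err`, the caller still gets the full embedded list back.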
A new `masternode-seeds-fetcher` tooling crate queries
`masternode list json` via dashd RPC, filters to `ENABLED`, sorts &
dedupes, and writes the seed file. A scheduled workflow runs it
weekly for both networks and opens a PR when the output changes.
Closes #658
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Pre-commit `typos` hook rejects "unparseable"; use "unparsable" in both doc comment and tracing message. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The fetcher now connects to a peer from DNS seeds or the existing seed
file, runs the Dash P2P handshake, sends `sendheaders`, and waits for
the peer's next new-block announcement to obtain a fresh chain tip
hash. It then sends `getmnlistd` with base_block_hash=0 and uses the
returned `mnlistdiff` to extract valid masternode service addresses.
Removes all RPC-related args, env vars, and secrets plumbing — the
weekly CI workflow no longer needs `{MAINNET,TESTNET}_DASHD_RPC_*`
secrets and runs fully self-contained.
Verified end-to-end against live testnet peer 44.225.73.155:19999:
39s total runtime, 543 masternodes → 86 valid addresses written.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…entiate evo/regular

Split the masternode IP seed lists out of `dash-spv` into a new, lightweight `dash-network-seeds` crate. Consumers that only need a bootstrap peer list (wallets, explorers, light clients) can now depend on this crate directly without pulling in dash-spv's full async/sync/storage stack.

Seed file format now differentiates regular vs Evo (HPMN) masternodes:

```
regular 1.2.3.4:9999
evo 68.67.122.1:19999
```

Lines with only a bare address are still accepted and default to Regular for backwards compatibility.

The `dash-network-seeds` crate exposes:
- `seeds(Network)` — all entries
- `addresses(Network)` — addresses only
- `regular_seeds(Network)` / `evo_seeds(Network)` — filtered views
- `parse(raw, default_port)` — for tooling
- Optional `dashcore` feature provides `TryFrom<dashcore::Network>`

Counts at this commit (fetched live from the P2P network):
- mainnet: 2031 seeds = 1671 regular + 360 evo
- testnet: 86 seeds = 55 regular + 31 evo

`dash-spv` now pulls its seeds from the new crate; the fetcher writes the new format to `dash-network-seeds/seeds/`; the weekly workflow and `ci-groups.yml` are updated accordingly.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
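A minimal sketch of the per-line parsing rule described above (typed lines with a bare-address fallback). `parse_line` and `SeedKind` are illustrative names, not the crate's real API; the published crate exposes `parse(raw, default_port)` over whole files, but the fallback logic is the same idea.

```rust
use std::net::{IpAddr, SocketAddr};

#[derive(Debug, PartialEq)]
enum SeedKind {
    Regular,
    Evo,
}

/// Parse one seed-file line: "evo <addr>" / "regular <addr>", with
/// bare addresses defaulting to Regular for backwards compatibility.
/// (Hypothetical simplification of the crate's `parse`.)
fn parse_line(line: &str, default_port: u16) -> Option<(SeedKind, SocketAddr)> {
    let mut parts = line.split_whitespace();
    let first = parts.next()?;
    let (kind, addr_str) = match first {
        "evo" => (SeedKind::Evo, parts.next()?),
        "regular" => (SeedKind::Regular, parts.next()?),
        bare => (SeedKind::Regular, bare), // legacy one-column format
    };
    // Accept "ip:port", or a bare IP that takes the default port.
    let addr = addr_str
        .parse::<SocketAddr>()
        .or_else(|_| addr_str.parse::<IpAddr>().map(|ip| SocketAddr::new(ip, default_port)))
        .ok()?;
    Some((kind, addr))
}
```

Blank or malformed lines fall out as `None`, which a file-level parser can skip or report.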
CI invokes `cargo test -p dash-network-seeds --all-features`, which activates the `dashcore` optional feature. With `default-features = false` that pulled dashcore in without `secp-recovery`, and dashcore's own `signer.rs` uses `RecoverableSignature` / `RecoveryId` / `sign_ecdsa_recoverable` / `recover_ecdsa` unconditionally — so dashcore itself failed to compile. Drop `default-features = false` so dashcore builds with its usual defaults when our `dashcore` feature is enabled. Since `dashcore` is still `optional = true`, users who don't turn it on still pull in nothing beyond std. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…reach+sync, SSL)
Each seed line now carries best-effort probe results captured at the
moment the list was refreshed, alongside the masternode type and
Core P2P address.
Columns: <type> <addr> <platform_http_port|-> <core_reach> <core_sync> <plat_reach> <plat_live> <plat_ssl>
New probe types exposed from `dash-network-seeds`:
- Reachability: Unknown | Ok | Timeout | Refused | Error
- SyncStatus: Unknown | Synced | Behind(N) | Ahead(N)
- PlatformLiveness: Unknown | Ok | None
- SslStatus: Unknown | Valid | Expired | SelfSigned | Untrusted | NoHandshake
Plus:
- New `reachable_seeds` / `reachable_addresses` public functions for
consumers that only want nodes whose last probe succeeded.
- `MasternodeSeed::to_line()` / `Display` round-trip the full format.
- Legacy 1- and 2-column formats still parse for backwards compat.
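The column layout and its round-trip guarantee can be illustrated with a self-contained sketch. The struct, field names, and status strings below are assumptions for illustration only; the real crate models these columns with the `Reachability`/`SyncStatus`/`PlatformLiveness`/`SslStatus` enums listed above and handles the legacy short formats too.

```rust
/// Illustrative mirror of the extended seed-line format:
/// <type> <addr> <platform_http_port|-> <core_reach> <core_sync> <plat_reach> <plat_live> <plat_ssl>
#[derive(Debug, PartialEq, Clone)]
struct ProbedSeed {
    kind: String,                    // "regular" | "evo"
    addr: String,                    // Core P2P address
    platform_http_port: Option<u16>, // serialized as "-" when absent
    core_reach: String,
    core_sync: String,
    plat_reach: String,
    plat_live: String,
    plat_ssl: String,
}

impl ProbedSeed {
    /// Serialize back to the 8-column line format.
    fn to_line(&self) -> String {
        let port = self.platform_http_port.map_or("-".to_string(), |p| p.to_string());
        format!(
            "{} {} {} {} {} {} {} {}",
            self.kind, self.addr, port,
            self.core_reach, self.core_sync,
            self.plat_reach, self.plat_live, self.plat_ssl
        )
    }

    /// Parse a full 8-column line (legacy 1/2-column lines handled elsewhere).
    fn parse(line: &str) -> Option<ProbedSeed> {
        let f: Vec<&str> = line.split_whitespace().collect();
        if f.len() != 8 {
            return None;
        }
        let port = if f[2] == "-" { None } else { Some(f[2].parse().ok()?) };
        Some(ProbedSeed {
            kind: f[0].into(),
            addr: f[1].into(),
            platform_http_port: port,
            core_reach: f[3].into(),
            core_sync: f[4].into(),
            plat_reach: f[5].into(),
            plat_live: f[6].into(),
            plat_ssl: f[7].into(),
        })
    }
}
```

The key property, matching the `to_line()` / `Display` claim above, is that serializing and re-parsing yields the identical record, including the `-` placeholder for a missing platform port.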
`masternode-seeds-fetcher` now runs a probe phase after fetching the
mnlistdiff. For each masternode it:
* Opens TCP + Dash P2P handshake to the Core port, capturing
`start_height` for sync classification against the coinbase tip.
* For Evo nodes, opens TLS on `platform_http_port` with a custom
verifier that accepts any chain (we connect by IP, not hostname),
inspects the leaf cert via x509-parser for expired / self-signed
classification, then does a minimal HTTP GET to gauge liveness.
Probes run with bounded concurrency via tokio + FuturesUnordered;
`--skip-probes` and `--probe-concurrency` flags control behavior.
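The fetcher's probe phase uses tokio and `FuturesUnordered` for bounded concurrency; the same idea can be shown with a std-thread analogue, where a fixed pool of `concurrency` workers drains a shared queue of targets. Everything below (the names, the placeholder probe body) is illustrative, not the tool's actual code.

```rust
use std::sync::mpsc;
use std::sync::{Arc, Mutex};
use std::thread;

/// Simplified std-thread analogue of a bounded-concurrency probe phase:
/// at most `concurrency` probes run at once, and results are collected
/// as they complete, in no particular order.
fn probe_all(targets: Vec<String>, concurrency: usize) -> Vec<(String, bool)> {
    let queue = Arc::new(Mutex::new(targets));
    let (tx, rx) = mpsc::channel();
    let mut handles = Vec::new();
    for _ in 0..concurrency {
        let queue = Arc::clone(&queue);
        let tx = tx.clone();
        handles.push(thread::spawn(move || loop {
            let target = match queue.lock().unwrap().pop() {
                Some(t) => t,
                None => break, // queue drained; worker exits
            };
            // Placeholder probe: the real tool opens TCP + a Dash P2P
            // handshake (and TLS for Evo nodes) here.
            let reachable = !target.is_empty();
            tx.send((target, reachable)).unwrap();
        }));
    }
    drop(tx); // close our sender so the receiver terminates
    for h in handles {
        h.join().unwrap();
    }
    rx.into_iter().collect()
}
```

A flag like `--probe-concurrency=100` then maps directly onto the pool size, trading probe wall-clock time against open-socket pressure.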
Live refresh results pushed in the same commit:
- mainnet: 2034 seeds (1674 regular + 360 evo), core_ok 1987/2034,
platform_ok 352/360, ssl_valid 294/360. Probes ran in 25 s.
- testnet: 86 seeds (55 regular + 31 evo), core_ok 85/86,
platform_ok 31/31, ssl_valid 28/31. Probes ran in 10 s.
CI workflow timeout bumped to 60 min and the run commands now pass
--probe-concurrency=100 for mainnet (64 for testnet).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Actionable comments posted: 5
🧹 Nitpick comments (8)
.github/workflows/update-masternode-seeds.yml (1)
22-27: Add a `concurrency` group. Without one, a manual `workflow_dispatch` triggered while the scheduled run is in flight (or two back-to-back dispatches) can race on the same `chore/refresh-masternode-seeds` branch. A simple group + `cancel-in-progress: true` avoids that.

```diff
 jobs:
   refresh:
     name: Refresh masternode seed files
     runs-on: ubuntu-latest
     timeout-minutes: 60
+    concurrency:
+      group: update-masternode-seeds
+      cancel-in-progress: false
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In .github/workflows/update-masternode-seeds.yml around lines 22-27, add a GitHub Actions concurrency group to the refresh job to prevent overlapping runs: for the job named "refresh" (Refresh masternode seed files) configure concurrency with a unique group string like "chore/refresh-masternode-seeds" and set cancel-in-progress: true so any in-flight run on that group is canceled when a new workflow_dispatch or scheduled run starts; update the job definition for refresh to include this concurrency block.

dash-network/src/lib.rs (1)
122-128: Duplication with `dash-spv/src/network/constants.rs` seed constants.
`MAINNET_DNS_SEEDS`/`TESTNET_DNS_SEEDS` in `dash-spv/src/network/constants.rs` (lines 20–27) hold the same hostnames. Now that `Network::dns_seeds()` is the canonical source, consider having those constants (or their sole consumer) delegate to `network.dns_seeds()` so the lists don't drift.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `dash-network/src/lib.rs` around lines 122-128, the MAINNET_DNS_SEEDS and TESTNET_DNS_SEEDS constants duplicate the hostnames already returned by Network::dns_seeds(); update the code so the constants (or their sole consumer) delegate to Network::dns_seeds() instead of hardcoding the lists: remove or replace the duplicated string arrays with a call that converts Network::Mainnet.dns_seeds() / Network::Testnet.dns_seeds() to the expected type used by the consumer (or change the consumer to use Network::dns_seeds() directly), ensuring only Network::dns_seeds() remains the canonical source.

masternode-seeds-fetcher/src/probe.rs (2)
227-257: `classify_cert` LGTM with a small caveat on self-signed detection.
`issuer == subject && chain.len() == 1` is a reasonable heuristic but will miss self-signed leaves where the server also sends an intermediate/root bundle alongside the leaf — those will be classified `Valid`. Given that hostname verification is explicitly skipped and this is best-effort status metadata for a seed file (not a security decision), this is acceptable. Worth a comment so future readers don't assume stronger guarantees.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `masternode-seeds-fetcher/src/probe.rs` around lines 227-257, update classify_cert to document the self-signed heuristic limitation: add a concise comment near the self_signed_hint calculation (in function classify_cert) stating that using issuer == subject && chain.len() == 1 is a heuristic that will miss self-signed leaves when servers also present intermediates/roots, and that this is intentional because we skip hostname/PKI verification and only want best-effort metadata for seed files; keep the existing logic unchanged.
177-179: Avoid `unwrap()` on the dummy `ServerName`.
`"example.invalid"` is a valid DNS name so this never fires today, but the guideline discourages `unwrap()` in `src/`. A `const` + `expect` with a descriptive message (or a `once_cell`) makes the intent explicit and survives future edits.

♻️ Proposed fix

```diff
-    let dummy_name: ServerName<'static> = ServerName::try_from("example.invalid").unwrap();
+    // Safe: "example.invalid" is a well-formed DNS name; the custom verifier ignores it.
+    let dummy_name: ServerName<'static> = ServerName::try_from("example.invalid")
+        .expect("static DNS literal must parse");
```

As per coding guidelines: "Avoid `unwrap()` and `expect()` in library code".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `masternode-seeds-fetcher/src/probe.rs` around lines 177-179, replace the direct call that uses unwrap on ServerName::try_from("example.invalid") (the dummy_name binding) with a safely initialized static so the intent is explicit and the unwrap is avoided at use sites: create a once_cell::sync::Lazy (or similar) static like DUMMY_SERVER_NAME initialized with ServerName::try_from("example.invalid").expect("failed to create dummy ServerName for custom TLS verifier") and then use DUMMY_SERVER_NAME (or a clone/reference) where dummy_name is currently used; this moves the check to one initialization point and replaces the ephemeral unwrap in the dummy_name binding.

masternode-seeds-fetcher/src/main.rs (1)
240-246: Redundant `.max(max)` in truncate length.
`max.saturating_mul(4)` is always ≥ `max` (for `max ≥ 1`; when `max == 0` both sides are 0). The outer `.max(max)` is dead code.

♻️ Proposed fix

```diff
-    list.truncate(max.saturating_mul(4).max(max));
+    list.truncate(max.saturating_mul(4));
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `masternode-seeds-fetcher/src/main.rs` around lines 240-246, the truncate call uses a redundant .max(max): compute the desired truncate length as max.saturating_mul(4) (which is already >= max for max >= 1 and equals 0 when max == 0) and remove the outer .max(max), so change list.truncate(max.saturating_mul(4).max(max)) to list.truncate(max.saturating_mul(4)) in the block that builds and shuffles `list`.

dash-spv/src/network/discovery.rs (1)
130-145: Test name oversells what is actually verified.
The assertion `peers.len() >= 29` is satisfied regardless of whether DNS resolved or not, so this doesn't actually exercise the "DNS fails" path. Consider renaming to something like `test_dns_discovery_testnet_has_embedded_seeds`, or inject a failing resolver to truly cover the fallback branch. Minor.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `dash-spv/src/network/discovery.rs` around lines 130-145, the test name test_dns_discovery_testnet_returns_embedded_when_dns_fails is misleading because the assertion peers.len() >= 29 passes even when DNS succeeds; either rename the test to test_dns_discovery_testnet_has_embedded_seeds to match what it verifies, or change the test to inject a failing resolver into DnsDiscovery (e.g. add or use a constructor like DnsDiscovery::with_resolver or make DnsDiscovery accept a Resolver trait) so the discover_peers(Network::Testnet) call exercises the DNS-failure fallback, then assert the embedded seeds are returned and ports equal TESTNET_P2P_PORT.

dash-network-seeds/src/lib.rs (1)
600-618: Good invariant coverage in tests.
The `testnet_seeds_contain_expected_evo_ips`, `mainnet_seeds_non_empty_and_well_formed`, and platform-status evo/regular tests together catch the most likely regressions when the weekly CI refresh runs. Consider adding a test that round-trips the embedded file through `parse` and asserts no lines were silently dropped — that would catch format drift between fetcher output and parser expectations early.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `dash-network-seeds/src/lib.rs` around lines 600-618, add a test that round-trips the embedded seed file through the parser to ensure no lines are silently dropped: load the raw embedded seeds string used by seeds (the include_str! payload in this module), compute the expected count by splitting into lines and filtering out empty/comment lines, call parse(raw) and assert parsed.len() == expected_count; also assert that seeds(Network::X).len() equals parse(raw).len() for at least one network to validate the end-to-end mapping.

dash-spv/src/network/constants.rs (1)
29-32: Replace hardcoded port constants with the `Network::default_p2p_port()` method.
The constants `MAINNET_P2P_PORT` and `TESTNET_P2P_PORT` duplicate the functionality of `dashcore::Network::default_p2p_port()`, which is already used in `dash-network-seeds/src/lib.rs:380`. This creates a second source of truth and violates the coding guideline: "Never hardcode network parameters, addresses, or keys in the codebase".

♻️ Proposed refactor

```diff
-/// Default Dash P2P port for mainnet.
-pub const MAINNET_P2P_PORT: u16 = 9999;
-/// Default Dash P2P port for testnet.
-pub const TESTNET_P2P_PORT: u16 = 19999;
-
```

Replace `MAINNET_P2P_PORT` and `TESTNET_P2P_PORT` call-sites in `discovery.rs` and `constants.rs` with `Network::Mainnet.default_p2p_port()` and `Network::Testnet.default_p2p_port()` respectively.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `dash-spv/src/network/constants.rs` around lines 29-32, replace the hardcoded constants MAINNET_P2P_PORT and TESTNET_P2P_PORT with calls to dashcore's network method: remove or stop using the constants defined in constants.rs and change all call-sites (e.g., in discovery.rs) to use Network::Mainnet.default_p2p_port() for mainnet and Network::Testnet.default_p2p_port() for testnet; ensure you import or fully-qualify dashcore::Network so the compiler resolves default_p2p_port(), and delete the unused MAINNET_P2P_PORT and TESTNET_P2P_PORT symbols to avoid a second source of truth.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/ci-groups.yml:
- Line 31: Update the stale exclusion comment for "masternode-seeds-fetcher" in
the .github/ci-groups.yml entry: replace the existing note about requiring live
dashd RPC with a concise statement that the tool uses the P2P mesh
(getmnlistd/mnlistdiff) and does not require dashd RPC or credentials, so
maintainers know to look at P2P behavior rather than RPC plumbing.
In @.github/workflows/update-masternode-seeds.yml:
- Around line 62-82: The workflow opens PRs using
peter-evans/create-pull-request@v7 with the default GITHUB_TOKEN which prevents
on: pull_request CI from running; update the action invocation (the block that
includes uses: peter-evans/create-pull-request@v7 and the with: inputs like
commit-message/title/body/branch) to pass a token: input referencing a
repository secret (e.g., token: ${{ secrets.GITHUB_PAT }} or a GitHub App token)
so the created PR will trigger workflows, or alternatively add a clear comment
in the workflow body describing that maintainers must manually close/reopen the
PR to run CI if you prefer not to use a PAT.
In `@dash-network-seeds/Cargo.toml`:
- Around line 15-24: The crate currently imports dashcore::Network
unconditionally and exposes public APIs (seeds(), addresses(), evo_seeds(),
etc.) that use that type, so the fix is to make the crate consistent: either
remove optional = true for the dashcore dependency in Cargo.toml (make dashcore
required and drop the misleading comment), or gate all dashcore-dependent items
behind a feature (add #[cfg(feature = "dashcore")] to the dashcore::Network
import and to any impls/signatures that reference dashcore::Network) and provide
non-dashcore fallbacks (e.g., a local enum Network and #[cfg(not(feature =
"dashcore"))] function signatures or wrapper types) so cargo build
--no-default-features succeeds; update the feature list and public API
accordingly so seeds(), addresses(), evo_seeds(), and any TryFrom/dashcore impls
are correctly feature-gated or removed when the feature is off.
In `@dash-network-seeds/src/lib.rs`:
- Around line 67-73: The function raw_seed_file currently panics for unsupported
Network variants; change its signature to return Option<&'static str> and update
its match to return Some(MAINNET_SEED_FILE)/Some(TESTNET_SEED_FILE) or None for
other variants, then update all public helpers that call it (seeds, addresses,
evo_seeds, regular_seeds, reachable_seeds, reachable_addresses) to handle None
by returning an empty collection (or appropriate empty result) instead of
unwrapping or panicking; ensure no unwrap()/expect() remain in these call sites
and propagate Option/empty-result semantics in the public API.
In `@dash-network/src/lib.rs`:
- Around line 122-128: The dns_seeds() function currently returns &[""] for
Network::Devnet and Network::Regtest which creates a single empty-string seed
and causes DNS lookup errors; change the match arm in pub const fn
dns_seeds(self) -> &'static [&'static str] so Devnet | Regtest returns an empty
slice (&[]) instead of &[""], and add a short doc comment above dns_seeds
describing it returns DNS seeds (empty for Devnet/Regtest) to match the other
pub const fn doc style. Ensure references to Network::Devnet and
Network::Regtest and the dns_seeds function name are updated accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 97c32f44-09a0-4292-bec5-a6b70ae720e1
📒 Files selected for processing (14)
- .github/ci-groups.yml
- .github/workflows/update-masternode-seeds.yml
- Cargo.toml
- dash-network-seeds/Cargo.toml
- dash-network-seeds/seeds/mainnet.txt
- dash-network-seeds/seeds/testnet.txt
- dash-network-seeds/src/lib.rs
- dash-network/src/lib.rs
- dash-spv/Cargo.toml
- dash-spv/src/network/constants.rs
- dash-spv/src/network/discovery.rs
- masternode-seeds-fetcher/Cargo.toml
- masternode-seeds-fetcher/src/main.rs
- masternode-seeds-fetcher/src/probe.rs
```yaml
excluded:
  - integration_test # Requires live Dash node
  - dash-fuzz # Honggfuzz binary targets, tested by fuzz.yml
  - masternode-seeds-fetcher # Tooling binary; needs live dashd RPC, exercised by update-masternode-seeds.yml
```
Exclusion comment is stale.
Per the PR summary and the fetcher refactor, masternode-seeds-fetcher no longer uses dashd RPC — it connects to the P2P network (getmnlistd/mnlistdiff) with no credentials. Update the comment so maintainers don't look for RPC plumbing that isn't there.
```diff
- - masternode-seeds-fetcher # Tooling binary; needs live dashd RPC, exercised by update-masternode-seeds.yml
+ - masternode-seeds-fetcher # Tooling binary; connects to the live Dash P2P network, exercised by update-masternode-seeds.yml
```
```yaml
- name: Open pull request on change
  uses: peter-evans/create-pull-request@v7
  with:
    commit-message: "chore(dash-spv): refresh masternode seed files"
    title: "chore(dash-spv): refresh masternode seed files"
    body: |
      Weekly automated refresh of the hardcoded masternode IP lists
      embedded in `dash-spv` (`dash-network-seeds/seeds/mainnet.txt`,
      `dash-network-seeds/seeds/testnet.txt`).

      Generated by `.github/workflows/update-masternode-seeds.yml`
      via the `masternode-seeds-fetcher` crate, which fetches the
      current masternode list from the Dash P2P network using a
      `getmnlistd` request against a peer discovered through DNS
      seeds and the existing seed files.
    branch: chore/refresh-masternode-seeds
    delete-branch: true
    add-paths: |
      dash-network-seeds/seeds/mainnet.txt
      dash-network-seeds/seeds/testnet.txt
    labels: automated
```
🧩 Analysis chain

🏁 Scripts executed (repository: dashpay/rust-dashcore):

```shell
find .github/workflows -name "*.yml" -o -name "*.yaml" | head -20
cat -n .github/workflows/update-masternode-seeds.yml
cat -n .github/workflows/rust.yml | head -100
rg "spv" .github/workflows/ -A 3 -B 3
rg "pull_request" .github/workflows/rust.yml
rg "dash-network-seeds" .github/scripts/ -l
```

🌐 Web query: GitHub Actions GITHUB_TOKEN pull_request workflow limitation

💡 Result:
The primary limitation of the GITHUB_TOKEN in GitHub Actions workflows triggered by pull_request events is that actions performed with it (e.g., pushing commits to the base branch or creating pull requests) will not trigger new workflow runs for most events (except workflow_dispatch and repository_dispatch). This prevents recursive or infinite workflow loops. For pull_request workflows (not pull_request_target) from forked repositories, the GITHUB_TOKEN has read-only permissions by default (write permissions are downgraded to read-only unless the repository setting "Send write tokens to workflows from pull requests" is enabled by an admin). This limits what the workflow can do, such as preventing pushes or modifications. Additionally, GITHUB_TOKEN has a rate limit of 1,000 requests per hour per repository (15,000 for GitHub Enterprise Cloud resources). To trigger workflows from a pull_request workflow using GITHUB_TOKEN actions, use a Personal Access Token (PAT) or GitHub App token instead.
Citations:
- 1: https://docs.github.com/en/actions/concepts/security/github_token
- 2: https://docs.github.com/actions/using-workflows/workflow-syntax-for-github-actions
- 3: https://docs.github.com/en/actions/reference/github_token-reference
- 4: https://docs.github.com/actions/using-workflows/triggering-a-workflow
- 5: https://stackoverflow.com/questions/67550727/push-event-doesnt-trigger-workflow-on-push-paths
🏁 Script executed:
cat -n .github/scripts/ci_config.py | head -200Repository: dashpay/rust-dashcore
Length of output: 7152
🏁 Script executed:
rg "dash-network-seeds|spv" .github/scripts/ci_config.py -A 2 -B 2Repository: dashpay/rust-dashcore
Length of output: 47
🏁 Script executed:
cat .github/ci-groups.ymlRepository: dashpay/rust-dashcore
Length of output: 751
Auto-opened PRs won't trigger CI with the default GITHUB_TOKEN.
peter-evans/create-pull-request using the default GITHUB_TOKEN will open the PR, but per GitHub's documented limitation, events created by that token don't trigger on: pull_request workflows. The weekly seed-refresh PR will therefore land without running the normal CI suite (including the spv group that covers dash-network-seeds). Either use a PAT (or GitHub App token) via a token: input to enable automatic CI, or document that a maintainer must close/reopen the PR to trigger CI.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/update-masternode-seeds.yml around lines 62 - 82, The
workflow opens PRs using peter-evans/create-pull-request@v7 with the default
GITHUB_TOKEN which prevents on: pull_request CI from running; update the action
invocation (the block that includes uses: peter-evans/create-pull-request@v7 and
the with: inputs like commit-message/title/body/branch) to pass a token: input
referencing a repository secret (e.g., token: ${{ secrets.GITHUB_PAT }} or a
GitHub App token) so the created PR will trigger workflows, or alternatively add
a clear comment in the workflow body describing that maintainers must manually
close/reopen the PR to run CI if you prefer not to use a PAT.
```rust
pub const fn dns_seeds(self) -> &'static [&'static str] {
    match self {
        Network::Mainnet => &["dnsseed.dash.org"],
        Network::Testnet => &["testnet-seed.dashdot.io"],
        Network::Devnet | Network::Regtest => &[""],
    }
}
```
Return an empty slice for Devnet/Regtest instead of &[""].
&[""] contains one empty-string entry, which callers like masternode-seeds-fetcher/src/main.rs pass directly to resolver.lookup_ip(*seed), producing a guaranteed DNS error and a noisy warning on every Devnet/Regtest run. An empty slice expresses the actual intent ("no DNS seeds") and lets the for loop naturally skip.
Also consider adding a doc comment for consistency with the other pub const fns in this impl.
🔧 Proposed fix
```diff
-    pub const fn dns_seeds(self) -> &'static [&'static str] {
-        match self {
-            Network::Mainnet => &["dnsseed.dash.org"],
-            Network::Testnet => &["testnet-seed.dashdot.io"],
-            Network::Devnet | Network::Regtest => &[""],
-        }
-    }
+    /// DNS seeds used for peer discovery on this network.
+    ///
+    /// Returns an empty slice for networks without public DNS seeds
+    /// (`Devnet`, `Regtest`).
+    pub const fn dns_seeds(self) -> &'static [&'static str] {
+        match self {
+            Network::Mainnet => &["dnsseed.dash.org"],
+            Network::Testnet => &["testnet-seed.dashdot.io"],
+            Network::Devnet | Network::Regtest => &[],
+        }
+    }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@dash-network/src/lib.rs` around lines 122 - 128, The dns_seeds() function
currently returns &[""] for Network::Devnet and Network::Regtest which creates a
single empty-string seed and causes DNS lookup errors; change the match arm in
pub const fn dns_seeds(self) -> &'static [&'static str] so Devnet | Regtest
returns an empty slice (&[]) instead of &[""], and add a short doc comment above
dns_seeds describing it returns DNS seeds (empty for Devnet/Regtest) to match
the other pub const fn doc style. Ensure references to Network::Devnet and
Network::Regtest and the dns_seeds function name are updated accordingly.
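The behavioral difference the comment describes is easy to demonstrate standalone. The `lookup_targets` helper below is an illustrative stand-in for the resolver loop, not code from this PR:

```rust
// Contrast &[""] with &[] as a "no DNS seeds" marker.
// Each slice entry becomes one (would-be) DNS lookup attempt.
fn lookup_targets(seeds: &[&str]) -> Vec<String> {
    seeds.iter().map(|s| s.to_string()).collect()
}

fn main() {
    // &[""] yields one lookup of the empty hostname: a guaranteed failure.
    assert_eq!(lookup_targets(&[""]), vec![String::new()]);

    // &[] means the loop body simply never runs: no bogus lookup, no warning.
    assert!(lookup_targets(&[]).is_empty());
    println!("empty slice skips cleanly");
}
```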
Force-pushed from `aa2ce5b` to `52a1353`.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
dash-spv/src/network/discovery.rs (1)
103-119: ⚠️ Potential issue | 🟡 Minor — Mainnet port assertion may be brittle if any embedded seed uses a non-default port.
The assertion `peer.port() == Network::Mainnet.default_p2p_port()` holds only if every embedded mainnet seed advertises port 9999. Masternode service addresses are normally on the default port, but on-chain registrations technically permit alternative ports, so if the weekly regenerated `mainnet.txt` ever picks up such an entry, this test would start failing.
Since the test is `#[ignore]`, this is low risk; just worth being aware of when investigating future CI surprises.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/network/discovery.rs` around lines 103 - 119, The test test_dns_discovery_mainnet assumes every discovered peer uses Network::Mainnet.default_p2p_port(), which is brittle if any embedded seed advertises a non-default port; update the test in discovery.rs (the async test function test_dns_discovery_mainnet that constructs DnsDiscovery and calls discover_peers) to stop asserting strict equality on peer.port(): either (a) assert the port is a valid TCP port (e.g. 1..=65535) and optionally log or collect peers whose port != Network::Mainnet.default_p2p_port(), or (b) only assert equality for peers that explicitly indicate they should use the default port and skip others; pick one of these approaches and replace the loop that currently calls assert_eq!(peer.port(), Network::Mainnet.default_p2p_port()) accordingly.
🧹 Nitpick comments (5)
dash-spv/src/network/discovery.rs (1)
52-89: LGTM — embedded-first, DNS-fallback discovery matches issue #658.
Merge + sort + dedup logic is correct, DNS failures are now non-fatal, and the module-level doc clearly states the priority order. The 29 HP-MN testnet IPs from #658 will be picked up through `dash_network_seeds::addresses(Network::Testnet)`.
Minor wording nit: the log `"(N from embedded seeds + DNS)"` reads slightly ambiguously (as if N is the sum). Consider `"{total} unique peers for {network:?} ({embedded} embedded + {dns_added} from DNS)"` for clarity, where `dns_added = addresses.len() - embedded_count` is tracked before sort/dedup.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/network/discovery.rs` around lines 52 - 89, The final log message is ambiguous about which count is the sum; inside discover_peers track how many DNS addresses were appended by recording addresses.len() into a pre_sort_count (or compute dns_added = addresses.len() - embedded_count) immediately after the DNS loop and before sort/dedup, then change the tracing::info to print the total unique addresses plus a breakdown like "{total} unique peers for {:?} ({embedded_count} embedded + {dns_added} from DNS)"; reference the discover_peers function and the embedded_count and addresses variables when making this change.
masternode-seeds-fetcher/src/main.rs (1)
240-245: Redundant `.max(max)` on truncate cap.
`max.saturating_mul(4)` is always ≥ `max` (saturating multiplication never wraps below the operand for `usize`), so the trailing `.max(max)` is a no-op.
♻️ Simplify
```diff
-    list.truncate(max.saturating_mul(4).max(max));
+    list.truncate(max.saturating_mul(4));
```
🤖 Prompt for AI Agents
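The arithmetic claim behind this simplification is easy to verify with a standalone check:

```rust
fn main() {
    // For usize, saturating_mul(4) either computes max * 4 exactly or
    // clamps to usize::MAX; both outcomes are >= max, so .max(max) never
    // changes the result.
    for max in [0usize, 1, 7, usize::MAX / 2, usize::MAX] {
        let cap = max.saturating_mul(4);
        assert!(cap >= max);
        assert_eq!(cap.max(max), cap); // the trailing .max(max) is a no-op
    }
    println!("ok");
}
```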
Verify each finding against the current code and only fix it if needed. In `@masternode-seeds-fetcher/src/main.rs` around lines 240 - 245, The truncate cap uses max.saturating_mul(4).max(max) which is redundant because saturating_mul(4) is always >= max; update the truncate call in the function where `list` is created to pass only `max.saturating_mul(4)` (remove the trailing `.max(max)`), e.g., change the argument to `list.truncate(max.saturating_mul(4))` so the no-op max is eliminated while preserving behavior.
masternode-seeds-fetcher/src/probe.rs (3)
45-54: `_network` underscore prefix is misleading — the parameter is actually consumed.
It's threaded into `probe_core_inner` and used by `Peer::connect`. Dropping the underscore makes the API self-describing.
♻️ Rename
```diff
 pub async fn probe_core(
     peer_addr: SocketAddr,
-    _network: DashNetwork,
+    network: DashNetwork,
 ) -> (Reachability, Option<u32>) {
-    match tokio::time::timeout(CORE_PROBE_BUDGET, probe_core_inner(peer_addr, _network)).await {
+    match tokio::time::timeout(CORE_PROBE_BUDGET, probe_core_inner(peer_addr, network)).await {
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@masternode-seeds-fetcher/src/probe.rs` around lines 45 - 54, The parameter named `_network` in function probe_core is misleading because it is actually used; rename it to `network` in the probe_core signature and all call sites, and pass the renamed `network` into `probe_core_inner(peer_addr, network)` (and ensure the private function probe_core_inner still accepts the `network` parameter). Update any internal references to `_network` to `network` so the symbol `probe_core`, its parameter `network`, and the call to `probe_core_inner` are consistent and the API accurately reflects that the argument is consumed.
80-108: Inner loop deadline is dominated by the outer `timeout`.
`probe_core` wraps this in `tokio::time::timeout(CORE_PROBE_BUDGET, ...)`, and the inner loop also uses `CORE_PROBE_BUDGET` as its bound. In practice the outer timeout will almost always fire first, making the `while start.elapsed() < CORE_PROBE_BUDGET` bound effectively unreachable (except through the final `Err("handshake incomplete within budget")` mapping, which becomes `Reachability::Error` rather than `Reachability::Timeout`).
Consider either dropping the inner deadline (let the outer timeout do its job; an incomplete handshake is already a timeout) or giving the inner loop a smaller budget so the classification is accurate.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@masternode-seeds-fetcher/src/probe.rs` around lines 80 - 108, The inner loop in probe.rs uses the same CORE_PROBE_BUDGET as the outer tokio::time::timeout in probe_core, so the outer timeout will always trigger first; change the inner loop to either remove the start.elapsed() < CORE_PROBE_BUDGET check (let the outer timeout classify timeouts) or reduce the inner bound to a smaller constant (e.g., PER_POLL_BUDGET) so the loop can return Err("handshake incomplete within budget") distinctly; update references to CORE_PROBE_BUDGET inside the while loop and keep probe_core's outer timeout intact so Reachability::Timeout vs Reachability::Error are classified correctly.
177-190: Avoid `unwrap()`; prefer `expect` with context or handle the error path.
Per coding guidelines, avoid `unwrap()` in `**/src/**/*.rs`. `ServerName::try_from("example.invalid")` won't fail in practice, but a panic message here would be opaque if it ever did.
🛡️ Proposed fix
```diff
-    let dummy_name: ServerName<'static> = ServerName::try_from("example.invalid").unwrap();
+    let dummy_name: ServerName<'static> = ServerName::try_from("example.invalid")
+        .expect("static hostname 'example.invalid' is a valid DNS name");
```
As per coding guidelines: "Avoid `unwrap()` and `expect()` in library code; use proper error types".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@masternode-seeds-fetcher/src/probe.rs` around lines 177 - 190, ServerName::try_from("example.invalid") currently uses unwrap() — replace it with proper error handling: attempt ServerName::try_from and on Err return the same PlatformStatus used for connector.connect failures (Reachability::Ok, PlatformLiveness::None, SslStatus::NoHandshake) or log the error for diagnostics; keep the variable name dummy_name and then call connector.connect(dummy_name, tcp).await as before so the function never panics if ServerName creation fails.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: a3453a36-7ddf-433b-8b91-d81c66e15608
📒 Files selected for processing (9)
- dash-network-seeds/Cargo.toml
- dash-network-seeds/src/lib.rs
- dash-network/src/lib.rs
- dash-spv/Cargo.toml
- dash-spv/src/network/constants.rs
- dash-spv/src/network/discovery.rs
- masternode-seeds-fetcher/Cargo.toml
- masternode-seeds-fetcher/src/main.rs
- masternode-seeds-fetcher/src/probe.rs
💤 Files with no reviewable changes (1)
- dash-spv/src/network/constants.rs
✅ Files skipped from review due to trivial changes (1)
- masternode-seeds-fetcher/Cargo.toml
🚧 Files skipped from review as they are similar to previous changes (4)
- dash-network-seeds/Cargo.toml
- dash-spv/Cargo.toml
- dash-network/src/lib.rs
- dash-network-seeds/src/lib.rs
```rust
fn classify_cert(chain: Option<&[CertificateDer<'_>]>) -> SslStatus {
    let Some(chain) = chain else {
        return SslStatus::NoHandshake;
    };
    let Some(leaf_der) = chain.first() else {
        return SslStatus::NoHandshake;
    };
    let Ok((_, leaf)) = x509_parser::certificate::X509Certificate::from_der(leaf_der.as_ref())
    else {
        return SslStatus::Untrusted;
    };
    if leaf.tbs_certificate.version != X509Version::V3 {
        // V1 / V2 certs in the wild are almost always dodgy.
        return SslStatus::Untrusted;
    }
    let not_after = leaf.tbs_certificate.validity.not_after;
    let now = chrono::Utc::now().timestamp();
    if not_after.timestamp() < now {
        return SslStatus::Expired;
    }
    // Self-signed: Issuer == Subject and only one cert in the chain.
    let self_signed_hint =
        leaf.tbs_certificate.issuer == leaf.tbs_certificate.subject && chain.len() == 1;
    if self_signed_hint {
        return SslStatus::SelfSigned;
    }
    // We don't do a full PKI walk (that's what rustls's default verifier is
    // for, but it'd reject based on hostname mismatch). Lacking full chain
    // verification, the best we can say is "looks valid enough".
    SslStatus::Valid
}
```
DER parse failure probably shouldn’t collapse to Untrusted.
If X509Certificate::from_der errors, we have a handshake-completed but malformed/unparseable leaf — Untrusted conflates that with "chain didn’t verify". SslStatus::Unknown (or a new Malformed variant) would be more faithful to the observation.
Also worth noting for future readers: because the custom verifier accepts every chain, a cert that would fail PKI validation (e.g., chain rooted at an unknown CA) will be reported as Valid as long as it’s V3, unexpired, and not single-cert self-signed. The doc comment acknowledges this, but the column name "ssl_valid" in the summary line may mislead downstream consumers of the seed file.
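A standalone sketch of how the classification ladder could separate the parse-failure case, using a hypothetical `Malformed` variant and a simplified `ParsedCert` stand-in for the real x509 leaf (illustrative assumptions, not the PR's code):

```rust
#[derive(Debug, PartialEq)]
enum SslStatus {
    NoHandshake,
    Malformed, // hypothetical: handshake completed but leaf didn't parse
    Untrusted,
    Expired,
    SelfSigned,
    Valid,
}

// Stand-in for the parsed x509 leaf; Err(()) models a DER parse failure.
struct ParsedCert {
    version_v3: bool,
    expired: bool,
    self_signed: bool,
}

fn classify(chain: Option<&[Result<ParsedCert, ()>]>) -> SslStatus {
    let Some(chain) = chain else {
        return SslStatus::NoHandshake;
    };
    let Some(leaf) = chain.first() else {
        return SslStatus::NoHandshake;
    };
    let Ok(leaf) = leaf else {
        // Report a parse failure as its own state instead of Untrusted.
        return SslStatus::Malformed;
    };
    if !leaf.version_v3 {
        return SslStatus::Untrusted;
    }
    if leaf.expired {
        return SslStatus::Expired;
    }
    if leaf.self_signed && chain.len() == 1 {
        return SslStatus::SelfSigned;
    }
    SslStatus::Valid
}

fn main() {
    assert_eq!(classify(None), SslStatus::NoHandshake);
    assert_eq!(classify(Some(&[Err(())])), SslStatus::Malformed);
    let leaf = ParsedCert { version_v3: true, expired: false, self_signed: true };
    assert_eq!(classify(Some(&[Ok(leaf)])), SslStatus::SelfSigned);
}
```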
ZocoLini
left a comment
I know it's late for you and you probably just want to get this merged, so I will just approve, but for the record: I took a quick look into one file and saw some code that appears to be exercised only by tests, like

```rust
/// Return only regular masternode seeds.
pub fn regular_seeds(network: Network) -> Vec<MasternodeSeed> {
    seeds(network).into_iter().filter(|s| s.mn_type == MasternodeType::Regular).collect()
}

/// Return seeds whose last Core probe reported the node reachable. Useful
/// as a "liveliest" bootstrap list.
pub fn reachable_seeds(network: Network) -> Vec<MasternodeSeed> {
    seeds(network).into_iter().filter(|s| s.core.reachable == Reachability::Ok).collect()
}
```

Feel free to merge if you are okay with this.
Actionable comments posted: 2
🧹 Nitpick comments (1)
dash-network-seeds/src/lib.rs (1)
174-181: Nit: prefer `fmt::Display` over ad-hoc `as_string`.
Unlike the other status enums, which expose `as_str` (`&'static str`), `SyncStatus::as_string` allocates a `String` for every call (including the static `"?"`/`"sync"` cases) and is invoked per-seed in `to_line`. Implementing `fmt::Display` (or returning `Cow<'static, str>`) would avoid the allocations for the common variants and align the API shape with the other enums.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-network-seeds/src/lib.rs` around lines 174 - 181, Replace the ad-hoc SyncStatus::as_string method with a fmt::Display impl for SyncStatus so common variants avoid heap allocation and the API matches other enums; implement fmt::Display for SyncStatus and in its fmt(&self, f: &mut fmt::Formatter) write static &str for SyncStatus::Unknown and SyncStatus::Synced and formatted values for SyncStatus::Behind(n) and SyncStatus::Ahead(n); update callers like to_line to use format!("{}", status) or write!(...) so no behavior changes are needed.
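A standalone sketch of the suggested `fmt::Display` impl; the variant names and token shapes mirror the PR's `core_sync` column format (`sync` / `-N` / `+N` / `?`), but this is illustrative, not the crate's actual enum:

```rust
use std::fmt;

enum SyncStatus {
    Unknown,
    Synced,
    Behind(u32),
    Ahead(u32),
}

impl fmt::Display for SyncStatus {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            // Static strings: written straight to the formatter, no allocation.
            SyncStatus::Unknown => f.write_str("?"),
            SyncStatus::Synced => f.write_str("sync"),
            // Numeric deltas still format inline, also without an owned String.
            SyncStatus::Behind(n) => write!(f, "-{n}"),
            SyncStatus::Ahead(n) => write!(f, "+{n}"),
        }
    }
}

fn main() {
    assert_eq!(SyncStatus::Unknown.to_string(), "?");
    assert_eq!(SyncStatus::Synced.to_string(), "sync");
    assert_eq!(SyncStatus::Behind(12).to_string(), "-12");
    assert_eq!(SyncStatus::Ahead(3).to_string(), "+3");
    println!("display ok");
}
```

A `Display` impl also gives callers `to_string()` and `format!` interop for free via the blanket `ToString` impl.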
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 80288d40-4ea0-4162-9783-d73adf597b15
📒 Files selected for processing (2)
- dash-network-seeds/Cargo.toml
- dash-network-seeds/src/lib.rs
✅ Files skipped from review due to trivial changes (1)
- dash-network-seeds/Cargo.toml
```rust
fn parse_full(toks: &[&str], default_port: u16) -> Option<MasternodeSeed> {
    let mn_type = MasternodeType::parse(toks[0])?;
    let address = parse_address(toks[1], default_port)?;
    let platform_http_port = match toks[2] {
        "-" => None,
        n => Some(n.parse::<u16>().ok()?),
    };
    let core = CoreStatus {
        reachable: Reachability::parse(toks[3])?,
        synced: SyncStatus::parse(toks[4])?,
    };
    let platform = match (mn_type, toks[5], toks[6], toks[7]) {
        (MasternodeType::Regular, "-", "-", "-") => None,
        (MasternodeType::Evo, r, l, s) => Some(PlatformStatus {
            reachable: Reachability::parse(r)?,
            live: PlatformLiveness::parse(l)?,
            ssl: SslStatus::parse(s)?,
        }),
        _ => return None,
    };
    Some(MasternodeSeed {
        address,
        mn_type,
        platform_http_port,
        core,
        platform,
    })
}
```
Regular entries with a non-`-` platform HTTP port are silently coerced.
`parse_full` accepts any `u16` for `toks[2]` regardless of `mn_type`, so a `regular … 443 …` row will construct a Regular seed carrying `platform_http_port: Some(443)` — which then round-trips differently (`to_line` prints `-` for Regular platform but keeps the port field). Consider rejecting non-`-` platform port tokens for `MasternodeType::Regular`, or explicitly normalizing `platform_http_port` to `None` for Regular.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@dash-network-seeds/src/lib.rs` around lines 468 - 495, parse_full currently
accepts any numeric toks[2] for MasternodeType::Regular, silently coercing
platform_http_port; change parse_full so that if mn_type ==
MasternodeType::Regular and toks[2] != "-" you return None (reject the row) to
avoid constructing a Regular seed with a platform_http_port, otherwise continue
parsing as before; reference parse_full, MasternodeType::Regular,
platform_http_port and ensure this validation lives before parsing toks[2] into
Some(u16) so it either rejects or (if you prefer the other approach) explicitly
sets platform_http_port = None for Regular entries.
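The rejection approach can be sketched standalone; `MnType` and `parse_platform_port` below are simplified illustrative stand-ins for the crate's types (the outer `Option` models parse success, the inner one port presence):

```rust
enum MnType {
    Regular,
    Evo,
}

// Outer None = reject the row; Some(None) = row valid, no platform port.
fn parse_platform_port(mn_type: &MnType, tok: &str) -> Option<Option<u16>> {
    match (mn_type, tok) {
        (_, "-") => Some(None),
        // Regular rows must not carry a platform port: reject instead of coercing.
        (MnType::Regular, _) => None,
        (MnType::Evo, n) => Some(Some(n.parse::<u16>().ok()?)),
    }
}

fn main() {
    assert_eq!(parse_platform_port(&MnType::Evo, "443"), Some(Some(443)));
    assert_eq!(parse_platform_port(&MnType::Regular, "-"), Some(None));
    assert_eq!(parse_platform_port(&MnType::Regular, "443"), None); // row rejected
}
```

Rejecting up front keeps parse/serialize round-trips lossless, since `to_line` would otherwise drop the coerced port on re-emit.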
```rust
let platform = match (mn_type, toks[5], toks[6], toks[7]) {
    (MasternodeType::Regular, "-", "-", "-") => None,
    (MasternodeType::Evo, r, l, s) => Some(PlatformStatus {
        reachable: Reachability::parse(r)?,
        live: PlatformLiveness::parse(l)?,
        ssl: SslStatus::parse(s)?,
    }),
    _ => return None,
};
```
Evo entries with all-dash platform columns are silently dropped.
The (MasternodeType::Evo, r, l, s) arm matches even when r/l/s are all "-", but Reachability::parse("-") / PlatformLiveness::parse("-") / SslStatus::parse("-") all return None, so the whole line is rejected via ?. If the fetcher ever emits an Evo row with unknown/unprobed platform columns using -, those seeds will disappear instead of parsing with default/unknown platform status. Consider either accepting "-" as equivalent to "?" for Evo platform tokens, or documenting that Evo rows must use ? placeholders.
Possible adjustment
```diff
-    (MasternodeType::Evo, r, l, s) => Some(PlatformStatus {
-        reachable: Reachability::parse(r)?,
-        live: PlatformLiveness::parse(l)?,
-        ssl: SslStatus::parse(s)?,
-    }),
+    (MasternodeType::Evo, "-", "-", "-") => Some(PlatformStatus::default()),
+    (MasternodeType::Evo, r, l, s) => Some(PlatformStatus {
+        reachable: Reachability::parse(r)?,
+        live: PlatformLiveness::parse(l)?,
+        ssl: SslStatus::parse(s)?,
+    }),
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@dash-network-seeds/src/lib.rs` around lines 479 - 487, The Evo branch
currently calls Reachability::parse/PlatformLiveness::parse/SslStatus::parse on
tokens r/l/s which causes early returns when those functions return None for "-"
values; update the match arm handling for (MasternodeType::Evo, r, l, s) in the
parsing function so that if r/l/s are all "-" you treat the platform as None (or
normalize each "-" to "?" before parsing) instead of blindly calling the
parsers; reference MasternodeType::Evo, PlatformStatus, Reachability::parse,
PlatformLiveness::parse, SslStatus::parse and toks[5..7] to locate and change
the logic accordingly.
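The `"-"` → `"?"` normalization alternative can be sketched standalone with simplified stand-ins for the crate's enums (illustrative only):

```rust
#[derive(Debug, PartialEq)]
enum Reachability {
    Ok,
    Unknown,
}

// Stand-in for the strict parser: "-" is not a recognized token.
fn parse_reach(tok: &str) -> Option<Reachability> {
    match tok {
        "ok" => Some(Reachability::Ok),
        "?" => Some(Reachability::Unknown),
        _ => None, // "-" falls through here and rejects the whole line
    }
}

// Evo-specific wrapper: treat a bare "-" as "not probed yet" ("?")
// instead of silently dropping the seed.
fn parse_evo_reach(tok: &str) -> Option<Reachability> {
    let tok = if tok == "-" { "?" } else { tok };
    parse_reach(tok)
}

fn main() {
    assert_eq!(parse_reach("-"), None); // old behavior: line rejected
    assert_eq!(parse_evo_reach("-"), Some(Reachability::Unknown)); // normalized
    assert_eq!(parse_evo_reach("ok"), Some(Reachability::Ok));
}
```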
Pulls in upstream changes through 0093278: - PR #678 (hardcoded masternode seed files + weekly auto-refresh) — already present on pw2 via direct branch merge (84adf54); the upstream versions are taken via `theirs` to stay canonical (edition 2024, `--max-peers 16` flag + `timeout-minutes: 60` in the workflow, "with probes" refresh step name, richer lib.rs doc comments, probe-enriched seed-file format). - PR #679 (refactor(dashcore): extract Network into standalone dash-network crate) — dashcore re-exports Network at its original path, so consumers are unaffected at the API level. Internal reroutes applied where needed. Conflict resolution: - Add/add (took theirs = v0.42-dev canonical): - .github/workflows/update-masternode-seeds.yml - dash-network-seeds/{Cargo.toml,src/lib.rs,seeds/mainnet.txt,seeds/testnet.txt} - masternode-seeds-fetcher/{Cargo.toml,src/main.rs} - Cargo.toml (root): unioned workspace members — kept pw2's dash-network-seeds + masternode-seeds-fetcher and added #679's new dash-network crate. - dash-spv/Cargo.toml: dropped pw2's now-defunct dash-network-seeds `features = ["dashcore"]` (the upstream crate has no such feature anymore; it depends on dash-network directly) and added an explicit dash-network dep for direct Network::Mainnet/Testnet references. - dash-spv/src/network/constants.rs: kept pw2's HEAD block in full (MAINNET_DNS_SEEDS, TESTNET_DNS_SEEDS, TESTNET_FIXED_PEERS, MAINNET_P2P_PORT, TESTNET_P2P_PORT) and rewrote the two seed-peer helper functions to call `dash_network_seeds::addresses( dash_network::Network::Mainnet|Testnet)` — `Network` moved out of dash_network_seeds. 
- dash-spv/src/network/discovery.rs: kept pw2's full behavior — the match-on-Network arm that picks DNS-seed/port/embedded-list tuples per network and the testnet TESTNET_FIXED_PEERS fallback loop — rather than v0.42-dev's simpler `network.dns_seeds()` / generic path, because pw2's code additionally defends against DNS failure on testnet with a coded-in peer list (PR #658). Fallout from #679: - key-wallet/src/account/account_type.rs: removed two now-dead `_ => Err(InvalidNetwork)` arms in IdentityAuthenticationEcdsa / IdentityAuthenticationBls derivation-path construction. Network is no longer `#[non_exhaustive]` once re-exported from the standalone dash-network crate, and all four variants were already covered, so the wildcard arm became `unreachable_patterns` under -D warnings. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Summary
Supersedes #658. Ships three related pieces:
- A `dash-network-seeds` crate that hardcodes masternode IP lists for mainnet and testnet, differentiated by regular vs evo (HPMN) type and annotated with per-node probe results (core/platform reachability, sync status, platform SSL health). Zero dependency on `dashcore` or `dash-spv` by default — anyone can add `dash-network-seeds` to `[dependencies]` and get a typed seed list.
- A `masternode-seeds-fetcher` CLI tool (`publish = false`) that regenerates those seed files over raw P2P (`getmnlistd` → `mnlistdiff`) — no RPC, no credentials.
- `dash-spv`'s peer discovery pulls from `dash-network-seeds` as the primary source, with DNS as a best-effort backup (failures logged, never fatal).

Status columns
Each seed line carries:
- `core_reach`, `plat_reach`: `ok` / `timeout` / `refused` / `error` / `?`
- `core_sync`: `sync` (within ±10 blocks of reference tip) / `-N` (behind) / `+N` (ahead) / `?`
- `plat_live`: `ok` (HTTP responded) / `none` / `?` / `-` (regular)
- `plat_ssl`: `valid` / `expired` / `self-signed` / `untrusted` / `no-handshake` / `?` / `-` (regular)

Example (mainnet):
dash-network-seeds API
Optional `dashcore` feature adds `TryFrom<dashcore::Network>` conversions.
Current seed counts (refreshed live from P2P network + probes)
The remaining 900-ish entries in mainnet's 2934 total masternodes were filtered out (not `is_valid` on-chain, or sentinel `0.0.0.0`/port-0 addresses).
How the fetcher works
- `sendheaders` → wait for the peer's next new-block announcement (`headers` or `inv`) to obtain a fresh tip hash. Typical wait 30–90 s, hard cap 6 min.
- `getmnlistd(base = 0, block = tip)` → full `mnlistdiff`.
- Keep entries with `is_valid = true`; classify as Regular/Evo from `EntryMasternodeType`.
- Probe nodes concurrently (`--probe-concurrency`, default 64; CI uses 100): record each peer's `start_height`; classify sync against the reference tip.
Scheduled workflow
.github/workflows/update-masternode-seeds.ymlruns Mondays 06:00 UTC (+ manual dispatch). No secrets required. Builds the fetcher, refreshes both networks with probes, runscargo test -p dash-network-seeds --libto verify the output still parses, and opens a refresh PR viapeter-evans/create-pull-request@v7when files change. Timeout bumped to 60 min to accommodate the probe phase.Test plan
- `cargo build --workspace`
- `cargo clippy -p dash-network-seeds -p dash-spv -p masternode-seeds-fetcher --all-targets --all-features -- -D warnings`
- `cargo fmt --check`
- `cargo test -p dash-network-seeds --all-features` — 11 unit + 1 doctest, all passing. Covers: round-trip of the rich format, legacy bare / typed formats, status parsing (including `-N`/`+N` sync deltas), mainnet/testnet seed invariants.
- `cargo test -p dash-spv --lib` — 377 passed, 0 failed. Seed-driven tests (`testnet_seed_peers_contains_hp_mn_ips`, `mainnet_seed_peers_non_empty`) still hold with the new format.
- `python3 .github/scripts/ci_config.py verify-groups` passes.
- `yaml.safe_load` on the workflow.
- `typos` pre-commit hook clean.

Closes #658
🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Chores