feat: log proof log errors to db #64
@coderabbitai review
✅ Actions performed: Review triggered.
Actionable comments posted: 5
🧹 Outside diff range and nitpick comments (14)
src/database/settings.rs (2)
**48-58: Consider adding version validation**

The implementation looks good and follows the established patterns. However, consider adding validation for the `new_version` parameter to ensure it's within acceptable bounds and greater than the current version.

Consider enhancing the implementation:
```diff
 pub fn update_database_version(&self, new_version: u16) -> Result<()> {
-    self.execute(
+    // Verify the update was successful; `execute` returns the number of
+    // rows changed.
+    let updated = self.execute(
         "UPDATE settings SET database_version = ? WHERE id = 1",
         params![new_version],
     )?;
+
+    if updated == 0 {
+        return Err(rusqlite::Error::QueryReturnedNoRows);
+    }
     Ok(())
 }
```
**47-47: Enhance documentation for version management**

The current documentation is minimal. Consider adding details about version constraints, when this method should be called, and its relationship with database migrations.

Example documentation:
```diff
-/// Updates the database version in the settings table.
+/// Updates the database version in the settings table.
+///
+/// # Arguments
+/// * `new_version` - The new database version to set. Should be greater than the current version
+///   and correspond to a valid migration version.
+///
+/// # Returns
+/// * `Result<()>` - Ok if the version was updated successfully, Error otherwise
+///
+/// # Usage
+/// This method should be called after successful database migrations to record the new schema version.
```

src/model/proof_log_item.rs (2)
**1-35: LGTM! Consider adding documentation.**

The `RequestType` enum is well-structured with explicit discriminants and follows Rust best practices. Consider adding documentation comments to:

- Describe the purpose of the enum
- Explain when each variant is used
- Document any invariants or assumptions

Example documentation:
```rust
/// Represents different types of requests that can be logged in the proof system.
/// Each variant corresponds to a specific operation with a unique identifier.
#[derive(Debug, Clone, Copy)]
pub enum RequestType {
    /// Represents a request to broadcast a state transition
    BroadcastStateTransition = 1,
    // ... (document other variants)
}
```
**37-85: Consider using a custom error type for better error handling.**

The conversion implementations are correct, but using the unit type `()` as the error type limits error reporting. Consider creating a custom error type to provide more context when conversion fails.

Example implementation:

```rust
#[derive(Debug, thiserror::Error)]
pub enum RequestTypeError {
    #[error("Invalid request type value: {0}")]
    InvalidValue(u8),
}

impl TryFrom<u8> for RequestType {
    type Error = RequestTypeError;

    fn try_from(value: u8) -> Result<Self, Self::Error> {
        match value {
            1 => Ok(RequestType::BroadcastStateTransition),
            // ... other matches ...
            _ => Err(RequestTypeError::InvalidValue(value)),
        }
    }
}
```

src/database/proof_log.rs (2)
**22-44: Review index creation for potential redundancy**

Multiple indexes are being created on the `proof_log` table, some of which may overlap in functionality. Over-indexing can negatively impact write performance and increase storage requirements. Consider reviewing the indexing strategy to ensure that each index is necessary and that there is no redundancy.
**75-76: Validate the `range` parameter to prevent potential issues**

Currently, there is no validation to ensure that `range.end` is greater than or equal to `range.start`. If `range.end` is less than `range.start`, it could result in unexpected behavior or panics. Consider adding a check to validate the `range` before using it.

Apply the following validation:

```diff
 pub fn get_proof_log_items(
     &self,
     only_get_errored: bool,
     range: Range<u64>,
 ) -> rusqlite::Result<Vec<ProofLogItem>> {
+    if range.end < range.start {
+        return Err(rusqlite::Error::InvalidParameterName(
+            "Invalid range: range.end must be greater than or equal to range.start".to_string(),
+        ));
+    }
     let conn = self.conn.lock().unwrap();
```

src/backend_task/contested_names/query_dpns_contested_resources.rs (5)
**42-74: Refactor error handling in `map_err` for improved readability**

The error handling logic within the `map_err` closure is deeply nested and could be refactored into a separate function to enhance readability and maintainability. Extracting this logic will make the code cleaner and easier to understand.

Consider applying this refactor:
```diff
-.map_err(|e| {
-    tracing::error!("error fetching contested resources: {}", e);
-    if let dash_sdk::Error::Proof(dash_sdk::ProofVerifierError::GroveDBError {
-        proof_bytes,
-        height,
-        time_ms,
-        error,
-    }) = &e
-    {
-        // Encode the query using bincode
-        let encoded_query =
-            match bincode::encode_to_vec(&query, bincode::config::standard())
-                .map_err(|encode_err| {
-                    tracing::error!("error encoding query: {}", encode_err);
-                    format!("error encoding query: {}", encode_err)
-                }) {
-                Ok(encoded_query) => encoded_query,
-                Err(e) => return e,
-            };
-
-        if let Err(e) = self
-            .db
-            .insert_proof_log_item(ProofLogItem {
-                request_type: RequestType::GetContestedResources,
-                request_bytes: encoded_query,
-                height: *height,
-                time_ms: *time_ms,
-                proof_bytes: proof_bytes.clone(),
-                error: Some(error.clone()),
-            })
-            .map_err(|e| e.to_string())
-        {
-            return e;
-        }
-    }
-    format!("error fetching contested resources: {}", e)
-})?;
+.map_err(|e| {
+    tracing::error!("error fetching contested resources: {}", e);
+    if let dash_sdk::Error::Proof(dash_sdk::ProofVerifierError::GroveDBError {
+        proof_bytes,
+        height,
+        time_ms,
+        error,
+    }) = &e
+    {
+        if let Err(e) = self.handle_proof_error(&query, proof_bytes, *height, *time_ms, error) {
+            return e;
+        }
+    }
+    format!("error fetching contested resources: {}", e)
+})?;

+// Add a new method to handle proof errors
+fn handle_proof_error(
+    &self,
+    query: &VotePollsByDocumentTypeQuery,
+    proof_bytes: &[u8],
+    height: u64,
+    time_ms: u64,
+    error: &str,
+) -> Result<(), String> {
+    // Encode the query using bincode
+    let encoded_query = bincode::encode_to_vec(query, bincode::config::standard())
+        .map_err(|encode_err| {
+            tracing::error!("error encoding query: {}", encode_err);
+            format!("error encoding query: {}", encode_err)
+        })?;
+
+    self.db
+        .insert_proof_log_item(ProofLogItem {
+            request_type: RequestType::GetContestedResources,
+            request_bytes: encoded_query,
+            height,
+            time_ms,
+            proof_bytes: proof_bytes.to_vec(),
+            error: Some(error.to_string()),
+        })
+        .map_err(|e| e.to_string())
+}
```
**60-74: Handle potential database insertion errors more gracefully**

Currently, if inserting the `ProofLogItem` into the database fails, the error is returned immediately. Consider handling this error more gracefully, possibly by logging the error and continuing execution, to prevent the entire operation from failing due to logging issues.
Line range hint **105-105: Avoid using `unwrap()` when acquiring semaphore permits**

Using `unwrap()` on `semaphore.acquire_owned().await` can cause a panic if the semaphore is closed. It's safer to handle the `Result` returned by `acquire_owned()` to gracefully manage potential errors.

Apply this change to handle errors when acquiring permits:

```diff
-let _permit: OwnedSemaphorePermit = semaphore.acquire_owned().await.unwrap();
+let permit = semaphore.acquire_owned().await;
+let _permit: OwnedSemaphorePermit = match permit {
+    Ok(permit) => permit,
+    Err(e) => {
+        tracing::error!("Failed to acquire semaphore permit: {:?}", e);
+        // Handle the error appropriately, possibly by returning or continuing
+        return;
+    }
+};
```

Also applies to: 135-135
Line range hint **77-83: Handle potential errors when sending `TaskResult::Refresh`**

When sending `TaskResult::Refresh`, the code uses `expect("expected to send refresh")`, which will panic if the send fails. Consider handling the error to prevent a potential panic if the receiver has been dropped.

Modify the code to handle the error:

```diff
-sender
-    .send(TaskResult::Refresh)
-    .await
-    .expect("expected to send refresh");
+if let Err(e) = sender.send(TaskResult::Refresh).await {
+    tracing::error!("Failed to send TaskResult::Refresh: {}", e);
+    // Handle the error appropriately
+}
```
**`AppContext` is not safely clonable and should not be cloned**

Looking at the `AppContext` struct definition in `src/context.rs`, it:

- Is not derived with `Clone`
- Contains non-cloneable types like `Mutex`, `RwLock`, and channels
- Has thread-safe shared state with `Arc` and synchronization primitives
- Manages critical resources like database connections and SDK instances

The comment "Assuming self is cloneable" is incorrect - `AppContext` should be accessed through `Arc` for shared ownership instead of being cloned.
AppContextshould be accessed throughArcfor shared ownership instead of being cloned.🔗 Analysis chain
Line range hint **99-99: Ensure that `AppContext` is safely clonable**

The comment `// Assuming self is cloneable` indicates uncertainty about cloning `self`. Verify that `AppContext` implements the `Clone` trait and that all its fields are safely clonable without unintended side effects.

Run the following script to confirm that `AppContext` implements `Clone`:

Also applies to: 119-119
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Description: Verify that `AppContext` implements the `Clone` trait.

# Search for the `Clone` implementation or derivation for `AppContext`
rg --type rust -A3 $'^struct\s+AppContext' | grep 'derive.*Clone'
```

Length of output: 66
Script:

```bash
#!/bin/bash
# Let's first find the AppContext struct definition without filtering for Clone
rg --type rust -A 5 "^struct\s+AppContext"

# Also search for any manual implementation of Clone for AppContext
rg --type rust -A 5 "impl\s+Clone\s+for\s+AppContext"
```

Length of output: 100
Script:

```bash
#!/bin/bash
# Let's try a broader search to find any file containing AppContext
rg --type rust -l "AppContext"

# Also search for the file containing the review comment to understand the context
fd "query_dpns_contested_resources.rs"
```

Length of output: 2092
Script:

```bash
#!/bin/bash
# Let's check the context.rs file which likely contains the AppContext definition
cat src/context.rs
```

Length of output: 15109
src/database/initialization.rs (3)
Line range hint **82-95: Handle database version retrieval errors more precisely**

In the `is_outdated` method, using `unwrap_or(0)` when querying the `database_version` suppresses all errors and defaults to version `0`. This can mask underlying issues such as database corruption, missing columns, or other unexpected errors.

Consider specifically handling the `QueryReturnedNoRows` error while propagating other errors. This ensures that only the absence of a version defaults to `0`, and other issues are appropriately addressed.

Apply this diff to improve error handling:
```diff
 let version: u16 = match conn.query_row(
     "SELECT database_version FROM settings WHERE id = 1",
     [],
     |row| row.get(0),
 ) {
     Ok(v) => v,
-    Err(_) => 0, // Default to version 0 if there's no version set
+    Err(rusqlite::Error::QueryReturnedNoRows) => 0, // No version set, default to 0
+    Err(e) => return Err(e), // Propagate other errors
 };

 if version < CURRENT_DB_VERSION {
     Ok(Some(version))
 } else {
     Ok(None)
 }
```
**125-125: Update the documentation comment to reflect the method's functionality**

The doc comment for `recreate_db` incorrectly states that it backs up the existing database. The actual backup occurs in the `backup_db` method. The `recreate_db` method focuses on removing the existing database file and creating a new one.

Update the doc comment to accurately describe the method's purpose.

Apply this diff to correct the documentation:

```diff
-/// Backs up the existing database with a unique timestamped filename, recreates `data.db`, and refreshes the connection.
+/// Recreates `data.db` by removing the existing database file, initializing a new database with default settings, and refreshing the connection.
 fn recreate_db(&self, db_file_path: &Path) -> rusqlite::Result<()> {
```
**341-342: Avoid duplication by refactoring table initialization**

The method `initialize_proof_log_table` is called in both `create_tables` and `apply_version_changes`. This duplication can lead to maintenance challenges and potential inconsistencies if changes are made in one place and not the other.

Consider refactoring to centralize the initialization of tables. One approach is to have `create_tables` call `apply_version_changes` for the current version, ensuring all version-specific tables are set up in one place.

Example:

```rust
fn create_tables(&self) -> rusqlite::Result<()> {
    // Existing table creation logic...

    // Instead of directly calling initialize_proof_log_table(),
    // call apply_version_changes with the current database version
    self.apply_version_changes(CURRENT_DB_VERSION)?;
    Ok(())
}
```

This ensures that table initialization remains consistent across both new setups and migrations.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (8)
- `Cargo.toml` (1 hunks)
- `src/backend_task/contested_names/query_dpns_contested_resources.rs` (2 hunks)
- `src/database/initialization.rs` (6 hunks)
- `src/database/mod.rs` (1 hunks)
- `src/database/proof_log.rs` (1 hunks)
- `src/database/settings.rs` (1 hunks)
- `src/model/mod.rs` (1 hunks)
- `src/model/proof_log_item.rs` (1 hunks)
✅ Files skipped from review due to trivial changes (2)
- src/database/mod.rs
- src/model/mod.rs
🔇 Additional comments (1)
src/model/proof_log_item.rs (1)
87-94: Verify database integration for ProofLogItem.
Since this struct is meant to be stored in the database, we should verify:
- Existence of database migrations
- Implementation of database access methods
✅ Verification successful
Database integration for ProofLogItem is properly implemented
The database integration is complete and well-structured with:
- Table creation with appropriate columns and indexes
- Comprehensive CRUD operations:
  - `initialize_proof_log_table()` with proper schema and indexes
  - `insert_proof_log_item()` for storing new entries
  - `get_proof_log_items()` with filtering and pagination support
- Proper type conversions and error handling
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Check for database migration files
echo "Checking for migration files..."
fd -e sql -e rs "proof.*log"

# Check for database access implementations
echo "Checking for database access implementations..."
rg -l "ProofLogItem.*(&*db|Database|Connection)"
```
Length of output: 313
Script:
```bash
#!/bin/bash
# Let's check the content of the database implementation file
echo "Checking database implementation..."
cat src/database/proof_log.rs

# Let's also search for any SQL queries or database operations related to ProofLogItem
echo -e "\nSearching for SQL queries..."
rg "INSERT INTO|CREATE TABLE.*proof" -A 5

# Let's check if there are any database tests
echo -e "\nChecking for database tests..."
fd -e rs "test.*proof.*log"
```
Length of output: 10900
```diff
 base64 = "0.22.1"
 copypasta = "0.10.1"
-dash-sdk = { git = "https://github.com/dashpay/platform", rev = "994f7627080e4ab490f1dbf0f09c0fb37d0fbbb0" }
+dash-sdk = { git = "https://github.com/dashpay/platform", branch = "test/testWithoutSpan2" }
```
💡 Codebase verification
Using test/testWithoutSpan2 branch is risky for production dependency
Based on the verification results:
- The branch is actively being developed with 20 commits ahead of v1.6-dev
- It has 30 open pull requests targeting it, indicating it's an unstable testing branch
- The branch is unprotected, allowing direct pushes without review
- Recent commits show it's frequently merging changes from v1.6-dev
Recommendations:

- Pin to a specific commit from the v1.6-dev branch (e.g., `a78c7b06ef6522478c22c3116632ceef483eee85`)
- Or wait until the changes in test/testWithoutSpan2 are merged into v1.6-dev and use that branch instead
🔗 Analysis chain
Verify the stability and production readiness of the test branch.
Switching from a fixed commit hash (994f7627080e4ab490f1dbf0f09c0fb37d0fbbb0) to a test branch (test/testWithoutSpan2) raises several concerns:
- The dependency is now less stable as it tracks a branch instead of a fixed commit
- The branch name suggests it's for testing purposes, which may not be suitable for production use
- The code could change unexpectedly when the branch is updated
Consider either:
- Pinning to a specific commit hash from this branch
- Using a more stable, production-ready branch
Let's verify the branch status and recent changes:
Let me gather more information about the relationship between this test branch and the main development branches.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check the branch status and recent commits
# Fetch the latest commits on the branch
gh api \
-H "Accept: application/vnd.github+json" \
repos/dashpay/platform/commits?sha=test/testWithoutSpan2 \
--jq '.[0:5] | .[] | {sha: .sha, date: .commit.author.date, message: .commit.message}'
# Check if this is an active development branch
gh api \
-H "Accept: application/vnd.github+json" \
repos/dashpay/platform/branches/test/testWithoutSpan2 \
--jq '{protected: .protected, behind_by: .behind_by}'
Length of output: 1104
Script:
#!/bin/bash
# Check the base branch and compare with main development branches
gh api \
-H "Accept: application/vnd.github+json" \
repos/dashpay/platform/compare/v1.6-dev...test/testWithoutSpan2 \
--jq '{ahead_by: .ahead_by, behind_by: .behind_by, status: .status}'
# Get latest commits from v1.6-dev branch for comparison
gh api \
-H "Accept: application/vnd.github+json" \
repos/dashpay/platform/commits?sha=v1.6-dev \
--jq '.[0:3] | .[] | {sha: .sha, date: .commit.author.date, message: .commit.message}'
# Check if there are any open PRs targeting this test branch
gh api \
-H "Accept: application/vnd.github+json" \
repos/dashpay/platform/pulls?head=test/testWithoutSpan2 \
--jq 'length'
Length of output: 992
```rust
pub struct ProofLogItem {
    pub request_type: RequestType,
    pub request_bytes: Vec<u8>,
    pub height: u64,
    pub time_ms: u64,
    pub proof_bytes: Vec<u8>,
    pub error: Option<String>,
}
```
🛠️ Refactor suggestion
Enhance the struct implementation with additional functionality.
The `ProofLogItem` struct looks good but could benefit from some improvements:
- Add common trait derives
- Consider adding validation and constructor methods
- Add documentation for the fields
Example implementation:
```rust
/// Represents a log entry for a proof-related operation
#[derive(Debug, Clone)]
pub struct ProofLogItem {
    /// Type of the request that generated this log
    pub request_type: RequestType,
    /// Serialized request data
    pub request_bytes: Vec<u8>,
    /// Block height at which the request was made
    pub height: u64,
    /// Timestamp in milliseconds
    pub time_ms: u64,
    /// Serialized proof data
    pub proof_bytes: Vec<u8>,
    /// Optional error message if the operation failed
    pub error: Option<String>,
}

impl ProofLogItem {
    /// Creates a new ProofLogItem with the given parameters
    pub fn new(
        request_type: RequestType,
        request_bytes: Vec<u8>,
        height: u64,
        time_ms: u64,
        proof_bytes: Vec<u8>,
        error: Option<String>,
    ) -> Self {
        Self {
            request_type,
            request_bytes,
            height,
            time_ms,
            proof_bytes,
            error,
        }
    }
}
```
```rust
let request_type = RequestType::try_from(request_type_int).map_err(|_| {
    rusqlite::Error::FromSqlConversionFailure(
        request_type_int as usize,
        rusqlite::types::Type::Integer,
        Box::new(std::fmt::Error),
    )
})?;
```
🛠️ Refactor suggestion
**Use a more appropriate error type in `FromSqlConversionFailure`**

The error handling when converting `request_type_int` to `RequestType` uses `std::fmt::Error`, which may not accurately represent the error context. Using a more descriptive error type will improve error reporting and debugging.
Consider applying the following change:
```diff
 let request_type = RequestType::try_from(request_type_int).map_err(|_| {
     rusqlite::Error::FromSqlConversionFailure(
         request_type_int as usize,
         rusqlite::types::Type::Integer,
-        Box::new(std::fmt::Error),
+        Box::new(std::io::Error::new(
+            std::io::ErrorKind::InvalidData,
+            format!("Invalid RequestType value: {}", request_type_int),
+        )),
     )
 })?;
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
let request_type = RequestType::try_from(request_type_int).map_err(|_| {
    rusqlite::Error::FromSqlConversionFailure(
        request_type_int as usize,
        rusqlite::types::Type::Integer,
        Box::new(std::io::Error::new(
            std::io::ErrorKind::InvalidData,
            format!("Invalid RequestType value: {}", request_type_int),
        )),
    )
})?;
```
```diff
 // Rename `data.db` to the unique backup file
-fs::rename(db_file_path, &backup_path)
+fs::copy(db_file_path, &backup_path)
```
🛠️ Refactor suggestion
**Use appropriate error types for filesystem operations**

In `backup_db` and `recreate_db`, wrapping `std::io::Error` into `rusqlite::Error` variants like `ToSqlConversionFailure` or `FromSqlConversionFailure` is misleading. These errors are related to filesystem operations, not SQL conversion, and misrepresenting them can make debugging more challenging.

Consider propagating the `std::io::Error` directly or mapping it to a more suitable error type. You might also define a custom error type to represent filesystem errors in your database operations.

Apply this diff to adjust error handling:
Apply this diff to adjust error handling:
```diff
 // In backup_db
 fs::copy(db_file_path, &backup_path)
-    .map_err(|e| rusqlite::Error::ToSqlConversionFailure(e.into()))?;
+    .map_err(|e| rusqlite::Error::Other(Box::new(e)))?;

 // In recreate_db
 fs::remove_file(db_file_path).map_err(|e| {
-    rusqlite::Error::FromSqlConversionFailure(
-        0,
-        rusqlite::types::Type::Text,
-        Box::new(e),
-    )
+    rusqlite::Error::Other(Box::new(e))
 })?;
```

Alternatively, if you decide to define a custom error type:
```rust
#[derive(Debug)]
enum DatabaseError {
    IoError(std::io::Error),
    SqliteError(rusqlite::Error),
    // Add other variants as needed
}

impl From<std::io::Error> for DatabaseError {
    fn from(error: std::io::Error) -> Self {
        DatabaseError::IoError(error)
    }
}

impl From<rusqlite::Error> for DatabaseError {
    fn from(error: rusqlite::Error) -> Self {
        DatabaseError::SqliteError(error)
    }
}
```

Then update your method signatures to return `Result<(), DatabaseError>` and adjust error propagation accordingly.
Also applies to: 129-135
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Summary by CodeRabbit
Release Notes
New Features
Bug Fixes
Documentation