Align news schema and migrations (4.1.1) with ExecPlan and tests #306
Conversation
Walkthrough

This pull request aligns the news persistence layer schema across SQLite and PostgreSQL to match the documented specification. It introduces database migrations with backfill logic for identifiers and timestamps, establishes permission and user-permission tables, updates Rust models and Diesel schema, adds schema validation tests, and extends user and developer documentation with migration procedures and protocol usage guidance.
Caution: Pre-merge checks failed. Please resolve all errors before merging; addressing warnings is optional.
❌ Failed checks (3 errors, 4 warnings)
✅ Passed checks (11 passed)
Reviewer's Guide

Aligns the news schema with docs/news-schema.md via additive dual-backend migrations, updates Diesel schema/models and developer docs, and adds regression tests plus BDD scenarios to validate both fresh installs and upgrades on SQLite and PostgreSQL without changing observable routing behaviour.

Entity relationship diagram for aligned news and permission schema:

```mermaid
erDiagram
    users {
        int id
        string username
    }
    news_bundles {
        int id
        int parent_bundle_id
        string name
        string guid
        datetime created_at
    }
    news_categories {
        int id
        int bundle_id
        string name
        string guid
        int add_sn
        int delete_sn
        datetime created_at
    }
    news_articles {
        int id
        int category_id
        int parent_article_id
        int prev_article_id
        int next_article_id
        int first_child_article_id
        string title
        string poster
        datetime posted_at
        int flags
        string data_flavor
        string data
    }
    permissions {
        int id
        int code
        string name
        string scope
    }
    user_permissions {
        int user_id
        int permission_id
    }
    news_bundles ||--o{ news_bundles : parent_bundle
    news_bundles ||--o{ news_categories : contains
    news_categories ||--o{ news_articles : categorizes
    news_articles ||--o{ news_articles : parent_article
    news_articles ||--o{ news_articles : prev_article
    news_articles ||--o{ news_articles : next_article
    news_articles ||--o{ news_articles : first_child_article
    users ||--o{ user_permissions : has
    permissions ||--o{ user_permissions : granted_to
```
Class diagram for updated Rust models and new permission types:

```mermaid
classDiagram
    class Bundle {
        +i32 id
        +Option~i32~ parent_bundle_id
        +String name
        +Option~String~ guid
        +Option~NaiveDateTime~ created_at
    }
    class NewBundle {
        +Option~i32~ parent_bundle_id
        +&str name
    }
    class Category {
        +i32 id
        +Option~i32~ bundle_id
        +String name
        +Option~String~ guid
        +Option~i32~ add_sn
        +Option~i32~ delete_sn
        +Option~NaiveDateTime~ created_at
    }
    class NewCategory {
        +Option~i32~ bundle_id
        +&str name
    }
    class Permission {
        +i32 id
        +i32 code
        +String name
        +String scope
    }
    class NewPermission {
        +i32 code
        +&str name
        +&str scope
    }
    class UserPermission {
        +i32 user_id
        +i32 permission_id
    }
    class NewUserPermission {
        +i32 user_id
        +i32 permission_id
    }
    class User {
        +i32 id
        +String username
    }
    Bundle "1" --> "0..*" Category : bundles
    Category "1" --> "0..*" Bundle : scoped_by_bundle
    Category "1" --> "0..*" User : used_by_tests
    User "1" --> "0..*" UserPermission : has_permissions
    Permission "1" --> "0..*" UserPermission : granted_to
    NewBundle --> Bundle : inserts_into
    NewCategory --> Category : inserts_into
    NewPermission --> Permission : inserts_into
    NewUserPermission --> UserPermission : inserts_into
```
Flow diagram for SQLite news schema alignment migration 00000000000006:

```mermaid
flowchart TD
    Start["Start migration<br/>00000000000006_align_news_schema"]
    DropIdx["Drop existing news indices<br/>(idx_articles_category, idx_categories_bundle,<br/>idx_bundles_name_parent, idx_bundles_parent)"]
    RenameTables["Rename tables to *_old<br/>news_articles_old, news_categories_old, news_bundles_old"]
    CreateBundles["Create new news_bundles<br/>with guid, created_at, UNIQUE(name,parent_bundle_id)"]
    CopyBundles["Copy bundles<br/>from news_bundles_old<br/>and backfill guid, created_at"]
    BundleIdx["Create indices<br/>idx_bundles_parent,<br/>idx_bundles_name_parent"]
    CreateCategories["Create new news_categories<br/>with guid, add_sn, delete_sn, created_at,<br/>UNIQUE(name,bundle_id)"]
    CopyCategories["Copy categories<br/>from news_categories_old<br/>and backfill:<br/>- guid (random)<br/>- add_sn = COUNT(articles_old per category)<br/>- delete_sn = 0<br/>- created_at = CURRENT_TIMESTAMP"]
    CategoryIdx["Create index<br/>idx_categories_bundle"]
    CreateArticles["Create new news_articles<br/>with threading link columns<br/>and NOT NULL category_id"]
    CopyArticles["Copy all rows<br/>from news_articles_old"]
    ArticleIdx["Create indices on<br/>category_id and threading links<br/>(parent, prev, next, first_child)"]
    DropOld["Drop old tables<br/>news_articles_old,<br/>news_categories_old,<br/>news_bundles_old"]
    CreatePerms["Create permissions table<br/>(id, code UNIQUE, name, scope CHECK)"]
    CreateUserPerms["Create user_permissions table<br/>(user_id, permission_id,<br/>FK to users and permissions,<br/>PRIMARY KEY(user_id, permission_id))"]
    UserPermIdx["Create indices<br/>idx_user_permissions_user,<br/>idx_user_permissions_perm"]
    End["Migration complete"]
    Start --> DropIdx --> RenameTables
    RenameTables --> CreateBundles --> CopyBundles --> BundleIdx
    BundleIdx --> CreateCategories --> CopyCategories --> CategoryIdx
    CategoryIdx --> CreateArticles --> CopyArticles --> ArticleIdx
    ArticleIdx --> DropOld --> CreatePerms --> CreateUserPerms --> UserPermIdx --> End
```
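The rebuild-and-copy sequence in the flow diagram is SQLite's standard workaround for constraints that `ALTER TABLE` cannot add in place. A minimal sketch of the pattern, runnable through Python's `sqlite3` module; the table and column names echo the diagram, but the DDL is illustrative rather than the migration's actual SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    -- Legacy table: no guid/created_at, no composite uniqueness.
    CREATE TABLE news_bundles (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        parent_bundle_id INTEGER REFERENCES news_bundles(id),
        name TEXT NOT NULL
    );
    INSERT INTO news_bundles (name) VALUES ('Bundle');

    -- Rebuild: rename old, create aligned table, copy forward, drop old.
    ALTER TABLE news_bundles RENAME TO news_bundles_old;
    CREATE TABLE news_bundles (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        parent_bundle_id INTEGER REFERENCES news_bundles(id),
        name TEXT NOT NULL,
        guid TEXT,
        created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
        UNIQUE(name, parent_bundle_id)
    );
    INSERT INTO news_bundles (id, parent_bundle_id, name, guid, created_at)
    SELECT id, parent_bundle_id, name,
           lower(hex(randomblob(16))), CURRENT_TIMESTAMP
    FROM news_bundles_old;
    DROP TABLE news_bundles_old;
    """
)
row = conn.execute("SELECT id, name, guid FROM news_bundles").fetchone()
print(row[0], row[1], len(row[2]))  # primary key and name survive; 32-char guid backfilled
```

Copying `id` explicitly is what preserves the stable primary keys that the developer guidance calls out; the backfill expressions run once, during the copy.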
@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix. Include the file and symbol names indicated in the issue at the head of your response. Ensure that this is validated against the current version of the codegraph. If further refinement to address this finding would be deleterious, please supply a clear explanatory one to two paragraph markdown message I can paste into the CodeScene web UI's diagnostic suppression function so this diagnostic can be silenced.

Code Duplication: src/db/schema_alignment_tests.rs

What led to degradation? The module contains 4 functions with similar structure: postgres_upgrade_backfills_legacy_news_rows, setup_postgres_legacy_schema, setup_sqlite_legacy_schema, sqlite_upgrade_backfills_legacy_news_rows.

Why does this problem occur? Duplicated code often leads to code that's harder to change, since the same logical change has to be made in multiple functions. More duplication gives lower code health.

How to fix it? A certain degree of duplicated code might be acceptable. The problems start when the same behavior is duplicated across the functions in the module, i.e. a violation of the Don't Repeat Yourself (DRY) principle. DRY violations lead to code that is changed together in predictable patterns, which is both expensive and risky. DRY violations can be identified using CodeScene's X-Ray analysis to detect clusters of change-coupled functions with high code similarity. Read More
@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix. Include the file and symbol names indicated in the issue at the head of your response.

src/db/schema_alignment_tests.rs, comment on lines +87 to +130:

```rust
async fn setup_sqlite_legacy_schema(conn: &mut DbConnection) -> TestResult<()> {
    run_statements(
        conn,
        &[
            "CREATE TABLE __diesel_schema_migrations (version VARCHAR(50) PRIMARY KEY NOT NULL, \
             run_on TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP)",
            "CREATE TABLE users (id INTEGER PRIMARY KEY AUTOINCREMENT, username TEXT NOT NULL \
             UNIQUE, password TEXT NOT NULL)",
            "CREATE TABLE news_categories (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT NOT \
             NULL UNIQUE, bundle_id INTEGER REFERENCES news_bundles(id) ON DELETE CASCADE)",
            "CREATE TABLE news_bundles (id INTEGER PRIMARY KEY AUTOINCREMENT, parent_bundle_id \
             INTEGER REFERENCES news_bundles(id) ON DELETE CASCADE, name TEXT NOT NULL, \
             UNIQUE(name, parent_bundle_id))",
            "CREATE INDEX idx_bundles_parent ON news_bundles(parent_bundle_id)",
            "CREATE INDEX idx_categories_bundle ON news_categories(bundle_id)",
            "CREATE TABLE news_articles (id INTEGER PRIMARY KEY AUTOINCREMENT, category_id \
             INTEGER NOT NULL REFERENCES news_categories(id) ON DELETE CASCADE, parent_article_id \
             INTEGER REFERENCES news_articles(id), prev_article_id INTEGER REFERENCES \
             news_articles(id), next_article_id INTEGER REFERENCES news_articles(id), \
             first_child_article_id INTEGER REFERENCES news_articles(id), title TEXT NOT NULL, \
             poster TEXT, posted_at DATETIME NOT NULL, flags INTEGER DEFAULT 0, data_flavor TEXT \
             DEFAULT 'text/plain', data TEXT, CHECK (category_id IS NOT NULL))",
            "CREATE INDEX idx_articles_category ON news_articles(category_id)",
            "CREATE TABLE files (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT NOT NULL UNIQUE, \
             object_key TEXT NOT NULL, size INTEGER NOT NULL DEFAULT 0)",
            "CREATE TABLE file_acl (file_id INTEGER NOT NULL REFERENCES files(id) ON DELETE \
             CASCADE, user_id INTEGER NOT NULL REFERENCES users(id) ON DELETE CASCADE, PRIMARY \
             KEY (file_id, user_id))",
            "CREATE INDEX idx_file_acl_user_file ON file_acl (user_id, file_id)",
            "CREATE INDEX idx_bundles_name_parent ON news_bundles(name, parent_bundle_id)",
            "INSERT INTO __diesel_schema_migrations (version) VALUES ('00000000000000'), \
             ('00000000000001'), ('00000000000002'), ('00000000000003'), ('00000000000004'), \
             ('00000000000005')",
            "INSERT INTO users (id, username, password) VALUES (1, 'alice', 'hash')",
            "INSERT INTO news_bundles (id, parent_bundle_id, name) VALUES (1, NULL, 'Bundle')",
            "INSERT INTO news_categories (id, name, bundle_id) VALUES (1, 'General', 1)",
            "INSERT INTO news_articles (id, category_id, parent_article_id, prev_article_id, \
             next_article_id, first_child_article_id, title, poster, posted_at, flags, \
             data_flavor, data) VALUES (1, 1, NULL, NULL, NULL, NULL, 'First', 'alice', \
             '2026-04-13 00:00:00', 0, 'text/plain', 'hello')",
        ],
    )
    .await
}
```

❌ New issue: Code Duplication
Add a comprehensive ExecPlan document detailing the roadmap step 4.1.1 to align the news persistence schema and migrations across SQLite and PostgreSQL backends. The document covers purpose, constraints, risks, progress checkpoints, decision logs, and a staged plan including migration strategy, schema updates, testing coverage, and documentation updates to ensure consistent schema alignment without user-visible behavior changes. Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
- Add aligned migrations for news bundles, categories, articles, permissions, and user permissions tables.
- Backfill historical news rows during upgrade to populate GUID and metadata columns immediately.
- Add explicit indices on all article threading link columns for improved query performance.
- Update docs with migration strategy, scope boundaries, and developer guidance.
- Preserve existing news behavior and coverage while realigning schema.
- Separate user-visible changes (users-guide) from internal/documentation changes (developers-guide, ADRs).

Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
…ackfill

- Introduce migration pair 00000000000006_align_news_schema for SQLite and PostgreSQL
- Add bundle/category metadata columns and unique constraints
- Rebuild SQLite tables for safe default and uniqueness enforcement
- Backfill legacy rows with GUIDs, timestamps, and serial counters
- Create new permissions and user_permissions tables (runtime wiring deferred)
- Update Diesel schema, models, and fixtures to match aligned schema
- Add comprehensive migration and routing tests for both backends

This migration aligns the news persistence schema with the design specification while preserving legacy data and ensuring immediate usability post-upgrade.

Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
…nment

Aligned with roadmap item 4.1.1, this update adds developer-facing documentation covering:

- Dual-backend news schema alignment strategies
- SQLite table rebuild criteria over incremental ALTER TABLE
- Preservation of stable primary keys during copy-forward migrations
- Handling of GUID and category serial-counter backfill as migration concerns
- Separation of schema work from runtime privilege loading and catalog seeding
- Validation of schema changes with backend-specific regression tests

This guidance helps maintain consistent and robust news schema migrations across SQLite and PostgreSQL backends without introducing user-visible behavior changes.

Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
- Introduced `run_statements` helper for executing SQL statement sequences
- Added `assert_upgrade_backfills` to validate backfill fields for both SQLite and Postgres
- Replaced duplicated inline SQL statement execution with `run_statements` calls in SQLite and Postgres legacy schema setup
- Updated tests to use `assert_upgrade_backfills` for consistent assertions

This improves test maintainability and code reuse for schema alignment tests.

Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
- Consolidate common legacy schema creation into create_legacy_common_schema, used by both SQLite and Postgres setups.
- Extract legacy seed data insertion into insert_legacy_seed_data function.
- Adjust setup_sqlite_legacy_schema and setup_postgres_legacy_schema to use new common functions for improved code reuse and clarity.
- Remove duplicate schema and seed data definitions, reducing redundancy.

Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
Force-pushed from def0f1f to 8bd9fa7 (Compare)
@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix. Include the file and symbol names indicated in the issue at the head of your response.

src/db/schema_alignment_tests.rs, comment on lines +127 to +152:

```rust
async fn setup_sqlite_legacy_schema(conn: &mut DbConnection) -> TestResult<()> {
    run_statements(
        conn,
        &[
            "CREATE TABLE users (id INTEGER PRIMARY KEY AUTOINCREMENT, username TEXT NOT NULL \
             UNIQUE, password TEXT NOT NULL)",
            "CREATE TABLE news_bundles (id INTEGER PRIMARY KEY AUTOINCREMENT, parent_bundle_id \
             INTEGER REFERENCES news_bundles(id) ON DELETE CASCADE, name TEXT NOT NULL, \
             UNIQUE(name, parent_bundle_id))",
            "CREATE TABLE news_categories (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT NOT \
             NULL UNIQUE, bundle_id INTEGER REFERENCES news_bundles(id) ON DELETE CASCADE)",
            "CREATE TABLE news_articles (id INTEGER PRIMARY KEY AUTOINCREMENT, category_id \
             INTEGER NOT NULL REFERENCES news_categories(id) ON DELETE CASCADE, parent_article_id \
             INTEGER REFERENCES news_articles(id), prev_article_id INTEGER REFERENCES \
             news_articles(id), next_article_id INTEGER REFERENCES news_articles(id), \
             first_child_article_id INTEGER REFERENCES news_articles(id), title TEXT NOT NULL, \
             poster TEXT, posted_at DATETIME NOT NULL, flags INTEGER DEFAULT 0, data_flavor TEXT \
             DEFAULT 'text/plain', data TEXT, CHECK (category_id IS NOT NULL))",
            "CREATE TABLE files (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT NOT NULL UNIQUE, \
             object_key TEXT NOT NULL, size INTEGER NOT NULL DEFAULT 0)",
        ],
    )
    .await?;
    create_legacy_common_schema(conn).await?;
    insert_legacy_seed_data(conn).await
}
```

❌ New issue: Code Duplication
Extract the common three-step setup skeleton used by the sqlite and postgres legacy schema helpers in schema_alignment_tests. This removes another layer of structural duplication while keeping the backend-specific DDL lists and test behaviour unchanged.
Hey - I've found 6 issues and left some high-level feedback:

- The legacy schema setup in `schema_alignment_tests.rs` hardcodes table definitions and seed data that mirror earlier migrations; consider refactoring this to reuse existing migration SQL or a shared helper so that future schema changes don't require updating multiple divergent copies.
- Dropping `Serialize`/`Deserialize` from `Category` and `Bundle` in `models.rs` changes their external surface; double-check that these types are not used for any JSON or cross-layer serialization and, if they are, either retain the derives or introduce dedicated DTO structs to avoid breaking those call sites.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The legacy schema setup in `schema_alignment_tests.rs` hardcodes table definitions and seed data that mirror earlier migrations; consider refactoring this to reuse existing migration SQL or a shared helper so that future schema changes don’t require updating multiple divergent copies.
- Dropping `Serialize`/`Deserialize` from `Category` and `Bundle` in `models.rs` changes their external surface; double-check that these types are not used for any JSON or cross-layer serialization and, if they are, either retain the derives or introduce dedicated DTO structs to avoid breaking those call sites.
## Individual Comments
### Comment 1
<location path="migrations/sqlite/00000000000006_align_news_schema/down.sql" line_range="35-39" />
<code_context>
+
+CREATE TABLE news_categories (
+ id INTEGER PRIMARY KEY AUTOINCREMENT,
+ name TEXT NOT NULL UNIQUE,
+ bundle_id INTEGER REFERENCES news_bundles(id) ON DELETE CASCADE
+);
+
+INSERT INTO news_categories (id, name, bundle_id)
+SELECT id, name, bundle_id
+FROM news_categories_new;
</code_context>
<issue_to_address>
**issue (bug_risk):** Down migration may fail if multiple categories share the same name across different bundles.
With the new `UNIQUE(name, bundle_id)` constraint, data can legally contain the same `name` in different bundles. The down migration recreates the old schema with `name` globally unique and bulk-inserts from `news_categories_new`, so any such duplicates will cause the insert (and thus the down migration) to fail. If reversibility matters, consider either relaxing the uniqueness in the legacy schema here or explicitly resolving conflicting rows before inserting.
</issue_to_address>
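The failure mode described in Comment 1 is easy to reproduce in isolation. A sketch using Python's `sqlite3` with simplified, hypothetical DDL: rows that are legal under the new scoped uniqueness violate the legacy globally unique `name` during the copy-back.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    -- Upgraded schema: name unique only per bundle.
    CREATE TABLE news_categories_new (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        bundle_id INTEGER,
        UNIQUE(name, bundle_id)
    );
    INSERT INTO news_categories_new VALUES (1, 'General', 1), (2, 'General', 2);

    -- Legacy schema recreated by the down migration: name globally unique.
    CREATE TABLE news_categories (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL UNIQUE,
        bundle_id INTEGER
    );
    """
)
failure = None
try:
    conn.execute(
        "INSERT INTO news_categories (id, name, bundle_id) "
        "SELECT id, name, bundle_id FROM news_categories_new"
    )
except sqlite3.IntegrityError as err:
    failure = str(err)
print(failure)  # UNIQUE constraint failed: news_categories.name
```

A precondition query such as `SELECT name FROM news_categories_new GROUP BY name HAVING COUNT(DISTINCT bundle_id) > 1` run before the insert would let the down migration fail fast with an explanatory message instead.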
### Comment 2
<location path="src/db/schema_alignment_tests.rs" line_range="9-18" />
<code_context>
+ }
+}
+
+#[cfg(feature = "postgres")]
+fn with_postgres_test_db<F, Fut>(test: F) -> TestResult<()>
+where
+ F: FnOnce(String) -> Fut + Send + 'static,
+ Fut: Future<Output = TestResult<()>> + Send + 'static,
+{
+ let Some(db) = embedded_postgres_db()? else {
+ return Ok(());
+ };
+
+ let runtime = tokio::runtime::Builder::new_current_thread()
+ .enable_all()
+ .build()?;
+ runtime.block_on(async move { test(db.url.to_string()).await })
+}
+
+#[cfg(feature = "postgres")]
+#[test]
+fn postgres_fresh_migration_creates_aligned_schema() -> TestResult<()> {
+ with_postgres_test_db(|url| async move {
+ let mut conn = DbConnection::establish(&url).await?;
</code_context>
<issue_to_address>
**suggestion (testing):** PostgreSQL fresh migration test mirrors SQLite but misses bundle schema checks and minimal permissions smoke test
To more fully validate the Postgres schema and its alignment with Diesel, consider also asserting the `news_bundles` columns (including `guid` and `created_at`) and adding a minimal insert/select round-trip for `permissions`/`user_permissions` (e.g., insert a permission and user_permission, then read them back). This would better catch schema or migration mismatches that index/table checks alone might miss.
</issue_to_address>
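The smoke test suggested in Comment 2 is a small round trip. A sketch against SQLite via Python's `sqlite3`; the DDL, including the `scope` CHECK values, is a simplified stand-in for the real migration output, not the project's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT NOT NULL UNIQUE);
    CREATE TABLE permissions (
        id INTEGER PRIMARY KEY,
        code INTEGER NOT NULL UNIQUE,
        name TEXT NOT NULL,
        scope TEXT NOT NULL CHECK (scope IN ('news', 'files'))  -- placeholder values
    );
    CREATE TABLE user_permissions (
        user_id INTEGER NOT NULL REFERENCES users(id),
        permission_id INTEGER NOT NULL REFERENCES permissions(id),
        PRIMARY KEY (user_id, permission_id)
    );
    INSERT INTO users VALUES (1, 'alice');
    INSERT INTO permissions VALUES (1, 100, 'News Admin', 'news');
    INSERT INTO user_permissions VALUES (1, 1);
    """
)
granted = conn.execute(
    "SELECT p.name FROM permissions p "
    "JOIN user_permissions up ON up.permission_id = p.id "
    "WHERE up.user_id = 1"
).fetchall()
print(granted)  # [('News Admin',)]
```

The insert/select pair exercises the composite primary key and both foreign keys, which is exactly the surface an index-and-table existence check would miss.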
### Comment 3
<location path="docs/execplans/4-1-1-align-news-schema-and-migrations.md" line_range="500-503" />
<code_context>
+1. Prepare the embedded PostgreSQL runtime:
+
+ ```sh
+ pg_embedded_setup_unpriv \
+ | tee /tmp/pg-setup-$PROJECT-$BRANCH.log
+ ```
</code_context>
<issue_to_address>
**issue (typo):** Command name for the PostgreSQL setup tool is inconsistent with earlier references.
Earlier in this doc the helper is named `pg-embed-setup-unpriv`, but here it’s `pg_embedded_setup_unpriv`. Please standardize on the correct command name so readers can run it without confusion.
````suggestion
```sh
pg-embed-setup-unpriv \
  | tee /tmp/pg-setup-$PROJECT-$BRANCH.log
```
````
</issue_to_address>
### Comment 4
<location path="docs/design.md" line_range="1379" />
<code_context>
This approach makes it easy to query which users have, say, “News Admin”
rights, or to extend with new permissions.
+For roadmap item 4.1.1 we aligned the implemented schema to this design using
+an additive migration pair (`00000000000006_align_news_schema`) rather than
+rewriting historical migrations in place. PostgreSQL can add the missing
</code_context>
<issue_to_address>
**suggestion (review_instructions):** This sentence uses the first-person pronoun “we”, which the style guidelines prohibit.
Consider rephrasing to avoid the first-person plural. For example: “For roadmap item 4.1.1 the implemented schema was aligned to this design using an additive migration pair …”.
<details>
<summary>Review instructions:</summary>
**Path patterns:** `**/*.md`
**Instructions:**
Avoid 2nd person or 1st person pronouns ("I", "you", "we").
</details>
</issue_to_address>
### Comment 5
<location path="docs/execplans/4-1-1-align-news-schema-and-migrations.md" line_range="17" />
<code_context>
+Roadmap item 4.1.1 requires the repository's news persistence layer to match
+`docs/news-schema.md` across both SQLite and PostgreSQL. The target state is
+not just "news tables exist"; it is a schema that supports hierarchical
+bundles, categories with GUID and sequence metadata, threaded articles with
+referential integrity, and normalized permission tables that later roadmap
+steps can seed and enforce.
</code_context>
<issue_to_address>
**suggestion (review_instructions):** “GUID” is used here without being expanded on first use.
Please expand GUID on first use, for example: “categories with globally unique identifier (GUID) and sequence metadata …”. Subsequent uses can use the acronym alone.
<details>
<summary>Review instructions:</summary>
**Path patterns:** `**/*.md`
**Instructions:**
Define uncommon acronyms on first use.
</details>
</issue_to_address>
### Comment 6
<location path="docs/execplans/4-1-1-align-news-schema-and-migrations.md" line_range="300" />
<code_context>
+- `src/schema.rs`: Diesel table definitions that must be regenerated or updated
+ to match the new schema.
+- `src/models.rs`: current Rust-side row and insert structs for news records.
+- `src/news_path.rs`: recursive CTE path lookup helper built on
+ `diesel-cte-ext`.
+- `src/db/bundles.rs`, `src/db/categories.rs`, `src/db/articles.rs`: news
</code_context>
<issue_to_address>
**suggestion (review_instructions):** “CTE” is introduced without definition, which may be unclear to some readers.
Consider expanding CTE on first use, for example: “recursive common table expression (CTE) path lookup helper …”. Later references can use “CTE” alone.
<details>
<summary>Review instructions:</summary>
**Path patterns:** `**/*.md`
**Instructions:**
Define uncommon acronyms on first use.
</details>
</issue_to_address>
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 1b3a853cda
```sql
add_sn INTEGER,
delete_sn INTEGER,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
UNIQUE(name, bundle_id)
```
Enforce root category uniqueness for NULL bundle IDs
The new UNIQUE(name, bundle_id) constraint allows duplicate top-level category names because both SQLite and PostgreSQL treat NULL values as distinct in unique constraints. That means two root categories named the same can now be inserted, but category lookup logic resolves paths by (name, bundle_id) (src/news_path.rs/src/db/categories.rs), so root-path requests can resolve nondeterministically to an arbitrary row. This weakens a previously enforced invariant (name was globally unique) and can cause incorrect category/article routing for legitimate user input.
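That both engines treat NULLs as distinct under a UNIQUE constraint is easy to confirm, and a partial unique index (one of the remedies a fix could use) closes the gap. A sketch in SQLite via Python's `sqlite3`, with simplified DDL and a hypothetical index name:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE news_categories ("
    " id INTEGER PRIMARY KEY, name TEXT NOT NULL, bundle_id INTEGER,"
    " UNIQUE(name, bundle_id))"
)
# NULL bundle_ids compare distinct, so duplicate root categories slip through.
conn.execute("INSERT INTO news_categories (name, bundle_id) VALUES ('General', NULL)")
conn.execute("INSERT INTO news_categories (name, bundle_id) VALUES ('General', NULL)")
dupes = conn.execute(
    "SELECT COUNT(*) FROM news_categories WHERE name = 'General'"
).fetchone()[0]
print(dupes)  # 2

# A partial unique index enforces root-level uniqueness going forward.
conn.execute("DELETE FROM news_categories WHERE id = 2")
conn.execute(
    "CREATE UNIQUE INDEX idx_root_category_name"
    " ON news_categories(name) WHERE bundle_id IS NULL"
)
blocked = False
try:
    conn.execute(
        "INSERT INTO news_categories (name, bundle_id) VALUES ('General', NULL)"
    )
except sqlite3.IntegrityError:
    blocked = True
print(blocked)  # True
```

The same partial-index shape works on PostgreSQL; the COALESCE-to-sentinel expression index mentioned elsewhere in this review is an equivalent alternative.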
```sql
c.bundle_id,
c.name,
lower(hex(randomblob(16))),
(SELECT COUNT(*) FROM news_articles_old a WHERE a.category_id = c.id),
```
Keep article index during SQLite add_sn backfill
This backfill computes add_sn with a correlated COUNT(*) over news_articles_old for every category, but the migration drops idx_articles_category before running it. On larger datasets this turns the upgrade into repeated full scans (O(categories × articles)), which can make migration time balloon and risk startup failures under the repository’s short migration timeout. Retaining the old index until after backfill (or pre-aggregating counts) avoids this regression.
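Pre-aggregating the counts in a single pass, instead of running a correlated subquery per category, keeps the backfill roughly linear even without the index. A sketch with hypothetical table names via Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE news_categories_old (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE news_articles_old (id INTEGER PRIMARY KEY, category_id INTEGER);
    INSERT INTO news_categories_old VALUES (1, 'General'), (2, 'Empty');
    INSERT INTO news_articles_old VALUES (1, 1), (2, 1), (3, 1);

    -- One GROUP BY pass over the articles, then a join during the copy,
    -- instead of a correlated COUNT(*) per category row.
    CREATE TEMP TABLE article_counts AS
    SELECT category_id, COUNT(*) AS n
    FROM news_articles_old
    GROUP BY category_id;

    CREATE TABLE news_categories (
        id INTEGER PRIMARY KEY, name TEXT, add_sn INTEGER
    );
    INSERT INTO news_categories (id, name, add_sn)
    SELECT c.id, c.name, COALESCE(ac.n, 0)
    FROM news_categories_old c
    LEFT JOIN article_counts ac ON ac.category_id = c.id;
    """
)
rows = conn.execute("SELECT id, add_sn FROM news_categories ORDER BY id").fetchall()
print(rows)  # [(1, 3), (2, 0)]
```

The `LEFT JOIN` plus `COALESCE` keeps empty categories at `add_sn = 0`, matching the backfill's documented default.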
Actionable comments posted: 8
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/execplans/4-1-1-align-news-schema-and-migrations.md`:
- Around line 485-545: The "Verification and quality gates" section is missing
the mandated per-feature-set Rust checks; update this section to explicitly list
and document the required Rust gate matrix by adding the commands and feature
sets: include "cargo fmt --workspace -- --check", run Clippy with the
test-support feature for each feature set (postgres, sqlite, wireframe-only)
with warnings denied (deny warnings), and run the test suite with "cargo
nextest" for each feature set; place these entries alongside the existing steps
(e.g., after "Verify Rust formatting" and before/with test steps) and show the
exact commands for each feature set so the ExecPlan becomes a reliable source of
truth.
In `@migrations/postgres/00000000000006_align_news_schema/down.sql`:
- Around line 11-15: Before re-adding the global unique constraint on
news_categories.name (the ADD CONSTRAINT news_categories_name_key UNIQUE (name)
statement), guard the downgrade by checking for duplicate names across the
table: run a check (e.g., in a DO $$ block) that SELECTs name, COUNT(*) grouped
by name and raises an exception if any count > 1, and only proceed to execute
the ADD CONSTRAINT if no duplicates are found; alternatively, if you prefer to
mark the downgrade unsupported, replace the ADD CONSTRAINT with an unconditional
RAISE EXCEPTION explaining that duplicate category names across bundles prevent
the downgrade.
In `@migrations/postgres/00000000000006_align_news_schema/up.sql`:
- Around line 31-32: The UNIQUE constraint news_categories_name_bundle_id_key on
table news_categories doesn't prevent duplicate root-level categories because
bundle_id NULLs are considered distinct; remove that constraint and replace it
with either (A) a partial unique index for non-root rows plus a separate unique
constraint/index that enforces uniqueness of name for root rows (e.g., create
UNIQUE INDEX ... ON news_categories(name, bundle_id) WHERE bundle_id IS NOT NULL
and a UNIQUE INDEX ON news_categories(name) WHERE bundle_id IS NULL), or (B) a
single expression unique index using COALESCE to normalize NULL to a sentinel
(e.g., an index on (name, COALESCE(bundle_id, <sentinel>))). Implement one of
these in place of news_categories_name_bundle_id_key and update any migration
comments accordingly.
In `@migrations/sqlite/00000000000006_align_news_schema/down.sql`:
- Around line 33-41: The down migration will fail when scoped duplicates exist
because news_categories is recreated with UNIQUE(name) while the up migration
allowed UNIQUE(name, bundle_id); before the INSERT from news_categories_new, add
a precondition that queries news_categories_new for names that appear in more
than one bundle (GROUP BY name HAVING COUNT(DISTINCT bundle_id) > 1) and abort
the migration with a clear error if any are found; place this check in the down
migration immediately before the INSERT INTO news_categories ... SELECT ... FROM
news_categories_new so the rollback fails fast with an explanatory message
rather than blowing up on a constraint violation.
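The precondition query the comment describes can be illustrated directly with Python's built-in `sqlite3`. The table below is a minimal stand-in for `news_categories_new`, not the project's real schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE news_categories_new (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        bundle_id INTEGER
    );
    -- 'Releases' exists in two different bundles: legal under the scoped
    -- UNIQUE(name, bundle_id), but it would violate UNIQUE(name) on rollback.
    INSERT INTO news_categories_new (name, bundle_id) VALUES
        ('Releases', 1), ('Releases', 2), ('General', 1);
    """
)
dupes = conn.execute(
    "SELECT name FROM news_categories_new "
    "GROUP BY name HAVING COUNT(DISTINCT bundle_id) > 1"
).fetchall()
if dupes:
    # The real down migration would abort here with a clear message
    # instead of failing later on the UNIQUE(name) constraint.
    print("cannot roll back: scoped duplicates", sorted(n for (n,) in dupes))
```

Running the check before the `INSERT ... SELECT` lets the rollback fail fast with an explanation rather than a raw constraint violation.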
In `@migrations/sqlite/00000000000006_align_news_schema/up.sql`:
- Around line 31-40: The UNIQUE(name, bundle_id) in news_categories doesn't
prevent duplicate root-level categories because bundle_id is NULL there; add a
unique index after the CREATE TABLE to enforce root-level uniqueness by mapping
NULL to a sentinel (create UNIQUE INDEX idx_news_categories_unique on
news_categories (name, IFNULL(bundle_id, -1))) or, if you only need scoped
uniqueness, create a partial unique index (idx_news_categories_scoped) over
(name, bundle_id) WHERE bundle_id IS NOT NULL; place the chosen CREATE UNIQUE
INDEX statement immediately after the news_categories table definition.
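The NULL-uniqueness gap that comment flags, and the sentinel-index fix, can be demonstrated with a short `sqlite3` session (table names mirror the migration, but the layout is simplified for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE news_categories ("
    " id INTEGER PRIMARY KEY, name TEXT NOT NULL, bundle_id INTEGER,"
    " UNIQUE(name, bundle_id))"
)
# NULLs compare as distinct, so two root-level 'General' rows slip through
# the plain UNIQUE(name, bundle_id) constraint without error.
conn.execute("INSERT INTO news_categories (name, bundle_id) VALUES ('General', NULL)")
conn.execute("INSERT INTO news_categories (name, bundle_id) VALUES ('General', NULL)")

# The sentinel-based expression index closes the gap.
conn.execute("DELETE FROM news_categories")
conn.execute(
    "CREATE UNIQUE INDEX idx_news_categories_unique"
    " ON news_categories (name, IFNULL(bundle_id, -1))"
)
conn.execute("INSERT INTO news_categories (name, bundle_id) VALUES ('General', NULL)")
try:
    conn.execute("INSERT INTO news_categories (name, bundle_id) VALUES ('General', NULL)")
    blocked = False
except sqlite3.IntegrityError:
    blocked = True
```

The sentinel `-1` assumes no real bundle ever has a negative id; a partial index (`WHERE bundle_id IS NOT NULL`) is the alternative when only scoped uniqueness is wanted.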
In `@src/db/schema_alignment_tests.rs`:
- Line 7: The file-level attribute #![expect(clippy::panic_in_result_fn, reason
= "test assertions")] is too broad; remove this module-level expect and instead
add #[expect(clippy::panic_in_result_fn, reason = "test assertions")] directly
to each test function that intentionally uses panic-based assertions (the
#[test] functions in src/db/schema_alignment_tests.rs that assert via panic).
Keep all helper functions and other tests lint-clean by not suppressing the lint
globally; only annotate the specific test item identifiers that require the
exception.
- Around line 68-83: The helper assert_upgrade_backfills currently only checks
backfilled fields for bundles and categories; update it to also assert that the
aligned schema elements present in the fresh-schema tests still exist after
upgrade by querying and asserting presence of rows in permissions and
user_permissions and that the article index exists. Specifically, extend
assert_upgrade_backfills (and reuse the same checks used by the fresh-schema
assertions) to run SQL queries against permissions and user_permissions tables
and assert non-empty results, and verify the article index (by querying
pg_indexes or attempting an index-only plan) so that BundleBackfillRow and
CategoryBackfillRow checks are complemented by schema-alignment checks for
permissions, user_permissions, and the article index.
- Around line 268-301: The tests currently treat POSTGRES_TEST_URL as a signal
to skip embedded DBs (should_skip_embedded_postgres) which causes
with_postgres_test_db to return early without running assertions; add a helper
postgres_test_url_from_env() that returns Some(url) when POSTGRES_TEST_URL is
set and non-empty, then modify with_postgres_test_db to check that helper first
and, if present, run the provided test against that URL via a tokio runtime and
return its result; remove or stop using should_skip_embedded_postgres so
embedded_postgres_db() always delegates to
start_embedded_postgres_db()/PostgresTestDb::new() when no env URL is provided.
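The selection logic the comment proposes (a Rust helper) boils down to the following; this Python sketch is purely illustrative of the intended semantics, and the function name simply mirrors the suggested `postgres_test_url_from_env`:

```python
import os

def postgres_test_url_from_env():
    """Return the external database URL when POSTGRES_TEST_URL is set and non-empty."""
    url = os.environ.get("POSTGRES_TEST_URL", "").strip()
    return url or None

os.environ["POSTGRES_TEST_URL"] = "postgres://localhost/mxd_test"
assert postgres_test_url_from_env() == "postgres://localhost/mxd_test"

# With no URL configured, the caller falls back to the embedded database.
del os.environ["POSTGRES_TEST_URL"]
assert postgres_test_url_from_env() is None
```

With this in place, `with_postgres_test_db` runs the test against the external URL when one is configured and only otherwise starts an embedded cluster, so setting the variable no longer silently skips every assertion.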
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: fe939d94-e38c-4ac3-b913-473afc1c3916
📒 Files selected for processing (13)
- docs/design.md
- docs/developers-guide.md
- docs/execplans/4-1-1-align-news-schema-and-migrations.md
- migrations/postgres/00000000000006_align_news_schema/down.sql
- migrations/postgres/00000000000006_align_news_schema/up.sql
- migrations/sqlite/00000000000006_align_news_schema/down.sql
- migrations/sqlite/00000000000006_align_news_schema/up.sql
- src/db/mod.rs
- src/db/schema_alignment_tests.rs
- src/models.rs
- src/schema.rs
- tests/features/wireframe_routing.feature
- tests/wireframe_routing_bdd.rs
Add regression coverage for permission tables and indices on fresh and upgraded databases so the migration path stays aligned with the target schema. Exercise `Permission` and `UserPermission` model round-trips plus cascade deletion behaviour, and document nested news category listing for users.
Use the standard user install paths for Cargo and Whitaker when the hook environment does not include them on `PATH`.
Include Cargo's bin directory in the environment for Whitaker lint runs so hook environments with a minimal `PATH` can still execute `cargo dylint` through the Whitaker shim.
Actionable comments posted: 3
♻️ Duplicate comments (1)
src/db/schema_alignment_tests.rs (1)
377-444: ⚠️ Potential issue | 🟠 Major

Run against `POSTGRES_TEST_URL` instead of skipping.

Treat `POSTGRES_TEST_URL` as the external database target. The current flow short-circuits `embedded_postgres_db()` to `None`, so both PostgreSQL tests return `Ok(())` without executing a single assertion whenever that variable is set.

Based on learnings, "Keep `POSTGRES_TEST_URL` support for external PostgreSQL integration tests when adopting the v0.5.0 migration strategy".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/db/schema_alignment_tests.rs` around lines 377 - 444, The tests currently skip whenever POSTGRES_TEST_URL is set (should_skip_embedded_postgres returns true), which causes no assertions to run; instead detect POSTGRES_TEST_URL and run tests against that external DB: change should_skip_embedded_postgres to not skip when POSTGRES_TEST_URL is set (return false), and update embedded_postgres_db to, when POSTGRES_TEST_URL is present, construct and return a PostgresTestDb representing the external URL (e.g., add/Use a PostgresTestDb::from_url or similar constructor) so start_embedded_postgres_db()/with_postgres_test_db still receive Some(PostgresTestDb) and tests run against the provided URL.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/users-guide.md`:
- Around line 96-100: Change the fenced block language identifier from "text" to
"plaintext" for the path example block shown (the triple-fenced block containing
"/", "/Releases", "/Releases/2026"); update the opening fence to ```plaintext
while leaving the block content and closing fence unchanged so it follows the
repository's Markdown convention for non-code samples.
In `@src/db/schema_alignment_tests.rs`:
- Around line 134-444: The file exceeds the 400-line limit because
backend-specific SQLite and Postgres tests/live helpers are kept inline; split
those into two submodules (e.g., mod sqlite_tests and mod postgres_tests) and
leave only shared helpers in this file. Move all SQLite-specific functions and
tests (setup_sqlite_legacy_schema, sqlite_names,
assert_sqlite_permission_schema, assert_sqlite_news_schema,
assert_sqlite_aligned_schema, sqlite_fresh_migration_creates_aligned_schema,
sqlite_upgrade_backfills_legacy_news_rows) into the new sqlite_tests
module/file, and move all Postgres-specific functions and tests
(setup_postgres_legacy_schema, postgres_names,
assert_postgres_permission_schema, assert_postgres_news_schema,
assert_postgres_aligned_schema, should_skip_embedded_postgres,
start_embedded_postgres_db, with_postgres_test_db,
postgres_fresh_migration_creates_aligned_schema,
postgres_upgrade_backfills_legacy_news_rows, embedded_postgres_db) into the new
postgres_tests module/file; keep shared utilities (e.g., setup_legacy_schema,
assert_upgrade_backfills, apply_migrations, DbConnection, NameRow) in this file
and update use/visibility (pub(crate) or make functions public) and imports so
the new modules reference them.
In `@src/db/tests.rs`:
- Around line 223-367: The tests duplicate SQLite permission setup and push the
file past 400 lines; extract the permission-related tests into a new dedicated
test module, move seed_user_permission and the three tests
(test_permission_model_round_trip,
test_user_permission_cascades_when_user_is_deleted,
test_user_permission_cascades_when_permission_is_deleted) into that module,
create an rstest fixture (e.g., permission_fixture) that performs the shared
setup used by seed_user_permission, and replace the two cascade tests with a
single parameterized #[rstest] test that runs the delete-user and
delete-permission cases using the fixture; ensure you keep calls to
diesel::sql_query("PRAGMA foreign_keys = ON") and references to NewUser,
NewPermission, NewUserPermission, create_user, get_user_by_name, and the schema
filters so behavior is unchanged and the original arrange/act/assert flow is
preserved.
---
Duplicate comments:
In `@src/db/schema_alignment_tests.rs`:
- Around line 377-444: The tests currently skip whenever POSTGRES_TEST_URL is
set (should_skip_embedded_postgres returns true), which causes no assertions to
run; instead detect POSTGRES_TEST_URL and run tests against that external DB:
change should_skip_embedded_postgres to not skip when POSTGRES_TEST_URL is set
(return false), and update embedded_postgres_db to, when POSTGRES_TEST_URL is
present, construct and return a PostgresTestDb representing the external URL
(e.g., add/Use a PostgresTestDb::from_url or similar constructor) so
start_embedded_postgres_db()/with_postgres_test_db still receive
Some(PostgresTestDb) and tests run against the provided URL.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro Plus
Run ID: f44cae4f-4ccc-4d74-a120-92f6bdeadceb
📒 Files selected for processing (3)
- docs/users-guide.md
- src/db/schema_alignment_tests.rs
- src/db/tests.rs
```rust
#[cfg(feature = "sqlite")]
async fn setup_sqlite_legacy_schema(conn: &mut DbConnection) -> TestResult<()> {
    setup_legacy_schema(
        conn,
        &[
            "CREATE TABLE users (id INTEGER PRIMARY KEY AUTOINCREMENT, username TEXT NOT NULL \
             UNIQUE, password TEXT NOT NULL)",
            "CREATE TABLE news_bundles (id INTEGER PRIMARY KEY AUTOINCREMENT, parent_bundle_id \
             INTEGER REFERENCES news_bundles(id) ON DELETE CASCADE, name TEXT NOT NULL, \
             UNIQUE(name, parent_bundle_id))",
            "CREATE TABLE news_categories (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT NOT \
             NULL UNIQUE, bundle_id INTEGER REFERENCES news_bundles(id) ON DELETE CASCADE)",
            "CREATE TABLE news_articles (id INTEGER PRIMARY KEY AUTOINCREMENT, category_id \
             INTEGER NOT NULL REFERENCES news_categories(id) ON DELETE CASCADE, parent_article_id \
             INTEGER REFERENCES news_articles(id), prev_article_id INTEGER REFERENCES \
             news_articles(id), next_article_id INTEGER REFERENCES news_articles(id), \
             first_child_article_id INTEGER REFERENCES news_articles(id), title TEXT NOT NULL, \
             poster TEXT, posted_at DATETIME NOT NULL, flags INTEGER DEFAULT 0, data_flavor TEXT \
             DEFAULT 'text/plain', data TEXT, CHECK (category_id IS NOT NULL))",
            "CREATE TABLE files (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT NOT NULL UNIQUE, \
             object_key TEXT NOT NULL, size INTEGER NOT NULL DEFAULT 0)",
        ],
    )
    .await
}

#[cfg(feature = "sqlite")]
async fn sqlite_names(conn: &mut DbConnection, sql: &str) -> TestResult<Vec<String>> {
    Ok(sql_query(sql)
        .load::<NameRow>(conn)
        .await?
        .into_iter()
        .map(|row| row.name)
        .collect())
}

#[cfg(feature = "sqlite")]
async fn assert_sqlite_permission_schema(conn: &mut DbConnection) -> TestResult<()> {
    let tables = sqlite_names(
        conn,
        "SELECT name FROM sqlite_master WHERE type = 'table' AND name IN ('permissions', \
         'user_permissions') ORDER BY name",
    )
    .await?;
    assert_eq!(tables, vec!["permissions", "user_permissions"]);

    let permission_indices = sqlite_names(
        conn,
        "SELECT name FROM pragma_index_list('permissions') ORDER BY name",
    )
    .await?;
    assert!(
        permission_indices
            .iter()
            .any(|name| name == "sqlite_autoindex_permissions_1")
    );

    let user_permission_indices = sqlite_names(
        conn,
        "SELECT name FROM pragma_index_list('user_permissions') ORDER BY name",
    )
    .await?;
    for expected in [
        "idx_user_permissions_perm",
        "idx_user_permissions_user",
        "sqlite_autoindex_user_permissions_1",
    ] {
        assert!(user_permission_indices.iter().any(|name| name == expected));
    }
    Ok(())
}

#[cfg(feature = "sqlite")]
async fn assert_sqlite_news_schema(conn: &mut DbConnection) -> TestResult<()> {
    let article_indices = sqlite_names(
        conn,
        "SELECT name FROM pragma_index_list('news_articles') ORDER BY name",
    )
    .await?;
    for expected in [
        "idx_articles_category",
        "idx_articles_first_child_article",
        "idx_articles_next_article",
        "idx_articles_parent_article",
        "idx_articles_prev_article",
    ] {
        assert!(article_indices.iter().any(|name| name == expected));
    }

    let category_columns = sqlite_names(
        conn,
        "SELECT name FROM pragma_table_info('news_categories') ORDER BY cid",
    )
    .await?;
    assert_eq!(
        category_columns,
        vec![
            "id",
            "bundle_id",
            "name",
            "guid",
            "add_sn",
            "delete_sn",
            "created_at"
        ]
    );
    Ok(())
}

#[cfg(feature = "sqlite")]
async fn assert_sqlite_aligned_schema(conn: &mut DbConnection) -> TestResult<()> {
    assert_sqlite_permission_schema(conn).await?;
    assert_sqlite_news_schema(conn).await
}

#[cfg(feature = "sqlite")]
#[tokio::test]
async fn sqlite_fresh_migration_creates_aligned_schema() -> TestResult<()> {
    let mut conn = sqlite_conn().await?;

    assert_sqlite_aligned_schema(&mut conn).await
}

#[cfg(feature = "sqlite")]
#[tokio::test]
async fn sqlite_upgrade_backfills_legacy_news_rows() -> TestResult<()> {
    let mut conn = DbConnection::establish(":memory:").await?;
    setup_sqlite_legacy_schema(&mut conn).await?;
    apply_migrations(&mut conn, "").await?;

    assert_upgrade_backfills(&mut conn).await?;
    assert_sqlite_aligned_schema(&mut conn).await
}

#[cfg(feature = "postgres")]
async fn setup_postgres_legacy_schema(conn: &mut DbConnection) -> TestResult<()> {
    setup_legacy_schema(
        conn,
        &[
            "CREATE TABLE users (id INTEGER PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY, \
             username TEXT NOT NULL UNIQUE, password TEXT NOT NULL)",
            "CREATE TABLE news_bundles (id INTEGER PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY, \
             parent_bundle_id INTEGER REFERENCES news_bundles(id) ON DELETE CASCADE, name TEXT \
             NOT NULL, UNIQUE(name, parent_bundle_id))",
            "CREATE TABLE news_categories (id INTEGER PRIMARY KEY GENERATED BY DEFAULT AS \
             IDENTITY, name TEXT NOT NULL UNIQUE, bundle_id INTEGER REFERENCES news_bundles(id) \
             ON DELETE CASCADE)",
            "CREATE TABLE news_articles (id INTEGER PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY, \
             category_id INTEGER NOT NULL REFERENCES news_categories(id) ON DELETE CASCADE, \
             parent_article_id INTEGER REFERENCES news_articles(id), prev_article_id INTEGER \
             REFERENCES news_articles(id), next_article_id INTEGER REFERENCES news_articles(id), \
             first_child_article_id INTEGER REFERENCES news_articles(id), title TEXT NOT NULL, \
             poster TEXT, posted_at TIMESTAMP NOT NULL, flags INTEGER DEFAULT 0, data_flavor TEXT \
             DEFAULT 'text/plain', data TEXT, CHECK (category_id IS NOT NULL))",
            "CREATE TABLE files (id INTEGER PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY, name \
             TEXT NOT NULL UNIQUE, object_key TEXT NOT NULL, size BIGINT NOT NULL DEFAULT 0)",
        ],
    )
    .await
}

#[cfg(feature = "postgres")]
async fn postgres_names(conn: &mut DbConnection, sql: &str) -> TestResult<Vec<String>> {
    Ok(sql_query(sql)
        .load::<NameRow>(conn)
        .await?
        .into_iter()
        .map(|row| row.name)
        .collect())
}

#[cfg(feature = "postgres")]
async fn assert_postgres_permission_schema(conn: &mut DbConnection) -> TestResult<()> {
    let permission_tables = postgres_names(
        conn,
        "SELECT table_name AS name FROM information_schema.tables WHERE table_schema = 'public' \
         AND table_name IN ('permissions', 'user_permissions') ORDER BY table_name",
    )
    .await?;
    assert_eq!(permission_tables, vec!["permissions", "user_permissions"]);

    let permission_indices = postgres_names(
        conn,
        "SELECT indexname AS name FROM pg_indexes WHERE schemaname = 'public' AND tablename IN \
         ('permissions', 'user_permissions') ORDER BY indexname",
    )
    .await?;
    for expected in [
        "idx_user_permissions_perm",
        "idx_user_permissions_user",
        "permissions_code_key",
        "user_permissions_pkey",
    ] {
        assert!(permission_indices.iter().any(|name| name == expected));
    }
    Ok(())
}

#[cfg(feature = "postgres")]
async fn assert_postgres_news_schema(conn: &mut DbConnection) -> TestResult<()> {
    let category_columns = postgres_names(
        conn,
        "SELECT column_name AS name FROM information_schema.columns WHERE table_name = \
         'news_categories' ORDER BY ordinal_position",
    )
    .await?;
    assert_eq!(
        category_columns,
        vec![
            "id",
            "bundle_id",
            "name",
            "guid",
            "add_sn",
            "delete_sn",
            "created_at"
        ]
    );

    let article_indices = postgres_names(
        conn,
        "SELECT indexname AS name FROM pg_indexes WHERE tablename = 'news_articles' ORDER BY \
         indexname",
    )
    .await?;
    for expected in [
        "idx_articles_category",
        "idx_articles_first_child_article",
        "idx_articles_next_article",
        "idx_articles_parent_article",
        "idx_articles_prev_article",
    ] {
        assert!(article_indices.iter().any(|name| name == expected));
    }
    Ok(())
}

#[cfg(feature = "postgres")]
async fn assert_postgres_aligned_schema(conn: &mut DbConnection) -> TestResult<()> {
    assert_postgres_permission_schema(conn).await?;
    assert_postgres_news_schema(conn).await
}

#[cfg(feature = "postgres")]
fn should_skip_embedded_postgres() -> bool {
    if std::env::var_os("POSTGRES_TEST_URL").is_some() {
        tracing::warn!("SKIP-TEST-CLUSTER: POSTGRES_TEST_URL set, skipping embedded postgres test");
        return true;
    }
    false
}

#[cfg(feature = "postgres")]
fn start_embedded_postgres_db() -> TestResult<Option<PostgresTestDb>> {
    match PostgresTestDb::new() {
        Ok(db) => Ok(Some(db)),
        Err(PostgresTestDbError::Unavailable(_)) => {
            tracing::warn!("SKIP-TEST-CLUSTER: PostgreSQL unavailable");
            Ok(None)
        }
        Err(error) => Err(error.into()),
    }
}

#[cfg(feature = "postgres")]
fn with_postgres_test_db<F, Fut>(test: F) -> TestResult<()>
where
    F: FnOnce(String) -> Fut + Send + 'static,
    Fut: Future<Output = TestResult<()>> + Send + 'static,
{
    let Some(db) = embedded_postgres_db()? else {
        return Ok(());
    };

    let runtime = tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()?;
    runtime.block_on(async move { test(db.url.to_string()).await })
}

#[cfg(feature = "postgres")]
#[test]
fn postgres_fresh_migration_creates_aligned_schema() -> TestResult<()> {
    with_postgres_test_db(|url| async move {
        let mut conn = DbConnection::establish(&url).await?;
        apply_migrations(&mut conn, &url).await?;

        assert_postgres_aligned_schema(&mut conn).await
    })
}

#[cfg(feature = "postgres")]
#[test]
fn postgres_upgrade_backfills_legacy_news_rows() -> TestResult<()> {
    with_postgres_test_db(|url| async move {
        let mut conn = DbConnection::establish(&url).await?;
        setup_postgres_legacy_schema(&mut conn).await?;
        apply_migrations(&mut conn, &url).await?;

        assert_upgrade_backfills(&mut conn).await?;
        assert_postgres_aligned_schema(&mut conn).await
    })
}

#[cfg(feature = "postgres")]
fn embedded_postgres_db() -> TestResult<Option<PostgresTestDb>> {
    if should_skip_embedded_postgres() {
        return Ok(None);
    }
    start_embedded_postgres_db()
}
```
🛠️ Refactor suggestion | 🟠 Major
Split the backend-specific schema tests into submodules.
Move the SQLite and PostgreSQL halves into separate test modules and keep only
the shared helpers here. The new file is 445 lines long, and the backend split
is already explicit enough to decompose cleanly.
As per coding guidelines, "Files must not exceed 400 lines in length".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/db/schema_alignment_tests.rs` around lines 134 - 444, The file exceeds
the 400-line limit because backend-specific SQLite and Postgres tests/live
helpers are kept inline; split those into two submodules (e.g., mod sqlite_tests
and mod postgres_tests) and leave only shared helpers in this file. Move all
SQLite-specific functions and tests (setup_sqlite_legacy_schema, sqlite_names,
assert_sqlite_permission_schema, assert_sqlite_news_schema,
assert_sqlite_aligned_schema, sqlite_fresh_migration_creates_aligned_schema,
sqlite_upgrade_backfills_legacy_news_rows) into the new sqlite_tests
module/file, and move all Postgres-specific functions and tests
(setup_postgres_legacy_schema, postgres_names,
assert_postgres_permission_schema, assert_postgres_news_schema,
assert_postgres_aligned_schema, should_skip_embedded_postgres,
start_embedded_postgres_db, with_postgres_test_db,
postgres_fresh_migration_creates_aligned_schema,
postgres_upgrade_backfills_legacy_news_rows, embedded_postgres_db) into the new
postgres_tests module/file; keep shared utilities (e.g., setup_legacy_schema,
assert_upgrade_backfills, apply_migrations, DbConnection, NameRow) in this file
and update use/visibility (pub(crate) or make functions public) and imports so
the new modules reference them.
```rust
#[cfg(feature = "sqlite")]
async fn seed_user_permission(conn: &mut DbConnection) -> (i32, i32) {
    let user = NewUser {
        username: "dana",
        password: "hash",
    };
    create_user(conn, &user)
        .await
        .expect("failed to create permission test user");
    let stored_user = get_user_by_name(conn, "dana")
        .await
        .expect("permission test user lookup failed")
        .expect("permission test user missing");

    let permission = NewPermission {
        code: 34,
        name: "News Create Category",
        scope: "bundle",
    };
    diesel::insert_into(permissions::permissions)
        .values(&permission)
        .execute(conn)
        .await
        .expect("failed to insert permission");
    let permission_id = permissions::permissions
        .filter(permissions::code.eq(34))
        .select(permissions::id)
        .first::<i32>(conn)
        .await
        .expect("permission id");

    (stored_user.id, permission_id)
}

#[cfg(feature = "sqlite")]
#[rstest]
#[tokio::test]
async fn test_permission_model_round_trip(#[future] migrated_conn: DbConnection) {
    let mut conn = migrated_conn.await;
    let (user_id, permission_id) = seed_user_permission(&mut conn).await;
    let user_permission = NewUserPermission {
        user_id,
        permission_id,
    };
    diesel::insert_into(user_permissions::user_permissions)
        .values(&user_permission)
        .execute(&mut conn)
        .await
        .expect("failed to assign permission");

    let permission = permissions::permissions
        .filter(permissions::id.eq(permission_id))
        .first::<Permission>(&mut conn)
        .await
        .expect("permission row");
    assert_eq!(permission.code, 34);
    assert_eq!(permission.name, "News Create Category");
    assert_eq!(permission.scope, "bundle");

    let assigned = user_permissions::user_permissions
        .first::<UserPermission>(&mut conn)
        .await
        .expect("user permission row");
    assert_eq!(assigned.user_id, user_id);
    assert_eq!(assigned.permission_id, permission_id);
}

#[cfg(feature = "sqlite")]
#[rstest]
#[tokio::test]
async fn test_user_permission_cascades_when_user_is_deleted(#[future] migrated_conn: DbConnection) {
    let mut conn = migrated_conn.await;
    diesel::sql_query("PRAGMA foreign_keys = ON")
        .execute(&mut conn)
        .await
        .expect("failed to enable sqlite foreign keys");
    let (user_id, permission_id) = seed_user_permission(&mut conn).await;
    let user_permission = NewUserPermission {
        user_id,
        permission_id,
    };
    diesel::insert_into(user_permissions::user_permissions)
        .values(&user_permission)
        .execute(&mut conn)
        .await
        .expect("failed to assign permission");

    diesel::delete(
        crate::schema::users::dsl::users.filter(crate::schema::users::dsl::id.eq(user_id)),
    )
    .execute(&mut conn)
    .await
    .expect("failed to delete user");
    let assignments = user_permissions::user_permissions
        .load::<UserPermission>(&mut conn)
        .await
        .expect("failed to load remaining assignments");
    assert!(assignments.is_empty());

    let permission = permissions::permissions
        .filter(permissions::id.eq(permission_id))
        .first::<Permission>(&mut conn)
        .await
        .expect("permission should remain after user deletion");
    assert_eq!(permission.code, 34);
}

#[cfg(feature = "sqlite")]
#[rstest]
#[tokio::test]
async fn test_user_permission_cascades_when_permission_is_deleted(
    #[future] migrated_conn: DbConnection,
) {
    let mut conn = migrated_conn.await;
    diesel::sql_query("PRAGMA foreign_keys = ON")
        .execute(&mut conn)
        .await
        .expect("failed to enable sqlite foreign keys");
    let (user_id, permission_id) = seed_user_permission(&mut conn).await;
    let user_permission = NewUserPermission {
        user_id,
        permission_id,
    };
    diesel::insert_into(user_permissions::user_permissions)
        .values(&user_permission)
        .execute(&mut conn)
        .await
        .expect("failed to assign permission");

    diesel::delete(permissions::permissions.filter(permissions::id.eq(permission_id)))
        .execute(&mut conn)
        .await
        .expect("failed to delete permission");
    let assignments = user_permissions::user_permissions
        .load::<UserPermission>(&mut conn)
        .await
        .expect("failed to load remaining assignments");
    assert!(assignments.is_empty());

    let user = get_user_by_name(&mut conn, "dana")
        .await
        .expect("user lookup failed")
        .expect("user should remain after permission deletion");
    assert_eq!(user.id, user_id);
}
```
🛠️ Refactor suggestion | 🟠 Major
Split and de-duplicate the SQLite permission tests.
Extract this block into a dedicated test module and fold the two cascade cases
into a single #[rstest] case backed by a fixture. These additions push
src/db/tests.rs past the 400-line cap and keep the same arrange/act/assert
flow duplicated in two places.
As per coding guidelines, "Files must not exceed 400 lines in length", "Use rstest fixtures for shared setup", and "Replace duplicated tests with #[rstest(...)] parameterised cases".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/db/tests.rs` around lines 223 - 367, The tests duplicate SQLite
permission setup and push the file past 400 lines; extract the
permission-related tests into a new dedicated test module, move
seed_user_permission and the three tests (test_permission_model_round_trip,
test_user_permission_cascades_when_user_is_deleted,
test_user_permission_cascades_when_permission_is_deleted) into that module,
create an rstest fixture (e.g., permission_fixture) that performs the shared
setup used by seed_user_permission, and replace the two cascade tests with a
single parameterized #[rstest] test that runs the delete-user and
delete-permission cases using the fixture; ensure you keep calls to
diesel::sql_query("PRAGMA foreign_keys = ON") and references to NewUser,
NewPermission, NewUserPermission, create_user, get_user_by_name, and the schema
filters so behavior is unchanged and the original arrange/act/assert flow is
preserved.
Add Whitaker's resolved bin directory to the lint tool path so `cargo dylint` can find the `cargo-dylint` subcommand in hook environments that omit the user local bin directory.
Reuse the historical migration SQL when constructing legacy schema fixtures so upgrade tests do not carry duplicate table definitions. Extend PostgreSQL schema alignment coverage with bundle-column checks and a permissions round trip. Keep category and bundle serde derives by enabling chrono's serde support, and make down migrations tolerant of duplicate category names that are valid under scoped category uniqueness. Clean up the review-noted documentation wording, acronym expansion, and PostgreSQL setup command name.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Makefile`:
- Around line 13-15: The current Makefile can inject the current directory into
PATH because CARGO_PATH and CARGO_BIN_DIR are computed from the entire CARGO
string (which may contain arguments) and fall back to the raw command on lookup
failure; fix this by using firstword to extract the executable token from CARGO,
run command -v on that token, and only set CARGO_PATH/CARGO_BIN_DIR when command
-v returns a non-empty path (i.e., successful lookup); ensure CARGO_BIN_DIR is
derived only from a successful CARGO_PATH and avoid defaulting to the raw
command so no './' directory is prepended to PATH—apply the same pattern to the
other similar variables/blocks referenced (the analogous lines around
CARGO_PATH/CARGO_BIN_DIR usage).
CARGO_PATH := $(shell command -v $(CARGO) 2>/dev/null || printf '%s\n' "$(CARGO)")
CARGO_BIN_DIR := $(dir $(CARGO_PATH))
BUILD_JOBS ?=
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
tmp_makefile="$(mktemp)"
trap 'rm -f "$tmp_makefile"' EXIT
cat >"$tmp_makefile" <<'MAKE'
CARGO := cargo +nightly
CARGO_PATH := $(shell command -v $(CARGO) 2>/dev/null || printf '%s\n' "$(CARGO)")
CARGO_BIN_DIR := $(dir $(CARGO_PATH))
WHITAKER := definitely-missing-whitaker
WHITAKER_PATH := $(shell command -v $(WHITAKER) 2>/dev/null || printf '%s\n' "$(WHITAKER)")
WHITAKER_BIN_DIR := $(dir $(WHITAKER_PATH))
all:
	@printf 'CARGO_PATH=[%s]\n' '$(CARGO_PATH)'
	@printf 'CARGO_BIN_DIR=[%s]\n' '$(CARGO_BIN_DIR)'
	@printf 'WHITAKER_PATH=[%s]\n' '$(WHITAKER_PATH)'
	@printf 'WHITAKER_BIN_DIR=[%s]\n' '$(WHITAKER_BIN_DIR)'
MAKE
make -f "$tmp_makefile"

Repository: leynos/mxd
Length of output: 100
🏁 Script executed:
cat -n Makefile | head -100

Repository: leynos/mxd
Length of output: 4869
Eliminate current-working-directory injection into PATH via failed tool resolution.
Lines 13–14 and 33–34 fall back to the raw command string when lookup fails. GNU make normalises $(dir ...) to ./ for bare names, so line 35 prepends the current working directory to PATH for lines 86, 90, and 94. This creates a command execution hazard. Additionally, overrides such as CARGO='cargo +nightly' bypass the conditional checks at lines 6–11 and 26–31, causing command -v to fail on the full string and inject ./ into the derived paths.
Resolve only the executable token via command -v and conditionally prepend a directory only after successful lookup. Use firstword to extract the command part from overrides containing arguments.
Proposed fix
+resolve_cmd = $(strip $(shell command -v $(firstword $(1)) 2>/dev/null))
+
-CARGO_PATH := $(shell command -v $(CARGO) 2>/dev/null || printf '%s\n' "$(CARGO)")
-CARGO_BIN_DIR := $(dir $(CARGO_PATH))
+CARGO_PATH := $(call resolve_cmd,$(CARGO))
+CARGO_BIN_DIR := $(if $(CARGO_PATH),$(dir $(CARGO_PATH)))
@@
-WHITAKER_PATH := $(shell command -v $(WHITAKER) 2>/dev/null || printf '%s\n' "$(WHITAKER)")
-WHITAKER_BIN_DIR := $(dir $(WHITAKER_PATH))
-TOOL_PATH := $(CARGO_BIN_DIR):$(WHITAKER_BIN_DIR):$(PATH)
+WHITAKER_PATH := $(call resolve_cmd,$(WHITAKER))
+WHITAKER_BIN_DIR := $(if $(WHITAKER_PATH),$(dir $(WHITAKER_PATH)))
+TOOL_PATH := $(strip $(if $(CARGO_BIN_DIR),$(CARGO_BIN_DIR):)$(if $(WHITAKER_BIN_DIR),$(WHITAKER_BIN_DIR):)$(PATH))

Also applies to: 33–35
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Makefile` around lines 13 - 15, The current Makefile can inject the current
directory into PATH because CARGO_PATH and CARGO_BIN_DIR are computed from the
entire CARGO string (which may contain arguments) and fall back to the raw
command on lookup failure; fix this by using firstword to extract the executable
token from CARGO, run command -v on that token, and only set
CARGO_PATH/CARGO_BIN_DIR when command -v returns a non-empty path (i.e.,
successful lookup); ensure CARGO_BIN_DIR is derived only from a successful
CARGO_PATH and avoid defaulting to the raw command so no './' directory is
prepended to PATH—apply the same pattern to the other similar variables/blocks
referenced (the analogous lines around CARGO_PATH/CARGO_BIN_DIR usage).
Add a partial unique index for root news categories so top-level names remain unique when `bundle_id` is `NULL`. Keep the scoped `UNIQUE(name, bundle_id)` constraint for categories inside bundles. Cover the invariant in schema alignment tests by asserting both index presence and duplicate root-category rejection for SQLite and PostgreSQL.
Retain the legacy `idx_articles_category` index until after the SQLite category backfill counts articles per category, then drop it before recreating the index on the replacement table. This keeps the migration from turning the `add_sn` backfill into repeated full scans on larger news datasets.
Replace nullable composite category uniqueness with explicit root and scoped uniqueness checks across the news schema migrations. Add clear downgrade preconditions before restoring the legacy global category-name constraint. Extend schema-alignment upgrade tests to exercise permission joins, article-index presence, and `POSTGRES_TEST_URL` execution. Document the per-feature Rust gate matrix in the ExecPlan.
Actionable comments posted: 3
♻️ Duplicate comments (3)
src/db/schema_alignment_tests.rs (2)
462-489: ⚠️ Potential issue | 🟠 Major

Honour `POSTGRES_TEST_URL` instead of treating it as a skip signal.

Setting that variable makes `embedded_postgres_db()` return `None`, so `with_postgres_test_db()` exits `Ok(())` without running a single PostgreSQL assertion. Run the closure against the configured URL first, then fall back to the embedded cluster only when the variable is absent.

Also applies to: 523-529
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/db/schema_alignment_tests.rs` around lines 462 - 489, The tests currently treat POSTGRES_TEST_URL as a signal to skip instead of using it; update with_postgres_test_db (and the duplicate at lines ~523-529) to first check std::env::var_os("POSTGRES_TEST_URL") and, if present, call the provided closure test with that URL (convert to String) and return its result, otherwise fall back to starting/using the embedded DB via start_embedded_postgres_db()/embedded_postgres_db() as before; keep existing behavior for unavailable embedded DB (return Ok(())) and preserve error propagation paths.
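The env-var-first fallback the comment asks for is a small, testable pattern. As a minimal sketch in Python (the actual helper is Rust; `resolve_test_db_url` and `embedded_factory` are illustrative names, not project code):

```python
import os

def resolve_test_db_url(embedded_factory):
    """Prefer POSTGRES_TEST_URL; otherwise fall back to an embedded cluster.

    `embedded_factory` returns a URL string, or None when the embedded
    cluster is unavailable (the caller then skips the test gracefully).
    """
    configured = os.environ.get("POSTGRES_TEST_URL")
    if configured:
        # An explicitly configured URL must actually be exercised,
        # never treated as a signal to skip.
        return configured
    return embedded_factory()

# Without the variable, the embedded fallback is used.
os.environ.pop("POSTGRES_TEST_URL", None)
assert resolve_test_db_url(lambda: "postgres://embedded") == "postgres://embedded"

# With the variable set, the configured URL wins.
os.environ["POSTGRES_TEST_URL"] = "postgres://configured"
assert resolve_test_db_url(lambda: "postgres://embedded") == "postgres://configured"
```

The key point is the ordering: the configured URL is consulted before the embedded cluster is even started, so CI environments that provide a real database run the full assertion suite.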
157-529: 🛠️ Refactor suggestion | 🟠 Major

Split the backend-specific suites out of this module.

Move the SQLite and PostgreSQL halves into dedicated submodules/files and keep only the shared helpers here. At 530 lines, this test module still breaches the repository limit and is harder to maintain than it needs to be.

As per coding guidelines, "Files must not exceed 400 lines in length".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/db/schema_alignment_tests.rs` around lines 157 - 529, This file is too large—split SQLite and PostgreSQL test halves into their own modules/files: create src/db/sqlite_schema_alignment_tests.rs and src/db/postgres_schema_alignment_tests.rs, move all sqlite-prefixed items (e.g., setup_sqlite_legacy_schema, sqlite_names, assert_sqlite_permission_schema, assert_sqlite_news_schema, assert_sqlite_aligned_schema, sqlite_fresh_migration_creates_aligned_schema, sqlite_upgrade_backfills_legacy_news_rows) into the SQLite file and all postgres-prefixed items (e.g., setup_postgres_legacy_schema, postgres_names, assert_postgres_permission_schema, assert_postgres_bundle_schema, assert_postgres_category_schema, assert_postgres_article_indices, assert_postgres_news_schema, assert_postgres_aligned_schema, should_skip_embedded_postgres, start_embedded_postgres_db, with_postgres_test_db, postgres_fresh_migration_creates_aligned_schema, postgres_upgrade_backfills_legacy_news_rows, embedded_postgres_db) into the Postgres file, leaving only shared helpers (like setup_legacy_schema, apply_migrations, assert_root_category_names_are_unique, CountRow/NameRow types, DbConnection, run_statements) in this module; preserve all #[cfg(feature = "...")] annotations on moved items, update module imports/uses and test visibility (pub(crate) or pub as needed), and add mod declarations in the parent module so tests compile and maintain existing symbol paths used across the codebase.

docs/execplans/4-1-1-align-news-schema-and-migrations.md (1)
491-551: ⚠️ Potential issue | 🟠 Major

Record the required Rust gate matrix here.

Replace the high-level `make`-only list with the explicit Rust gates the repo requires: `cargo fmt --workspace -- --check`, Clippy for `postgres`, `sqlite`, and wireframe-only with `test-support` and warnings denied, and `cargo nextest` for the same feature sets. This ExecPlan is not a reliable merge checklist until those commands are written down here.

As per coding guidelines, "Rust: Run `cargo fmt --workspace -- --check` to validate formatting across the entire workspace without modifying files", "Rust: Run Clippy for each feature set (postgres, sqlite, and wireframe-only) with the `test-support` feature included, denying all Clippy warnings", and "Rust: Run the test suite for each feature set (postgres, sqlite, and wireframe-only) using `cargo nextest`".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/execplans/4-1-1-align-news-schema-and-migrations.md` around lines 491 - 551, Update the verification steps to include the explicit Rust gate matrix instead of only generic `make` targets: add `cargo fmt --workspace -- --check` to validate formatting; run Clippy three times for feature sets `postgres`, `sqlite`, and `wireframe-only` with the `test-support` feature enabled and warnings denied (e.g., `cargo clippy --features "test-support <feature>" -D warnings` for each feature set); and run the test suite with `cargo nextest` for the same three feature sets (e.g., `cargo nextest --features "test-support <feature>"`), ensuring these exact commands replace or augment the `make fmt`, `make lint`, and `make test` steps so the ExecPlan reliably enforces workspace formatting, Clippy (denied warnings), and nextest coverage for `postgres`, `sqlite`, and `wireframe-only`.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/execplans/4-1-1-align-news-schema-and-migrations.md`:
- Around line 168-170: The note stating that src/schema.rs "currently exposes
only `users`, `news_bundles`, `news_categories`, `news_articles`, `files`, and
`file_acl`" is stale because `permissions` and `user_permissions` are now
defined; update the entry in
docs/execplans/4-1-1-align-news-schema-and-migrations.md to reflect that
discovery by either rewriting the sentence as a past/historical note (e.g.,
"Previously, src/schema.rs lacked `permissions` and `user_permissions`") or mark
the item as resolved, and reference the actual symbols `permissions` and
`user_permissions` and the file `src/schema.rs` so the documentation matches the
repository state.
In `@src/db/schema_alignment_tests.rs`:
- Around line 222-299: Add explicit checks of the news_bundles table so bundle
regressions (guid/created_at order, UNIQUE(name, parent_bundle_id), and bundle
parent/name index) are caught: create a new helper function
assert_sqlite_bundles_schema that queries pragma_table_info('news_bundles') and
pragma_index_list('news_bundles') and asserts the column order includes guid and
created_at in the expected positions and that indices include the parent/name
index and the UNIQUE(name,parent_bundle_id) (sqlite_autoindex or explicit unique
index), then call that helper from assert_sqlite_news_schema or
assert_sqlite_aligned_schema; likewise add assert_postgres_bundles_schema (or
extend the existing Postgres alignment test) to run SQL that validates column
order and existence of the UNIQUE(name,parent_bundle_id) constraint and the
bundle parent/name index, and invoke it from the Postgres-aligned test (the
function that runs Postgres schema assertions, e.g., the Postgres counterpart to
assert_sqlite_aligned_schema).
In `@src/models.rs`:
- Around line 39-48: NewCategory and NewBundle write-side structs currently only
include name and parent_id so new rows omit guid, add_sn, delete_sn and
created_at; update the write models (NewCategory, NewBundle) to include the new
fields (guid: Option<String>, add_sn: Option<i32>, delete_sn: Option<i32>,
created_at: Option<NaiveDateTime>) and ensure the code paths that construct
NewCategory/NewBundle (the insert builders) set sensible values
(generate/passthrough guid when available, default add_sn/delete_sn to None or
provided values, set created_at to now() or DB timestamp) so inserts persist the
same metadata the read structs expect. Reference NewCategory and NewBundle to
find where to add fields and populate them in the create/insert functions.
---
Duplicate comments:
In `@docs/execplans/4-1-1-align-news-schema-and-migrations.md`:
- Around line 491-551: Update the verification steps to include the explicit
Rust gate matrix instead of only generic `make` targets: add `cargo fmt
--workspace -- --check` to validate formatting; run Clippy three times for
feature sets `postgres`, `sqlite`, and `wireframe-only` with the `test-support`
feature enabled and warnings denied (e.g., `cargo clippy --features
"test-support <feature>" -D warnings` for each feature set); and run the test
suite with `cargo nextest` for the same three feature sets (e.g., `cargo nextest
--features "test-support <feature>"`), ensuring these exact commands replace or
augment the `make fmt`, `make lint`, and `make test` steps so the ExecPlan
reliably enforces workspace formatting, Clippy (denied warnings), and nextest
coverage for `postgres`, `sqlite`, and `wireframe-only`.
In `@src/db/schema_alignment_tests.rs`:
- Around line 462-489: The tests currently treat POSTGRES_TEST_URL as a signal
to skip instead of using it; update with_postgres_test_db (and the duplicate at
lines ~523-529) to first check std::env::var_os("POSTGRES_TEST_URL") and, if
present, call the provided closure test with that URL (convert to String) and
return its result, otherwise fall back to starting/using the embedded DB via
start_embedded_postgres_db()/embedded_postgres_db() as before; keep existing
behavior for unavailable embedded DB (return Ok(())) and preserve error
propagation paths.
- Around line 157-529: This file is too large—split SQLite and PostgreSQL test
halves into their own modules/files: create
src/db/sqlite_schema_alignment_tests.rs and
src/db/postgres_schema_alignment_tests.rs, move all sqlite-prefixed items (e.g.,
setup_sqlite_legacy_schema, sqlite_names, assert_sqlite_permission_schema,
assert_sqlite_news_schema, assert_sqlite_aligned_schema,
sqlite_fresh_migration_creates_aligned_schema,
sqlite_upgrade_backfills_legacy_news_rows) into the SQLite file and all
postgres-prefixed items (e.g., setup_postgres_legacy_schema, postgres_names,
assert_postgres_permission_schema, assert_postgres_bundle_schema,
assert_postgres_category_schema, assert_postgres_article_indices,
assert_postgres_news_schema, assert_postgres_aligned_schema,
should_skip_embedded_postgres, start_embedded_postgres_db,
with_postgres_test_db, postgres_fresh_migration_creates_aligned_schema,
postgres_upgrade_backfills_legacy_news_rows, embedded_postgres_db) into the
Postgres file, leaving only shared helpers (like setup_legacy_schema,
apply_migrations, assert_root_category_names_are_unique, CountRow/NameRow types,
DbConnection, run_statements) in this module; preserve all #[cfg(feature =
"...")] annotations on moved items, update module imports/uses and test
visibility (pub(crate) or pub as needed), and add mod declarations in the parent
module so tests compile and maintain existing symbol paths used across the
codebase.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro Plus
Run ID: e215525d-63fe-43f2-b7da-94dce7f5a895
📒 Files selected for processing (9)
- Cargo.toml
- docs/design.md
- docs/execplans/4-1-1-align-news-schema-and-migrations.md
- migrations/postgres/00000000000006_align_news_schema/down.sql
- migrations/postgres/00000000000006_align_news_schema/up.sql
- migrations/sqlite/00000000000006_align_news_schema/down.sql
- migrations/sqlite/00000000000006_align_news_schema/up.sql
- src/db/schema_alignment_tests.rs
- src/models.rs
- `src/schema.rs` currently exposes only `users`, `news_bundles`,
  `news_categories`, `news_articles`, `files`, and `file_acl`; there are no
  Diesel definitions yet for `permissions` or `user_permissions`.
Update the schema-status note to match the code.
Rewrite this entry as a historical discovery or mark it resolved. src/schema.rs
now defines both permissions and user_permissions, so the present-tense note
here contradicts the current repository state.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/execplans/4-1-1-align-news-schema-and-migrations.md` around lines 168 -
170, The note stating that src/schema.rs "currently exposes only `users`,
`news_bundles`, `news_categories`, `news_articles`, `files`, and `file_acl`" is
stale because `permissions` and `user_permissions` are now defined; update the
entry in docs/execplans/4-1-1-align-news-schema-and-migrations.md to reflect
that discovery by either rewriting the sentence as a past/historical note (e.g.,
"Previously, src/schema.rs lacked `permissions` and `user_permissions`") or mark
the item as resolved, and reference the actual symbols `permissions` and
`user_permissions` and the file `src/schema.rs` so the documentation matches the
repository state.
async fn assert_sqlite_news_schema(conn: &mut DbConnection) -> TestResult<()> {
    let article_indices = sqlite_names(
        conn,
        "SELECT name FROM pragma_index_list('news_articles') ORDER BY name",
    )
    .await?;
    for expected in [
        "idx_articles_category",
        "idx_articles_first_child_article",
        "idx_articles_next_article",
        "idx_articles_parent_article",
        "idx_articles_prev_article",
    ] {
        assert!(article_indices.iter().any(|name| name == expected));
    }

    let category_indices = sqlite_names(
        conn,
        "SELECT name FROM pragma_index_list('news_categories') ORDER BY name",
    )
    .await?;
    for expected in [
        "idx_categories_bundle",
        "idx_categories_root_name_unique",
        "sqlite_autoindex_news_categories_1",
    ] {
        assert!(category_indices.iter().any(|name| name == expected));
    }

    let category_columns = sqlite_names(
        conn,
        "SELECT name FROM pragma_table_info('news_categories') ORDER BY cid",
    )
    .await?;
    assert_eq!(
        category_columns,
        vec![
            "id",
            "bundle_id",
            "name",
            "guid",
            "add_sn",
            "delete_sn",
            "created_at"
        ]
    );
    Ok(())
}

#[cfg(feature = "postgres")]
async fn assert_permission_round_trip(conn: &mut DbConnection) -> TestResult<()> {
    run_statements(
        conn,
        &[
            "INSERT INTO users (id, username, password) VALUES (42, 'schema-user', 'hash')",
            "INSERT INTO permissions (id, code, name, scope) VALUES (42, 34, 'News Create \
             Category', 'bundle')",
            "INSERT INTO user_permissions (user_id, permission_id) VALUES (42, 42)",
        ],
    )
    .await?;

    let permissions = sql_query(
        "SELECT COUNT(*) AS count FROM permissions p INNER JOIN user_permissions up ON \
         up.permission_id = p.id WHERE p.code = 34 AND p.scope = 'bundle' AND up.user_id = 42",
    )
    .get_result::<CountRow>(conn)
    .await?;
    assert_eq!(permissions.count, 1);
    Ok(())
}

#[cfg(feature = "sqlite")]
async fn assert_sqlite_aligned_schema(conn: &mut DbConnection) -> TestResult<()> {
    assert_sqlite_permission_schema(conn).await?;
    assert_sqlite_news_schema(conn).await?;
    assert_root_category_names_are_unique(conn).await
}
Assert the bundle contract as well.
Add explicit news_bundles schema checks in both backends. SQLite never
introspects the bundle table, and PostgreSQL only verifies column order, so
regressions in guid/created_at, UNIQUE(name, parent_bundle_id), or the
bundle parent/name index can slip through while these tests stay green.
Also applies to: 375-387, 447-458
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/db/schema_alignment_tests.rs` around lines 222 - 299, Add explicit checks
of the news_bundles table so bundle regressions (guid/created_at order,
UNIQUE(name, parent_bundle_id), and bundle parent/name index) are caught: create
a new helper function assert_sqlite_bundles_schema that queries
pragma_table_info('news_bundles') and pragma_index_list('news_bundles') and
asserts the column order includes guid and created_at in the expected positions
and that indices include the parent/name index and the
UNIQUE(name,parent_bundle_id) (sqlite_autoindex or explicit unique index), then
call that helper from assert_sqlite_news_schema or assert_sqlite_aligned_schema;
likewise add assert_postgres_bundles_schema (or extend the existing Postgres
alignment test) to run SQL that validates column order and existence of the
UNIQUE(name,parent_bundle_id) constraint and the bundle parent/name index, and
invoke it from the Postgres-aligned test (the function that runs Postgres schema
assertions, e.g., the Postgres counterpart to assert_sqlite_aligned_schema).
    /// Category name.
    pub name: String,
    /// Category GUID.
    pub guid: Option<String>,
    /// Add serial number metadata.
    pub add_sn: Option<i32>,
    /// Delete serial number metadata.
    pub delete_sn: Option<i32>,
    /// Creation timestamp.
    pub created_at: Option<NaiveDateTime>,
Populate the new bundle/category metadata on write.
Update the write-side models at the same time as these read-side structs. NewCategory and NewBundle still only carry names and parent IDs, so any row created after this migration will persist NULL for guid, and categories will also persist NULL for add_sn and delete_sn. That leaves newly created rows out of alignment with the metadata this PR backfills for legacy data.
Patch the write path as well
pub struct NewCategory<'a> {
/// Category name.
pub name: &'a str,
/// Parent bundle identifier, if any.
pub bundle_id: Option<i32>,
+ /// Category GUID.
+ pub guid: &'a str,
+ /// Add serial number metadata.
+ pub add_sn: i32,
+ /// Delete serial number metadata.
+ pub delete_sn: i32,
}
@@
pub struct NewBundle<'a> {
/// Parent bundle identifier for nested bundles.
pub parent_bundle_id: Option<i32>,
/// Bundle name.
pub name: &'a str,
+ /// Bundle GUID.
+ pub guid: &'a str,
+}

Also applies to: 70-73
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/models.rs` around lines 39 - 48, NewCategory and NewBundle write-side
structs currently only include name and parent_id so new rows omit guid, add_sn,
delete_sn and created_at; update the write models (NewCategory, NewBundle) to
include the new fields (guid: Option<String>, add_sn: Option<i32>, delete_sn:
Option<i32>, created_at: Option<NaiveDateTime>) and ensure the code paths that
construct NewCategory/NewBundle (the insert builders) set sensible values
(generate/passthrough guid when available, default add_sn/delete_sn to None or
provided values, set created_at to now() or DB timestamp) so inserts persist the
same metadata the read structs expect. Reference NewCategory and NewBundle to
find where to add fields and populate them in the create/insert functions.
Move schema-alignment tests into backend-specific child modules so the shared migration helpers stay small and reusable. Move SQLite permission model tests into their own module with a shared fixture and parameterized cascade coverage. Update the users guide path sample fence to use the repository's plaintext convention for non-code examples.
Gates Passed
6 Quality Gates Passed
See analysis details in CodeScene
Quality Gate Profile: Pay Down Tech Debt
Actionable comments posted: 1
♻️ Duplicate comments (3)
docs/execplans/4-1-1-align-news-schema-and-migrations.md (1)
168-170: ⚠️ Potential issue | 🟡 Minor

Rewrite this discovery note as history or mark it resolved.

`src/schema.rs` in this PR now defines `permissions` and `user_permissions`, so this present-tense note is false. Record it as a past discovery or resolved item to keep the ExecPlan aligned with the repository state.

As per coding guidelines, "Use the markdown files within the `docs/` directory as a knowledge base and source of truth for project requirements, dependency choices, and architectural decisions."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/execplans/4-1-1-align-news-schema-and-migrations.md` around lines 168 - 170, The note in the execplan is written in present tense and claims `src/schema.rs` lacks Diesel definitions for `permissions` and `user_permissions`; update this discovery to reflect that these tables are now defined by changing the sentence to past tense or mark the entry as resolved. Locate the paragraph referencing `src/schema.rs` and the identifiers `permissions` and `user_permissions` and either rewrite it as a historical note (e.g., "Previously, `src/schema.rs` did not define `permissions`/`user_permissions`") or add a "Resolved" status explaining that the Diesel definitions were added in this PR.

src/db/schema_alignment_tests.rs (2)
237-266: ⚠️ Potential issue | 🟠 Major

Assert the `news_bundles` contract on both backends.

SQLite still never introspects `news_bundles`, and PostgreSQL still checks only column order. Assert `UNIQUE(name, parent_bundle_id)`, `idx_bundles_parent`, and `idx_bundles_name_parent`, or bundle regressions will keep slipping through these migration tests.

As per coding guidelines, "Rust: Write unit and behavioural tests for new functionality. Run both before and after making any change."

Also applies to: 410-421
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/db/schema_alignment_tests.rs` around lines 237 - 266, Add assertions to the schema tests to validate the news_bundles contract: in assert_sqlite_news_schema (and the analogous PostgreSQL test) call sqlite_names (or pg_names) to fetch indexes and columns for news_bundles and assert the presence of UNIQUE(name, parent_bundle_id) and indexes named idx_bundles_parent and idx_bundles_name_parent, and also ensure column ordering matches expectations; update the test blocks that currently only check news_categories (and the duplicated block around lines 410-421) to include these explicit checks so both backends verify the bundles constraints and indexes.
172-564: 🛠️ Refactor suggestion | 🟠 Major

Split the backend-specific halves into submodules.

Keep the shared helpers here and move the SQLite/PostgreSQL-specific setup, assertions, and tests into sibling modules. At 564 lines, this file is already past the repository limit and is harder to maintain than the backend split warrants.

As per coding guidelines, "**/*.rs: Files must not exceed 400 lines in length"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/db/schema_alignment_tests.rs` around lines 172 - 564, This file exceeds the 400-line limit because SQLite- and Postgres-specific setup/assertions/tests are mixed in; extract backend-specific halves into sibling modules. Create src/db/schema_alignment_sqlite.rs and src/db/schema_alignment_postgres.rs and move functions like setup_sqlite_legacy_schema, sqlite_names, assert_sqlite_permission_schema, assert_sqlite_news_schema, assert_sqlite_aligned_schema, sqlite_fresh_migration_creates_aligned_schema, sqlite_upgrade_backfills_legacy_news_rows into the sqlite module, and move setup_postgres_legacy_schema, postgres_names, assert_postgres_* helpers, postgres_test_url_from_env, start_embedded_postgres_db, with_postgres_test_db, postgres_fresh_migration_creates_aligned_schema, postgres_upgrade_backfills_legacy_news_rows, embedded_postgres_db into the postgres module; keep shared helpers (e.g. setup_legacy_schema, apply_migrations, assert_root_category_names_are_unique, assert_permission_round_trip_with_ids) in the original file, add mod declarations (pub(crate) mod schema_alignment_sqlite; pub(crate) mod schema_alignment_postgres;) and appropriate #[cfg(feature = "...")] on the new files, update use/import paths and test annotations so tests still run under their cfgs, and run cargo test to verify no missing symbols.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/execplans/4-1-1-align-news-schema-and-migrations.md`:
- Around line 243-247: The note incorrectly states both DBs use a "partial
unique index on root categories in addition to UNIQUE(name, bundle_id)"; instead
clarify that PostgreSQL enforces top-level (root) name uniqueness with partial
unique indexes on root rows, while SQLite enforces the same invariant by
creating a unique expression index idx_news_categories_unique on (name,
IFNULL(bundle_id, -1)) rather than a partial index; update the text to mention
both patterns explicitly and reference UNIQUE(name, bundle_id), the PostgreSQL
partial unique index approach, and the SQLite expression index using
IFNULL(bundle_id, -1) so future migrations pick the correct implementation for
each engine.
---
Duplicate comments:
In `@docs/execplans/4-1-1-align-news-schema-and-migrations.md`:
- Around line 168-170: The note in the execplan is written in present tense and
claims `src/schema.rs` lacks Diesel definitions for `permissions` and
`user_permissions`; update this discovery to reflect that these tables are now
defined by changing the sentence to past tense or mark the entry as resolved.
Locate the paragraph referencing `src/schema.rs` and the identifiers
`permissions` and `user_permissions` and either rewrite it as a historical note
(e.g., "Previously, `src/schema.rs` did not define
`permissions`/`user_permissions`") or add a "Resolved" status explaining that
the Diesel definitions were added in this PR.
In `@src/db/schema_alignment_tests.rs`:
- Around line 237-266: Add assertions to the schema tests to validate the
news_bundles contract: in assert_sqlite_news_schema (and the analogous
PostgreSQL test) call sqlite_names (or pg_names) to fetch indexes and columns
for news_bundles and assert the presence of UNIQUE(name, parent_bundle_id) and
indexes named idx_bundles_parent and idx_bundles_name_parent, and also ensure
column ordering matches expectations; update the test blocks that currently only
check news_categories (and the duplicated block around lines 410-421) to include
these explicit checks so both backends verify the bundles constraints and
indexes.
- Around line 172-564: This file exceeds the 400-line limit because SQLite- and
Postgres-specific setup/assertions/tests are mixed in; extract backend-specific
halves into sibling modules. Create src/db/schema_alignment_sqlite.rs and
src/db/schema_alignment_postgres.rs and move functions like
setup_sqlite_legacy_schema, sqlite_names, assert_sqlite_permission_schema,
assert_sqlite_news_schema, assert_sqlite_aligned_schema,
sqlite_fresh_migration_creates_aligned_schema,
sqlite_upgrade_backfills_legacy_news_rows into the sqlite module, and move
setup_postgres_legacy_schema, postgres_names, assert_postgres_* helpers,
postgres_test_url_from_env, start_embedded_postgres_db, with_postgres_test_db,
postgres_fresh_migration_creates_aligned_schema,
postgres_upgrade_backfills_legacy_news_rows, embedded_postgres_db into the
postgres module; keep shared helpers (e.g. setup_legacy_schema,
apply_migrations, assert_root_category_names_are_unique,
assert_permission_round_trip_with_ids) in the original file, add mod
declarations (pub(crate) mod schema_alignment_sqlite; pub(crate) mod
schema_alignment_postgres;) and appropriate #[cfg(feature = "...")] on the new
files, update use/import paths and test annotations so tests still run under
their cfgs, and run cargo test to verify no missing symbols.
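The first prompt above pins down a concrete contract for `news_bundles`. As a sketch of that contract in SQLite DDL (the index names and column order come from the review comments; the actual migration `up.sql` remains authoritative):

```sql
-- Sketch of the news_bundles contract the schema tests should assert.
-- Verify names and column order against the real migration before relying on this.
CREATE TABLE news_bundles (
    id INTEGER PRIMARY KEY,
    parent_bundle_id INTEGER REFERENCES news_bundles(id),
    name TEXT NOT NULL,
    guid TEXT NOT NULL,
    created_at TIMESTAMP NOT NULL,
    UNIQUE (name, parent_bundle_id)
);

CREATE INDEX idx_bundles_parent ON news_bundles (parent_bundle_id);
CREATE INDEX idx_bundles_name_parent ON news_bundles (name, parent_bundle_id);
```

A schema test that fetches the index list and column list for `news_bundles` and compares them against this shape would catch a dropped constraint or renamed index on either backend.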
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro Plus
Run ID: 1d62547e-6ae5-48e3-ba0a-6736e5adf961
📒 Files selected for processing (6)
docs/execplans/4-1-1-align-news-schema-and-migrations.md
migrations/postgres/00000000000006_align_news_schema/down.sql
migrations/postgres/00000000000006_align_news_schema/up.sql
migrations/sqlite/00000000000006_align_news_schema/down.sql
migrations/sqlite/00000000000006_align_news_schema/up.sql
src/db/schema_alignment_tests.rs
> - Decision: enforce top-level category-name uniqueness with a partial unique
>   index on root categories in addition to `UNIQUE(name, bundle_id)`. Rationale:
>   both supported databases treat `NULL` values as distinct in composite unique
>   constraints, so root category lookup needs an explicit invariant to remain
>   deterministic. Date/Author: 2026-04-29 / Codex.
Describe the implemented root-uniqueness strategy precisely.
This entry says the solution is a partial unique index on root categories plus
`UNIQUE(name, bundle_id)`, but the SQLite migration actually uses the unique
expression index `idx_news_categories_unique` on `(name, IFNULL(bundle_id, -1))`, while PostgreSQL uses partial unique indexes. Rewrite this note so future
migration work does not copy the wrong SQLite pattern. As per coding guidelines, "Use the markdown files within the docs/ directory as a knowledge base and source of truth for project requirements, dependency choices, and architectural decisions."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/execplans/4-1-1-align-news-schema-and-migrations.md` around lines 243 -
247, The note incorrectly states both DBs use a "partial unique index on root
categories in addition to UNIQUE(name, bundle_id)"; instead clarify that
PostgreSQL enforces top-level (root) name uniqueness with partial unique indexes
on root rows, while SQLite enforces the same invariant by creating a unique
expression index idx_news_categories_unique on (name, IFNULL(bundle_id, -1))
rather than a partial index; update the text to mention both patterns explicitly
and reference UNIQUE(name, bundle_id), the PostgreSQL partial unique index
approach, and the SQLite expression index using IFNULL(bundle_id, -1) so future
migrations pick the correct implementation for each engine.
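Concretely, the two engine-specific patterns the note should distinguish look roughly like this (the SQLite index name is quoted from the review; the PostgreSQL index name here is illustrative, not the literal migration text):

```sql
-- SQLite: fold NULL parents to a sentinel with a unique expression index,
-- because NULLs compare as distinct inside a plain composite UNIQUE constraint.
CREATE UNIQUE INDEX idx_news_categories_unique
    ON news_categories (name, IFNULL(bundle_id, -1));

-- PostgreSQL: a partial unique index covers only root rows,
-- alongside UNIQUE(name, bundle_id) for nested categories.
CREATE UNIQUE INDEX news_categories_root_name_unique
    ON news_categories (name)
    WHERE bundle_id IS NULL;
```

Both indexes enforce the same invariant (at most one root category per name), but the mechanism differs per engine, which is exactly what the decision note should record.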
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/db/schema_alignment_tests/postgres_tests.rs`:
- Around line 157-194: The tests currently reuse the external POSTGRES_TEST_URL
(postgres_test_url_from_env) causing cross-test collisions; modify
with_postgres_test_db so that when postgres_test_url_from_env() returns
Some(url) it creates a unique per-run database or schema (e.g., by connecting to
the provided URL, creating a DB named with a UUID/timestamp or creating a unique
schema and setting search_path), run migrations against that new DB/schema, pass
the new connection string into the provided test(url) callback, and ensure
cleanup (drop DB or schema) after the runtime.block_on call; also update any
fixtures that insert fixed IDs to use generated/random IDs to avoid collisions
across runs.
In `@src/db/schema_alignment_tests/sqlite_tests.rs`:
- Around line 83-112: Extend the existing assert_sqlite_news_schema test to also
query the news_bundles table and assert its column names include the new bundle
fields: call sqlite_names(conn, "SELECT name FROM
pragma_table_info('news_bundles') ORDER BY cid").await? and then assert the
returned Vec equals (or contains) the expected column list including "guid" and
"created_at" (matching how category_columns is asserted); update the function
assert_sqlite_news_schema to perform this additional check after the category
assertions so regressions that drop/rename bundle columns are caught.
In `@src/db/tests/mod.rs`:
- Around line 16-17: Add a Postgres-backed test module alongside the existing
SQLite one: keep the current #[cfg(feature = "sqlite")] mod permission_tests;
and add a new #[cfg(feature = "postgres")] mod permission_tests_postgres; inside
that new module implement a test that mirrors test_user_permission_cascades
(same insert of user and permissions, run migrations for Postgres, delete the
user, then assert permission rows are removed) using the embedded Postgres setup
used elsewhere in tests (start embedded Postgres, run migrations, obtain a
connection/transaction, perform the insert/delete/assert). The goal is to
exercise ON DELETE CASCADE for the Postgres migration so failures in the
migration will be caught by the test. Ensure the test name mirrors
test_user_permission_cascades and that migrations are applied before assertions.
In `@src/db/tests/permission_tests.rs`:
- Around line 29-86: Change the fallible shared helpers and fixtures to return
Results instead of panicking: update permission_fixture to return
Result<PermissionFixture, Box<dyn std::error::Error>> (or your crate's error
type) and remove .expect() calls by propagating errors with ?; change
seed_user_permission to return Result<(i32,i32), _> and replace its .expect()
and .first(...).await.expect() calls with ? propagation; do the same for
assert_permission_remains and assert_user_remains so they return Result<_,_> and
use ? internally; finally, update tests that consume these rstest fixtures to
return Result and call the fixture with ? to propagate errors rather than
letting helpers panic. Ensure references to permission_fixture and
seed_user_permission are updated to the new Result signatures.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro Plus
Run ID: d6b91873-cce0-4b43-9805-3221fad87976
📒 Files selected for processing (6)
docs/users-guide.md
src/db/schema_alignment_tests/mod.rs
src/db/schema_alignment_tests/postgres_tests.rs
src/db/schema_alignment_tests/sqlite_tests.rs
src/db/tests/mod.rs
src/db/tests/permission_tests.rs
```rust
fn postgres_test_url_from_env() -> Option<String> {
    std::env::var("POSTGRES_TEST_URL")
        .ok()
        .filter(|url| !url.trim().is_empty())
}

fn start_embedded_postgres_db() -> TestResult<Option<PostgresTestDb>> {
    match PostgresTestDb::new() {
        Ok(db) => Ok(Some(db)),
        Err(PostgresTestDbError::Unavailable(_)) => {
            tracing::warn!("SKIP-TEST-CLUSTER: PostgreSQL unavailable");
            Ok(None)
        }
        Err(error) => Err(error.into()),
    }
}

fn with_postgres_test_db<F, Fut>(test: F) -> TestResult<()>
where
    F: FnOnce(String) -> Fut + Send + 'static,
    Fut: Future<Output = TestResult<()>> + Send + 'static,
{
    if let Some(url) = postgres_test_url_from_env() {
        let runtime = tokio::runtime::Builder::new_current_thread()
            .enable_all()
            .build()?;
        return runtime.block_on(async move { test(url).await });
    }

    let Some(db) = embedded_postgres_db()? else {
        return Ok(());
    };

    let runtime = tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()?;
    runtime.block_on(async move { test(db.url.to_string()).await })
}
```
Isolate the external Postgres database per test run.
Create a dedicated temporary database or schema when POSTGRES_TEST_URL is set. Reusing the same URL makes these tests order-dependent and racy: one run applies all migrations, another lays down the legacy schema, and both also insert fixed IDs (42, 84, 9001, 9002). Parallel or repeated runs will collide.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/db/schema_alignment_tests/postgres_tests.rs` around lines 157 - 194, The
tests currently reuse the external POSTGRES_TEST_URL
(postgres_test_url_from_env) causing cross-test collisions; modify
with_postgres_test_db so that when postgres_test_url_from_env() returns
Some(url) it creates a unique per-run database or schema (e.g., by connecting to
the provided URL, creating a DB named with a UUID/timestamp or creating a unique
schema and setting search_path), run migrations against that new DB/schema, pass
the new connection string into the provided test(url) callback, and ensure
cleanup (drop DB or schema) after the runtime.block_on call; also update any
fixtures that insert fixed IDs to use generated/random IDs to avoid collisions
across runs.
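One possible shape for that per-run isolation, sketched in SQL (the identifier would be generated from a UUID or timestamp inside `with_postgres_test_db`; the names here are illustrative only):

```sql
-- Executed against the connection from POSTGRES_TEST_URL before migrations run.
-- Option A: a dedicated database per run (name must be unique per invocation).
CREATE DATABASE test_run_1d62547e;

-- Option B: schema-level isolation on the shared database.
CREATE SCHEMA test_run_1d62547e;
SET search_path TO test_run_1d62547e;

-- ...apply migrations and run the test against the isolated namespace...

-- Cleanup after runtime.block_on returns:
DROP SCHEMA test_run_1d62547e CASCADE;
```

Either option removes the order dependence: each run sees an empty namespace, so fixed fixture IDs no longer collide across parallel or repeated executions.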
```rust
async fn assert_sqlite_news_schema(conn: &mut DbConnection) -> TestResult<()> {
    assert_sqlite_article_indices(conn).await?;

    let category_indices = sqlite_names(
        conn,
        "SELECT name FROM pragma_index_list('news_categories') ORDER BY name",
    )
    .await?;
    for expected in ["idx_categories_bundle", "idx_news_categories_unique"] {
        assert!(category_indices.iter().any(|name| name == expected));
    }

    let category_columns = sqlite_names(
        conn,
        "SELECT name FROM pragma_table_info('news_categories') ORDER BY cid",
    )
    .await?;
    assert_eq!(
        category_columns,
        vec![
            "id",
            "bundle_id",
            "name",
            "guid",
            "add_sn",
            "delete_sn",
            "created_at"
        ]
    );
    Ok(())
}
```
Assert the SQLite bundle columns as well.
Extend assert_sqlite_news_schema to inspect news_bundles for guid and created_at. The fresh-migration path currently validates categories and article indices only, so a regression that drops or renames the new bundle columns would still pass on SQLite.
Patch sketch

```diff
+async fn assert_sqlite_bundle_schema(conn: &mut DbConnection) -> TestResult<()> {
+    let bundle_columns = sqlite_names(
+        conn,
+        "SELECT name FROM pragma_table_info('news_bundles') ORDER BY cid",
+    )
+    .await?;
+    assert_eq!(
+        bundle_columns,
+        vec!["id", "parent_bundle_id", "name", "guid", "created_at"]
+    );
+    Ok(())
+}
+
 async fn assert_sqlite_news_schema(conn: &mut DbConnection) -> TestResult<()> {
+    assert_sqlite_bundle_schema(conn).await?;
     assert_sqlite_article_indices(conn).await?;
```

As per coding guidelines, "Write unit and behavioural tests for new functionality".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/db/schema_alignment_tests/sqlite_tests.rs` around lines 83 - 112, Extend
the existing assert_sqlite_news_schema test to also query the news_bundles table
and assert its column names include the new bundle fields: call
sqlite_names(conn, "SELECT name FROM pragma_table_info('news_bundles') ORDER BY
cid").await? and then assert the returned Vec equals (or contains) the expected
column list including "guid" and "created_at" (matching how category_columns is
asserted); update the function assert_sqlite_news_schema to perform this
additional check after the category assertions so regressions that drop/rename
bundle columns are caught.
```rust
#[cfg(feature = "sqlite")]
mod permission_tests;
```
Add equivalent Postgres cascade coverage for the new permission tables.
Keep the SQLite-only module if you need PRAGMA foreign_keys, but mirror test_user_permission_cascades in the Postgres path as well. Right now a broken ON DELETE CASCADE in the Postgres migration would still pass because that backend only exercises the insert/query path.
Based on learnings, "Test both SQLite (in-memory and file-based) and PostgreSQL (embedded via postgresql-embedded crate) backends in CI with separate feature flag builds".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/db/tests/mod.rs` around lines 16 - 17, Add a Postgres-backed test module
alongside the existing SQLite one: keep the current #[cfg(feature = "sqlite")]
mod permission_tests; and add a new #[cfg(feature = "postgres")] mod
permission_tests_postgres; inside that new module implement a test that mirrors
test_user_permission_cascades (same insert of user and permissions, run
migrations for Postgres, delete the user, then assert permission rows are
removed) using the embedded Postgres setup used elsewhere in tests (start
embedded Postgres, run migrations, obtain a connection/transaction, perform the
insert/delete/assert). The goal is to exercise ON DELETE CASCADE for the
Postgres migration so failures in the migration will be caught by the test.
Ensure the test name mirrors test_user_permission_cascades and that migrations
are applied before assertions.
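The cascade under test is the `user_permissions` foreign key. A sketch of the DDL the Postgres migration is expected to contain (verify against the actual `up.sql`; the column and table names follow the review comments):

```sql
CREATE TABLE user_permissions (
    user_id INTEGER NOT NULL
        REFERENCES users(id) ON DELETE CASCADE,
    permission_id INTEGER NOT NULL
        REFERENCES permissions(id) ON DELETE CASCADE,
    PRIMARY KEY (user_id, permission_id)
);

-- The mirrored test then asserts that deleting a user removes its rows here:
-- DELETE FROM users WHERE id = ...;   -- user_permissions rows must vanish via cascade
```

Unlike SQLite, PostgreSQL enforces foreign keys without a `PRAGMA`, so the Postgres test only needs migrations applied before the delete-and-assert step.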
```rust
#[fixture]
async fn permission_fixture(#[future] migrated_conn: DbConnection) -> PermissionFixture {
    let mut conn = migrated_conn.await;
    diesel::sql_query("PRAGMA foreign_keys = ON")
        .execute(&mut conn)
        .await
        .expect("failed to enable sqlite foreign keys");

    let (user_id, permission_id) = seed_user_permission(&mut conn).await;
    let user_permission = NewUserPermission {
        user_id,
        permission_id,
    };
    diesel::insert_into(user_permissions::user_permissions)
        .values(&user_permission)
        .execute(&mut conn)
        .await
        .expect("failed to assign permission");

    PermissionFixture {
        conn,
        user_id,
        permission_id,
    }
}

async fn seed_user_permission(conn: &mut DbConnection) -> (i32, i32) {
    let user = NewUser {
        username: "dana",
        password: "hash",
    };
    create_user(conn, &user)
        .await
        .expect("failed to create permission test user");
    let stored_user = get_user_by_name(conn, "dana")
        .await
        .expect("permission test user lookup failed")
        .expect("permission test user missing");

    let permission = NewPermission {
        code: 34,
        name: "News Create Category",
        scope: "bundle",
    };
    diesel::insert_into(permissions::permissions)
        .values(&permission)
        .execute(conn)
        .await
        .expect("failed to insert permission");
    let permission_id = permissions::permissions
        .filter(permissions::code.eq(34))
        .select(permissions::id)
        .first::<i32>(conn)
        .await
        .expect("permission id");

    (stored_user.id, permission_id)
}
```
🛠️ Refactor suggestion | 🟠 Major
Stop panicking from shared fixtures and helpers.
Return Result from permission_fixture, seed_user_permission, assert_permission_remains, and assert_user_remains, then propagate with ? from the tests. Keeping expect(...) in shared setup hides the failing step and breaks the repo rule for fallible fixtures.
As per coding guidelines, "In production code and shared fixtures, avoid .expect() entirely: return Result and use ? to propagate errors instead of panicking" and "Consume fallible fixtures in rstest by making the test return Result and applying ? to the fixture".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/db/tests/permission_tests.rs` around lines 29 - 86, Change the fallible
shared helpers and fixtures to return Results instead of panicking: update
permission_fixture to return Result<PermissionFixture, Box<dyn
std::error::Error>> (or your crate's error type) and remove .expect() calls by
propagating errors with ?; change seed_user_permission to return
Result<(i32,i32), _> and replace its .expect() and .first(...).await.expect()
calls with ? propagation; do the same for assert_permission_remains and
assert_user_remains so they return Result<_,_> and use ? internally; finally,
update tests that consume these rstest fixtures to return Result and call the
fixture with ? to propagate errors rather than letting helpers panic. Ensure
references to permission_fixture and seed_user_permission are updated to the new
Result signatures.
Summary
Changes
Rationale
Plan overview
Plan of work (summary)
Exit criteria (verification gates)
Open questions to resolve during implementation
How to review
Related docs and follow-ups
◳ Generated by DevBoxer
ℹ️ Tag @devboxerhub to ask questions and address PR feedback
📎 Task: https://www.devboxer.com/task/5ea71213-c300-4c0d-b764-d5790ef0cbb4
Summary by Sourcery
Documentation:
📎 Task: https://www.devboxer.com/task/89a5a22e-8753-4169-a4df-ec9cdb3cf2db
Summary by Sourcery
Align the news persistence layer and documentation with the documented news schema while preserving existing behaviour and upgrade safety across SQLite and PostgreSQL.
New Features:
Enhancements:
Documentation:
Tests: