Add DML workflow SPA and local demo bootstrap #18
📝 Walkthrough

This PR adds workflow functionality and demo tooling: new frontend Vue views and routes, TypeScript API types/functions, backend workflow REST endpoints/serializers/tests, demo seeding scripts and management commands, local MySQL/Postgres demo services and init scripts, Docker/compose updates, and documentation changes.
Sequence Diagrams

```mermaid
sequenceDiagram
    actor User
    participant Frontend as Frontend UI<br/>(DML Workflow<br/>Create View)
    participant API as Backend API<br/>(Workflow Endpoints)
    participant DB as Database<br/>(SqlWorkflow,<br/>Audit)
    User->>Frontend: Open /workflows/dml/new
    Frontend->>API: GET /v1/workflow/submission-metadata/
    API->>DB: Query resource_groups, instance_grants
    DB-->>API: Metadata
    API-->>Frontend: Metadata (groups, instances, perms)
    User->>Frontend: Select group
    Frontend->>API: GET /v1/workflow/approval-preview/?group_id=X
    API->>DB: Query approval chain
    DB-->>API: Approval nodes
    API-->>Frontend: Approval flow preview
    User->>Frontend: Enter SQL
    Frontend->>API: POST /v1/workflow/check/
    API->>API: Parse & validate SQL
    API-->>Frontend: Check result
    User->>Frontend: Submit workflow
    Frontend->>API: POST /v1/workflow/
    API->>DB: Create SqlWorkflow, logs, audits
    DB-->>API: workflow_id
    API-->>Frontend: Created workflow info
```

```mermaid
sequenceDiagram
    actor Reviewer
    participant Frontend as Frontend UI<br/>(Workflows Console)
    participant API as Backend API<br/>(Workflow Endpoints)
    participant DB as Database<br/>(SqlWorkflow,<br/>Audit, Timing)
    participant Queue as Task Queue
    Reviewer->>Frontend: Open /workflows
    Frontend->>API: GET /v1/workflow/?scope=pending_review
    API->>DB: Query pending workflows
    DB-->>API: List
    API-->>Frontend: Summary list
    Reviewer->>Frontend: View workflow detail
    Frontend->>API: GET /v1/workflow/{id}/
    API->>DB: Query workflow, approval, logs
    DB-->>API: Detail
    API-->>Frontend: Detail + action flags
    Reviewer->>Frontend: Schedule execution
    Frontend->>API: POST /v1/workflow/{id}/schedule/
    API->>DB: Update status, add schedule
    API->>Queue: add_sql_schedule(run_date)
    Queue-->>API: Job created
    API-->>Frontend: Success
    Frontend->>Frontend: Poll every 5s for status
    Frontend->>API: GET /v1/workflow/?scope=mine
    API->>DB: Query updated workflows
    DB-->>API: Updated state (run_date/status)
    API-->>Frontend: Updated list
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~70 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
sql_api/urls.py (1)
Lines 76-80: ⚠️ Potential issue | 🟡 Minor

Duplicate URL route definitions for instance tags.

Lines 76-77 and 79-80 define identical routes for `v1/instance/tag/` and `v1/instance/tag/<int:pk>/`. While Django uses the first match, these duplicates should be removed to avoid confusion and potential maintenance issues.

🔧 Proposed fix

```diff
     path("v1/instance/tag/", api_instance.InstanceTagList.as_view()),
     path("v1/instance/tag/<int:pk>/", api_instance.InstanceTagDetail.as_view()),
     path("v1/instance/metadata/", api_instance.InstanceMetadata.as_view()),
-    path("v1/instance/tag/", api_instance.InstanceTagList.as_view()),
-    path("v1/instance/tag/<int:pk>/", api_instance.InstanceTagDetail.as_view()),
     path(
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sql_api/urls.py` around lines 76 - 80, The URL patterns contain duplicate route entries for the instance tag endpoints: remove the repeated path lines that register api_instance.InstanceTagList.as_view() and api_instance.InstanceTagDetail.as_view() so each route (v1/instance/tag/ and v1/instance/tag/<int:pk>/) is only declared once; locate the duplicated uses of api_instance.InstanceTagList and api_instance.InstanceTagDetail in the urlpatterns and delete the redundant entries, leaving a single registration for each view.
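The "first match wins" behavior the comment relies on can be illustrated with a small stand-alone sketch. This is pure Python, not Django's actual resolver; the pattern list simply mirrors the duplicated routes above:

```python
import re

# Hypothetical stand-in for Django's first-match URL resolution: urlpatterns
# are scanned in order and the first pattern that matches wins, so a
# duplicate registered later is silently unreachable.
urlpatterns = [
    (r"^v1/instance/tag/$", "InstanceTagList"),
    (r"^v1/instance/tag/(?P<pk>\d+)/$", "InstanceTagDetail"),
    (r"^v1/instance/tag/$", "InstanceTagList (duplicate, never reached)"),
]

def resolve(path):
    for pattern, view in urlpatterns:
        if re.match(pattern, path):
            return view
    raise LookupError(path)

print(resolve("v1/instance/tag/"))    # first registration wins
print(resolve("v1/instance/tag/7/"))  # detail route still reachable
```

The duplicate entry never resolves, which is why removing it changes nothing functionally and only reduces confusion.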
🧹 Nitpick comments (5)
README.md (1)
Lines 48-64: Add an explicit "local-only / never production" warning near demo credentials.

This section is useful, but a short caution line will reduce accidental reuse of seeded credentials outside local demo environments.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 48 - 64, Add a clear, prominent one-line warning under the "Local Demo Users" heading (near the seeded credentials list) that reads something like: "WARNING: These demo credentials are for local development and testing only — never use in production." Ensure the line is visually distinct (e.g., bold or prefixed with WARNING) and placed immediately above or below the seeded app users list so readers cannot miss it; update the README.md "Local Demo Users" section accordingly.

frontend/src/views/WorkflowsView.vue (2)
Lines 1049-1054: Shared `reviewForm.audit_remark` between review and terminate sections.

Both the "Review action" section (lines 1049-1054) and the "Terminate workflow" section (lines 1137-1142) bind to the same `reviewForm.audit_remark` model. This means text entered in one textarea will appear in the other, which could confuse users or cause unintended submissions. Consider using separate form fields for review remarks and termination reasons.

Also applies to: 1137-1142
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/views/WorkflowsView.vue` around lines 1049 - 1054, The two textareas both bind to reviewForm.audit_remark which causes shared state between "Review action" and "Terminate workflow"; create distinct fields on the form object (e.g., reviewForm.review_remark for the review textarea and reviewForm.terminate_reason for the terminate textarea), update the v-model bindings on the corresponding <textarea> elements, and update any submission handlers or validators that read audit_remark (e.g., review submission and terminate submission functions) to use the new field names while keeping existing props like :disabled="reviewSubmitting" and :class="textareaClass".
Lines 580-594: Polling watcher should also trigger on selection changes.

The watcher on `shouldPollSelectedWorkflow` starts/stops the poll timer when the workflow status changes, but the timer keeps the old `selectedWorkflow.value.id` in closure. If the user selects a different workflow while polling is active, the timer continues polling the previous workflow until status changes. Consider using `selectedWorkflowId` as a dependency to reset the timer when the selected workflow changes:

♻️ Proposed fix to watch both status and selection

```diff
 watch(
-  shouldPollSelectedWorkflow,
-  (shouldPoll) => {
+  [shouldPollSelectedWorkflow, selectedWorkflowId],
+  ([shouldPoll]) => {
     clearPollTimer()
     if (!shouldPoll || !selectedWorkflow.value) {
       return
     }
     pollTimer = window.setInterval(() => {
       if (selectedWorkflow.value) {
         void loadSelectedWorkflow(selectedWorkflow.value.id, true)
         void loadWorkflows()
       }
     }, 5000)
   },
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/views/WorkflowsView.vue` around lines 580 - 594, The poll timer is only watching shouldPollSelectedWorkflow so its interval callback can close over an old selectedWorkflow.value.id; update the watcher to depend on both polling state and selection so the timer resets when selection changes (watch an array or a computed tuple including shouldPollSelectedWorkflow and selectedWorkflowId), call clearPollTimer() then recreate pollTimer when shouldPollSelectedWorkflow is true and selectedWorkflowId is present, and ensure the interval callback uses selectedWorkflowId (or reads selectedWorkflow.value) so loadSelectedWorkflow and loadWorkflows always run for the current selection; keep existing helpers clearPollTimer, pollTimer, loadSelectedWorkflow, and loadWorkflows.

sql/management/commands/smoke_local_demo.py (1)

Lines 116-119: Minor: Error check default value is inconsistent with ResultSet API.

Per `sql/engines/models.py`, `ResultSet.error` is `None` on success, not an empty string. While `getattr(connection_result, "error", "")` works due to Python truthiness, using `None` as the default would be more accurate:

♻️ Minor consistency fix

```diff
- if getattr(connection_result, "error", ""):
+ if getattr(connection_result, "error", None):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sql/management/commands/smoke_local_demo.py` around lines 116 - 119, The error check uses getattr(connection_result, "error", "") but ResultSet.error is None on success; change the default to None and check truthiness accordingly by using getattr(connection_result, "error", None) (keeping the existing raise of CommandError with instance.instance_name and connection_result.error) so the code aligns with the ResultSet.error API and is more semantically correct when no error is present.frontend/src/views/DmlWorkflowCreateView.vue (1)
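A quick sanity check (using a stand-in `ResultSet`, not the real class from `sql/engines/models.py`) confirms both defaults behave identically under truthiness, so the change is purely about matching the documented API:

```python
# Stand-in for the engine ResultSet: error is None on success per the API.
class ResultSet:
    def __init__(self, error=None):
        self.error = error

ok = ResultSet()          # success: error is None
failed = ResultSet("boom")

# Both defaults are falsy for a successful result, so behavior is unchanged;
# None is simply the semantically correct sentinel.
assert not getattr(ok, "error", "")
assert not getattr(ok, "error", None)
assert getattr(failed, "error", None) == "boom"
print("checks pass")
```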
frontend/src/views/DmlWorkflowCreateView.vue (1)

Lines 326-347: Redundant database reload when group changes but instance is retained.

When `form.groupId` changes but the current instance remains valid (lines 344-346), `loadDatabases` is called. However, the database list shouldn't change just because the resource group changed — the instance is the same. This is a minor inefficiency but causes an unnecessary API call.
♻️ Skip database reload when instance is unchanged

```diff
 watch(
   () => form.groupId,
   (groupIdValue) => {
     const retainedInstanceId = Number(form.instanceId)
     const keepsCurrentInstance = filteredInstances.value.some((instance) => instance.id === retainedInstanceId)
     approvalPreview.value = null
     approvalError.value = ''
     form.instanceId = keepsCurrentInstance ? form.instanceId : ''
-    form.dbName = ''
-    availableDatabases.value = []
+    if (!keepsCurrentInstance) {
+      form.dbName = ''
+      availableDatabases.value = []
+    }
     databasesError.value = ''
     invalidateCheck()
     const groupId = Number(groupIdValue)
     if (groupId) {
       void loadApprovalPreview(groupId)
     }
-    if (groupId && keepsCurrentInstance && retainedInstanceId) {
-      void loadDatabases(retainedInstanceId)
-    }
   },
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/views/DmlWorkflowCreateView.vue` around lines 326 - 347, The watcher on form.groupId currently calls loadDatabases(retainedInstanceId) when the instance is still valid (keepsCurrentInstance), causing an unnecessary API call; remove the conditional call to loadDatabases from this watch (the block using retainedInstanceId / keepsCurrentInstance / loadDatabases) and instead ensure database loading is triggered from the instance-change logic (e.g., the watch or handler that reacts to form.instanceId), so databases are only loaded when the instance selection actually changes.
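The intent of the suggested change (only reload databases when the instance actually changes) can be sketched language-agnostically in Python; the names here are hypothetical, since the real code is a Vue watcher:

```python
# Hypothetical cache-aware loader: track which instance the database list
# was last loaded for, and skip the API call when it has not changed.
loaded_for = {"instance_id": None}
api_calls = []

def load_databases(instance_id):
    if instance_id == loaded_for["instance_id"]:
        return  # instance unchanged: keep current list, skip the API call
    api_calls.append(instance_id)  # stand-in for the real HTTP request
    loaded_for["instance_id"] = instance_id

load_databases(7)   # first load: one API call
load_databases(7)   # group changed, instance retained: no call
load_databases(9)   # instance changed: one API call
print(api_calls)    # [7, 9]
```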
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@sql_api/api_workflow.py`:
- Around line 451-464: The loop that builds group_names and review_info from
audit_auth_groups can crash on malformed or deleted IDs; wrap the parsing and
Group.objects.get(id=int(auth_group_id)) lookup in validation that catches
ValueError and Group.DoesNotExist (and any non-integer/empty parts), and instead
of letting it 500, raise or return a clear configuration error indicating
stale/malformed approval groups for the workflow (mentioning audit_auth_groups),
so callers see a user-facing config error; keep the rest of the logic that
builds group_names and review_info unchanged.
- Around line 778-793: The scheduler registration add_sql_schedule(...) is being
called inside the transaction.atomic() block along with workflow.status update
and Audit.add_log, which can leave an external django-q job registered if the DB
transaction later rolls back; move the call to add_sql_schedule(schedule_name,
run_date, workflow_id) out of the transaction.atomic() so that only the DB
mutations (workflow.status change and Audit.add_log) occur inside the atomic
block and the external side effect is performed after the transaction commits
successfully; keep the same parameters and ensure the call happens only after
workflow.save(update_fields=["status"]) and Audit.add_log(...) complete.
In `@sql_api/serializers.py`:
- Around line 29-35: Remove the unused imports from serializers.py: drop Const
from the common.utils.const import line and remove can_execute, can_timingtask,
can_cancel, can_rollback and task_info (the latter from sql.utils.tasks) so
serializers.py only imports symbols it uses (e.g., WorkflowType, WorkflowStatus,
SysConfig, traceback, logging, OffLineDownLoad). Then update api_workflow.py so
it no longer re-exports those functions from .serializers — import can_execute,
can_timingtask, can_cancel, can_rollback directly from sql.utils.sql_review
instead of from serializers to eliminate the unnecessary re-export pattern.
In `@sql/local_demo.py`:
- Around line 242-260: seed_local_demo currently performs destructive actions
(creating a fixed-password superuser and overwriting auth groups) without
guarding environment; add an explicit environment safety check at the start of
seed_local_demo that refuses to run unless the process is a local/dev
environment (e.g., settings.DEBUG is True or a dedicated env var like DJANGO_ENV
== "local"/"dev"); if the check fails, raise an exception (RuntimeError or
PermissionError) or log and abort before calling _seed_auth_groups, _seed_users,
or _seed_instances to ensure superuser/group seeding never runs against
shared/prod DBs. Ensure the check references seed_local_demo and prevents calls
to helpers _seed_users and _seed_auth_groups when not in a safe environment.
In `@src/docker-compose/mysql-demo/init/001-demo-schema.sql`:
- Around line 1-8: The SQL script depends on an external env var to create
demo_orders but only explicitly creates demo_billing; add an explicit CREATE
DATABASE IF NOT EXISTS for demo_orders at the top of the script (before the
existing CREATE DATABASE IF NOT EXISTS demo_billing) so both databases are
created within this script and the subsequent GRANT ALL PRIVILEGES and USE
demo_orders statements target an explicitly-created database.
---
Outside diff comments:
In `@sql_api/urls.py`:
- Around line 76-80: The URL patterns contain duplicate route entries for the
instance tag endpoints: remove the repeated path lines that register
api_instance.InstanceTagList.as_view() and
api_instance.InstanceTagDetail.as_view() so each route (v1/instance/tag/ and
v1/instance/tag/<int:pk>/) is only declared once; locate the duplicated uses of
api_instance.InstanceTagList and api_instance.InstanceTagDetail in the
urlpatterns and delete the redundant entries, leaving a single registration for
each view.
---
Nitpick comments:
In `@frontend/src/views/DmlWorkflowCreateView.vue`:
- Around line 326-347: The watcher on form.groupId currently calls
loadDatabases(retainedInstanceId) when the instance is still valid
(keepsCurrentInstance), causing an unnecessary API call; remove the conditional
call to loadDatabases from this watch (the block using retainedInstanceId /
keepsCurrentInstance / loadDatabases) and instead ensure database loading is
triggered from the instance-change logic (e.g., the watch or handler that reacts
to form.instanceId), so databases are only loaded when the instance selection
actually changes.
In `@frontend/src/views/WorkflowsView.vue`:
- Around line 1049-1054: The two textareas both bind to reviewForm.audit_remark
which causes shared state between "Review action" and "Terminate workflow";
create distinct fields on the form object (e.g., reviewForm.review_remark for
the review textarea and reviewForm.terminate_reason for the terminate textarea),
update the v-model bindings on the corresponding <textarea> elements, and update
any submission handlers or validators that read audit_remark (e.g., review
submission and terminate submission functions) to use the new field names while
keeping existing props like :disabled="reviewSubmitting" and
:class="textareaClass".
- Around line 580-594: The poll timer is only watching
shouldPollSelectedWorkflow so its interval callback can close over an old
selectedWorkflow.value.id; update the watcher to depend on both polling state
and selection so the timer resets when selection changes (watch an array or a
computed tuple including shouldPollSelectedWorkflow and selectedWorkflowId),
call clearPollTimer() then recreate pollTimer when shouldPollSelectedWorkflow is
true and selectedWorkflowId is present, and ensure the interval callback uses
selectedWorkflowId (or reads selectedWorkflow.value) so loadSelectedWorkflow and
loadWorkflows always run for the current selection; keep existing helpers
clearPollTimer, pollTimer, loadSelectedWorkflow, and loadWorkflows.
In `@README.md`:
- Around line 48-64: Add a clear, prominent one-line warning under the "Local
Demo Users" heading (near the seeded credentials list) that reads something
like: "WARNING: These demo credentials are for local development and testing
only — never use in production." Ensure the line is visually distinct (e.g.,
bold or prefixed with WARNING) and placed immediately above or below the seeded
app users list so readers cannot miss it; update the README.md "Local Demo
Users" section accordingly.
In `@sql/management/commands/smoke_local_demo.py`:
- Around line 116-119: The error check uses getattr(connection_result, "error",
"") but ResultSet.error is None on success; change the default to None and check
truthiness accordingly by using getattr(connection_result, "error", None)
(keeping the existing raise of CommandError with instance.instance_name and
connection_result.error) so the code aligns with the ResultSet.error API and is
more semantically correct when no error is present.
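The defensive parsing suggested for `audit_auth_groups` in the first inline comment can be sketched as follows. The `GROUPS` dict and `WorkflowConfigError` are stand-ins for `Group.objects` and the Django exceptions (`ValueError`, `Group.DoesNotExist`) named above:

```python
# Stand-in for Group.objects: maps group id to group name.
GROUPS = {1: "DBA", 2: "RD"}

class WorkflowConfigError(Exception):
    """Raised when audit_auth_groups references stale or malformed groups."""

def resolve_group_names(audit_auth_groups):
    names = []
    for part in audit_auth_groups.split(","):
        part = part.strip()
        try:
            # int() raises ValueError on non-integer/empty parts;
            # the dict lookup raises KeyError for deleted group ids
            # (analogous to Group.DoesNotExist).
            names.append(GROUPS[int(part)])
        except (ValueError, KeyError):
            raise WorkflowConfigError(
                f"stale or malformed approval group {part!r} in audit_auth_groups"
            )
    return names

print(resolve_group_names("1,2"))  # ['DBA', 'RD']
```

The point is that a misconfigured workflow now surfaces a clear configuration error instead of an unhandled 500.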
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 47736e8a-4cb6-49d5-b0ff-6f8e11dc68e6
📒 Files selected for processing (22)

- AGENTS.md
- README.md
- frontend/src/lib/api.ts
- frontend/src/router/index.ts
- frontend/src/views/DmlWorkflowCreateView.vue
- frontend/src/views/WorkflowsView.vue
- sql/local_demo.py
- sql/management/__init__.py
- sql/management/commands/__init__.py
- sql/management/commands/seed_local_demo.py
- sql/management/commands/smoke_local_demo.py
- sql/test_local_demo.py
- sql_api/api_workflow.py
- sql_api/serializers.py
- sql_api/tests.py
- sql_api/urls.py
- src/docker-compose/.env.local-arm
- src/docker-compose/LOCAL_DEMO.md
- src/docker-compose/docker-compose.local-arm.yml
- src/docker-compose/mysql-demo/init/001-demo-schema.sql
- src/docker-compose/postgres-demo/init/001-demo-schema.sql
- src/docker/startup.sh
In `sql_api/api_workflow.py` (around lines 778-793):

```python
with transaction.atomic():
    workflow.status = "workflow_timingtask"
    workflow.save(update_fields=["status"])
    add_sql_schedule(schedule_name, run_date, workflow_id)
    audit_id = Audit.detail_by_workflow_id(
        workflow_id=workflow_id,
        workflow_type=WorkflowType.SQL_REVIEW,
    ).audit_id
    Audit.add_log(
        audit_id=audit_id,
        operation_type=4,
        operation_type_desc="Scheduled Execution",
        operation_info="Scheduled execution time: {}".format(run_date),
        operator=request.user.username,
        operator_display=request.user.display,
    )
```
Move scheduler registration out of the atomic block.

`add_sql_schedule()` is an external side effect. If anything later in this transaction rolls back, the django-q job can survive while the workflow status/log do not, leaving scheduler state and DB state out of sync.
Suggested direction

```diff
     with transaction.atomic():
         workflow.status = "workflow_timingtask"
         workflow.save(update_fields=["status"])
-        add_sql_schedule(schedule_name, run_date, workflow_id)
         audit_id = Audit.detail_by_workflow_id(
             workflow_id=workflow_id,
             workflow_type=WorkflowType.SQL_REVIEW,
         ).audit_id
         Audit.add_log(
             audit_id=audit_id,
             operation_type=4,
             operation_type_desc="Scheduled Execution",
             operation_info="Scheduled execution time: {}".format(run_date),
             operator=request.user.username,
             operator_display=request.user.display,
         )
+    transaction.on_commit(
+        lambda: add_sql_schedule(schedule_name, run_date, workflow_id)
+    )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sql_api/api_workflow.py` around lines 778 - 793, The scheduler registration
add_sql_schedule(...) is being called inside the transaction.atomic() block
along with workflow.status update and Audit.add_log, which can leave an external
django-q job registered if the DB transaction later rolls back; move the call to
add_sql_schedule(schedule_name, run_date, workflow_id) out of the
transaction.atomic() so that only the DB mutations (workflow.status change and
Audit.add_log) occur inside the atomic block and the external side effect is
performed after the transaction commits successfully; keep the same parameters
and ensure the call happens only after workflow.save(update_fields=["status"])
and Audit.add_log(...) complete.
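Why `transaction.on_commit` fixes the ordering can be seen with a toy transaction manager. This is not Django's implementation, only a sketch of the commit/rollback contract that on-commit callbacks follow:

```python
# Toy transaction: callbacks queued via on_commit run only if the block
# commits; on rollback (an exception) they are silently dropped.
class Tx:
    def __init__(self):
        self.callbacks = []

    def on_commit(self, fn):
        self.callbacks.append(fn)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:  # commit path: run deferred side effects
            for fn in self.callbacks:
                fn()
        return False  # propagate exceptions; rollback drops the callbacks

scheduled = []

with Tx() as tx:
    tx.on_commit(lambda: scheduled.append("add_sql_schedule"))
print(scheduled)  # ['add_sql_schedule'] — registered only after commit

try:
    with Tx() as tx:
        tx.on_commit(lambda: scheduled.append("should_not_run"))
        raise RuntimeError("rollback")
except RuntimeError:
    pass
print(scheduled)  # unchanged: no orphan scheduler job on rollback
```

With the scheduler call inside the atomic block instead, the second scenario would leave a job registered for a workflow whose DB state was rolled back.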
In `sql_api/serializers.py` (lines 29-35):

```python
from common.utils.const import WorkflowType, WorkflowStatus, Const
from common.config import SysConfig
import traceback
import logging
from sql.offlinedownload import OffLineDownLoad
from sql.utils.sql_review import can_execute, can_timingtask, can_cancel, can_rollback
from sql.utils.tasks import task_info
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Check if these imports are actually used in serializers.py
rg -n "can_execute|can_timingtask|can_cancel|can_rollback|task_info|Const" sql_api/serializers.py | grep -v "^.*:.*import"
```

Repository: jruszo/Datamingle
Length of output: 43

🏁 Script executed:

```bash
# First, let's see the exact imports at lines 29-35
head -40 sql_api/serializers.py | tail -15
```

Repository: jruszo/Datamingle
Length of output: 552

🏁 Script executed:

```bash
# Check if api_workflow.py imports these from serializers.py
rg -n "from.*serializers import|import.*serializers" sql_api/api_workflow.py
```

Repository: jruszo/Datamingle
Length of output: 170

🏁 Script executed:

```bash
# Check if these utilities are used elsewhere in api_workflow.py
rg -n "can_execute|can_timingtask|can_cancel|can_rollback|task_info" sql_api/api_workflow.py | head -20
```

Repository: jruszo/Datamingle
Length of output: 1002

🏁 Script executed:

```bash
rg -n "Const\." sql_api/serializers.py
```

Repository: jruszo/Datamingle
Length of output: 43
Remove unused imports from serializers.py.

The imports `Const`, `can_execute`, `can_timingtask`, `can_cancel`, `can_rollback`, and `task_info` are present at lines 29 and 34-36 but are not used within this file. Additionally, api_workflow.py imports `can_execute`, `can_timingtask`, `can_cancel`, and `can_rollback` from `.serializers` (lines 39-42), creating an unnecessary re-export pattern. These should instead be imported directly from `sql.utils.sql_review` in api_workflow.py. Remove the unused imports from serializers.py.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sql_api/serializers.py` around lines 29 - 35, Remove the unused imports from
serializers.py: drop Const from the common.utils.const import line and remove
can_execute, can_timingtask, can_cancel, can_rollback and task_info (the latter
from sql.utils.tasks) so serializers.py only imports symbols it uses (e.g.,
WorkflowType, WorkflowStatus, SysConfig, traceback, logging, OffLineDownLoad).
Then update api_workflow.py so it no longer re-exports those functions from
.serializers — import can_execute, can_timingtask, can_cancel, can_rollback
directly from sql.utils.sql_review instead of from serializers to eliminate the
unnecessary re-export pattern.
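Real linters such as pyflakes catch this class of problem properly, but the idea behind detecting the unused `Const` import can be sketched with a small AST pass. The `source` string here is a trimmed, hypothetical excerpt, not the actual file:

```python
import ast

source = '''
from common.utils.const import WorkflowType, WorkflowStatus, Const
import logging

logging.getLogger(__name__)
WorkflowType
WorkflowStatus
'''

tree = ast.parse(source)

# Collect every name bound by an import statement.
imported = set()
for node in ast.walk(tree):
    if isinstance(node, ast.Import):
        imported.update(alias.asname or alias.name.split(".")[0] for alias in node.names)
    elif isinstance(node, ast.ImportFrom):
        imported.update(alias.asname or alias.name for alias in node.names)

# Collect every name referenced anywhere in the module body.
names_used = {n.id for n in ast.walk(tree) if isinstance(n, ast.Name)}
attrs_used = {n.attr for n in ast.walk(tree) if isinstance(n, ast.Attribute)}

unused = sorted(imported - names_used - attrs_used)
print(unused)  # ['Const']
```

This is the mechanical core of the finding above: an imported name that never appears as a reference is dead weight.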
In `sql/local_demo.py` (around lines 242-260):

```python
def seed_local_demo(write_line=None):
    def log(message):
        if write_line:
            write_line(message)

    with transaction.atomic():
        auth_groups = _seed_auth_groups(log)
        resource_groups = _seed_resource_groups(log)
        tags = _seed_instance_tags(log)
        users = _seed_users(auth_groups, resource_groups, log)
        instances = _seed_instances(resource_groups, tags, log)
        _seed_workflow_settings(auth_groups, resource_groups, log)

    return {
        "auth_groups": list(auth_groups.keys()),
        "resource_groups": [group.group_name for group in resource_groups.values()],
        "users": list(users.keys()),
        "instances": [instance.instance_name for instance in instances.values()],
    }
```
Refuse to run this seeder outside local/dev.

`seed_local_demo()` currently reaches helpers that create a superuser with a fixed password and overwrite generic auth groups like DBA/RD in place. If someone points this at a shared database, it can create a known-credential admin and clobber existing role permissions.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sql/local_demo.py` around lines 242 - 260, seed_local_demo currently performs
destructive actions (creating a fixed-password superuser and overwriting auth
groups) without guarding environment; add an explicit environment safety check
at the start of seed_local_demo that refuses to run unless the process is a
local/dev environment (e.g., settings.DEBUG is True or a dedicated env var like
DJANGO_ENV == "local"/"dev"); if the check fails, raise an exception
(RuntimeError or PermissionError) or log and abort before calling
_seed_auth_groups, _seed_users, or _seed_instances to ensure superuser/group
seeding never runs against shared/prod DBs. Ensure the check references
seed_local_demo and prevents calls to helpers _seed_users and _seed_auth_groups
when not in a safe environment.
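A guard along the lines the comment suggests might look like the sketch below. The `DJANGO_ENV` variable name is the suggestion's assumption, not an existing setting, and the function name is hypothetical:

```python
import os

def assert_local_environment(debug=False, env=None):
    """Raise unless an explicit local/dev signal is present.

    `debug` mirrors settings.DEBUG; `env` mirrors a hypothetical
    DJANGO_ENV environment variable (assumed name, per the suggestion).
    """
    env = env if env is not None else os.environ.get("DJANGO_ENV", "")
    if not (debug or env in {"local", "dev"}):
        raise RuntimeError(
            "seed_local_demo refuses to run outside a local/dev environment"
        )

assert_local_environment(debug=True)   # ok: DEBUG is on
assert_local_environment(env="local")  # ok: explicit local env
try:
    assert_local_environment(debug=False, env="production")
except RuntimeError as exc:
    print(exc)  # the seeder aborts before touching any helpers
```

Calling this at the top of `seed_local_demo()` ensures `_seed_users` and `_seed_auth_groups` can never run against a shared or production database.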
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Summary
- New `sql_api` endpoints

Testing

- `npm run build` in `frontend/`
- `docker exec datamingle-app python manage.py test sql_api.tests.TestWorkflow`
- `docker exec datamingle-app python manage.py test sql.test_local_demo`
- `docker exec datamingle-app python manage.py smoke_local_demo`

Summary by CodeRabbit
New Features
Documentation
Tests
Chores