Split SQL API into dedicated app modules#31
📝 Walkthrough

A large refactor extracts the API surface from the monolithic `sql_api` app into dedicated per-domain `api_*` app modules.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client
    participant Root as Django URLConf
    participant Include as api_* url includes
    participant View as API View
    participant Serializer as Serializer
    participant DB as Database
    Client->>Root: HTTP request /api/v1/<app>/...
    Root->>Include: route to included urlpatterns
    Include->>View: dispatch to View.as_view()
    View->>Serializer: validate/serialize request
    Serializer->>DB: read/write models
    DB-->>Serializer: rows/result
    Serializer-->>View: validated data / serialized response
    View-->>Client: HTTP response (JSON)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 14
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
api_instances/views.py (1)
267-531: ⚠️ Potential issue | 🔴 Critical

Duplicate class definitions — `InstanceTagList` and `InstanceTagDetail` are declared twice.

`InstanceTagList` is defined at line 267 and again at line 403; `InstanceTagDetail` is defined at line 346 and again at line 482. In Python the second definition silently shadows the first, so:

- The first `InstanceTagDetail` (lines 346-400) and its `_validate_deactivation` staticmethod are dead code, and the nicer "Remove it from those instances before deactivating it." error message is lost — the second definition's inline check wins.
- The first `InstanceTagList`'s default ordering (`order_by("id")`) is replaced by the second's (`order_by("tag_name", "id")`), which may or may not be the intended default.
- URL routing resolves to whichever class Python binds last (the second), so any test/docs referencing the first variant's behavior will silently diverge.

This looks like a merge artifact from the split. Please keep a single authoritative copy of each class.

♻️ Suggested cleanup

Delete the duplicate block (lines 403-531) and keep the first pair, or delete the first pair (lines 267-400) and keep the second — whichever matches the intended behavior. Make sure the retained `InstanceTagDetail.put` preserves the desired deactivation-guard message.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api_instances/views.py` around lines 267 - 531, There are duplicate definitions of InstanceTagList and InstanceTagDetail causing the second set to shadow the first and lose the _validate_deactivation message; remove the later duplicate class definitions (the second InstanceTagList and the second InstanceTagDetail) and keep the original classes that include the staticmethod _validate_deactivation and the original queryset ordering (InstanceTagList with queryset.order_by("id")), ensuring InstanceTagDetail.put still uses _validate_deactivation to produce the "Remove it from those instances before deactivating it." error message.
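The shadowing failure mode described above is easy to reproduce in isolation; a minimal sketch (class names taken from the review, the `ordering` attribute is hypothetical):

```python
# In Python, a second class statement with the same name simply rebinds the
# name in the module namespace; no error or warning is raised.
class InstanceTagList:  # first definition (the one with the original ordering)
    ordering = ("id",)

class InstanceTagList:  # merge artifact: silently shadows the first definition
    ordering = ("tag_name", "id")

# Any later lookup (URL routing, tests, docs) sees only the second class.
print(InstanceTagList.ordering)  # → ('tag_name', 'id')
```

This is why the first pair's `_validate_deactivation` logic becomes unreachable dead code rather than producing an import-time error.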
🧹 Nitpick comments (2)
api_archives/serializers.py (1)
6-9: Consider an explicit field list instead of `fields = "__all__"`.

Using `"__all__"` on `ArchiveConfig` will silently expose any field added later (e.g., credentials, connection strings, internal flags) through both read and write paths, which is easy to miss during future model changes. Listing the intended fields explicitly — or splitting into read/write serializers — makes the API surface review-friendly and avoids accidental PII/credential leakage. This is a low-risk refactor but worth doing while the new module is being introduced.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@api_archives/serializers.py` around lines 6 - 9, Replace the catch-all fields = "__all__" in ArchiveConfigSerializer.Meta with an explicit list of allowed field names from the ArchiveConfig model (e.g., ["id", "name", "type", "endpoint", ...]) or, if you need different read/write exposures, create two serializers (e.g., ArchiveConfigSerializer and ArchiveConfigWriteSerializer) with explicit fields for each; update ArchiveConfigSerializer (and any usages) to reference only the intended attributes and ensure sensitive fields (credentials, connection_string, internal_flags, etc.) are omitted from the public serializer(s).

api_queries/serializers.py (1)
6-9: Whitelist serialized fields instead of using `__all__`.

Line 9 will automatically expose any future `QueryPrivilegesApply` model field through this serializer. That is risky for API/notification consumers because the model includes user and audit workflow fields.

Proposed explicit field list

```diff
 class QueryPrivilegesApplySerializer(serializers.ModelSerializer):
     class Meta:
         model = QueryPrivilegesApply
-        fields = "__all__"
+        fields = (
+            "apply_id",
+            "group_id",
+            "group_name",
+            "title",
+            "user_name",
+            "user_display",
+            "instance",
+            "db_list",
+            "table_list",
+            "valid_date",
+            "limit_num",
+            "priv_type",
+            "status",
+            "audit_auth_groups",
+            "create_time",
+            "sys_time",
+        )
```

Trim this tuple further if consumers do not need internal audit/user fields.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api_queries/serializers.py` around lines 6 - 9, Replace the wildcard fields usage in QueryPrivilegesApplySerializer.Meta (which currently uses fields="__all__") with an explicit tuple of allowed field names so future model additions (especially user/audit/workflow fields) are not auto-exposed; update the Meta.fields in QueryPrivilegesApplySerializer to list only the public API fields required by consumers (e.g., the business-related fields and exclude internal fields such as user, created_by, updated_by, audit_status, workflow_*), trimming further if some public fields aren't needed.
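A framework-free illustration of why an explicit whitelist beats `__all__` (the field names below are hypothetical stand-ins, not the real `QueryPrivilegesApply` columns):

```python
# Plain-Python analogue of an explicit Meta.fields whitelist. With "__all__"
# semantics, a newly added sensitive column leaks automatically; with a
# whitelist it stays private until deliberately opted in.
PUBLIC_FIELDS = ("apply_id", "title", "status")

def to_representation(row: dict) -> dict:
    # Only whitelisted keys reach the API payload.
    return {key: row[key] for key in PUBLIC_FIELDS if key in row}

row = {
    "apply_id": 7,
    "title": "read-only grant",
    "status": 1,
    "audit_auth_groups": "internal",  # imagine this was added to the model later
}
print(to_representation(row))  # → {'apply_id': 7, 'title': 'read-only grant', 'status': 1}
```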
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@api_admin/views.py`:
- Around line 9-15: The __all__ list in this module is unsorted and triggers
Ruff RUF022; sort the string entries alphabetically (by the symbol names like
DashboardOverview, SystemSettingsEmailTestView,
SystemSettingsGoInceptionTestView, SystemSettingsStorageTestView,
SystemSettingsView) so the list is in ascending order to satisfy the linter.
In `@api_core/extensions.py`:
- Around line 1-21: The get_extension_urlpatterns function should use
django.apps.apps.is_installed to verify each app in
settings.DATAMINGLE_API_EXTENSION_APPS (instead of doing exact string membership
against settings.INSTALLED_APPS) and should wrap the
import_module(f"{app_path}.api_urls") call in a try/except ModuleNotFoundError
to re-raise a django.core.exceptions.ImproperlyConfigured with a clear message
that the extension's api_urls module is missing; preserve the existing
getattr(module, "urlpatterns") check and raise ImproperlyConfigured if
urlpatterns is None.
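A rough, framework-free sketch of the loader behavior this prompt asks for. The injected callables stand in for `django.apps.apps.is_installed` and `importlib.import_module`, and `RuntimeError` stands in for Django's `ImproperlyConfigured`:

```python
def load_extension_urlpatterns(app_paths, is_installed, import_module):
    """Collect urlpatterns from each extension app's api_urls module."""
    patterns = []
    for app_path in app_paths:
        if not is_installed(app_path):
            raise RuntimeError(f"Extension app {app_path!r} is not installed.")
        try:
            module = import_module(f"{app_path}.api_urls")
        except ModuleNotFoundError as exc:
            # Surface a clear configuration error instead of a bare import failure.
            raise RuntimeError(
                f"Extension app {app_path!r} has no api_urls module."
            ) from exc
        urlpatterns = getattr(module, "urlpatterns", None)
        if urlpatterns is None:
            raise RuntimeError(f"{app_path}.api_urls does not define urlpatterns.")
        patterns.extend(urlpatterns)
    return patterns

class FakeModule:
    urlpatterns = ["ping-route"]

print(load_extension_urlpatterns(
    ["ext_app"],
    is_installed=lambda app: app == "ext_app",
    import_module=lambda name: FakeModule,
))  # → ['ping-route']
```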
In `@api_core/tests.py`:
- Around line 8-40: The test fails because URLConf is built at import time so
`@override_settings` doesn't add extension routes; update the
ApiExtensionRoutingTests.test_extension_routes_are_loaded_from_settings to
explicitly load extension URL patterns by importing get_extension_urlpatterns
(from sql_api.urls) and merging its return into the active urlpatterns (or
alternatively importlib.reload the module that constructs urlpatterns after
setting DATAMINGLE_API_EXTENSION_APPS), so that get_extension_urlpatterns() is
executed under the overridden DATAMINGLE_API_EXTENSION_APPS and the
/api/extensions/ping/ route is registered before performing the client.get.
In `@api_instances/serializers.py`:
- Around line 405-411: The except block currently returns internal exception
text via serializers.ValidationError({"errors": str(exc)}); instead, keep
logging the full traceback (already done via logger.error(...)) but raise a
generic validation error to clients (e.g. serializers.ValidationError({"errors":
"Unable to create Aliyun RDS configuration."})); update the except handling
around the transaction.atomic() that creates CloudAccessKey and AliyunRdsConfig
so no internal exception details (str(exc)) are returned to API callers while
preserving the logger.error(traceback.format_exc()) for diagnostics.
- Around line 382-386: The TunnelSerializer.Meta currently uses the unsupported
write_only_fields, so sensitive fields password, pkey, and pkey_password are
still serialized; remove write_only_fields from TunnelSerializer.Meta and add
extra_kwargs = {"password": {"write_only": True}, "pkey": {"write_only": True},
"pkey_password": {"write_only": True}} to ensure these fields are write-only in
the TunnelSerializer model serializer (referencing the TunnelSerializer class
and its Meta inner class).
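The pattern the error-handling prompt above relies on (log full details server-side, return a stable generic message) can be sketched without Django; the `ValidationError` class below is a stand-in for `serializers.ValidationError`, and the connection failure is simulated:

```python
import logging
import traceback

logger = logging.getLogger("api_instances")

class ValidationError(Exception):
    """Stand-in for rest_framework.serializers.ValidationError."""
    def __init__(self, detail):
        self.detail = detail
        super().__init__(detail)

def create_config(payload):
    try:
        # Simulated backend failure carrying sensitive connection details.
        raise ConnectionError("AccessKeyId=AKIA... rejected by endpoint")
    except Exception:
        # Full details stay in the server log for diagnostics ...
        logger.error(traceback.format_exc())
        # ... while the client sees only a stable, generic message.
        raise ValidationError({"errors": "Unable to create Aliyun RDS configuration."})

try:
    create_config({})
except ValidationError as err:
    print(err.detail)  # → {'errors': 'Unable to create Aliyun RDS configuration.'}
```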
In `@api_queries/serializers.py`:
- Around line 66-71: The QueryFavoriteSerializer currently allows oversized
strings for alias; update alias = serializers.CharField(...) to include a
max_length that matches the corresponding DB column limit (use the exact column
size from the schema) and keep required=False, allow_blank=True and default="";
also scan the same file for other serializers that accept title, instance_name,
group_name, database_name and table_name (the blocks later in the file) and add
matching max_length constraints to those CharField definitions so serializer
validation rejects values larger than the database columns before saving.
In `@api_users/serializers.py`:
- Around line 47-87: The password fields are being trimmed by DRF by default;
update the serializer field declarations to preserve exact input by adding
trim_whitespace=False: change the password field in
UserManagementCreateSerializer (symbol: UserManagementCreateSerializer.password)
to use serializers.CharField(write_only=True, trim_whitespace=False), change the
password field in UserAuthSerializer (symbol: UserAuthSerializer.password) the
same way, and change both current_password and new_password in
CurrentUserPasswordChangeSerializer (symbols:
CurrentUserPasswordChangeSerializer.current_password and
CurrentUserPasswordChangeSerializer.new_password) to
serializers.CharField(write_only=True, trim_whitespace=False).
- Around line 394-405: TwoFAVerifySerializer currently defines otp as an
IntegerField (losing leading zeroes) and auth_type as a plain CharField; change
otp to a CharField or RegexField (e.g., fixed length/digits) so "000123" stays
as "000123" for string comparison in validate, and change auth_type to a
ChoiceField with explicit choices ("totp", "sms") so only those values are
accepted at serialization time; update any related validation in validate (which
references auth_type and phone) to work with the new field types.
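The leading-zero problem called out for the OTP field is easy to demonstrate; the small validator below is an illustrative stand-in for a DRF `RegexField`, not the project's actual code:

```python
import re

# Why an IntegerField breaks OTP comparison: integers drop leading zeroes.
submitted = "000123"
as_int = int(submitted)
print(as_int)                    # → 123
print(str(as_int) == submitted)  # → False: "123" != "000123"

# Keeping the value as a validated digit string preserves it exactly.
def validate_otp(value: str) -> str:
    if not re.fullmatch(r"\d{6}", value):
        raise ValueError("otp must be exactly 6 digits")
    return value

print(validate_otp("000123"))  # → '000123'
```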
In `@api_users/urls.py`:
- Around line 13-15: The path call for the route using ResourceGroupUserLookup
is not Black-formatted; run Black (black --check .) and reformat the call so the
"v1/user/resourcegroup/users/lookup/" string and the
views.ResourceGroupUserLookup.as_view() expression are on separate lines as
Black would split them (i.e., the path(...) arguments each on their own line) so
the urls.py entry for path(...) and the ResourceGroupUserLookup view conform to
the project's Black formatting rules.
In `@api_workflows/serializers.py`:
- Line 158: Ensure the chosen ResourceGroup actually belongs to the target
Instance: after fetching ResourceGroup (currently using
ResourceGroup.objects.get(pk=workflow_data["group_id"])), verify its instance
(e.g., group.instance_id or group.instance) matches workflow_data["instance_id"]
(or the resolved Instance object); if not, raise serializers.ValidationError (or
appropriate ValidationError) so the serializer rejects a group that doesn't
belong to the instance. Apply this same check in both places where group lookup
occurs (the occurrence around ResourceGroup.objects.get at line ~158 and the
block at lines ~242-250).
- Around line 199-200: Replace returning raw exception text from the except
blocks that currently do raise serializers.ValidationError({"errors": str(exc)})
with code that logs the exception server-side (e.g., logger.exception or sentry
capture) and re-raises a ValidationError with a generic, non-sensitive message
(e.g., {"errors": "An internal validation error occurred"}). Update both
occurrences (the block using except Exception as exc at the shown diff and the
similar block around lines 262-264) so they log the full exc for debugging but
never surface exc or its string to API clients, keeping the API response
generic.
- Around line 236-240: The current logic always sets is_backup to False when the
incoming workflow_data omits "is_backup", which overrides the
SqlWorkflow.is_backup default; change the assignment so that if "is_backup" is
present in workflow_data you use its value, otherwise you preserve the model
default (SqlWorkflow.is_backup) for SQL workflows; still enforce is_backup =
False when is_offline_export is True. Update the code surrounding the is_backup
variable assignment (referencing workflow_data["is_backup"], is_offline_export,
and SqlWorkflow.is_backup) to implement this conditional behavior.
- Around line 225-229: The conditional denying access incorrectly suppresses
has_temporary_write_access when has_group_write_access is true; update the check
so temporary access is honored regardless of group membership by changing the
third clause from "has_temporary_write_access and not has_group_write_access" to
simply "has_temporary_write_access" (preserve the other checks involving
actor.is_superuser and the "(has_group_write_access and
actor.has_perm('sql.sql_submit'))" clause).
- Around line 265-269: The workflow status update (setting
auditor.workflow.status based on auditor.audit.current_status and calling
auditor.workflow.save()) must be executed inside the same DB transaction that
updates the audit so they commit atomically; move or duplicate this logic into
the existing atomic/transaction block that writes the audit (or wrap both the
audit update and the status assignment/save in a single transaction.atomic()
block), using the same model instances (auditor.audit.current_status,
WorkflowStatus.REJECTED/PASSED, auditor.workflow.status and
auditor.workflow.save()) so a save failure will roll back the audit change.
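The write-access fix described a few prompts above reduces to a small boolean predicate; a hypothetical truth-table sketch with argument names mirroring the flags in the review:

```python
def may_submit(is_superuser: bool, has_group_write: bool,
               has_submit_perm: bool, has_temporary_write: bool) -> bool:
    # Temporary access is honoured regardless of group membership, per the fix.
    return (
        is_superuser
        or (has_group_write and has_submit_perm)
        or has_temporary_write
    )

# Previously denied case: group member with temporary access but no submit perm.
print(may_submit(False, True, False, True))   # → True
print(may_submit(False, True, False, False))  # → False
```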
---
Outside diff comments:
In `@api_instances/views.py`:
- Around line 267-531: There are duplicate definitions of InstanceTagList and
InstanceTagDetail causing the second set to shadow the first and lose the
_validate_deactivation message; remove the later duplicate class definitions
(the second InstanceTagList and the second InstanceTagDetail) and keep the
original classes that include the staticmethod _validate_deactivation and the
original queryset ordering (InstanceTagList with queryset.order_by("id")),
ensuring InstanceTagDetail.put still uses _validate_deactivation to produce the
"Remove it from those instances before deactivating it." error message.
---
Nitpick comments:
In `@api_archives/serializers.py`:
- Around line 6-9: Replace the catch-all fields = "__all__" in
ArchiveConfigSerializer.Meta with an explicit list of allowed field names from
the ArchiveConfig model (e.g., ["id", "name", "type", "endpoint", ...]) or, if
you need different read/write exposures, create two serializers (e.g.,
ArchiveConfigSerializer and ArchiveConfigWriteSerializer) with explicit fields
for each; update ArchiveConfigSerializer (and any usages) to reference only the
intended attributes and ensure sensitive fields (credentials, connection_string,
internal_flags, etc.) are omitted from the public serializer(s).
In `@api_queries/serializers.py`:
- Around line 6-9: Replace the wildcard fields usage in
QueryPrivilegesApplySerializer.Meta (which currently uses fields="__all__") with
an explicit tuple of allowed field names so future model additions (especially
user/audit/workflow fields) are not auto-exposed; update the Meta.fields in
QueryPrivilegesApplySerializer to list only the public API fields required by
consumers (e.g., the business-related fields and exclude internal fields such as
user, created_by, updated_by, audit_status, workflow_*), trimming further if
some public fields aren't needed.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e93dfabd-879a-4a5c-a18a-34a60be78b65
📒 Files selected for processing (67)
api_access/__init__.py, api_access/apps.py, api_access/tests.py, api_access/urls.py, api_access/views.py, api_admin/__init__.py, api_admin/apps.py, api_admin/dashboard.py, api_admin/settings.py, api_admin/tests.py, api_admin/urls.py, api_admin/views.py, api_archives/__init__.py, api_archives/apps.py, api_archives/serializers.py, api_archives/tests.py, api_archives/urls.py, api_archives/views.py, api_auth/__init__.py, api_auth/apps.py, api_auth/test_workos_auth.py, api_auth/urls.py, api_auth/views.py, api_core/__init__.py, api_core/apps.py, api_core/extensions.py, api_core/legacy_tests.py, api_core/pagination.py, api_core/permissions.py, api_core/response.py, api_core/tests.py, api_core/views.py, api_instances/__init__.py, api_instances/apps.py, api_instances/serializers.py, api_instances/tests.py, api_instances/urls.py, api_instances/views.py, api_queries/__init__.py, api_queries/apps.py, api_queries/serializers.py, api_queries/tests.py, api_queries/urls.py, api_queries/views.py, api_users/__init__.py, api_users/apps.py, api_users/filters.py, api_users/serializers.py, api_users/tests.py, api_users/urls.py, api_users/views.py, api_workflows/__init__.py, api_workflows/apps.py, api_workflows/filters.py, api_workflows/serializers.py, api_workflows/tests.py, api_workflows/urls.py, api_workflows/views.py, archery/settings.py, sql/notify.py, sql_api/apps.py, sql_api/serializers.py, sql_api/urls.py, test_api_extensions/__init__.py, test_api_extensions/api_urls.py, test_api_extensions/apps.py, test_api_extensions/views.py
💤 Files with no reviewable changes (1)
- sql_api/serializers.py
```python
if auditor.audit.current_status == WorkflowStatus.REJECTED:
    auditor.workflow.status = "workflow_autoreviewwrong"
elif auditor.audit.current_status == WorkflowStatus.PASSED:
    auditor.workflow.status = "workflow_review_pass"
auditor.workflow.save()
```
Persist the audit-derived workflow status inside the transaction.
The workflow/content/audit rows commit before this status update. If this save fails, the audit can say passed/rejected while the workflow remains workflow_manreviewing.
Suggested direction
```diff
-        if auditor.audit.current_status == WorkflowStatus.REJECTED:
-            auditor.workflow.status = "workflow_autoreviewwrong"
-        elif auditor.audit.current_status == WorkflowStatus.PASSED:
-            auditor.workflow.status = "workflow_review_pass"
-        auditor.workflow.save()
+        if auditor.audit.current_status == WorkflowStatus.REJECTED:
+            auditor.workflow.status = "workflow_autoreviewwrong"
+            auditor.workflow.save(update_fields=["status"])
+        elif auditor.audit.current_status == WorkflowStatus.PASSED:
+            auditor.workflow.status = "workflow_review_pass"
+            auditor.workflow.save(update_fields=["status"])
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@api_workflows/serializers.py` around lines 265 - 269, The workflow status
update (setting auditor.workflow.status based on auditor.audit.current_status
and calling auditor.workflow.save()) must be executed inside the same DB
transaction that updates the audit so they commit atomically; move or duplicate
this logic into the existing atomic/transaction block that writes the audit (or
wrap both the audit update and the status assignment/save in a single
transaction.atomic() block), using the same model instances
(auditor.audit.current_status, WorkflowStatus.REJECTED/PASSED,
auditor.workflow.status and auditor.workflow.save()) so a save failure will roll
back the audit change.
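The atomicity requirement in this prompt can be illustrated with plain `sqlite3` in place of Django's ORM. Table names and status values here are illustrative only; the connection's context manager plays the role of `transaction.atomic()`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE audit (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("CREATE TABLE workflow (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO audit VALUES (1, 'workflow_manreviewing')")
conn.execute("INSERT INTO workflow VALUES (1, 'workflow_manreviewing')")
conn.commit()

try:
    with conn:  # one transaction: both writes commit together or not at all
        conn.execute("UPDATE audit SET status = 'passed' WHERE id = 1")
        # The workflow save fails before it can run; the block raises.
        raise RuntimeError("simulated workflow.save() failure")
except RuntimeError:
    pass

# The audit update rolled back with the failed workflow save, so the two
# tables cannot disagree about the outcome.
print(conn.execute("SELECT status FROM audit WHERE id = 1").fetchone())
# → ('workflow_manreviewing',)
```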
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
api_instances/views.py (1)
420-426: ⚠️ Potential issue | 🟠 Major

Avoid returning raw backend exception text to API clients.

`str(exc)`/`str(msg)` can expose driver errors, SQL details, hostnames, or credential-related connection messages. Line 642 also catches the intentional `ValidationError` from Lines 639-641 and re-wraps it as a stringified internal representation. Log server-side and return stable client-safe messages.

🛡️ Proposed direction

```diff
+import logging
+
+logger = logging.getLogger(__name__)
+
 ...
-        except Exception as exc:
-            raise serializers.ValidationError(
-                {"errors": f"Unable to connect to instance. {str(exc)}"}
-            )
+        except Exception as exc:
+            logger.exception("Unable to test instance connection for instance %s", pk)
+            raise serializers.ValidationError(
+                {"errors": "Unable to connect to instance."}
+            ) from exc
 ...
-        except Exception as exc:
-            raise serializers.ValidationError(
-                {"errors": f"Unable to connect to instance. {str(exc)}"}
-            )
+        except Exception as exc:
+            logger.exception("Unable to test draft instance connection")
+            raise serializers.ValidationError(
+                {"errors": "Unable to connect to instance."}
+            ) from exc
 ...
-        except Exception as msg:
-            raise serializers.ValidationError({"errors": str(msg)})
+        except serializers.ValidationError:
+            raise
+        except Exception as exc:
+            logger.exception("Unable to load instance resources for instance %s", instance_id)
+            raise serializers.ValidationError(
+                {"errors": "Unable to load instance resources."}
+            ) from exc
```

Also applies to: 454-465, 642-643
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api_instances/views.py` around lines 420 - 426, The code returns raw backend exception text to API clients when connection tests fail; update the try/except around get_engine(instance) and query_engine.test_connection() to log the full exception server-side (e.g., logger.exception or logger.error with exc_info) and raise serializers.ValidationError with a stable, generic message like "Unable to connect to instance. Check configuration." Also ensure any places that catch an existing serializers.ValidationError (the blocks referencing the same pattern) do not re-wrap or stringify that ValidationError — let it propagate or re-raise it directly so client-safe messages are preserved.
♻️ Duplicate comments (1)
api_core/legacy_tests.py (1)
3166-3177: ⚠️ Potential issue | 🟠 Major

Do not keep tests that require raw backend exception text.

These retargeted tests still lock in `"RuntimeError"`/`"COUNT(*) failed"` as client-visible responses. That preserves the exception-disclosure behavior this PR is otherwise tightening; assert a generic API error and verify detailed failures are logged server-side instead.

Suggested test expectation change

```diff
-        self.assertDictEqual(json.loads(r.content), {"errors": "RuntimeError"})
+        self.assertDictEqual(
+            json.loads(r.content),
+            {"errors": "An internal validation error occurred."},
+        )
@@
-        self.assertEqual(r.json()["errors"], "COUNT(*) failed")
+        self.assertEqual(
+            r.json()["errors"],
+            "An internal validation error occurred.",
+        )
```

Also applies to: 3329-3350
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api_core/legacy_tests.py` around lines 3166 - 3177, The test test_check_inception_Exception is asserting raw backend exception text ("RuntimeError") in the API response; change it to assert a generic client-facing error (e.g. that the response contains a non-detailed error key or message like {"errors": "Internal Server Error"} or status code 500) and remove any dependency on the exact exception string, and instead assert that the detailed RuntimeError is recorded via server-side logging by patching or mocking the logger and asserting logger.error/log.exception was called with the RuntimeError; apply the same change pattern to the other affected tests around lines 3329-3350.
🧹 Nitpick comments (5)
api_core/tests.py (2)
40-53: Test only weakly validates extension route registration.

Asserting a 302 redirect to `/login` confirms the URL resolves, but it exercises the auth middleware rather than `TestExtensionPingView`. An unrelated auth-protected URL would produce the same assertions, so a regression in the extension view (e.g., wrong view bound, broken import) could pass unnoticed. Consider authenticating a user and asserting the real 200 response from `TestExtensionPingView`, or at minimum resolving the URL name via `reverse()` to pin the view binding.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api_core/tests.py` around lines 40 - 53, The test test_extension_routes_are_loaded_from_settings weakly validates registration by only checking a 302 to /login; change it to assert the actual extension view response by either authenticating the test client (e.g., log in a user before calling self.client.get("/api/extensions/ping/") so TestExtensionPingView returns HTTP 200) or, at minimum, resolve the URL via reverse() (use reverse("your_extension_ping_name") or resolve("/api/extensions/ping/") and assert it maps to TestExtensionPingView) after injecting get_extension_urlpatterns() into sql_api_urls.urlpatterns and clearing caches.
56-60: Same concern for `/api/schema/`.

`test_schema_route_resolves` only verifies the login redirect, not that `drf_spectacular`'s schema view is actually wired. A typo in the URL pattern pointing to any auth-protected handler would still pass. Consider authenticating and asserting a 200 with an OpenAPI content-type, or using `reverse("schema")` to verify the named route.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@api_core/tests.py` around lines 56 - 60, The test currently only checks for a login redirect; update ApiGatewayDocsTests.test_schema_route_resolves to verify the actual schema view is wired by (a) resolving the named route via reverse("schema") instead of hardcoding "/api/schema/" and (b) performing an authenticated request (e.g., create a test user and authenticate or use self.client.force_authenticate) and asserting a 200 response and that the Content-Type matches an OpenAPI/schema MIME (e.g., contains "openapi" or "application/json" depending on your DRF Spectacular configuration); reference the test_schema_route_resolves method and the ApiGatewayDocsTests class when making the changes.

api_users/serializers.py (3)
133-137: Preserve exception chain when re-raising.

Use `raise ... from` to preserve the traceback chain.

⛓️ Suggested fix

```diff
         if password is not None:
             try:
                 validate_password(password, user=self.instance)
             except ValidationError as msg:
-                raise serializers.ValidationError({"password": msg.messages})
+                raise serializers.ValidationError({"password": msg.messages}) from msg
```

As per coding guidelines, this addresses the Ruff B904 warning about exception handling.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api_users/serializers.py` around lines 133 - 137, The except block in the password validation (validate_password called with user=self.instance) re-raises a serializers.ValidationError but drops the original ValidationError traceback; update the except in the serializer to re-raise using "raise serializers.ValidationError({...}) from msg" so the original exception chain (ValidationError) is preserved when raising serializers.ValidationError for the "password" field.
340-343: Preserve exception chain when re-raising.

Use `raise ... from` to preserve the traceback chain.

⛓️ Suggested fix

```diff
         try:
             validate_password(attrs["new_password"], user=self.context["request"].user)
         except ValidationError as msg:
-            raise serializers.ValidationError({"new_password": msg.messages})
+            raise serializers.ValidationError({"new_password": msg.messages}) from msg
```

As per coding guidelines, this addresses the Ruff B904 warning about exception handling.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api_users/serializers.py` around lines 340 - 343, The exception handler around validate_password should re-raise the serializers.ValidationError while preserving the original exception chain; in the except block that catches ValidationError (from validate_password), change the raise to use "raise serializers.ValidationError({\"new_password\": msg.messages}) from msg so the original ValidationError (msg) is attached to the new serializers.ValidationError and the traceback is preserved.
63-68: Preserve exception chain when re-raising.

Use `raise ... from` to preserve the traceback chain, making debugging easier and following Python best practices.

⛓️ Suggested fix

```diff
 def validate_password(self, password):
     try:
         validate_password(password)
     except ValidationError as msg:
-        raise serializers.ValidationError(msg.messages)
+        raise serializers.ValidationError(msg.messages) from msg
     return password
```

As per coding guidelines, this addresses the Ruff B904 warning about exception handling.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api_users/serializers.py` around lines 63 - 68, The validate_password method currently catches django.core.exceptions.ValidationError and re-raises rest_framework's serializers.ValidationError without preserving the original exception chain; update the exception handling in validate_password to re-raise serializers.ValidationError using "raise ... from" so the original ValidationError is attached (i.e., raise serializers.ValidationError(msg.messages) from msg) to preserve traceback and satisfy the Ruff B904 guideline.
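All three B904 fixes above hinge on the same mechanism; a minimal standalone illustration in which `SerializerValidationError` is a stand-in for DRF's `serializers.ValidationError`:

```python
class SerializerValidationError(Exception):
    """Stand-in for rest_framework.serializers.ValidationError."""

def validate(password: str):
    try:
        if len(password) < 8:
            raise ValueError("This password is too short.")
    except ValueError as msg:
        # "raise ... from" attaches the original exception as __cause__,
        # so tracebacks show the real root cause instead of
        # "During handling of the above exception, another exception occurred".
        raise SerializerValidationError(str(msg)) from msg

try:
    validate("short")
except SerializerValidationError as err:
    print(type(err.__cause__).__name__)  # → ValueError
```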
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@api_instances/serializers.py`:
- Around line 527-535: The validate method in the serializer (validate in
api_instances/serializers.py) is performing a global
Instance.objects.get(id=instance_id) check which leaks existence information;
remove that global lookup and the corresponding Instance.DoesNotExist
ValidationError from validate, and rely on the authorization-scoped lookup in
the view (e.g., InstanceResource.get which uses user_instances(request.user)) as
the source of truth for existence/authorization checks; simply return attrs from
validate and let the view raise the appropriate not-found/permission error when
it attempts the scoped lookup.
- Around line 118-155: The create path currently only trims instance_name
(validate_instance_name) so whitespace-only values for other string fields
(e.g., host, user, db_name, service_name, sid, charset) can be persisted;
update/draft paths reject those. Fix by normalizing the same string fields on
create: in InstanceCreateSerializer.create, before creating the Instance (before
Instance.objects.create(**validated_data)), iterate the relevant keys (host,
user, db_name, service_name, sid, charset, and any other string fields you
validate on update) and if a value is a str replace it with value.strip(); then
if any required field (like host) becomes empty raise
serializers.ValidationError (mirroring update/draft behavior). Alternatively,
add per-field validators (e.g., validate_host) similar to validate_instance_name
to ensure trimming and blank rejection and rely on serializer validation before
create.
- Around line 523-525: The CharField serializers db_name, schema_name, and
tb_name are optional but reject empty strings; update their declarations in
api_instances/serializers.py (the serializers.CharField instances for db_name,
schema_name, tb_name) to allow blank values by setting allow_blank=True (and
keep required=False) so requests like resource_type=database&db_name= validate
as the view expects.
In `@api_queries/tests.py`:
- Line 1: Remove the unused legacy test class import to prevent Django/unittest
from discovering and running those legacy tests: delete the "from
api_core.legacy_tests import TestQueryAPI" import in this module (the reference
to TestQueryAPI is the offending symbol) so api_queries.tests no longer pulls in
legacy tests inadvertently.
In `@api_users/serializers.py`:
- Around line 404-410: TwoFAVerifySerializer.validate is missing the
required-key check for auth_type "totp"; mirror the logic in
TwoFASaveSerializer.validate so that when attrs.get("auth_type") == "totp" and
attrs.get("key") is falsy you raise serializers.ValidationError (e.g. {"errors":
"Missing key."}) before returning attrs, ensuring the view gets a validated
'key' for TOTP verification.
In `@api_workflows/serializers.py`:
- Around line 191-193: The export_format assignment in serializers.py (the line
setting export_format from workflow_data) is currently split in a way that fails
Black; reformat that assignment to comply with Black style (e.g., a single-line
or Black-preferred multiline expression) and run black --check . to verify;
update the commit so the export_format = ( (workflow_data.get("export_format")
or "").lower().strip() ) expression is formatted by Black and the repo passes
the Black check.
- Around line 187-253: Move the coarse authorization checks (calls to
user_has_group_instance_access and user_has_instance_query_access and the actor
permission checks) to run before calling get_engine, execute_check, or
sql_export.pre_count_check so unauthorized requests are rejected without
touching the DB/engine; specifically, compute has_group_write_access,
has_group_read_access, and has_temporary_read_access
(user_has_instance_query_access) and apply the offline-export and
non-offline-submit permission gates first using actor.is_superuser and
actor.has_perm, and only if those gates fail and you need to determine temporary
workflow access (user_has_instance_workflow_access requires
check_result.syntax_type) then run
get_engine/OffLineDownLoad/pre_count_check/execute_check to obtain check_result
and re-evaluate has_temporary_write_access using
user_has_instance_workflow_access(actor, instance, check_result.syntax_type).
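The create-path string normalization requested for `api_instances/serializers.py` above can be sketched in plain Python. The helper name, the field tuple, and the `ValueError` stand-in (for `serializers.ValidationError`) are hypothetical, not code from the repository:

```python
STRING_FIELDS = ("host", "user", "db_name", "service_name", "sid", "charset")
REQUIRED_FIELDS = {"host"}

def normalize_string_fields(validated_data):
    """Strip whitespace from known string fields and reject required
    fields that become empty, mirroring the update/draft behavior."""
    errors = {}
    for key in STRING_FIELDS:
        value = validated_data.get(key)
        if isinstance(value, str):
            validated_data[key] = value.strip()
            if key in REQUIRED_FIELDS and not validated_data[key]:
                errors[key] = "This field may not be blank."
    if errors:
        # In the serializer this would be serializers.ValidationError(errors)
        raise ValueError(errors)
    return validated_data

data = normalize_string_fields({"host": " 10.0.0.1 ", "charset": " utf8 "})
print(data)  # {'host': '10.0.0.1', 'charset': 'utf8'}
```

Calling this at the top of `InstanceCreateSerializer.create`, before `Instance.objects.create(**validated_data)`, would give create the same whitespace handling the update and draft paths already enforce; per-field `validate_<name>` methods are the more idiomatic DRF alternative.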
---
Outside diff comments:
In `@api_instances/views.py`:
- Around line 420-426: The code returns raw backend exception text to API
clients when connection tests fail; update the try/except around
get_engine(instance) and query_engine.test_connection() to log the full
exception server-side (e.g., logger.exception or logger.error with exc_info) and
raise serializers.ValidationError with a stable, generic message like "Unable to
connect to instance. Check configuration." Also ensure any places that catch an
existing serializers.ValidationError (the blocks referencing the same pattern)
do not re-wrap or stringify that ValidationError — let it propagate or re-raise
it directly so client-safe messages are preserved.
---
Duplicate comments:
In `@api_core/legacy_tests.py`:
- Around line 3166-3177: The test test_check_inception_Exception is asserting
raw backend exception text ("RuntimeError") in the API response; change it to
assert a generic client-facing error (e.g. that the response contains a
non-detailed error key or message like {"errors": "Internal Server Error"} or
status code 500) and remove any dependency on the exact exception string, and
instead assert that the detailed RuntimeError is recorded via server-side
logging by patching or mocking the logger and asserting
logger.error/log.exception was called with the RuntimeError; apply the same
change pattern to the other affected tests around lines 3329-3350.
---
Nitpick comments:
In `@api_core/tests.py`:
- Around line 40-53: The test test_extension_routes_are_loaded_from_settings
weakly validates registration by only checking a 302 to /login; change it to
assert the actual extension view response by either authenticating the test
client (e.g., log in a user before calling
self.client.get("/api/extensions/ping/") so TestExtensionPingView returns HTTP
200) or, at minimum, resolve the URL via reverse() (use
reverse("your_extension_ping_name") or resolve("/api/extensions/ping/") and
assert it maps to TestExtensionPingView) after injecting
get_extension_urlpatterns() into sql_api_urls.urlpatterns and clearing caches.
- Around line 56-60: The test currently only checks for a login redirect; update
ApiGatewayDocsTests.test_schema_route_resolves to verify the actual schema view
is wired by (a) resolving the named route via reverse("schema") instead of
hardcoding "/api/schema/" and (b) performing an authenticated request (e.g.,
create a test user and authenticate or use self.client.force_authenticate) and
asserting a 200 response and that the Content-Type matches an OpenAPI/schema
MIME (e.g., contains "openapi" or "application/json" depending on your DRF
Spectacular configuration); reference the test_schema_route_resolves method and
the ApiGatewayDocsTests class when making the changes.
In `@api_users/serializers.py`:
- Around line 133-137: The except block in the password validation
(validate_password called with user=self.instance) re-raises a
serializers.ValidationError but drops the original ValidationError traceback;
update the except in the serializer to re-raise using "raise
serializers.ValidationError({...}) from msg" so the original exception chain
(ValidationError) is preserved when raising serializers.ValidationError for the
"password" field.
- Around line 340-343: The exception handler around validate_password should
re-raise the serializers.ValidationError while preserving the original exception
chain; in the except block that catches ValidationError (from
validate_password), change the raise to use `raise
serializers.ValidationError({"new_password": msg.messages}) from msg` so the
original ValidationError (msg) is attached to the new
serializers.ValidationError and the traceback is preserved.
- Around line 63-68: The validate_password method currently catches
django.core.exceptions.ValidationError and re-raises rest_framework's
serializers.ValidationError without preserving the original exception chain;
update the exception handling in validate_password to re-raise
serializers.ValidationError using "raise ... from" so the original
ValidationError is attached (i.e., raise
serializers.ValidationError(msg.messages) from msg) to preserve traceback and
satisfy the Ruff B904 guideline.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 25504f07-4805-4a9d-8e72-7dc928f899fa
📒 Files selected for processing (15)
- api_admin/views.py
- api_archives/serializers.py
- api_archives/tests.py
- api_core/extensions.py
- api_core/legacy_tests.py
- api_core/tests.py
- api_instances/serializers.py
- api_instances/tests.py
- api_instances/views.py
- api_queries/serializers.py
- api_queries/tests.py
- api_users/serializers.py
- api_users/tests.py
- api_users/urls.py
- api_workflows/serializers.py
✅ Files skipped from review due to trivial changes (2)
- api_admin/views.py
- api_users/urls.py
🚧 Files skipped from review as they are similar to previous changes (5)
- api_instances/tests.py
- api_core/extensions.py
- api_users/tests.py
- api_archives/serializers.py
- api_archives/tests.py
```python
try:
    check_engine = get_engine(instance=instance)
    sql_export = OffLineDownLoad()
    if is_offline_export:
        export_format = (
            (workflow_data.get("export_format") or "").lower().strip()
        )
        if export_format not in EXPORT_FORMAT_CHOICES:
            raise serializers.ValidationError(
                {
                    "errors": (
                        "Export format must be one of: csv, tsv, sql, xlsx."
                    )
                }
            )
        workflow_data["export_format"] = export_format
        instance.sql_content = sql_content
        instance.db_name = workflow_data["db_name"]
        instance.schema_name = workflow_data.get("schema_name") or ""
        instance.export_format = export_format
        check_result = sql_export.pre_count_check(workflow=instance)
    else:
        workflow_data["export_format"] = None
        check_result = check_engine.execute_check(
            db_name=workflow_data["db_name"], sql=sql_content
        )
except serializers.ValidationError:
    raise
except Exception:
    logger.exception("Unexpected error while validating workflow submission.")
    raise serializers.ValidationError(
        {"errors": "An internal validation error occurred."}
    )
```

```python
has_group_write_access = user_has_group_instance_access(
    actor, instance, tag_codes=["can_write"]
)
has_group_read_access = user_has_group_instance_access(
    actor, instance, tag_codes=["can_read"]
)
has_temporary_write_access = user_has_instance_workflow_access(
    actor, instance, check_result.syntax_type
)
has_temporary_read_access = user_has_instance_query_access(actor, instance)
if is_offline_export:
    if not (actor.is_superuser or actor.has_perm("sql.sqlexport_submit")):
        raise serializers.ValidationError(
            {"errors": "You do not have permission to submit export workflows."}
        )
    if not (has_group_read_access or has_temporary_read_access):
        raise serializers.ValidationError(
            {
                "errors": (
                    "You do not have permission to submit export workflows for this instance."
                )
            }
        )
elif not (
    actor.is_superuser
    or (has_group_write_access and actor.has_perm("sql.sql_submit"))
    or has_temporary_write_access
):
    raise serializers.ValidationError(
        {
            "errors": "You do not have permission to submit SQL for this instance."
        }
    )
```
Move authorization before engine/export checks.
execute_check() and pre_count_check() run before the user is rejected on Lines 231-253, so unauthorized requests can still trigger database/engine work. Do the coarse permission checks first, and only run the SQL check when it is needed to evaluate temporary workflow access.
Suggested direction

```diff
-try:
-    check_engine = get_engine(instance=instance)
-    sql_export = OffLineDownLoad()
-    if is_offline_export:
+has_group_write_access = user_has_group_instance_access(
+    actor, instance, tag_codes=["can_write"]
+)
+has_group_read_access = user_has_group_instance_access(
+    actor, instance, tag_codes=["can_read"]
+)
+has_temporary_read_access = user_has_instance_query_access(actor, instance)
+
+if is_offline_export:
+    if not (actor.is_superuser or actor.has_perm("sql.sqlexport_submit")):
+        raise serializers.ValidationError(
+            {"errors": "You do not have permission to submit export workflows."}
+        )
+    if not (has_group_read_access or has_temporary_read_access):
+        raise serializers.ValidationError(
+            {
+                "errors": (
+                    "You do not have permission to submit export workflows for this instance."
+                )
+            }
+        )
+
+try:
+    if is_offline_export:
+        sql_export = OffLineDownLoad()
         export_format = (
             (workflow_data.get("export_format") or "").lower().strip()
         )
```

🧰 Tools
🪛 Ruff (0.15.10)
[warning] 217-219: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@api_workflows/serializers.py` around lines 187 - 253, Move the coarse
authorization checks (calls to user_has_group_instance_access and
user_has_instance_query_access and the actor permission checks) to run before
calling get_engine, execute_check, or sql_export.pre_count_check so unauthorized
requests are rejected without touching the DB/engine; specifically, compute
has_group_write_access, has_group_read_access, and has_temporary_read_access
(user_has_instance_query_access) and apply the offline-export and
non-offline-submit permission gates first using actor.is_superuser and
actor.has_perm, and only if those gates fail and you need to determine temporary
workflow access (user_has_instance_workflow_access requires
check_result.syntax_type) then run
get_engine/OffLineDownLoad/pre_count_check/execute_check to obtain check_result
and re-evaluate has_temporary_write_access using
user_has_instance_workflow_access(actor, instance, check_result.syntax_type).
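The ordering this comment asks for can be demonstrated with stub permission checks. The sketch below shows only the coarse gate and replaces the expensive engine call with a counter; every name here is a hypothetical stand-in, and the temporary-workflow branch that genuinely needs `check_result.syntax_type` is omitted:

```python
# Counter standing in for the expensive engine/export check.
engine_calls = {"count": 0}

def execute_check_stub():
    engine_calls["count"] += 1
    return "check_result"

def submit(actor_is_superuser, has_group_write, has_perm_sql_submit,
           has_temporary_write=False):
    # Coarse authorization gate runs first, so unauthorized requests
    # never reach the database/engine.
    if not (actor_is_superuser
            or (has_group_write and has_perm_sql_submit)
            or has_temporary_write):
        raise PermissionError("You do not have permission to submit SQL.")
    # Only authorized callers pay for the SQL check.
    return execute_check_stub()

try:
    submit(False, False, False)
except PermissionError:
    pass
assert engine_calls["count"] == 0  # engine never touched for a rejected user

submit(True, False, False)
assert engine_calls["count"] == 1
```

In the real serializer the gate cannot be purely up-front, because `user_has_instance_workflow_access` needs `check_result.syntax_type`; the review's suggestion is to run the cheap group/permission gates first and fall back to the engine check only when temporary access must be evaluated.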