feat(api): add complete Pages CRUD endpoints to v1 API#8865
nprodromou wants to merge 1 commit into makeplane:preview from
Conversation
Adds list, create, retrieve, update, delete, archive/unarchive, and lock/unlock endpoints for project pages in the public v1 API. Combines the best approaches from PRs makeplane#8650, makeplane#8669, and makeplane#8800:

- Pagination on list endpoint (from makeplane#8650/makeplane#8669)
- external_id/external_source support (from makeplane#8650/makeplane#8669)
- description_binary reset on create/update (from makeplane#8800)
- drf-spectacular OpenAPI annotations (from makeplane#8800)
- transaction.atomic for data integrity (from makeplane#8650/makeplane#8669)
- page_transaction task for version history (from makeplane#8650/makeplane#8669)
- Archive/unarchive and lock/unlock endpoints (from makeplane#8650)
- Comprehensive test suite

Closes makeplane#8598, closes makeplane#7319

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
📝 Walkthrough

This pull request introduces complete REST API support for Page management in Plane, adding four new endpoint classes with full CRUD operations, state transitions (archive/unarchive, lock/unlock), permission enforcement, and comprehensive contract tests covering all workflows and edge cases.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client
    participant View as PageListCreateAPIEndpoint
    participant Serializer as PageAPISerializer
    participant DB as Database
    participant Task as Task Queue
    Client->>View: POST /pages/ with page data
    View->>View: Check ProjectPagePermission
    View->>Serializer: Validate input data
    Serializer->>Serializer: Validate external_id uniqueness
    rect rgba(100, 150, 200, 0.5)
        Note over DB: Atomic Transaction
        View->>DB: Create Page instance
        View->>DB: Create ProjectPage join record
    end
    View->>Task: Enqueue page_transaction task
    View->>Serializer: Serialize created page
    Serializer->>View: Return serialized response
    View->>Client: 201 Created + Page data
```

```mermaid
sequenceDiagram
    participant Client as Client
    participant View as PageDetailAPIEndpoint
    participant Serializer as PageAPISerializer
    participant DB as Database
    participant Task as Task Queue
    Client->>View: PATCH /pages/{page_id}/ with updates
    View->>DB: Retrieve page
    View->>View: Check page exists & not archived
    View->>View: Verify not locked
    alt Non-owner changing access
        View->>View: Reject with 403
        View->>Client: Permission Denied
    else Valid update
        View->>DB: Clear description_binary if description_html provided
        View->>DB: Update page fields
        View->>Task: Enqueue page_transaction if description_html changed
        View->>Serializer: Serialize updated page
        View->>Client: 200 OK + Updated page
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/api/plane/api/views/page.py`:
- Around line 121-146: The POST-only duplicate external mapping check (using
Page.objects.filter with external_source/external_id) allows PATCH to create
duplicate mappings; move this logic into shared validation used for both create
and update (for example implement validate or validate_external_mapping in
PageAPISerializer) and invoke it for PATCH too; when validating on update,
exclude the current page by ID (use serializer.instance.id or request/kwargs
page id) from the queryset to avoid false-positive conflicts and return the same
409-style response or raise a ValidationError to be handled consistently.
- Around line 275-289: The code currently checks truthiness of
request.data.get("description_html") which ignores explicit empty updates;
change both branches to detect presence of the key using "description_html" in
request.data instead of truthiness so that
serializer.save(description_binary=None) runs when the client PATCHes
{"description_html": ""} and page_transaction.delay(...) is always enqueued when
the key is present; update the two occurrences that reference
request.data.get("description_html") and keep existing references to
serializer.save, description_binary, page_transaction.delay,
old_description_html, and page_id.
- Around line 55-66: get_queryset in the Page view returns all project pages
regardless of per-page privacy, so list routes leak other users' private pages;
update get_queryset (in apps/api/plane/api/views/page.py) to only include pages
that are public OR owned by the current request user (e.g., add a filter like
Q(is_private=False) | Q(owned_by=self.request.user) or equivalent depending on
the Page model's privacy field), using self.request.user and Django Q
expressions to combine the conditions while preserving the existing
workspace/project filters, select_related, ordering and distinct.
- Around line 57-63: The Page queryset is incorrectly mixing filters on the
through relation (projects__) with project_pages__ predicates, allowing
predicates to match different join rows; update the Page.objects.filter chain so
the project-scoped filters all use the project_pages join: replace
projects__id=... with project_pages__project_id=..., replace
projects__archived_at__isnull=True with
project_pages__project__archived_at__isnull=True, and keep
project_pages__deleted_at__isnull=True so all three predicates apply to the same
join; apply this same pattern wherever Page queries use projects__* and
project_pages__* (the other occurrences noted in the comment).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c1dad396-17ad-4286-9ddf-4b51edaedfdf
📒 Files selected for processing (7)

- apps/api/plane/api/serializers/__init__.py
- apps/api/plane/api/serializers/page.py
- apps/api/plane/api/urls/__init__.py
- apps/api/plane/api/urls/page.py
- apps/api/plane/api/views/__init__.py
- apps/api/plane/api/views/page.py
- apps/api/plane/tests/contract/api/test_pages.py
```python
def get_queryset(self):
    return (
        Page.objects.filter(workspace__slug=self.kwargs.get("slug"))
        .filter(
            projects__id=self.kwargs.get("project_id"),
            projects__archived_at__isnull=True,
        )
        .filter(project_pages__deleted_at__isnull=True)
        .select_related("workspace", "owned_by")
        .order_by("-created_at")
        .distinct()
    )
```
Exclude other users’ private pages from the list queryset.
ProjectPagePermission only checks private-page visibility when page_id is present (apps/api/plane/utils/permissions/page.py:18-125). On the collection route there is no page_id, so any active project member/guest that passes permission will get every page in this queryset, including private pages owned by someone else. Filter the list to public pages or pages owned by request.user.
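The visibility rule the reviewer is asking for can be sketched without Django. The snippet below is an illustrative, framework-free model of the intended filter — equivalent in spirit to `Q(access=PUBLIC) | Q(owned_by=request.user)` on the queryset. The `access` field name and its levels are assumptions; Plane's actual `Page` model may name its privacy field differently.

```python
from dataclasses import dataclass

# Hypothetical access levels; Plane's real field/values may differ.
PUBLIC, PRIVATE = 0, 1

@dataclass
class PageRow:
    id: int
    access: int
    owned_by_id: int

def visible_pages(pages, user_id):
    """List only pages that are public OR owned by the requesting user,
    mimicking Q(access=PUBLIC) | Q(owned_by=user) on a queryset."""
    return [p for p in pages if p.access == PUBLIC or p.owned_by_id == user_id]

pages = [
    PageRow(1, PUBLIC, owned_by_id=7),
    PageRow(2, PRIVATE, owned_by_id=7),   # someone else's private page: hidden
    PageRow(3, PRIVATE, owned_by_id=42),  # requester's own private page: shown
]
listed = visible_pages(pages, user_id=42)
```

Applied as a Django `Q` filter, this would drop page 2 from the list route while preserving the existing workspace/project scoping.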
```python
Page.objects.filter(workspace__slug=self.kwargs.get("slug"))
.filter(
    projects__id=self.kwargs.get("project_id"),
    projects__archived_at__isnull=True,
)
.filter(project_pages__deleted_at__isnull=True)
.select_related("workspace", "owned_by")
```
Scope every page lookup through the same ProjectPage row.
Because Page.projects is a through relation (apps/api/plane/db/models/page.py:1-67), mixing projects__id=... with project_pages__deleted_at__isnull=True lets Django satisfy those predicates from different join rows. A page soft-deleted from project A but still active in project B can therefore still be listed, fetched, archived, locked, or reparented through project A. Use project_pages__project_id=..., project_pages__deleted_at__isnull=True, and project_pages__project__archived_at__isnull=True off the same join everywhere.
🔧 Example of the scoped join pattern

```diff
-    return (
-        Page.objects.filter(workspace__slug=self.kwargs.get("slug"))
-        .filter(
-            projects__id=self.kwargs.get("project_id"),
-            projects__archived_at__isnull=True,
-        )
-        .filter(project_pages__deleted_at__isnull=True)
+    return (
+        Page.objects.filter(
+            workspace__slug=self.kwargs.get("slug"),
+            project_pages__project_id=self.kwargs.get("project_id"),
+            project_pages__deleted_at__isnull=True,
+            project_pages__project__archived_at__isnull=True,
+        )
         .select_related("workspace", "owned_by")
         .order_by("-created_at")
         .distinct()
     )
```

Also applies to: 192-198, 334-339, 382-387, 419-424, 457-462, 493-498
```python
serializer = PageAPISerializer(data=request.data)
if serializer.is_valid():
    # Check for duplicate external_id
    if (
        request.data.get("external_id")
        and request.data.get("external_source")
        and Page.objects.filter(
            projects__id=project_id,
            workspace__slug=slug,
            external_source=request.data.get("external_source"),
            external_id=request.data.get("external_id"),
        ).exists()
    ):
        existing = Page.objects.filter(
            workspace__slug=slug,
            projects__id=project_id,
            external_source=request.data.get("external_source"),
            external_id=request.data.get("external_id"),
        ).first()
        return Response(
            {
                "error": "Page with the same external id and external source already exists",
                "id": str(existing.id),
            },
            status=status.HTTP_409_CONFLICT,
        )
```
Enforce the external mapping uniqueness rule on PATCH as well.
The 409 guard only runs in POST. PATCH accepts external_id and external_source as writable fields and never rechecks them, so an update can duplicate another page’s external mapping even though creation rejects the same state. Extract this into shared project-scoped validation and exclude the current page when updating.
Also applies to: 273-280
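The shared validation the review suggests can be modelled with a small pure function: one helper used by both create and update, where update passes the current page's id so it never conflicts with itself. Everything here — `find_duplicate_mapping`, pages as plain dicts — is invented for illustration; the real fix would live in `PageAPISerializer` and query `Page.objects`.

```python
def find_duplicate_mapping(pages, external_source, external_id, exclude_id=None):
    """Return the conflicting page, or None. On update, pass the page being
    edited as exclude_id so it does not collide with its own mapping."""
    if not (external_source and external_id):
        return None  # mapping not set; nothing to check
    for page in pages:
        if page["id"] == exclude_id:
            continue  # skip the page being updated
        if (page["external_source"] == external_source
                and page["external_id"] == external_id):
            return page
    return None

pages = [
    {"id": "p1", "external_source": "jira", "external_id": "JIRA-1"},
    {"id": "p2", "external_source": "jira", "external_id": "JIRA-2"},
]
# Create: any existing match is a conflict (map to HTTP 409).
conflict = find_duplicate_mapping(pages, "jira", "JIRA-1")
# Update of p1 keeping its own mapping: no false-positive conflict.
ok = find_duplicate_mapping(pages, "jira", "JIRA-1", exclude_id="p1")
```

In the serializer, the same exclusion would typically be `queryset.exclude(id=self.instance.id)` when `self.instance` is set.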
```python
# Reset description_binary when description_html changes
if request.data.get("description_html"):
    serializer.save(description_binary=None)
else:
    serializer.save()

# Track page transaction for version history
if request.data.get("description_html"):
    page_transaction.delay(
        new_description_html=request.data.get(
            "description_html", "<p></p>"
        ),
        old_description_html=old_description_html,
        page_id=page_id,
    )
```
Detect description_html updates by key presence, not truthiness.
PATCH {"description_html": ""} will still clear the page through the serializer, but this branch is skipped because "" is falsy. That leaves description_binary stale and suppresses page_transaction for a real content change. Check for "description_html" in request.data in both places instead.
🩹 Proposed fix

```diff
-    if request.data.get("description_html"):
+    if "description_html" in request.data:
         serializer.save(description_binary=None)
     else:
         serializer.save()

     # Track page transaction for version history
-    if request.data.get("description_html"):
+    if "description_html" in request.data:
         page_transaction.delay(
             new_description_html=request.data.get(
                 "description_html", "<p></p>"
             ),
```
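A two-line demonstration of the distinction the review is drawing: `PATCH {"description_html": ""}` is a real content change, but a truthiness check treats it like an absent key.

```python
def should_reset_binary_truthy(data):
    # Buggy variant: "" is falsy, so an explicit clear is ignored.
    return bool(data.get("description_html"))

def should_reset_binary_presence(data):
    # Fixed variant: reacts to the key being sent at all.
    return "description_html" in data

empty_update = {"description_html": ""}  # explicit clear of the page body
no_update = {"name": "Renamed page"}     # description untouched
```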
Summary
Adds complete Pages CRUD support to the public v1 API, closing the gap between the internal app API and the external developer API.
Endpoints added:
- `.../projects/{id}/pages/` — list, create
- `.../projects/{id}/pages/{page_id}/` — retrieve, update, delete
- `.../projects/{id}/pages/{page_id}/archive/` — archive, unarchive
- `.../projects/{id}/pages/{page_id}/lock/` — lock, unlock

Design
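A small helper can make the route shapes above concrete. The URL prefix (the `...` part, workspace scoping and host) is intentionally left as a parameter since it is elided in the list; the function name and signature are invented for this sketch.

```python
def page_url(prefix, project_id, page_id=None, action=None):
    """Build a Pages v1 endpoint path. `prefix` stands in for the elided
    workspace-scoped base path; `action` is "archive" or "lock"."""
    url = f"{prefix}/projects/{project_id}/pages/"
    if page_id is not None:
        url += f"{page_id}/"
    if action is not None:
        url += f"{action}/"
    return url

collection = page_url("/api/v1/...", "123")                  # list / create
detail = page_url("/api/v1/...", "123", "p1")                # retrieve / update / delete
archive = page_url("/api/v1/...", "123", "p1", "archive")
lock = page_url("/api/v1/...", "123", "p1", "lock")
```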
This PR combines the best approaches from #8650, #8669, and #8800 into a single comprehensive implementation:
- Pagination via `self.paginate()` (from "feat: add api/v1 pages endpoint" #8650 / "Add Pages endpoints to v1 API" #8669)
- `external_id`/`external_source` support with 409 Conflict on duplicates (from #8650/#8669)
- `description_binary = None` reset on create/update — forces the Tiptap/Yjs collab server to reload content from HTML rather than serving stale binary state (from "feat(api): add Pages CRUD endpoints to v1 API" #8800, unique to that PR)
- `transaction.atomic()` around Page + ProjectPage creation (from #8650/#8669)
- `page_transaction.delay()` for description version history tracking (from #8650/#8669)
- `@extend_schema` annotations for OpenAPI docs (from #8800)
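The `transaction.atomic()` point above keeps the Page row and its ProjectPage join row all-or-nothing. A framework-free sketch of the same pattern using stdlib `sqlite3` (table and column names are invented for the example; Django's `with transaction.atomic():` plays the role of `with conn:` here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute(
    "CREATE TABLE project_pages (page_id INTEGER NOT NULL, project_id INTEGER NOT NULL)"
)

def create_page(conn, name, project_id):
    """Insert the page and its join row in one transaction: if the second
    insert fails, the first is rolled back, mirroring transaction.atomic()."""
    with conn:  # sqlite3 connection context manager == one transaction
        cur = conn.execute("INSERT INTO pages (name) VALUES (?)", (name,))
        conn.execute(
            "INSERT INTO project_pages (page_id, project_id) VALUES (?, ?)",
            (cur.lastrowid, project_id),
        )
        return cur.lastrowid

page_id = create_page(conn, "Roadmap", 1)

# A create whose join-row insert fails leaves no orphan page behind.
try:
    create_page(conn, "Draft", None)  # NOT NULL violation on project_id
except sqlite3.IntegrityError:
    pass
page_count = conn.execute("SELECT COUNT(*) FROM pages").fetchone()[0]
```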
Key decisions

- URL kwarg is `page_id` (not `pk`), to match `ProjectPagePermission`, which reads `view.kwargs.get("page_id")` for per-page permission checks
- On delete, child pages have `parent` set to `None` rather than being cascade-deleted

Test plan
- Contract tests in `tests/contract/api/test_pages.py`
- Verify the `description_binary` reset works with the Tiptap collab server
- Verify `page_transaction` records version history correctly

Closes #8598
Closes #7319
🤖 Generated with Claude Code