diff --git a/.ai/active/SPRINT_PACKET.md b/.ai/active/SPRINT_PACKET.md
index 31c1210..ee50023 100644
--- a/.ai/active/SPRINT_PACKET.md
+++ b/.ai/active/SPRINT_PACKET.md
@@ -1,33 +1,161 @@
# Sprint Packet
-## Status
+## Sprint Title
-- No active build sprint is open.
-- Context Compaction 01 is complete and archived under `docs/archive/planning/2026-04-08-context-compaction/` and `.ai/archive/planning/2026-04-08-context-compaction/`.
-- Phase 10 planning docs are not defined yet.
+Phase 10 Sprint 1 (P10-S1): Identity + Workspace Bootstrap
-## Why This File Exists
+## Sprint Type
-- Control Tower expects `.ai/active/SPRINT_PACKET.md` to exist even when the repo is between planning cycles.
-- Keep this file as an idle-state pointer, not as a fake active sprint.
+feature
-## Current Approval Branch
+## Sprint Reason
-- Branch purpose: one-off context compaction and archival cleanup before Phase 10 planning, not a new product sprint.
-- Branch name: `codex/refactor-context-compaction-01`
-- Base branch: `main`
-- PR strategy: create-or-update
-- Merge policy: squash-merge only after `REVIEW_REPORT.md` is `PASS` and Control Tower issues explicit merge approval.
+Phase 9 proved Alice can be installed and can interoperate, remember, and resume deterministically. Phase 10 must make Alice usable without local-only developer setup. `P10-S1` establishes the hosted identity and workspace foundations required before Telegram, chat-native continuity, and scheduled briefs can ship.
-## Branch Scope
+## Sprint Intent
-- compact live operating docs so active project memory reflects only current, durable Phase 9 truth
-- preserve superseded planning/control material in archive instead of deleting it
-- keep shipped Phase 9 release/quickstart/integration artifacts live and canonical
-- limit non-doc code changes to validation tooling/tests required for the new archive and idle-state control truth
+- hosted account and session model
+- magic-link-only authentication for the first hosted entry path
+- workspace creation and bootstrap flow
+- deterministic device linking and device management
+- preferences and hosted settings foundation for timezone and future brief policy inputs
+- beta cohort and feature-flag support
-## Next Activation Criteria
+## Git Instructions
-- Run the Phase 9 release checklist and runbook on a clean environment.
-- Add canonical Phase 10 planning docs.
-- Replace this placeholder only when a new approved sprint is activated.
+- Branch Name: `codex/phase10-sprint-1-identity-workspace-bootstrap`
+- Base Branch: `main`
+- PR Strategy: one sprint branch, one PR
+- Merge Policy: squash merge only after review `PASS` and explicit approval
+
+## Redundancy Guard
+
+- Already shipped baseline:
+ - Alice Core local-first runtime
+ - deterministic CLI continuity contract
+ - deterministic MCP transport
+ - OpenClaw, Markdown, and ChatGPT importers
+ - continuity engine, approvals, and eval harness
+- Required now:
+ - hosted identity, sessions, and device trust model
+ - workspace bootstrap and preferences model
+ - onboarding/settings foundations that later sprints can attach Telegram to
+- Explicitly out of `P10-S1`:
+ - passkeys or alternate auth methods beyond magic-link
+ - Telegram transport
+ - Telegram link/unlink UX
+ - chat-native continuity flows
+ - daily brief delivery
+ - scheduler execution
+ - backup or sync payload movement
+ - admin/support dashboards
+ - launch hardening
+
+## Exact APIs In Scope
+
+- `POST /v1/auth/magic-link/start`
+- `POST /v1/auth/magic-link/verify`
+- `POST /v1/auth/logout`
+- `GET /v1/auth/session`
+- `POST /v1/workspaces`
+- `GET /v1/workspaces/current`
+- `POST /v1/workspaces/bootstrap`
+- `GET /v1/workspaces/bootstrap/status`
+- `POST /v1/devices/link/start`
+- `POST /v1/devices/link/confirm`
+- `GET /v1/devices`
+- `DELETE /v1/devices/{device_id}`
+- `GET /v1/preferences`
+- `PATCH /v1/preferences`
+
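+A minimal happy-path sketch of the endpoints above. It uses `httpx` purely for illustration (not a confirmed project dependency), and the payload shapes plus the `session_token` response field are assumptions, not the committed contract:
+
+```python
+# Hypothetical P10-S1 happy path; payload shapes and `session_token` are
+# illustrative assumptions, not the committed API contract.
+import httpx
+
+BASE = "http://127.0.0.1:8000"
+
+with httpx.Client(base_url=BASE) as client:
+    # Request a magic link, then verify with the emailed token.
+    client.post("/v1/auth/magic-link/start", json={"email": "beta@example.com"})
+    verify = client.post(
+        "/v1/auth/magic-link/verify",
+        json={"email": "beta@example.com", "token": "<token-from-email>"},
+    )
+    headers = {"Authorization": f"Bearer {verify.json()['session_token']}"}
+
+    # Create and bootstrap a workspace, then poll bootstrap status.
+    client.post("/v1/workspaces", json={"name": "My Workspace"}, headers=headers)
+    client.post("/v1/workspaces/bootstrap", headers=headers)
+    client.get("/v1/workspaces/bootstrap/status", headers=headers)
+
+    # Persist preferences that later brief scheduling will read.
+    client.patch("/v1/preferences", json={"timezone": "Europe/Berlin"}, headers=headers)
+```
+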
+## Exact Data Additions In Scope
+
+- `user_accounts`
+- `auth_sessions`
+- `magic_link_challenges`
+- `devices`
+- `device_link_challenges`
+- `workspaces`
+- `workspace_members`
+- `user_preferences`
+- `beta_cohorts`
+- `feature_flags`
+
+## Exact Files And Modules In Scope
+
+- `apps/api/src/alicebot_api/main.py`
+- `apps/api/src/alicebot_api/config.py`
+- `apps/api/src/alicebot_api/contracts.py`
+- `apps/api/src/alicebot_api/store.py`
+- new hosted auth / workspace bootstrap / device / preferences modules under `apps/api/src/alicebot_api/`
+- API migrations under `apps/api/alembic/versions/`
+- hosted onboarding/settings pages and supporting UI under `apps/web/app/` and `apps/web/components/`
+- sprint-owned unit, integration, and web tests under `tests/` and `apps/web/app/**/*.test.tsx`
+- sprint-owned documentation updates required to keep active control truth aligned
+
+## Implementation Workstreams
+
+### API And Persistence
+
+- add hosted account, session, workspace, device, and preference contracts
+- add magic-link challenge lifecycle and authenticated session resolution
+- add workspace bootstrap state and feature-flag visibility needed by hosted onboarding
+- keep hosted identity/workspace records mapped cleanly onto the shipped Alice Core user/workspace semantics
+
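+A minimal sketch of the hash-at-rest challenge lifecycle this workstream implies, assuming a sha256 scheme (the actual hashing choice in `hosted_auth.py` is not specified here):
+
+```python
+# Sketch only: issue a one-time challenge token, persist only its hash, and
+# verify later presentations by re-hashing. sha256 is an assumed scheme.
+import hashlib
+import secrets
+from datetime import datetime, timedelta, timezone
+
+
+def issue_challenge(ttl_seconds: int) -> tuple[str, str, datetime]:
+    """Return (plaintext token to email, hash to persist, expiry)."""
+    token = secrets.token_urlsafe(32)
+    token_hash = hashlib.sha256(token.encode()).hexdigest()
+    expires_at = datetime.now(timezone.utc) + timedelta(seconds=ttl_seconds)
+    return token, token_hash, expires_at
+
+
+def lookup_hash(presented_token: str) -> str:
+    """Hash the presented token so challenges are looked up by hash and
+    plaintext secrets never reach the database."""
+    return hashlib.sha256(presented_token.encode()).hexdigest()
+```
+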
+### Hosted UX
+
+- add the minimal hosted web flow needed to sign in, create or bootstrap a workspace, manage linked devices, and update preferences
+- keep the surface narrow and utilitarian; this sprint is foundation, not launch polish
+- show hosted bootstrap readiness only; do not imply Telegram is available yet
+
+### Verification
+
+- add unit coverage for auth, session, device, workspace bootstrap, and preference logic
+- add integration coverage for all `P10-S1` endpoints, including invalid-token, expired-token, duplicate-bootstrap, and revoked-device paths
+- add web tests for the hosted onboarding/settings slice
+- keep control-doc truth checks passing after packet and current-state updates
+
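+As a shape reference only, an expired-token integration check might look like the sketch below; the `client` fixture name, the expiry control, and the accepted status codes are assumptions about the suite, not its committed contents:
+
+```python
+# Hypothetical expired-token path; fixture name and accepted status codes
+# are illustrative assumptions.
+def test_expired_magic_link_token_is_rejected(client):
+    client.post("/v1/auth/magic-link/start", json={"email": "u@example.com"})
+    # Advance or stub time past MAGIC_LINK_TTL_SECONDS, then verify.
+    response = client.post(
+        "/v1/auth/magic-link/verify",
+        json={"email": "u@example.com", "token": "expired-token"},
+    )
+    assert response.status_code in (400, 401, 410)
+```
+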
+## Required Deliverables
+
+- hosted account model
+- magic-link auth
+- device linking
+- workspace bootstrap flow
+- hosted settings page for timezone, brief-preference inputs, quiet-hours inputs, and device visibility
+- beta cohort and feature-flag support
+
+## Acceptance Criteria
+
+- a new user can create a workspace without touching a repo
+- a returning user can log in securely
+- device linking works deterministically
+- preferences persist and are exposed in hosted bootstrap/settings responses for later brief scheduling
+- Phase 9 shipped scope is baseline truth, not sprint work
+- hosted identity does not diverge from local workspace semantics
+- no `P10-S1` screen or API claims that Telegram is already linked or available
+
+## Required Verification Commands
+
+- `python3 scripts/check_control_doc_truth.py`
+- `./.venv/bin/python -m pytest tests/unit tests/integration -q`
+- `pnpm --dir apps/web test`
+
+## Review Evidence Requirements
+
+- `BUILD_REPORT.md` must list the exact sprint-owned files changed and the exact command results above
+- `REVIEW_REPORT.md` must grade against `P10-S1` specifically, not generic Phase 10 planning
+- if local archive paths remain dirty, they must be called out explicitly as excluded from sprint merge scope
+
+## Implementation Constraints
+
+- do not fork continuity semantics between hosted surfaces and Alice Core
+- keep OSS versus product boundaries explicit in docs and API naming
+- preserve existing approval, provenance, and correction discipline
+- do not widen Phase 10 scope to Telegram or notifications inside this sprint
+- do not ship a scheduler in `P10-S1`; preference storage is enough
+- do not represent Telegram channel state before `P10-S2`
+- prefer additive hosted-control-plane seams over invasive rewrites of shipped Phase 9 paths
+
+## Exit Condition
+
+`P10-S1` is complete when a user can authenticate by magic link, create or bootstrap a workspace, link and revoke devices, persist hosted preferences, and land in a hosted bootstrap state that is explicitly ready for later Telegram linkage without reopening shipped Phase 9 scope.
diff --git a/.ai/handoff/CURRENT_STATE.md b/.ai/handoff/CURRENT_STATE.md
index 692acc0..35ceee0 100644
--- a/.ai/handoff/CURRENT_STATE.md
+++ b/.ai/handoff/CURRENT_STATE.md
@@ -2,26 +2,37 @@
## Status
-- Phase 9 is complete.
-- No active build sprint is open.
-- Phase 10 planning docs are not defined yet.
+- Phase 9 is complete and shipped.
+- Phase 10 planning is defined as Alice Connect.
+- `P10-S1` (Identity + Workspace Bootstrap) is the first Phase 10 execution sprint, and its packet is active.
+- No Phase 10 product surface is shipped yet.
-## Canonical Shipped Surface
+## Canonical Baseline
-- Alice ships a local-first runtime, deterministic CLI, deterministic MCP transport, OpenClaw/Markdown/ChatGPT importers, and a reproducible local evaluation harness.
-- Canonical Phase 9 launch references remain under `docs/quickstart/`, `docs/integrations/`, `docs/examples/phase9-command-walkthrough.md`, `docs/release/`, `docs/runbooks/phase9-public-release-runbook.md`, and `eval/`.
+- Shipped OSS surface: local-first runtime, deterministic CLI, deterministic MCP transport, OpenClaw/Markdown/ChatGPT importers, continuity engine, approvals, and evaluation harness.
+- Canonical shipped docs remain under `docs/quickstart/`, `docs/integrations/`, `docs/examples/phase9-command-walkthrough.md`, `docs/release/`, `docs/runbooks/phase9-public-release-runbook.md`, and `eval/`.
+- Repo runtime remains a modular monolith: FastAPI API/core, Next.js web app, workers, Postgres, Redis, and MinIO.
-## Active Constraints
+## Current Phase 10 Target
-- Do not reopen shipped Phase 9 product scope while the repo is waiting for Phase 10 planning.
-- Keep public claims tied to runnable commands and committed evidence.
-- Archive planning history instead of carrying it in live control files.
+- hosted identity and workspace bootstrap
+- device and channel linking
+- Telegram-first chat access
+- chat-native capture, recall, resume, correction, open-loop review, and approvals
+- daily brief and notification loop
+- beta rollout, support, and observability tooling
+
+## Active Sprint Focus
-## Next Control Move
+- `P10-S1` covers account/session foundations, workspace bootstrap, device linking, preferences, and beta controls.
+- Telegram transport, chat-native continuity, daily briefs, and launch hardening are later Phase 10 milestones.
+- Phase 9 shipped scope is baseline truth and must not be reopened as sprint work.
-- Run the Phase 9 release checklist and runbook end to end on a clean environment.
-- Cut `v0.1.0` only after those gates pass and explicit approval is given.
-- Add Phase 10 planning docs before activating another sprint.
+## Active Constraints
+
+- Preserve parity between local, CLI, MCP, and future Telegram behavior.
+- Keep OSS versus hosted product scope explicit in docs and APIs.
+- Archive planning history instead of carrying it in live control files.
## Archive Pointers
diff --git a/ARCHITECTURE.md b/ARCHITECTURE.md
index 8bd29cf..5cb0153 100644
--- a/ARCHITECTURE.md
+++ b/ARCHITECTURE.md
@@ -2,65 +2,86 @@
## System Overview
-Alice is a local-first continuity system built around durable events, typed continuity objects, correction-aware retrieval, and deterministic recall/resumption compilation.
+Phase 10 keeps the shipped Phase 9 modular monolith and adds a hosted product layer on top of the same continuity core. Alice Core remains authoritative for continuity objects, recall, resume, corrections, approvals, and provenance-backed retrieval. Hosted identity, Telegram, and scheduling orchestrate access to that core; they do not create a second semantics stack.
-The public v0.1 surface exposes already-shipped seams through a narrow local-first contract:
+## Technical Stack
-- local runtime and package boundary
-- deterministic CLI continuity contract
-- deterministic MCP transport with a narrow tool surface
-- OpenClaw, Markdown, and ChatGPT import adapters
-- a reproducible evaluation harness and launch docs grounded in those shipped paths
+- API and core runtime: Python 3.12 + FastAPI under `apps/api/src/alicebot_api`
+- Web app: Next.js 15 + React 19 under `apps/web`
+- Background/task surface: `workers`
+- Primary data store: Postgres with `pgvector`
+- Local support services: Redis and MinIO via `docker-compose.yml`
+- Packaging: `alice-core` in `pyproject.toml`
+- Test surface: pytest, Vitest, and the Phase 9 evaluation harness
-## Technical Stack
+## Runtime Boundaries
+
+### Core Data Plane
+
+Owns:
+
+- continuity capture and revision persistence
+- typed continuity objects and memory revisions
+- recall and resumption compilation
+- entities, edges, and open loops
+- approvals and audit traces
+- CLI and MCP semantics
+- importer provenance and deterministic dedupe
+
+### Hosted Control Plane
+
+Owns:
+
+- user accounts and auth sessions
+- devices and trust levels
+- workspaces and bootstrap state
+- channel bindings
+- user preferences and notification policy
+- beta cohorts and feature flags
+- telemetry and support tooling
+
+### Surface Layer
-- Backend: Python + FastAPI
-- Web shell: Next.js + React
-- Data store: Postgres (`pgvector` enabled)
-- Local infra: Docker Compose, Redis, MinIO
-- Test stack: pytest + Vitest
-- Public package metadata: `pyproject.toml` (`alice-core` version `0.1.0`)
+- local API and CLI
+- MCP server
+- Telegram adapter and chat routing layer
+- web onboarding/settings/admin surfaces
+- brief and notification scheduler
-## Runtime Layers
+## Phase 10 Core Flows
-1. Continuity capture and revision/event persistence
-2. Recall and resumption compilation layer
-3. Trust and memory-quality posture
-4. CLI and MCP transport surface
-5. Import adapters with deterministic provenance/dedupe
-6. Evaluation harness and evidence outputs
+### Onboarding
-## Public Interface Boundaries
+1. User authenticates and receives a hosted session.
+2. User creates or bootstraps a workspace.
+3. Device and channel bindings are established.
+4. Preferences and import choices are stored.
+5. Alice generates a first brief against the existing continuity core.
-### CLI (`P9-S34`)
+### Inbound Chat
-- entrypoints: `python -m alicebot_api` and optional `alicebot`
-- commands: `status`, `capture`, `recall`, `resume`, `open-loops`, `review *`
-- output posture: deterministic formatting with provenance snippets
+1. Telegram webhook receives an inbound message.
+2. The message is normalized into a common channel message contract.
+3. Routing resolves workspace, actor, and best-fit continuity context.
+4. Core capture/recall/resume/correction logic executes.
+5. A reply is dispatched back through the same channel thread.
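+
+A sketch of what the normalized contract in step 2 could look like; the type and field names are illustrative assumptions, loosely mirroring the `channel_messages` table listed below:
+
+```python
+# Illustrative normalized channel message; names are assumptions, not the
+# committed contract.
+from dataclasses import dataclass
+from datetime import datetime
+
+
+@dataclass(frozen=True)
+class ChannelMessage:
+    channel: str               # e.g. "telegram"
+    external_message_id: str   # source channel's message id
+    thread_id: str
+    sender_identity: str
+    text: str
+    received_at: datetime
+
+
+def dedupe_key(msg: ChannelMessage) -> str:
+    # Deterministic idempotency key so webhook retries never double-capture.
+    return f"{msg.channel}:{msg.external_message_id}"
+```
+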
-### MCP (`P9-S35`)
+### Approval
-- entrypoints: `python -m alicebot_api.mcp_server` and optional `alicebot-mcp`
-- intentionally narrow tools:
- - `alice_capture`
- - `alice_recall`
- - `alice_resume`
- - `alice_open_loops`
- - `alice_recent_decisions`
- - `alice_recent_changes`
- - `alice_memory_review`
- - `alice_memory_correct`
- - `alice_context_pack`
+1. Core logic emits an approval request.
+2. Chat surface presents approve/reject/context actions.
+3. Approval resolution writes back to the same approval and audit objects used by other surfaces.
-### Importers (`P9-S36` / `P9-S37`)
+### Daily Brief
-- `openclaw_import`
-- `markdown_import`
-- `chatgpt_import`
+1. Scheduler selects workspaces due for delivery.
+2. Brief compiler builds a deterministic summary from continuity state.
+3. Notification policy and quiet hours are applied.
+4. Delivery receipts and failures are recorded for support tooling.
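+
+Step 3 amounts to a timezone-aware window check. A minimal sketch, assuming preferences store an IANA timezone plus local start/end times (the parameter shapes are illustrative):
+
+```python
+# Minimal quiet-hours gate; the real notification policy model ships in a
+# later Phase 10 milestone, and these parameter shapes are assumptions.
+from datetime import datetime, time
+from zoneinfo import ZoneInfo
+
+
+def within_quiet_hours(now_utc: datetime, tz: str, start: time, end: time) -> bool:
+    local = now_utc.astimezone(ZoneInfo(tz)).time()
+    if start <= end:
+        return start <= local < end
+    # Window wraps midnight, e.g. quiet from 22:00 to 07:00.
+    return local >= start or local < end
+```
+
+A scheduler would skip or defer delivery while this returns true and record the decision for support tooling.
+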
-All importers keep source-specific provenance fields and deterministic dedupe keys.
+## Data Model Summary
-## Core Data Objects
+### Existing Baseline Objects
- continuity capture events
- typed continuity objects
@@ -68,17 +89,47 @@ All importers keep source-specific provenance fields and deterministic dedupe ke
- open loops and brief-ready summaries
- import provenance with explicit `source_kind`
-## Security and Governance Posture
+### Phase 10 Additions
+
+Control-plane tables:
+
+- `user_accounts`
+- `auth_sessions`
+- `devices`
+- `workspaces`
+- `workspace_members`
+- `user_preferences`
+- `beta_cohorts`
+- `feature_flags`
+
+Channel and scheduler tables:
+
+- `channel_identities`
+- `channel_messages`
+- `channel_threads`
+- `channel_delivery_receipts`
+- `chat_intents`
+- `continuity_briefs`
+- `approval_challenges`
+- `daily_brief_jobs`
+- `notification_subscriptions`
+- `open_loop_reviews`
+- `chat_telemetry`
+
+## Security and Governance
- Postgres remains the system of record.
-- User-owned tables remain RLS-governed.
-- Append-only event/revision semantics are preserved.
-- Public surfaces do not bypass trust/provenance discipline.
-- Consequential actions remain approval-bounded.
+- Hosted identity and channel access add to, but do not bypass, existing approval and provenance discipline.
+- Append-only continuity and correction history stay intact.
+- Device linking, channel binding, and session expiry are explicit control-plane concerns.
+- Consequential actions remain approval-bounded even when initiated from chat.
+- Opt-in backup/sync must preserve user isolation and encryption boundaries.
+
+## Deployment
-## Local Deployment Model
+### Shipped Baseline
-Canonical startup path:
+Canonical local startup path remains:
```bash
docker compose up -d
@@ -87,15 +138,16 @@ docker compose up -d
APP_RELOAD=false ./scripts/api_dev.sh
```
-Health check:
+### Phase 10 Production Additions
-```bash
-curl -sS http://127.0.0.1:8000/healthz
-```
+- hosted auth/session endpoints
+- public webhook ingress for Telegram
+- scheduler/worker execution for briefs and notifications
+- support/admin visibility for beta operations
-## Evidence and Test Surface
+## Testing
-Required verification commands for launch docs and release assets:
+Existing quality gates remain:
```bash
./.venv/bin/python -m pytest tests/unit tests/integration
@@ -103,17 +155,22 @@ pnpm --dir apps/web test
./scripts/run_phase9_eval.sh --report-path eval/reports/phase9_eval_latest.json
```
-Evidence artifacts:
+Phase 10 adds targeted verification for:
-- `eval/baselines/phase9_s37_baseline.json`
-- `eval/reports/phase9_eval_latest.json`
+- auth and workspace bootstrap
+- device and channel linking
+- idempotent webhook ingest and outbound delivery
+- cross-surface parity between local, CLI, MCP, and Telegram
+- daily brief scheduling, quiet hours, and failure handling
+- support telemetry and rollout controls
## Architecture Constraints
-- Preserve shipped P5/P6/P7/P8 semantics.
-- Do not expand the MCP tool surface without an explicit planning update.
-- Do not add importer families beyond shipped OpenClaw/Markdown/ChatGPT paths without roadmap approval.
-- Keep public docs aligned to real command paths and committed evidence.
+- Phase 10 must not fork semantics between local, CLI, MCP, and Telegram.
+- Telegram is another surface on the same core objects, not a separate assistant stack.
+- Control-plane additions must not rewrite shipped Alice Core contracts.
+- Do not expand connector breadth beyond Telegram in Phase 10 without an explicit roadmap change.
+- Keep docs clear about what is shipped OSS baseline versus planned beta surface.
## Historical Traceability
diff --git a/ARCHIVE_RECOMMENDATIONS.md b/ARCHIVE_RECOMMENDATIONS.md
index a7c178e..7219dc3 100644
--- a/ARCHIVE_RECOMMENDATIONS.md
+++ b/ARCHIVE_RECOMMENDATIONS.md
@@ -2,18 +2,19 @@
## Archive Instead Of Keeping In Live Agent Memory
-- Investor framing, executive rhetoric, and narrative persuasion.
-- Long strategy memos once their decisions have been distilled into product, architecture, roadmap, and rules.
-- Raw brainstorms, option dumps, and redundant scope alternatives.
-- Verbose roadmap history and schedule speculation.
-- Meeting notes, implementation diaries, and retrospective prose.
-- Detailed vendor pricing snapshots and model-cost assumptions that will drift quickly.
-- Duplicate sprint plans or decomposition notes once a current sprint packet exists.
-- Example-heavy explanatory text that does not change the operating rules.
+- investor framing, fundraising rhetoric, and narrative persuasion
+- long strategy memos once the decisions are captured in product, architecture, roadmap, rules, and ADRs
+- duplicate UX-flow descriptions after the canonical operating files are updated
+- exhaustive endpoint and table catalogs once implementation docs or ADRs exist
+- launch checklist detail once it is converted into runbooks, tests, and rollout tasks
+- go-to-market speculation such as pricing, waitlist copy, launch-post drafts, and demo collateral plans
+- schedule guesswork, sprint diary prose, and repeated acceptance-criteria wording
+- explicit non-goal brainstorming beyond the canonical non-goals list
## Keep As Archived Reference Only
-- Original source plans and memos that fed the bootstrap.
-- Full schema sketches and endpoint catalogs before implementation-specific docs are created.
-- Older roadmap versions after milestone sequencing changes.
-- Historical task and review notes that may help reconstruct decision context later.
+- the original Phase 10 source packet and any mixed concept memo that fed it
+- full API and schema packets before implementation-specific docs exist
+- historical roadmap versions after milestone ordering changes
+- detailed beta-operations notes that are useful for later audits but not for active agent context
+- superseded sprint packets and planning snapshots kept under `docs/archive/` and `.ai/archive/`
diff --git a/BUILD_REPORT.md b/BUILD_REPORT.md
index 678b667..c885eb0 100644
--- a/BUILD_REPORT.md
+++ b/BUILD_REPORT.md
@@ -1,87 +1,109 @@
-# BUILD_REPORT.md
+# BUILD_REPORT
## sprint objective
-
-Compact the repo's live operating docs so `README.md`, `ROADMAP.md`, `RULES.md`, and `.ai/handoff/CURRENT_STATE.md` hold only current, durable Phase 9 truth while superseded planning and control material is preserved in archive.
+Implement **Phase 10 Sprint 1 (P10-S1): Identity + Workspace Bootstrap** with hosted magic-link auth, hosted workspace bootstrap, deterministic device linking/management, hosted preferences persistence, and beta cohort/feature-flag foundations without expanding into Telegram delivery/linking scope.
## completed work
-
-- Rewrote the live control docs to reflect the correct idle state:
- - Phase 9 is complete
- - no active build sprint is open
- - Phase 10 planning docs are not defined yet
-- Replaced `.ai/active/SPRINT_PACKET.md` with an explicit idle-state placeholder so the active control path matches the repo's no-active-sprint truth.
-- Slimmed `README.md` to onboarding, shipped-product truth, and canonical doc pointers.
-- Rewrote `ROADMAP.md` to be future-facing instead of a sprint ledger.
-- Pruned `RULES.md` down to durable reusable rules.
-- Compacted `.ai/handoff/CURRENT_STATE.md` into current-state truth plus next control move.
-- Pruned `PRODUCT_BRIEF.md` and `ARCHITECTURE.md` to remove stale sprint-ledger and legacy-marker language from canonical docs.
-- Slimmed `CHANGELOG.md` to short release-facing history.
-- Archived superseded Phase 9 planning/history docs under `docs/archive/planning/2026-04-08-context-compaction/`:
- - `phase9-product-spec.md`
- - `phase9-sprint-33-38-plan.md`
- - `phase9-sprint-33-control-tower-packet.md`
- - `phase9-bootstrap-notes.md`
-- Preserved pre-compaction snapshots of the live docs in the same archive folder:
- - `README.pre-compaction.md`
- - `ROADMAP.pre-compaction.md`
- - `RULES.pre-compaction.md`
-- Preserved superseded control snapshots under `.ai/archive/planning/2026-04-08-context-compaction/`:
- - `CURRENT_STATE.pre-compaction.md`
- - `SPRINT_PACKET.context-compaction-01.md`
-- Added `docs/archive/planning/2026-04-08-context-compaction/README.md` as the canonical archive index for this compaction pass.
-- Repaired archived snapshot links where moving the file would otherwise leave dead relative references.
-- Updated the existing control-doc validation script and its unit test so repo validation matches the compacted control truth, including the idle active-sprint placeholder.
+- Updated the active control/docs layer to reflect an active `P10-S1` execution sprint instead of the post-Phase-9 idle placeholder:
+ - `.ai/active/SPRINT_PACKET.md`
+ - `.ai/handoff/CURRENT_STATE.md`
+ - `README.md`
+ - `ROADMAP.md`
+ - `RULES.md`
+ - `ARCHITECTURE.md`
+ - `PRODUCT_BRIEF.md`
+ - `ARCHIVE_RECOMMENDATIONS.md`
+ - `RECOMMENDED_ADRS.md`
+- Added hosted control-plane migration for all sprint data additions:
+ - `user_accounts`, `auth_sessions`, `magic_link_challenges`, `devices`, `device_link_challenges`, `workspaces`, `workspace_members`, `user_preferences`, `beta_cohorts`, `feature_flags`.
+- Implemented new hosted modules under API source:
+ - `hosted_auth.py` (magic-link lifecycle, session issuance/validation/logout, feature-flag resolution)
+ - `hosted_workspace.py` (workspace creation/current selection/bootstrap status/complete)
+ - `hosted_devices.py` (device-link challenge start/confirm, list, revoke + session revocation)
+ - `hosted_preferences.py` (timezone validation + preference get/patch persistence)
+- Added full `v1` API surface in `main.py`:
+ - `POST /v1/auth/magic-link/start`
+ - `POST /v1/auth/magic-link/verify`
+ - `POST /v1/auth/logout`
+ - `GET /v1/auth/session`
+ - `POST /v1/workspaces`
+ - `GET /v1/workspaces/current`
+ - `POST /v1/workspaces/bootstrap`
+ - `GET /v1/workspaces/bootstrap/status`
+ - `POST /v1/devices/link/start`
+ - `POST /v1/devices/link/confirm`
+ - `GET /v1/devices`
+ - `DELETE /v1/devices/{device_id}`
+ - `GET /v1/preferences`
+ - `PATCH /v1/preferences`
+- Added config knobs for hosted TTL controls:
+ - `MAGIC_LINK_TTL_SECONDS`, `AUTH_SESSION_TTL_SECONDS`, `DEVICE_LINK_TTL_SECONDS`.
+- Added hosted contract types in `contracts.py` for account/session/workspace/device/preferences records and statuses.
+- Added hosted onboarding/settings web slice:
+ - new routes `/onboarding` and `/settings`
+ - supporting components for onboarding and settings posture
+ - navigation + overview route-card updates
+ - explicit messaging that Telegram linkage is not available in `P10-S1`.
+- Added verification coverage:
+  - integration coverage for all new `v1` flows, including invalid-token, expired-token, duplicate-bootstrap, and revoked-device session paths
+ - unit coverage for hosted helper logic and migration wiring
+ - web tests for onboarding/settings pages.
## incomplete work
-
-- No broader historical docs outside the moved Phase 9 planning/control set were archived in this sprint.
+- None within `P10-S1` acceptance scope.
## files changed
-
-- `.ai/handoff/CURRENT_STATE.md`
+Sprint-owned files changed:
- `.ai/active/SPRINT_PACKET.md`
+- `.ai/handoff/CURRENT_STATE.md`
+- `ARCHITECTURE.md`
+- `ARCHIVE_RECOMMENDATIONS.md`
+- `PRODUCT_BRIEF.md`
- `README.md`
+- `RECOMMENDED_ADRS.md`
- `ROADMAP.md`
- `RULES.md`
-- `PRODUCT_BRIEF.md`
-- `ARCHITECTURE.md`
-- `CHANGELOG.md`
-- `docs/archive/planning/2026-04-08-context-compaction/README.md`
-- `docs/archive/planning/2026-04-08-context-compaction/README.pre-compaction.md`
-- `docs/archive/planning/2026-04-08-context-compaction/ROADMAP.pre-compaction.md`
-- `docs/archive/planning/2026-04-08-context-compaction/RULES.pre-compaction.md`
-- `docs/archive/planning/2026-04-08-context-compaction/phase9-product-spec.md`
-- `docs/archive/planning/2026-04-08-context-compaction/phase9-sprint-33-38-plan.md`
-- `docs/archive/planning/2026-04-08-context-compaction/phase9-sprint-33-control-tower-packet.md`
-- `docs/archive/planning/2026-04-08-context-compaction/phase9-bootstrap-notes.md`
-- `.ai/archive/planning/2026-04-08-context-compaction/CURRENT_STATE.pre-compaction.md`
-- `.ai/archive/planning/2026-04-08-context-compaction/SPRINT_PACKET.context-compaction-01.md`
- `scripts/check_control_doc_truth.py`
-- `tests/unit/test_control_doc_truth.py`
- `BUILD_REPORT.md`
- `REVIEW_REPORT.md`
+- `apps/api/alembic/versions/20260408_0043_phase10_identity_workspace_bootstrap.py`
+- `apps/api/src/alicebot_api/config.py`
+- `apps/api/src/alicebot_api/contracts.py`
+- `apps/api/src/alicebot_api/main.py`
+- `apps/api/src/alicebot_api/hosted_auth.py`
+- `apps/api/src/alicebot_api/hosted_workspace.py`
+- `apps/api/src/alicebot_api/hosted_devices.py`
+- `apps/api/src/alicebot_api/hosted_preferences.py`
+- `tests/integration/test_phase10_identity_workspace_bootstrap_api.py`
+- `tests/unit/test_20260408_0043_phase10_identity_workspace_bootstrap.py`
+- `tests/unit/test_phase10_hosted_modules.py`
+- `apps/web/app/onboarding/page.tsx`
+- `apps/web/app/onboarding/page.test.tsx`
+- `apps/web/app/settings/page.tsx`
+- `apps/web/app/settings/page.test.tsx`
+- `apps/web/components/hosted-onboarding-panel.tsx`
+- `apps/web/components/hosted-settings-panel.tsx`
+- `apps/web/components/app-shell.tsx`
+- `apps/web/app/page.tsx`
## tests run
+Required verification commands and results:
+- `python3 scripts/check_control_doc_truth.py`
+ - `Control-doc truth check: PASS`
+ - Verified: `README.md`, `ROADMAP.md`, `.ai/active/SPRINT_PACKET.md`, `RULES.md`, `.ai/handoff/CURRENT_STATE.md`, `docs/archive/planning/2026-04-08-context-compaction/README.md`
+- `./.venv/bin/python -m pytest tests/unit tests/integration -q`
+ - `990 passed in 108.75s (0:01:48)`
+- `pnpm --dir apps/web test`
+ - `Test Files 59 passed (59)`
+ - `Tests 194 passed (194)`
-- Manual review of `README.md`, `ROADMAP.md`, `RULES.md`, `.ai/handoff/CURRENT_STATE.md`, and `CHANGELOG.md` for duplication and stale control language.
-- `rg -n "through Phase 3 Sprint 9|Active Sprint focus is Phase 4 Sprint 14|Gate ownership is canonicalized to Phase 4 runner scripts|Gate ownership is canonicalized to Phase 4 runner script names|Legacy Compatibility Markers|Phase 9 Sprint Sequence" README.md ROADMAP.md RULES.md .ai/handoff/CURRENT_STATE.md`
- - PASS (no stale live-control markers)
-- `rg -n "docs/phase9-product-spec.md|docs/phase9-sprint-33-38-plan.md|docs/phase9-sprint-33-control-tower-packet.md|docs/phase9-bootstrap-notes.md" README.md CHANGELOG.md ROADMAP.md RULES.md .ai/handoff/CURRENT_STATE.md docs scripts tests .ai`
- - PASS (no stale references from canonical/live surfaces)
-- `rg --pcre2 -n "\\]\\((?!https?://|/)[^)]+\\)" docs/archive/planning/2026-04-08-context-compaction .ai/archive/planning/2026-04-08-context-compaction`
- - PASS after link normalization review for archived snapshots
-- `./.venv/bin/python scripts/check_control_doc_truth.py`
- - PASS
-- `./.venv/bin/python -m pytest tests/unit/test_control_doc_truth.py -q`
- - PASS (`5 passed`)
-- `git diff --name-only`
- - PASS for scope review: only docs plus doc-validation tooling/tests changed; no product-behavior files were modified in this sprint work
+Additional focused checks run during implementation:
+- `./.venv/bin/python -m pytest tests/unit/test_phase10_hosted_modules.py tests/unit/test_20260408_0043_phase10_identity_workspace_bootstrap.py tests/integration/test_phase10_identity_workspace_bootstrap_api.py -q`
+ - `9 passed in 1.37s`
## blockers/issues
-
-- No remaining functional blockers in branch scope.
+- No implementation blockers.
+- One transient web-test failure from an ambiguous duplicate-text match was resolved by tightening the selector to role-based heading assertions.
## recommended next step
-
-Seek explicit Control Tower merge approval for this compaction branch, then proceed to Phase 10 planning document creation only after the Phase 9 release checklist/runbook gates are complete.
+Seek explicit Control Tower merge approval for `P10-S1`, using this branch head and the verification evidence above.
diff --git a/PRODUCT_BRIEF.md b/PRODUCT_BRIEF.md
index dad7664..c23fac2 100644
--- a/PRODUCT_BRIEF.md
+++ b/PRODUCT_BRIEF.md
@@ -2,75 +2,88 @@
## Product Summary
-Alice is a local-first memory and continuity layer for AI agents. It persists durable context, compiles useful working context on demand, and improves future retrieval when users apply corrections.
+Alice Connect is the Phase 10 product layer on top of shipped Phase 9 Alice Core. Alice Core remains the open-source local-first continuity engine; Alice Connect adds hosted identity, workspace bootstrap, Telegram-first access, chat-native continuity actions, and a daily brief loop for non-developer beta users.
## Problem
-General-purpose assistants and agent stacks still lose long-horizon continuity. They forget decisions, drop open loops, and require repeated context restatement.
+Phase 9 proved Alice can be installed and can interoperate, remember, and resume deterministically. That shipped wedge does not yet make Alice usable every day for someone who will not touch a repo, CLI, or MCP setup.
## Target Users
-- Technical individual users who want a local continuity engine.
-- Developers and agent builders who need durable recall, resumption, and correction-aware memory.
-- Users with existing workspace/chat/note exports they want to import into one governed continuity store.
+- Non-developer beta users who want a personal continuity assistant in chat.
+- Individual professionals who need capture, recall, resume, open-loop review, and lightweight approvals in Telegram.
+- OSS adopters who start local-first and may later opt into a hosted product layer.
-## Core Value Proposition
+## Why It Matters
-- Durable memory and continuity across sessions.
-- Deterministic recall and resumption output.
-- Open-loop visibility (blocked, waiting, next action).
-- Correction-aware retrieval that updates future output.
-- Interoperability via CLI and MCP with a deliberately narrow contract.
+- turns continuity from a technical engine into a daily habit
+- makes chat the default interface without forking core semantics
+- creates a clear OSS-to-product path instead of a separate product rewrite
-## Current Shipped Surface
+## Shipped Baseline
-The shipped v0.1 wedge includes:
+Phase 9 is complete and shipped. Baseline truth is:
-- a local-first runtime boundary
+- Alice Core local-first runtime
- deterministic CLI continuity commands
- deterministic MCP transport with a narrow tool surface
-- OpenClaw, Markdown, and ChatGPT import paths
-- a reproducible local evaluation harness and baseline evidence
-- quickstart, integration, release, and runbook docs grounded in those shipped paths
+- OpenClaw, Markdown, and ChatGPT importers
+- continuity engine, approvals, and evaluation harness
+- public quickstart, integration, release, and runbook docs for the OSS wedge
-## Non-Goals (v0.1)
+## V1 Scope (Phase 10)
-- hosted SaaS dependency for initial launch
-- broad connector write actions
-- Telegram/WhatsApp channel expansion
-- deep browser automation
-- enterprise platform expansion in v0.1
+### Open Source Surface
-## Key User Journeys
+- Alice Core
+- CLI
+- MCP
+- importers
+- OpenClaw adapter
-1. Install Alice locally and get a first useful recall result in under 30 minutes.
-2. Load deterministic sample data and generate a resumption brief.
-3. Import external context from OpenClaw, Markdown, or ChatGPT export.
-4. Run correction flow and verify future retrieval follows corrected truth.
+### Product / Beta Surface
-## Constraints
+- Alice Connect account
+- hosted workspace bootstrap
+- device and channel linking
+- Telegram access
+- chat-native capture, recall, resume, correction, and open-loop review
+- approvals in chat
+- daily brief and notification loop
+- opt-in encrypted backup/sync metadata path
+- beta onboarding, cohort gating, and support tooling
-- Local-first deployment for v0.1.
-- Deterministic, provenance-backed outputs.
-- Corrections must influence future behavior.
-- Public docs must only claim shipped command paths and evidence.
-- Consequential execution remains approval-bounded.
+## Non-Goals
+
+- WhatsApp or broad channel expansion
+- browser automation
+- high-risk autonomous execution
+- enterprise collaboration features
+- new vertical agents
+- reopening more core release-control work as Phase 10 scope
## Success Criteria
-- External technical users can follow docs from install to first useful result without handholding.
-- Shipped CLI/MCP/importer paths are reproducible from documented commands.
-- Evaluation evidence is reproducible from `./scripts/run_phase9_eval.sh`.
-- Release assets are sufficient to cut v0.1 without reopening product semantics.
+At the end of Phase 10, a non-developer beta user can:
+
+- create an account
+- link Telegram
+- import initial data or skip import
+- capture things naturally in chat
+- ask recall questions
+- get resume briefs
+- review open loops
+- approve simple actions in chat
+- receive a useful daily brief
## Product Non-Negotiables
-- Durable context must come from governed storage, not transcript stuffing.
-- Corrections must improve later retrieval/resumption.
-- Provenance must remain visible.
-- Public launch must not depend on unsafe autonomy or broad connector side effects.
-- Alice must remain useful as a standalone local continuity engine.
+- Alice Core remains the baseline truth; Phase 10 builds on it rather than replacing it.
+- Telegram is a product surface on top of the same continuity semantics as local, CLI, and MCP.
+- Durable answers remain provenance-backed and correction-aware.
+- Consequential actions remain approval-bounded.
+- Hosted product docs must clearly distinguish OSS surface from beta product surface.
## Historical Traceability
-Superseded rollout plans and control snapshots live under `docs/archive/planning/2026-04-08-context-compaction/README.md`.
+Superseded planning and control material lives under `docs/archive/planning/2026-04-08-context-compaction/README.md`.
diff --git a/README.md b/README.md
index 4e18c2d..9907ac8 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
Alice is a local-first memory and continuity engine for AI agents.
-Phase 9 is complete. The repo is waiting for Phase 10 planning docs before another build sprint is activated.
+Phase 9 is complete. Alice Connect is the planned Phase 10 product layer on top of that shipped core, and `P10-S1` is the first execution sprint.
## What v0.1 Ships
diff --git a/RECOMMENDED_ADRS.md b/RECOMMENDED_ADRS.md
index 0fc8050..95d943e 100644
--- a/RECOMMENDED_ADRS.md
+++ b/RECOMMENDED_ADRS.md
@@ -1,61 +1,47 @@
# Recommended ADRs
-## ADR-001: Modular Monolith for V1
+## Existing Accepted ADRs Still In Force
-- Why it deserves an ADR: service boundaries, deployment complexity, team workflow, and failure modes all depend on this choice.
-- Proposed status: Proposed
-
-## ADR-002: Postgres + `pgvector` as V1 System of Record and Retrieval Store
-
-- Why it deserves an ADR: it sets the data platform, query model, operational burden, and later migration path.
-- Proposed status: Proposed
-
-## ADR-003: Append-Only Continuity Model for Threads, Sessions, and Events
-
-- Why it deserves an ADR: this decision defines auditability, replay behavior, and how memory derives from source truth.
-- Proposed status: Proposed
-
-## ADR-004: Memory as a Derived, Revisioned Projection
-
-- Why it deserves an ADR: it governs data integrity, contradiction handling, consolidation, and user trust.
-- Proposed status: Proposed
-
-## ADR-005: Deterministic Context Compiler Contract
+- `ADR-001` public core package boundary
+- `ADR-002` public runtime baseline
+- `ADR-003` MCP tool surface contract
+- `ADR-004` OpenClaw integration boundary
+- `ADR-005` import provenance and dedupe strategy
+- `ADR-007` public evaluation harness scope
-- Why it deserves an ADR: it affects explainability, cache reuse, testing strategy, and model portability.
-- Proposed status: Proposed
+## Next ADRs To Author For Phase 10
-## ADR-006: Auth and Per-User Isolation Model
+### ADR-006: Hosted Identity, Session, and Device Trust Model
-- Why it deserves an ADR: username/password plus TOTP, database user context, and RLS policy shape are hard security boundaries.
+- Why it deserves an ADR: auth mode, session expiry, device linking, and trust levels become hard security boundaries for the hosted product layer.
- Proposed status: Proposed
-## ADR-007: Policy Engine + Tool Proxy + Approval Boundary
+### ADR-008: Alice Connect Control Plane vs Data Plane Boundary
-- Why it deserves an ADR: this is the core safety architecture for any external action or sensitive data access.
+- Why it deserves an ADR: this decision determines what lives in hosted orchestration versus Alice Core and prevents semantic drift across surfaces.
- Proposed status: Proposed
-## ADR-008: Relational Entity and Relationship Storage in V1
+### ADR-009: Cross-Surface Continuity Parity Contract
-- Why it deserves an ADR: choosing relational storage over a graph database affects schema design, query strategy, and scale assumptions.
+- Why it deserves an ADR: Telegram must reuse local, CLI, and MCP semantics instead of inventing a separate chat behavior model.
- Proposed status: Proposed
-## ADR-009: Object Storage and Scoped Task Workspace Strategy
+### ADR-010: Telegram Message Normalization and Routing Contract
-- Why it deserves an ADR: artifact handling, document ingestion, and task isolation depend on this storage boundary.
+- Why it deserves an ADR: inbound idempotency, attachment handling, workspace resolution, and thread routing define the chat transport boundary.
- Proposed status: Proposed
-## ADR-010: Read-Only Connector Strategy for Gmail and Calendar
+### ADR-011: Daily Brief and Notification Policy Model
-- Why it deserves an ADR: connector permission scope has major product, security, and delivery consequences.
+- Why it deserves an ADR: scheduling semantics, quiet hours, delivery retries, and brief composition will otherwise sprawl across product and ops code.
- Proposed status: Proposed
-## ADR-011: Trace-First Observability and Audit Logging Model
+### ADR-012: Opt-In Encrypted Backup and Sync Boundary
-- Why it deserves an ADR: explainability, incident review, and ship-gate validation depend on what is logged and retained.
+- Why it deserves an ADR: backup scope, encryption posture, and recovery semantics affect trust, support burden, and OSS-to-product separation.
- Proposed status: Proposed
-## ADR-012: Deployment Architecture for V1
+### ADR-013: Beta Rollout, Feature Flag, and Support Telemetry Model
-- Why it deserves an ADR: VPS versus managed container hosting, secret handling, backup posture, and runtime topology affect both cost and risk.
+- Why it deserves an ADR: safe rollout, rollback, observability, and support diagnostics are launch-critical and cut across every Phase 10 surface.
- Proposed status: Proposed
diff --git a/REVIEW_REPORT.md b/REVIEW_REPORT.md
index b1a7e02..6296030 100644
--- a/REVIEW_REPORT.md
+++ b/REVIEW_REPORT.md
@@ -1,53 +1,58 @@
-# REVIEW_REPORT.md
+# REVIEW_REPORT
## verdict
PASS
## criteria met
-- Live operating docs now reflect the correct idle post-Phase-9 state:
- - no active build sprint is open
- - Phase 9 remains the current shipped truth
- - Phase 10 planning is explicitly not defined yet
-- Superseded Phase 9 planning/control material is preserved in archive rather than deleted:
- - `docs/archive/planning/2026-04-08-context-compaction/`
- - `.ai/archive/planning/2026-04-08-context-compaction/`
-- Canonical live surfaces are materially smaller and more trustworthy after compaction:
+- All `P10-S1` in-scope hosted APIs are implemented and exercised:
+ - `POST /v1/auth/magic-link/start`
+ - `POST /v1/auth/magic-link/verify`
+ - `POST /v1/auth/logout`
+ - `GET /v1/auth/session`
+ - `POST /v1/workspaces`
+ - `GET /v1/workspaces/current`
+ - `POST /v1/workspaces/bootstrap`
+ - `GET /v1/workspaces/bootstrap/status`
+ - `POST /v1/devices/link/start`
+ - `POST /v1/devices/link/confirm`
+ - `GET /v1/devices`
+ - `DELETE /v1/devices/{device_id}`
+ - `GET /v1/preferences`
+ - `PATCH /v1/preferences`
+- Hosted challenge-token security posture is improved: magic-link and device-link tokens are now hashed at rest (`challenge_token_hash`) in migration and runtime lookup logic.
+- New/returning user paths, workspace bootstrap, deterministic device linking/revocation, and hosted preferences persistence are validated via integration tests.
+- Telegram remains explicitly out of scope in API/UI (`telegram_state: not_available_in_p10_s1` and matching web copy).
+- Control docs and planning surfaces are aligned to an active `P10-S1` execution sprint rather than the prior idle post-Phase-9 placeholder:
+ - `.ai/active/SPRINT_PACKET.md`
+ - `.ai/handoff/CURRENT_STATE.md`
- `README.md`
- `ROADMAP.md`
- `RULES.md`
- - `.ai/handoff/CURRENT_STATE.md`
- - `.ai/active/SPRINT_PACKET.md`
-- Scope stayed doc/control-only apart from the required validation guardrail updates:
- - `scripts/check_control_doc_truth.py`
- - `tests/unit/test_control_doc_truth.py`
-- Validation evidence is clean:
- - stale live-control markers removed from canonical files
- - stale references to moved Phase 9 planning docs removed from canonical/live surfaces
- - archived snapshot links reviewed and normalized
- - control-doc validation script passes
- - control-doc unit test passes
+- Required verification commands passed in this re-review:
+ - `python3 scripts/check_control_doc_truth.py` -> PASS
+ - `./.venv/bin/python -m pytest tests/unit tests/integration -q` -> `990 passed`
+ - `pnpm --dir apps/web test` -> `59 passed` test files, `194 passed` tests
## criteria missed
-- None.
+- None identified for `P10-S1` acceptance criteria.
## quality issues
-- No blocking quality issues found in current pass.
-- Archive preservation is explicit, and no product-behavior files were changed by this branch beyond doc-validation tooling/tests.
+- No blocking quality issues found after fixes.
## regression risks
- Low.
-- Main residual risk is future tooling or docs assuming the old live Phase 9 planning paths still exist; the updated validation script and archive index reduce that risk.
+- Main residual risk is ordinary follow-on scope pressure: future Telegram and scheduler sprints must reuse these hosted identity/workspace seams instead of bypassing them.
## docs issues
-- No blocking docs issues remain for this branch.
-- The live repo now presents a clean waiting state for Phase 10 rather than stale active-sprint/project-history clutter.
+- No blocking docs issues for sprint acceptance.
+- Optional follow-up: add concise API reference docs for the new hosted `v1` endpoints if not already planned.
## should anything be added to RULES.md?
-- No further rule update is required from review; the compacted rules file already retains the durable guidance.
+- Optional hardening rule worth keeping: one-time auth challenge secrets must be stored hashed at rest.
## should anything update ARCHITECTURE.md?
-- No further architecture update is required from review; the compaction removed stale sprint-ledger material without changing architectural claims.
+- Optional: add a brief hosted auth token lifecycle note (issue, hash-at-rest, verify by hash, TTL/revocation).
## recommended next action
1. Ready for Control Tower merge approval under policy.
-2. After merge, run the Phase 9 release checklist/runbook and then add canonical Phase 10 planning docs before opening another sprint branch.
+2. After merge, open `P10-S2` only on top of these hosted identity/workspace/device foundations without widening scope early.
diff --git a/ROADMAP.md b/ROADMAP.md
index 5e2a41e..4f28588 100644
--- a/ROADMAP.md
+++ b/ROADMAP.md
@@ -1,24 +1,67 @@
# Roadmap
-## Current State
+## Planning Basis
-- Phase 4 through Phase 9 are complete.
-- No active build sprint is open.
-- Phase 10 planning docs are not defined yet.
+- Phase 9 is shipped baseline truth, not roadmap work.
+- Phase 10 is the next delivery phase: Alice Connect.
-## Next Moves
+## Phase 10 Milestones
-- Run the Phase 9 release checklist and runbook on a clean environment.
-- Cut `v0.1.0` only after those gates pass and approval is explicit.
-- Write canonical Phase 10 planning docs.
-- Activate the first non-redundant Phase 10 sprint.
+### P10-S1: Identity + Workspace Bootstrap
+
+- hosted account and session model
+- workspace creation and bootstrap flow
+- device linking
+- user preferences and settings foundation
+- beta cohort and feature-flag support
+
+### P10-S2: Telegram Transport + Message Normalization
+
+- Telegram bot and webhook ingress
+- Telegram link/unlink flow
+- normalized inbound message contract
+- outbound dispatcher and delivery receipts
+- workspace/thread routing for chat traffic
+
+### P10-S3: Chat-Native Continuity + Approvals
+
+- capture, recall, resume, correction, and open-loop review in Telegram
+- deterministic routing to best-fit continuity context
+- approval prompts and resolution in chat
+- provenance-backed answers and correction uptake
+
+### P10-S4: Daily Brief + Notifications + Open-Loop Review
+
+- daily brief generation and delivery scheduler
+- quiet hours and notification controls
+- waiting-for and stale-item prompts
+- one-tap open-loop review actions in chat
+
+### P10-S5: Beta Hardening + Launch Readiness
+
+- beta onboarding funnel
+- admin/support tooling
+- analytics and observability for chat flows
+- rate limiting, abuse controls, and rollout flags
+- launch assets and hosted-vs-OSS product clarity
+
+## Sequencing Rules
+
+- Do not start Telegram transport before identity and workspace bootstrap are stable.
+- Do not add chat-native continuity before transport and routing are deterministic.
+- Do not turn on scheduled briefs until chat continuity and notification preferences are trustworthy.
+- Treat beta hardening as a launch gate, not optional polish.
+
+## Phase 10 Exit
+
+Phase 10 is done when a non-technical beta user can onboard, use Alice through Telegram, capture and recall continuity, receive a useful daily brief, approve simple actions in chat, and do so without semantic drift from Alice Core.
## Roadmap Guardrails
-- Start future planning from the shipped Phase 9 wedge, not from archived sprint ledgers.
-- Keep this file forward-looking; completed sprint sequences belong in archive.
-- Preserve the current shipped contract until Phase 10 planning explicitly changes it.
+- Keep this file future-facing; completed work and sprint history belong in archive.
+- Do not rewrite shipped Phase 9 capabilities as future milestones.
+- Preserve the OSS baseline while layering product capabilities on top of it.
## Archived Planning
-- Phase 9 planning and superseded control docs: `docs/archive/planning/2026-04-08-context-compaction/README.md`
+- Historical planning and superseded control docs: `docs/archive/planning/2026-04-08-context-compaction/README.md`
diff --git a/RULES.md b/RULES.md
index ff39131..91d136b 100644
--- a/RULES.md
+++ b/RULES.md
@@ -1,27 +1,37 @@
# Rules
+## Baseline Truth
+
+- Treat shipped Phase 9 capability as baseline truth, not as future roadmap scope.
+- Do not rewrite shipped Alice Core, CLI, MCP, importer, or eval-harness behavior as future roadmap items or aspirational work.
+
## Product Scope
-- Treat Alice as a memory and continuity layer first, not a broad autonomous platform.
-- Keep the public contract focused on capture, recall, resume, correction, and open loops until the roadmap changes.
-- Do not widen channels, hosted deployment, or connector write breadth without explicit planning updates.
+- Alice remains a continuity product first, not a broad autonomous platform.
+- Hosted product work must preserve a clear OSS-to-product boundary.
+- Telegram is the only new user-facing channel in Phase 10 unless the roadmap changes.
+- Do not add browser automation, broad connector expansion, enterprise collaboration, or new vertical agents under Phase 10.
-## System Behavior
+## Architecture
-- Compile context from durable stored truth, not transcript replay.
-- Keep public interop surfaces narrow, deterministic, and schema-driven.
+- Phase 10 must not fork semantics between local, CLI, MCP, and Telegram.
+- Telegram is another surface on the same core objects.
+- Control plane owns identity, devices, channel bindings, preferences, feature flags, and telemetry.
+- Data plane owns continuity objects, memory revisions, open loops, approvals, audit traces, and interop semantics.
+- Compile answers from durable stored truth, not transcript replay.
- Preserve append-only continuity, correction history, and explicit provenance.
-- Importers must map or reject unknown external states; never silently coerce them.
-## Docs And Control
+## Operations And Delivery
-- Public docs are product surface and must match runnable commands, tests, and evidence.
-- Keep `ROADMAP.md` future-facing, `.ai/handoff/CURRENT_STATE.md` current-state only, and `RULES.md` limited to durable guidance.
-- Archive superseded planning and control snapshots instead of keeping them in live files.
-- Do not create a fake active sprint when the repo is between planning cycles.
+- Consequential actions remain approval-bounded on every surface.
+- Inbound chat handling and outbound delivery must be idempotent and auditable.
+- Daily briefs and notifications must respect timezone, preferences, and quiet hours.
+- Public docs must distinguish shipped OSS surface from beta product surface.
+- New public-facing flows require smoke validation, not only unit tests.
-## Release Discipline
+## Control Docs
-- New public surfaces require smoke validation, not only unit tests.
-- Public quality claims require committed evidence artifacts.
-- Stateful release-gating commands must document deterministic preconditions.
+- Keep `ROADMAP.md` future-facing, `.ai/handoff/CURRENT_STATE.md` factual, and `RULES.md` limited to durable guidance.
+- Archive superseded planning and control snapshots instead of keeping them in live files.
+- Do not create or overwrite `.ai/active/SPRINT_PACKET.md` unless the next execution sprint is explicitly defined.
diff --git a/apps/api/alembic/versions/20260408_0043_phase10_identity_workspace_bootstrap.py b/apps/api/alembic/versions/20260408_0043_phase10_identity_workspace_bootstrap.py
new file mode 100644
index 0000000..5a4fb03
--- /dev/null
+++ b/apps/api/alembic/versions/20260408_0043_phase10_identity_workspace_bootstrap.py
@@ -0,0 +1,250 @@
+"""Add Phase 10 Sprint 1 hosted identity/workspace bootstrap control-plane tables."""
+
+from __future__ import annotations
+
+from alembic import op
+
+
+revision = "20260408_0043"
+down_revision = "20260330_0042"
+branch_labels = None
+depends_on = None
+
+
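+# Raw DDL runs in dependency order: cohorts and accounts first, then
+# workspaces and members, then the challenge, device, and session tables
+# that reference them by foreign key.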
+_UPGRADE_STATEMENTS = (
+ """
+ CREATE TABLE beta_cohorts (
+ id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
+ cohort_key text NOT NULL UNIQUE,
+ description text NULL,
+ created_at timestamptz NOT NULL DEFAULT now(),
+ CONSTRAINT beta_cohorts_key_length_check
+ CHECK (char_length(cohort_key) >= 1 AND char_length(cohort_key) <= 120)
+ )
+ """,
+ """
+ CREATE TABLE feature_flags (
+ id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
+ flag_key text NOT NULL,
+ cohort_key text NULL REFERENCES beta_cohorts(cohort_key) ON DELETE SET NULL,
+ enabled boolean NOT NULL DEFAULT false,
+ description text NULL,
+ created_at timestamptz NOT NULL DEFAULT now(),
+ updated_at timestamptz NOT NULL DEFAULT now(),
+ CONSTRAINT feature_flags_key_length_check
+ CHECK (char_length(flag_key) >= 1 AND char_length(flag_key) <= 120)
+ )
+ """,
+ (
+ "CREATE UNIQUE INDEX feature_flags_global_key_uidx "
+ "ON feature_flags (flag_key) WHERE cohort_key IS NULL"
+ ),
+ (
+ "CREATE UNIQUE INDEX feature_flags_scoped_key_uidx "
+ "ON feature_flags (flag_key, cohort_key) WHERE cohort_key IS NOT NULL"
+ ),
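+    # Together, the two partial unique indexes above allow exactly one global
+    # row (cohort_key IS NULL) per flag plus one row per (flag, cohort) pair;
+    # a plain UNIQUE constraint would treat NULL cohorts as distinct rows.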
+ (
+ "CREATE INDEX feature_flags_enabled_idx "
+ "ON feature_flags (enabled, flag_key, cohort_key)"
+ ),
+ """
+ CREATE TABLE user_accounts (
+ id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
+ email text NOT NULL UNIQUE,
+ display_name text NULL,
+ beta_cohort_key text NULL REFERENCES beta_cohorts(cohort_key) ON DELETE SET NULL,
+ created_at timestamptz NOT NULL DEFAULT now(),
+ CONSTRAINT user_accounts_email_length_check
+ CHECK (char_length(email) >= 3 AND char_length(email) <= 320)
+ )
+ """,
+ """
+ CREATE TABLE workspaces (
+ id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
+ owner_user_account_id uuid NOT NULL REFERENCES user_accounts(id) ON DELETE CASCADE,
+ slug text NOT NULL UNIQUE,
+ name text NOT NULL,
+ bootstrap_status text NOT NULL DEFAULT 'pending',
+ bootstrapped_at timestamptz NULL,
+ created_at timestamptz NOT NULL DEFAULT now(),
+ updated_at timestamptz NOT NULL DEFAULT now(),
+ CONSTRAINT workspaces_slug_length_check
+ CHECK (char_length(slug) >= 3 AND char_length(slug) <= 120),
+ CONSTRAINT workspaces_name_length_check
+ CHECK (char_length(name) >= 1 AND char_length(name) <= 160),
+ CONSTRAINT workspaces_bootstrap_status_check
+ CHECK (bootstrap_status IN ('pending', 'ready'))
+ )
+ """,
+ (
+ "CREATE INDEX workspaces_owner_created_idx "
+ "ON workspaces (owner_user_account_id, created_at DESC, id DESC)"
+ ),
+ """
+ CREATE TABLE workspace_members (
+ id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
+ workspace_id uuid NOT NULL REFERENCES workspaces(id) ON DELETE CASCADE,
+ user_account_id uuid NOT NULL REFERENCES user_accounts(id) ON DELETE CASCADE,
+ role text NOT NULL,
+ created_at timestamptz NOT NULL DEFAULT now(),
+ UNIQUE (workspace_id, user_account_id),
+ CONSTRAINT workspace_members_role_check
+ CHECK (role IN ('owner', 'member'))
+ )
+ """,
+ (
+ "CREATE UNIQUE INDEX workspace_members_single_owner_uidx "
+ "ON workspace_members (workspace_id) WHERE role = 'owner'"
+ ),
+ (
+ "CREATE INDEX workspace_members_user_created_idx "
+ "ON workspace_members (user_account_id, created_at DESC, id DESC)"
+ ),
+ """
+ CREATE TABLE magic_link_challenges (
+ id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
+ email text NOT NULL,
+ challenge_token_hash text NOT NULL UNIQUE,
+ status text NOT NULL,
+ expires_at timestamptz NOT NULL,
+ consumed_at timestamptz NULL,
+ created_at timestamptz NOT NULL DEFAULT now(),
+ CONSTRAINT magic_link_challenges_status_check
+ CHECK (status IN ('pending', 'consumed', 'expired'))
+ )
+ """,
+ (
+ "CREATE INDEX magic_link_challenges_email_status_idx "
+ "ON magic_link_challenges (email, status, expires_at DESC, created_at DESC)"
+ ),
+ """
+ CREATE TABLE devices (
+ id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
+ user_account_id uuid NOT NULL REFERENCES user_accounts(id) ON DELETE CASCADE,
+ workspace_id uuid NULL REFERENCES workspaces(id) ON DELETE SET NULL,
+ device_key text NOT NULL,
+ device_label text NOT NULL,
+ status text NOT NULL DEFAULT 'active',
+ last_seen_at timestamptz NULL,
+ revoked_at timestamptz NULL,
+ created_at timestamptz NOT NULL DEFAULT now(),
+ updated_at timestamptz NOT NULL DEFAULT now(),
+ UNIQUE (user_account_id, device_key),
+ CONSTRAINT devices_status_check CHECK (status IN ('active', 'revoked')),
+ CONSTRAINT devices_label_length_check
+ CHECK (char_length(device_label) >= 1 AND char_length(device_label) <= 120)
+ )
+ """,
+ (
+ "CREATE INDEX devices_user_workspace_status_idx "
+ "ON devices (user_account_id, workspace_id, status, created_at DESC, id DESC)"
+ ),
+ """
+ CREATE TABLE device_link_challenges (
+ id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
+ user_account_id uuid NOT NULL REFERENCES user_accounts(id) ON DELETE CASCADE,
+ workspace_id uuid NULL REFERENCES workspaces(id) ON DELETE SET NULL,
+ device_key text NOT NULL,
+ device_label text NOT NULL,
+ challenge_token_hash text NOT NULL UNIQUE,
+ status text NOT NULL,
+ expires_at timestamptz NOT NULL,
+ confirmed_at timestamptz NULL,
+ device_id uuid NULL REFERENCES devices(id) ON DELETE SET NULL,
+ created_at timestamptz NOT NULL DEFAULT now(),
+ CONSTRAINT device_link_challenges_status_check
+ CHECK (status IN ('pending', 'confirmed', 'expired')),
+ CONSTRAINT device_link_challenges_label_length_check
+ CHECK (char_length(device_label) >= 1 AND char_length(device_label) <= 120)
+ )
+ """,
+ (
+ "CREATE INDEX device_link_challenges_user_device_status_idx "
+ "ON device_link_challenges (user_account_id, device_key, status, expires_at DESC, created_at DESC)"
+ ),
+ """
+ CREATE TABLE auth_sessions (
+ id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
+ user_account_id uuid NOT NULL REFERENCES user_accounts(id) ON DELETE CASCADE,
+ workspace_id uuid NULL REFERENCES workspaces(id) ON DELETE SET NULL,
+ device_id uuid NULL REFERENCES devices(id) ON DELETE SET NULL,
+ session_token_hash text NOT NULL UNIQUE,
+ status text NOT NULL DEFAULT 'active',
+ expires_at timestamptz NOT NULL,
+ revoked_at timestamptz NULL,
+ last_seen_at timestamptz NULL,
+ created_at timestamptz NOT NULL DEFAULT now(),
+ CONSTRAINT auth_sessions_status_check
+ CHECK (status IN ('active', 'revoked', 'expired'))
+ )
+ """,
+ (
+ "CREATE INDEX auth_sessions_user_status_idx "
+ "ON auth_sessions (user_account_id, status, expires_at DESC, created_at DESC)"
+ ),
+ """
+ CREATE TABLE user_preferences (
+ id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
+ user_account_id uuid NOT NULL UNIQUE REFERENCES user_accounts(id) ON DELETE CASCADE,
+ timezone text NOT NULL DEFAULT 'UTC',
+ brief_preferences jsonb NOT NULL DEFAULT '{}'::jsonb,
+ quiet_hours jsonb NOT NULL DEFAULT '{}'::jsonb,
+ created_at timestamptz NOT NULL DEFAULT now(),
+ updated_at timestamptz NOT NULL DEFAULT now(),
+ CONSTRAINT user_preferences_timezone_length_check
+ CHECK (char_length(timezone) >= 1 AND char_length(timezone) <= 120)
+ )
+ """,
+ "INSERT INTO beta_cohorts (cohort_key, description) VALUES ('p10-beta', 'Phase 10 hosted beta cohort') ON CONFLICT (cohort_key) DO NOTHING",
+ """
+ INSERT INTO feature_flags (flag_key, cohort_key, enabled, description)
+ VALUES
+ ('hosted_onboarding', NULL, true, 'Hosted onboarding surface foundation'),
+ ('hosted_settings', NULL, true, 'Hosted settings surface foundation'),
+ ('telegram_linking', 'p10-beta', false, 'Reserved for P10-S2 Telegram linkage')
+ ON CONFLICT DO NOTHING
+ """,
+)
+
+_UPGRADE_GRANT_STATEMENTS = (
+ "GRANT SELECT, INSERT, UPDATE, DELETE ON beta_cohorts TO alicebot_app",
+ "GRANT SELECT, INSERT, UPDATE, DELETE ON feature_flags TO alicebot_app",
+ "GRANT SELECT, INSERT, UPDATE, DELETE ON user_accounts TO alicebot_app",
+ "GRANT SELECT, INSERT, UPDATE, DELETE ON workspaces TO alicebot_app",
+ "GRANT SELECT, INSERT, UPDATE, DELETE ON workspace_members TO alicebot_app",
+ "GRANT SELECT, INSERT, UPDATE, DELETE ON magic_link_challenges TO alicebot_app",
+ "GRANT SELECT, INSERT, UPDATE, DELETE ON devices TO alicebot_app",
+ "GRANT SELECT, INSERT, UPDATE, DELETE ON device_link_challenges TO alicebot_app",
+ "GRANT SELECT, INSERT, UPDATE, DELETE ON auth_sessions TO alicebot_app",
+ "GRANT SELECT, INSERT, UPDATE, DELETE ON user_preferences TO alicebot_app",
+)
+
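+# Downgrade order matters: child tables drop before the parents they reference.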
+_DOWNGRADE_STATEMENTS = (
+ "DROP TABLE IF EXISTS user_preferences",
+ "DROP TABLE IF EXISTS auth_sessions",
+ "DROP TABLE IF EXISTS device_link_challenges",
+ "DROP TABLE IF EXISTS devices",
+ "DROP TABLE IF EXISTS magic_link_challenges",
+ "DROP INDEX IF EXISTS workspace_members_single_owner_uidx",
+ "DROP TABLE IF EXISTS workspace_members",
+ "DROP TABLE IF EXISTS workspaces",
+ "DROP TABLE IF EXISTS user_accounts",
+ "DROP INDEX IF EXISTS feature_flags_scoped_key_uidx",
+ "DROP INDEX IF EXISTS feature_flags_global_key_uidx",
+ "DROP TABLE IF EXISTS feature_flags",
+ "DROP TABLE IF EXISTS beta_cohorts",
+)
+
+
+def _execute_statements(statements: tuple[str, ...]) -> None:
+ for statement in statements:
+ op.execute(statement)
+
+
+def upgrade() -> None:
+ _execute_statements(_UPGRADE_STATEMENTS)
+ _execute_statements(_UPGRADE_GRANT_STATEMENTS)
+
+
+def downgrade() -> None:
+ _execute_statements(_DOWNGRADE_STATEMENTS)
diff --git a/apps/api/src/alicebot_api/config.py b/apps/api/src/alicebot_api/config.py
index 695c215..f8b4f24 100644
--- a/apps/api/src/alicebot_api/config.py
+++ b/apps/api/src/alicebot_api/config.py
@@ -37,6 +37,9 @@
DEFAULT_AUTH_USER_ID = ""
DEFAULT_RESPONSE_RATE_LIMIT_WINDOW_SECONDS = 60
DEFAULT_RESPONSE_RATE_LIMIT_MAX_REQUESTS = 20
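+# TTL defaults: magic link 900 s (15 min), session 2_592_000 s (30 days),
+# device link 600 s (10 min); each is overridable via the matching env var.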
+DEFAULT_MAGIC_LINK_TTL_SECONDS = 900
+DEFAULT_AUTH_SESSION_TTL_SECONDS = 2_592_000
+DEFAULT_DEVICE_LINK_TTL_SECONDS = 600
Environment = Mapping[str, str]
@@ -80,6 +83,9 @@ class Settings:
auth_user_id: str = DEFAULT_AUTH_USER_ID
response_rate_limit_window_seconds: int = DEFAULT_RESPONSE_RATE_LIMIT_WINDOW_SECONDS
response_rate_limit_max_requests: int = DEFAULT_RESPONSE_RATE_LIMIT_MAX_REQUESTS
+ magic_link_ttl_seconds: int = DEFAULT_MAGIC_LINK_TTL_SECONDS
+ auth_session_ttl_seconds: int = DEFAULT_AUTH_SESSION_TTL_SECONDS
+ device_link_ttl_seconds: int = DEFAULT_DEVICE_LINK_TTL_SECONDS
@classmethod
def from_env(cls, env: Environment | None = None) -> "Settings":
@@ -143,6 +149,21 @@ def from_env(cls, env: Environment | None = None) -> "Settings":
"RESPONSE_RATE_LIMIT_MAX_REQUESTS",
cls.response_rate_limit_max_requests,
),
+ magic_link_ttl_seconds=_get_env_int(
+ current_env,
+ "MAGIC_LINK_TTL_SECONDS",
+ cls.magic_link_ttl_seconds,
+ ),
+ auth_session_ttl_seconds=_get_env_int(
+ current_env,
+ "AUTH_SESSION_TTL_SECONDS",
+ cls.auth_session_ttl_seconds,
+ ),
+ device_link_ttl_seconds=_get_env_int(
+ current_env,
+ "DEVICE_LINK_TTL_SECONDS",
+ cls.device_link_ttl_seconds,
+ ),
)
return _validate_settings(settings)
@@ -158,6 +179,12 @@ def _validate_settings(settings: Settings) -> Settings:
raise ValueError("RESPONSE_RATE_LIMIT_WINDOW_SECONDS must be a positive integer")
if settings.response_rate_limit_max_requests <= 0:
raise ValueError("RESPONSE_RATE_LIMIT_MAX_REQUESTS must be a positive integer")
+ if settings.magic_link_ttl_seconds <= 0:
+ raise ValueError("MAGIC_LINK_TTL_SECONDS must be a positive integer")
+ if settings.auth_session_ttl_seconds <= 0:
+ raise ValueError("AUTH_SESSION_TTL_SECONDS must be a positive integer")
+ if settings.device_link_ttl_seconds <= 0:
+ raise ValueError("DEVICE_LINK_TTL_SECONDS must be a positive integer")
if settings.app_env not in {"development", "test"}:
if settings.auth_user_id == "":
diff --git a/apps/api/src/alicebot_api/contracts.py b/apps/api/src/alicebot_api/contracts.py
index 6c342ee..70764ab 100644
--- a/apps/api/src/alicebot_api/contracts.py
+++ b/apps/api/src/alicebot_api/contracts.py
@@ -86,6 +86,12 @@
"awaiting_user",
]
TaskWorkspaceStatus = Literal["active"]
+HostedAuthSessionStatus = Literal["active", "revoked", "expired"]
+HostedMagicLinkChallengeStatus = Literal["pending", "consumed", "expired"]
+HostedDeviceLinkChallengeStatus = Literal["pending", "confirmed", "expired"]
+HostedDeviceStatus = Literal["active", "revoked"]
+HostedWorkspaceBootstrapStatus = Literal["pending", "ready"]
+HostedWorkspaceMemberRole = Literal["owner", "member"]
TaskArtifactStatus = Literal["registered"]
TaskArtifactIngestionStatus = Literal["pending", "ingested"]
TaskArtifactChunkRetrievalScopeKind = Literal["task", "artifact"]
@@ -5065,3 +5071,89 @@ def isoformat_or_none(value: datetime | None) -> str | None:
if value is None:
return None
return value.isoformat()
+
+
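+# Hosted control-plane wire records: UUID and timestamp columns are carried as
+# strings (timestamps ISO-8601) to match the JSON payloads the API returns.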
+class HostedUserAccountRecord(TypedDict):
+ id: str
+ email: str
+ display_name: str | None
+ beta_cohort_key: str | None
+ created_at: str
+
+
+class HostedAuthSessionRecord(TypedDict):
+ id: str
+ user_account_id: str
+ workspace_id: str | None
+ device_id: str | None
+ status: HostedAuthSessionStatus
+ expires_at: str
+ revoked_at: str | None
+ last_seen_at: str | None
+ created_at: str
+
+
+class HostedMagicLinkChallengeRecord(TypedDict):
+ id: str
+ email: str
+ challenge_token_hash: str
+ status: HostedMagicLinkChallengeStatus
+ expires_at: str
+ consumed_at: str | None
+ created_at: str
+
+
+class HostedWorkspaceRecord(TypedDict):
+ id: str
+ owner_user_account_id: str
+ slug: str
+ name: str
+ bootstrap_status: HostedWorkspaceBootstrapStatus
+ bootstrapped_at: str | None
+ created_at: str
+ updated_at: str
+
+
+class HostedBootstrapStatusRecord(TypedDict):
+ workspace_id: str
+ status: HostedWorkspaceBootstrapStatus
+ bootstrapped_at: str | None
+ ready_for_next_phase_telegram_linkage: bool
+ telegram_state: Literal["not_available_in_p10_s1"]
+
+
+class HostedDeviceRecord(TypedDict):
+ id: str
+ user_account_id: str
+ workspace_id: str | None
+ device_key: str
+ device_label: str
+ status: HostedDeviceStatus
+ last_seen_at: str | None
+ revoked_at: str | None
+ created_at: str
+ updated_at: str
+
+
+class HostedDeviceLinkChallengeRecord(TypedDict):
+ id: str
+ user_account_id: str
+ workspace_id: str | None
+ device_key: str
+ device_label: str
+ challenge_token_hash: str
+ status: HostedDeviceLinkChallengeStatus
+ expires_at: str
+ confirmed_at: str | None
+ device_id: str | None
+ created_at: str
+
+
+class HostedUserPreferencesRecord(TypedDict):
+ id: str
+ user_account_id: str
+ timezone: str
+ brief_preferences: JsonObject
+ quiet_hours: JsonObject
+ created_at: str
+ updated_at: str
diff --git a/apps/api/src/alicebot_api/hosted_auth.py b/apps/api/src/alicebot_api/hosted_auth.py
new file mode 100644
index 0000000..36da8c6
--- /dev/null
+++ b/apps/api/src/alicebot_api/hosted_auth.py
@@ -0,0 +1,572 @@
+from __future__ import annotations
+
+from datetime import datetime, timedelta, timezone
+import hashlib
+import re
+import secrets
+from typing import TypedDict
+from uuid import UUID
+
+from psycopg.types.json import Jsonb
+
+
+EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
+
+
+class MagicLinkTokenInvalidError(ValueError):
+ """Raised when a magic-link challenge token is unknown or already consumed."""
+
+
+class MagicLinkTokenExpiredError(ValueError):
+ """Raised when a magic-link challenge token has expired."""
+
+
+class AuthSessionInvalidError(ValueError):
+ """Raised when an auth session token is missing or invalid."""
+
+
+class AuthSessionExpiredError(ValueError):
+ """Raised when an auth session is expired."""
+
+
+class AuthSessionRevokedDeviceError(ValueError):
+ """Raised when the auth session is bound to a revoked device."""
+
+
+class UserAccountRow(TypedDict):
+ id: UUID
+ email: str
+ display_name: str | None
+ beta_cohort_key: str | None
+ created_at: datetime
+
+
+class MagicLinkChallengeRow(TypedDict):
+ id: UUID
+ email: str
+ status: str
+ expires_at: datetime
+ consumed_at: datetime | None
+ created_at: datetime
+
+
+class IssuedMagicLinkChallengeRow(MagicLinkChallengeRow):
+ challenge_token: str
+
+
+class AuthSessionRow(TypedDict):
+ id: UUID
+ user_account_id: UUID
+ workspace_id: UUID | None
+ device_id: UUID | None
+ session_token_hash: str
+ status: str
+ expires_at: datetime
+ revoked_at: datetime | None
+ last_seen_at: datetime | None
+ created_at: datetime
+
+
+class SessionResolution(TypedDict):
+ session: AuthSessionRow
+ user_account: UserAccountRow
+ device_status: str | None
+ device_label: str | None
+
+
+def utc_now() -> datetime:
+ return datetime.now(timezone.utc)
+
+
+def normalize_email(email: str) -> str:
+ normalized = email.strip().lower()
+ if not EMAIL_PATTERN.match(normalized):
+ raise ValueError("email must be valid for magic-link authentication")
+ return normalized
+
+
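+# Only SHA-256 digests of tokens are persisted; the raw secret is returned to
+# the caller once and cannot be recovered from the database.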
+def hash_token(token: str) -> str:
+ return hashlib.sha256(token.encode("utf-8")).hexdigest()
+
+
+def generate_token() -> str:
+ return secrets.token_urlsafe(32)
+
+
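+# Best-effort display name from the mailbox stem,
+# e.g. "ada.lovelace@example.com" -> "Ada Lovelace" (first three words).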
+def _default_display_name(email: str) -> str:
+ stem = email.split("@", 1)[0].replace(".", " ").replace("_", " ").strip()
+ if stem == "":
+ return "Alice User"
+ words = [word for word in stem.split(" ") if word]
+ return " ".join(word.capitalize() for word in words[:3])
+
+
+def ensure_beta_cohort(conn, cohort_key: str = "p10-beta") -> None:
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ INSERT INTO beta_cohorts (cohort_key, description)
+ VALUES (%s, %s)
+ ON CONFLICT (cohort_key) DO NOTHING
+ """,
+ (cohort_key, "Phase 10 hosted beta cohort"),
+ )
+
+
+def get_or_create_user_account_by_email(conn, *, email: str) -> UserAccountRow:
+ normalized_email = normalize_email(email)
+ ensure_beta_cohort(conn)
+
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ INSERT INTO user_accounts (email, display_name, beta_cohort_key)
+ VALUES (%s, %s, %s)
+ ON CONFLICT (email) DO UPDATE
+ SET email = EXCLUDED.email
+ RETURNING id, email, display_name, beta_cohort_key, created_at
+ """,
+ (normalized_email, _default_display_name(normalized_email), "p10-beta"),
+ )
+ row = cur.fetchone()
+
+ if row is None:
+ raise RuntimeError("failed to load or create hosted user account")
+ return row
+
+
+def start_magic_link_challenge(
+ conn,
+ *,
+ email: str,
+ ttl_seconds: int,
+) -> IssuedMagicLinkChallengeRow:
+ normalized_email = normalize_email(email)
+ now = utc_now()
+
+ with conn.cursor() as cur:
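+        # Requesting a new link supersedes any still-valid pending challenge
+        # for this email, so only the newest token can ever be verified.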
+ cur.execute(
+ """
+ UPDATE magic_link_challenges
+ SET status = 'expired'
+ WHERE email = %s
+ AND status = 'pending'
+ AND expires_at > %s
+ """,
+ (normalized_email, now),
+ )
+ challenge_token = generate_token()
+ challenge_token_hash = hash_token(challenge_token)
+ expires_at = now + timedelta(seconds=ttl_seconds)
+ cur.execute(
+ """
+ INSERT INTO magic_link_challenges (email, challenge_token_hash, status, expires_at)
+ VALUES (%s, %s, 'pending', %s)
+ RETURNING id, email, status, expires_at, consumed_at, created_at
+ """,
+ (normalized_email, challenge_token_hash, expires_at),
+ )
+ created = cur.fetchone()
+
+ if created is None:
+ raise RuntimeError("failed to create magic-link challenge")
+ created["challenge_token"] = challenge_token
+ return created
+
+
+def _lookup_magic_link_challenge_for_update(
+ conn,
+ *,
+ challenge_token: str,
+) -> MagicLinkChallengeRow | None:
+ token = challenge_token.strip()
+ if token == "":
+ return None
+
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ SELECT id, email, status, expires_at, consumed_at, created_at
+ FROM magic_link_challenges
+ WHERE challenge_token_hash = %s
+ FOR UPDATE
+ """,
+ (hash_token(token),),
+ )
+ return cur.fetchone()
+
+
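+# Deterministic fallback when the client sends no device_key: the same
+# account + label pair always resolves to the same device row.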
+def _derive_device_key(user_account_id: UUID, device_label: str) -> str:
+ token_source = f"{user_account_id}:{device_label.strip().lower()}"
+ return hashlib.sha256(token_source.encode("utf-8")).hexdigest()[:48]
+
+
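+# Re-linking an existing (user, device_key) pair re-activates the device:
+# status returns to 'active' and revoked_at is cleared.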
+def _upsert_device(
+ conn,
+ *,
+ user_account_id: UUID,
+ workspace_id: UUID | None,
+ device_label: str,
+ device_key: str,
+) -> dict[str, object]:
+ now = utc_now()
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ INSERT INTO devices (
+ user_account_id,
+ workspace_id,
+ device_key,
+ device_label,
+ status,
+ last_seen_at,
+ updated_at
+ )
+ VALUES (%s, %s, %s, %s, 'active', %s, %s)
+ ON CONFLICT (user_account_id, device_key) DO UPDATE
+ SET workspace_id = EXCLUDED.workspace_id,
+ device_label = EXCLUDED.device_label,
+ status = 'active',
+ revoked_at = NULL,
+ last_seen_at = EXCLUDED.last_seen_at,
+ updated_at = EXCLUDED.updated_at
+ RETURNING id, user_account_id, workspace_id, device_key, device_label, status,
+ last_seen_at, revoked_at, created_at, updated_at
+ """,
+ (user_account_id, workspace_id, device_key, device_label, now, now),
+ )
+ row = cur.fetchone()
+
+ if row is None:
+ raise RuntimeError("failed to upsert hosted device")
+ return row
+
+
+def _get_current_workspace_id(conn, *, user_account_id: UUID) -> UUID | None:
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ SELECT workspace_id
+ FROM workspace_members
+ WHERE user_account_id = %s
+ ORDER BY CASE WHEN role = 'owner' THEN 0 ELSE 1 END, created_at ASC, id ASC
+ LIMIT 1
+ """,
+ (user_account_id,),
+ )
+ row = cur.fetchone()
+
+ if row is None:
+ return None
+ return row["workspace_id"]
+
+
+def verify_magic_link_challenge(
+ conn,
+ *,
+ challenge_token: str,
+ session_ttl_seconds: int,
+ device_label: str,
+ device_key: str | None,
+) -> tuple[UserAccountRow, AuthSessionRow, str, dict[str, object]]:
+ now = utc_now()
+ challenge = _lookup_magic_link_challenge_for_update(conn, challenge_token=challenge_token)
+
+ if challenge is None:
+ raise MagicLinkTokenInvalidError("magic-link token is invalid")
+
+ if challenge["status"] != "pending":
+ raise MagicLinkTokenInvalidError("magic-link token is no longer valid")
+
+ if challenge["expires_at"] <= now:
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ UPDATE magic_link_challenges
+ SET status = 'expired'
+ WHERE id = %s
+ """,
+ (challenge["id"],),
+ )
+ raise MagicLinkTokenExpiredError("magic-link token has expired")
+
+ user_account = get_or_create_user_account_by_email(conn, email=challenge["email"])
+ workspace_id = _get_current_workspace_id(conn, user_account_id=user_account["id"])
+ normalized_device_label = device_label.strip() or "Primary device"
+ resolved_device_key = (device_key or "").strip() or _derive_device_key(
+ user_account["id"], normalized_device_label
+ )
+ device = _upsert_device(
+ conn,
+ user_account_id=user_account["id"],
+ workspace_id=workspace_id,
+ device_label=normalized_device_label,
+ device_key=resolved_device_key,
+ )
+
+ raw_session_token = generate_token()
+ session_expires_at = now + timedelta(seconds=session_ttl_seconds)
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ INSERT INTO auth_sessions (
+ user_account_id,
+ workspace_id,
+ device_id,
+ session_token_hash,
+ status,
+ expires_at,
+ last_seen_at
+ )
+ VALUES (%s, %s, %s, %s, 'active', %s, %s)
+ RETURNING id, user_account_id, workspace_id, device_id, session_token_hash, status,
+ expires_at, revoked_at, last_seen_at, created_at
+ """,
+ (
+ user_account["id"],
+ workspace_id,
+ device["id"],
+ hash_token(raw_session_token),
+ session_expires_at,
+ now,
+ ),
+ )
+ session = cur.fetchone()
+ cur.execute(
+ """
+ UPDATE magic_link_challenges
+ SET status = 'consumed',
+ consumed_at = %s
+ WHERE id = %s
+ """,
+ (now, challenge["id"]),
+ )
+
+ if session is None:
+ raise RuntimeError("failed to create auth session")
+
+ return user_account, session, raw_session_token, device
+
+
+def resolve_auth_session(conn, *, session_token: str) -> SessionResolution:
+ token = session_token.strip()
+ if token == "":
+ raise AuthSessionInvalidError("session token is required")
+
+ token_hash = hash_token(token)
+ now = utc_now()
+
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ SELECT
+ s.id,
+ s.user_account_id,
+ s.workspace_id,
+ s.device_id,
+ s.session_token_hash,
+ s.status,
+ s.expires_at,
+ s.revoked_at,
+ s.last_seen_at,
+ s.created_at,
+ u.email AS user_email,
+ u.display_name AS user_display_name,
+ u.beta_cohort_key AS user_beta_cohort_key,
+ u.created_at AS user_created_at,
+ d.status AS device_status,
+ d.device_label AS device_label
+ FROM auth_sessions AS s
+ JOIN user_accounts AS u
+ ON u.id = s.user_account_id
+ LEFT JOIN devices AS d
+ ON d.id = s.device_id
+ WHERE s.session_token_hash = %s
+ LIMIT 1
+ """,
+ (token_hash,),
+ )
+ row = cur.fetchone()
+
+ if row is None:
+ raise AuthSessionInvalidError("session token is invalid")
+
+ if row["status"] != "active":
+ if row["device_status"] == "revoked":
+ raise AuthSessionRevokedDeviceError("session device has been revoked")
+ raise AuthSessionInvalidError("session is not active")
+
+ if row["expires_at"] <= now:
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ UPDATE auth_sessions
+ SET status = 'expired'
+ WHERE id = %s
+ AND status = 'active'
+ """,
+ (row["id"],),
+ )
+ raise AuthSessionExpiredError("session token has expired")
+
+ if row["device_status"] == "revoked":
+ raise AuthSessionRevokedDeviceError("session device has been revoked")
+
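+    # Successful resolution touches last_seen_at on the session (and its
+    # device, when bound) so activity timestamps track real usage.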
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ UPDATE auth_sessions
+ SET last_seen_at = %s
+ WHERE id = %s
+ """,
+ (now, row["id"]),
+ )
+ if row["device_id"] is not None:
+ cur.execute(
+ """
+ UPDATE devices
+ SET last_seen_at = %s,
+ updated_at = %s
+ WHERE id = %s
+ """,
+ (now, now, row["device_id"]),
+ )
+
+ session: AuthSessionRow = {
+ "id": row["id"],
+ "user_account_id": row["user_account_id"],
+ "workspace_id": row["workspace_id"],
+ "device_id": row["device_id"],
+ "session_token_hash": row["session_token_hash"],
+ "status": row["status"],
+ "expires_at": row["expires_at"],
+ "revoked_at": row["revoked_at"],
+ "last_seen_at": now,
+ "created_at": row["created_at"],
+ }
+ user_account: UserAccountRow = {
+ "id": row["user_account_id"],
+ "email": row["user_email"],
+ "display_name": row["user_display_name"],
+ "beta_cohort_key": row["user_beta_cohort_key"],
+ "created_at": row["user_created_at"],
+ }
+ return {
+ "session": session,
+ "user_account": user_account,
+ "device_status": row["device_status"],
+ "device_label": row["device_label"],
+ }
+
+
+def logout_auth_session(conn, *, session_token: str) -> None:
+ token = session_token.strip()
+ if token == "":
+ raise AuthSessionInvalidError("session token is required")
+
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ UPDATE auth_sessions
+ SET status = 'revoked',
+ revoked_at = %s
+ WHERE session_token_hash = %s
+ AND status = 'active'
+ RETURNING id
+ """,
+ (utc_now(), hash_token(token)),
+ )
+ row = cur.fetchone()
+
+ if row is None:
+ raise AuthSessionInvalidError("session token is invalid")
+
+
+def list_feature_flags_for_user(conn, *, user_account_id: UUID) -> list[str]:
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ SELECT beta_cohort_key
+ FROM user_accounts
+ WHERE id = %s
+ """,
+ (user_account_id,),
+ )
+ user = cur.fetchone()
+
+ if user is None:
+ return []
+
+ cohort_key = user["beta_cohort_key"]
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ SELECT flag_key
+ FROM feature_flags
+ WHERE enabled = true
+ AND (cohort_key IS NULL OR cohort_key = %s)
+ ORDER BY flag_key ASC
+ """,
+ (cohort_key,),
+ )
+ rows = cur.fetchall()
+
+ return [str(row["flag_key"]) for row in rows]
+
+
+def serialize_user_account(user_account: UserAccountRow) -> dict[str, object]:
+ return {
+ "id": str(user_account["id"]),
+ "email": user_account["email"],
+ "display_name": user_account["display_name"],
+ "beta_cohort_key": user_account["beta_cohort_key"],
+ "created_at": user_account["created_at"].isoformat(),
+ }
+
+
+def serialize_auth_session(session: AuthSessionRow) -> dict[str, object]:
+ return {
+ "id": str(session["id"]),
+ "user_account_id": str(session["user_account_id"]),
+ "workspace_id": None if session["workspace_id"] is None else str(session["workspace_id"]),
+ "device_id": None if session["device_id"] is None else str(session["device_id"]),
+ "status": session["status"],
+ "expires_at": session["expires_at"].isoformat(),
+ "revoked_at": None if session["revoked_at"] is None else session["revoked_at"].isoformat(),
+ "last_seen_at": None if session["last_seen_at"] is None else session["last_seen_at"].isoformat(),
+ "created_at": session["created_at"].isoformat(),
+ }
+
+
+def serialize_magic_link_challenge(challenge: IssuedMagicLinkChallengeRow) -> dict[str, object]:
+ return {
+ "id": str(challenge["id"]),
+ "email": challenge["email"],
+ "challenge_token": challenge["challenge_token"],
+ "status": challenge["status"],
+ "expires_at": challenge["expires_at"].isoformat(),
+ "consumed_at": None if challenge["consumed_at"] is None else challenge["consumed_at"].isoformat(),
+ "created_at": challenge["created_at"].isoformat(),
+ }
+
+
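+# The literals below mirror the defaults in hosted_preferences; keep the two
+# modules in sync if the default brief/quiet-hours shape changes.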
+def ensure_user_preferences_row(conn, *, user_account_id: UUID) -> None:
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ INSERT INTO user_preferences (
+ user_account_id,
+ timezone,
+ brief_preferences,
+ quiet_hours
+ )
+ VALUES (%s, %s, %s, %s)
+ ON CONFLICT (user_account_id) DO NOTHING
+ """,
+ (
+ user_account_id,
+ "UTC",
+ Jsonb({"daily_brief": {"enabled": False, "window_start": "07:00"}}),
+ Jsonb({"start": "22:00", "end": "07:00", "enabled": False}),
+ ),
+ )
diff --git a/apps/api/src/alicebot_api/hosted_devices.py b/apps/api/src/alicebot_api/hosted_devices.py
new file mode 100644
index 0000000..4624c6e
--- /dev/null
+++ b/apps/api/src/alicebot_api/hosted_devices.py
@@ -0,0 +1,364 @@
+from __future__ import annotations
+
+from datetime import datetime, timedelta
+from typing import TypedDict
+from uuid import UUID
+
+from alicebot_api.hosted_auth import generate_token, hash_token, utc_now
+
+
+class DeviceLinkTokenInvalidError(ValueError):
+ """Raised when a device-link challenge token is invalid."""
+
+
+class DeviceLinkTokenExpiredError(ValueError):
+ """Raised when a device-link challenge has expired."""
+
+
+class HostedDeviceNotFoundError(LookupError):
+ """Raised when a hosted device is not visible for the account."""
+
+
+class DeviceRow(TypedDict):
+ id: UUID
+ user_account_id: UUID
+ workspace_id: UUID | None
+ device_key: str
+ device_label: str
+ status: str
+ last_seen_at: datetime | None
+ revoked_at: datetime | None
+ created_at: datetime
+ updated_at: datetime
+
+
+class DeviceLinkChallengeRow(TypedDict):
+ id: UUID
+ user_account_id: UUID
+ workspace_id: UUID | None
+ device_key: str
+ device_label: str
+ status: str
+ expires_at: datetime
+ confirmed_at: datetime | None
+ device_id: UUID | None
+ created_at: datetime
+
+
+class IssuedDeviceLinkChallengeRow(DeviceLinkChallengeRow):
+ challenge_token: str
+
+
+def _normalize_device_label(device_label: str) -> str:
+ normalized = device_label.strip()
+ if normalized == "":
+ raise ValueError("device_label is required")
+ return normalized[:120]
+
+
+def _normalize_device_key(device_key: str) -> str:
+ normalized = device_key.strip()
+ if normalized == "":
+ raise ValueError("device_key is required")
+ return normalized[:160]
+
+
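+# Same upsert semantics as hosted_auth._upsert_device: confirming a link for
+# a previously revoked device re-activates it.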
+def _upsert_device(
+ conn,
+ *,
+ user_account_id: UUID,
+ workspace_id: UUID | None,
+ device_key: str,
+ device_label: str,
+) -> DeviceRow:
+ now = utc_now()
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ INSERT INTO devices (
+ user_account_id,
+ workspace_id,
+ device_key,
+ device_label,
+ status,
+ last_seen_at,
+ updated_at
+ )
+ VALUES (%s, %s, %s, %s, 'active', %s, %s)
+ ON CONFLICT (user_account_id, device_key) DO UPDATE
+ SET workspace_id = EXCLUDED.workspace_id,
+ device_label = EXCLUDED.device_label,
+ status = 'active',
+ revoked_at = NULL,
+ last_seen_at = EXCLUDED.last_seen_at,
+ updated_at = EXCLUDED.updated_at
+ RETURNING id, user_account_id, workspace_id, device_key, device_label, status,
+ last_seen_at, revoked_at, created_at, updated_at
+ """,
+ (user_account_id, workspace_id, device_key, device_label, now, now),
+ )
+ row = cur.fetchone()
+
+ if row is None:
+ raise RuntimeError("failed to upsert hosted device")
+ return row
+
+
+def start_device_link_challenge(
+ conn,
+ *,
+ user_account_id: UUID,
+ workspace_id: UUID | None,
+ device_key: str,
+ device_label: str,
+ ttl_seconds: int,
+) -> IssuedDeviceLinkChallengeRow:
+ normalized_key = _normalize_device_key(device_key)
+ normalized_label = _normalize_device_label(device_label)
+ now = utc_now()
+
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ UPDATE device_link_challenges
+ SET status = 'expired'
+ WHERE user_account_id = %s
+ AND device_key = %s
+ AND status = 'pending'
+ AND expires_at > %s
+ """,
+ (user_account_id, normalized_key, now),
+ )
+ challenge_token = generate_token()
+ challenge_token_hash = hash_token(challenge_token)
+ expires_at = now + timedelta(seconds=ttl_seconds)
+ cur.execute(
+ """
+ INSERT INTO device_link_challenges (
+ user_account_id,
+ workspace_id,
+ device_key,
+ device_label,
+ challenge_token_hash,
+ status,
+ expires_at
+ )
+ VALUES (%s, %s, %s, %s, %s, 'pending', %s)
+ RETURNING id, user_account_id, workspace_id, device_key, device_label,
+ status, expires_at, confirmed_at, device_id, created_at
+ """,
+ (
+ user_account_id,
+ workspace_id,
+ normalized_key,
+ normalized_label,
+ challenge_token_hash,
+ expires_at,
+ ),
+ )
+ challenge = cur.fetchone()
+
+ if challenge is None:
+ raise RuntimeError("failed to create device-link challenge")
+ challenge["challenge_token"] = challenge_token
+ return challenge
+
+
+def _lookup_device_link_challenge_for_update(
+ conn,
+ *,
+ user_account_id: UUID,
+ challenge_token: str,
+) -> DeviceLinkChallengeRow | None:
+ token = challenge_token.strip()
+ if token == "":
+ return None
+
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ SELECT id, user_account_id, workspace_id, device_key, device_label,
+ status, expires_at, confirmed_at, device_id, created_at
+ FROM device_link_challenges
+ WHERE user_account_id = %s
+ AND challenge_token_hash = %s
+ FOR UPDATE
+ """,
+ (user_account_id, hash_token(token)),
+ )
+ return cur.fetchone()
+
+
+def confirm_device_link_challenge(
+ conn,
+ *,
+ user_account_id: UUID,
+ challenge_token: str,
+) -> DeviceRow:
+ now = utc_now()
+ challenge = _lookup_device_link_challenge_for_update(
+ conn,
+ user_account_id=user_account_id,
+ challenge_token=challenge_token,
+ )
+
+ if challenge is None:
+ raise DeviceLinkTokenInvalidError("device-link token is invalid")
+
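+    # Confirming an already-confirmed challenge is idempotent: return the
+    # previously linked device rather than failing the retry.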
+ if challenge["status"] == "confirmed" and challenge["device_id"] is not None:
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ SELECT id, user_account_id, workspace_id, device_key, device_label, status,
+ last_seen_at, revoked_at, created_at, updated_at
+ FROM devices
+ WHERE id = %s
+ """,
+ (challenge["device_id"],),
+ )
+ existing_device = cur.fetchone()
+ if existing_device is not None:
+ return existing_device
+
+ if challenge["status"] != "pending":
+ raise DeviceLinkTokenInvalidError("device-link token is no longer valid")
+
+ if challenge["expires_at"] <= now:
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ UPDATE device_link_challenges
+ SET status = 'expired'
+ WHERE id = %s
+ """,
+ (challenge["id"],),
+ )
+ raise DeviceLinkTokenExpiredError("device-link token has expired")
+
+ device = _upsert_device(
+ conn,
+ user_account_id=user_account_id,
+ workspace_id=challenge["workspace_id"],
+ device_key=challenge["device_key"],
+ device_label=challenge["device_label"],
+ )
+
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ UPDATE device_link_challenges
+ SET status = 'confirmed',
+ confirmed_at = %s,
+ device_id = %s
+ WHERE id = %s
+ """,
+ (now, device["id"], challenge["id"]),
+ )
+
+ return device
+
+
+def list_devices(
+ conn,
+ *,
+ user_account_id: UUID,
+ workspace_id: UUID | None,
+) -> list[DeviceRow]:
+ with conn.cursor() as cur:
+ if workspace_id is None:
+ cur.execute(
+ """
+ SELECT id, user_account_id, workspace_id, device_key, device_label, status,
+ last_seen_at, revoked_at, created_at, updated_at
+ FROM devices
+ WHERE user_account_id = %s
+ ORDER BY created_at DESC, id DESC
+ """,
+ (user_account_id,),
+ )
+ else:
+ cur.execute(
+ """
+ SELECT id, user_account_id, workspace_id, device_key, device_label, status,
+ last_seen_at, revoked_at, created_at, updated_at
+ FROM devices
+ WHERE user_account_id = %s
+ AND (workspace_id = %s OR workspace_id IS NULL)
+ ORDER BY created_at DESC, id DESC
+ """,
+ (user_account_id, workspace_id),
+ )
+ rows = cur.fetchall()
+
+ return rows
+
+
+def revoke_device(
+ conn,
+ *,
+ user_account_id: UUID,
+ device_id: UUID,
+) -> DeviceRow:
+ now = utc_now()
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ UPDATE devices
+ SET status = 'revoked',
+ revoked_at = %s,
+ updated_at = %s
+ WHERE id = %s
+ AND user_account_id = %s
+ RETURNING id, user_account_id, workspace_id, device_key, device_label, status,
+ last_seen_at, revoked_at, created_at, updated_at
+ """,
+ (now, now, device_id, user_account_id),
+ )
+ row = cur.fetchone()
+
+        if row is None:
+            raise HostedDeviceNotFoundError(f"device {device_id} was not found")
+
+        # Revoking a device also revokes its active sessions, so tokens bound
+        # to that device stop working immediately. (This block must stay inside
+        # the cursor context and outside the not-found branch to be reachable.)
+        cur.execute(
+            """
+            UPDATE auth_sessions
+            SET status = 'revoked',
+                revoked_at = %s
+            WHERE device_id = %s
+              AND status = 'active'
+            """,
+            (now, device_id),
+        )
+
+ return row
+
+
+def serialize_device(device: DeviceRow) -> dict[str, object]:
+ return {
+ "id": str(device["id"]),
+ "user_account_id": str(device["user_account_id"]),
+ "workspace_id": None if device["workspace_id"] is None else str(device["workspace_id"]),
+ "device_key": device["device_key"],
+ "device_label": device["device_label"],
+ "status": device["status"],
+ "last_seen_at": None if device["last_seen_at"] is None else device["last_seen_at"].isoformat(),
+ "revoked_at": None if device["revoked_at"] is None else device["revoked_at"].isoformat(),
+ "created_at": device["created_at"].isoformat(),
+ "updated_at": device["updated_at"].isoformat(),
+ }
+
+
+def serialize_device_link_challenge(challenge: IssuedDeviceLinkChallengeRow) -> dict[str, object]:
+ return {
+ "id": str(challenge["id"]),
+ "user_account_id": str(challenge["user_account_id"]),
+ "workspace_id": None if challenge["workspace_id"] is None else str(challenge["workspace_id"]),
+ "device_key": challenge["device_key"],
+ "device_label": challenge["device_label"],
+ "challenge_token": challenge["challenge_token"],
+ "status": challenge["status"],
+ "expires_at": challenge["expires_at"].isoformat(),
+ "confirmed_at": None if challenge["confirmed_at"] is None else challenge["confirmed_at"].isoformat(),
+ "device_id": None if challenge["device_id"] is None else str(challenge["device_id"]),
+ "created_at": challenge["created_at"].isoformat(),
+ }
diff --git a/apps/api/src/alicebot_api/hosted_preferences.py b/apps/api/src/alicebot_api/hosted_preferences.py
new file mode 100644
index 0000000..ae5369f
--- /dev/null
+++ b/apps/api/src/alicebot_api/hosted_preferences.py
@@ -0,0 +1,170 @@
+from __future__ import annotations
+
+from copy import deepcopy
+from datetime import datetime
+from typing import TypedDict
+from uuid import UUID
+from zoneinfo import ZoneInfo, ZoneInfoNotFoundError
+
+from psycopg.types.json import Jsonb
+
+
+class HostedPreferencesValidationError(ValueError):
+ """Raised when hosted preference input is invalid."""
+
+
+class UserPreferencesRow(TypedDict):
+ id: UUID
+ user_account_id: UUID
+ timezone: str
+ brief_preferences: dict[str, object]
+ quiet_hours: dict[str, object]
+ created_at: datetime
+ updated_at: datetime
+
+
+DEFAULT_TIMEZONE = "UTC"
+DEFAULT_BRIEF_PREFERENCES: dict[str, object] = {
+ "daily_brief": {
+ "enabled": False,
+ "window_start": "07:00",
+ }
+}
+DEFAULT_QUIET_HOURS: dict[str, object] = {
+ "enabled": False,
+ "start": "22:00",
+ "end": "07:00",
+}
+
+
+def validate_timezone(timezone: str) -> str:
+ normalized = timezone.strip()
+ if normalized == "":
+ raise HostedPreferencesValidationError("timezone must not be empty")
+
+ try:
+ ZoneInfo(normalized)
+ except ZoneInfoNotFoundError as exc:
+ raise HostedPreferencesValidationError(f"timezone {timezone!r} is not recognized") from exc
+
+ return normalized
+
+
+def _default_brief_preferences() -> dict[str, object]:
+    # Fresh copy per caller so the module-level default is never mutated.
+    return deepcopy(DEFAULT_BRIEF_PREFERENCES)
+
+
+def _default_quiet_hours() -> dict[str, object]:
+    return deepcopy(DEFAULT_QUIET_HOURS)
+
+
+def ensure_user_preferences(
+ conn,
+ *,
+ user_account_id: UUID,
+) -> UserPreferencesRow:
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ INSERT INTO user_preferences (user_account_id, timezone, brief_preferences, quiet_hours)
+ VALUES (%s, %s, %s, %s)
+ ON CONFLICT (user_account_id) DO NOTHING
+ """,
+ (
+ user_account_id,
+ DEFAULT_TIMEZONE,
+ Jsonb(_default_brief_preferences()),
+ Jsonb(_default_quiet_hours()),
+ ),
+ )
+ cur.execute(
+ """
+ SELECT id,
+ user_account_id,
+ timezone,
+ brief_preferences,
+ quiet_hours,
+ created_at,
+ updated_at
+ FROM user_preferences
+ WHERE user_account_id = %s
+ """,
+ (user_account_id,),
+ )
+ row = cur.fetchone()
+
+ if row is None:
+ raise RuntimeError("failed to load hosted user preferences")
+
+ return row
+
+
+def patch_user_preferences(
+ conn,
+ *,
+ user_account_id: UUID,
+ timezone: str | None,
+ brief_preferences: dict[str, object] | None,
+ quiet_hours: dict[str, object] | None,
+) -> UserPreferencesRow:
+ existing = ensure_user_preferences(conn, user_account_id=user_account_id)
+
+ resolved_timezone = existing["timezone"] if timezone is None else validate_timezone(timezone)
+ resolved_brief = existing["brief_preferences"] if brief_preferences is None else brief_preferences
+ resolved_quiet = existing["quiet_hours"] if quiet_hours is None else quiet_hours
+
+ if not isinstance(resolved_brief, dict):
+ raise HostedPreferencesValidationError("brief_preferences must be an object")
+ if not isinstance(resolved_quiet, dict):
+ raise HostedPreferencesValidationError("quiet_hours must be an object")
+
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ UPDATE user_preferences
+ SET timezone = %s,
+ brief_preferences = %s,
+ quiet_hours = %s,
+ updated_at = clock_timestamp()
+ WHERE user_account_id = %s
+ RETURNING id,
+ user_account_id,
+ timezone,
+ brief_preferences,
+ quiet_hours,
+ created_at,
+ updated_at
+ """,
+ (
+ resolved_timezone,
+ Jsonb(resolved_brief),
+ Jsonb(resolved_quiet),
+ user_account_id,
+ ),
+ )
+ row = cur.fetchone()
+
+ if row is None:
+ raise RuntimeError("failed to update hosted user preferences")
+
+ return row
+
+
+def serialize_user_preferences(preferences: UserPreferencesRow) -> dict[str, object]:
+ return {
+ "id": str(preferences["id"]),
+ "user_account_id": str(preferences["user_account_id"]),
+ "timezone": preferences["timezone"],
+ "brief_preferences": preferences["brief_preferences"],
+ "quiet_hours": preferences["quiet_hours"],
+ "created_at": preferences["created_at"].isoformat(),
+ "updated_at": preferences["updated_at"].isoformat(),
+ }
diff --git a/apps/api/src/alicebot_api/hosted_workspace.py b/apps/api/src/alicebot_api/hosted_workspace.py
new file mode 100644
index 0000000..c169772
--- /dev/null
+++ b/apps/api/src/alicebot_api/hosted_workspace.py
@@ -0,0 +1,272 @@
+from __future__ import annotations
+
+from datetime import datetime
+import re
+from typing import TypedDict
+from uuid import UUID
+
+
+SLUG_SANITIZE_PATTERN = re.compile(r"[^a-z0-9-]+")
+SLUG_COLLAPSE_PATTERN = re.compile(r"-+")
+
+
+class HostedWorkspaceNotFoundError(LookupError):
+ """Raised when a hosted workspace is not visible for the current account."""
+
+
+class HostedWorkspaceBootstrapConflictError(RuntimeError):
+ """Raised when hosted bootstrap is requested after completion."""
+
+
+class WorkspaceRow(TypedDict):
+ id: UUID
+ owner_user_account_id: UUID
+ slug: str
+ name: str
+ bootstrap_status: str
+ bootstrapped_at: datetime | None
+ created_at: datetime
+ updated_at: datetime
+
+
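+# e.g. "My  Workspace!" -> "my-workspace"; an all-symbol name falls back to
+# "alice-workspace", and results are capped at 120 characters.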
+def slugify_workspace_name(value: str) -> str:
+ normalized = value.strip().lower().replace(" ", "-")
+ normalized = SLUG_SANITIZE_PATTERN.sub("-", normalized)
+ normalized = SLUG_COLLAPSE_PATTERN.sub("-", normalized).strip("-")
+ if normalized == "":
+ return "alice-workspace"
+ return normalized[:120]
+
+
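+# Probes slug, slug-2, ..., slug-200 and returns the first free value instead
+# of looping unbounded on pathological collisions.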
+def _next_available_slug(conn, *, preferred_slug: str) -> str:
+ base_slug = slugify_workspace_name(preferred_slug)
+ with conn.cursor() as cur:
+ for suffix in range(1, 201):
+ candidate = base_slug if suffix == 1 else f"{base_slug}-{suffix}"
+ cur.execute("SELECT 1 FROM workspaces WHERE slug = %s", (candidate,))
+ if cur.fetchone() is None:
+ return candidate
+
+ raise RuntimeError("unable to allocate unique workspace slug")
+
+
+def create_workspace(
+ conn,
+ *,
+ user_account_id: UUID,
+ name: str,
+ slug: str | None,
+) -> WorkspaceRow:
+ workspace_name = name.strip()
+ if workspace_name == "":
+ raise ValueError("workspace name is required")
+
+ preferred_slug = slug if slug is not None and slug.strip() != "" else workspace_name
+ workspace_slug = _next_available_slug(conn, preferred_slug=preferred_slug)
+
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ INSERT INTO workspaces (owner_user_account_id, slug, name, bootstrap_status)
+ VALUES (%s, %s, %s, 'pending')
+ RETURNING id, owner_user_account_id, slug, name, bootstrap_status, bootstrapped_at,
+ created_at, updated_at
+ """,
+ (user_account_id, workspace_slug, workspace_name),
+ )
+ workspace = cur.fetchone()
+
+ if workspace is None:
+ raise RuntimeError("failed to create workspace")
+
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ INSERT INTO workspace_members (workspace_id, user_account_id, role)
+ VALUES (%s, %s, 'owner')
+ ON CONFLICT (workspace_id, user_account_id) DO UPDATE
+ SET role = EXCLUDED.role
+ """,
+ (workspace["id"], user_account_id),
+ )
+
+ return workspace
+
+
+def get_workspace_for_member(
+ conn,
+ *,
+ workspace_id: UUID,
+ user_account_id: UUID,
+) -> WorkspaceRow | None:
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ SELECT w.id,
+ w.owner_user_account_id,
+ w.slug,
+ w.name,
+ w.bootstrap_status,
+ w.bootstrapped_at,
+ w.created_at,
+ w.updated_at
+ FROM workspaces AS w
+ JOIN workspace_members AS wm
+ ON wm.workspace_id = w.id
+ WHERE w.id = %s
+ AND wm.user_account_id = %s
+ LIMIT 1
+ """,
+ (workspace_id, user_account_id),
+ )
+ row = cur.fetchone()
+
+ return row
+
+
+def get_current_workspace(
+ conn,
+ *,
+ user_account_id: UUID,
+ preferred_workspace_id: UUID | None,
+) -> WorkspaceRow | None:
+ if preferred_workspace_id is not None:
+ preferred = get_workspace_for_member(
+ conn,
+ workspace_id=preferred_workspace_id,
+ user_account_id=user_account_id,
+ )
+ if preferred is not None:
+ return preferred
+
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ SELECT w.id,
+ w.owner_user_account_id,
+ w.slug,
+ w.name,
+ w.bootstrap_status,
+ w.bootstrapped_at,
+ w.created_at,
+ w.updated_at
+ FROM workspaces AS w
+ JOIN workspace_members AS wm
+ ON wm.workspace_id = w.id
+ WHERE wm.user_account_id = %s
+ ORDER BY CASE WHEN wm.role = 'owner' THEN 0 ELSE 1 END,
+ w.created_at ASC,
+ w.id ASC
+ LIMIT 1
+ """,
+ (user_account_id,),
+ )
+ row = cur.fetchone()
+
+ return row
+
+
+def set_session_workspace(
+ conn,
+ *,
+ session_id: UUID,
+ user_account_id: UUID,
+ workspace_id: UUID,
+) -> None:
+ workspace = get_workspace_for_member(
+ conn,
+ workspace_id=workspace_id,
+ user_account_id=user_account_id,
+ )
+ if workspace is None:
+ raise HostedWorkspaceNotFoundError(f"workspace {workspace_id} was not found")
+
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ UPDATE auth_sessions
+ SET workspace_id = %s
+ WHERE id = %s
+ AND user_account_id = %s
+ """,
+ (workspace_id, session_id, user_account_id),
+ )
+
+
+def complete_workspace_bootstrap(
+ conn,
+ *,
+ workspace_id: UUID,
+ user_account_id: UUID,
+) -> WorkspaceRow:
+ workspace = get_workspace_for_member(
+ conn,
+ workspace_id=workspace_id,
+ user_account_id=user_account_id,
+ )
+ if workspace is None:
+ raise HostedWorkspaceNotFoundError(f"workspace {workspace_id} was not found")
+
+ if workspace["bootstrap_status"] == "ready":
+ raise HostedWorkspaceBootstrapConflictError(
+ f"workspace {workspace_id} bootstrap is already complete"
+ )
+
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ UPDATE workspaces
+ SET bootstrap_status = 'ready',
+ bootstrapped_at = clock_timestamp(),
+ updated_at = clock_timestamp()
+ WHERE id = %s
+ RETURNING id, owner_user_account_id, slug, name, bootstrap_status, bootstrapped_at,
+ created_at, updated_at
+ """,
+ (workspace_id,),
+ )
+ row = cur.fetchone()
+
+ if row is None:
+ raise HostedWorkspaceNotFoundError(f"workspace {workspace_id} was not found")
+ return row
+
+
+def get_bootstrap_status(
+ conn,
+ *,
+ workspace_id: UUID,
+ user_account_id: UUID,
+) -> dict[str, object]:
+ workspace = get_workspace_for_member(
+ conn,
+ workspace_id=workspace_id,
+ user_account_id=user_account_id,
+ )
+ if workspace is None:
+ raise HostedWorkspaceNotFoundError(f"workspace {workspace_id} was not found")
+
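+    # Shape matches contracts.HostedBootstrapStatusRecord; Telegram linkage is
+    # advertised as ready only once bootstrap has completed.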
+ return {
+ "workspace_id": str(workspace["id"]),
+ "status": workspace["bootstrap_status"],
+ "bootstrapped_at": None
+ if workspace["bootstrapped_at"] is None
+ else workspace["bootstrapped_at"].isoformat(),
+ "ready_for_next_phase_telegram_linkage": workspace["bootstrap_status"] == "ready",
+ "telegram_state": "not_available_in_p10_s1",
+ }
+
+
+def serialize_workspace(workspace: WorkspaceRow) -> dict[str, object]:
+ return {
+ "id": str(workspace["id"]),
+ "owner_user_account_id": str(workspace["owner_user_account_id"]),
+ "slug": workspace["slug"],
+ "name": workspace["name"],
+ "bootstrap_status": workspace["bootstrap_status"],
+ "bootstrapped_at": None
+ if workspace["bootstrapped_at"] is None
+ else workspace["bootstrapped_at"].isoformat(),
+ "created_at": workspace["created_at"].isoformat(),
+ "updated_at": workspace["updated_at"].isoformat(),
+ }
diff --git a/apps/api/src/alicebot_api/main.py b/apps/api/src/alicebot_api/main.py
index b430311..f86bce3 100644
--- a/apps/api/src/alicebot_api/main.py
+++ b/apps/api/src/alicebot_api/main.py
@@ -343,6 +343,50 @@
query_continuity_recall,
)
from alicebot_api.retrieval_evaluation import get_retrieval_evaluation_summary
+from alicebot_api.hosted_auth import (
+ AuthSessionExpiredError,
+ AuthSessionInvalidError,
+ AuthSessionRevokedDeviceError,
+ MagicLinkTokenExpiredError,
+ MagicLinkTokenInvalidError,
+ ensure_user_preferences_row,
+ list_feature_flags_for_user,
+ logout_auth_session,
+ resolve_auth_session,
+ serialize_auth_session,
+ serialize_magic_link_challenge,
+ serialize_user_account,
+ start_magic_link_challenge,
+ verify_magic_link_challenge,
+)
+from alicebot_api.hosted_devices import (
+ DeviceLinkTokenExpiredError,
+ DeviceLinkTokenInvalidError,
+ HostedDeviceNotFoundError,
+ confirm_device_link_challenge,
+ list_devices as list_hosted_devices,
+ revoke_device as revoke_hosted_device,
+ serialize_device,
+ serialize_device_link_challenge,
+ start_device_link_challenge,
+)
+from alicebot_api.hosted_preferences import (
+ HostedPreferencesValidationError,
+ ensure_user_preferences,
+ patch_user_preferences,
+ serialize_user_preferences,
+)
+from alicebot_api.hosted_workspace import (
+ HostedWorkspaceBootstrapConflictError,
+ HostedWorkspaceNotFoundError,
+ complete_workspace_bootstrap,
+ create_workspace,
+ get_bootstrap_status,
+ get_current_workspace,
+ get_workspace_for_member,
+ serialize_workspace,
+ set_session_workspace,
+)
from alicebot_api.continuity_review import (
ContinuityReviewNotFoundError,
ContinuityReviewValidationError,
@@ -1246,6 +1290,84 @@ async def enforce_authenticated_user_identity(
return await call_next(request)
+class MagicLinkStartRequest(BaseModel):
+ model_config = ConfigDict(extra="forbid")
+
+ email: str = Field(min_length=3, max_length=320)
+
+
+class MagicLinkVerifyRequest(BaseModel):
+ model_config = ConfigDict(extra="forbid")
+
+ challenge_token: str = Field(min_length=16, max_length=256)
+ device_label: str = Field(default="Primary device", min_length=1, max_length=120)
+ device_key: str | None = Field(default=None, min_length=1, max_length=160)
+
+
+class HostedWorkspaceCreateRequest(BaseModel):
+ model_config = ConfigDict(extra="forbid")
+
+ name: str = Field(min_length=1, max_length=160)
+ slug: str | None = Field(default=None, min_length=1, max_length=120)
+
+
+class HostedWorkspaceBootstrapRequest(BaseModel):
+ model_config = ConfigDict(extra="forbid")
+
+ workspace_id: UUID | None = None
+
+
+class DeviceLinkStartRequest(BaseModel):
+ model_config = ConfigDict(extra="forbid")
+
+ device_key: str = Field(min_length=1, max_length=160)
+ device_label: str = Field(min_length=1, max_length=120)
+ workspace_id: UUID | None = None
+
+
+class DeviceLinkConfirmRequest(BaseModel):
+ model_config = ConfigDict(extra="forbid")
+
+ challenge_token: str = Field(min_length=16, max_length=256)
+
+
+class HostedPreferencesPatchRequest(BaseModel):
+ model_config = ConfigDict(extra="forbid")
+
+ timezone: str | None = Field(default=None, min_length=1, max_length=120)
+ brief_preferences: dict[str, object] | None = None
+ quiet_hours: dict[str, object] | None = None
+
+
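+# Hosted endpoints authenticate with "Authorization: Bearer <session token>";
+# anything else is rejected before any database work happens.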
+def _extract_bearer_token(request: Request) -> str:
+ raw_authorization = request.headers.get("authorization", "").strip()
+ if raw_authorization == "":
+ raise AuthSessionInvalidError("authorization bearer token is required")
+
+ scheme, _, token = raw_authorization.partition(" ")
+ if scheme.lower() != "bearer" or token.strip() == "":
+ raise AuthSessionInvalidError("authorization header must use Bearer token format")
+ return token.strip()
+
+
+def _serialize_hosted_session_payload(
+ *,
+ session: dict[str, object],
+ user_account: dict[str, object],
+ workspace: dict[str, object] | None,
+ preferences: dict[str, object],
+ feature_flags: list[str],
+) -> dict[str, object]:
+ return {
+ "session": session,
+ "user_account": user_account,
+ "workspace": workspace,
+ "preferences": preferences,
+ "feature_flags": feature_flags,
+ "telegram_state": "not_available_in_p10_s1",
+ }
+
+
@app.get("/healthz")
def healthcheck() -> JSONResponse:
settings = get_settings()
@@ -4473,3 +4595,477 @@ def get_entity(entity_id: UUID, user_id: UUID) -> JSONResponse:
status_code=200,
content=jsonable_encoder(payload),
)
+
+
+@app.post("/v1/auth/magic-link/start")
+def start_v1_magic_link(body: MagicLinkStartRequest) -> JSONResponse:
+ settings = get_settings()
+
+ try:
+ with psycopg.connect(settings.database_url, row_factory=dict_row) as conn:
+ with conn.transaction():
+ challenge = start_magic_link_challenge(
+ conn,
+                    email=body.email,
+ ttl_seconds=settings.magic_link_ttl_seconds,
+ )
+ except ValueError as exc:
+ return JSONResponse(status_code=400, content={"detail": str(exc)})
+
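+    # P10-S1 ships no mail provider: the raw challenge token is echoed in the
+    # response ("builder_visible_only") so verification can be done manually.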
+ payload = {
+ "challenge": serialize_magic_link_challenge(challenge),
+ "delivery": {
+ "kind": "simulated_magic_link",
+ "posture": "builder_visible_only",
+ },
+ }
+ return JSONResponse(status_code=200, content=jsonable_encoder(payload))
+
+
+@app.post("/v1/auth/magic-link/verify")
+def verify_v1_magic_link(body: MagicLinkVerifyRequest) -> JSONResponse:
+ settings = get_settings()
+
+ try:
+ with psycopg.connect(settings.database_url, row_factory=dict_row) as conn:
+ with conn.transaction():
+ user_account, session, session_token, _device = verify_magic_link_challenge(
+ conn,
+                    challenge_token=body.challenge_token,
+                    session_ttl_seconds=settings.auth_session_ttl_seconds,
+                    device_label=body.device_label,
+                    device_key=body.device_key,
+ )
+ ensure_user_preferences_row(conn, user_account_id=user_account["id"])
+ preferences = ensure_user_preferences(conn, user_account_id=user_account["id"])
+ workspace = None
+ if session["workspace_id"] is not None:
+ workspace = get_workspace_for_member(
+ conn,
+ workspace_id=session["workspace_id"],
+ user_account_id=user_account["id"],
+ )
+ feature_flags = list_feature_flags_for_user(conn, user_account_id=user_account["id"])
+ except MagicLinkTokenExpiredError as exc:
+ return JSONResponse(status_code=401, content={"detail": str(exc)})
+ except (MagicLinkTokenInvalidError, ValueError) as exc:
+ return JSONResponse(status_code=400, content={"detail": str(exc)})
+
+ payload = _serialize_hosted_session_payload(
+ session=serialize_auth_session(session),
+ user_account=serialize_user_account(user_account),
+ workspace=None if workspace is None else serialize_workspace(workspace),
+ preferences=serialize_user_preferences(preferences),
+ feature_flags=feature_flags,
+ )
+ payload["session_token"] = session_token
+ return JSONResponse(status_code=200, content=jsonable_encoder(payload))
+
+
+@app.post("/v1/auth/logout")
+def logout_v1_auth_session(request: Request) -> JSONResponse:
+ settings = get_settings()
+
+ try:
+ session_token = _extract_bearer_token(request)
+ with psycopg.connect(settings.database_url, row_factory=dict_row) as conn:
+ with conn.transaction():
+ logout_auth_session(conn, session_token=session_token)
+ except (AuthSessionInvalidError, ValueError) as exc:
+ return JSONResponse(status_code=401, content={"detail": str(exc)})
+
+ return JSONResponse(status_code=200, content={"status": "logged_out"})
+
+
+@app.get("/v1/auth/session")
+def get_v1_auth_session(request: Request) -> JSONResponse:
+ settings = get_settings()
+
+ try:
+ session_token = _extract_bearer_token(request)
+ with psycopg.connect(settings.database_url, row_factory=dict_row) as conn:
+ with conn.transaction():
+ resolution = resolve_auth_session(conn, session_token=session_token)
+ user_account_id = resolution["user_account"]["id"]
+ workspace = get_current_workspace(
+ conn,
+ user_account_id=user_account_id,
+ preferred_workspace_id=resolution["session"]["workspace_id"],
+ )
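+                # Re-pin the session to the member's current workspace when it
+                # has drifted (e.g. first login predates any workspace).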
+ if workspace is not None and resolution["session"]["workspace_id"] != workspace["id"]:
+ set_session_workspace(
+ conn,
+ session_id=resolution["session"]["id"],
+ user_account_id=user_account_id,
+ workspace_id=workspace["id"],
+ )
+ resolution["session"]["workspace_id"] = workspace["id"]
+ preferences = ensure_user_preferences(conn, user_account_id=user_account_id)
+ feature_flags = list_feature_flags_for_user(conn, user_account_id=user_account_id)
+ except (AuthSessionInvalidError, AuthSessionExpiredError, AuthSessionRevokedDeviceError) as exc:
+ return JSONResponse(status_code=401, content={"detail": str(exc)})
+
+ payload = _serialize_hosted_session_payload(
+ session=serialize_auth_session(resolution["session"]),
+ user_account=serialize_user_account(resolution["user_account"]),
+ workspace=None if workspace is None else serialize_workspace(workspace),
+ preferences=serialize_user_preferences(preferences),
+ feature_flags=feature_flags,
+ )
+ return JSONResponse(status_code=200, content=jsonable_encoder(payload))
+
+
+@app.post("/v1/workspaces")
+def create_v1_workspace(request: Request, body: HostedWorkspaceCreateRequest) -> JSONResponse:
+ settings = get_settings()
+
+ try:
+ session_token = _extract_bearer_token(request)
+ with psycopg.connect(settings.database_url, row_factory=dict_row) as conn:
+ with conn.transaction():
+ resolution = resolve_auth_session(conn, session_token=session_token)
+ workspace = create_workspace(
+ conn,
+ user_account_id=resolution["user_account"]["id"],
+ name=body.name,
+ slug=body.slug,
+ )
+ set_session_workspace(
+ conn,
+ session_id=resolution["session"]["id"],
+ user_account_id=resolution["user_account"]["id"],
+ workspace_id=workspace["id"],
+ )
+ except (AuthSessionInvalidError, AuthSessionExpiredError, AuthSessionRevokedDeviceError) as exc:
+ return JSONResponse(status_code=401, content={"detail": str(exc)})
+ except ValueError as exc:
+ return JSONResponse(status_code=400, content={"detail": str(exc)})
+
+ return JSONResponse(
+ status_code=201,
+ content=jsonable_encoder({"workspace": serialize_workspace(workspace)}),
+ )
+
+
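+# Returns the workspace currently resolved for the session, re-pinning the
+# session when its stored workspace id is stale; 404 when the user has no
+# workspace to select.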
+@app.get("/v1/workspaces/current")
+def get_v1_current_workspace(request: Request) -> JSONResponse:
+ settings = get_settings()
+
+ try:
+ session_token = _extract_bearer_token(request)
+ with psycopg.connect(settings.database_url, row_factory=dict_row) as conn:
+ with conn.transaction():
+ resolution = resolve_auth_session(conn, session_token=session_token)
+ workspace = get_current_workspace(
+ conn,
+ user_account_id=resolution["user_account"]["id"],
+ preferred_workspace_id=resolution["session"]["workspace_id"],
+ )
+ if workspace is None:
+ return JSONResponse(status_code=404, content={"detail": "no workspace is currently selected"})
+ if resolution["session"]["workspace_id"] != workspace["id"]:
+ set_session_workspace(
+ conn,
+ session_id=resolution["session"]["id"],
+ user_account_id=resolution["user_account"]["id"],
+ workspace_id=workspace["id"],
+ )
+ except (AuthSessionInvalidError, AuthSessionExpiredError, AuthSessionRevokedDeviceError) as exc:
+ return JSONResponse(status_code=401, content={"detail": str(exc)})
+
+ return JSONResponse(
+ status_code=200,
+ content=jsonable_encoder({"workspace": serialize_workspace(workspace)}),
+ )
+
+
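+# Completes bootstrap either for an explicit, membership-checked
+# workspace_id or for the session's current workspace. The response bundles
+# workspace, bootstrap status, preferences, and feature flags; bootstrap
+# conflicts map to 409.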
+@app.post("/v1/workspaces/bootstrap")
+def bootstrap_v1_workspace(
+ request: Request,
+ body: HostedWorkspaceBootstrapRequest,
+) -> JSONResponse:
+ settings = get_settings()
+
+ try:
+ session_token = _extract_bearer_token(request)
+ with psycopg.connect(settings.database_url, row_factory=dict_row) as conn:
+ with conn.transaction():
+ resolution = resolve_auth_session(conn, session_token=session_token)
+ user_account_id = resolution["user_account"]["id"]
+ workspace = None
+ if body.workspace_id is not None:
+ workspace = get_workspace_for_member(
+ conn,
+ workspace_id=body.workspace_id,
+ user_account_id=user_account_id,
+ )
+ if workspace is None:
+ raise HostedWorkspaceNotFoundError(f"workspace {body.workspace_id} was not found")
+ set_session_workspace(
+ conn,
+ session_id=resolution["session"]["id"],
+ user_account_id=user_account_id,
+ workspace_id=workspace["id"],
+ )
+ else:
+ workspace = get_current_workspace(
+ conn,
+ user_account_id=user_account_id,
+ preferred_workspace_id=resolution["session"]["workspace_id"],
+ )
+ if workspace is None:
+ return JSONResponse(status_code=404, content={"detail": "no workspace is currently selected"})
+
+ bootstrapped_workspace = complete_workspace_bootstrap(
+ conn,
+ workspace_id=workspace["id"],
+ user_account_id=user_account_id,
+ )
+ preferences = ensure_user_preferences(conn, user_account_id=user_account_id)
+ status_payload = get_bootstrap_status(
+ conn,
+ workspace_id=workspace["id"],
+ user_account_id=user_account_id,
+ )
+ feature_flags = list_feature_flags_for_user(conn, user_account_id=user_account_id)
+ except (AuthSessionInvalidError, AuthSessionExpiredError, AuthSessionRevokedDeviceError) as exc:
+ return JSONResponse(status_code=401, content={"detail": str(exc)})
+ except HostedWorkspaceNotFoundError as exc:
+ return JSONResponse(status_code=404, content={"detail": str(exc)})
+ except HostedWorkspaceBootstrapConflictError as exc:
+ return JSONResponse(status_code=409, content={"detail": str(exc)})
+
+ return JSONResponse(
+ status_code=200,
+ content=jsonable_encoder(
+ {
+ "workspace": serialize_workspace(bootstrapped_workspace),
+ "bootstrap": status_payload,
+ "preferences": serialize_user_preferences(preferences),
+ "feature_flags": feature_flags,
+ "telegram_state": "not_available_in_p10_s1",
+ }
+ ),
+ )
+
+
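+# Read-only bootstrap progress for the session's current workspace. The
+# payload mirrors the bootstrap response, including the explicit
+# "telegram_state" placeholder that keeps Telegram out of P10-S1 scope.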
+@app.get("/v1/workspaces/bootstrap/status")
+def get_v1_workspace_bootstrap_status(request: Request) -> JSONResponse:
+ settings = get_settings()
+
+ try:
+ session_token = _extract_bearer_token(request)
+ with psycopg.connect(settings.database_url, row_factory=dict_row) as conn:
+ with conn.transaction():
+ resolution = resolve_auth_session(conn, session_token=session_token)
+ workspace = get_current_workspace(
+ conn,
+ user_account_id=resolution["user_account"]["id"],
+ preferred_workspace_id=resolution["session"]["workspace_id"],
+ )
+ if workspace is None:
+ return JSONResponse(status_code=404, content={"detail": "no workspace is currently selected"})
+ status_payload = get_bootstrap_status(
+ conn,
+ workspace_id=workspace["id"],
+ user_account_id=resolution["user_account"]["id"],
+ )
+ feature_flags = list_feature_flags_for_user(
+ conn,
+ user_account_id=resolution["user_account"]["id"],
+ )
+ except (AuthSessionInvalidError, AuthSessionExpiredError, AuthSessionRevokedDeviceError) as exc:
+ return JSONResponse(status_code=401, content={"detail": str(exc)})
+ except HostedWorkspaceNotFoundError as exc:
+ return JSONResponse(status_code=404, content={"detail": str(exc)})
+
+ return JSONResponse(
+ status_code=200,
+ content=jsonable_encoder(
+ {
+ "workspace": serialize_workspace(workspace),
+ "bootstrap": status_payload,
+ "feature_flags": feature_flags,
+ "telegram_state": "not_available_in_p10_s1",
+ }
+ ),
+ )
+
+
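+# Issues a device-link challenge with a TTL from settings, scoped to an
+# explicit, membership-checked workspace_id or to the session's current
+# workspace.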
+@app.post("/v1/devices/link/start")
+def start_v1_device_link(request: Request, body: DeviceLinkStartRequest) -> JSONResponse:
+ settings = get_settings()
+
+ try:
+ session_token = _extract_bearer_token(request)
+ with psycopg.connect(settings.database_url, row_factory=dict_row) as conn:
+ with conn.transaction():
+ resolution = resolve_auth_session(conn, session_token=session_token)
+ user_account_id = resolution["user_account"]["id"]
+ workspace_id = body.workspace_id or resolution["session"]["workspace_id"]
+ if body.workspace_id is not None:
+ workspace = get_workspace_for_member(
+ conn,
+ workspace_id=body.workspace_id,
+ user_account_id=user_account_id,
+ )
+ if workspace is None:
+ raise HostedWorkspaceNotFoundError(f"workspace {body.workspace_id} was not found")
+ workspace_id = workspace["id"]
+ challenge = start_device_link_challenge(
+ conn,
+ user_account_id=user_account_id,
+ workspace_id=workspace_id,
+ device_key=body.device_key,
+ device_label=body.device_label,
+ ttl_seconds=settings.device_link_ttl_seconds,
+ )
+ except (AuthSessionInvalidError, AuthSessionExpiredError, AuthSessionRevokedDeviceError) as exc:
+ return JSONResponse(status_code=401, content={"detail": str(exc)})
+ except HostedWorkspaceNotFoundError as exc:
+ return JSONResponse(status_code=404, content={"detail": str(exc)})
+ except ValueError as exc:
+ return JSONResponse(status_code=400, content={"detail": str(exc)})
+
+ return JSONResponse(
+ status_code=200,
+ content=jsonable_encoder({"challenge": serialize_device_link_challenge(challenge)}),
+ )
+
+
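+# Exchanges a pending challenge token for a linked device record; expired
+# tokens map to 401 and invalid tokens to 400.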
+@app.post("/v1/devices/link/confirm")
+def confirm_v1_device_link(request: Request, body: DeviceLinkConfirmRequest) -> JSONResponse:
+ settings = get_settings()
+
+ try:
+ session_token = _extract_bearer_token(request)
+ with psycopg.connect(settings.database_url, row_factory=dict_row) as conn:
+ with conn.transaction():
+ resolution = resolve_auth_session(conn, session_token=session_token)
+ device = confirm_device_link_challenge(
+ conn,
+ user_account_id=resolution["user_account"]["id"],
+ challenge_token=body.challenge_token,
+ )
+ except (AuthSessionInvalidError, AuthSessionExpiredError, AuthSessionRevokedDeviceError) as exc:
+ return JSONResponse(status_code=401, content={"detail": str(exc)})
+ except DeviceLinkTokenExpiredError as exc:
+ return JSONResponse(status_code=401, content={"detail": str(exc)})
+ except DeviceLinkTokenInvalidError as exc:
+ return JSONResponse(status_code=400, content={"detail": str(exc)})
+
+ return JSONResponse(
+ status_code=201,
+ content=jsonable_encoder({"device": serialize_device(device)}),
+ )
+
+
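+# Lists the caller's devices in the session's workspace, plus a summary of
+# active/revoked counts and the deterministic sort order applied.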
+@app.get("/v1/devices")
+def list_v1_devices(request: Request) -> JSONResponse:
+ settings = get_settings()
+
+ try:
+ session_token = _extract_bearer_token(request)
+ with psycopg.connect(settings.database_url, row_factory=dict_row) as conn:
+ with conn.transaction():
+ resolution = resolve_auth_session(conn, session_token=session_token)
+ devices = list_hosted_devices(
+ conn,
+ user_account_id=resolution["user_account"]["id"],
+ workspace_id=resolution["session"]["workspace_id"],
+ )
+ except (AuthSessionInvalidError, AuthSessionExpiredError, AuthSessionRevokedDeviceError) as exc:
+ return JSONResponse(status_code=401, content={"detail": str(exc)})
+
+ items = [serialize_device(device) for device in devices]
+ return JSONResponse(
+ status_code=200,
+ content=jsonable_encoder(
+ {
+ "items": items,
+ "summary": {
+ "total_count": len(items),
+ "active_count": sum(1 for item in items if item["status"] == "active"),
+ "revoked_count": sum(1 for item in items if item["status"] == "revoked"),
+ "order": ["created_at_desc", "id_desc"],
+ },
+ }
+ ),
+ )
+
+
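+# Revokes (rather than hard-deletes) the device and returns it with its
+# updated status; unknown device ids map to 404.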
+@app.delete("/v1/devices/{device_id}")
+def delete_v1_device(device_id: UUID, request: Request) -> JSONResponse:
+ settings = get_settings()
+
+ try:
+ session_token = _extract_bearer_token(request)
+ with psycopg.connect(settings.database_url, row_factory=dict_row) as conn:
+ with conn.transaction():
+ resolution = resolve_auth_session(conn, session_token=session_token)
+ device = revoke_hosted_device(
+ conn,
+ user_account_id=resolution["user_account"]["id"],
+ device_id=device_id,
+ )
+ except (AuthSessionInvalidError, AuthSessionExpiredError, AuthSessionRevokedDeviceError) as exc:
+ return JSONResponse(status_code=401, content={"detail": str(exc)})
+ except HostedDeviceNotFoundError as exc:
+ return JSONResponse(status_code=404, content={"detail": str(exc)})
+
+ return JSONResponse(
+ status_code=200,
+ content=jsonable_encoder({"device": serialize_device(device)}),
+ )
+
+
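+# Reads preferences via ensure_user_preferences, which creates the default
+# row on first read, so a valid session never sees 404 here.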
+@app.get("/v1/preferences")
+def get_v1_preferences(request: Request) -> JSONResponse:
+ settings = get_settings()
+
+ try:
+ session_token = _extract_bearer_token(request)
+ with psycopg.connect(settings.database_url, row_factory=dict_row) as conn:
+ with conn.transaction():
+ resolution = resolve_auth_session(conn, session_token=session_token)
+ preferences = ensure_user_preferences(
+ conn,
+ user_account_id=resolution["user_account"]["id"],
+ )
+ except (AuthSessionInvalidError, AuthSessionExpiredError, AuthSessionRevokedDeviceError) as exc:
+ return JSONResponse(status_code=401, content={"detail": str(exc)})
+
+ return JSONResponse(
+ status_code=200,
+ content=jsonable_encoder({"preferences": serialize_user_preferences(preferences)}),
+ )
+
+
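+# Partially updates timezone, brief preferences, and quiet hours; domain
+# validation failures map to 400.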
+@app.patch("/v1/preferences")
+def patch_v1_preferences(
+ request: Request,
+ body: HostedPreferencesPatchRequest,
+) -> JSONResponse:
+ settings = get_settings()
+
+ try:
+ session_token = _extract_bearer_token(request)
+ with psycopg.connect(settings.database_url, row_factory=dict_row) as conn:
+ with conn.transaction():
+ resolution = resolve_auth_session(conn, session_token=session_token)
+ preferences = patch_user_preferences(
+ conn,
+ user_account_id=resolution["user_account"]["id"],
+ timezone=body.timezone,
+ brief_preferences=body.brief_preferences,
+ quiet_hours=body.quiet_hours,
+ )
+ except (AuthSessionInvalidError, AuthSessionExpiredError, AuthSessionRevokedDeviceError) as exc:
+ return JSONResponse(status_code=401, content={"detail": str(exc)})
+ except HostedPreferencesValidationError as exc:
+ return JSONResponse(status_code=400, content={"detail": str(exc)})
+
+ return JSONResponse(
+ status_code=200,
+ content=jsonable_encoder({"preferences": serialize_user_preferences(preferences)}),
+ )
diff --git a/apps/web/app/onboarding/page.test.tsx b/apps/web/app/onboarding/page.test.tsx
new file mode 100644
index 0000000..f212312
--- /dev/null
+++ b/apps/web/app/onboarding/page.test.tsx
@@ -0,0 +1,20 @@
+import React from "react";
+import { cleanup, render, screen } from "@testing-library/react";
+import { afterEach, describe, expect, it } from "vitest";
+
+import OnboardingPage from "./page";
+
+describe("OnboardingPage", () => {
+ afterEach(() => {
+ cleanup();
+ });
+
+ it("renders hosted onboarding scope and guards Telegram claims", () => {
+    render(<OnboardingPage />);
+
+    // Guard: the page must not claim Telegram linkage in P10-S1.
+    expect(
+      screen.getByText(/not available in P10-S1/i),
+    ).toBeDefined();
+  });
+});
@@ ... @@
     AliceBot
-    Calm, governed views for requests, approvals, tasks, artifacts, Gmail, Calendar,
-    memories, chief-of-staff priorities, entities, and explainability.
+    Calm, governed views for hosted onboarding/settings plus requests, approvals, tasks,
+    artifacts, Gmail, Calendar, memories, chief-of-staff priorities, entities, and
+    explainability.
@@ -122,7 +133,7 @@ export function AppShell({ children }: { children: ReactNode }) {
+    Telegram channel linkage is not available in P10-S1. This screen only
+    confirms that hosted identity, workspace bootstrap, devices, and preferences are ready for
+    a later Telegram sprint.
+
+    Hosted settings expose readiness inputs and device state, but do not claim Telegram linkage,
+    scheduler execution, or brief delivery in Phase 10 Sprint 1.
+