Adds a Backup & Restore tab to the v3 web UI that packages user config, secrets, WiFi, user-uploaded fonts, plugin image uploads, and the installed plugin list into a single ZIP for safe reinstall recovery. Restore extracts the bundle, snapshots current state via the existing atomic config manager (so rollback stays available), reapplies the selected sections, and optionally reinstalls missing plugins from the store. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
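The description above says restore "snapshots current state via the existing atomic config manager" before reapplying sections. That snapshot relies on the standard write-temp-then-rename pattern; a minimal sketch of the idea (the helper name is hypothetical, not the project's actual config manager):

```python
import json
import os
import tempfile
from pathlib import Path

def atomic_write_json(path: Path, data: dict) -> None:
    # Hypothetical helper: write to a temp file in the same directory,
    # then os.replace() so readers never observe a half-written config.
    fd, tmp_name = tempfile.mkstemp(dir=str(path.parent), suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            json.dump(data, f, indent=2)
        os.replace(tmp_name, str(path))  # atomic rename on the same filesystem
    except BaseException:
        if os.path.exists(tmp_name):
            os.unlink(tmp_name)
        raise
```

Writing the temp file into the destination directory (not `/tmp`) matters: `os.replace` is only atomic within a single filesystem.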
📝 Walkthrough

Introduces a complete backup and restore feature for LEDMatrix, including a versioned ZIP-based backup schema, size-limited safe extraction with manifest validation, selective restoration via configurable options, and a web API with UI for export, import, list, download, and delete operations across config, secrets, WiFi, fonts, plugin uploads, and plugin state.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant Browser
    participant API as API Endpoint
    participant BkupMgr as Backup Manager
    participant FS as Filesystem
    participant ZIP as ZIP Archive
    User->>Browser: Click "Download backup"
    Browser->>API: POST /backup/export
    API->>BkupMgr: create_backup(project_root, output_dir)
    BkupMgr->>FS: Read config, secrets, WiFi files
    BkupMgr->>FS: Scan user fonts (exclude bundled)
    BkupMgr->>FS: Scan plugin uploads directory
    BkupMgr->>FS: Read plugin_state.json
    BkupMgr->>BkupMgr: Generate manifest.json + plugins.json
    BkupMgr->>ZIP: Create ZIP with all files
    BkupMgr->>FS: Atomic write to exports dir
    BkupMgr-->>API: Return ZIP path
    API-->>Browser: Return filename, size, timestamp
    Browser->>FS: Download ZIP file
    Browser->>User: Backup saved
```
```mermaid
sequenceDiagram
    actor User
    participant Browser
    participant API as API Endpoint
    participant BkupMgr as Backup Manager
    participant FS as Filesystem
    participant PluginMgr as Plugin Manager
    User->>Browser: Upload & validate backup ZIP
    Browser->>API: POST /backup/validate (multipart)
    API->>BkupMgr: validate_backup(temp_zip)
    BkupMgr->>ZIP: Check archive integrity
    BkupMgr->>BkupMgr: Verify manifest.json schema
    BkupMgr->>BkupMgr: Parse plugins.json (if present)
    BkupMgr-->>API: Return validation result
    API-->>Browser: Display preview (manifest, plugins, size)
    User->>Browser: Select restore options & click Restore
    Browser->>API: POST /backup/restore (options JSON)
    API->>BkupMgr: restore_backup(zip_path, project_root, options)
    BkupMgr->>FS: Extract to temp directory (safe paths)
    BkupMgr->>FS: Conditionally restore config/secrets/WiFi
    BkupMgr->>FS: Restore user fonts (skip bundled collisions)
    BkupMgr->>FS: Restore plugin uploads
    BkupMgr->>BkupMgr: Aggregate restored/skipped/errors per category
    BkupMgr-->>API: Return RestoreResult with plugins_to_install
    API->>PluginMgr: Reinstall selected plugins (if enabled)
    API->>FS: Clear fonts cache, reload config
    API-->>Browser: Return restore summary (success/partial, counts)
    Browser->>User: Display results panel
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: ❌ 1 failed (1 warning) | ✅ 2 passed
Actionable comments posted: 10
🧹 Nitpick comments (2)
test/test_backup_manager.py (1)
217-217: Mark intentionally unused unpacked variable.

At Line 217, rename `err` to `_err` (or `_`) to make intent explicit and silence lint noise.

Suggested fix

```diff
- ok, err, _ = validate_backup(p)
+ ok, _err, _ = validate_backup(p)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/test_backup_manager.py` at line 217, The test unpacks validate_backup(p) into ok, err, _ but leaves err unused; rename that variable to _err (or _) to mark it intentionally unused and silence linters — locate the unpack in the test where validate_backup is called and change the middle identifier from err to _err (or _) in that assignment.

web_interface/blueprints/api_v3.py (1)
1339-1340: Avoid silent `except ...: pass` in restore flow.

Line 1339 and Line 1350 swallow errors without logs, which makes restore/debug issues hard to triage remotely.
Suggested fix
```diff
- except Exception:
-     pass
+ except Exception:
+     logger.warning("[Backup] Failed to clear fonts_catalog cache", exc_info=True)
@@
- except Exception:
-     pass
+ except Exception:
+     logger.warning("[Backup] Could not reload config after restore", exc_info=True)
```

As per coding guidelines, "Catch specific exceptions, not bare `except:` statements" and "Implement comprehensive logging for remote debugging on Raspberry Pi".

Also applies to: 1350-1351
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web_interface/blueprints/api_v3.py` around lines 1339 - 1340, The bare "except Exception: pass" in the restore flow must be replaced with explicit exception handling and logging: identify the try/except blocks in the restore handler (the restore flow where the code currently uses "except Exception: pass"), catch specific exceptions (e.g., IOError/OSError, ValueError, subprocess.CalledProcessError or the concrete exceptions raised by the restore logic) instead of a blanket except, log the full traceback using logger.exception or processLogger.error(..., exc_info=True), and return or raise an appropriate error/HTTP response so failures aren't silently swallowed; ensure both occurrences (the two empty except blocks) are handled the same way.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/backup_manager.py`:
- Around line 248-263: iter_plugin_uploads currently yields any files under
assets/plugins/*/uploads recursively but restore paths elsewhere (lines noted
432-433 and 565-579) accept arbitrary members under assets/plugins/, so an
attacker can smuggle non-uploads files into a plugin-uploads restore; fix by
making iter_plugin_uploads and the restore validation only accept/emit paths
that strictly match the pattern
"<_PLUGIN_UPLOADS_REL>/<plugin_name>/uploads/<...>" (i.e. ensure the path
contains the uploads component immediately under each plugin directory and
produce/store members with that exact prefix), and add explicit validation in
the restore code paths referenced (lines 432-433 and 565-579) to reject any
archive member that does not start with _PLUGIN_UPLOADS_REL + "/" and contains
"/uploads/" as the next path component. Ensure you use the existing
_PLUGIN_UPLOADS_REL symbol and update both iter_plugin_uploads and the restore
membership checks to enforce this whitelist.
- Around line 187-203: The code reads plugin info from the wrong top-level key;
change the logic that assigns raw_plugins so it checks for the nested "states"
mapping produced by state_manager (i.e., extract state.get("states", {}) then
get its "plugins" subkey) and fall back to the existing top-level "plugins"
behavior if necessary; update the raw_plugins lookup near state_file/_STATE_REL
and preserve the existing type checks and population of the plugins dict (the
loop that fills plugins[plugin_id] with "plugin_id", "version", and "enabled")
so persisted plugin IDs/versions are discovered from the saved states structure.
- Around line 293-339: The code builds the entire ZIP in memory via buffer =
io.BytesIO() and tmp_path.write_bytes(buffer.getvalue()), which can OOM on
Raspberry Pi; change to stream directly to a temporary file on disk by opening
zipfile.ZipFile(tmp_path, "w", compression=zipfile.ZIP_DEFLATED) (or open
tmp_path as binary and pass to ZipFile) and write the same entries (using
iter_user_fonts, iter_plugin_uploads, list_installed_plugins,
PLUGINS_MANIFEST_NAME, _build_manifest, MANIFEST_NAME) with the manifest written
last, then atomically replace tmp_path to zip_path with os.replace — remove use
of buffer and buffer.getvalue() entirely.
In `@test/test_backup_manager.py`:
- Line 207: The test writes a malicious ZIP fixture with a hardcoded
"schema_version": 1 which makes the test brittle; replace the literal 1 with the
authoritative schema version constant from the production code (e.g., import
SCHEMA_VERSION or CURRENT_MANIFEST_SCHEMA from the module that defines manifest
schema) and use that when calling zf.writestr("manifest.json",
json.dumps({"schema_version": SCHEMA_VERSION, "contents": []})); apply the same
change to the second occurrence so both fixtures use the canonical schema
version rather than a hardcoded value.
In `@web_interface/blueprints/api_v3.py`:
- Around line 1301-1303: The pre-restore snapshot call to _save_config_atomic
currently discards its (success, error) return value; change the code around
api_v3.config_manager.load_config() so you capture the return like (success,
err) = _save_config_atomic(api_v3.config_manager, current, create_backup=True),
then if success is False log the error with context via processLogger (or the
module logger) including err and current/config id, and decide a graceful path:
either abort the restore by raising or return a failure response, or continue
while recording that snapshot creation failed; ensure you reference
_save_config_atomic and api_v3.config_manager.load_config in your change so the
behavior is explicit and properly logged.
- Around line 1225-1237: The current code only validates upload.filename via
validate_file_upload but does not enforce payload size before writing; modify
the upload handling around validate_file_upload,
_tempfile.mkstemp/_tempfile.mkstemp usage, and upload.save so the actual byte
size is checked and limited (e.g., stream the incoming file in chunks to
tmp_path and accumulate bytes, abort and delete tmp_path if accumulated bytes
exceed max_size_mb * 1024 * 1024), or alternately inspect the incoming file-like
object's length if available before calling upload.save; ensure errors return
the same JSON error response and that tmp_path is cleaned up on failure.
- Around line 1288-1295: Replace the raw bool(...) conversions when building
RestoreOptions with the existing _coerce_to_bool utility so string values like
"false" are parsed correctly; for each field in RestoreOptions (restore_config,
restore_secrets, restore_wifi, restore_fonts, restore_plugin_uploads,
reinstall_plugins) call _coerce_to_bool(opts_dict.get('<key>'), default=True)
(using the same default semantics currently used) instead of
bool(opts_dict.get(...)) to ensure consistent form-data boolean coercion for
RestoreOptions creation.
In `@web_interface/blueprints/pages_v3.py`:
- Around line 305-306: The broad "except Exception as e" block should not return
raw exception text to clients or swallow specific errors; change it to catch and
handle specific expected exceptions (e.g., ValueError, KeyError,
requests.exceptions.RequestException) and any remaining unexpected errors via a
generic handler; for each specific case return a clear user-facing message
(e.g., "Unable to fetch recent NHL data, please try again later") with the
appropriate HTTP status, and log the original exception server-side with
structured context/tag "[NHL Recent]" using logger.exception or
logger.error(..., exc_info=True) so internal details are recorded but not leaked
to the client. Ensure the unique "except Exception as e" location is replaced
and that all log messages include the "[NHL Recent]" context.
In `@web_interface/templates/v3/partials/backup_restore.html`:
- Around line 55-56: The preview and restore flow must be tied to the exact file
the user inspected: capture and store a stable identifier (e.g., the selected
File object or its name+lastModified) when handling the "inspect" action for the
input element with id "restore-file-input", clear any stored inspection state on
input change, and when applying inspect responses only update the preview if the
response's identifier matches the currently stored file id; finally, make
runRestore() use the stored File reference/identifier (the inspectedFile) rather
than re-reading input.files[0] so restores always apply to the file the user
approved.
- Around line 305-321: The restore handler currently treats any non-success
payload as a warning and renders details instead of failing; update the async
block around the fetch to check payload.status after parsing (the response
assigned to `payload`) and if it's not 'success' throw an Error that includes
payload.message (or a fallback message) so the caller shows an error state
instead of "finished with warnings"; ensure this replaces the current branch
that sets `ok` and rendering for failures, so `result`/`restore-result` is only
populated for true success and `notify` receives the error via the thrown
exception path.
---
Nitpick comments:
In `@test/test_backup_manager.py`:
- Line 217: The test unpacks validate_backup(p) into ok, err, _ but leaves err
unused; rename that variable to _err (or _) to mark it intentionally unused and
silence linters — locate the unpack in the test where validate_backup is called
and change the middle identifier from err to _err (or _) in that assignment.
In `@web_interface/blueprints/api_v3.py`:
- Around line 1339-1340: The bare "except Exception: pass" in the restore flow
must be replaced with explicit exception handling and logging: identify the
try/except blocks in the restore handler (the restore flow where the code
currently uses "except Exception: pass"), catch specific exceptions (e.g.,
IOError/OSError, ValueError, subprocess.CalledProcessError or the concrete
exceptions raised by the restore logic) instead of a blanket except, log the
full traceback using logger.exception or processLogger.error(...,
exc_info=True), and return or raise an appropriate error/HTTP response so
failures aren't silently swallowed; ensure both occurrences (the two empty
except blocks) are handled the same way.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: b3376484-43ad-4400-b872-efe3b947d28b
📒 Files selected for processing (6)
- src/backup_manager.py
- test/test_backup_manager.py
- web_interface/blueprints/api_v3.py
- web_interface/blueprints/pages_v3.py
- web_interface/templates/v3/base.html
- web_interface/templates/v3/partials/backup_restore.html
```python
state_file = project_root / _STATE_REL
if state_file.exists():
    try:
        with state_file.open("r", encoding="utf-8") as f:
            state = json.load(f)
        raw_plugins = state.get("plugins", {}) if isinstance(state, dict) else {}
        if isinstance(raw_plugins, dict):
            for plugin_id, info in raw_plugins.items():
                if not isinstance(info, dict):
                    continue
                plugins[plugin_id] = {
                    "plugin_id": plugin_id,
                    "version": info.get("version") or "",
                    "enabled": bool(info.get("enabled", True)),
                }
    except (OSError, json.JSONDecodeError) as e:
        logger.warning("Could not read plugin_state.json: %s", e)
```
Read the persisted plugin list from states.
Line 192 looks under `plugins`, but src/plugin_system/state_manager.py:406-435 writes plugin_state.json under `states`. As written, the state file is effectively invisible here, so backups can lose plugin IDs/versions unless plugin-repos/ happens to be populated.
Possible fix
```diff
- raw_plugins = state.get("plugins", {}) if isinstance(state, dict) else {}
+ raw_plugins = {}
+ if isinstance(state, dict):
+     raw_plugins = state.get("states") or state.get("plugins") or {}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/backup_manager.py` around lines 187 - 203, The code reads plugin info
from the wrong top-level key; change the logic that assigns raw_plugins so it
checks for the nested "states" mapping produced by state_manager (i.e., extract
state.get("states", {}) then get its "plugins" subkey) and fall back to the
existing top-level "plugins" behavior if necessary; update the raw_plugins
lookup near state_file/_STATE_REL and preserve the existing type checks and
population of the plugins dict (the loop that fills plugins[plugin_id] with
"plugin_id", "version", and "enabled") so persisted plugin IDs/versions are
discovered from the saved states structure.
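The fallback lookup the fix suggests can be sketched as a small helper. The shape of plugin_state.json assumed here (a top-level `states` or `plugins` mapping) is taken from the review's description, not verified against the repository:

```python
def extract_plugin_states(state: object) -> dict:
    # Prefer the nested "states" mapping written by state_manager,
    # fall back to a legacy top-level "plugins" key, else empty.
    if not isinstance(state, dict):
        return {}
    raw = state.get("states") or state.get("plugins") or {}
    return raw if isinstance(raw, dict) else {}
```

Because `or` skips falsy values, a present-but-empty `states` key also falls through to the legacy key, which matches the defensive intent of the suggested diff.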
```python
def iter_plugin_uploads(project_root: Path) -> List[Path]:
    """Return every file under ``assets/plugins/*/uploads/`` (recursive)."""
    plugin_root = project_root / _PLUGIN_UPLOADS_REL
    if not plugin_root.exists():
        return []
    out: List[Path] = []
    for plugin_dir in sorted(plugin_root.iterdir()):
        if not plugin_dir.is_dir():
            continue
        uploads = plugin_dir / "uploads"
        if not uploads.exists():
            continue
        for root, _dirs, files in os.walk(uploads):
            for name in sorted(files):
                out.append(Path(root) / name)
    return out
```
Restrict backup members to assets/plugins/*/uploads/**.
Lines 248-263 only export files from each plugin’s uploads/ directory, but Lines 432-433 and 565-579 treat the entire assets/plugins/ subtree as valid restore input. A crafted backup can therefore smuggle unrelated plugin assets into the restore while the UI still labels them as “plugin uploads”.
Possible fix
```diff
+def _is_plugin_upload_path(path: Path) -> bool:
+    parts = path.parts
+    return (
+        len(parts) >= 5
+        and parts[0] == "assets"
+        and parts[1] == "plugins"
+        and parts[3] == "uploads"
+    )
 ...
-    if any(n.startswith(_PLUGIN_UPLOADS_REL.as_posix() + "/") for n in names):
+    if any(_is_plugin_upload_path(Path(n)) for n in names):
         detected.append("plugin_uploads")
 ...
     for name in files:
         src = Path(root) / name
         rel = src.relative_to(tmp_dir)
+        if not _is_plugin_upload_path(rel):
+            result.errors.append(f"Unexpected plugin asset in backup: {rel.as_posix()}")
+            continue
         try:
             _copy_file(src, project_root / rel)
```
_copy_file(src, project_root / rel)Also applies to: 432-433, 565-579
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/backup_manager.py` around lines 248 - 263, iter_plugin_uploads currently
yields any files under assets/plugins/*/uploads recursively but restore paths
elsewhere (lines noted 432-433 and 565-579) accept arbitrary members under
assets/plugins/, so an attacker can smuggle non-uploads files into a
plugin-uploads restore; fix by making iter_plugin_uploads and the restore
validation only accept/emit paths that strictly match the pattern
"<_PLUGIN_UPLOADS_REL>/<plugin_name>/uploads/<...>" (i.e. ensure the path
contains the uploads component immediately under each plugin directory and
produce/store members with that exact prefix), and add explicit validation in
the restore code paths referenced (lines 432-433 and 565-579) to reject any
archive member that does not start with _PLUGIN_UPLOADS_REL + "/" and contains
"/uploads/" as the next path component. Ensure you use the existing
_PLUGIN_UPLOADS_REL symbol and update both iter_plugin_uploads and the restore
membership checks to enforce this whitelist.
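The whitelist proposed above can be expressed as a standalone predicate over archive member names. The `assets/plugins/<plugin>/uploads/<...>` layout comes from the snippet under review; the function name is illustrative:

```python
from pathlib import PurePosixPath

def is_plugin_upload_path(member: str) -> bool:
    # Accept only archive members shaped like
    # assets/plugins/<plugin_name>/uploads/<...> — nothing else under
    # assets/plugins/ should survive a plugin-uploads restore.
    parts = PurePosixPath(member).parts
    return (
        len(parts) >= 5
        and parts[0] == "assets"
        and parts[1] == "plugins"
        and parts[3] == "uploads"
    )
```

Checking positional components (rather than `startswith`) also rejects members that smuggle `..` or sibling directories into the prefix.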
```python
# Build bundle in memory first, then atomically write to final path.
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w", compression=zipfile.ZIP_DEFLATED) as zf:
    # Config files.
    if (project_root / _CONFIG_REL).exists():
        zf.write(project_root / _CONFIG_REL, _CONFIG_REL.as_posix())
        contents.append("config")
    if (project_root / _SECRETS_REL).exists():
        zf.write(project_root / _SECRETS_REL, _SECRETS_REL.as_posix())
        contents.append("secrets")
    if (project_root / _WIFI_REL).exists():
        zf.write(project_root / _WIFI_REL, _WIFI_REL.as_posix())
        contents.append("wifi")

    # User-uploaded fonts.
    user_fonts = iter_user_fonts(project_root)
    if user_fonts:
        for font in user_fonts:
            arcname = font.relative_to(project_root).as_posix()
            zf.write(font, arcname)
        contents.append("fonts")

    # Plugin uploads.
    plugin_uploads = iter_plugin_uploads(project_root)
    if plugin_uploads:
        for upload in plugin_uploads:
            arcname = upload.relative_to(project_root).as_posix()
            zf.write(upload, arcname)
        contents.append("plugin_uploads")

    # Installed plugins manifest.
    plugins = list_installed_plugins(project_root)
    if plugins:
        zf.writestr(
            PLUGINS_MANIFEST_NAME,
            json.dumps(plugins, indent=2),
        )
        contents.append("plugins")

    # Manifest goes last so that `contents` reflects what we actually wrote.
    manifest = _build_manifest(contents, project_root)
    zf.writestr(MANIFEST_NAME, json.dumps(manifest, indent=2))

# Write atomically.
tmp_path = zip_path.with_suffix(".zip.tmp")
tmp_path.write_bytes(buffer.getvalue())
os.replace(tmp_path, zip_path)
```
Stream backup creation to disk instead of RAM.
Lines 293-339 build the whole archive in BytesIO and then duplicate it again with buffer.getvalue(). On Raspberry Pi, a backup with fonts/uploads can spike memory badly or OOM the process.
Possible fix
```diff
-    # Build bundle in memory first, then atomically write to final path.
-    buffer = io.BytesIO()
-    with zipfile.ZipFile(buffer, "w", compression=zipfile.ZIP_DEFLATED) as zf:
+    # Stream to a temp file in the target directory, then atomically replace it.
+    with tempfile.NamedTemporaryFile(
+        mode="w+b",
+        prefix=f"{zip_name}.",
+        suffix=".tmp",
+        dir=output_dir,
+        delete=False,
+    ) as tmp_file:
+        tmp_path = Path(tmp_file.name)
+    try:
+        with zipfile.ZipFile(tmp_path, "w", compression=zipfile.ZIP_DEFLATED) as zf:
             # Config files.
             ...
             # Manifest goes last so that `contents` reflects what we actually wrote.
             manifest = _build_manifest(contents, project_root)
             zf.writestr(MANIFEST_NAME, json.dumps(manifest, indent=2))
-
-    # Write atomically.
-    tmp_path = zip_path.with_suffix(".zip.tmp")
-    tmp_path.write_bytes(buffer.getvalue())
-    os.replace(tmp_path, zip_path)
+        os.replace(tmp_path, zip_path)
+    finally:
+        if tmp_path.exists():
+            tmp_path.unlink()
```

As per coding guidelines, "Code must run on Raspberry Pi, not Windows development machine" and "Optimize code for Raspberry Pi's limited RAM and CPU capabilities."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/backup_manager.py` around lines 293 - 339, The code builds the entire ZIP
in memory via buffer = io.BytesIO() and tmp_path.write_bytes(buffer.getvalue()),
which can OOM on Raspberry Pi; change to stream directly to a temporary file on
disk by opening zipfile.ZipFile(tmp_path, "w", compression=zipfile.ZIP_DEFLATED)
(or open tmp_path as binary and pass to ZipFile) and write the same entries
(using iter_user_fonts, iter_plugin_uploads, list_installed_plugins,
PLUGINS_MANIFEST_NAME, _build_manifest, MANIFEST_NAME) with the manifest written
last, then atomically replace tmp_path to zip_path with os.replace — remove use
of buffer and buffer.getvalue() entirely.
```python
def test_validate_backup_rejects_zip_traversal(tmp_path: Path) -> None:
    zip_path = tmp_path / "malicious.zip"
    with zipfile.ZipFile(zip_path, "w") as zf:
        zf.writestr("manifest.json", json.dumps({"schema_version": 1, "contents": []}))
```
Avoid hardcoded schema version in malicious ZIP fixtures.
Using schema_version: 1 makes these tests brittle when schema evolves; they can fail on version mismatch before reaching the traversal assertion.
Suggested fix
```diff
-        zf.writestr("manifest.json", json.dumps({"schema_version": 1, "contents": []}))
+        zf.writestr("manifest.json", json.dumps({"schema_version": backup_manager.SCHEMA_VERSION, "contents": []}))
@@
-        zf.writestr("manifest.json", json.dumps({"schema_version": 1, "contents": []}))
+        zf.writestr("manifest.json", json.dumps({"schema_version": backup_manager.SCHEMA_VERSION, "contents": []}))
```

Also applies to: 278-278
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/test_backup_manager.py` at line 207, The test writes a malicious ZIP
fixture with a hardcoded "schema_version": 1 which makes the test brittle;
replace the literal 1 with the authoritative schema version constant from the
production code (e.g., import SCHEMA_VERSION or CURRENT_MANIFEST_SCHEMA from the
module that defines manifest schema) and use that when calling
zf.writestr("manifest.json", json.dumps({"schema_version": SCHEMA_VERSION,
"contents": []})); apply the same change to the second occurrence so both
fixtures use the canonical schema version rather than a hardcoded value.
```python
is_valid, err = validate_file_upload(
    upload.filename,
    max_size_mb=200,
    allowed_extensions=['.zip'],
)
if not is_valid:
    return None, (jsonify({'status': 'error', 'message': err}), 400)
fd, tmp_name = _tempfile.mkstemp(prefix='ledmatrix_upload_', suffix='.zip')
os.close(fd)
tmp_path = Path(tmp_name)
try:
    upload.save(str(tmp_path))
except Exception:
```
Enforce the backup size limit on actual uploaded bytes, not just filename validation.
At Line 1225, validate_file_upload(...) is called with upload.filename, which cannot prove payload size. A large upload can still be written at Line 1236, causing disk pressure on Pi devices.
Suggested fix
```diff
 def _save_uploaded_backup_to_temp() -> Tuple[Optional[Path], Optional[Tuple[Response, int]]]:
 @@
     is_valid, err = validate_file_upload(
         upload.filename,
         max_size_mb=200,
         allowed_extensions=['.zip'],
     )
 @@
+    max_bytes = 200 * 1024 * 1024
+    if request.content_length is not None and request.content_length > max_bytes:
+        return None, (jsonify({'status': 'error', 'message': 'Backup file exceeds 200MB limit'}), 400)
 @@
     try:
         upload.save(str(tmp_path))
+        if tmp_path.stat().st_size > max_bytes:
+            tmp_path.unlink(missing_ok=True)
+            return None, (jsonify({'status': 'error', 'message': 'Backup file exceeds 200MB limit'}), 400)
     except Exception:
```

As per coding guidelines, "Validate inputs and handle errors early (Fail Fast principle)" and "Optimize code for Raspberry Pi's limited RAM and CPU capabilities".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@web_interface/blueprints/api_v3.py` around lines 1225 - 1237, The current
code only validates upload.filename via validate_file_upload but does not
enforce payload size before writing; modify the upload handling around
validate_file_upload, _tempfile.mkstemp/_tempfile.mkstemp usage, and upload.save
so the actual byte size is checked and limited (e.g., stream the incoming file
in chunks to tmp_path and accumulate bytes, abort and delete tmp_path if
accumulated bytes exceed max_size_mb * 1024 * 1024), or alternately inspect the
incoming file-like object's length if available before calling upload.save;
ensure errors return the same JSON error response and that tmp_path is cleaned
up on failure.
```python
opts = backup_manager.RestoreOptions(
    restore_config=bool(opts_dict.get('restore_config', True)),
    restore_secrets=bool(opts_dict.get('restore_secrets', True)),
    restore_wifi=bool(opts_dict.get('restore_wifi', True)),
    restore_fonts=bool(opts_dict.get('restore_fonts', True)),
    restore_plugin_uploads=bool(opts_dict.get('restore_plugin_uploads', True)),
    reinstall_plugins=bool(opts_dict.get('reinstall_plugins', True)),
)
```
🧩 Analysis chain

🏁 Scripts executed (repository: ChuckBuilds/LEDMatrix):

```shell
# First, let's find the file and examine it
git ls-files | grep -E "api_v3.py"

# Check file size and locate the code around lines 1288-1295
wc -l web_interface/blueprints/api_v3.py

# Read the specific section mentioned in the review
sed -n '1280,1310p' web_interface/blueprints/api_v3.py

# Look for context around this code - understand how opts_dict is populated
sed -n '1270,1300p' web_interface/blueprints/api_v3.py

# Check if there's any validation or parsing of opts_dict earlier in the function
sed -n '1250,1290p' web_interface/blueprints/api_v3.py

# Look for similar bool() coercion patterns in the file
rg "bool\(.*\.get\(" web_interface/blueprints/api_v3.py

# Check if there are other similar patterns with RestoreOptions or other option parsing
rg -A 3 "RestoreOptions\(" web_interface/blueprints/api_v3.py
```

```shell
# Verify Python bool() behavior to confirm the claim
python3 <<'EOF'
import json
# Test 1: Proper JSON with boolean literals
json_str1 = '{"restore_config": false}'
data1 = json.loads(json_str1)
print(f"Proper JSON false: {data1['restore_config']} -> bool() = {bool(data1.get('restore_config', True))}")
# Test 2: Malformed JSON with string "false"
json_str2 = '{"restore_config": "false"}'
data2 = json.loads(json_str2)
print(f"String 'false': {data2['restore_config']} -> bool() = {bool(data2.get('restore_config', True))}")
# Test 3: Proper JSON with boolean true
json_str3 = '{"restore_config": true}'
data3 = json.loads(json_str3)
print(f"Proper JSON true: {data3['restore_config']} -> bool() = {bool(data3.get('restore_config', True))}")
# Test 4: String "true"
json_str4 = '{"restore_config": "true"}'
data4 = json.loads(json_str4)
print(f"String 'true': {data4['restore_config']} -> bool() = {bool(data4.get('restore_config', True))}")
# Test 5: Integer 0
json_str5 = '{"restore_config": 0}'
data5 = json.loads(json_str5)
print(f"Integer 0: {data5['restore_config']} -> bool() = {bool(data5.get('restore_config', True))}")
EOF
```

```shell
# Check how clients interact with this API - look for frontend code
fd -e "js" -e "ts" -e "jsx" -e "tsx" | xargs grep -l "backup.*restore" 2>/dev/null | head -5

# Look for any existing boolean parsing utilities in the codebase
rg -l "parse.*bool|str.*bool" --type py | head -10

# Find the _coerce_to_bool function definition
rg -B 2 -A 10 "def _coerce_to_bool" web_interface/blueprints/api_v3.py

# Check where _coerce_to_bool is used in the file
rg -B 2 -A 2 "_coerce_to_bool" web_interface/blueprints/api_v3.py | head -40

# Look at the broader context of the backup_restore function to understand error handling patterns
sed -n '1255,1350p' web_interface/blueprints/api_v3.py

# Get the full definition of _coerce_to_bool function
rg -A 20 "def _coerce_to_bool" web_interface/blueprints/api_v3.py

# Verify the exact line numbers and formatting of _coerce_to_bool
sed -n '1110,1135p' web_interface/blueprints/api_v3.py

# Check if _coerce_to_bool is accessible in the backup_restore function scope
rg -B 50 "def backup_restore" web_interface/blueprints/api_v3.py | tail -60 | head -10

# Get line numbers for _coerce_to_bool definition
rg -n "def _coerce_to_bool" web_interface/blueprints/api_v3.py

# Get line numbers for backup_restore function
rg -n "def backup_restore" web_interface/blueprints/api_v3.py
```
Use existing _coerce_to_bool() utility instead of raw bool() for RestoreOptions parsing.
At lines 1288–1295, bool(opts_dict.get(...)) misinterprets string values like "false" as True, potentially enabling restore operations the caller intended to disable. The codebase already defines _coerce_to_bool() at line 133 to handle this exact scenario for form data parsing. Apply it consistently here:
Suggested fix
opts = backup_manager.RestoreOptions(
- restore_config=bool(opts_dict.get('restore_config', True)),
- restore_secrets=bool(opts_dict.get('restore_secrets', True)),
- restore_wifi=bool(opts_dict.get('restore_wifi', True)),
- restore_fonts=bool(opts_dict.get('restore_fonts', True)),
- restore_plugin_uploads=bool(opts_dict.get('restore_plugin_uploads', True)),
- reinstall_plugins=bool(opts_dict.get('reinstall_plugins', True)),
+ restore_config=_coerce_to_bool(opts_dict.get('restore_config', True)),
+ restore_secrets=_coerce_to_bool(opts_dict.get('restore_secrets', True)),
+ restore_wifi=_coerce_to_bool(opts_dict.get('restore_wifi', True)),
+ restore_fonts=_coerce_to_bool(opts_dict.get('restore_fonts', True)),
+ restore_plugin_uploads=_coerce_to_bool(opts_dict.get('restore_plugin_uploads', True)),
+ reinstall_plugins=_coerce_to_bool(opts_dict.get('reinstall_plugins', True)),
)

This aligns with existing code patterns elsewhere in the file and satisfies the "Explicit over Implicit" and "Fail Fast" principles by validating input types consistently.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@web_interface/blueprints/api_v3.py` around lines 1288 - 1295, Replace the raw
bool(...) conversions when building RestoreOptions with the existing
_coerce_to_bool utility so string values like "false" are parsed correctly; for
each field in RestoreOptions (restore_config, restore_secrets, restore_wifi,
restore_fonts, restore_plugin_uploads, reinstall_plugins) call
_coerce_to_bool(opts_dict.get('<key>'), default=True) (using the same default
semantics currently used) instead of bool(opts_dict.get(...)) to ensure
consistent form-data boolean coercion for RestoreOptions creation.
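The root cause generalizes beyond this endpoint: Python's `bool()` treats every non-empty string as truthy, so JSON or form values like `"false"` flip the wrong way. A minimal sketch of the recommended coercion (the helper name and accepted spellings here are assumptions, not the project's exact `_coerce_to_bool`):

```python
import json

def coerce_to_bool(value, default=True):
    """Interpret common truthy/falsy spellings from JSON or form data."""
    if value is None:
        return default
    if isinstance(value, bool):
        return value
    if isinstance(value, (int, float)):
        return bool(value)
    if isinstance(value, str):
        return value.strip().lower() in ("1", "true", "yes", "on")
    return default

opts = json.loads('{"restore_config": "false"}')
print(bool(opts.get("restore_config", True)))            # True -- the bug
print(coerce_to_bool(opts.get("restore_config", True)))  # False -- the caller's intent
```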
current = api_v3.config_manager.load_config()
_save_config_atomic(api_v3.config_manager, current, create_backup=True)
except Exception:
Pre-restore snapshot failure is currently ignored.
At Line 1302, _save_config_atomic(...) returns (success, error) but the result is discarded. If snapshot creation fails without throwing, rollback safety is silently lost.
Suggested fix
- _save_config_atomic(api_v3.config_manager, current, create_backup=True)
+ snap_ok, snap_err = _save_config_atomic(api_v3.config_manager, current, create_backup=True)
+ if not snap_ok:
+ logger.warning("[Backup] Pre-restore snapshot failed: %s (continuing)", snap_err)

As per coding guidelines, "Implement graceful degradation to continue operation when non-critical features fail" and "Implement comprehensive logging for remote debugging on Raspberry Pi".
🧰 Tools
🪛 Ruff (0.15.9)
[warning] 1303-1303: Do not catch blind exception: Exception
(BLE001)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@web_interface/blueprints/api_v3.py` around lines 1301 - 1303, The pre-restore
snapshot call to _save_config_atomic currently discards its (success, error)
return value; change the code around api_v3.config_manager.load_config() so you
capture the return like (success, err) =
_save_config_atomic(api_v3.config_manager, current, create_backup=True), then if
success is False log the error with context via processLogger (or the module
logger) including err and current/config id, and decide a graceful path: either
abort the restore by raising or return a failure response, or continue while
recording that snapshot creation failed; ensure you reference
_save_config_atomic and api_v3.config_manager.load_config in your change so the
behavior is explicit and properly logged.
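The tuple contract is easy to exercise in isolation. This sketch substitutes a hypothetical stub for the real config manager; only the `(success, error)` return shape is taken from the review:

```python
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("backup")

class StubConfigManager:
    """Hypothetical stand-in for the project's atomic config manager."""
    def __init__(self, fail=False):
        self.fail = fail

    def save(self, config, create_backup=True):
        if self.fail:
            raise OSError("disk full")

def save_config_atomic(manager, config, create_backup=True):
    # Mirrors the (success, error) return contract described above.
    try:
        manager.save(config, create_backup=create_backup)
        return True, None
    except OSError as e:
        return False, str(e)

# Capture the tuple instead of discarding it:
snap_ok, snap_err = save_config_atomic(StubConfigManager(fail=True), {})
if not snap_ok:
    logger.warning("[Backup] Pre-restore snapshot failed: %s (continuing)", snap_err)
```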
except Exception as e:
    return f"Error: {str(e)}", 500
Avoid broad catch + raw exception text in HTTP response.
At Line 305 and Line 306, this can leak internal details to clients and makes failures harder to triage cleanly. Log server-side with context and return a user-safe message.
Suggested fix
+from jinja2 import TemplateNotFound
@@
def _load_backup_restore_partial():
"""Load backup & restore partial."""
try:
return render_template('v3/partials/backup_restore.html')
- except Exception as e:
- return f"Error: {str(e)}", 500
+ except TemplateNotFound:
+ logger.exception("[Pages V3] Missing template: v3/partials/backup_restore.html")
+ return "Backup & Restore UI is temporarily unavailable. Please refresh or check server logs.", 500
+ except OSError:
+ logger.exception("[Pages V3] Failed to load Backup & Restore partial")
+ return "Unable to load Backup & Restore right now. Please try again.", 500

As per coding guidelines, "Catch specific exceptions, not bare except: statements", "Provide user-friendly error messages that explain what went wrong and potential solutions", and "Use structured logging with context (e.g., [NHL Recent]) for logging messages".
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
from jinja2 import TemplateNotFound

def _load_backup_restore_partial():
    """Load backup & restore partial."""
    try:
        return render_template('v3/partials/backup_restore.html')
    except TemplateNotFound:
        logger.exception("[Pages V3] Missing template: v3/partials/backup_restore.html")
        return "Backup & Restore UI is temporarily unavailable. Please refresh or check server logs.", 500
    except OSError:
        logger.exception("[Pages V3] Failed to load Backup & Restore partial")
        return "Unable to load Backup & Restore right now. Please try again.", 500
🧰 Tools
🪛 Ruff (0.15.9)
[warning] 305-305: Do not catch blind exception: Exception
(BLE001)
[warning] 306-306: Use explicit conversion flag
Replace with conversion flag
(RUF010)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@web_interface/blueprints/pages_v3.py` around lines 305 - 306, The broad
"except Exception as e" block should not return raw exception text to clients or
swallow specific errors; change it to catch and handle specific expected
exceptions (e.g., ValueError, KeyError, requests.exceptions.RequestException)
and any remaining unexpected errors via a generic handler; for each specific
case return a clear user-facing message (e.g., "Unable to fetch recent NHL data,
please try again later") with the appropriate HTTP status, and log the original
exception server-side with structured context/tag "[NHL Recent]" using
logger.exception or logger.error(..., exc_info=True) so internal details are
recorded but not leaked to the client. Ensure the unique "except Exception as e"
location is replaced and that all log messages include the "[NHL Recent]"
context.
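Outside Flask, the pattern reduces to: catch the narrow exceptions you expect, log the detail server-side with a context tag, and hand the client a generic message. A hypothetical loader illustrating the shape (names are illustrative, not the project's):

```python
import logging

logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger("pages_v3")

def load_partial(path):
    """Return (body, status): specific exceptions, tagged logging, safe message."""
    try:
        with open(path, encoding="utf-8") as f:
            return f.read(), 200
    except FileNotFoundError:
        # FileNotFoundError before OSError: it is the more specific subclass.
        logger.error("[Pages V3] Missing template: %s", path)
        return "Backup & Restore UI is temporarily unavailable.", 500
    except OSError:
        logger.error("[Pages V3] Failed to load partial: %s", path)
        return "Unable to load Backup & Restore right now.", 500

body, status = load_partial("no-such-template.html")
print(status)  # 500, and the body never echoes the exception text
```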
<input type="file" id="restore-file-input" accept=".zip"
    class="block w-full text-sm text-gray-700 file:mr-4 file:py-2 file:px-4 file:rounded-md file:border-0 file:text-sm file:font-medium file:bg-blue-50 file:text-blue-700 hover:file:bg-blue-100">
Bind restore to the file that was actually inspected.
After a successful inspect, changing the file input does not invalidate the preview. If the next inspect fails—or an older inspect response arrives late—the stale summary stays visible and runRestore() restores a different file than the one the user approved.
Possible fix
+ let validatedRestoreFileKey = null;
+
+ function currentRestoreFileKey() {
+ const file = document.getElementById('restore-file-input').files?.[0];
+ return file ? `${file.name}:${file.size}:${file.lastModified}` : null;
+ }
+
+ function resetRestorePanels() {
+ document.getElementById('restore-preview').classList.add('hidden');
+ document.getElementById('restore-result').classList.add('hidden');
+ }
+
+ document.getElementById('restore-file-input').addEventListener('change', () => {
+ validatedRestoreFileKey = null;
+ resetRestorePanels();
+ });
+
async function validateRestoreFile() {
const input = document.getElementById('restore-file-input');
if (!input.files || !input.files[0]) {
notify('Choose a backup file first', 'error');
return;
}
+ resetRestorePanels();
const fd = new FormData();
fd.append('backup_file', input.files[0]);
try {
const res = await fetch('/api/v3/backup/validate', { method: 'POST', body: fd });
const payload = await res.json();
if (payload.status !== 'success') throw new Error(payload.message || 'Validation failed');
+ validatedRestoreFileKey = currentRestoreFileKey();
renderRestorePreview(payload.data);
} catch (err) {
+ validatedRestoreFileKey = null;
+ resetRestorePanels();
notify('Invalid backup: ' + err.message, 'error');
}
}
...
+ if (validatedRestoreFileKey !== currentRestoreFileKey()) {
+ notify('Inspect this file before restoring it', 'error');
+ return;
+ }Also applies to: 242-257, 281-287
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@web_interface/templates/v3/partials/backup_restore.html` around lines 55 -
56, The preview and restore flow must be tied to the exact file the user
inspected: capture and store a stable identifier (e.g., the selected File object
or its name+lastModified) when handling the "inspect" action for the input
element with id "restore-file-input", clear any stored inspection state on input
change, and when applying inspect responses only update the preview if the
response's identifier matches the currently stored file id; finally, make
runRestore() use the stored File reference/identifier (the inspectedFile) rather
than re-reading input.files[0] so restores always apply to the file the user
approved.
const res = await fetch('/api/v3/backup/restore', { method: 'POST', body: fd });
const payload = await res.json();
const data = payload.data || {};
const result = document.getElementById('restore-result');
const ok = payload.status === 'success';
result.className = (ok ? 'bg-green-50 border-green-200 text-green-800' : 'bg-yellow-50 border-yellow-200 text-yellow-800') + ' border rounded-md p-4';
result.classList.remove('hidden');
result.innerHTML = `
    <h3 class="font-medium mb-2">${ok ? 'Restore complete' : 'Restore finished with warnings'}</h3>
    <div><strong>Restored:</strong> ${(data.restored || []).map(escapeHtml).join(', ') || 'none'}</div>
    <div><strong>Skipped:</strong> ${(data.skipped || []).map(escapeHtml).join(', ') || 'none'}</div>
    <div><strong>Plugins installed:</strong> ${(data.plugins_installed || []).map(escapeHtml).join(', ') || 'none'}</div>
    <div><strong>Plugins failed:</strong> ${(data.plugins_failed || []).map(p => escapeHtml(p.plugin_id + ' (' + p.error + ')')).join(', ') || 'none'}</div>
    <div><strong>Errors:</strong> ${(data.errors || []).map(escapeHtml).join('; ') || 'none'}</div>
    <p class="mt-2">Restart the display service to apply all changes.</p>
`;
notify(ok ? 'Restore complete' : 'Restore finished with warnings', ok ? 'success' : 'info');
Fail fast on restore API errors.
This is the only backup action here that does not throw on payload.status !== 'success'. A 4xx/5xx restore error will be rendered as “finished with warnings”, drop payload.message, and still tell the user to restart.
Possible fix
try {
const res = await fetch('/api/v3/backup/restore', { method: 'POST', body: fd });
const payload = await res.json();
+ if (!res.ok || payload.status !== 'success') {
+ throw new Error(payload.message || 'Restore failed');
+ }
const data = payload.data || {};
const result = document.getElementById('restore-result');
- const ok = payload.status === 'success';
+ const ok = !(data.errors || []).length && !(data.plugins_failed || []).length;
result.className = (ok ? 'bg-green-50 border-green-200 text-green-800' : 'bg-yellow-50 border-yellow-200 text-yellow-800') + ' border rounded-md p-4';
result.classList.remove('hidden');

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const res = await fetch('/api/v3/backup/restore', { method: 'POST', body: fd });
const payload = await res.json();
if (!res.ok || payload.status !== 'success') {
    throw new Error(payload.message || 'Restore failed');
}
const data = payload.data || {};
const result = document.getElementById('restore-result');
const ok = !(data.errors || []).length && !(data.plugins_failed || []).length;
result.className = (ok ? 'bg-green-50 border-green-200 text-green-800' : 'bg-yellow-50 border-yellow-200 text-yellow-800') + ' border rounded-md p-4';
result.classList.remove('hidden');
result.innerHTML = `
    <h3 class="font-medium mb-2">${ok ? 'Restore complete' : 'Restore finished with warnings'}</h3>
    <div><strong>Restored:</strong> ${(data.restored || []).map(escapeHtml).join(', ') || 'none'}</div>
    <div><strong>Skipped:</strong> ${(data.skipped || []).map(escapeHtml).join(', ') || 'none'}</div>
    <div><strong>Plugins installed:</strong> ${(data.plugins_installed || []).map(escapeHtml).join(', ') || 'none'}</div>
    <div><strong>Plugins failed:</strong> ${(data.plugins_failed || []).map(p => escapeHtml(p.plugin_id + ' (' + p.error + ')')).join(', ') || 'none'}</div>
    <div><strong>Errors:</strong> ${(data.errors || []).map(escapeHtml).join('; ') || 'none'}</div>
    <p class="mt-2">Restart the display service to apply all changes.</p>
`;
notify(ok ? 'Restore complete' : 'Restore finished with warnings', ok ? 'success' : 'info');
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@web_interface/templates/v3/partials/backup_restore.html` around lines 305 -
321, The restore handler currently treats any non-success payload as a warning
and renders details instead of failing; update the async block around the fetch
to check payload.status after parsing (the response assigned to `payload`) and
if it's not 'success' throw an Error that includes payload.message (or a
fallback message) so the caller shows an error state instead of "finished with
warnings"; ensure this replaces the current branch that sets `ok` and rendering
for failures, so `result`/`restore-result` is only populated for true success
and `notify` receives the error via the thrown exception path.
Summary

- src/backup_manager.py — Flask-free library for create / validate / restore with hardened ZIP traversal + size checks. The BUNDLED_FONTS set keeps repo-shipped fonts out of exports.
- /api/v3/backup/ endpoints: preview, export, list, download/<file>, validate, restore, DELETE /<file>. Filename allow-list (ledmatrix-backup-*.zip) prevents path-traversal on download/delete.
- Pre-restore snapshots land in config/backups/ and can be rolled back via the existing rollback_config() path. Plugin reinstall reuses plugin_store_manager.install_plugin.

Why
Painless reinstall path: wipe the Pi's SD, install latest LEDMatrix + latest plugins, drop in one ZIP, and get back all customizations, API keys, favorites, uploaded assets, and the list of plugins to reinstall.
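The "hardened ZIP traversal + size checks" the Summary mentions typically combine two guards: cap the total uncompressed size before extracting, and resolve every member path to confirm it stays inside the destination. A rough sketch of the idea (the limit and function name are assumptions, not src/backup_manager.py's actual values):

```python
import os
import zipfile

MAX_TOTAL_UNCOMPRESSED = 200 * 1024 * 1024  # assumed cap, not the project's real limit

def safe_extract(zip_path, dest_dir):
    """Extract a ZIP, rejecting zip-bombs and path-traversal members."""
    dest_dir = os.path.realpath(dest_dir)
    with zipfile.ZipFile(zip_path) as zf:
        # Guard 1: bound the total expanded size before writing anything.
        total = sum(info.file_size for info in zf.infolist())
        if total > MAX_TOTAL_UNCOMPRESSED:
            raise ValueError(f"backup expands to {total} bytes, over the limit")
        # Guard 2: every member must resolve inside the destination.
        for info in zf.infolist():
            target = os.path.realpath(os.path.join(dest_dir, info.filename))
            if target != dest_dir and not target.startswith(dest_dir + os.sep):
                raise ValueError(f"unsafe path in archive: {info.filename}")
        zf.extractall(dest_dir)
```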
Backup bundle
UI prominently warns that the file contains API keys + WiFi passwords in plaintext.
Test plan
- pytest test/test_backup_manager.py — 13/13 passing locally
- Deployed to devpi@devpi (Raspberry Pi, running on real hardware) and verified:
  - On-device tests pass (python3 -m pytest test/test_backup_manager.py)
  - POST /api/v3/backup/export creates a 4.2 MB ZIP with 9 members including the Pi's real assets/fonts/test_font.ttf and 2 plugin PNG uploads
  - GET /api/v3/backup/list and download/<file> stream the exact file
  - POST /api/v3/backup/validate returns parsed manifest + detected contents
  - Modified config/config.json, ran config-only restore, and confirmed SHA256 returned bit-exactly to the original and the sentinel was wiped
  - A pre-restore snapshot (config.json.backup.<ts>) was written to config/backups/ so rollback remains available
  - GET /v3/partials/backup-restore returns the UI partial (17 KB)
  - DELETE /api/v3/backup/<file> removes the export

🤖 Generated with Claude Code
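The bit-exact check in the test plan amounts to comparing streamed SHA256 digests before and after the restore round-trip; a minimal helper:

```python
import hashlib

def sha256_of(path, chunk_size=65536):
    """Stream a file through SHA256 so large backups never load fully into RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Comparing `sha256_of(original)` against `sha256_of(restored)` after a config-only restore reproduces the "returned bit-exactly" assertion above.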
Summary by CodeRabbit
New Features
Tests