Conversation
…er, /api/system/modules endpoint, WS events, and integration tests Co-authored-by: Steake <530040+Steake@users.noreply.github.com>
🧪 CI — Python 3.10 | 🧪 CI — Python 3.11
@copilot The PR looks good — 28 tests pass, graceful degradation is correct, WS events wired. A few things to address before merge:
Once those two items are resolved, this is ready to merge.
…I deps Co-authored-by: Steake <530040+Steake@users.noreply.github.com>
Both items addressed in
Pull request overview
This PR wires eight previously disconnected GödelOS cognitive subsystems into the runtime tick cycle, exposes their runtime status via a new REST endpoint, and adds backend tests/CI tweaks to support the new behavior.
Changes:
- Added `DormantModuleManager` to initialize, tick, and broadcast state for 8 dormant cognitive subsystems.
- Started a background ticker in `unified_server.py` and added `GET /api/system/dormant-modules` for observability with fallbacks.
- Added backend tests for manager behavior + endpoint contract; updated CI deps to include `nltk`.
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 9 comments.
| File | Description |
|---|---|
| `backend/core/dormant_module_manager.py` | New manager that discovers pipeline instances, ticks active modules, tracks per-module state, and broadcasts updates. |
| `backend/unified_server.py` | Imports/initializes the manager, starts/stops a periodic ticker in lifespan, and adds `/api/system/dormant-modules`. |
| `tests/backend/test_dormant_modules.py` | New tests covering initialization, tick semantics, WS broadcast attempts, and endpoint schema/fallbacks. |
| `.github/workflows/ci.yml` | Adds `nltk` to CI install step to address collection failures. |
```python
    """Tick all active dormant modules every 2 seconds (same cadence as consciousness loop)."""
    while True:
        try:
            await dormant_module_manager.tick()
```
In the `_dormant_modules_ticker` loop, the bare `except Exception` makes cancellation behaviour fragile: on Python < 3.8, `asyncio.CancelledError` subclassed `Exception` and would be swallowed, and even on 3.10/3.11, where it subclasses `BaseException` and escapes `except Exception`, an explicit handler makes the shutdown path unambiguous (shutdown would hang while awaiting `_dormant_ticker_task` if cancellation were ever swallowed inside the loop). Handle `asyncio.CancelledError` explicitly (re-raise/break) before the generic exception handler.
Suggested change:

```diff
             await dormant_module_manager.tick()
+        except asyncio.CancelledError:
+            logger.debug("Dormant module ticker task cancelled")
+            raise
```
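The pattern above can be seen end to end in a self-contained sketch; `StubManager`, `main`, and the shortened interval are illustrative stand-ins, not the PR's actual code:

```python
import asyncio

class StubManager:
    """Hypothetical stand-in for DormantModuleManager; just counts ticks."""

    def __init__(self) -> None:
        self.ticks = 0

    async def tick(self) -> None:
        self.ticks += 1

async def dormant_modules_ticker(manager: StubManager, interval: float = 0.01) -> None:
    while True:
        try:
            await manager.tick()
        except asyncio.CancelledError:
            raise  # let task.cancel() actually terminate the loop
        except Exception:  # noqa: BLE001
            pass  # one bad tick must not kill the ticker
        await asyncio.sleep(interval)

async def main() -> int:
    mgr = StubManager()
    task = asyncio.create_task(dormant_modules_ticker(mgr))
    await asyncio.sleep(0.05)
    task.cancel()
    try:
        await task  # returns promptly because CancelledError propagates
    except asyncio.CancelledError:
        pass
    return mgr.ticks

ticks = asyncio.run(main())
```

Because `CancelledError` is re-raised before the generic handler, `task.cancel()` terminates the loop on the next checkpoint instead of being absorbed as an ordinary tick failure.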
```python
    logger.warning(
        "DormantModuleManager: CognitivePipeline not found on godelos_integration; "
        "all dormant modules will be inactive."
    )
```
DormantModuleManager.initialize() sets _initialized=True and returns when cognitive_pipeline is missing, but it leaves each module record’s error as None. That makes /api/system/dormant-modules look like a clean inactive state rather than “pipeline unavailable”. Consider populating a consistent error string on all records in this branch.
Suggested change:

```diff
             )
+        # Surface a consistent error on all module records so API consumers
+        # can distinguish "pipeline unavailable" from a clean inactive state.
+        for record in self._records.values():
+            record.error = (
+                "CognitivePipeline not available on godelos_integration; "
+                "dormant module cannot be activated."
+            )
```
```python
        for name in DORMANT_MODULE_NAMES:
            try:
                instance = pipeline.get_instance(name)
                if instance is not None:
                    self._instances[name] = instance
                    self._records[name].active = True
                    logger.info(" ✔ dormant module activated: %s", name)
                else:
                    status_info = pipeline.get_subsystem_status().get(name, {})
                    err = status_info.get("error", "instance is None")
                    self._records[name].error = err
                    logger.warning(" ✘ dormant module unavailable: %s — %s", name, err)
```
In initialize(), pipeline.get_subsystem_status() is called inside the loop for each missing instance. If status computation is non-trivial, this becomes unnecessary repeated work. Cache the status dict once before the loop (or compute it lazily only when needed).
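A minimal sketch of hoisting the status call out of the loop; `StubPipeline`, `collect_errors`, and the two-module subset are hypothetical stand-ins, not the PR's actual code:

```python
from typing import Any, Dict, Optional

# Illustrative subset of the module list.
DORMANT_MODULE_NAMES = ["ilp_engine", "clp_module"]

class StubPipeline:
    """Models only the two pipeline calls used in initialize()."""

    def get_instance(self, name: str) -> Optional[object]:
        # Pretend only ilp_engine has a live instance.
        return object() if name == "ilp_engine" else None

    def get_subsystem_status(self) -> Dict[str, Dict[str, Any]]:
        return {"clp_module": {"error": "import failed"}}

def collect_errors(pipeline: StubPipeline) -> Dict[str, str]:
    # Hoist the (possibly expensive) status call out of the loop:
    # it runs once instead of once per missing instance.
    status = pipeline.get_subsystem_status()
    errors: Dict[str, str] = {}
    for name in DORMANT_MODULE_NAMES:
        if pipeline.get_instance(name) is None:
            errors[name] = status.get(name, {}).get("error", "instance is None")
    return errors
```

A lazy variant (compute `status` on first miss) trades a branch for skipping the call entirely when every instance resolves.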
```python
    def _tick_symbol_grounding_associator(self, instance: Any) -> Dict[str, Any]:
        try:
            instance.learn_groundings_from_buffer()
        except Exception:  # noqa: BLE001
            pass
        links = getattr(instance, "grounding_links", {})
        return {
            "grounding_link_count": sum(len(v) for v in links.values()),
            "experience_buffer_size": len(getattr(instance, "experience_buffer", [])),
        }
```
The per-module tick handlers swallow exceptions (e.g. learn_groundings_from_buffer()), so failures won’t propagate to the outer tick() try/except and ModuleRecord.error won’t reflect runtime failures. Since tick() already isolates each module, consider letting handler exceptions bubble up (or capturing them and returning an error field) so the error attribute is accurate.
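One way to capture handler failures instead of swallowing them, sketched with a hypothetical `tick_handler` and a deliberately broken module; the caller could copy the returned `error` into the module record:

```python
from typing import Any, Dict

def tick_handler(instance: Any) -> Dict[str, Any]:
    """Hypothetical variant of a per-module tick handler that reports
    failures in its return value instead of silently discarding them."""
    result: Dict[str, Any] = {}
    try:
        instance.learn_groundings_from_buffer()
    except Exception as exc:  # noqa: BLE001
        result["error"] = f"{type(exc).__name__}: {exc}"
    links = getattr(instance, "grounding_links", {})
    result["grounding_link_count"] = sum(len(v) for v in links.values())
    return result

class BrokenModule:
    """Deliberately failing module used to exercise the error path."""

    def learn_groundings_from_buffer(self) -> None:
        raise RuntimeError("buffer corrupt")
```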
```python
    # ------------------------------------------------------------------

    def get_module_status(self) -> List[Dict[str, Any]]:
        """Return a list of per-module status dicts for the /api/system/modules endpoint."""
```
get_module_status() docstring references the /api/system/modules endpoint, but the server endpoint added in this PR is /api/system/dormant-modules. Updating the docstring will avoid confusion when this manager is reused elsewhere.
Suggested change:

```diff
-        """Return a list of per-module status dicts for the /api/system/modules endpoint."""
+        """Return a list of per-module status dicts for the /api/system/dormant-modules endpoint."""
```
```python
    def _get_test_client(self, mock_mgr):
        """Return a FastAPI TestClient with dormant_module_manager patched."""
        from fastapi.testclient import TestClient
        import backend.unified_server as us
        us.dormant_module_manager = mock_mgr
        return TestClient(us.app)
```
_get_test_client() is defined but never used in this test class. Consider removing it (or using it consistently) to keep the test module focused and avoid confusion about the intended patching approach.
Suggested change:

```diff
-    def _get_test_client(self, mock_mgr):
-        """Return a FastAPI TestClient with dormant_module_manager patched."""
-        from fastapi.testclient import TestClient
-        import backend.unified_server as us
-        us.dormant_module_manager = mock_mgr
-        return TestClient(us.app)
```
```python
                continue
            instance = self._instances.get(name)
            if instance is None:
                record.active = False
```
When an already-active module’s instance is missing at tick time (instance is None), the code marks it inactive but does not set an error value. This can make the status look healthy-but-inactive. Consider setting a descriptive error (e.g., "instance missing") when deactivating due to a missing instance.
Suggested change:

```diff
                 record.active = False
+                record.error = "instance missing"
```
```python
import json
import time
from datetime import datetime, timezone
from typing import Any, Dict, List, Optional
from unittest.mock import AsyncMock, MagicMock, patch
```
There are a few unused imports in this test module (json, time, and patch). Removing them will reduce noise and prevent failures if/when linting is added.
Suggested change:

```diff
-import json
-import time
 from datetime import datetime, timezone
 from typing import Any, Dict, List, Optional
-from unittest.mock import AsyncMock, MagicMock, patch
+from unittest.mock import AsyncMock, MagicMock
```
```python
        await mgr.tick()
        for r in mgr._records.values():
            if r.active:
                assert r.last_tick is not None
                # Should be a recent timestamp
                diff = (datetime.now(tz=timezone.utc) - r.last_tick).total_seconds()
                assert diff < 5.0
```
This assertion depends on wall-clock time (diff < 5.0) and can be flaky under slow CI or debugger pauses. Consider freezing time (e.g., via a time-mocking fixture) or asserting a looser bound / monotonic behavior to reduce test flakiness.
Suggested change:

```diff
-        await mgr.tick()
-        for r in mgr._records.values():
-            if r.active:
-                assert r.last_tick is not None
-                # Should be a recent timestamp
-                diff = (datetime.now(tz=timezone.utc) - r.last_tick).total_seconds()
-                assert diff < 5.0
+        before_tick = datetime.now(tz=timezone.utc)
+        await mgr.tick()
+        after_tick = datetime.now(tz=timezone.utc)
+        for r in mgr._records.values():
+            if r.active:
+                assert r.last_tick is not None
+                # last_tick should be set during this tick, between before_tick and after_tick
+                assert before_tick <= r.last_tick <= after_tick
```
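The before/after bounding pattern can be exercised in isolation; `ToyRecord` and `toy_tick` are toy stand-ins for the manager's record and tick, shown only to illustrate why the bound holds regardless of how slowly the surrounding code runs:

```python
import asyncio
from datetime import datetime, timezone

class ToyRecord:
    """Minimal stand-in for a module record with a last_tick field."""

    def __init__(self) -> None:
        self.last_tick = None

async def toy_tick(record: ToyRecord) -> None:
    # Stand-in for mgr.tick(): stamps the record with the current UTC time.
    record.last_tick = datetime.now(tz=timezone.utc)

record = ToyRecord()
before_tick = datetime.now(tz=timezone.utc)
asyncio.run(toy_tick(record))
after_tick = datetime.now(tz=timezone.utc)
```

Since the timestamp is taken strictly between the two boundary reads, the assertion cannot fail under CI slowdown or debugger pauses, unlike a fixed wall-clock delta.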
Description
8 fully-implemented cognitive subsystems in `godelOS/` were disconnected from the runtime — instantiated nowhere, called by nothing. This wires them into the active cognitive cycle and makes their status observable.

`backend/core/dormant_module_manager.py` (new)

- `DormantModuleManager` pulls live instances for all 8 modules from `CognitivePipeline` via `godelos_integration`
- async `tick()` runs each active module every cycle, tracking `active`, `last_tick`, `tick_count`, and `last_output` per module
- broadcasts a `module_state_update` WebSocket event on every tick
- each module's tick is wrapped in `try/except` — one broken module never aborts the cycle

Modules managed:
- `symbol_grounding_associator` — `learn_groundings_from_buffer()`
- `perceptual_categorizer` — `process_perceptual_input()`
- `simulated_environment` — `tick(0.1)` — physics step
- `ilp_engine`
- `modal_tableau_prover`
- `clp_module`
- `explanation_based_learner`
- `meta_control_rl` — `get_state_features()`

`backend/unified_server.py`

- `DormantModuleManager` imported and initialized in `initialize_optional_services()` — all wrapped in `try/except`, graceful degradation if unavailable
- `_dormant_modules_ticker` asyncio background task started in `lifespan()` at the same 2 s cadence as the consciousness loop, cancelled cleanly on shutdown
- `GET /api/system/dormant-modules` endpoint (path chosen to avoid collision with PRs "feat(self-modification): backend engine, REST API, and frontend panel" #127 / "feat: Implement SelfModificationEngine with REST API (issue #95)" #128, which use `/api/system/modules` for `SelfModificationEngine`):

```json
{
  "modules": [
    {
      "module_name": "ilp_engine",
      "active": true,
      "last_tick": "2024-01-01T12:00:00+00:00",
      "tick_count": 42,
      "last_output": {"max_clause_length": 5, "ready": true},
      "error": null
    }
  ]
}
```

- Fallback to `CognitivePipeline` subsystem status when the manager is unavailable; returns all-inactive when neither is available.

`tests/backend/test_dormant_modules.py` (new)

- 28 tests covering init (all active, partial, no pipeline), tick semantics (timestamp updates, counter increments, WS broadcast, error resilience), REST endpoint schema, fallback behaviour, and per-module tick handler return values.
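The endpoint's fallback chain described above (manager first, then pipeline subsystem status, else all-inactive) can be sketched as a plain function; the function name, parameters, and payload shape are assumptions drawn from this description, not the actual endpoint code:

```python
from typing import Any, Dict, List, Optional

def dormant_modules_payload(
    manager: Optional[Any],
    pipeline: Optional[Any],
    names: List[str],
) -> Dict[str, Any]:
    """Sketch of the fallback chain: prefer the manager, fall back to
    pipeline subsystem status, else report every module as inactive."""
    if manager is not None:
        return {"modules": manager.get_module_status()}
    if pipeline is not None:
        status = pipeline.get_subsystem_status()
        return {
            "modules": [
                {
                    "module_name": name,
                    "active": False,
                    "error": status.get(name, {}).get("error"),
                }
                for name in names
            ]
        }
    return {
        "modules": [
            {"module_name": name, "active": False, "error": None}
            for name in names
        ]
    }
```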
`.github/workflows/ci.yml`

- Adds `nltk` to the CI pip install step to fix pre-existing `ModuleNotFoundError: No module named 'nltk'` collection errors in `tests/test_cognitive_subsystem_activation.py`.

Related Issues
Test Evidence
All pre-existing backend tests continue to pass (2 pre-existing failures unrelated to this change: `test_get_knowledge_graph` — missing `semver` dep; `test_service_injection_allows_mocks` — `backend.main` stub issue).

Checklist
- Tests pass (`pytest tests/`)
- Code formatted (`black .` and `isort .`)

Original prompt
This pull request was created from Copilot chat.