Conversation
…mpetition, coalition register, and WebSocket emission
- Replace stub broadcast() with full GWT implementation:
- Coalition register (subsystem_id → activation_strength)
- Softmax attention competition for broadcast rights
- φ-based coalition dynamics (higher φ → broader coalitions)
- Wire global_broadcast event emission into consciousness loop after φ computation
- Wire global_broadcast event emission into process_with_unified_awareness()
- Emit {"type": "global_broadcast", "coalition": [...], "content": {...}} on WebSocket
- Add 21 tests covering coalition register, softmax, event structure, WebSocket emission
Co-authored-by: Steake <530040+Steake@users.noreply.github.com>
…ition Co-authored-by: Steake <530040+Steake@users.noreply.github.com>
🧪 CI — Python 3.11
🧪 CI — Python 3.10
Pull request overview
This PR implements GlobalWorkspace.broadcast() with full Global Workspace Theory (GWT) semantics, replacing a previous stub. It adds a coalition register, softmax attention competition, and WebSocket emission of global_broadcast events, along with 21 new tests.
Changes:
- `GlobalWorkspace` refactored to maintain a coalition register, implement softmax attention competition with temperature-controlled sharpness, and emit structured `global_broadcast` events wired into both the consciousness loop and `process_with_unified_awareness`.
- WebSocket emission of `global_broadcast` events added to `_unified_consciousness_loop` and `process_with_unified_awareness` after φ computation.
- 21 new tests covering coalition dynamics, softmax attention, broadcast event structure, coalition breadth, WebSocket emission, and edge cases.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| `backend/core/unified_consciousness_engine.py` | Full rewrite of `GlobalWorkspace` with coalition register, softmax competition, and WebSocket emission wiring in the consciousness loop and awareness processing |
| `tests/backend/test_global_workspace.py` | New test file with 21 tests covering coalition register, softmax, event structure, breadth, WebSocket emission, and edge cases |
```python
result = gw.broadcast({"phi_measure": 2.0, "cognitive_state": state})
weights = result["broadcast_content"]["content"]["attention_weights"]
total = sum(weights.values())
assert abs(total - 1.0) < 1e-4
```
test_weights_sum_to_one uses the rounded (4 decimal places) attention weights from broadcast_content["content"]["attention_weights"] and asserts their sum is within 1e-4 of 1.0. With 7 subsystems, rounding each weight to 4 decimal places can introduce up to 7 × 5×10⁻⁵ = 3.5×10⁻⁴ of total error, which exceeds the 1e-4 tolerance. This test can fail for certain weight distributions. The tolerance should be at least 5×10⁻⁴ (or use the un-rounded weights directly from the _softmax_attention_competition internal result) to be reliable.
```diff
-assert abs(total - 1.0) < 1e-4
+# Allow for rounding of attention weights to 4 decimal places across subsystems
+assert abs(total - 1.0) < 5e-4
```
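The rounding loss is easy to reproduce: with seven equal weights, each 1/7 ≈ 0.142857 rounds up to 0.1429, so the rounded sum already misses 1.0 by about 3×10⁻⁴. A minimal sketch (subsystem names illustrative):

```python
# Seven uniform softmax weights, as a flat coalition register would produce
weights = {f"subsystem_{i}": 1.0 / 7 for i in range(7)}

# Store them rounded to 4 decimal places, mirroring broadcast_content
rounded = {k: round(v, 4) for k, v in weights.items()}  # each becomes 0.1429
total = sum(rounded.values())                           # about 1.0003

assert abs(total - 1.0) >= 1e-4   # the original tolerance fails here
assert abs(total - 1.0) < 5e-4    # the suggested tolerance holds
```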
```python
state = _make_state()
result = gw.broadcast({"phi_measure": 2.0, "cognitive_state": state})
weights = result["broadcast_content"]["content"]["attention_weights"]
mean_w = 1.0 / len(GlobalWorkspace.SUBSYSTEM_IDS)
```
test_winners_above_mean checks `weights[sid] >= mean_w`, where `weights` are the rounded (4 dp) values from `broadcast_content["content"]["attention_weights"]` and `mean_w = 1.0 / len(GlobalWorkspace.SUBSYSTEM_IDS)` = 1/7 ≈ 0.142857. A subsystem whose un-rounded attention weight was just above 1/7 (e.g. 0.14286) qualifies as a winner during competition, and its rounded stored weight 0.1429 still satisfies the check. However, a stored weight that rounds down to 0.1428 < mean_w would cause a false assertion failure. The test should compare against `round(mean_w, 4)` to account for the rounding applied to the stored weights, or verify winners using un-rounded data.
```diff
-mean_w = 1.0 / len(GlobalWorkspace.SUBSYSTEM_IDS)
+mean_w = round(1.0 / len(GlobalWorkspace.SUBSYSTEM_IDS), 4)
```
```python
async def test_consciousness_loop_emits_global_broadcast(self):
    """After φ computation the consciousness loop should call ws broadcast."""
    ws_manager = MagicMock()
    ws_manager.has_connections = MagicMock(return_value=True)
    ws_manager.broadcast = AsyncMock()
    ws_manager.broadcast_consciousness_update = AsyncMock()

    from backend.core.unified_consciousness_engine import UnifiedConsciousnessEngine

    engine = UnifiedConsciousnessEngine(websocket_manager=ws_manager)
    # Run one tick of the loop manually
    engine.consciousness_loop_active = True

    # Patch the loop to run once then stop
    original_loop = engine._unified_consciousness_loop

    async def _one_tick():
        # We replicate the minimal sequence: capture state, compute phi, broadcast
        state = engine.consciousness_state
        phi = engine.information_integration_theory.calculate_phi(
            state,
            mean_contradiction=engine.self_model_validator.mean_contradiction_score,
        )
        bc = engine.global_workspace.broadcast({
            "cognitive_state": state,
            "phi_measure": phi,
            "timestamp": time.time(),
        })
        broadcast_event = bc.get("broadcast_content")
        if (
            broadcast_event
            and engine.websocket_manager
            and hasattr(engine.websocket_manager, "has_connections")
            and engine.websocket_manager.has_connections()
        ):
            await engine.websocket_manager.broadcast(broadcast_event)

    await _one_tick()
```
The original_loop variable assigned on line 230 (original_loop = engine._unified_consciousness_loop) is never used. The test does not actually exercise the real _unified_consciousness_loop method; instead, it re-implements the broadcast and WebSocket emission logic inline. As a result, this test does not detect regressions where the emission block is removed from the actual loop implementation. Consider either calling _unified_consciousness_loop with a timeout/cancellation or removing the unused variable to avoid confusion about what is actually being tested.
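One hedged way to make the test exercise the real loop is to run it as a task and cancel it after a short settle period. The sketch below uses a `FakeEngine` stand-in so it is self-contained; in the actual test, `engine` would be the real `UnifiedConsciousnessEngine` with a mocked `websocket_manager`, and the attribute names are assumptions taken from the snippet above:

```python
import asyncio
from unittest.mock import AsyncMock

async def run_loop_once(engine, settle=0.05):
    """Run the engine's real loop briefly, then stop and cancel it,
    so regressions in the actual emission block are caught."""
    task = asyncio.create_task(engine._unified_consciousness_loop())
    await asyncio.sleep(settle)               # let at least one tick run
    engine.consciousness_loop_active = False  # ask the loop to exit
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass

# Minimal stand-in for illustration only; the real test would construct
# UnifiedConsciousnessEngine(websocket_manager=ws_manager) instead.
class FakeEngine:
    def __init__(self):
        self.consciousness_loop_active = True
        self.websocket_manager = AsyncMock()

    async def _unified_consciousness_loop(self):
        while self.consciousness_loop_active:
            await self.websocket_manager.broadcast({"type": "global_broadcast"})
            await asyncio.sleep(0.01)

engine = FakeEngine()
asyncio.run(run_loop_once(engine))
engine.websocket_manager.broadcast.assert_awaited()
```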
```python
import asyncio
import math
import time
from unittest.mock import AsyncMock, MagicMock, patch
```
patch is imported from unittest.mock on line 13 but is never used anywhere in the test file. This unused import should be removed.
```diff
-from unittest.mock import AsyncMock, MagicMock, patch
+from unittest.mock import AsyncMock, MagicMock
```
```python
# 3a. Emit global_broadcast event on WebSocket
broadcast_event = broadcast_content.get("broadcast_content")
if (
    broadcast_event
    and self.websocket_manager
    and hasattr(self.websocket_manager, "has_connections")
    and self.websocket_manager.has_connections()
):
    try:
        await self.websocket_manager.broadcast(broadcast_event)
    except Exception as e:
        logger.warning("Could not emit global_broadcast: %s", e)
```
The WebSocket emission guard block (lines 891–902 and 1018–1029) is duplicated verbatim between _unified_consciousness_loop and process_with_unified_awareness. If the emission logic needs to change (e.g. adding a rate limiter or changing the condition), it must be updated in both places. This block should be extracted into a small private helper method, for example async def _emit_broadcast_event(self, broadcast_event: Dict[str, Any]) -> None, called from both sites.
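A minimal sketch of the suggested helper, with the guard logic lifted from the snippet above. The host class here is illustrative; in the PR the method would live on the engine, and `logger` is assumed to be the module logger:

```python
import asyncio
import logging
from typing import Any, Dict
from unittest.mock import AsyncMock, MagicMock

logger = logging.getLogger(__name__)

class GlobalBroadcastEmitter:
    """Illustrative host class; in the PR this method would sit on the engine."""

    def __init__(self, websocket_manager: Any = None):
        self.websocket_manager = websocket_manager

    async def _emit_broadcast_event(self, broadcast_event: Dict[str, Any]) -> None:
        """Single guarded emission path, shared by _unified_consciousness_loop
        and process_with_unified_awareness."""
        if not (
            broadcast_event
            and self.websocket_manager
            and hasattr(self.websocket_manager, "has_connections")
            and self.websocket_manager.has_connections()
        ):
            return
        try:
            await self.websocket_manager.broadcast(broadcast_event)
        except Exception as e:  # emission failures must not crash the loop
            logger.warning("Could not emit global_broadcast: %s", e)

# Usage with a mocked manager, as the tests would do
ws = MagicMock()
ws.has_connections = MagicMock(return_value=True)
ws.broadcast = AsyncMock()
emitter = GlobalBroadcastEmitter(websocket_manager=ws)
asyncio.run(emitter._emit_broadcast_event({"type": "global_broadcast"}))
```

Both call sites then shrink to `await self._emit_broadcast_event(broadcast_event)`, so a future rate limiter or condition change lands in one place.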
```python
elif isinstance(value, (int, float)):
    activity += min(abs(float(value)), 1.0)
elif isinstance(value, str) and value.strip():
    activity += 0.5
elif isinstance(value, bool):
    activity += 0.3
```
In _measure_subsystem_activity, the elif isinstance(value, bool) branch on line 642 is dead code and will never be reached. In Python, bool is a subclass of int, so isinstance(True, (int, float)) evaluates to True. This means True values are handled by the elif isinstance(value, (int, float)) branch on line 638 and scored as min(abs(float(True)), 1.0) = 1.0 rather than the intended 0.3. To fix this, the bool check must be placed before the int/float check (since isinstance short-circuits on the first match), or use a combined check that explicitly excludes booleans from the numeric branch: elif isinstance(value, (int, float)) and not isinstance(value, bool):.
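The subclass pitfall and the suggested ordering, as a small self-contained check (the function name here is illustrative, not the PR's actual helper):

```python
def score_value(value):
    """Activity scoring with the bool check ordered before int/float,
    mirroring the fix suggested in the review."""
    if isinstance(value, bool):  # must come first: bool is an int subclass
        return 0.3
    if isinstance(value, (int, float)):
        return min(abs(float(value)), 1.0)
    if isinstance(value, str) and value.strip():
        return 0.5
    return 0.0

assert isinstance(True, int)     # why the original bool branch was dead code
assert score_value(True) == 0.3  # no longer scored as a numeric 1.0
assert score_value(2.5) == 1.0
```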
…orkspace), #118 (EmergenceDetector), #119 (transparency), #120 (ChromaDB)

Resolve .gitignore conflict (keep both gitignore entries). unified_consciousness_engine.py auto-merged cleanly: IIT φ calculator coexists with GlobalWorkspace broadcaster. All 51 tests pass (30 IIT + 21 GlobalWorkspace).

Co-authored-by: Steake <530040+Steake@users.noreply.github.com>
…#116)

* Initial plan
* feat(iit): implement bipartition MI approximation for calculate_phi and add unit tests

  Replace the heuristic-based InformationIntegrationTheory.calculate_phi() with a tractable bipartition mutual-information approximation (Tononi 2004):
  - Convert subsystem dicts to numeric vectors via recursive flattening
  - Enumerate all non-trivial bipartitions at subsystem level (63 cuts)
  - φ = min MI across all cuts, with noise-floor suppression for idle states
  - Preserve contradiction penalty from self-model validator
  - Add 'phi' field to WebSocket broadcast payload for acceptance criteria
  - 27 unit tests: idle→φ=0, active→φ>0, penalty, helpers, performance <50ms

  Co-authored-by: Steake <530040+Steake@users.noreply.github.com>

* address code review: document magic numbers, add division guard, expand test coverage

  Co-authored-by: Steake <530040+Steake@users.noreply.github.com>

* merge: integrate main with PRs #114 (schema contracts), #117 (GlobalWorkspace), #118 (EmergenceDetector), #119 (transparency), #120 (ChromaDB)

  Resolve .gitignore conflict (keep both gitignore entries). unified_consciousness_engine.py auto-merged cleanly: IIT φ calculator coexists with GlobalWorkspace broadcaster. All 51 tests pass (30 IIT + 21 GlobalWorkspace).

  Co-authored-by: Steake <530040+Steake@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: Steake <530040+Steake@users.noreply.github.com>
Description

`GlobalWorkspace.broadcast()` was a stub that did basic threshold gating with no real coalition dynamics, no attention competition, and no WebSocket emission. This replaces it with a full GWT implementation.

Coalition register & dynamics

- `Dict[str, float]` mapping 7 cognitive subsystem IDs → activation strength

Softmax attention competition

WebSocket emission

- Emits `global_broadcast` events in both `_unified_consciousness_loop()` and `process_with_unified_awareness()`, after φ computation
- Event shape: `{"type": "global_broadcast", "coalition": [{"subsystem_id": "...", "activation": 0.32}], "content": {"phi_measure": 1.5, "coalition_strength": 0.41, "attention_weights": {...}, "conscious": true}}`

Backward compatibility

- Existing keys (`broadcast_content`, `coalition_strength`, `attention_focus`, `conscious_access`) match `UnifiedConsciousnessState.global_workspace`; all existing `.update()` call sites work unchanged.

Related Issues
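The temperature-controlled softmax competition described above can be sketched as follows (subsystem names and the exact function signature are illustrative, not the PR's actual API):

```python
import math
from typing import Dict

def softmax_attention(register: Dict[str, float], temperature: float = 1.0) -> Dict[str, float]:
    """Softmax over the coalition register (subsystem_id -> activation strength).
    Lower temperature -> sharper, more winner-take-most distribution."""
    scaled = {k: v / max(temperature, 1e-8) for k, v in register.items()}
    m = max(scaled.values())  # subtract the max for numerical stability
    exps = {k: math.exp(v - m) for k, v in scaled.items()}
    z = sum(exps.values())
    return {k: e / z for k, e in exps.items()}

register = {"memory": 0.9, "perception": 0.4, "planning": 0.1}
weights = softmax_attention(register, temperature=0.5)
assert abs(sum(weights.values()) - 1.0) < 1e-12   # weights form a distribution
assert max(weights, key=weights.get) == "memory"  # strongest activation wins
```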
Test Evidence

21 new tests in `tests/backend/test_global_workspace.py`.

CodeQL: 0 alerts.
Checklist

- Tests pass (`pytest tests/`)
- Formatted with `black .` and `isort .`

Original prompt