
merge: resolve conflicts — bring main into copilot/repair-broken-knowledge-endpoints (unblock PR #115)#124

Merged
Steake merged 10 commits into copilot/repair-broken-knowledge-endpoints from
copilot/resolve-merge-conflict-knowledge-endpoints
on Mar 6, 2026
Conversation


Copilot AI commented Mar 6, 2026

Description

PR #115 was blocked (mergeable_state: dirty) due to conflicts between copilot/repair-broken-knowledge-endpoints and main (which had advanced through PRs #114, #116, #119, #120). This PR resolves those conflicts via git merge --no-ff.

Conflicts resolved

backend/unified_server.py: POST /api/knowledge/import/batch

  • Kept feature branch's real implementation (routes through KnowledgeIngestionService, returns per-item results array with status/error per source)
  • Discarded main's stub that only generated fake IDs with no real processing
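The per-item contract described above can be sketched as follows; the handler and service names here are hypothetical stand-ins for illustration, not the repository's actual code:

```python
# Sketch of batch import with per-item status/error reporting. One bad
# source never aborts the whole batch; every item gets a result entry.
from typing import Any, Callable, Dict, List

def import_batch(sources: List[Dict[str, Any]],
                 ingest: Callable[[Dict[str, Any]], str]) -> Dict[str, Any]:
    """Process each source independently via an ingestion callable."""
    results = []
    for src in sources:
        try:
            item_id = ingest(src)  # delegate real processing to the service
            results.append({"source": src.get("name"),
                            "status": "imported", "id": item_id})
        except Exception as exc:
            results.append({"source": src.get("name"),
                            "status": "error", "error": str(exc)})
    return {"results": results}
```

This is the key difference from main's stub: each entry carries a real status or error rather than a fabricated ID.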

godelOS/core_kr/knowledge_store/__init__.py — ChromaDB/hot-reloader imports

  • Kept main's try/except guards so missing chromadb in slim CI environments doesn't break the import chain
  • Dropped feature branch's bare imports which would raise ImportError in CI

Related Issues

Test Evidence

CodeQL scan: 0 alerts.

The merge commit preserves both parent histories. When this PR is merged into copilot/repair-broken-knowledge-endpoints, PR #115 will contain all of main's commits and its mergeable_state will clear.

Checklist

  • Tests pass locally (pytest tests/)
  • Code is formatted (black . and isort .)
  • Documentation updated (if applicable)
  • No secrets or credentials committed
  • Related issue linked above
Original prompt

Task

PR #115 (copilot/repair-broken-knowledge-endpoints → main) is blocked with mergeable_state: "dirty" — there is a merge conflict that must be resolved before the PR can be merged.

Instructions

  1. Merge main into the feature branch using a merge commit — do NOT rebase.
    git checkout copilot/repair-broken-knowledge-endpoints
    git fetch origin
    git merge origin/main --no-ff
  2. Resolve any and all conflicts that arise. The feature branch contains fixes to the knowledge import system (see PR #115, "fix: repair all 6 knowledge import endpoints (0/6 → 6/6 working)"). When in doubt, prefer the feature branch's version of any changed code, but ensure the result is coherent and does not break either side.
  3. Push the resolved branch:
    git push origin copilot/repair-broken-knowledge-endpoints

Constraints

  • No rebase under any circumstances. Use git merge only.
  • Do not squash commits.
  • Do not force-push unless absolutely necessary to push the merge commit (prefer a regular push).
  • Do not modify the PR target branch (main) directly.
  • After the push, PR #115 ("fix: repair all 6 knowledge import endpoints (0/6 → 6/6 working)") should show mergeable_state: "clean" or "unstable" (not "dirty").

Context

This pull request was created from Copilot chat.



Copilot AI and others added 9 commits March 6, 2026 15:03
…tention competition (#117)

* Initial plan

* feat: implement GlobalWorkspace.broadcast() with softmax attention competition, coalition register, and WebSocket emission

- Replace stub broadcast() with full GWT implementation:
  - Coalition register (subsystem_id → activation_strength)
  - Softmax attention competition for broadcast rights
  - φ-based coalition dynamics (higher φ → broader coalitions)
- Wire global_broadcast event emission into consciousness loop after φ computation
- Wire global_broadcast event emission into process_with_unified_awareness()
- Emit {"type": "global_broadcast", "coalition": [...], "content": {...}} on WebSocket
- Add 21 tests covering coalition register, softmax, event structure, WebSocket emission
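The softmax competition over the coalition register described above can be sketched as follows (illustrative API, not the actual GlobalWorkspace code):

```python
# Sketch: softmax attention competition over a coalition register
# (subsystem_id -> activation_strength). Highest weight wins broadcast.
import math
from typing import Dict, List, Tuple

def compete(register: Dict[str, float],
            temperature: float = 1.0) -> List[Tuple[str, float]]:
    """Return (subsystem_id, weight) pairs sorted by softmax weight."""
    if not register:  # defensive guard for an empty register
        return []
    exps = {k: math.exp(v / temperature) for k, v in register.items()}
    total = sum(exps.values())
    weights = {k: e / total for k, e in exps.items()}
    return sorted(weights.items(), key=lambda kv: kv[1], reverse=True)
```

A lower temperature sharpens the competition toward winner-take-all; a higher one broadens the coalition, which is one plausible way to realize the φ-based coalition dynamics mentioned above.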

Co-authored-by: Steake <530040+Steake@users.noreply.github.com>

* fix: add defensive guards for empty winners/weights in softmax competition

Co-authored-by: Steake <530040+Steake@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: Steake <530040+Steake@users.noreply.github.com>
…definitions) (#114)

* Initial plan

* feat: create backend/schemas.py and fix API contract mismatches

- Create backend/schemas.py with canonical Pydantic v2 request models
- Update unified_server.py handlers to use typed schemas instead of dict
- Update transparency_endpoints.py ProvenanceQuery/ProvenanceSnapshot models
- Fix frontend createKnowledgeSnapshot to send required fields
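As a hypothetical illustration of the canonical-schema pattern (field and class names invented here; the real models live in backend/schemas.py):

```python
# Sketch: a typed Pydantic v2 request model replacing a bare dict payload,
# so missing or malformed fields fail fast at the API boundary.
from typing import List, Optional
from pydantic import BaseModel, Field

class KnowledgeImportRequest(BaseModel):
    """Hypothetical request model; field names are illustrative only."""
    source_url: str                                  # required
    tags: List[str] = Field(default_factory=list)    # optional, defaults empty
    note: Optional[str] = None
```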

Co-authored-by: Steake <530040+Steake@users.noreply.github.com>

* test: add API schema contract integration tests (18 tests, 0 failures)

Remove stray test output JSON files and add gitignore pattern.

Co-authored-by: Steake <530040+Steake@users.noreply.github.com>

* fix: address code review - fix context type, add dict context test

Co-authored-by: Steake <530040+Steake@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: Steake <530040+Steake@users.noreply.github.com>
…akthrough alerting (#118)

* Initial plan

* feat: implement ConsciousnessEmergenceDetector with rolling-window scorer and breakthrough alerting

- Add backend/core/consciousness_emergence_detector.py with 5-dimension weighted scoring
- Add /api/consciousness/emergence GET endpoint to consciousness_endpoints.py
- Wire detector into unified_server.py lifespan startup
- Add 21 tests covering all functionality
- EMERGENCE_THRESHOLD=0.8 configurable via GODELOS_EMERGENCE_THRESHOLD env var
- Breakthroughs logged to logs/breakthroughs.jsonl and broadcast via WebSocket
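A minimal sketch of such a rolling-window weighted scorer follows; the dimension names and weights are invented for illustration, and only the env var name comes from the commit message:

```python
# Sketch: 5-dimension weighted score smoothed over a rolling window, with
# a breakthrough threshold configurable via GODELOS_EMERGENCE_THRESHOLD.
import os
from collections import deque
from typing import Deque, Dict

WEIGHTS = {"phi": 0.3, "coherence": 0.2, "novelty": 0.2,
           "integration": 0.2, "stability": 0.1}  # hypothetical weights
THRESHOLD = float(os.environ.get("GODELOS_EMERGENCE_THRESHOLD", "0.8"))

class EmergenceScorer:
    def __init__(self, window: int = 10):
        self.history: Deque[float] = deque(maxlen=window)

    def update(self, dims: Dict[str, float]) -> float:
        """Weighted score for this tick, averaged over the window."""
        score = sum(WEIGHTS[d] * dims.get(d, 0.0) for d in WEIGHTS)
        self.history.append(score)
        return sum(self.history) / len(self.history)

    def is_breakthrough(self, dims: Dict[str, float]) -> bool:
        return self.update(dims) >= THRESHOLD
```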

Co-authored-by: Steake <530040+Steake@users.noreply.github.com>

* fix: address code review comments - type annotations, log dir env var, env var test

Co-authored-by: Steake <530040+Steake@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: Steake <530040+Steake@users.noreply.github.com>
…kend capabilities, remove broken views, fix hardcoded URLs (#119)

* Initial plan

* feat: add TransparencyPanel component with reasoning trace, decision log, cognitive map, and transparency mode toggle

- Create TransparencyPanel.svelte with three tabs:
  - Reasoning Trace: live WebSocket stream + HTTP fallback, expanding tree
  - Decision Log: paginated table from /api/transparency/decision-history
  - Cognitive Map: d3-force graph from /api/transparency/knowledge-graph/export
- Create transparency.js store for shared transparencyMode state
- Wire TransparencyPanel into App.svelte as direct component (not modal)
- Add Transparency Mode toggle to QueryInterface advanced options
- Modify ResponseDisplay to auto-expand reasoning inline when transparency mode on

Co-authored-by: Steake <530040+Steake@users.noreply.github.com>

* refactor: address code review — extract constants, improve ID generation, cache d3 import

- Extract WS_RECONNECT_DELAY_MS, MAX_REASONING_STEPS, STEP_GROUPING_WINDOW_SECONDS constants
- Use monotonic counter for collision-free WebSocket message IDs
- Add fallback for missing trace_id in HTTP trace loading
- Cache d3 dynamic import to avoid repeated imports on tab switches
- Log warning when invalid knowledge graph links are filtered

Co-authored-by: Steake <530040+Steake@users.noreply.github.com>

* fix: prevent infinite reload loop on empty cognitive map/decision data

Add decisionsLoaded and mapLoaded guard flags to prevent reactive
statements from re-triggering loadCognitiveMap/loadDecisions when
the backend returns empty results. Previously, the condition
mapNodes.length === 0 && !mapLoading would immediately re-trigger
the fetch after it completed with zero nodes.

Co-authored-by: Steake <530040+Steake@users.noreply.github.com>

* fix: clear stale data on cognitive map refresh

Reset mapNodes and mapEdges arrays when refresh button is clicked
to prevent stale data from lingering if the re-fetch fails or
returns empty results.

Co-authored-by: Steake <530040+Steake@users.noreply.github.com>

* refactor: remove broken views and fix hardcoded URLs across frontend

Frontend audit findings:
- ReasoningSessionViewer called non-existent /api/transparency/sessions/active (404)
  → Removed from nav; redundant with TransparencyPanel's Reasoning Trace tab
- ProvenanceTracker called non-existent /api/transparency/provenance/* (404)
  → Removed from nav; no backend endpoints exist
- CapabilityDashboard used hardcoded http://localhost:8000 → now uses config.js
- ChatInterface used hardcoded http://localhost:8000 → now uses config.js
- KnowledgeGraphCollaborativeManager used hardcoded ws://localhost:8000 → now uses config.js

Navigation reduced from 17 to 15 views, cutting only broken ones.

Co-authored-by: Steake <530040+Steake@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: Steake <530040+Steake@users.noreply.github.com>
…e with hot-reload (#120)

* Initial plan

* feat: implement SQLiteKnowledgeStore, OntologyHotReloader, and migration utility

Co-authored-by: Steake <530040+Steake@users.noreply.github.com>

* test: add comprehensive tests for SQLite store, hot reloader, and migration

Co-authored-by: Steake <530040+Steake@users.noreply.github.com>

* fix: address code review feedback — validation, encapsulation, and edge cases

Co-authored-by: Steake <530040+Steake@users.noreply.github.com>

* feat: pivot to ChromaDB — remove SQLite, implement ChromaKnowledgeStore with vector + metadata persistence

Co-authored-by: Steake <530040+Steake@users.noreply.github.com>

* fix: address code review — empty collection early return, sentinel filter, comment style

Co-authored-by: Steake <530040+Steake@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: Steake <530040+Steake@users.noreply.github.com>
…#116)

* Initial plan

* feat(iit): implement bipartition MI approximation for calculate_phi and add unit tests

Replace the heuristic-based InformationIntegrationTheory.calculate_phi()
with a tractable bipartition mutual-information approximation (Tononi 2004):
- Convert subsystem dicts to numeric vectors via recursive flattening
- Enumerate all non-trivial bipartitions at subsystem level (63 cuts)
- φ = min MI across all cuts, with noise-floor suppression for idle states
- Preserve contradiction penalty from self-model validator
- Add 'phi' field to WebSocket broadcast payload for acceptance criteria
- 27 unit tests: idle→φ=0, active→φ>0, penalty, helpers, performance <50ms
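The bipartition enumeration can be sketched like this; the MI function is passed in as a stand-in, and 7 subsystems yield the 63 cuts mentioned above:

```python
# Sketch: enumerate all non-trivial bipartitions of n subsystems
# (2**(n-1) - 1 cuts) and take phi = min MI across them.
from itertools import combinations
from typing import Callable, List, Tuple

def bipartitions(n: int) -> List[Tuple[Tuple[int, ...], Tuple[int, ...]]]:
    """All non-trivial two-way cuts of range(n), each counted once."""
    idx = set(range(n))
    cuts = []
    for size in range(1, n // 2 + 1):
        for part_a in combinations(range(n), size):
            part_b = tuple(sorted(idx - set(part_a)))
            if size == n - size and part_a > part_b:
                continue  # skip the mirror of an equal-size cut
            cuts.append((part_a, part_b))
    return cuts

def calculate_phi(n: int,
                  mi: Callable[[Tuple[int, ...], Tuple[int, ...]], float]) -> float:
    """phi = min MI over all cuts; the weakest cut bounds integration."""
    return min(mi(a, b) for a, b in bipartitions(n))
```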

Co-authored-by: Steake <530040+Steake@users.noreply.github.com>

* address code review: document magic numbers, add division guard, expand test coverage

Co-authored-by: Steake <530040+Steake@users.noreply.github.com>

* merge: integrate main with PRs #114 (schema contracts), #117 (GlobalWorkspace), #118 (EmergenceDetector), #119 (transparency), #120 (ChromaDB)

Resolve .gitignore conflict (keep both gitignore entries).
unified_consciousness_engine.py auto-merged cleanly:
IIT φ calculator coexists with GlobalWorkspace broadcaster.
All 51 tests pass (30 IIT + 21 GlobalWorkspace).

Co-authored-by: Steake <530040+Steake@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: Steake <530040+Steake@users.noreply.github.com>
…oses #104)

Lazy-guard chromadb and spaCy imports; relax OPTIONAL_SUBSYSTEMS test assertions. Fixes #104.
…owledge-endpoints

# Conflicts:
#	backend/unified_server.py
#	godelOS/core_kr/knowledge_store/__init__.py
Copilot AI changed the title from "[WIP] Fix merge conflict in repair broken knowledge endpoints" to "merge: resolve conflicts — bring main into copilot/repair-broken-knowledge-endpoints (unblock PR #115)" on Mar 6, 2026
@Steake Steake marked this pull request as ready for review March 6, 2026 17:40
@Steake Steake self-requested a review as a code owner March 6, 2026 17:40
Copilot AI review requested due to automatic review settings March 6, 2026 17:40
@Steake Steake merged commit 77d6be8 into copilot/repair-broken-knowledge-endpoints Mar 6, 2026
2 checks passed

Copilot AI left a comment


Pull request overview

Merge commit to resolve conflicts and bring main forward into copilot/repair-broken-knowledge-endpoints, unblocking PR #115 while preserving upstream changes (IIT φ work, optional-dependency guards, and knowledge-store import resilience).

Changes:

  • Add optional-dependency guards for spaCy-driven NLU components and Chroma/hot-reloader exports to avoid import-chain breakage in slim CI environments.
  • Introduce/retain bipartition-based IIT φ implementation and add corresponding unit/performance tests.
  • Relax subsystem activation tests to tolerate missing heavy optional deps in CI.

Reviewed changes

Copilot reviewed 7 out of 7 changed files in this pull request and generated 5 comments.

File summaries:
wiki/Roadmap/v1.0.md: Adds v1.0 deployment deliverables (install/startup, env vars, health check).
tests/test_cognitive_subsystem_activation.py: Allows NLU/NLG subsystems to be non-active when optional deps are absent; filters init errors accordingly.
tests/backend/test_iit_phi_calculator.py: Adds unit tests (incl. a strict runtime assertion) for the IIT φ calculator.
godelOS/nlu_nlg/nlu/pipeline.py: Attempts to run NLU in a degraded mode when LAP/spaCy isn't available.
godelOS/nlu_nlg/nlu/lexical_analyzer_parser.py: Wraps spaCy imports to allow module import when spaCy is missing.
godelOS/core_kr/knowledge_store/__init__.py: Guards optional imports for ChromaKnowledgeStore and OntologyHotReloader.
backend/core/unified_consciousness_engine.py: Implements φ via bipartition MI approximation and includes φ in broadcast payload.

Comment on lines +23 to +24
LexicalAnalyzerParser = None # type: ignore[assignment,misc]
SyntacticParseOutput = None # type: ignore[assignment,misc]

Copilot AI Mar 6, 2026


The optional-import fallback sets SyntacticParseOutput = None, but NLUResult later annotates syntactic_parse: Optional[SyntacticParseOutput]. If the import ever fails, that becomes Optional[None] which raises TypeError at import time, defeating the degraded-mode intent and potentially breaking CognitivePipeline.initialize(). Use postponed evaluation (from __future__ import annotations) and/or set these fallbacks to a valid type (e.g., Any/a Protocol), or avoid overriding the symbol used in Optional[...].

Suggested change
-LexicalAnalyzerParser = None  # type: ignore[assignment,misc]
-SyntacticParseOutput = None  # type: ignore[assignment,misc]
+LexicalAnalyzerParser = Any  # type: ignore[assignment,misc]
+SyntacticParseOutput = Any  # type: ignore[assignment,misc]

Comment on lines +87 to +94
if _LAP_AVAILABLE and LexicalAnalyzerParser is not None:
    self.lexical_analyzer_parser = LexicalAnalyzerParser()
else:
    self.lexical_analyzer_parser = None
    self.logger.warning(
        "LexicalAnalyzerParser unavailable (spaCy not installed); "
        "NLU pipeline running in degraded mode."
    )

Copilot AI Mar 6, 2026


_LAP_AVAILABLE is based on whether the module imports, but lexical_analyzer_parser.py now swallows spaCy ImportError, so this will still be True even when spaCy is missing. That means the pipeline will try to instantiate LexicalAnalyzerParser() and fail later with an AttributeError rather than cleanly entering the intended degraded mode. Consider checking an explicit availability flag from the LAP module (e.g., _SPACY_AVAILABLE) and/or guarding inside LexicalAnalyzerParser.__init__ with a clear error.

Comment on lines +22 to +26
except ImportError:
    spacy = None  # type: ignore[assignment]
    Doc = None  # type: ignore[assignment,misc]
    SpacyToken = None  # type: ignore[assignment,misc]
    _SPACY_AVAILABLE = False

Copilot AI Mar 6, 2026


This makes the module importable even when spaCy isn't installed, but LexicalAnalyzerParser.__init__ unconditionally calls spacy.load(...). In a no-spaCy environment that turns into an AttributeError rather than a clear ImportError, and upstream availability checks based on module import will be wrong. Consider either (1) raising a descriptive ImportError/RuntimeError in LexicalAnalyzerParser.__init__ when _SPACY_AVAILABLE is false, or (2) re-raising ImportError at module import so callers can reliably detect the dependency.

Suggested change
-except ImportError:
-    spacy = None  # type: ignore[assignment]
-    Doc = None  # type: ignore[assignment,misc]
-    SpacyToken = None  # type: ignore[assignment,misc]
-    _SPACY_AVAILABLE = False
+except ImportError as exc:
+    raise ImportError(
+        "The 'godelOS.nlu_nlg.nlu.lexical_analyzer_parser' module requires spaCy "
+        "to be installed. Please install spaCy (e.g. 'pip install spacy' and the "
+        "appropriate language model) before using the lexical analyzer and parser."
+    ) from exc

Comment on lines +448 to +483
Removed (previous heuristic implementation):

    # Count non-empty/non-zero values as information
    info_count = 0
    for key, value in subsystem.items():
        if value:  # Non-empty, non-zero, non-None
            if isinstance(value, (list, dict)):
                info_count += len(value) if value else 0
            elif isinstance(value, (int, float)):
                info_count += 1 if value != 0 else 0
            elif isinstance(value, str):
                info_count += len(value.split()) if value.strip() else 0
            else:
                info_count += 1

    return float(info_count)

    def _calculate_integration(self, subsystem1: Dict[str, Any], subsystem2: Dict[str, Any]) -> float:
        """Calculate integration between two subsystems"""
        # Look for shared concepts, cross-references, or causal relationships
        shared_concepts = 0

        # Simple heuristic: look for overlapping keys or values
        keys1 = set(subsystem1.keys())
        keys2 = set(subsystem2.keys())
        shared_keys = keys1.intersection(keys2)
        shared_concepts += len(shared_keys)

        # Look for cross-references in values (simplified)
        values1 = str(subsystem1).lower()
        values2 = str(subsystem2).lower()

        # Count word overlaps as integration
        words1 = set(values1.split())
        words2 = set(values2.split())
        shared_words = words1.intersection(words2)
        shared_concepts += len(shared_words)

        return float(shared_concepts)

Added (bipartition MI helpers, lines +448 to +483):

    @staticmethod
    def _binned_entropy(values: np.ndarray, num_bins: int = _NUM_BINS) -> float:
        """Shannon entropy (bits) estimated via histogram binning."""
        if values.size < 2:
            return 0.0
        # ptp == 0 means all values are identical → zero entropy.
        if np.ptp(values) == 0:
            return 0.0
        counts, _ = np.histogram(values, bins=num_bins)
        total = counts.sum()
        if total == 0:
            return 0.0
        probs = counts / total
        probs = probs[probs > 0]
        return float(-np.sum(probs * np.log2(probs)))

    @classmethod
    def _bipartition_mi(
        cls,
        vectors: List[np.ndarray],
        partition_a: tuple,
        partition_b: tuple,
    ) -> float:
        """Mutual information across a subsystem-level bipartition.

        MI(A; B) = H(A) + H(B) − H(A ∪ B), clamped to ≥ 0.
        """
        vec_a = np.concatenate([vectors[i] for i in partition_a])
        vec_b = np.concatenate([vectors[i] for i in partition_b])
        vec_all = np.concatenate([vec_a, vec_b])

        h_a = cls._binned_entropy(vec_a)
        h_b = cls._binned_entropy(vec_b)
        h_all = cls._binned_entropy(vec_all)

        return max(0.0, h_a + h_b - h_all)

Copilot AI Mar 6, 2026


The MI approximation uses _binned_entropy() which calls np.histogram(values, bins=num_bins) without a shared range/bin edges. Since np.histogram chooses bin edges based on each input array’s min/max, H(A), H(B), and H(A∪B) are computed over different discretizations, making H(A)+H(B)-H(A∪B) mathematically inconsistent and potentially distorting φ across partitions. Use common bin edges derived from vec_all (or a fixed global range) when computing entropies for vec_a, vec_b, and vec_all.
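One way to implement this suggestion, as a sketch with simplified standalone functions rather than the quoted class methods:

```python
# Sketch: derive one set of bin edges from the combined vector so that
# H(A), H(B), and H(A ∪ B) are computed over the same discretization.
import numpy as np

def binned_entropy(values: np.ndarray, edges: np.ndarray) -> float:
    """Shannon entropy (bits) over a fixed, shared set of bin edges."""
    counts, _ = np.histogram(values, bins=edges)
    probs = counts / counts.sum()
    probs = probs[probs > 0]
    return float(-np.sum(probs * np.log2(probs)))

def bipartition_mi(vec_a: np.ndarray, vec_b: np.ndarray,
                   num_bins: int = 16) -> float:
    vec_all = np.concatenate([vec_a, vec_b])
    edges = np.histogram_bin_edges(vec_all, bins=num_bins)  # shared edges
    mi = (binned_entropy(vec_a, edges) + binned_entropy(vec_b, edges)
          - binned_entropy(vec_all, edges))
    return max(0.0, mi)
```

Because the edges span vec_all, every value of vec_a and vec_b falls inside the histogram range, and the three entropies become term-by-term comparable.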

Comment on lines +311 to +314
max_ms = max(timings)

assert avg_ms < 50, f"φ avg {avg_ms:.1f} ms exceeds 50 ms target"
assert max_ms < 50, f"φ worst-case {max_ms:.1f} ms exceeds 50 ms target"

Copilot AI Mar 6, 2026


The performance test asserts both average and worst-case runtime < 50ms across 50 iterations. This is likely to be flaky on shared/contended CI runners and can cause unrelated PRs to fail. Consider marking this as a dedicated benchmark/performance test (separate marker/job), using a looser threshold for worst-case, or asserting on a percentile/median with a generous ceiling.

Suggested change
-max_ms = max(timings)
-assert avg_ms < 50, f"φ avg {avg_ms:.1f} ms exceeds 50 ms target"
-assert max_ms < 50, f"φ worst-case {max_ms:.1f} ms exceeds 50 ms target"
+p95_ms = float(np.percentile(timings, 95))
+assert avg_ms < 50, f"φ avg {avg_ms:.1f} ms exceeds 50 ms target"
+assert p95_ms < 75, f"φ 95th percentile {p95_ms:.1f} ms exceeds 75 ms ceiling"
