test: improve unit test coverage toward 80% target #47
Conversation
Add comprehensive test coverage across the codebase:

**CI Infrastructure:**
- Add PostgreSQL (pgvector/pg17) and Redis service containers to CI workflow
- Set DATABASE_URL and REDIS_URL env vars for integration tests

**Tier 1 — Pure unit tests (highest ROI):**
- collab/presence.py: 0% → 95% (PresenceTracker)
- collab/transform.py: 0% → 95% (OT transform logic)
- collab/protocol.py: 0% → 90% (Pydantic models, enums)
- core/encryption.py: 35% → 95% (Fernet encrypt/decrypt)
- services/notification_service.py: 34% → 100%
- services/user_service.py: 17% → 91%
- services/github_service.py: 36% → 75%

**Tier 2 — Service tests with mocked dependencies:**
- services/project_service.py: 15% → 40%+
- services/ontology_index.py: 13% → 40%+
- services/ontology.py: 27% → 45%+
- services/remote_sync_service.py: 21% → 60%
- services/join_request_service.py: 18% → 50%+

**Tier 3 — Route tests with dependency overrides:**
- api/routes/auth.py: 24% → 70%+
- api/routes/lint.py: 24% → 50%+
- api/routes/notifications.py: 73% → 90%+
- api/routes/quality.py: 38% → 60%+
- api/routes/user_settings.py: 33% → 60%+

**Tier 4 — Git + worker tests:**
- git/bare_repository.py: 23% → 55%+ (real pygit2 repos)
- worker.py: 17% → 40%+
- Integration tests for git operations

**Fixes:**
- Fix missing xsd: prefix in sample_ontology_turtle fixture
- Add GIT_SORT_TOPOLOGICAL to bare repo history for deterministic ordering

Total: 477 tests (was 136), 52% coverage (was 40%).

Closes #46

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
No actionable comments were generated in the recent review. 🎉

ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (2)
✅ Files skipped from review due to trivial changes (1)
🚧 Files skipped from review as they are similar to previous changes (1)
📝 Walkthrough

Massive test additions and CI service wiring; pygit2 commit-walking updated to use pygit2.enums.SortMode; timezone-aware presence timestamps; broad Pydantic v2 changes.
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
- Set HEAD to refs/heads/main in bare_git_repo fixture to avoid dependency on system git config (CI defaults to 'master')
- Replace all pygit2.GIT_SORT_TIME with pygit2.enums.SortMode.TIME to fix mypy errors and remove type: ignore comments
- Fix missing type parameter on dict return type in conftest

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
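The GIT_SORT_TIME → SortMode.TIME switch works because pygit2's `SortMode` is an IntFlag-style enum whose members replace the old module-level `GIT_SORT_*` constants, so flags still combine with `|`. A minimal stdlib sketch of the pattern (the class below is a stand-in mirroring the member names, not the real pygit2 enum):

```python
import enum


# Stand-in mirroring pygit2.enums.SortMode member names (assumed shape);
# in real code you would use pygit2.enums.SortMode directly.
class SortMode(enum.IntFlag):
    NONE = 0
    TOPOLOGICAL = 1
    TIME = 2
    REVERSE = 4


# Flags combine exactly like the old integer constants did,
# so repo.walk(ref, SortMode.TIME | SortMode.TOPOLOGICAL) keeps working.
flags = SortMode.TIME | SortMode.TOPOLOGICAL
print(int(flags))  # 3
print(SortMode.TIME in flags)  # True
```

Because the enum members are typed, mypy can check the `walk()` argument without `type: ignore` comments.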
Actionable comments posted: 12
🧹 Nitpick comments (11)
tests/unit/test_user_service.py (5)
60-74: Consider using `SimpleNamespace` for a cleaner settings mock.

The dynamic `type()` call works, but `SimpleNamespace` is more idiomatic for this pattern.

💡 Optional: Use SimpleNamespace for clarity

```diff
+from types import SimpleNamespace
+
 @pytest.fixture(autouse=True)
 def _mock_settings(monkeypatch: pytest.MonkeyPatch) -> None:
     """Set Zitadel settings for all tests."""
     monkeypatch.setattr(
         "ontokit.services.user_service.settings",
-        type(
-            "S",
-            (),
-            {
-                "zitadel_service_token": "test-service-token",
-                "zitadel_issuer": "https://auth.example.com",
-                "zitadel_internal_url": "",
-            },
-        )(),
+        SimpleNamespace(
+            zitadel_service_token="test-service-token",
+            zitadel_issuer="https://auth.example.com",
+            zitadel_internal_url="",
+        ),
     )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_user_service.py` around lines 60 - 74, Replace the ad-hoc dynamic type used to mock settings in the _mock_settings pytest fixture with a SimpleNamespace to make the mock clearer and more idiomatic: modify the monkeypatch.setattr call that targets "ontokit.services.user_service.settings" inside the _mock_settings fixture so it assigns an instance of types.SimpleNamespace (or from types import SimpleNamespace) with the same attributes ("zitadel_service_token", "zitadel_issuer", "zitadel_internal_url") instead of using type("S", (), {...})().
99-161: Extract repetitive mock client setup into a reusable fixture.

The async mock client setup pattern (lines 104-107, 122-125, 138-141, 153-156) is duplicated across nearly every test. Extracting this into a parametrizable fixture or helper would reduce boilerplate and maintenance burden.
♻️ Suggested fixture to reduce duplication
Add a fixture that creates the mock client scaffold:
```python
@pytest.fixture
def mock_async_client() -> AsyncMock:
    """Create a mock httpx.AsyncClient with async context manager support."""
    mock_client = AsyncMock()
    mock_client.__aenter__ = AsyncMock(return_value=mock_client)
    mock_client.__aexit__ = AsyncMock(return_value=False)
    return mock_client
```

Then simplify tests:

```diff
 @pytest.mark.asyncio
-async def test_successful_lookup(self, user_service: UserService) -> None:
+async def test_successful_lookup(
+    self, user_service: UserService, mock_async_client: AsyncMock
+) -> None:
     """Successful API call returns UserInfo dict."""
     mock_resp = _mock_response(200, ZITADEL_USER_RESPONSE)
-
-    mock_client = AsyncMock()
-    mock_client.get = AsyncMock(return_value=mock_resp)
-    mock_client.__aenter__ = AsyncMock(return_value=mock_client)
-    mock_client.__aexit__ = AsyncMock(return_value=False)
+    mock_async_client.get = AsyncMock(return_value=mock_resp)

-    with patch("httpx.AsyncClient", return_value=mock_client):
+    with patch("httpx.AsyncClient", return_value=mock_async_client):
         result = await user_service.get_user_info("user-001")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_user_service.py` around lines 99 - 161, Duplicate async httpx.AsyncClient setup is repeated across tests; extract it into a reusable pytest fixture (e.g., mock_async_client) that returns an AsyncMock with __aenter__ returning itself and __aexit__ stubbed, then update each test signature to accept mock_async_client and set mock_async_client.get = AsyncMock(return_value=...) or side_effect as needed before using with patch("httpx.AsyncClient", return_value=mock_async_client):; ensure fixture name matches in tests and keep existing assertions (references: tests test_successful_lookup, test_caching, test_http_error_returns_none, test_non_200_returns_none and symbol mock_client used in each).
273-287: Testing the private `_cache` attribute directly couples tests to implementation.

Accessing `user_service._cache` (lines 286-287) verifies internal state rather than observable behavior. Consider verifying caching by checking that a subsequent `get_user_info("user-001")` call doesn't trigger an HTTP request, similar to how `test_caching` works.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_user_service.py` around lines 273 - 287, Replace the direct inspection of the private attribute user_service._cache in test_search_populates_cache with a behavioral assertion: after calling user_service.search_users("jane") verify that a subsequent await user_service.get_user_info("user-001") does not perform an HTTP request (e.g., reuse or update the existing AsyncMock httpx.AsyncClient to raise or assert its post method is not called on the second lookup), following the same approach used in test_caching; keep the initial patch of httpx.AsyncClient and the ZITADEL_SEARCH_RESPONSE setup but remove assertions that reference _cache directly.
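The behavioral alternative this prompt describes can be sketched with a bare `AsyncMock`. The service class below is a minimal stand-in for a cache-on-read service (the real `UserService` lives in `ontokit.services.user_service`; its shape here is assumed):

```python
import asyncio
from unittest.mock import AsyncMock


class CachingService:
    """Minimal stand-in for a cache-on-read service (assumed shape)."""

    def __init__(self, client) -> None:
        self._client = client
        self._cache: dict[str, dict] = {}

    async def get_user_info(self, user_id: str) -> dict:
        if user_id in self._cache:
            return self._cache[user_id]
        info = await self._client.get(f"/users/{user_id}")
        self._cache[user_id] = info
        return info


async def main() -> int:
    client = AsyncMock()
    client.get = AsyncMock(return_value={"id": "user-001"})
    svc = CachingService(client)

    await svc.get_user_info("user-001")
    await svc.get_user_info("user-001")  # second call should be served from cache

    # Behavioral assertion: exactly one HTTP call happened; no _cache peeking.
    client.get.assert_awaited_once()
    return client.get.await_count


call_count = asyncio.run(main())
print(call_count)  # 1
```

This asserts the observable contract (one network call) and keeps working even if the cache's internal representation changes.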
370-389: Consider adding teardown to reset singleton state after tests.

Directly setting `mod._user_service = None` works for isolation but doesn't restore the original state after the test. If other tests in the suite depend on `get_user_service()`, they could be affected by leftover state.

💡 Optional: Use a fixture with cleanup

```python
@pytest.fixture
def reset_singleton():
    """Reset UserService singleton before and after test."""
    import ontokit.services.user_service as mod

    original = mod._user_service
    mod._user_service = None
    yield
    mod._user_service = original


class TestGetUserServiceSingleton:
    """Tests for get_user_service() factory."""

    def test_returns_user_service_instance(self, reset_singleton) -> None:
        """get_user_service returns a UserService singleton."""
        svc = get_user_service()
        assert isinstance(svc, UserService)

    def test_returns_same_instance(self, reset_singleton) -> None:
        """Repeated calls return the same singleton."""
        svc1 = get_user_service()
        svc2 = get_user_service()
        assert svc1 is svc2
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_user_service.py` around lines 370 - 389, The tests in TestGetUserServiceSingleton mutate the module singleton (mod._user_service) but don't restore it, risking cross-test pollution; add teardown/fixture-based cleanup that captures the original mod._user_service, sets it to None before each test and restores it after (e.g. a pytest fixture like reset_singleton) and apply it to both test_returns_user_service_instance and test_returns_same_instance so get_user_service() is isolated and the original singleton is reinstated after the test.
232-247: Sequential mock assumption may be fragile if implementation changes to concurrent fetching.

The `side_effect=[mock_resp_ok, mock_resp_fail]` assumes sequential HTTP calls. If `get_users_info` is refactored to use `asyncio.gather` for parallel requests, the order may become non-deterministic. This is acceptable for now but worth noting.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_user_service.py` around lines 232 - 247, The test test_batch_fetch_skips_failed relies on a sequential side_effect list for mock_client.get which breaks if user_service.get_users_info switches to concurrent fetching; replace the brittle side_effect=[mock_resp_ok, mock_resp_fail] with a deterministic side_effect function that inspects the incoming request (e.g., URL or user id argument) and returns mock_resp_ok for "user-001" and mock_resp_fail for "user-missing", or alternatively map requests to responses by key; update the AsyncMock assigned to mock_client.get (the mock for httpx.AsyncClient) so it returns responses based on input rather than call order to keep test stable regardless of concurrency in get_users_info.

tests/unit/test_project_service.py (1)
1-1: Avoid file-wide Ruff ARG suppression.

Line 1 disables `ARG` checks for the entire file; that can hide accidental unused-argument regressions. Prefer targeted per-test suppression only where needed.

Proposed adjustment

```diff
-# ruff: noqa: ARG001, ARG002
+# Keep ARG rules enabled globally in this file.
+# Add targeted noqa on specific test signatures only when unavoidable.
```

As per coding guidelines, "Enable Ruff rules: E, W, F, I, B, C4, UP, ARG, SIM for linting".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_project_service.py` at line 1, Remove the file-wide "# ruff: noqa: ARG001, ARG002" suppression and instead add targeted per-test suppressions only where a test intentionally keeps unused arguments (e.g., fixture parameters like "client", "tmp_path", "mock_..." in specific test functions). Locate the test functions in tests/unit/test_project_service.py that require unused args and add inline noqa comments or decorator-level suppressions (e.g., append "# ruff: noqa: ARG001" to the specific function or its unused parameter) so ARG checks remain enabled for the rest of the file.

tests/unit/test_encryption.py (1)
51-58: Test name/docstring doesn't match actual behavior tested.

The test verifies that successive `get_fernet()` calls produce compatible Fernet instances (can decrypt each other's output), but doesn't verify actual caching (same instance returned). Consider renaming to clarify intent or adding an assertion that verifies caching:

♻️ Suggested clarification

```diff
 def test_caches_instance(self) -> None:
-    """Successive calls with same key produce equivalent Fernet instances."""
+    """Successive calls with same key produce compatible Fernet instances."""
     f1 = get_fernet()
     f2 = get_fernet()
-    # Both should be valid Fernet instances that can decrypt each other's output
+    # Both should be valid Fernet instances that can decrypt each other's output.
+    # Note: This doesn't verify instance identity caching, just functional equivalence.
     plaintext = b"cache-test"
     cipher = f1.encrypt(plaintext)
     assert f2.decrypt(cipher) == plaintext
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_encryption.py` around lines 51 - 58, The test name/docstring claim caching but only checks compatibility; either rename test_caches_instance to reflect compatibility (e.g., test_get_fernet_instances_are_compatible) or add an assertion that verifies caching by checking object identity (assert f1 is f2) so the test truly asserts that get_fernet() returns the same cached Fernet instance; update the docstring to match the chosen behavior and keep references to get_fernet and test_caches_instance for locating the change.

tests/unit/conftest.py (1)
18-51: Well-structured test fixture with a minor consideration.

The fixture provides good isolation for unit tests. Note that `get_current_user` and `get_current_user_optional` overrides use sync lambdas while the original dependencies are likely async. FastAPI handles this correctly, but for consistency you could use `async def` overrides:

♻️ Optional: Use async overrides for consistency

```diff
-    app.dependency_overrides[get_current_user] = lambda: user
-    app.dependency_overrides[get_current_user_optional] = lambda: user
+    async def _override_get_current_user() -> CurrentUser:
+        return user
+
+    app.dependency_overrides[get_current_user] = _override_get_current_user
+    app.dependency_overrides[get_current_user_optional] = _override_get_current_user
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/conftest.py` around lines 18 - 51, The fixture authed_client overrides get_current_user and get_current_user_optional with sync lambdas while the original dependencies are async; change those overrides to async functions that return the user (e.g. define async def override_get_current_user() -> CurrentUser: return user and async def override_get_current_user_optional() -> CurrentUser: return user) and assign them to app.dependency_overrides[get_current_user] and app.dependency_overrides[get_current_user_optional] so the async/sync contract matches the original dependencies.

ontokit/git/bare_repository.py (1)
382-384: Apply consistent sort flags across both code paths for clarity.

The `all_branches=False` path now uses `GIT_SORT_TIME | GIT_SORT_TOPOLOGICAL` for deterministic ordering, but the `all_branches=True` path at line 364 still uses only `GIT_SORT_TIME`. While the `all_branches=True` path re-sorts commits afterward (making TOPOLOGICAL unnecessary for correctness), applying consistent flags improves maintainability and prevents subtle issues if the re-sort is removed in the future.

Additionally, the pipeline failure (`GitError: reference 'refs/heads/master' not found`) stems from `_resolve_ref` when `HEAD` points to a non-existent branch in a fresh repository. This is unrelated to the sort flag change but may affect tests calling `create_branch(from_ref='HEAD')` on newly initialized repos.

🔧 Optional: Apply consistent sort flags

```diff
 for ref_name in self.repo.references:
     if ref_name.startswith("refs/heads/"):
         ref = self.repo.references[ref_name]
-        for commit in self.repo.walk(ref.target, pygit2.GIT_SORT_TIME):  # type: ignore[arg-type]
+        for commit in self.repo.walk(ref.target, pygit2.GIT_SORT_TIME | pygit2.GIT_SORT_TOPOLOGICAL):  # type: ignore[arg-type]
             commit_hash = str(commit.id)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ontokit/git/bare_repository.py` around lines 382 - 384, The branch-iteration path should use the same sort flags as the other path: change the repo.walk call used when all_branches=True to include pygit2.GIT_SORT_TOPOLOGICAL in addition to pygit2.GIT_SORT_TIME (match the flags used in the else branch where enumerate(self.repo.walk(...)) is present). Also fix the HEAD resolution edge-case in _resolve_ref (and callers like create_branch) so that when HEAD is unborn or points to a missing branch you gracefully handle it (e.g., detect repo.head_is_unborn or catch the PyGit2 error when resolving 'HEAD' and return a clear None/default or raise a descriptive error), and update create_branch to handle that None/default path instead of failing with "reference 'refs/heads/master' not found". Ensure references to repo.walk, _resolve_ref, and create_branch are updated accordingly.

tests/unit/test_lint_routes.py (2)
209-210: Move imports to the top of the file.

Imports should be at module level rather than inside test methods. `UUID` is already imported at line 7; consolidate `datetime`, `UTC`, and `uuid4` imports at the top.

Proposed fix

Add to imports at the top of the file (after line 7):

```python
from datetime import UTC, datetime
from uuid import UUID, uuid4
```

Then remove lines 209-210 from the test method.

```diff
 def test_get_lint_run_with_issues(
     self,
     mock_access: AsyncMock,
     authed_client: tuple[TestClient, AsyncMock],
 ) -> None:
     """Returns run details with issues when run exists."""
     client, mock_session = authed_client
     mock_access.return_value = Mock()

-    from datetime import UTC, datetime
-    from uuid import uuid4
-
     run_uuid = UUID(RUN_ID)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_lint_routes.py` around lines 209 - 210, Move the local imports out of the test and consolidate them with the module-level imports: add top-level imports for datetime and UTC (from datetime import UTC, datetime) and for uuid4 alongside the existing UUID import (from uuid import UUID, uuid4), then remove the in-test imports `from datetime import UTC, datetime` and `from uuid import uuid4` from the test body so all imports are at module scope and UUID/uuid4 are consolidated.
125-146: Consider adding a test for the positive case.

The current test only covers the scenario when no runs exist. Adding a test that verifies the response when runs do exist (returning `last_run` data and `total_issues > 0`) would improve coverage for this endpoint.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_lint_routes.py` around lines 125 - 146, Add a complementary positive test to TestLintStatus that mocks a DB result with an existing run and nonzero issues: create a new test (e.g., test_lint_status_with_runs) that reuses authed_client and mock_access, set mock_session.execute.return_value.scalar_one_or_none to return a mock/record containing the expected fields (last_run and total_issues) and then call client.get("/api/v1/projects/{PROJECT_ID}/lint/status") asserting status_code == 200, that data["last_run"] matches the mocked last_run value and data["total_issues"] > 0; reference TestLintStatus, test_lint_status_no_runs, mock_session.execute, and mock_result.scalar_one_or_none to locate where to add the new test.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/integration/test_git_operations.py`:
- Line 66: The test branches are being created without an explicit base ref
which causes CI failures due to default branch resolution; update each call to
bare_git_repo.create_branch(...) (including the instances at the other
locations) to pass from_ref="main" (e.g.,
bare_git_repo.create_branch("feature-x", from_ref="main")) so the new branches
are created from the explicit main branch; apply the same change to all other
create_branch(...) calls in this test file that omit from_ref.
In `@tests/unit/test_bare_repository.py`:
- Line 101: The tests call bare_git_repo.create_branch(...) without specifying
the source ref, causing create_branch to default to resolving "master" while the
fixture uses "main"; update each bare_git_repo.create_branch call (e.g., the
invocation referenced and the other occurrences in this file) to pass
from_ref="main" so the branch is created from the correct base (use the
create_branch method on the bare_git_repo fixture and add the named parameter
from_ref="main").
In `@tests/unit/test_collab_presence.py`:
- Around line 3-4: Replace uses of the deprecated naive datetime.utcnow() with
timezone-aware calls: change any datetime.utcnow() occurrences in
tests/unit/test_collab_presence.py to datetime.now(tz=timezone.utc) and add the
corresponding import (e.g., from datetime import timezone) or use
datetime.now(tz=timezone.utc) where datetime is already imported; update all
instances (including the ones flagged around lines 260-261) so datetimes are
UTC-aware throughout the file.
- Around line 174-184: The test and PresenceTracker implementation use naive UTC
datetimes; change both to timezone-aware UTC datetimes: update PresenceTracker
(class PresenceTracker, methods join and update_cursor) to set and update
_last_seen entries using datetime.now(timezone.utc) (or
datetime.utcnow().replace(tzinfo=timezone.utc)) so stored values are aware, and
modify the test test_update_cursor_updates_last_seen to patch
ontokit.collab.presence.datetime to return an aware datetime (use timezone.utc)
and compare aware datetimes accordingly.
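The distinction these prompts are driving at can be shown in a minimal sketch: `datetime.utcnow()` returns a naive value, while `datetime.now(tz=timezone.utc)` carries `tzinfo`, and comparing the two raises:

```python
from datetime import datetime, timezone

aware = datetime.now(tz=timezone.utc)
naive = datetime(2024, 1, 1, 12, 0, 0)  # what utcnow() produces: tzinfo is None

print(aware.tzinfo)  # UTC
print(naive.tzinfo)  # None

# Naive/aware comparisons fail loudly, which is why the tests and the
# PresenceTracker implementation must agree on awareness.
try:
    _ = aware < naive
    mixed_comparison_ok = True
except TypeError:
    mixed_comparison_ok = False
print(mixed_comparison_ok)  # False
```

This is why both the tracker and its tests need to switch together rather than one side at a time.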
In `@tests/unit/test_collab_transform.py`:
- Line 3: Update the tests to use timezone-aware UTC datetimes: modify the
import in test_collab_transform.py to include timezone (e.g., add timezone or
UTC) and replace all uses of datetime.utcnow() (and any direct utcnow calls in
this file) with datetime.now(tz=timezone.utc) so all generated datetimes are
timezone-aware; ensure the same replacements are made for the other occurrences
called out (previously at lines referenced in the comment) and run tests to
confirm no further datetime assumptions remain.
In `@tests/unit/test_join_request_service.py`:
- Around line 123-128: The test test_create_request_for_public_project currently
only calls service._to_response; change it to exercise the public-project
creation path by calling service.create_request(...) with a JoinRequestCreate
instance (use the existing _make_user(user_id=REQUESTER_ID) and any project
fixture) instead of service._to_response(_make_join_request()); keep the
NotificationService patch in place, pass the correct requester and project
identifiers to create_request, and assert the expected side effects/response
(e.g., returned join request or repository state) rather than just verifying
_to_response behavior.
In `@tests/unit/test_notification_routes.py`:
- Around line 51-60: Replace the per-test manual setting and popping of
app.dependency_overrides[get_service] with a function-scoped fixture (e.g.,
mock_notification_service) that creates an AsyncMock(spec=NotificationService),
assigns app.dependency_overrides[get_service] = lambda: mock_service, yields the
mock, and ensures cleanup in a finally block by popping the override; then
update each test (those using app.dependency_overrides/get_service and
authed_client) to accept mock_notification_service and configure its return
values instead of manually setting/popping the override (alternatively use
patch.dict around app.dependency_overrides in each test).
In `@tests/unit/test_ontology_index_service.py`:
- Around line 263-264: The test currently checks mock_db.commit.called which
only detects coroutine creation; update the assertions to verify the AsyncMock
was actually awaited by using mock_db.commit.assert_awaited_once() (or
assert_awaited() / assert_not_awaited() as appropriate) after calling
service.delete_branch_index(PROJECT_ID, BRANCH, auto_commit=True); similarly
replace the other commit-related assertions in the same test (the one around the
subsequent delete call) to use the AsyncMock await assertions so the test fails
if the code forgets to await db.commit().
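The difference behind this prompt can be reproduced with a bare `AsyncMock`: calling an async mock without `await` still flips `.called`, but only the await-aware assertions catch the missing `await` (a minimal illustration, not the project's test code):

```python
from unittest.mock import AsyncMock

mock_db = AsyncMock()

# Simulate a code path that forgets to await db.commit().
coro = mock_db.commit()  # call is recorded immediately, but never awaited
coro.close()             # close the coroutine to avoid a "never awaited" warning

print(mock_db.commit.called)  # True: .called passes even without an await

try:
    mock_db.commit.assert_awaited_once()
    caught_missing_await = False
except AssertionError:
    caught_missing_await = True
print(caught_missing_await)  # True: the await-aware assertion flags the bug
```

So `assert mock_db.commit.called` would let a forgotten `await db.commit()` slip through, while `assert_awaited_once()` fails as intended.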
In `@tests/unit/test_project_service.py`:
- Around line 160-171: The test test_get_public_project_as_anonymous currently
calls _to_response directly and doesn't exercise ProjectService.get; change it
to call and await service.get(project.id, None) (or the appropriate signature)
so the service's fetch/access logic runs, reusing the existing mock_db.execute
return (mock_result.scalar_one_or_none -> project) and then assert the returned
response has is_public True and user_role is None; keep references to
ProjectService.get, test_get_public_project_as_anonymous, and _to_response to
locate where to modify the test.
In `@tests/unit/test_remote_sync_service.py`:
- Around line 187-188: Test is too broad: replace the generic Exception in the
pytest.raises with pydantic.errors.ValidationError so the test only catches
validation failures from RemoteSyncConfigResponse.model_validate; update
tests/unit/test_remote_sync_service.py by importing ValidationError from
pydantic (or pydantic.errors) and change the context manager to with
pytest.raises(ValidationError): await service.save_config(PROJECT_ID, data,
_make_user()) (also remove the noqa:B017 suppression).
- Line 1: Remove the module-wide ruff suppression by deleting the top-level "#
ruff: noqa: ARG002" so linting isn't globally disabled; keep the existing inline
per-parameter suppressions already present on the unused mock_db parameters (the
comments on the test functions around the mock_db fixtures at lines with those
mocks) so only those specific ARG002 warnings remain suppressed.
In `@tests/unit/test_user_settings_routes.py`:
- Around line 67-68: Dependency override assignments using
app.dependency_overrides = lambda: mock_github (and similar overrides for other
services) can leak if a test fails before calling pop(...); wrap each override
in a try/finally (or use a fixture/context manager) that ensures
app.dependency_overrides.pop(get_github_service, None) runs in the finally block
so the override is always removed. Locate places assigning
app.dependency_overrides[get_github_service] = ... (and the other listed
overrides) and change test blocks to set the override, run assertions in the
try, and remove the override in finally (or refactor to a reusable fixture that
sets and always cleans app.dependency_overrides).
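The cleanup pattern the override-related prompts keep asking for amounts to "always pop the override in a finally block". A framework-free sketch (the dict stands in for FastAPI's `app.dependency_overrides`; `get_github_service` here is a placeholder dependency):

```python
from collections.abc import Callable, Iterator
from contextlib import contextmanager

# Stand-in for FastAPI's app.dependency_overrides dict.
dependency_overrides: dict[Callable, Callable] = {}


def get_github_service() -> None:  # placeholder dependency
    raise NotImplementedError


@contextmanager
def override(dep: Callable, replacement: Callable) -> Iterator[None]:
    """Install an override and guarantee removal, even if the test body raises."""
    dependency_overrides[dep] = replacement
    try:
        yield
    finally:
        dependency_overrides.pop(dep, None)


# Even a failing test body cannot leak the override into later tests.
try:
    with override(get_github_service, lambda: "mock-github"):
        assert dependency_overrides[get_github_service]() == "mock-github"
        raise RuntimeError("simulated test failure")
except RuntimeError:
    pass

print(get_github_service in dependency_overrides)  # False: cleaned up
```

The same shape works as a pytest fixture: set the override before `yield`, pop it after, and every test that uses the fixture gets the cleanup for free.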
---
Nitpick comments:
In `@ontokit/git/bare_repository.py`:
- Around line 382-384: The branch-iteration path should use the same sort flags
as the other path: change the repo.walk call used when all_branches=True to
include pygit2.GIT_SORT_TOPOLOGICAL in addition to pygit2.GIT_SORT_TIME (match
the flags used in the else branch where enumerate(self.repo.walk(...)) is
present). Also fix the HEAD resolution edge-case in _resolve_ref (and callers
like create_branch) so that when HEAD is unborn or points to a missing branch
you gracefully handle it (e.g., detect repo.head_is_unborn or catch the PyGit2
error when resolving 'HEAD' and return a clear None/default or raise a
descriptive error), and update create_branch to handle that None/default path
instead of failing with "reference 'refs/heads/master' not found". Ensure
references to repo.walk, _resolve_ref, and create_branch are updated
accordingly.
In `@tests/unit/conftest.py`:
- Around line 18-51: The fixture authed_client overrides get_current_user and
get_current_user_optional with sync lambdas while the original dependencies are
async; change those overrides to async functions that return the user (e.g.
define async def override_get_current_user() -> CurrentUser: return user and
async def override_get_current_user_optional() -> CurrentUser: return user) and
assign them to app.dependency_overrides[get_current_user] and
app.dependency_overrides[get_current_user_optional] so the async/sync contract
matches the original dependencies.
In `@tests/unit/test_encryption.py`:
- Around line 51-58: The test name/docstring claim caching but only checks
compatibility; either rename test_caches_instance to reflect compatibility
(e.g., test_get_fernet_instances_are_compatible) or add an assertion that
verifies caching by checking object identity (assert f1 is f2) so the test truly
asserts that get_fernet() returns the same cached Fernet instance; update the
docstring to match the chosen behavior and keep references to get_fernet and
test_caches_instance for locating the change.
In `@tests/unit/test_lint_routes.py`:
- Around line 209-210: Move the local imports out of the test and consolidate
them with the module-level imports: add top-level imports for datetime and UTC
(from datetime import UTC, datetime) and for uuid4 alongside the existing UUID
import (from uuid import UUID, uuid4), then remove the in-test imports `from
datetime import UTC, datetime` and `from uuid import uuid4` from the test body
so all imports are at module scope and UUID/uuid4 are consolidated.
- Around line 125-146: Add a complementary positive test to TestLintStatus that
mocks a DB result with an existing run and nonzero issues: create a new test
(e.g., test_lint_status_with_runs) that reuses authed_client and mock_access,
set mock_session.execute.return_value.scalar_one_or_none to return a mock/record
containing the expected fields (last_run and total_issues) and then call
client.get("/api/v1/projects/{PROJECT_ID}/lint/status") asserting status_code ==
200, that data["last_run"] matches the mocked last_run value and
data["total_issues"] > 0; reference TestLintStatus, test_lint_status_no_runs,
mock_session.execute, and mock_result.scalar_one_or_none to locate where to add
the new test.
In `@tests/unit/test_project_service.py`:
- Line 1: Remove the file-wide "# ruff: noqa: ARG001, ARG002" suppression and
instead add targeted per-test suppressions only where a test intentionally keeps
unused arguments (e.g., fixture parameters like "client", "tmp_path", "mock_..."
in specific test functions). Locate the test functions in
tests/unit/test_project_service.py that require unused args and add inline noqa
comments or decorator-level suppressions (e.g., append "# ruff: noqa: ARG001" to
the specific function or its unused parameter) so ARG checks remain enabled for
the rest of the file.
In `@tests/unit/test_user_service.py`:
- Around line 60-74: Replace the ad-hoc dynamic type used to mock settings in
the _mock_settings pytest fixture with a SimpleNamespace to make the mock
clearer and more idiomatic: modify the monkeypatch.setattr call that targets
"ontokit.services.user_service.settings" inside the _mock_settings fixture so it
assigns an instance of types.SimpleNamespace (or from types import
SimpleNamespace) with the same attributes ("zitadel_service_token",
"zitadel_issuer", "zitadel_internal_url") instead of using type("S", (),
{...})().
- Around line 99-161: Duplicate async httpx.AsyncClient setup is repeated across
tests; extract it into a reusable pytest fixture (e.g., mock_async_client) that
returns an AsyncMock with __aenter__ returning itself and __aexit__ stubbed,
then update each test signature to accept mock_async_client and set
mock_async_client.get = AsyncMock(return_value=...) or side_effect as needed
before using with patch("httpx.AsyncClient", return_value=mock_async_client):;
ensure fixture name matches in tests and keep existing assertions (references:
tests test_successful_lookup, test_caching, test_http_error_returns_none,
test_non_200_returns_none and symbol mock_client used in each).
- Around line 273-287: Replace the direct inspection of the private attribute
user_service._cache in test_search_populates_cache with a behavioral assertion:
after calling user_service.search_users("jane") verify that a subsequent await
user_service.get_user_info("user-001") does not perform an HTTP request (e.g.,
reuse or update the existing AsyncMock httpx.AsyncClient to raise or assert its
post method is not called on the second lookup), following the same approach
used in test_caching; keep the initial patch of httpx.AsyncClient and the
ZITADEL_SEARCH_RESPONSE setup but remove assertions that reference _cache
directly.
- Around line 370-389: The tests in TestGetUserServiceSingleton mutate the
module singleton (mod._user_service) but don't restore it, risking cross-test
pollution; add teardown/fixture-based cleanup that captures the original
mod._user_service, sets it to None before each test and restores it after (e.g.
a pytest fixture like reset_singleton) and apply it to both
test_returns_user_service_instance and test_returns_same_instance so
get_user_service() is isolated and the original singleton is reinstated after
the test.
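The capture/clear/restore flow can be sketched as follows; a context manager is used here so the snippet is self-contained, while in pytest the same body would live in a yield fixture (the module and attribute names are stand-ins for ontokit.services.user_service._user_service):

```python
import types
from contextlib import contextmanager

# Stand-in for the user_service module and its module-level singleton
mod = types.SimpleNamespace(_user_service="original-instance")


@contextmanager
def reset_singleton():
    """Capture the singleton, clear it for the test, restore it afterwards."""
    original = mod._user_service
    mod._user_service = None
    try:
        yield
    finally:
        mod._user_service = original


with reset_singleton():
    inside = mod._user_service         # None: the test starts from a clean slate
    mod._user_service = "test-instance"
after = mod._user_service              # restored: no cross-test pollution
```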
- Around line 232-247: The test test_batch_fetch_skips_failed relies on a
sequential side_effect list for mock_client.get which breaks if
user_service.get_users_info switches to concurrent fetching; replace the brittle
side_effect=[mock_resp_ok, mock_resp_fail] with a deterministic side_effect
function that inspects the incoming request (e.g., URL or user id argument) and
returns mock_resp_ok for "user-001" and mock_resp_fail for "user-missing", or
alternatively map requests to responses by key; update the AsyncMock assigned to
mock_client.get (the mock for httpx.AsyncClient) so it returns responses based
on input rather than call order to keep test stable regardless of concurrency in
get_users_info.
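The deterministic side_effect described above can be sketched like this (the URL paths are hypothetical; only the user ids come from the test):

```python
import asyncio
from unittest.mock import AsyncMock, Mock

mock_resp_ok = Mock(status_code=200)
mock_resp_fail = Mock(status_code=404)


def route_response(url: str, *args, **kwargs):
    """Pick the response from the request itself, not from call order."""
    return mock_resp_ok if "user-001" in url else mock_resp_fail


mock_client = AsyncMock()
mock_client.get = AsyncMock(side_effect=route_response)


async def demo():
    # Even if fetches become concurrent or reordered, each id maps correctly
    fail = await mock_client.get("/v2/users/user-missing")
    ok = await mock_client.get("/v2/users/user-001")
    return ok.status_code, fail.status_code


ok_status, fail_status = asyncio.run(demo())
```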
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 52497688-2747-4619-bb28-6322ce446d48
📒 Files selected for processing (28)
- .github/workflows/release.yml
- ontokit/git/bare_repository.py
- tests/conftest.py
- tests/integration/conftest.py
- tests/integration/test_git_operations.py
- tests/unit/conftest.py
- tests/unit/test_auth.py
- tests/unit/test_auth_routes.py
- tests/unit/test_bare_repository.py
- tests/unit/test_collab_presence.py
- tests/unit/test_collab_protocol.py
- tests/unit/test_collab_transform.py
- tests/unit/test_encryption.py
- tests/unit/test_github_service.py
- tests/unit/test_join_request_service.py
- tests/unit/test_lint_routes.py
- tests/unit/test_linter.py
- tests/unit/test_notification_routes.py
- tests/unit/test_notification_service.py
- tests/unit/test_ontology_index_service.py
- tests/unit/test_ontology_service.py
- tests/unit/test_ontology_service_extended.py
- tests/unit/test_project_service.py
- tests/unit/test_quality_routes.py
- tests/unit/test_remote_sync_service.py
- tests/unit/test_user_service.py
- tests/unit/test_user_settings_routes.py
- tests/unit/test_worker.py
with pytest.raises(Exception):  # noqa: B017
    await service.save_config(PROJECT_ID, data, _make_user())
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Inspect save_config to identify the exact exception raised on model validation failure.
rg -n -C4 'def save_config|model_validate|raise ' ontokit/services/remote_sync_service.py
Repository: CatholicOS/ontokit-api
🏁 Script executed:
# Read the test file around lines 187-188 to see the test context
sed -n '175,200p' tests/unit/test_remote_sync_service.py
🏁 Script executed:
# Check imports in the test file to see what exceptions are available
head -30 tests/unit/test_remote_sync_service.py
🏁 Script executed:
# Find Pydantic version in dependencies
rg -n 'pydantic|ValidationError' --type py pyproject.toml setup.py requirements.txt 2>/dev/null | head -20
🏁 Script executed:
# Check if ValidationError is used elsewhere in the test file
rg -n 'ValidationError' tests/unit/test_remote_sync_service.py
🏁 Script executed:
# Check what Pydantic v2 raises from model_validate
rg -n 'from pydantic' tests/ --type py | head -20
🏁 Script executed:
# Check the save_config implementation more carefully to see all exceptions
sed -n '79,116p' ontokit/services/remote_sync_service.py
🏁 Script executed:
# Check what RemoteSyncConfigResponse looks like
rg -n 'class RemoteSyncConfigResponse' ontokit/
🏁 Script executed:
# Check all exceptions that could be raised from save_config
# by looking at the schema validation
rg -n 'class RemoteSyncConfigResponse' -A 10 ontokit/schemas/remote_sync.py
🏁 Script executed:
# Verify that ValidationError is the correct exception from Pydantic v2
python3 << 'EOF'
from pydantic import BaseModel, ValidationError

class TestModel(BaseModel):
    name: str

try:
    TestModel.model_validate({"wrong_field": "value"})
except ValidationError as e:
    print(f"Exception type: {type(e).__name__}")
    print(f"Module: {type(e).__module__}")
EOF
Use ValidationError instead of Exception.
The save_config method calls RemoteSyncConfigResponse.model_validate(config) which raises ValidationError when validation fails. Catching the broad Exception type can mask unrelated failures and weaken test signal. Import ValidationError from pydantic and catch the specific exception type.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/unit/test_remote_sync_service.py` around lines 187 - 188, Test is too
broad: replace the generic Exception in the pytest.raises with
pydantic.errors.ValidationError so the test only catches validation failures
from RemoteSyncConfigResponse.model_validate; update
tests/unit/test_remote_sync_service.py by importing ValidationError from
pydantic (or pydantic.errors) and change the context manager to with
pytest.raises(ValidationError): await service.save_config(PROJECT_ID, data,
_make_user()) (also remove the noqa:B017 suppression).
Prevents failures on CI where the default branch may be 'master'. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…e.now(UTC) Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 6
🧹 Nitpick comments (6)
tests/conftest.py (2)
147-154: Prefer a single repo-init path via BareOntologyRepository.
The direct pygit2.init_repository(...); set_head(...) setup is redundant with the repository wrapper’s initialization behavior and increases fixture maintenance surface.
🧹 Suggested simplification
 def bare_git_repo(tmp_path: Path, sample_ontology_turtle: str) -> BareOntologyRepository:
     """Create a real pygit2 bare repo with an initial Turtle commit."""
     repo_path = tmp_path / "test-project.git"
-    raw_repo = pygit2.init_repository(str(repo_path), bare=True)
-    # Ensure HEAD points to refs/heads/main regardless of system git config
-    raw_repo.set_head("refs/heads/main")
-
     repo = BareOntologyRepository(repo_path)
     repo.write_file(
         branch_name="main",
         filepath="ontology.ttl",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/conftest.py` around lines 147 - 154, The fixture bare_git_repo currently calls pygit2.init_repository and raw_repo.set_head directly which is redundant; instead use the repository wrapper to initialize the repo to keep a single init path. Remove the direct pygit2.init_repository and set_head calls in the bare_git_repo fixture and construct/initialize the repo via BareOntologyRepository (or call its init/create method if it exposes one) using repo_path so the wrapper handles creating a bare repo and setting HEAD to refs/heads/main.
119-127: Tighten sample_project_data typing instead of dict[str, object].
dict[str, object] is too loose for strict typing and reduces fixture contract value in tests.
♻️ Suggested refactor
+from typing import TypedDict
+
+class SampleProjectData(TypedDict):
+    id: uuid.UUID
+    name: str
+    description: str
+    is_public: bool
+    owner_id: str
+
 @pytest.fixture
-def sample_project_data() -> dict[str, object]:
+def sample_project_data() -> SampleProjectData:
     """Provide sample project data as a dictionary."""
     return {
         "id": uuid.UUID("12345678-1234-5678-1234-567812345678"),
         "name": "Test Ontology Project",
         "description": "A sample project for testing purposes.",
         "is_public": True,
         "owner_id": "test-user-id",
     }
As per coding guidelines, "Enable MyPy strict mode type checking with Python 3.11 target".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/conftest.py` around lines 119 - 127, The fixture sample_project_data currently returns dict[str, object] which is too loose for strict MyPy; define a precise TypedDict (e.g., ProjectData or SampleProjectData) describing keys id: uuid.UUID, name: str, description: str, is_public: bool, owner_id: str and change the return annotation of sample_project_data to that TypedDict type (ensure TypedDict is imported from typing and uuid is used for the id field) so tests have a strict contract under Python 3.11 strict mode.
ontokit/git/bare_repository.py (1)
159-159: Blocking repo.walk(...) calls remain in touched paths; please route through async boundaries.
These revwalks are blocking I/O and can stall async request handling unless wrapped (e.g., asyncio.to_thread) at call boundaries.
As per coding guidelines "Use async/await for all I/O operations in Python code".
Also applies to: 364-390, 757-760
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ontokit/git/bare_repository.py` at line 159, The blocking calls to self.repo.walk(...) must be moved off the event loop: change the surrounding methods that call self.repo.walk (e.g., the methods in BareRepository that iterate commits using self.repo.walk) to async and perform the revwalk inside asyncio.to_thread so the blocking iteration runs in a thread (for example: commits = await asyncio.to_thread(lambda: list(self.repo.walk(self.repo.head.target, pygit2.enums.SortMode.TIME))); then iterate over that list or otherwise process it off-thread). Apply the same pattern to the other occurrences of self.repo.walk in this module so all revwalks are executed via asyncio.to_thread and awaited from async call-sites.
tests/unit/test_quality_routes.py (1)
74-77: Make forwarded-parameter assertions less brittle.
These assertions rely on positional call shapes (call_args[0][1] and exact positional assert_called_once_with). Prefer asserting named arguments when possible to avoid failures from harmless call-signature refactors.
Suggested assertion hardening
-    mock_load.assert_called_once()
-    call_args = mock_load.call_args
-    assert call_args[0][1] == "dev"
+    mock_load.assert_called_once()
+    _, kwargs = mock_load.call_args
+    assert kwargs.get("branch", "dev") == "dev"
 ...
-    mock_find.assert_called_once_with(mock_graph, 0.9)
+    mock_find.assert_called_once()
+    args, kwargs = mock_find.call_args
+    assert args[0] is mock_graph
+    assert kwargs.get("threshold", args[1] if len(args) > 1 else None) == 0.9
Also applies to: 252-252
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_quality_routes.py` around lines 74 - 77, The test currently asserts forwarded parameters via positional indexing on mock_load (using call_args[0][1] and assert_called_once_with), which is brittle; update the assertions in tests/unit/test_quality_routes.py to check keyword arguments instead (e.g., inspect mock_load.call_args.kwargs or use assert_called_once_with(..., env="dev") with explicit keyword names) so the test validates the forwarded parameter by name rather than by positional index; apply the same change for the other occurrence that asserts positional args.
tests/unit/test_project_service.py (1)
147-149: Use assert_awaited() instead of .called for async mock assertions.
Lines 148-149 (and similarly at lines 223, 268-269, 361) use .called on AsyncMock objects (flush, commit, delete). The .called attribute returns True even if the mock was never awaited, which can miss regressions where async calls aren't properly awaited. Use assert_awaited() or assert_awaited_once() to verify the mocks were actually awaited.
Suggested update for lines 148-149
-    assert mock_db.flush.called
-    assert mock_db.commit.called
+    mock_db.flush.assert_awaited()
+    mock_db.commit.assert_awaited()
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_project_service.py` around lines 147 - 149, Replace boolean `.called` checks on async mocks with awaited-assertions to ensure coroutines were actually awaited: for each AsyncMock in tests (e.g., mock_db.add, mock_db.flush, mock_db.commit, mock_db.delete in test_project_service.py) swap assertions like `assert mock_db.flush.called` / `assert mock_db.commit.called` / `assert mock_db.delete.called` to `mock_db.flush.assert_awaited()` or `mock_db.flush.assert_awaited_once()` (and similarly for `commit`, `delete`, `add`) depending on expected call count; update all occurrences mentioned (around the original checks and at the other noted locations) so async calls are validated with assert_awaited/assert_awaited_once instead of `.called`.
tests/unit/test_notification_routes.py (1)
105-107: Use assert_awaited_once_with(...) for awaited async method calls.
list_notifications is awaited in the route handler (line 29 of ontokit/api/routes/notifications.py), so verify the call with the await-aware assertion rather than the standard call assertion.
Suggested update
-    mock_notification_service.list_notifications.assert_called_once_with(
+    mock_notification_service.list_notifications.assert_awaited_once_with(
         "test-user-id", unread_only=True
     )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_notification_routes.py` around lines 105 - 107, The test currently uses mock_notification_service.list_notifications.assert_called_once_with(...) which is incorrect for an awaited async call; replace that assertion with the await-aware mock_notification_service.list_notifications.assert_awaited_once_with("test-user-id", unread_only=True) so the test verifies the awaited call made by the route handler (the async list_notifications call in the notifications route).
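The distinction these comments rely on can be seen on a bare AsyncMock: .called flips as soon as the coroutine is created, while await_count only moves on an actual await (method name borrowed from the comment above; the mock is standalone):

```python
import asyncio
from unittest.mock import AsyncMock

list_notifications = AsyncMock(return_value=[])

# Calling without awaiting already records the call ...
coro = list_notifications("test-user-id", unread_only=True)
called_without_await = list_notifications.called   # True: misses a lost await
awaited_yet = list_notifications.await_count       # 0: catches it

# ... but the await-aware assertions only pass once the coroutine is awaited
asyncio.run(coro)
list_notifications.assert_awaited_once_with("test-user-id", unread_only=True)
```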
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@ontokit/git/bare_repository.py`:
- Around line 760-763: The loop in get_commits_between currently breaks when it
encounters the first commit whose id is in from_ancestors (inside the for over
self.repo.walk(to_commit.id, pygit2.enums.SortMode.TIME)), which causes valid
commits later in the walk to be dropped; change the control flow so that when
str(commit.id) is in from_ancestors you skip adding that commit (do not append
self._commit_to_info(commit)) and continue the walk instead of breaking,
preserving traversal to collect all commits reachable from to_commit that are
not in from_ancestors.
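The control-flow difference can be reduced to a toy walk (the commit ids are made up; the real code iterates pygit2 commit objects in time order):

```python
def commits_between(walk_order, from_ancestors):
    """Collect commits reachable from `to` that are not ancestors of `from`."""
    result = []
    for commit in walk_order:
        if commit in from_ancestors:
            # `break` here would wrongly drop "c" and "a" below,
            # because shared history interleaves with new commits.
            continue
        result.append(commit)
    return result


# Time-sorted walk from the tip of `to`
history = ["e", "d", "shared-2", "c", "shared-1", "a"]
new_commits = commits_between(history, {"shared-2", "shared-1"})
```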
In `@tests/conftest.py`:
- Around line 181-194: The fixture currently stubs service (Mock
spec=UserService) methods returning plain dicts; change get_user_info and
get_users_info to return actual UserInfo instances (and a mapping of IDs to
UserInfo) to match the UserService contract: import or reference the UserInfo
model and construct UserInfo(...) for service.get_user_info and for each value
in service.get_users_info so tests use the real typed objects expected by
callers (update the AsyncMock return_value for service.get_user_info and
service.get_users_info accordingly).
- Around line 167-175: In mock_github_service(), change verify_webhook_signature
from an AsyncMock to a synchronous Mock since the real
GitHubService.verify_webhook_signature is a regular function: replace
service.verify_webhook_signature = AsyncMock(return_value=True) with
service.verify_webhook_signature = Mock(return_value=True) so tests will fail if
the code incorrectly awaits the synchronous method; keep the rest of mock setup
(mock_github_service, GitHubService) unchanged.
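The failure mode this guards against is easy to reproduce: an AsyncMock silently tolerates a stray await on a sync method, while a plain Mock exposes it (the method name matches the one above; the service object here is a bare Mock for illustration):

```python
import inspect
from unittest.mock import AsyncMock, Mock

service = Mock()

# Wrong: AsyncMock returns a coroutine, so code that incorrectly awaits the
# synchronous method still "works" in tests and hides the bug.
service.verify_webhook_signature = AsyncMock(return_value=True)
as_async = service.verify_webhook_signature(b"payload", "sha256=abc")
async_hides_bug = inspect.isawaitable(as_async)  # True
as_async.close()  # avoid a "coroutine was never awaited" warning

# Right: a sync Mock returns the value directly; `await` on it would raise.
service.verify_webhook_signature = Mock(return_value=True)
as_sync = service.verify_webhook_signature(b"payload", "sha256=abc")
```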
In `@tests/unit/test_notification_routes.py`:
- Around line 11-12: The tests in tests/unit/test_notification_routes.py
currently use synchronous TestClient calls; convert each test function (the ones
between lines 54–154) to async by adding `@pytest.mark.asyncio` and changing def
test_* to async def, replace fastapi.testclient.TestClient usage with
httpx.AsyncClient and httpx.ASGITransport (or AsyncClient(app=app,
transport=ASGITransport(app=app))) and await all requests (e.g., await
client.get/post/...). Update any setup/teardown to use async context managers
(async with AsyncClient(...) as client:) and ensure imports are updated to
import httpx.AsyncClient, httpx.ASGITransport, and pytest.mark.asyncio.
In `@tests/unit/test_ontology_index_service.py`:
- Around line 185-234: The test test_indexes_entities_from_graph in
TestFullReindex uses a weak assertion (assert result > 0); change it to assert
the exact expected count (assert result == 4) to catch regressions—update the
assertion in the test function that calls OntologyIndexService.full_reindex with
sample_graph (which contains Person, Organization, worksFor, hasName) so the
test expects 4 indexed entities.
In `@tests/unit/test_quality_routes.py`:
- Around line 9-10: The tests currently use synchronous TestClient and must be
converted to async: update the authed_client fixture in conftest.py to create
and yield an httpx.AsyncClient with ASGITransport(app=app) and base_url (instead
of fastapi.testclient.TestClient), change imports from
fastapi.testclient/TestClient to from httpx import ASGITransport, AsyncClient,
and mark each test function in tests/unit/test_quality_routes.py as async with
`@pytest.mark.anyio`, replacing synchronous client.get/post calls with await
client.get(...) / await client.post(...); ensure any setup/teardown uses async
context managers (async with AsyncClient(...)) and close the client after tests.
---
Nitpick comments:
In `@ontokit/git/bare_repository.py`:
- Line 159: The blocking calls to self.repo.walk(...) must be moved off the
event loop: change the surrounding methods that call self.repo.walk (e.g., the
methods in BareRepository that iterate commits using self.repo.walk) to async
and perform the revwalk inside asyncio.to_thread so the blocking iteration runs
in a thread (for example: commits = await asyncio.to_thread(lambda:
list(self.repo.walk(self.repo.head.target, pygit2.enums.SortMode.TIME))); then
iterate over that list or otherwise process it off-thread). Apply the same
pattern to the other occurrences of self.repo.walk in this module so all
revwalks are executed via asyncio.to_thread and awaited from async call-sites.
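The wrapping pattern described above can be sketched with the blocking revwalk simulated by a plain function (pygit2 itself is not imported here; the sleep stands in for libgit2 work):

```python
import asyncio
import time


def walk_commits_blocking():
    """Stand-in for `list(self.repo.walk(...))`: blocking work."""
    time.sleep(0.01)  # simulates I/O that would otherwise stall the event loop
    return ["c3", "c2", "c1"]


async def get_history():
    # The revwalk runs in a worker thread; the coroutine only awaits the result,
    # so other requests on the loop keep making progress.
    return await asyncio.to_thread(walk_commits_blocking)


commits = asyncio.run(get_history())
```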
In `@tests/conftest.py`:
- Around line 147-154: The fixture bare_git_repo currently calls
pygit2.init_repository and raw_repo.set_head directly which is redundant;
instead use the repository wrapper to initialize the repo to keep a single init
path. Remove the direct pygit2.init_repository and set_head calls in the
bare_git_repo fixture and construct/initialize the repo via
BareOntologyRepository (or call its init/create method if it exposes one) using
repo_path so the wrapper handles creating a bare repo and setting HEAD to
refs/heads/main.
- Around line 119-127: The fixture sample_project_data currently returns
dict[str, object] which is too loose for strict MyPy; define a precise TypedDict
(e.g., ProjectData or SampleProjectData) describing keys id: uuid.UUID, name:
str, description: str, is_public: bool, owner_id: str and change the return
annotation of sample_project_data to that TypedDict type (ensure TypedDict is
imported from typing and uuid is used for the id field) so tests have a strict
contract under Python 3.11 strict mode.
In `@tests/unit/test_notification_routes.py`:
- Around line 105-107: The test currently uses
mock_notification_service.list_notifications.assert_called_once_with(...) which
is incorrect for an awaited async call; replace that assertion with the
await-aware
mock_notification_service.list_notifications.assert_awaited_once_with("test-user-id",
unread_only=True) so the test verifies the awaited call made by the route
handler (the async list_notifications call in the notifications route).
In `@tests/unit/test_project_service.py`:
- Around line 147-149: Replace boolean `.called` checks on async mocks with
awaited-assertions to ensure coroutines were actually awaited: for each
AsyncMock in tests (e.g., mock_db.add, mock_db.flush, mock_db.commit,
mock_db.delete in test_project_service.py) swap assertions like `assert
mock_db.flush.called` / `assert mock_db.commit.called` / `assert
mock_db.delete.called` to `mock_db.flush.assert_awaited()` or
`mock_db.flush.assert_awaited_once()` (and similarly for `commit`, `delete`,
`add`) depending on expected call count; update all occurrences mentioned
(around the original checks and at the other noted locations) so async calls are
validated with assert_awaited/assert_awaited_once instead of `.called`.
In `@tests/unit/test_quality_routes.py`:
- Around line 74-77: The test currently asserts forwarded parameters via
positional indexing on mock_load (using call_args[0][1] and
assert_called_once_with), which is brittle; update the assertions in
tests/unit/test_quality_routes.py to check keyword arguments instead (e.g.,
inspect mock_load.call_args.kwargs or use assert_called_once_with(...,
env="dev") with explicit keyword names) so the test validates the forwarded
parameter by name rather than by positional index; apply the same change for the
other occurrence that asserts positional args.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7f340254-0f66-4f99-ae5f-38bd58306656
📒 Files selected for processing (26)
- ontokit/collab/presence.py
- ontokit/git/bare_repository.py
- tests/conftest.py
- tests/integration/test_git_operations.py
- tests/unit/conftest.py
- tests/unit/test_auth.py
- tests/unit/test_bare_repository.py
- tests/unit/test_beacon_token.py
- tests/unit/test_collab_presence.py
- tests/unit/test_collab_protocol.py
- tests/unit/test_collab_transform.py
- tests/unit/test_config.py
- tests/unit/test_encryption.py
- tests/unit/test_github_service.py
- tests/unit/test_join_request_service.py
- tests/unit/test_lint_routes.py
- tests/unit/test_linter.py
- tests/unit/test_notification_routes.py
- tests/unit/test_ontology_index_service.py
- tests/unit/test_project_service.py
- tests/unit/test_quality_routes.py
- tests/unit/test_remote_sync_service.py
- tests/unit/test_search.py
- tests/unit/test_user_service.py
- tests/unit/test_user_settings_routes.py
- tests/unit/test_worker.py
✅ Files skipped from review due to trivial changes (11)
- tests/unit/test_config.py
- ontokit/collab/presence.py
- tests/unit/test_linter.py
- tests/unit/test_auth.py
- tests/unit/test_encryption.py
- tests/unit/test_user_settings_routes.py
- tests/unit/test_remote_sync_service.py
- tests/unit/test_beacon_token.py
- tests/unit/test_collab_protocol.py
- tests/unit/test_collab_transform.py
- tests/unit/test_collab_presence.py
🚧 Files skipped from review as they are similar to previous changes (7)
- tests/unit/conftest.py
- tests/integration/test_git_operations.py
- tests/unit/test_github_service.py
- tests/unit/test_bare_repository.py
- tests/unit/test_user_service.py
- tests/unit/test_worker.py
- tests/unit/test_join_request_service.py
service = Mock(spec=UserService)
service.get_user_info = AsyncMock(
    return_value={"id": "test-user-id", "name": "Test User", "email": "test@example.com"}
)
service.get_users_info = AsyncMock(
    return_value={
        "test-user-id": {
            "id": "test-user-id",
            "name": "Test User",
            "email": "test@example.com",
        }
    }
)
service.search_users = AsyncMock(return_value=([], 0))
mock_user_service returns the wrong value types for UserService contract.
UserService.get_user_info() and UserService.get_users_info() are typed around UserInfo, but the fixture returns plain dicts. That can make tests pass while violating real call-site expectations (attribute/model access).
As per coding guidelines, "Enable MyPy strict mode type checking with Python 3.11 target".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/conftest.py` around lines 181 - 194, The fixture currently stubs
service (Mock spec=UserService) methods returning plain dicts; change
get_user_info and get_users_info to return actual UserInfo instances (and a
mapping of IDs to UserInfo) to match the UserService contract: import or
reference the UserInfo model and construct UserInfo(...) for
service.get_user_info and for each value in service.get_users_info so tests use
the real typed objects expected by callers (update the AsyncMock return_value
for service.get_user_info and service.get_users_info accordingly).
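A sketch of the typed fixture (the UserInfo fields are assumed from the dicts above; the real model lives in ontokit and may differ):

```python
import asyncio
from dataclasses import dataclass
from unittest.mock import AsyncMock, Mock


@dataclass
class UserInfo:  # assumed shape; the real model is defined in ontokit
    id: str
    name: str
    email: str


user = UserInfo(id="test-user-id", name="Test User", email="test@example.com")

service = Mock()
service.get_user_info = AsyncMock(return_value=user)
service.get_users_info = AsyncMock(return_value={"test-user-id": user})

# Call sites can now use attribute access, matching the UserService contract
info = asyncio.run(service.get_user_info("test-user-id"))
```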
from fastapi.testclient import TestClient
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify synchronous client usage in this test module
rg -n 'TestClient|client\.(get|post|put|patch|delete)\(' tests/unit/test_notification_routes.py
🏁 Script executed:
# Read the test file to understand context, function signatures, and fixture setup
head -100 tests/unit/test_notification_routes.py | cat -n
🏁 Script executed:
# Check if test functions are async or sync, and examine fixture definitions
rg -n '^\s*@pytest|^\s*def test_|^\s*async def test_|^\s*def .*fixture|def authed_client' tests/unit/test_notification_routes.py
🏁 Script executed:
# Search for authed_client fixture definition
rg -n 'authed_client.*fixture|def authed_client' tests/unit/test_notification_routes.py tests/conftest.py 2>/dev/null
🏁 Script executed:
# Check if there's a conftest.py file that may define shared fixtures
find tests -name "conftest.py" -exec head -50 {} \;
🏁 Script executed:
# Check if the project uses async testing patterns elsewhere
find tests -name "*.py" -type f -exec rg -l "async def test_|AsyncClient|pytest.mark.asyncio" {} \;
🏁 Script executed:
# Check the main routes/app to see if endpoints are async
rg -n "async def |from fastapi import|@app\." ontokit/main.py ontokit/api/routes/notifications.py 2>/dev/null | head -30
🏁 Script executed:
# Check if those test files actually contain async tests or just match the pattern search
rg -n "async def test_|@pytest.mark.asyncio|AsyncClient" tests/unit/test_auth.py tests/unit/test_notification_service.py 2>/dev/null | head -20
🏁 Script executed:
# Verify the notification routes are indeed async
cat -n ontokit/api/routes/notifications.py
Convert route tests to async pattern for guideline compliance.
These tests perform HTTP I/O via synchronous TestClient calls, which violates the coding guideline requiring async/await for all I/O operations. The notification service layer tests already use @pytest.mark.asyncio with async test functions—apply the same pattern here by converting to async tests with httpx.AsyncClient and ASGITransport.
Affected lines: 54–154 (all test methods in the class)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/unit/test_notification_routes.py` around lines 11 - 12, The tests in
tests/unit/test_notification_routes.py currently use synchronous TestClient
calls; convert each test function (the ones between lines 54–154) to async by
adding `@pytest.mark.asyncio` and changing def test_* to async def, replace
fastapi.testclient.TestClient usage with httpx.AsyncClient and
httpx.ASGITransport (or AsyncClient(app=app, transport=ASGITransport(app=app)))
and await all requests (e.g., await client.get/post/...). Update any
setup/teardown to use async context managers (async with AsyncClient(...) as
client:) and ensure imports are updated to import httpx.AsyncClient,
httpx.ASGITransport, and pytest.mark.asyncio.
from fastapi.testclient import TestClient
from rdflib import Graph
🧩 Analysis chain
🌐 Web query:
For FastAPI 0.115.0, what is the recommended approach for async endpoint tests using httpx.AsyncClient and ASGITransport?
💡 Result:
For FastAPI 0.115.0, the recommended pattern is to make the test itself async (e.g. with pytest.mark.anyio) and use httpx.AsyncClient + ASGITransport(app=app) (instead of fastapi.testclient.TestClient, which won’t work inside async tests). Also set a base_url so relative URLs like "/" work. [1][3]
import pytest
from httpx import ASGITransport, AsyncClient
from app.main import app

@pytest.mark.anyio
async def test_root():
    transport = ASGITransport(app=app)
    async with AsyncClient(transport=transport, base_url="http://test") as client:
        r = await client.get("/")
        assert r.status_code == 200

If your app depends on lifespan/startup/shutdown events, AsyncClient does not trigger them; wrap the app with LifespanManager (from asgi-lifespan) when testing. [1][2]
Sources: FastAPI “Async Tests” docs [1], HTTPX ASGITransport docs [2], Starlette async testing notes (base_url + anyio marker) [3].
🏁 Script executed:
# Examine the test file to confirm TestClient usage
head -100 tests/unit/test_quality_routes.pyRepository: CatholicOS/ontokit-api
Length of output: 3724
🏁 Script executed:
# Check FastAPI version and project dependencies
find . -type f \( -name "pyproject.toml" -o -name "setup.py" -o -name "requirements*.txt" \) | head -5Repository: CatholicOS/ontokit-api
Length of output: 82
🏁 Script executed:
# Check if there's any async test configuration in pytest.ini or pyproject.toml
find . -type f \( -name "pytest.ini" -o -name "pyproject.toml" \) -exec grep -l "anyio\|async" {} \;Repository: CatholicOS/ontokit-api
Length of output: 82
🏁 Script executed:
# Check FastAPI version and project configuration
cat pyproject.toml | head -80

Repository: CatholicOS/ontokit-api
Length of output: 2064
🏁 Script executed:
# Look for pytest configuration and anyio settings
grep -A 20 "\[tool.pytest\|anyio\|asyncio" pyproject.toml

Repository: CatholicOS/ontokit-api
Length of output: 665
🏁 Script executed:
# Verify the complete test file structure, especially how TestClient is initialized
wc -l tests/unit/test_quality_routes.py

Repository: CatholicOS/ontokit-api
Length of output: 103
🏁 Script executed:
# Check if there's a conftest.py that defines authed_client fixture
find tests -name "conftest.py" -exec cat {} \;

Repository: CatholicOS/ontokit-api
Length of output: 9079
Migrate tests to use async I/O with httpx.AsyncClient instead of synchronous TestClient.
The coding guideline requires async/await for all I/O operations in Python files. This test file currently uses synchronous TestClient calls (lines 9–10 and throughout) for HTTP I/O. For FastAPI 0.115.0, the recommended pattern is to make tests async with pytest.mark.anyio and use httpx.AsyncClient with ASGITransport:
Example async test pattern
import pytest
from httpx import ASGITransport, AsyncClient
from ontokit.main import app
`@pytest.mark.anyio`
async def test_get_references_success():
transport = ASGITransport(app=app)
async with AsyncClient(transport=transport, base_url="http://test") as client:
response = await client.get(f"/api/v1/projects/{PROJECT_ID}/entities/{iri}/references")
        assert response.status_code == 200

Update the authed_client fixture in conftest.py to return an async client, and convert all test methods to async with the @pytest.mark.anyio decorator.
Applies to all route test methods (lines 41–42, 68–71, 107–108, 137–139, 170–171, 191–193, 220–221, 247–251).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/unit/test_quality_routes.py` around lines 9 - 10, The tests currently
use synchronous TestClient and must be converted to async: update the
authed_client fixture in conftest.py to create and yield an httpx.AsyncClient
with ASGITransport(app=app) and base_url (instead of
fastapi.testclient.TestClient), change imports from
fastapi.testclient/TestClient to from httpx import ASGITransport, AsyncClient,
and mark each test function in tests/unit/test_quality_routes.py as async with
`@pytest.mark.anyio`, replacing synchronous client.get/post calls with await
client.get(...) / await client.post(...); ensure any setup/teardown uses async
context managers (async with AsyncClient(...)) and close the client after tests.
The real method is a synchronous @staticmethod, not async. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The sample graph contains exactly 4 entities (Person, Organization, worksFor, hasName). Assert == 4 instead of > 0 to catch regressions. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The route handler awaits list_notifications, so the assertion should verify the coroutine was actually awaited, not just called. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
ontokit/api/routes/projects.py (1)
541-560: ⚠️ Potential issue | 🟡 Minor

Keep git-backed parse failures on 422 as well.
Line 559 fixes the storage path, but the `load_from_git()` branch above still folds content/parse failures into 503. The same malformed ontology will therefore return different status codes depending on whether it is loaded from git or storage.

Suggested fix
 if git is not None and git.repository_exists(project_id):
     try:
         await ontology.load_from_git(project_id, branch, filename, git)
+    except ValueError as e:
+        raise HTTPException(
+            status_code=status.HTTP_422_UNPROCESSABLE_CONTENT,
+            detail=str(e),
+        ) from e
     except Exception as e:
         raise HTTPException(
             status_code=status.HTTP_503_SERVICE_UNAVAILABLE,
             detail=f"Failed to load ontology from git branch '{branch}': {e}",
         ) from e

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ontokit/api/routes/projects.py` around lines 541 - 560, The git branch currently treats all exceptions from ontology.load_from_git(project_id, branch, filename, git) as 503, causing malformed/parse errors to differ from storage path behavior; update the try/except around load_from_git (the call to ontology.load_from_git) to mirror the storage handling by catching parse/content errors (e.g., ValueError or the specific parse exception your loader raises) and raise HTTPException with status_code=status.HTTP_422_UNPROCESSABLE_CONTENT for those, while keeping a separate except Exception to raise the 503 for other failures.
🧹 Nitpick comments (9)
tests/unit/test_indexed_ontology.py (2)
41-45: Incorrect `__slots__` rationale in fixture setup

Line 43 states `IndexedOntologyService` uses `__slots__`, but it does not. This comment is misleading; direct assignment is sufficient here.

Proposed cleanup
-    # Replace the real OntologyIndexService created by the constructor with an
-    # AsyncMock test double. We use object.__setattr__ because
-    # IndexedOntologyService uses __slots__, which prevents normal attribute
-    # assignment for slot-defined attributes after __init__.
-    object.__setattr__(svc, "index", AsyncMock())
+    # Replace the real OntologyIndexService created by the constructor
+    # with an AsyncMock test double.
+    svc.index = AsyncMock()

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_indexed_ontology.py` around lines 41 - 45, The comment and code use object.__setattr__ to assign svc.index citing __slots__ on IndexedOntologyService which is incorrect; change the assignment to a normal attribute assignment (replace object.__setattr__(svc, "index", AsyncMock()) with svc.index = AsyncMock()) and update the preceding comment to simply state that the real OntologyIndexService is being replaced with an AsyncMock for tests (remove the __slots__ rationale). Ensure references to svc, index, AsyncMock, and IndexedOntologyService are preserved so the intent is clear.
82-93: Fallback tests miss assertion for reindex side effect

In these fallback scenarios, `_enqueue_reindex_if_stale` is mocked but never asserted. Without that check, regressions in fallback behavior can slip through.

Proposed assertion additions
         await service.get_root_tree_nodes(PROJECT_ID, branch=BRANCH)
         mock_ontology_service.get_root_tree_nodes.assert_awaited_once()
+        service._enqueue_reindex_if_stale.assert_awaited_once_with(PROJECT_ID, BRANCH)
@@
         await service.get_root_tree_nodes(PROJECT_ID, branch=BRANCH)
         mock_ontology_service.get_root_tree_nodes.assert_awaited_once()
+        service._enqueue_reindex_if_stale.assert_awaited_once_with(PROJECT_ID, BRANCH)

Also applies to: 118-134
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_indexed_ontology.py` around lines 82 - 93, The fallback tests (e.g., test_falls_back_to_rdflib_when_index_not_ready) mock _enqueue_reindex_if_stale but never assert it was invoked; update the test to assert that service._enqueue_reindex_if_stale.assert_awaited_once() (or assert_awaited_with expected args) after calling service.get_root_tree_nodes to ensure the reindex side-effect is triggered, and apply the same assertion to the other fallback test covering the same behavior.

tests/unit/test_bare_repository_service.py (1)
220-229: Test is correct, consider additional coverage for ordering.

The test is valid for a single-commit scenario. Since `get_history` returns commits newest-first (per context snippet 3), `history[0]` is the initial commit when only one exists.

Consider adding a test that creates multiple commits to verify the ordering behavior; this would catch regressions if the `SortMode.TIME` sorting logic changes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_bare_repository_service.py` around lines 220 - 229, Add a new unit test under TestGetHistory that creates multiple commits via the BareGitRepositoryService test fixture and then calls initialized_service.get_history(project_id, limit=10) to assert the returned list is ordered newest-first; use the same project_id fixture, create at least two additional commits (e.g. via the service's commit or append methods), and then assert history[0].timestamp >= history[1].timestamp and that messages/ids appear in the expected newest-to-oldest order to validate the SortMode.TIME behavior.

tests/unit/test_embedding_service.py (1)
39-46: Add `rollback` to the shared DB fixture for consistent exception-path tests.

Several service paths await `db.rollback()`. Defining it once in `mock_db` avoids per-test setup drift and makes failure-path assertions consistent.

Proposed fix
 def mock_db() -> AsyncMock:
     """Create an async mock of AsyncSession."""
     session = AsyncMock()
     session.commit = AsyncMock()
+    session.rollback = AsyncMock()
     session.execute = AsyncMock()
     session.refresh = AsyncMock()
     session.add = Mock()
     return session

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_embedding_service.py` around lines 39 - 46, The shared AsyncSession mock returned by mock_db() is missing a rollback method used by service error paths; add session.rollback = AsyncMock() inside mock_db (alongside session.commit, session.execute, session.refresh) so tests that exercise exception handling can reliably await db.rollback() without per-test setup.

tests/unit/test_embeddings_routes.py (1)
148-151: Strengthen the generate-success assertion to validate returned job identity.

This test currently passes even if the endpoint returns an incorrect `job_id` value. Assert the exact ID and queue call to catch response/queue wiring regressions.

Proposed fix
         response = client.post(f"/api/v1/projects/{PROJECT_ID}/embeddings/generate")
         assert response.status_code == 202
-        assert "job_id" in response.json()
+        assert response.json()["job_id"] == "embed-job-1"
+        mock_pool.enqueue_job.assert_awaited_once()

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_embeddings_routes.py` around lines 148 - 151, The test should assert the exact job identity and that the queue was invoked with it: after the client.post to f"/api/v1/projects/{PROJECT_ID}/embeddings/generate" extract job_id = response.json()["job_id"], assert job_id == the expected job id value used by your test fixture/mocking, and assert the queue mock (e.g., mock_queue.enqueue or the specific mocked method used in the test) was called once with that job_id and PROJECT_ID (and any expected payload/arguments). Ensure you replace the generic "job_id in response.json()" assertion with these exact-value and mock-call assertions so response and queue wiring regressions are caught.

tests/unit/test_normalization_routes.py (1)
96-98: Consider extracting repeated project-service setup into a small helper fixture.
`mock_project_service.get = AsyncMock(...)` (and `_get_project` where needed) is repeated many times; a tiny helper would trim noise and make test intent easier to scan.

Also applies to: 126-127, 154-154, 176-176, 200-200, 223-223, 241-241, 266-266, 286-286, 309-309, 329-329
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_normalization_routes.py` around lines 96 - 98, Extract the repeated mock setup for mock_project_service.get and mock_project_service._get_project into a small helper (e.g., a fixture or function like setup_mock_project_service or make_mock_project_service_with_get) that accepts the desired return values (defaulting to _make_project_response() and Mock()) and assigns AsyncMock to mock_project_service.get and mock_project_service._get_project accordingly; then replace each direct assignment (mock_project_service.get = AsyncMock(...), mock_project_service._get_project = AsyncMock(...)) in the tests with a single call to that helper to reduce duplication and clarify intent.

tests/unit/test_projects_routes_extended.py (1)
466-474: Assert an actual members-route contract here.

Allowing anything except 404/405 means 401/403/422/500 all count as success, so this can stay green while the handler is broken. Mock `list_members()` and assert the expected payload or auth outcome instead.

Suggested fix
-from ontokit.schemas.project import ProjectResponse
+from ontokit.schemas.project import MemberListResponse, ProjectResponse
 ...
     def test_list_members_route_exists(
         self,
         authed_client: tuple[TestClient, AsyncMock],
+        mock_project_service: AsyncMock,
     ) -> None:
-        """GET /api/v1/projects/{id}/members is reachable (not 404/405)."""
+        """GET /api/v1/projects/{id}/members returns the member list."""
         client, _db = authed_client
+        mock_project_service.list_members = AsyncMock(
+            return_value=MemberListResponse(items=[], total=0)
+        )
         response = client.get(f"/api/v1/projects/{PROJECT_ID}/members")
-        # Route exists; may fail on service layer but not as 404/405
-        assert response.status_code not in (404, 405)
+        assert response.status_code == 200
+        assert response.json() == {"items": [], "total": 0}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_projects_routes_extended.py` around lines 466 - 474, The test test_list_members_route_exists is too permissive (it only asserts status not in 404/405) so mock the service method list_members() and assert the actual contract: patch the project-members service/function used by the route to return a known members list, call client.get(f"/api/v1/projects/{PROJECT_ID}/members") using authed_client, then assert the response.status_code is 200 and the JSON payload matches the mocked members structure (or, alternatively, simulate an auth failure and assert 401/403 accordingly); update assertions to validate both status and response body instead of only excluding 404/405.

tests/unit/test_project_service.py (2)
494-515: Mock the real `transfer_ownership()` follow-up path.

After the first `_get_project()` lookup, this method delegates to `list_members()`. It does not run count/items queries, so the current `side_effect` sequence falls through to generic `MagicMock`s and can pass even if the returned member-list path regresses.

Suggested fix
-        # After commit + refresh, list_members is called — mock its DB results
-        mock_members_result = MagicMock()
-        mock_members_result.scalars.return_value.all.return_value = [admin_member, owner_member]
-        mock_count_result = MagicMock()
-        mock_count_result.scalar_one.return_value = 2
-
-        mock_db.execute.side_effect = [
-            mock_result_project,  # _get_project
-            mock_count_result,  # list_members count
-            mock_members_result,  # list_members items
-        ]
+        mock_db.execute.side_effect = [mock_result_project]
 ...
-        with patch("ontokit.services.user_service.get_user_service") as mock_us:
-            mock_user_svc = MagicMock()
-            mock_user_svc.get_users_info = AsyncMock(return_value={})
-            mock_us.return_value = mock_user_svc
-
-            await service.transfer_ownership(PROJECT_ID, transfer, owner)
+        with patch.object(service, "list_members", new_callable=AsyncMock) as mock_list_members:
+            mock_list_members.return_value = MagicMock()
+            await service.transfer_ownership(PROJECT_ID, transfer, owner)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_project_service.py` around lines 494 - 515, The test's mock_db.execute.side_effect doesn't match transfer_ownership's real follow-up path: after _get_project the service calls list_members (not separate count/items queries), so update the side_effect sequence to return mock_result_project then the single mock result expected by list_members (e.g., [mock_result_project, mock_members_result]) and remove the unused mock_count_result; adjust references around transfer_ownership, _get_project, list_members and mock_db.execute to ensure the mocked DB calls mirror the actual call sequence.
608-745: These `list_accessible()` cases aren't checking the query predicates yet.

Each variant feeds the same canned `scalar`/`execute` results back regardless of the `Select` that was built. For example, the `private` and `mine` cases only assert pagination fields, so bugs in `filter_type`, membership gating, or search escaping would still pass. Please assert against `mock_db.execute.await_args.args[0]` / `mock_db.scalar.await_args_list`, or move this block to a lightweight DB-backed test.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_project_service.py` around lines 608 - 745, The tests call ProjectService.list_accessible but never assert the actual Select that was built, so different filter_type/search cases can pass incorrectly; update the tests (e.g., in test_list_public_filter, test_list_private_filter, test_list_mine_filter, test_list_no_filter, test_list_anonymous_*, test_list_with_search) to inspect mock_db.execute.await_args.args[0] (the Select instance) and mock_db.scalar.await_args_list to verify the query predicates: for "public" assert the Select filters Project.is_public==True, for "private" assert it filters is_public==False and/or includes a membership JOIN/condition, for "mine" assert it includes a membership predicate for user.id, for search assert the WHERE contains the name/description ILIKE predicate (or the corresponding bound parameter), and for anonymous tests assert only public predicate is present; keep existing result assertions but add these Select-level assertions to ensure the service builds correct queries.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/coverage-plan.md`:
- Line 89: The Phase-1 projection text ("Phase 1 alone should reach ~79%") is
arithmetically inconsistent with the baseline and the "~550 statements" recovery
noted earlier; recompute the percentage using the baseline total statements and
the +~550 recovered statements (or explicitly state the baseline number and how
~550 was applied) and update the sentence to the corrected percentage or add a
parenthetical clarifying the assumptions (reference the phrases "baseline",
"~550 statements", "Phase 1", and "Phase 1 alone should reach ~79%") so the
roadmap reflects the correct math.
- Around line 4-6: Update the coverage baseline in the docs by replacing the
stale "**Baseline:** 72% (6891/9571 statements covered, 861 tests)" line with
the current PR state "**Baseline:** 78% (calculate and insert the updated
covered statements/total and tests count from the latest coverage report)" and
verify the "**Target:** 80%" line still makes sense given the new baseline;
ensure the numeric covered statements and the delta to target are recalculated
and reflected in the same format so prioritization and recovery estimates remain
accurate.
- Line 66: The table entry currently references `services/indexed_ontology.py`
but the PR uses `services/ontology_index.py`; update the Phase 2 coverage table
row to use the correct module path `services/ontology_index.py` (and adjust the
coverage numbers if they were tied to the wrong file) so the plan points to the
actual file referenced by functions/classes in the PR such as ontology indexing
logic in `services/ontology_index.py`.
In `@tests/unit/test_embedding_service.py`:
- Around line 256-270: The tests patch the wrong settings symbol so the
functions under test (_get_fernet, _encrypt_secret, _decrypt_secret) still read
the real config; change the patch target from
"ontokit.services.embedding_service.settings" to the actual module where
settings is imported at call time (e.g. "ontokit.core.config.settings") so the
mock_settings.secret_key is used during the test—update all with patch(...)
calls in this test file to point to the correct settings import (referencing
_get_fernet, _encrypt_secret, and _decrypt_secret to locate the tests).
In `@tests/unit/test_lint_routes.py`:
- Around line 10-461: Tests perform synchronous HTTP I/O using TestClient and
regular def tests; convert each test that calls client.get/client.post to async
by changing the test functions to async def and using AsyncClient
(httpx.AsyncClient) for the HTTP client (ensure the authed_client fixture yields
an AsyncClient and async mock session), then await client.get/post calls; update
test classes/method names such as
TestLintRules.test_get_lint_rules_returns_list,
TestTriggerLint.test_trigger_lint_success,
TestTriggerLint.test_trigger_lint_no_ontology_file,
TestTriggerLint.test_trigger_lint_already_running,
TestLintStatus.test_lint_status_no_runs,
TestListLintRuns.test_list_lint_runs_empty,
TestGetLintRun.test_get_lint_run_not_found,
TestGetLintRun.test_get_lint_run_with_issues, TestTriggerLintEnqueueFailure.*
tests, TestListLintRunsWithResults.test_list_lint_runs_with_pagination,
TestGetLintRunDetail.test_get_lint_run_no_issues, and TestGetLintIssues.* to use
async/await (keep existing patch/AsyncMock usage) and ensure all client HTTP
calls are awaited and any fixtures returning clients/sessions are adapted to
async.
---
Outside diff comments:
In `@ontokit/api/routes/projects.py`:
- Around line 541-560: The git branch currently treats all exceptions from
ontology.load_from_git(project_id, branch, filename, git) as 503, causing
malformed/parse errors to differ from storage path behavior; update the
try/except around load_from_git (the call to ontology.load_from_git) to mirror
the storage handling by catching parse/content errors (e.g., ValueError or the
specific parse exception your loader raises) and raise HTTPException with
status_code=status.HTTP_422_UNPROCESSABLE_CONTENT for those, while keeping a
separate except Exception to raise the 503 for other failures.
---
Nitpick comments:
In `@tests/unit/test_bare_repository_service.py`:
- Around line 220-229: Add a new unit test under TestGetHistory that creates
multiple commits via the BareGitRepositoryService test fixture and then calls
initialized_service.get_history(project_id, limit=10) to assert the returned
list is ordered newest-first; use the same project_id fixture, create at least
two additional commits (e.g. via the service's commit or append methods), and
then assert history[0].timestamp >= history[1].timestamp and that messages/ids
appear in the expected newest-to-oldest order to validate the SortMode.TIME
behavior.
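The ordering assertion suggested above boils down to a pairwise timestamp check. A minimal sketch with made-up CommitInfo records and timestamps (the real test would obtain history from the service and create commits through it):

```python
from dataclasses import dataclass


@dataclass
class CommitInfo:
    """Stand-in for whatever commit record get_history() returns."""

    timestamp: int
    message: str


# Simulated history as a SortMode.TIME walk should yield it: newest first.
history = [
    CommitInfo(timestamp=1_700_000_200, message="second commit"),
    CommitInfo(timestamp=1_700_000_100, message="initial commit"),
]

# The assertion pattern: each commit is at least as new as the one after it.
newest_first = all(a.timestamp >= b.timestamp for a, b in zip(history, history[1:]))
```

The `>=` comparison (rather than `>`) tolerates commits created in the same second, which is common in fast test runs.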
In `@tests/unit/test_embedding_service.py`:
- Around line 39-46: The shared AsyncSession mock returned by mock_db() is
missing a rollback method used by service error paths; add session.rollback =
AsyncMock() inside mock_db (alongside session.commit, session.execute,
session.refresh) so tests that exercise exception handling can reliably await
db.rollback() without per-test setup.
In `@tests/unit/test_embeddings_routes.py`:
- Around line 148-151: The test should assert the exact job identity and that
the queue was invoked with it: after the client.post to
f"/api/v1/projects/{PROJECT_ID}/embeddings/generate" extract job_id =
response.json()["job_id"], assert job_id == the expected job id value used by
your test fixture/mocking, and assert the queue mock (e.g., mock_queue.enqueue
or the specific mocked method used in the test) was called once with that job_id
and PROJECT_ID (and any expected payload/arguments). Ensure you replace the
generic "job_id in response.json()" assertion with these exact-value and
mock-call assertions so response and queue wiring regressions are caught.
In `@tests/unit/test_indexed_ontology.py`:
- Around line 41-45: The comment and code use object.__setattr__ to assign
svc.index citing __slots__ on IndexedOntologyService which is incorrect; change
the assignment to a normal attribute assignment (replace object.__setattr__(svc,
"index", AsyncMock()) with svc.index = AsyncMock()) and update the preceding
comment to simply state that the real OntologyIndexService is being replaced
with an AsyncMock for tests (remove the __slots__ rationale). Ensure references
to svc, index, AsyncMock, and IndexedOntologyService are preserved so the intent
is clear.
- Around line 82-93: The fallback tests (e.g.,
test_falls_back_to_rdflib_when_index_not_ready) mock _enqueue_reindex_if_stale
but never assert it was invoked; update the test to assert that
service._enqueue_reindex_if_stale.assert_awaited_once() (or assert_awaited_with
expected args) after calling service.get_root_tree_nodes to ensure the reindex
side-effect is triggered, and apply the same assertion to the other fallback
test covering the same behavior.
In `@tests/unit/test_normalization_routes.py`:
- Around line 96-98: Extract the repeated mock setup for
mock_project_service.get and mock_project_service._get_project into a small
helper (e.g., a fixture or function like setup_mock_project_service or
make_mock_project_service_with_get) that accepts the desired return values
(defaulting to _make_project_response() and Mock()) and assigns AsyncMock to
mock_project_service.get and mock_project_service._get_project accordingly; then
replace each direct assignment (mock_project_service.get = AsyncMock(...),
mock_project_service._get_project = AsyncMock(...)) in the tests with a single
call to that helper to reduce duplication and clarify intent.
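A helper of the kind suggested might look like the following; the name setup_mock_project_service and the default return values are assumptions mirroring the prompt, not existing code:

```python
import asyncio
from unittest.mock import AsyncMock, Mock


def setup_mock_project_service(mock_project_service, project=None, raw=None):
    """Point both lookup paths at canned return values (defaults are plain mocks)."""
    mock_project_service.get = AsyncMock(
        return_value=project if project is not None else Mock()
    )
    mock_project_service._get_project = AsyncMock(
        return_value=raw if raw is not None else Mock()
    )
    return mock_project_service


# Usage: one call replaces the repeated per-test AsyncMock assignments.
svc = setup_mock_project_service(Mock())
resolved = asyncio.run(svc.get("any-project-id"))
```

In the test file this would typically be a pytest fixture or a module-level function, so each test states only the return values it actually cares about.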
In `@tests/unit/test_project_service.py`:
- Around line 494-515: The test's mock_db.execute.side_effect doesn't match
transfer_ownership's real follow-up path: after _get_project the service calls
list_members (not separate count/items queries), so update the side_effect
sequence to return mock_result_project then the single mock result expected by
list_members (e.g., [mock_result_project, mock_members_result]) and remove the
unused mock_count_result; adjust references around transfer_ownership,
_get_project, list_members and mock_db.execute to ensure the mocked DB calls
mirror the actual call sequence.
- Around line 608-745: The tests call ProjectService.list_accessible but never
assert the actual Select that was built, so different filter_type/search cases
can pass incorrectly; update the tests (e.g., in test_list_public_filter,
test_list_private_filter, test_list_mine_filter, test_list_no_filter,
test_list_anonymous_*, test_list_with_search) to inspect
mock_db.execute.await_args.args[0] (the Select instance) and
mock_db.scalar.await_args_list to verify the query predicates: for "public"
assert the Select filters Project.is_public==True, for "private" assert it
filters is_public==False and/or includes a membership JOIN/condition, for "mine"
assert it includes a membership predicate for user.id, for search assert the
WHERE contains the name/description ILIKE predicate (or the corresponding bound
parameter), and for anonymous tests assert only public predicate is present;
keep existing result assertions but add these Select-level assertions to ensure
the service builds correct queries.
In `@tests/unit/test_projects_routes_extended.py`:
- Around line 466-474: The test test_list_members_route_exists is too permissive
(it only asserts status not in 404/405) so mock the service method
list_members() and assert the actual contract: patch the project-members
service/function used by the route to return a known members list, call
client.get(f"/api/v1/projects/{PROJECT_ID}/members") using authed_client, then
assert the response.status_code is 200 and the JSON payload matches the mocked
members structure (or, alternatively, simulate an auth failure and assert
401/403 accordingly); update assertions to validate both status and response
body instead of only excluding 404/405.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 55a81567-d196-4f8a-b76b-421ec348b8e1
📒 Files selected for processing (26)
- docs/coverage-plan.md
- ontokit/api/routes/projects.py
- ontokit/services/project_service.py
- tests/unit/test_analytics_routes.py
- tests/unit/test_bare_repository_service.py
- tests/unit/test_embedding_service.py
- tests/unit/test_embedding_text_builder.py
- tests/unit/test_embeddings_routes.py
- tests/unit/test_github_service.py
- tests/unit/test_github_sync.py
- tests/unit/test_indexed_ontology.py
- tests/unit/test_join_request_service.py
- tests/unit/test_lint_routes.py
- tests/unit/test_normalization_routes.py
- tests/unit/test_normalization_service.py
- tests/unit/test_ontology_extractor.py
- tests/unit/test_ontology_index_service.py
- tests/unit/test_ontology_service_extended.py
- tests/unit/test_pr_routes.py
- tests/unit/test_project_service.py
- tests/unit/test_projects_routes.py
- tests/unit/test_projects_routes_extended.py
- tests/unit/test_pull_request_service.py
- tests/unit/test_remote_sync_service.py
- tests/unit/test_sitemap_notifier.py
- tests/unit/test_suggestion_service.py
✅ Files skipped from review due to trivial changes (10)
- tests/unit/test_embedding_text_builder.py
- tests/unit/test_sitemap_notifier.py
- tests/unit/test_pr_routes.py
- tests/unit/test_github_sync.py
- tests/unit/test_normalization_service.py
- tests/unit/test_ontology_extractor.py
- tests/unit/test_github_service.py
- tests/unit/test_analytics_routes.py
- tests/unit/test_remote_sync_service.py
- tests/unit/test_ontology_index_service.py
🚧 Files skipped from review as they are similar to previous changes (4)
- tests/unit/test_projects_routes.py
- tests/unit/test_ontology_service_extended.py
- tests/unit/test_pull_request_service.py
- tests/unit/test_join_request_service.py
docs/coverage-plan.md
Outdated
| `git/bare_repository.py` | 70% | 150 | 80% | ~55 |
| `worker.py` | 70% | 111 | 80% | ~40 |
| `services/ontology_extractor.py` | 64% | 93 | 80% | ~45 |
| `services/indexed_ontology.py` | 44% | 50 | 80% | ~30 |
Possible module path typo in Phase 2 table.
services/indexed_ontology.py looks inconsistent with the rest of this PR context, which references services/ontology_index.py. If this path is wrong, the plan points effort at the wrong file.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/coverage-plan.md` at line 66, The table entry currently references
`services/indexed_ontology.py` but the PR uses `services/ontology_index.py`;
update the Phase 2 coverage table row to use the correct module path
`services/ontology_index.py` (and adjust the coverage numbers if they were tied
to the wrong file) so the plan points to the actual file referenced by
functions/classes in the PR such as ontology indexing logic in
`services/ontology_index.py`.
tests/unit/test_lint_routes.py:

```python
from fastapi.testclient import TestClient

PROJECT_ID = "12345678-1234-5678-1234-567812345678"
RUN_ID = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"


class TestLintRules:
    """Tests for GET /api/v1/projects/lint/rules (no auth required)."""

    @patch("ontokit.api.routes.lint.get_available_rules")
    def test_get_lint_rules_returns_list(self, mock_rules: MagicMock, client: TestClient) -> None:
        """GET /api/v1/projects/lint/rules returns available rules."""
        mock_rules.return_value = [
            SimpleNamespace(
                rule_id="R001",
                name="Missing label",
                description="Class has no label",
                severity="warning",
            ),
            SimpleNamespace(
                rule_id="R002",
                name="Orphan class",
                description="Class has no parent",
                severity="info",
            ),
        ]

        response = client.get("/api/v1/projects/lint/rules")
        assert response.status_code == 200
        data = response.json()
        assert len(data["rules"]) == 2
        assert data["rules"][0]["rule_id"] == "R001"
        assert data["rules"][1]["severity"] == "info"

    @patch("ontokit.api.routes.lint.get_available_rules")
    def test_get_lint_rules_empty(self, mock_rules: MagicMock, client: TestClient) -> None:
        """Returns empty rules list when no rules are defined."""
        mock_rules.return_value = []

        response = client.get("/api/v1/projects/lint/rules")
        assert response.status_code == 200
        assert response.json()["rules"] == []


class TestTriggerLint:
    """Tests for POST /api/v1/projects/{id}/lint/run."""

    @patch("ontokit.api.routes.lint.get_arq_pool", new_callable=AsyncMock)
    @patch("ontokit.api.routes.lint.verify_project_access", new_callable=AsyncMock)
    def test_trigger_lint_success(
        self,
        mock_access: AsyncMock,
        mock_pool_fn: AsyncMock,
        authed_client: tuple[TestClient, AsyncMock],
    ) -> None:
        """Trigger lint returns 202 with job_id on success."""
        client, mock_session = authed_client

        mock_project = Mock()
        mock_project.source_file_path = "ontology.ttl"
        mock_access.return_value = mock_project

        # No existing running lint
        mock_result = MagicMock()
        mock_result.scalar_one_or_none.return_value = None
        mock_session.execute.return_value = mock_result

        mock_pool = AsyncMock()
        mock_pool.enqueue_job.return_value = Mock(job_id="job-42")
        mock_pool_fn.return_value = mock_pool

        response = client.post(f"/api/v1/projects/{PROJECT_ID}/lint/run")
        assert response.status_code == 202
        data = response.json()
        assert data["job_id"] == "job-42"
        assert data["status"] == "queued"

    @patch("ontokit.api.routes.lint.verify_project_access", new_callable=AsyncMock)
    def test_trigger_lint_no_ontology_file(
        self,
        mock_access: AsyncMock,
        authed_client: tuple[TestClient, AsyncMock],
    ) -> None:
        """Returns 400 when project has no source file."""
        client, _ = authed_client

        mock_project = Mock()
        mock_project.source_file_path = None
        mock_access.return_value = mock_project

        response = client.post(f"/api/v1/projects/{PROJECT_ID}/lint/run")
        assert response.status_code == 400
        assert "no ontology file" in response.json()["detail"].lower()

    @patch("ontokit.api.routes.lint.verify_project_access", new_callable=AsyncMock)
    def test_trigger_lint_already_running(
        self,
        mock_access: AsyncMock,
        authed_client: tuple[TestClient, AsyncMock],
    ) -> None:
        """Returns 409 when a lint run is already in progress."""
        client, mock_session = authed_client

        mock_project = Mock()
        mock_project.source_file_path = "ontology.ttl"
        mock_access.return_value = mock_project

        mock_result = MagicMock()
        mock_result.scalar_one_or_none.return_value = Mock()  # existing run
        mock_session.execute.return_value = mock_result

        response = client.post(f"/api/v1/projects/{PROJECT_ID}/lint/run")
        assert response.status_code == 409
        assert "already in progress" in response.json()["detail"].lower()


class TestLintStatus:
    """Tests for GET /api/v1/projects/{id}/lint/status."""

    @patch("ontokit.api.routes.lint.verify_project_access", new_callable=AsyncMock)
    def test_lint_status_no_runs(
        self,
        mock_access: AsyncMock,
        authed_client: tuple[TestClient, AsyncMock],
    ) -> None:
        """Returns summary with no last_run when no runs exist."""
        client, mock_session = authed_client
        mock_access.return_value = Mock()

        mock_result = MagicMock()
        mock_result.scalar_one_or_none.return_value = None
        mock_session.execute.return_value = mock_result

        response = client.get(f"/api/v1/projects/{PROJECT_ID}/lint/status")
        assert response.status_code == 200
        data = response.json()
        assert data["last_run"] is None
        assert data["total_issues"] == 0


class TestListLintRuns:
    """Tests for GET /api/v1/projects/{id}/lint/runs."""

    @patch("ontokit.api.routes.lint.verify_project_access", new_callable=AsyncMock)
    def test_list_lint_runs_empty(
        self,
        mock_access: AsyncMock,
        authed_client: tuple[TestClient, AsyncMock],
    ) -> None:
        """Returns empty list when no runs exist."""
        client, mock_session = authed_client
        mock_access.return_value = Mock()

        # First call: count query, second call: runs query
        mock_count_result = MagicMock()
        mock_count_result.scalar.return_value = 0

        mock_runs_result = MagicMock()
        mock_runs_result.scalars.return_value.all.return_value = []

        mock_session.execute.side_effect = [mock_count_result, mock_runs_result]

        response = client.get(f"/api/v1/projects/{PROJECT_ID}/lint/runs")
        assert response.status_code == 200
        data = response.json()
        assert data["items"] == []
        assert data["total"] == 0


class TestGetLintRun:
    """Tests for GET /api/v1/projects/{id}/lint/runs/{run_id}."""

    @patch("ontokit.api.routes.lint.verify_project_access", new_callable=AsyncMock)
    def test_get_lint_run_not_found(
        self,
        mock_access: AsyncMock,
        authed_client: tuple[TestClient, AsyncMock],
    ) -> None:
        """Returns 404 when run does not exist."""
        client, mock_session = authed_client
        mock_access.return_value = Mock()

        mock_result = MagicMock()
        mock_result.scalar_one_or_none.return_value = None
        mock_session.execute.return_value = mock_result

        response = client.get(f"/api/v1/projects/{PROJECT_ID}/lint/runs/{RUN_ID}")
        assert response.status_code == 404
        assert "not found" in response.json()["detail"].lower()

    @patch("ontokit.api.routes.lint.verify_project_access", new_callable=AsyncMock)
    def test_get_lint_run_with_issues(
        self,
        mock_access: AsyncMock,
        authed_client: tuple[TestClient, AsyncMock],
    ) -> None:
        """Returns run details with issues when run exists."""
        client, mock_session = authed_client
        mock_access.return_value = Mock()

        run_uuid = UUID(RUN_ID)
        project_uuid = UUID(PROJECT_ID)
        now = datetime.now(UTC)

        mock_run = Mock()
        mock_run.id = run_uuid
        mock_run.project_id = project_uuid
        mock_run.status = "completed"
        mock_run.started_at = now
        mock_run.completed_at = now
        mock_run.issues_found = 1
        mock_run.error_message = None

        mock_issue = Mock()
        mock_issue.id = uuid4()
        mock_issue.run_id = run_uuid
        mock_issue.project_id = project_uuid
        mock_issue.issue_type = "warning"
        mock_issue.rule_id = "R001"
        mock_issue.message = "Missing label"
        mock_issue.subject_iri = "http://example.org/Foo"
        mock_issue.details = None
        mock_issue.created_at = now
        mock_issue.resolved_at = None

        # First execute: get run, second: get issues
        mock_run_result = MagicMock()
        mock_run_result.scalar_one_or_none.return_value = mock_run

        mock_issues_result = MagicMock()
        mock_issues_result.scalars.return_value.all.return_value = [mock_issue]

        mock_session.execute.side_effect = [mock_run_result, mock_issues_result]

        response = client.get(f"/api/v1/projects/{PROJECT_ID}/lint/runs/{RUN_ID}")
        assert response.status_code == 200
        data = response.json()
        assert data["status"] == "completed"
        assert len(data["issues"]) == 1
        assert data["issues"][0]["rule_id"] == "R001"


class TestTriggerLintEnqueueFailure:
    """Additional trigger lint tests."""

    @patch("ontokit.api.routes.lint.get_arq_pool", new_callable=AsyncMock)
    @patch("ontokit.api.routes.lint.verify_project_access", new_callable=AsyncMock)
    def test_trigger_lint_enqueue_returns_none(
        self,
        mock_access: AsyncMock,
        mock_pool_fn: AsyncMock,
        authed_client: tuple[TestClient, AsyncMock],
    ) -> None:
        """Returns 500 when enqueue_job returns None."""
        client, mock_session = authed_client

        mock_project = Mock()
        mock_project.source_file_path = "ontology.ttl"
        mock_access.return_value = mock_project

        mock_result = MagicMock()
        mock_result.scalar_one_or_none.return_value = None
        mock_session.execute.return_value = mock_result

        mock_pool = AsyncMock()
        mock_pool.enqueue_job.return_value = None
        mock_pool_fn.return_value = mock_pool

        response = client.post(f"/api/v1/projects/{PROJECT_ID}/lint/run")
        assert response.status_code == 500
        assert "failed to enqueue" in response.json()["detail"].lower()

    @patch("ontokit.api.routes.lint.get_arq_pool", new_callable=AsyncMock)
    @patch("ontokit.api.routes.lint.verify_project_access", new_callable=AsyncMock)
    def test_trigger_lint_enqueue_exception(
        self,
        mock_access: AsyncMock,
        mock_pool_fn: AsyncMock,
        authed_client: tuple[TestClient, AsyncMock],
    ) -> None:
        """Returns 500 when enqueue_job raises an exception."""
        client, mock_session = authed_client

        mock_project = Mock()
        mock_project.source_file_path = "ontology.ttl"
        mock_access.return_value = mock_project

        mock_result = MagicMock()
        mock_result.scalar_one_or_none.return_value = None
        mock_session.execute.return_value = mock_result

        mock_pool_fn.side_effect = RuntimeError("Redis down")

        response = client.post(f"/api/v1/projects/{PROJECT_ID}/lint/run")
        assert response.status_code == 500
        assert "failed to start lint" in response.json()["detail"].lower()


class TestListLintRunsWithResults:
    """Additional tests for GET /api/v1/projects/{id}/lint/runs."""

    @patch("ontokit.api.routes.lint.verify_project_access", new_callable=AsyncMock)
    def test_list_lint_runs_with_pagination(
        self,
        mock_access: AsyncMock,
        authed_client: tuple[TestClient, AsyncMock],
    ) -> None:
        """Returns paginated list when runs exist."""
        client, mock_session = authed_client
        mock_access.return_value = Mock()

        now = datetime.now(UTC)
        mock_run = Mock()
        mock_run.id = UUID(RUN_ID)
        mock_run.project_id = UUID(PROJECT_ID)
        mock_run.status = "completed"
        mock_run.started_at = now
        mock_run.completed_at = now
        mock_run.issues_found = 3
        mock_run.error_message = None

        mock_count_result = MagicMock()
        mock_count_result.scalar.return_value = 1

        mock_runs_result = MagicMock()
        mock_runs_result.scalars.return_value.all.return_value = [mock_run]

        mock_session.execute.side_effect = [mock_count_result, mock_runs_result]

        response = client.get(f"/api/v1/projects/{PROJECT_ID}/lint/runs?skip=0&limit=10")
        assert response.status_code == 200
        data = response.json()
        assert data["total"] == 1
        assert len(data["items"]) == 1
        assert data["items"][0]["issues_found"] == 3
        assert data["skip"] == 0
        assert data["limit"] == 10


class TestGetLintRunDetail:
    """Additional tests for GET /api/v1/projects/{id}/lint/runs/{run_id}."""

    @patch("ontokit.api.routes.lint.verify_project_access", new_callable=AsyncMock)
    def test_get_lint_run_no_issues(
        self,
        mock_access: AsyncMock,
        authed_client: tuple[TestClient, AsyncMock],
    ) -> None:
        """Returns run detail with empty issues list when run has no issues."""
        client, mock_session = authed_client
        mock_access.return_value = Mock()

        now = datetime.now(UTC)
        mock_run = Mock()
        mock_run.id = UUID(RUN_ID)
        mock_run.project_id = UUID(PROJECT_ID)
        mock_run.status = "completed"
        mock_run.started_at = now
        mock_run.completed_at = now
        mock_run.issues_found = 0
        mock_run.error_message = None

        mock_run_result = MagicMock()
        mock_run_result.scalar_one_or_none.return_value = mock_run

        mock_issues_result = MagicMock()
        mock_issues_result.scalars.return_value.all.return_value = []

        mock_session.execute.side_effect = [mock_run_result, mock_issues_result]

        response = client.get(f"/api/v1/projects/{PROJECT_ID}/lint/runs/{RUN_ID}")
        assert response.status_code == 200
        data = response.json()
        assert data["issues_found"] == 0
        assert data["issues"] == []


class TestGetLintIssues:
    """Tests for GET /api/v1/projects/{id}/lint/issues."""

    @patch("ontokit.api.routes.lint.verify_project_access", new_callable=AsyncMock)
    def test_get_issues_no_completed_run(
        self,
        mock_access: AsyncMock,
        authed_client: tuple[TestClient, AsyncMock],
    ) -> None:
        """Returns empty list when no completed run exists."""
        client, mock_session = authed_client
        mock_access.return_value = Mock()

        mock_result = MagicMock()
        mock_result.scalar_one_or_none.return_value = None
        mock_session.execute.return_value = mock_result

        response = client.get(f"/api/v1/projects/{PROJECT_ID}/lint/issues")
        assert response.status_code == 200
        data = response.json()
        assert data["items"] == []
        assert data["total"] == 0

    @patch("ontokit.api.routes.lint.verify_project_access", new_callable=AsyncMock)
    def test_get_issues_with_type_filter(
        self,
        mock_access: AsyncMock,
        authed_client: tuple[TestClient, AsyncMock],
    ) -> None:
        """Returns filtered issues when issue_type query param is provided."""
        client, mock_session = authed_client
        mock_access.return_value = Mock()

        now = datetime.now(UTC)
        project_uuid = UUID(PROJECT_ID)
        run_uuid = UUID(RUN_ID)

        mock_run = Mock()
        mock_run.id = run_uuid
        mock_run.status = "completed"

        mock_issue = Mock()
        mock_issue.id = uuid4()
        mock_issue.run_id = run_uuid
        mock_issue.project_id = project_uuid
        mock_issue.issue_type = "error"
        mock_issue.rule_id = "R010"
        mock_issue.message = "Cyclic dependency"
        mock_issue.subject_iri = "http://example.org/Bar"
        mock_issue.details = None
        mock_issue.created_at = now
        mock_issue.resolved_at = None

        # 1st: find last completed run, 2nd: count, 3rd: issues
        mock_run_result = MagicMock()
        mock_run_result.scalar_one_or_none.return_value = mock_run

        mock_count_result = MagicMock()
        mock_count_result.scalar.return_value = 1

        mock_issues_result = MagicMock()
        mock_issues_result.scalars.return_value.all.return_value = [mock_issue]

        mock_session.execute.side_effect = [
            mock_run_result,
            mock_count_result,
            mock_issues_result,
        ]

        response = client.get(f"/api/v1/projects/{PROJECT_ID}/lint/issues?issue_type=error")
        assert response.status_code == 200
        data = response.json()
        assert data["total"] == 1
        assert data["items"][0]["issue_type"] == "error"
        assert data["items"][0]["rule_id"] == "R010"
```
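A recurring pattern in the lint-route tests above is feeding the mocked `AsyncSession` a queue of result objects through `side_effect`, one per expected `session.execute` call. A stripped-down, self-contained sketch of that pattern (the `fetch_run_with_issues` helper is illustrative, not the actual route code):

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock


async def fetch_run_with_issues(session):
    # Hypothetical service logic: first query loads the run,
    # second query loads its issues.
    run = (await session.execute("SELECT run")).scalar_one_or_none()
    issues = (await session.execute("SELECT issues")).scalars().all()
    return run, issues


def make_result(scalar=None, rows=()):
    """Build a mock result supporting both access styles used above."""
    result = MagicMock()
    result.scalar_one_or_none.return_value = scalar
    result.scalars.return_value.all.return_value = list(rows)
    return result


session = AsyncMock()
# One result per execute() call, consumed strictly in order.
session.execute.side_effect = [
    make_result(scalar="run-1"),
    make_result(rows=["issue-a", "issue-b"]),
]

run, issues = asyncio.run(fetch_run_with_issues(session))
assert run == "run-1"
assert issues == ["issue-a", "issue-b"]
```

If the code under test issues more `execute` calls than the list provides, the mock raises `StopIteration`, which is a useful early failure signal when a route's query sequence changes.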
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Verify sync client usage in this file and inspect client fixture types used by tests.
rg -n --type=py '\bTestClient\b|client\.(get|post)\(' tests/unit/test_lint_routes.py
fd -i 'conftest.py' tests | xargs rg -n --type=py '\b(TestClient|AsyncClient)\b|def\s+authed_client|def\s+client'
```

Repository: CatholicOS/ontokit-api
Length of output: 1799
**Convert test functions to async with AsyncClient for HTTP I/O operations**

All tests currently use a synchronous `TestClient` with synchronous test methods. Per coding guidelines, all I/O operations must use async/await. The HTTP client calls (`client.get`, `client.post`) are synchronous I/O and should be converted to async patterns using `AsyncClient` from httpx and `async def` test functions.

Migration pattern:

```diff
-from fastapi.testclient import TestClient
+from httpx import AsyncClient
+import pytest

-    def test_get_lint_rules_returns_list(self, mock_rules: MagicMock, client: TestClient) -> None:
+    @pytest.mark.asyncio
+    async def test_get_lint_rules_returns_list(
+        self, mock_rules: MagicMock, client: AsyncClient
+    ) -> None:
         ...
-        response = client.get("/api/v1/projects/lint/rules")
+        response = await client.get("/api/v1/projects/lint/rules")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/unit/test_lint_routes.py` around lines 10 - 461, Tests perform
synchronous HTTP I/O using TestClient and regular def tests; convert each test
that calls client.get/client.post to async by changing the test functions to
async def and using AsyncClient (httpx.AsyncClient) for the HTTP client (ensure
the authed_client fixture yields an AsyncClient and async mock session), then
await client.get/post calls; update test classes/method names such as
TestLintRules.test_get_lint_rules_returns_list,
TestTriggerLint.test_trigger_lint_success,
TestTriggerLint.test_trigger_lint_no_ontology_file,
TestTriggerLint.test_trigger_lint_already_running,
TestLintStatus.test_lint_status_no_runs,
TestListLintRuns.test_list_lint_runs_empty,
TestGetLintRun.test_get_lint_run_not_found,
TestGetLintRun.test_get_lint_run_with_issues, TestTriggerLintEnqueueFailure.*
tests, TestListLintRunsWithResults.test_list_lint_runs_with_pagination,
TestGetLintRunDetail.test_get_lint_run_no_issues, and TestGetLintIssues.* to use
async/await (keep existing patch/AsyncMock usage) and ensure all client HTTP
calls are awaited and any fixtures returning clients/sessions are adapted to
async.
- Fix patch target for `_get_fernet` in test_embedding_service.py (use `ontokit.core.config.settings`)
- Fix test_indexed_ontology.py: remove incorrect `__slots__` comment, add `type: ignore[method-assign]` for mock assignments
- Fix test_embeddings_routes.py: correct job_id assertion (route generates its own UUID, not from ARQ)
- Fix transfer_ownership mock sequence in test_project_service.py (list_members calls `_get_project` once)
- Add test_get_history_newest_first to test_bare_repository_service.py
- Improve TestListMembers to assert 200 + response body by overriding `get_current_user_with_token`
- Extract `_setup_project_mock` helper in test_normalization_routes.py to reduce duplication
- Add ValueError → 422 handler in projects route for load_from_git parse errors
- Restrict CI push trigger to [main, dev] branches to eliminate duplicate runs on PRs

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Covers _sync_merge_commits_to_prs, update_comment, delete_comment, list_branches, create_branch, switch_branch, GitHub sync paths for close/reopen/merge, review notifications, and GitHub integration CRUD. Overall coverage: 80% (1019 tests passing). Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Actionable comments posted: 6
🧹 Nitpick comments (3)
tests/unit/test_pull_request_service_extended.py (1)
**405-435**: Missing assertion for GitHub sync call. The test claims to verify that `close_pull_request` "syncs to GitHub when github_pr_number is set," but it only asserts the PR status change. Consider adding an assertion to verify the GitHub service was actually called:

```diff
 await service.close_pull_request(PROJECT_ID, 1, user)
 assert pr.status == PRStatus.CLOSED.value
+mock_github_service.close_pull_request.assert_awaited_once()
```

The same applies to `test_reopen_pr_with_github_pr_number_syncs` at lines 473-474.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_pull_request_service_extended.py` around lines 405-435: the test doesn't assert that the GitHub sync actually occurred; update `TestClosePullRequestGitHubSync.test_close_pr_with_github_pr_number_syncs` to assert that `mock_github_service.close_pull_request` was awaited once with the expected repository owner, repo name, and PR number (e.g. use `mock_github_service.close_pull_request.assert_awaited_once_with(integration.repo_owner, integration.repo_name, pr.github_pr_number, ...)` or the exact parameter list your service uses). Do the same in `test_reopen_pr_with_github_pr_number_syncs`, asserting `mock_github_service.reopen_pull_request` (or the appropriate method) was awaited with the same repo owner/repo name/PR number parameters.

tests/unit/test_normalization_routes.py (1)
**30-33**: Minor: mocking private method `_get_project`. Setting up `mock_svc._get_project` tests implementation details rather than the public API. If the internal implementation changes, these tests may break even though the public behavior remains correct. Consider whether this is necessary or if tests can rely solely on the public `.get` method.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_normalization_routes.py` around lines 30-33: the test helper `_setup_project_mock` currently mocks the private method `mock_svc._get_project`, which couples tests to implementation; remove the line assigning `mock_svc._get_project` and instead ensure tests rely only on the public async method `mock_svc.get` (already set to return `_make_project_response(user_role)`); if some tests depend on behavior produced by `_get_project`, replace that by adjusting `mock_svc.get`'s return_value or side_effect to produce the needed project object, or patch the public entry points (e.g. `mock_svc.get`) used by the code under test rather than the private `_get_project`.

tests/unit/test_projects_routes_extended.py (1)
**466-506**: Consider extracting auth override to a fixture. The manual override of `get_current_user_with_token` with try/finally cleanup works correctly, but extracting this pattern into a reusable fixture would improve consistency with the other service fixtures and reduce test boilerplate.

Example fixture extraction:

```python
@pytest.fixture
def mock_user_with_token() -> Generator[CurrentUser, None, None]:
    """Override get_current_user_with_token for tests requiring token access."""
    user = CurrentUser(
        id="test-user-id",
        email="test@example.com",
        name="Test User",
        username="testuser",
        roles=["owner"],
    )

    async def _override() -> tuple[CurrentUser, str]:
        return user, "test-token"

    app.dependency_overrides[get_current_user_with_token] = _override
    try:
        yield user
    finally:
        app.dependency_overrides.pop(get_current_user_with_token, None)
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_projects_routes_extended.py` around lines 466-506: extract the manual dependency override into a reusable pytest fixture (e.g. `mock_user_with_token`) that sets `app.dependency_overrides[get_current_user_with_token]` to an async override returning `(CurrentUser, "test-token")` and ensures cleanup by popping the override in a finally block; then update `TestListMembers.test_list_members_returns_200` to accept the fixture (remove the try/finally override block) and use the yielded `CurrentUser` where needed, so the test reuses the fixture for consistent auth setup.
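The missing-assertion nitpick above boils down to `AsyncMock`'s awaited-call assertions; a self-contained illustration (the `github` service object and its argument list are stand-ins, not the real service signature):

```python
import asyncio
from unittest.mock import AsyncMock

# Hypothetical stand-in for the mocked GitHub service in these tests.
github = AsyncMock()


async def close_pr():
    # The service under test would make this call on its GitHub sync path.
    await github.close_pull_request("owner", "repo", 42)


asyncio.run(close_pr())

# Passes only if the coroutine was awaited exactly once with these arguments;
# a test that merely checks PR status would miss a silently skipped sync.
github.close_pull_request.assert_awaited_once_with("owner", "repo", 42)
```

Note the distinction from `assert_called_once_with`: `assert_awaited_once_with` also fails if the coroutine was created but never awaited, which is exactly the regression the review is guarding against.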
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/coverage-plan.md`:
- Around line 18-35: The document has inconsistent recovery counts for Phase 1:
"~170 statements recoverable" vs "Covering ~155 of the 305 missed statements
reaches 80%"; update the Phase 1 text so the two numbers match and clarify
intent—either change the "~170" mentions to "~155" to reflect the precise 80%
target, or explicitly label "~170" as a stretch goal (e.g., "stretch goal: ~170
statements") and keep "~155" as the required target; make the change near the
Phase 1 heading and the pull_request_service.py bullets to ensure both the
summary and the table match the chosen value.
In `@tests/unit/test_embedding_service.py`:
- Around line 580-621: The test test_embed_project_failure_marks_job_failed
should also assert that the job was transitioned to failed and the error was
recorded: after awaiting service.embed_project(PROJECT_ID, BRANCH, job_id) and
rollback assertion, inspect mock_db.execute calls
(mock_db.execute.call_args_list[-1]) or the mutated existing_job object and
assert existing_job.status == "failed" and that existing_job.error (or the
SQL/params in the last execute call) contains the exception message; update the
test to check either the final execute call performs an UPDATE to set
status="failed" and error, or directly assert fields on existing_job so the
failure-status update cannot regress.
In `@tests/unit/test_indexed_ontology.py`:
- Around line 65-70: The test line assigning the AsyncMock to
service.index.is_index_ready in test_returns_false_on_exception exceeds 100
chars; split the assignment across multiple lines (e.g., put the AsyncMock(...)
call on a new line or use a temporary variable) so the line length is <=100
while keeping the same AsyncMock(side_effect=Exception("table not found"))
behavior and the rest of the test (calling service._should_use_index and
asserting False) unchanged.
- Around line 109-122: The test has a line exceeding 100 chars where
service.index.get_root_classes is set to an AsyncMock with side_effect; split
that long statement across multiple lines (for example, assign the side effect
RuntimeError to a variable or break the AsyncMock(...) call into multiple
indented lines) so the call in test_falls_back_when_index_query_fails stays
under 100 characters while preserving the
AsyncMock(side_effect=RuntimeError("query failed")) behavior and keeping
references to service.index.get_root_classes and
test_falls_back_when_index_fails unchanged.
In `@tests/unit/test_project_service.py`:
- Around line 1771-1786: The first assignment to mock_db.execute.side_effect
(the list containing mock_result_project, mock_no_token, mock_no_token) is dead
code because it is immediately overwritten by the later assignment; remove the
initial assignment and ensure a single side_effect list is used that includes
the project fetchs and the final mock_members_result (i.e., combine the intended
sequence: mock_result_project, mock_no_token, mock_no_token,
mock_members_result) so mock_db.execute, mock_result_project, mock_no_token, and
mock_members_result are used in the correct order.
- Around line 503-506: Tests are patching the function at its definition module
instead of where it's imported; change the patch target for get_user_service in
the tests from "ontokit.services.user_service.get_user_service" to
"ontokit.services.project_service.get_user_service" for all occurrences (the
ones creating mock_us/mock_user_svc and setting mock_user_svc.get_users_info =
AsyncMock(...)) so the project_service code uses the patched object when calling
get_user_service.
---
Nitpick comments:
In `@tests/unit/test_normalization_routes.py`:
- Around line 30-33: The test helper _setup_project_mock currently mocks the
private method mock_svc._get_project which couples tests to implementation;
remove the line assigning mock_svc._get_project and instead ensure tests rely
only on the public async method mock_svc.get (already set to return
_make_project_response(user_role)); if some tests depend on behavior produced by
_get_project, replace that by adjusting mock_svc.get's return_value or
side_effect to produce the needed project object or patch the public entry
points (e.g., mock_svc.get) used by the code under test rather than the private
_get_project.
In `@tests/unit/test_projects_routes_extended.py`:
- Around line 466-506: Extract the manual dependency override into a reusable
pytest fixture (e.g., mock_user_with_token) that sets
app.dependency_overrides[get_current_user_with_token] to an async override
returning (CurrentUser, "test-token") and ensures cleanup by popping the
override in a finally block; then update
TestListMembers.test_list_members_returns_200 to accept the fixture (remove the
try/finally override block) and use the yielded CurrentUser where needed so the
test reuses the fixture for consistent auth setup.
In `@tests/unit/test_pull_request_service_extended.py`:
- Around line 405-435: The test doesn't assert that the GitHub sync actually
occurred; update
TestClosePullRequestGitHubSync.test_close_pr_with_github_pr_number_syncs to
assert that mock_github_service.close_pull_request was awaited once with the
expected repository owner, repo name and the PR number (e.g. use
mock_github_service.close_pull_request.assert_awaited_once_with(integration.repo_owner,
integration.repo_name, pr.github_pr_number, ... ) or the exact parameter list
your service uses). Do the same in test_reopen_pr_with_github_pr_number_syncs to
assert mock_github_service.reopen_pull_request (or the appropriate method) was
awaited with the same repo owner/repo name/PR number parameters.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0e5994d4-81cc-4527-822e-a17b92385609
📒 Files selected for processing (11)
- .github/workflows/release.yml
- docs/coverage-plan.md
- ontokit/api/routes/projects.py
- tests/unit/test_bare_repository_service.py
- tests/unit/test_embedding_service.py
- tests/unit/test_embeddings_routes.py
- tests/unit/test_indexed_ontology.py
- tests/unit/test_normalization_routes.py
- tests/unit/test_project_service.py
- tests/unit/test_projects_routes_extended.py
- tests/unit/test_pull_request_service_extended.py
🚧 Files skipped from review as they are similar to previous changes (2)
- tests/unit/test_embeddings_routes.py
- tests/unit/test_bare_repository_service.py
docs/coverage-plan.md:

## Phase 1 — Remaining (~170 statements recoverable)

| File | Current | Missed | Target | To Recover |
|------|---------|--------|--------|------------|
| `services/pull_request_service.py` | 56% | 305 | 80% | ~170 |

### pull_request_service.py (56% → 80%)

- [ ] `create_pull_request()` — creation with validation
- [ ] `merge_pull_request()` — merge strategies
- [ ] `close_pull_request()`, `reopen_pull_request()`
- [ ] Review CRUD: `create_review()`, `list_reviews()`
- [ ] Comment CRUD: `create_comment()`, `list_comments()`, `update_comment()`, `delete_comment()`
- [ ] Branch management: `list_branches()`, `create_branch()`
- [ ] GitHub integration: `create_github_integration()`, `update_github_integration()`, `delete_github_integration()`
- [ ] Webhook handlers: `handle_github_pr_webhook()`, `handle_github_review_webhook()`, `handle_github_push_webhook()`
- [ ] PR settings: `get_pr_settings()`, `update_pr_settings()`

Covering ~155 of the 305 missed statements reaches 80% overall.
Reconcile inconsistent statement recovery targets.
Lines 18 and 22 claim "~170 statements recoverable" for the remaining Phase 1 work, but line 35 states "Covering ~155 of the 305 missed statements reaches 80% overall," which aligns with the overall gap of ~155 from line 6. This discrepancy creates ambiguity about the actual work needed.
Please clarify whether:
- ~170 represents a stretch goal beyond 80%, or
- ~155 is the precise target and lines 18/22 should be updated accordingly.
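The arithmetic behind the two figures can be checked directly. Assuming the table's numbers (56% covered, 305 missed) describe the file itself, the per-file cost of reaching 80% lands near ~170, while ~155 plausibly refers to the repo-wide gap — consistent with the reconciliation question above:

```python
# Assumed from the Phase 1 table: 56% coverage, 305 missed statements.
missed = 305
coverage = 0.56

total = round(missed / (1 - coverage))   # statements in the file (~693)
covered = total - missed                 # statements currently covered (~388)

# Newly covered statements needed for the *file* alone to reach 80%.
needed_for_file = round(0.80 * total) - covered

print(total, covered, needed_for_file)
```

This yields roughly 166 statements for the file, which rounds to the plan's "~170" — supporting the reading that ~170 is the per-file figure and ~155 the repo-wide one, so the fix may be a matter of labeling rather than changing either number.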
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/coverage-plan.md` around lines 18 - 35, The document has inconsistent
recovery counts for Phase 1: "~170 statements recoverable" vs "Covering ~155 of
the 305 missed statements reaches 80%"; update the Phase 1 text so the two
numbers match and clarify intent—either change the "~170" mentions to "~155" to
reflect the precise 80% target, or explicitly label "~170" as a stretch goal
(e.g., "stretch goal: ~170 statements") and keep "~155" as the required target;
make the change near the Phase 1 heading and the pull_request_service.py bullets
to ensure both the summary and the table match the chosen value.
```python
with patch("ontokit.services.user_service.get_user_service") as mock_us:
    mock_user_svc = MagicMock()
    mock_user_svc.get_users_info = AsyncMock(return_value={})
    mock_us.return_value = mock_user_svc
```
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Check how get_user_service is imported in project_service.py
rg -n "from.*user_service.*import|import.*user_service" ontokit/services/project_service.py
# Find all occurrences of this patch pattern in the test file
rg -n 'patch\("ontokit\.services\.user_service\.get_user_service' tests/unit/test_project_service.py
```

Repository: CatholicOS/ontokit-api
Length of output: 623
🏁 Script executed:

```shell
# Check the test functions at lines 503, 764, 1515 to see what they're testing
sed -n '485,520p' tests/unit/test_project_service.py
sed -n '750,780p' tests/unit/test_project_service.py
sed -n '1500,1535p' tests/unit/test_project_service.py
```

Repository: CatholicOS/ontokit-api
Length of output: 4316
Patch target is incorrect in all four test locations — patch where the function is imported, not where it's defined.

The function `get_user_service` is imported into `project_service.py` via `from ontokit.services.user_service import get_user_service`. When code in `project_service` calls this function, it looks it up in the `project_service` namespace, not in `user_service`. The patch target must match where the lookup occurs.

Update all four occurrences (lines 367, 503, 764, 1515) from:

```python
patch("ontokit.services.user_service.get_user_service")
```

to:

```python
patch("ontokit.services.project_service.get_user_service")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/unit/test_project_service.py` around lines 503 - 506, Tests are
patching the function at its definition module instead of where it's imported;
change the patch target for get_user_service in the tests from
"ontokit.services.user_service.get_user_service" to
"ontokit.services.project_service.get_user_service" for all occurrences (the
ones creating mock_us/mock_user_svc and setting mock_user_svc.get_users_info =
AsyncMock(...)) so the project_service code uses the patched object when calling
get_user_service.
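The namespace-binding behavior behind this fix can be demonstrated with a minimal, self-contained sketch. The `defs` and `uses` modules below are synthetic stand-ins (not the real ontokit modules) standing in for `user_service` and `project_service`:

```python
# Minimal sketch of why patch() must target the lookup site, not the
# definition site. "defs" and "uses" are synthetic in-memory modules.
import sys
import types
from unittest.mock import patch

defs = types.ModuleType("defs")
defs.get_user_service = lambda: "real"
sys.modules["defs"] = defs

uses = types.ModuleType("uses")
# Equivalent to `from defs import get_user_service` at import time:
# the name is copied into the importing module's own namespace.
uses.get_user_service = defs.get_user_service
uses.caller = lambda: uses.get_user_service()
sys.modules["uses"] = uses

# Patching the definition site does NOT affect the copied binding.
with patch("defs.get_user_service", return_value="mocked"):
    wrong_target = uses.caller()   # still calls the real function

# Patching where the name is looked up intercepts the call.
with patch("uses.get_user_service", return_value="mocked"):
    right_target = uses.caller()

print(wrong_target, right_target)
```

This is the same "where to patch" rule documented for `unittest.mock`: patch the name in the module that uses it.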
- test_indexed_ontology: split long lines >100 chars for ruff compliance
- test_pull_request_service_extended: fix GitHub sync tests to actually reach the sync path by providing a valid token_row; assert close/reopen_pull_request called with correct args
- test_project_service: remove dead side_effect assignment that was immediately overwritten
- test_embedding_service: assert failure UPDATE execute and commit after rollback in embed_project failure test

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Actionable comments posted: 5
♻️ Duplicate comments (2)
tests/unit/test_embedding_service.py (1)
580-625: ⚠️ Potential issue | 🟠 Major

Assert the failed status/error payload, not only call count

Line 583 says this test verifies failed status + error, but current assertions only check rollback/commit and `execute` count. Please also assert the final failure update actually includes `status="failed"` and the error text (e.g., from the last `mock_db.execute` call args/params, or from a mutated `existing_job` object).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_embedding_service.py` around lines 580 - 625, The test test_embed_project_failure_marks_job_failed currently only checks rollback/commit and execute call count; update it to also assert the final failure update includes status='failed' and the error message: after invoking await service.embed_project(...), inspect mock_db.execute.call_args_list[2] (or the mutated existing_job) and assert the SQL/params or object fields contain status="failed" and the exception text (or a substring of it) so the test verifies the actual failure payload is written; keep existing assertions for rollback/commit and call count.

tests/unit/test_project_service.py (1)
367-380: ⚠️ Potential issue | 🟠 Major

Incorrect patch target: mock won't be used by the code under test.

The patch targets `ontokit.services.user_service.get_user_service`, but `project_service.py` imports the function with `from ontokit.services.user_service import get_user_service`. When `add_member` calls `get_user_service()`, it looks up the name in `project_service`'s namespace, not in `user_service`. The patch must target where the lookup occurs.

This same issue exists at lines 503, 764, and 1515.

Proposed fix for all four occurrences

```diff
- with patch("ontokit.services.user_service.get_user_service") as mock_us:
+ with patch("ontokit.services.project_service.get_user_service") as mock_us:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_project_service.py` around lines 367 - 380, The patch currently targets ontokit.services.user_service.get_user_service but project_service imports get_user_service into its own namespace, so update the patch target to the symbol used by the code under test (patch "ontokit.services.project_service.get_user_service") for the add_member test and the three other tests in this file that patch get_user_service; keep the same MagicMock/AsyncMock setup (mock_user_service.get_user_info, mock_us.return_value = mock_user_service) and assert calls as before so the tests exercise the patched function used by project_service.
🧹 Nitpick comments (3)
tests/unit/test_embedding_service.py (1)
536-540: Strengthen the “existing job” assertion

The comment states no `db.add` for the existing job, but there is no assertion enforcing that. Consider asserting that no added object corresponds to `job_id` (while still allowing adds for embedding rows).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_embedding_service.py` around lines 536 - 540, The test currently comments that db.add should not be called for the existing job but lacks an assertion; update the test around existing_job and db.add to assert that none of the recorded db.add calls added an object with the existing job's id (e.g., check db.add.call_args_list or db.add.mock_calls and assert not any(call_arg where getattr(call_arg[0][0], "job_id", None) == existing_job.job_id)), while still allowing other adds (like embedding rows) to occur; reference the existing_job variable and the db.add mock when adding this assertion.

tests/unit/test_project_service.py (2)
109-114: Factory test doesn't verify `git_service` dependency wiring.

The test confirms `db` is passed through, but `get_project_service()` doesn't accept a `git_service` parameter — it always uses the default `get_git_service()`. This means tests using the `service` fixture (which manually injects `mock_git_service`) don't reflect production behavior where the factory is used.

Consider whether the factory should accept an optional `git_service` parameter for testability, or document this as an intentional design choice.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_project_service.py` around lines 109 - 114, The factory test misses verifying git_service wiring because get_project_service() always calls get_git_service(); update the factory to accept an optional git_service parameter and use it when provided (i.e., change get_project_service(get_git_service by default) to accept git_service: Optional[GitService] and pass that into ProjectService), then update test_factory_returns_instance to supply mock_git_service and assert svc.git_service is mock_git_service; reference get_project_service, get_git_service, ProjectService, test_factory_returns_instance and the service fixture/mock_git_service so the injected dependency is testable.
954-974: Consider extracting `_simulate_refresh` to a shared fixture.

The `_simulate_refresh` helper is duplicated 6 times across the file with minor variations. Extracting it to a parameterized fixture would reduce duplication and maintenance burden.

Example shared fixture approach

```python
@pytest.fixture
def simulate_refresh():
    """Factory for creating refresh side effects with customizable attributes."""

    def _create(owner_id: str = OWNER_ID, extra_attrs: dict | None = None):
        def _refresh(obj: Any, _attrs: list[str] | None = None) -> None:
            if getattr(obj, "id", None) is None:
                obj.id = uuid.uuid4()
            if getattr(obj, "created_at", None) is None:
                obj.created_at = datetime.now(UTC)
            if not getattr(obj, "members", None):
                obj.members = [_make_member(owner_id, "owner")]
            defaults = {
                "github_integration": None,
                "source_file_path": None,
                "ontology_iri": None,
                "normalization_report": None,
                "updated_at": None,
                "label_preferences": None,
                "pr_approval_required": 0,
            }
            defaults.update(extra_attrs or {})
            for attr, value in defaults.items():
                if not hasattr(obj, attr):
                    setattr(obj, attr, value)

        return _refresh

    return _create
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_project_service.py` around lines 954 - 974, The duplicated helper _simulate_refresh should be extracted into a shared pytest fixture (e.g., simulate_refresh) that returns a factory to produce side-effect functions; implement a factory signature like simulate_refresh(owner_id: str = OWNER_ID, extra_attrs: dict | None = None) -> Callable and inside the returned _refresh populate id, created_at (use UTC), members via _make_member(owner_id, "owner"), and set defaults for github_integration, source_file_path, ontology_iri, normalization_report, updated_at, label_preferences, and pr_approval_required, merging extra_attrs to allow the six variants to be produced; replace the six inline definitions with calls to this fixture to keep tests DRY.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/unit/test_embedding_service.py`:
- Line 1415: The function signature for test_api_key_only_update in
tests/unit/test_embedding_service.py is too long; split the async def line so it
wraps under 100 characters by breaking after the function name and placing
parameters each on their own line (or grouping logically) — e.g., keep "async
def test_api_key_only_update(" then put "self: EmbeddingService," and "mock_db:
AsyncMock) -> None:" on subsequent indented lines — preserving the same
parameter names and types and keeping the rest of the test unchanged.
In `@tests/unit/test_indexed_ontology.py`:
- Around line 145-154: The test should not only check the returned value but
also verify the fallback and reindex triggers: after calling
service.get_class_count(PROJECT_ID, branch=BRANCH) assert that
mock_ontology_service.get_class_count was awaited with the same PROJECT_ID and
branch and that service._enqueue_reindex_if_stale was awaited (or awaited with
PROJECT_ID and branch if that helper accepts args); update the
test_falls_back_to_rdflib test to include these assertions referencing
service.get_class_count, mock_ontology_service.get_class_count,
service.index.is_index_ready, and service._enqueue_reindex_if_stale.
In `@tests/unit/test_pull_request_service_extended.py`:
- Around line 559-599: In test_merge_notifies_pr_author, replace the bare
mock_notif.create_notification.assert_awaited_once() with assertions that
validate the actual payload passed to create_notification (e.g., inspect
mock_notif.create_notification.await_args or use assert_awaited_once_with) and
check specific fields like notification_type (should be "pr_review" or the
production value), recipient/target id (PR author ID / EDITOR_ID), and any
relevant metadata derived from the merge (merge_commit_hash); reference the test
function name test_merge_notifies_pr_author and the mock object
mock_notif.create_notification to locate where to add these payload assertions,
and apply the same payload-level checks to the similar test block around lines
609-639.
- Around line 344-370: The test only asserts len(result.items) so it will pass
even if filters are ignored; update test_list_prs_with_status_filter (and the
similar test covering author_id) to assert the filter is passed to the DB call
by inspecting mock_db.execute call arguments from service.list_pull_requests:
verify that one of mock_db.execute.call_args or mock_db.execute.call_args_list
contains a statement/params referencing the status_filter value ("open") (or
author_id) or that the bound params include the correct key/value, rather than
only checking result length; use the service.list_pull_requests call and
mock_db.execute to assert the filter was actually applied.
- Around line 1095-1129: The test test_creates_sync_config_when_webhooks_enabled
only asserts mock_db.add was called; update it to capture the object passed to
mock_db.add (from mock_db.add.call_args or call_args_list) and assert its fields
are correct after _sync_remote_config_for_webhooks: frequency == "webhook",
enabled == True, branch (or default_branch field) equals
integration.default_branch, and ontology/file path field equals
integration.ontology_file_path (or the expected path property name); keep using
the same test and method names (_sync_remote_config_for_webhooks,
test_creates_sync_config_when_webhooks_enabled) to locate where to add these
assertions.
---
Duplicate comments:
In `@tests/unit/test_embedding_service.py`:
- Around line 580-625: The test test_embed_project_failure_marks_job_failed
currently only checks rollback/commit and execute call count; update it to also
assert the final failure update includes status='failed' and the error message:
after invoking await service.embed_project(...), inspect
mock_db.execute.call_args_list[2] (or the mutated existing_job) and assert the
SQL/params or object fields contain status="failed" and the exception text (or a
substring of it) so the test verifies the actual failure payload is written;
keep existing assertions for rollback/commit and call count.
In `@tests/unit/test_project_service.py`:
- Around line 367-380: The patch currently targets
ontokit.services.user_service.get_user_service but project_service imports
get_user_service into its own namespace, so update the patch target to the
symbol used by the code under test (patch
"ontokit.services.project_service.get_user_service") for the add_member test and
the three other tests in this file that patch get_user_service; keep the same
MagicMock/AsyncMock setup (mock_user_service.get_user_info, mock_us.return_value
= mock_user_service) and assert calls as before so the tests exercise the
patched function used by project_service.
---
Nitpick comments:
In `@tests/unit/test_embedding_service.py`:
- Around line 536-540: The test currently comments that db.add should not be
called for the existing job but lacks an assertion; update the test around
existing_job and db.add to assert that none of the recorded db.add calls added
an object with the existing job's id (e.g., check db.add.call_args_list or
db.add.mock_calls and assert not any(call_arg where getattr(call_arg[0][0],
"job_id", None) == existing_job.job_id)), while still allowing other adds (like
embedding rows) to occur; reference the existing_job variable and the db.add
mock when adding this assertion.
In `@tests/unit/test_project_service.py`:
- Around line 109-114: The factory test misses verifying git_service wiring
because get_project_service() always calls get_git_service(); update the factory
to accept an optional git_service parameter and use it when provided (i.e.,
change get_project_service(get_git_service by default) to accept git_service:
Optional[GitService] and pass that into ProjectService), then update
test_factory_returns_instance to supply mock_git_service and assert
svc.git_service is mock_git_service; reference get_project_service,
get_git_service, ProjectService, test_factory_returns_instance and the service
fixture/mock_git_service so the injected dependency is testable.
- Around line 954-974: The duplicated helper _simulate_refresh should be
extracted into a shared pytest fixture (e.g., simulate_refresh) that returns a
factory to produce side-effect functions; implement a factory signature like
simulate_refresh(owner_id: str = OWNER_ID, extra_attrs: dict | None = None) ->
Callable and inside the returned _refresh populate id, created_at (use UTC),
members via _make_member(owner_id, "owner"), and set defaults for
github_integration, source_file_path, ontology_iri, normalization_report,
updated_at, label_preferences, and pr_approval_required, merging extra_attrs to
allow the six variants to be produced; replace the six inline definitions with
calls to this fixture to keep tests DRY.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 120ed903-8c50-4482-b578-89a261ede3c0
📒 Files selected for processing (4)
tests/unit/test_embedding_service.pytests/unit/test_indexed_ontology.pytests/unit/test_project_service.pytests/unit/test_pull_request_service_extended.py
```python
    assert existing.api_key_encrypted == "encrypted"

@pytest.mark.asyncio
async def test_api_key_only_update(self, service: EmbeddingService, mock_db: AsyncMock) -> None:
```
Wrap long function signature to 100-char max
Line 1415 exceeds the configured line length target. Split the signature across multiple lines.
Proposed fix

```diff
-    async def test_api_key_only_update(self, service: EmbeddingService, mock_db: AsyncMock) -> None:
+    async def test_api_key_only_update(
+        self, service: EmbeddingService, mock_db: AsyncMock
+    ) -> None:
```

As per coding guidelines, "Configure line length to 100 characters in code formatting".
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-async def test_api_key_only_update(self, service: EmbeddingService, mock_db: AsyncMock) -> None:
+async def test_api_key_only_update(
+    self, service: EmbeddingService, mock_db: AsyncMock
+) -> None:
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/unit/test_embedding_service.py` at line 1415, The function signature
for test_api_key_only_update in tests/unit/test_embedding_service.py is too
long; split the async def line so it wraps under 100 characters by breaking
after the function name and placing parameters each on their own line (or
grouping logically) — e.g., keep "async def test_api_key_only_update(" then put
"self: EmbeddingService," and "mock_db: AsyncMock) -> None:" on subsequent
indented lines — preserving the same parameter names and types and keeping the
rest of the test unchanged.
```python
async def test_list_prs_with_status_filter(
    self, service: PullRequestService, mock_db: AsyncMock, mock_git_service: MagicMock
) -> None:
    """list_pull_requests passes status_filter to the query."""
    project = _make_project()
    pr = _make_pr()
    user = _make_user(EDITOR_ID)

    mock_git_service.get_history.return_value = []

    merged_prs_result = _scalars_result([])
    max_number_result = _scalar_result(0)
    list_result = _scalars_result([pr])
    project_result_2 = _project_result(project)

    mock_db.execute.side_effect = [
        _project_result(project),
        merged_prs_result,
        max_number_result,
        list_result,
        project_result_2,
    ]
    mock_db.scalar = AsyncMock(return_value=1)

    result = await service.list_pull_requests(PROJECT_ID, user, status_filter="open")
    assert len(result.items) == 1
```
Filter tests don’t verify that filters are actually applied to the query.
These tests currently pass even if `status_filter` or `author_id` is ignored, because only `len(result.items)` is asserted.

✅ Suggested strengthening of assertions

```diff
     result = await service.list_pull_requests(PROJECT_ID, user, status_filter="open")
     assert len(result.items) == 1
+    stmt = mock_db.execute.call_args_list[3].args[0]
+    compiled = stmt.compile()
+    assert "open" in compiled.params.values()
@@
     result = await service.list_pull_requests(PROJECT_ID, user, author_id=EDITOR_ID)
     assert len(result.items) == 1
+    stmt = mock_db.execute.call_args_list[3].args[0]
+    compiled = stmt.compile()
+    assert EDITOR_ID in compiled.params.values()
```

Also applies to: 372-398
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/unit/test_pull_request_service_extended.py` around lines 344 - 370, The
test only asserts len(result.items) so it will pass even if filters are ignored;
update test_list_prs_with_status_filter (and the similar test covering
author_id) to assert the filter is passed to the DB call by inspecting
mock_db.execute call arguments from service.list_pull_requests: verify that one
of mock_db.execute.call_args or mock_db.execute.call_args_list contains a
statement/params referencing the status_filter value ("open") (or author_id) or
that the bound params include the correct key/value, rather than only checking
result length; use the service.list_pull_requests call and mock_db.execute to
assert the filter was actually applied.
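The pattern the reviewer suggests, recovering whatever was passed to the mocked `execute` and asserting the filter value reached it, can be sketched generically with `unittest.mock`. The service function and query dict below are illustrative stand-ins, not the real `PullRequestService` or a SQLAlchemy statement:

```python
# Generic sketch of asserting on what a mocked db.execute received.
import asyncio
from unittest.mock import AsyncMock

async def list_items(db, status_filter):
    # Stand-in for a service method that builds a filtered query.
    query = {"where": {"status": status_filter}}
    return await db.execute(query)

db = AsyncMock()
asyncio.run(list_items(db, "open"))

# Recover the positional argument of the execute call and assert the
# filter value actually reached the query, not just the result length.
sent = db.execute.call_args.args[0]
assert sent["where"]["status"] == "open"
print(sent)
```

With a real SQLAlchemy statement, the same idea applies via `stmt.compile().params`, as in the suggested diff above.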
- test_indexed_ontology: assert fallback delegates to ontology service and triggers reindex in get_class_count fallback test
- test_pull_request_service_extended: assert notification payload fields (user_id, notification_type, project_id) in merge and review tests; assert sync config fields in _sync_remote_config test
- test_embedding_service: inspect UPDATE statement to verify it contains 'failed' status in embed_project failure test

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
🧹 Nitpick comments (2)
tests/unit/test_indexed_ontology.py (2)
129-157: Add the missing exception-fallback test for `get_class_count()`.

`get_class_count()` also has an "index ready but query fails" fallback path; adding that test will close an important branch gap.

Proposed test addition

```diff
 class TestGetClassCount:
 @@
     async def test_falls_back_to_rdflib(
         self, service: IndexedOntologyService, mock_ontology_service: AsyncMock
     ) -> None:
 @@
         mock_ontology_service.get_class_count.assert_awaited_once()
         service._enqueue_reindex_if_stale.assert_awaited_once()
+
+    @pytest.mark.asyncio
+    async def test_falls_back_when_index_query_fails(
+        self, service: IndexedOntologyService, mock_ontology_service: AsyncMock
+    ) -> None:
+        """Falls back to OntologyService when index class-count query raises."""
+        service.index.is_index_ready = AsyncMock(return_value=True)  # type: ignore[method-assign]
+        service.index.get_class_count = AsyncMock(  # type: ignore[method-assign]
+            side_effect=RuntimeError("query failed")
+        )
+        service._enqueue_reindex_if_stale = AsyncMock()  # type: ignore[method-assign]
+        mock_ontology_service.get_class_count = AsyncMock(return_value=42)
+
+        count = await service.get_class_count(PROJECT_ID, branch=BRANCH)
+        assert count == 42
+        mock_ontology_service.get_class_count.assert_awaited_once()
+        service._enqueue_reindex_if_stale.assert_awaited_once()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_indexed_ontology.py` around lines 129 - 157, Add a test case for the "index ready but query fails" branch of get_class_count: set service.index.is_index_ready = AsyncMock(return_value=True) and have service.index.get_class_count raise an exception (AsyncMock(side_effect=...)); ensure mock_ontology_service.get_class_count is awaited and its return value is returned by service.get_class_count, and verify service._enqueue_reindex_if_stale is awaited once. Reference the get_class_count method and the mocks service.index.get_class_count, mock_ontology_service.get_class_count, and service._enqueue_reindex_if_stale when locating where to add the test.
88-90: Strengthen assertions by checking awaited arguments, not only call count.

Using `assert_awaited_once_with(...)` here would better lock in correct delegation contracts.

Proposed assertion tightening

```diff
-    mock_ontology_service.get_root_tree_nodes.assert_awaited_once()
+    mock_ontology_service.get_root_tree_nodes.assert_awaited_once_with(
+        PROJECT_ID, None, BRANCH
+    )
     service._enqueue_reindex_if_stale.assert_awaited_once()
@@
-    mock_ontology_service.get_root_tree_nodes.assert_awaited_once()
+    mock_ontology_service.get_root_tree_nodes.assert_awaited_once_with(
+        PROJECT_ID, None, BRANCH
+    )
     service._enqueue_reindex_if_stale.assert_awaited_once()
@@
-    mock_ontology_service.serialize.assert_awaited_once()
+    mock_ontology_service.serialize.assert_awaited_once_with(
+        PROJECT_ID, "turtle", BRANCH
+    )
```

Also applies to: 124-126, 169-171
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_indexed_ontology.py` around lines 88 - 90, Replace the loose call-count assertions with argument-checked awaits: for the get_root_tree_nodes test, change mock_ontology_service.get_root_tree_nodes.assert_awaited_once() to mock_ontology_service.get_root_tree_nodes.assert_awaited_once_with(PROJECT_ID, branch=BRANCH) and change service._enqueue_reindex_if_stale.assert_awaited_once() to service._enqueue_reindex_if_stale.assert_awaited_once_with(PROJECT_ID, BRANCH); apply the same tightening in the other two test locations that assert mock_ontology_service.get_nodes and service._enqueue_reindex_if_stale (use the concrete arguments those tests call with) so each await assertion verifies the exact parameters rather than only the call count.
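The difference between the loose and strict assertion styles recommended here can be shown in a few lines with a bare `AsyncMock` (the names below are illustrative, not the real service):

```python
# assert_awaited_once() passes for ANY arguments; assert_awaited_once_with()
# additionally pins the exact call signature of the delegation.
import asyncio
from unittest.mock import AsyncMock

get_root_tree_nodes = AsyncMock(return_value=["node"])

async def delegate():
    return await get_root_tree_nodes("proj-1", None, "main")

asyncio.run(delegate())

# Loose: would still pass if the wrong arguments were forwarded.
get_root_tree_nodes.assert_awaited_once()

# Strict: fails unless the mock was awaited with exactly these arguments.
get_root_tree_nodes.assert_awaited_once_with("proj-1", None, "main")
print("both assertions passed")
```

A mistyped argument in `delegate()` would leave the first assertion green and turn only the second one red, which is why the strict form catches contract regressions.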
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@tests/unit/test_indexed_ontology.py`:
- Around line 129-157: Add a test case for the "index ready but query fails"
branch of get_class_count: set service.index.is_index_ready =
AsyncMock(return_value=True) and have service.index.get_class_count raise an
exception (AsyncMock(side_effect=...)); ensure
mock_ontology_service.get_class_count is awaited and its return value is
returned by service.get_class_count, and verify
service._enqueue_reindex_if_stale is awaited once. Reference the get_class_count
method and the mocks service.index.get_class_count,
mock_ontology_service.get_class_count, and service._enqueue_reindex_if_stale
when locating where to add the test.
- Around line 88-90: Replace the loose call-count assertions with
argument-checked awaits: for the get_root_tree_nodes test, change
mock_ontology_service.get_root_tree_nodes.assert_awaited_once() to
mock_ontology_service.get_root_tree_nodes.assert_awaited_once_with(PROJECT_ID,
branch=BRANCH) and change
service._enqueue_reindex_if_stale.assert_awaited_once() to
service._enqueue_reindex_if_stale.assert_awaited_once_with(PROJECT_ID, BRANCH);
apply the same tightening in the other two test locations that assert
mock_ontology_service.get_nodes and service._enqueue_reindex_if_stale (use the
concrete arguments those tests call with) so each await assertion verifies the
exact parameters rather than only the call count.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 3c5f89b4-7dcb-4b15-9aaa-6c2175a6d8cf
📒 Files selected for processing (3)
tests/unit/test_embedding_service.pytests/unit/test_indexed_ontology.pytests/unit/test_pull_request_service_extended.py
✅ Files skipped from review due to trivial changes (2)
- tests/unit/test_embedding_service.py
- tests/unit/test_pull_request_service_extended.py
…d assertion

- test_project_service: replace 6 inline _simulate_refresh definitions with _make_simulate_refresh(owner_id, extended=bool) factory function
- test_embedding_service: assert db.add was never called with existing job object in embed_project test

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
… test

- Replace loose assert_awaited_once() with assert_awaited_once_with() using exact args (PROJECT_ID, BRANCH) across all fallback tests
- Add test_falls_back_when_index_query_fails for get_class_count to cover the "index ready but query raises" branch

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
♻️ Duplicate comments (4)
tests/unit/test_project_service.py (4)
781-786: ⚠️ Potential issue | 🟠 Major

Same incorrect patch target for `get_user_service`.

Update the patch target to `ontokit.services.project_service.get_user_service`.

Proposed fix

```diff
- with patch("ontokit.services.user_service.get_user_service") as mock_us:
+ with patch("ontokit.services.project_service.get_user_service") as mock_us:
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_project_service.py` around lines 781 - 786, The patch in the test uses the wrong target: change the patch call that currently mocks "ontokit.services.user_service.get_user_service" to mock "ontokit.services.project_service.get_user_service" so the test's mocked get_user_service is applied where project_service imports/uses it; update the patch string in the block that creates mock_user_service and sets get_user_info (the AsyncMock returning {"id": EDITOR_ID, "name": "Editor", "email": "editor@test.com"}) to use the corrected import path.
520-523: ⚠️ Potential issue | 🟠 Major

Same incorrect patch target for `get_user_service`.

This has the same issue as line 384 — patch the function where it's looked up, not where it's defined.

Proposed fix

```diff
- with patch("ontokit.services.user_service.get_user_service") as mock_us:
+ with patch("ontokit.services.project_service.get_user_service") as mock_us:
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_project_service.py` around lines 520 - 523, The test is patching get_user_service in the wrong module (patching ontokit.services.user_service.get_user_service); change the patch target to the module where the function is looked up in the code under test (patch ontokit.services.project_service.get_user_service) so the mock is applied where project service imports/looks up get_user_service; update the two occurrences (this block and the earlier one around line 384) to use patch("ontokit.services.project_service.get_user_service") while keeping the mock_us/ mock_user_svc, AsyncMock(return_value={}) and return_value assignment the same.
384-389: ⚠️ Potential issue | 🟠 Major

Incorrect patch target—mock won't be used by the service under test.

The patch targets ontokit.services.user_service.get_user_service, but since project_service.py imports this function via "from ontokit.services.user_service import get_user_service", the lookup happens in the project_service module namespace. The mock won't intercept the call.

Proposed fix:
- with patch("ontokit.services.user_service.get_user_service") as mock_us:
+ with patch("ontokit.services.project_service.get_user_service") as mock_us:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_project_service.py` around lines 384 - 389, The test patches the wrong target so the mocked get_user_service isn't injected into the code under test; change the patch target to the name imported in the project service module (patch "ontokit.services.project_service.get_user_service") so the AsyncMock assigned to mock_user_service.get_user_info is actually used by the functions in project_service that call get_user_service.
1444-1451: ⚠️ Potential issue | 🟠 Major

Same incorrect patch target for get_user_service.

Update the patch target to ontokit.services.project_service.get_user_service.

Proposed fix:
- with patch("ontokit.services.user_service.get_user_service") as mock_us:
+ with patch("ontokit.services.project_service.get_user_service") as mock_us:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_project_service.py` around lines 1444 - 1451, The test currently patches the wrong symbol: change the patch target from "ontokit.services.user_service.get_user_service" to "ontokit.services.project_service.get_user_service" so the ProjectService code under test receives the mocked get_user_service; update the patch call in the test (the block creating mock_user_service with get_users_info AsyncMock) to use ontokit.services.project_service.get_user_service instead.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@tests/unit/test_project_service.py`:
- Around line 781-786: The patch in the test uses the wrong target: change the
patch call that currently mocks "ontokit.services.user_service.get_user_service"
to mock "ontokit.services.project_service.get_user_service" so the test's mocked
get_user_service is applied where project_service imports/uses it; update the
patch string in the block that creates mock_user_service and sets get_user_info
(the AsyncMock returning {"id": EDITOR_ID, "name": "Editor", "email":
"editor@test.com"}) to use the corrected import path.
- Around line 520-523: The test is patching get_user_service in the wrong module
(patching ontokit.services.user_service.get_user_service); change the patch
target to the module where the function is looked up in the code under test
(patch ontokit.services.project_service.get_user_service) so the mock is applied
where project service imports/looks up get_user_service; update the two
occurrences (this block and the earlier one around line 384) to use
patch("ontokit.services.project_service.get_user_service") while keeping the
mock_us/ mock_user_svc, AsyncMock(return_value={}) and return_value assignment
the same.
- Around line 384-389: The test patches the wrong target so the mocked
get_user_service isn't injected into the code under test; change the patch
target to the name imported in the project service module (patch
"ontokit.services.project_service.get_user_service") so the AsyncMock assigned
to mock_user_service.get_user_info is actually used by the functions in
project_service that call get_user_service.
- Around line 1444-1451: The test currently patches the wrong symbol: change the
patch target from "ontokit.services.user_service.get_user_service" to
"ontokit.services.project_service.get_user_service" so the ProjectService code
under test receives the mocked get_user_service; update the patch call in the
test (the block creating mock_user_service with get_users_info AsyncMock) to use
ontokit.services.project_service.get_user_service instead.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 3fe66c5c-4ae1-4b13-a371-5f43e23aed36
📒 Files selected for processing (2)
- tests/unit/test_embedding_service.py
- tests/unit/test_project_service.py
✅ Files skipped from review due to trivial changes (1)
- tests/unit/test_embedding_service.py
project_service uses inline imports for get_user_service inside function bodies, so the symbol is resolved from user_service at call time and must be patched there — not on project_service's namespace.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
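The import-style distinction in this exchange can be reproduced with two toy modules. This is a sketch only — the user_service/consumer module names below are synthetic stand-ins for the real ontokit modules, built in memory so the behavior is observable without the codebase:

```python
import sys
import types
from unittest.mock import patch

# Fake "user_service" module providing the real function.
user_service = types.ModuleType("user_service")
user_service.get_user_service = lambda: "real"
sys.modules["user_service"] = user_service

# Case 1: top-level "from ... import ..." — the consumer copies the binding
# into its own namespace at import time, so that is where patch must target.
consumer_top = types.ModuleType("consumer_top")
exec(
    "from user_service import get_user_service\n"
    "def call():\n"
    "    return get_user_service()\n",
    consumer_top.__dict__,
)
sys.modules["consumer_top"] = consumer_top

with patch("consumer_top.get_user_service", return_value="mocked"):
    assert consumer_top.call() == "mocked"   # patched where it's looked up
with patch("user_service.get_user_service", return_value="mocked"):
    assert consumer_top.call() == "real"     # mock missed the copied binding

# Case 2: inline import inside the function body — the name is resolved from
# user_service at call time, so patching the defining module works.
consumer_inline = types.ModuleType("consumer_inline")
exec(
    "def call():\n"
    "    from user_service import get_user_service\n"
    "    return get_user_service()\n",
    consumer_inline.__dict__,
)
sys.modules["consumer_inline"] = consumer_inline

with patch("user_service.get_user_service", return_value="mocked"):
    assert consumer_inline.call() == "mocked"
```

Which patch target is correct therefore depends entirely on whether the service uses a module-level or an inline import, which is exactly why the reviewer's suggestion and the author's rebuttal diverge.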
Actionable comments posted: 1
🧹 Nitpick comments (2)
tests/unit/test_indexed_ontology.py (1)
178-190: Consider strengthening the assertion to verify argument pass-through.

The test verifies serialize() delegates to OntologyService, but unlike other tests in this file, it doesn't verify the arguments are passed correctly. Using assert_awaited_once_with would ensure the method receives the expected parameters.

♻️ Suggested assertion improvement
      result = await service.serialize(PROJECT_ID, format="turtle", branch=BRANCH)
      assert result == "<turtle content>"
-     mock_ontology_service.serialize.assert_awaited_once()
+     mock_ontology_service.serialize.assert_awaited_once_with(
+         PROJECT_ID, format="turtle", branch=BRANCH
+     )

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_indexed_ontology.py` around lines 178 - 190, The test TestSerializePassThrough::test_always_delegates_to_ontology_service should assert that arguments are forwarded to OntologyService; replace the final mock_ontology_service.serialize.assert_awaited_once() with an assertion using assert_awaited_once_with and pass the expected parameters (PROJECT_ID, format="turtle", branch=BRANCH) so the test verifies service.serialize(...) calls mock_ontology_service.serialize with the correct arguments.

tests/unit/test_project_service.py (1)
1116-1117: Tighten the DB add-count assertion.

assert mock_db.add.call_count >= 3 is too loose and can hide unintended extra inserts. Prefer asserting the exact expected count for this path.

🔍 Proposed change
- assert mock_db.add.call_count >= 3
+ assert mock_db.add.call_count == 3

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_project_service.py` around lines 1116 - 1117, The current test uses a loose assertion on mock_db.add.call_count (assert mock_db.add.call_count >= 3); tighten it to assert the exact expected number of calls for this path (three calls: project, owner member, github integration) by changing the assertion to assert mock_db.add.call_count == 3 in the test (tests/unit/test_project_service.py) so unexpected extra inserts will fail the test.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/unit/test_project_service.py`:
- Line 1111: Replace the GitHub-like token literal used in test cases to avoid
secret-scanner false positives: locate uses of the github_token parameter (e.g.,
github_token="ghp_test123") in tests in tests/unit/test_project_service.py (the
test invocations that pass github_token) and change the value to a neutral
placeholder that does not match GitHub token patterns (for example "test-token"
or "placeholder-token") for all occurrences (previously at the three sites
flagged) so test logic remains the same but scanners are not triggered.
---
Nitpick comments:
In `@tests/unit/test_indexed_ontology.py`:
- Around line 178-190: The test
TestSerializePassThrough::test_always_delegates_to_ontology_service should
assert that arguments are forwarded to OntologyService; replace the final
mock_ontology_service.serialize.assert_awaited_once() with an assertion using
assert_awaited_once_with and pass the expected parameters (PROJECT_ID,
format="turtle", branch=BRANCH) so the test verifies service.serialize(...)
calls mock_ontology_service.serialize with the correct arguments.
In `@tests/unit/test_project_service.py`:
- Around line 1116-1117: The current test uses a loose assertion on
mock_db.add.call_count (assert mock_db.add.call_count >= 3); tighten it to
assert the exact expected number of calls for this path (three calls: project,
owner member, github integration) by changing the assertion to assert
mock_db.add.call_count == 3 in the test (tests/unit/test_project_service.py) so
unexpected extra inserts will fail the test.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a86e9629-2a85-49c3-89f1-c1813481115a
📒 Files selected for processing (3)
- tests/unit/test_embedding_service.py
- tests/unit/test_indexed_ontology.py
- tests/unit/test_project_service.py
✅ Files skipped from review due to trivial changes (1)
- tests/unit/test_embedding_service.py
… args
- Replace ghp_test123 with test-token to avoid secret-scanner false positives
- Tighten db.add assertion to exact count (4: project, member, integration, run)
- Assert serialize forwards exact args to ontology service

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Coverage Report

1020 tests passing — 80% overall coverage (7632/9573 statements covered, 1941 missing)

Top files by missed statements
Key services at or above target
- test_pull_requests_routes.py: 38 tests covering all PR route delegation endpoints and GitHub webhook handler (auth, events, sync)
- test_lint_routes_extended.py: 14 tests covering verify_project_access errors, lint status with completed run, issue filters, dismiss_issue, and LintConnectionManager
- test_projects_routes_coverage.py: 30 tests covering list_branches full path, create/delete branch, save_source_content, reindex, update project sitemap, GitHub import, and scan files

Overall: 1102 tests, 82% coverage (up from 80%).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/unit/test_lint_routes_extended.py`:
- Around line 24-72: These tests patch the wrong symbol—verify_project_access is
what the route actually calls, not get_project_service—so update
test_project_not_found_returns_404 and test_write_access_for_viewer to patch
"ontokit.api.routes.lint.verify_project_access" (not get_project_service) and
set the patched verify_project_access to raise an HTTPException with
status_code=404 and detail="project not found" for the first test and
status_code=403 and detail="write access required" for the second (or set
side_effects that mimic those responses); keep the test assertions the same and
remove the unused mock_session/mock_result setup tied to get_project_service.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7152ca80-0964-4bba-9936-5e4e5a3dcd35
📒 Files selected for processing (5)
- tests/unit/test_indexed_ontology.py
- tests/unit/test_lint_routes_extended.py
- tests/unit/test_project_service.py
- tests/unit/test_projects_routes_coverage.py
- tests/unit/test_pull_requests_routes.py
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/unit/test_indexed_ontology.py
Coverage Report (updated)

1102 tests passing — 82% overall coverage (7873/9573 statements covered, 1700 missing)

Progress since last report
New test files added
Top files by missed statements
Key services at or above target
- indexed_ontology.py: 47% → 100% (+20 tests)
- github_sync.py: 61% → 100% (+11 tests)
- worker.py: 70% → 99% (+24 tests)
- pull_request_service.py: 74% → 88% (+33 tests)
- ontology_index.py: 75% → 94% (+8 tests)
- normalization routes/service: 73% → 96%/99% (+9 tests)
- lint routes: 77% → 80% (+3 tests)
- ontology_extractor.py: 78% → 94% (+4 tests)

Overall: 1244 tests, 87% coverage (up from 1102 tests, 82%).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- git/bare_repository.py: 70% → 86% (+48 tests covering merge, remotes, nested files, branch operations, module functions)
- api/routes/projects.py: 68% → 92% (+36 tests covering ontology navigation, checkout, import, member endpoints, revision endpoints)

Overall: 1328 tests, 89% coverage (up from 1244 tests, 87%).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Actionable comments posted: 2
🧹 Nitpick comments (7)
tests/unit/test_lint_routes_extended.py (1)
129-234: Optional: parametrize filter tests to reduce duplication.

The rule_id and subject_iri tests have near-identical setup and can be merged into one parameterized test for easier extension.

♻️ Proposed refactor
@@
- @patch("ontokit.api.routes.lint.verify_project_access", new_callable=AsyncMock)
- def test_get_issues_with_rule_id_filter(...):
-     ...
-
- @patch("ontokit.api.routes.lint.verify_project_access", new_callable=AsyncMock)
- def test_get_issues_with_subject_iri_filter(...):
-     ...
+ @pytest.mark.parametrize(
+     ("query", "expected_field", "expected_value", "issue_type"),
+     [
+         ("rule_id=R005", "rule_id", "R005", "warning"),
+         (
+             "subject_iri=http%3A%2F%2Fexample.org%2FSpecificClass",
+             "subject_iri",
+             "http://example.org/SpecificClass",
+             "error",
+         ),
+     ],
+ )
+ @patch("ontokit.api.routes.lint.verify_project_access", new_callable=AsyncMock)
+ def test_get_issues_with_filters(
+     self,
+     mock_access: AsyncMock,
+     authed_client: tuple[TestClient, AsyncMock],
+     query: str,
+     expected_field: str,
+     expected_value: str,
+     issue_type: str,
+ ) -> None:
+     client, mock_session = authed_client
+     mock_access.return_value = Mock()
+     now = datetime.now(UTC)
+     project_uuid = UUID(PROJECT_ID)
+     run_uuid = UUID(RUN_ID)
+
+     mock_run = Mock()
+     mock_run.id = run_uuid
+     mock_run.status = "completed"
+
+     mock_issue = Mock()
+     mock_issue.id = uuid4()
+     mock_issue.run_id = run_uuid
+     mock_issue.project_id = project_uuid
+     mock_issue.issue_type = issue_type
+     mock_issue.rule_id = "R005"
+     mock_issue.message = "message"
+     mock_issue.subject_iri = "http://example.org/SpecificClass"
+     mock_issue.details = None
+     mock_issue.created_at = now
+     mock_issue.resolved_at = None
+
+     mock_run_result = MagicMock()
+     mock_run_result.scalar_one_or_none.return_value = mock_run
+     mock_count_result = MagicMock()
+     mock_count_result.scalar.return_value = 1
+     mock_issues_result = MagicMock()
+     mock_issues_result.scalars.return_value.all.return_value = [mock_issue]
+     mock_session.execute.side_effect = [mock_run_result, mock_count_result, mock_issues_result]
+
+     response = client.get(f"/api/v1/projects/{PROJECT_ID}/lint/issues?{query}")
+     assert response.status_code == 200
+     data = response.json()
+
+     assert data["total"] == 1
+     assert data["items"][0][expected_field] == expected_value

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_lint_routes_extended.py` around lines 129 - 234, The two near-duplicate tests test_get_issues_with_rule_id_filter and test_get_issues_with_subject_iri_filter should be merged into a single parametrized test (e.g., test_get_issues_with_filter) using pytest.mark.parametrize to pass the query string, expected response field ("rule_id" or "subject_iri") and expected value ("R005" or "http://example.org/SpecificClass"); keep the shared setup (authed_client, mock_access, now, project_uuid, run_uuid, mock_run, mock_issue template, mock_run_result, mock_count_result, mock_issues_result, and mock_session.execute side_effect) and vary only the mock_issue.rule_id/subject_iri and the client.get URL per parameter. Ensure the new test still asserts response.status_code == 200, data["total"] == 1 and data["items"][0][expected_field] == expected_value and keep the AsyncMock patch target verify_project_access and use the same authed_client fixture.

tests/unit/test_ontology_extractor.py (2)
310-341: Strengthen "added property" tests with output-content assertions.

These tests currently rely on changes text only. Add assertions on serialized content to avoid false positives if change messages drift from actual graph updates.

🔍 Suggested assertions
      content, changes = updater.update_metadata(
          turtle_no_title, "onto.ttl", new_title="Brand New Title"
      )
      assert any("dc:title" in c and "added" in c for c in changes)
+     assert b"Brand New Title" in content
@@
      content, changes = updater.update_metadata(
          turtle_no_desc, "onto.ttl", new_description="Brand New Description"
      )
      assert any("dc:description" in c and "added" in c for c in changes)
+     assert b"Brand New Description" in content

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_ontology_extractor.py` around lines 310 - 341, The tests test_update_metadata_no_existing_title and test_update_metadata_no_existing_description currently only assert messages in changes; update them to also parse/inspect the returned content from OntologyMetadataUpdater.update_metadata (use the content variable) to ensure the dc:title and dc:description triples were actually added: for test_update_metadata_no_existing_title assert that the serialized turtle (content) contains a dc:title triple with "Brand New Title" (and/or the dc:title predicate URI), and for test_update_metadata_no_existing_description assert the serialized turtle (content) contains a dc:description triple with "Brand New Description"; keep the existing checks against changes but add these content assertions to the tests that use turtle_no_title and turtle_no_desc respectively.
64-71: Add .owx to format-detection parametrization.

OntologyMetadataExtractor.FORMAT_MAP supports .owx, but this table doesn't validate it.

✅ Suggested test addition
  @pytest.mark.parametrize(
      ("ext", "expected"),
      [
          (".ttl", "turtle"),
          (".owl", "xml"),
+         (".owx", "xml"),
          (".jsonld", "json-ld"),
          (".csv", None),
      ],
  )

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_ontology_extractor.py` around lines 64 - 71, The parametrized test in test_ontology_extractor.py is missing coverage for the .owx extension; update the pytest.mark.parametrize tuple in the test (the list of ("ext","expected") values) to include (".owx", "xml") (matching OntologyMetadataExtractor.FORMAT_MAP) so the test validates that .owx is detected as "xml" by the extractor.

tests/unit/test_normalization_routes.py (1)
356-357: Consider adding tests for deferred and in_progress job statuses.

The current tests cover not_found, complete, and queued statuses. For full coverage of the status mapping in the route handler, consider adding tests for deferred → "pending" and in_progress → "running".

📋 Additional test cases
@patch("ontokit.api.routes.normalization.get_arq_pool", new_callable=AsyncMock)
def test_job_status_deferred(
    self,
    mock_pool_fn: AsyncMock,
    authed_client: tuple[TestClient, AsyncMock],
    mock_project_service: AsyncMock,
) -> None:
    """Returns pending status for deferred jobs."""
    client, _ = authed_client
    mock_project_service.get = AsyncMock(return_value=_make_project_response())
    mock_pool = AsyncMock()
    mock_pool_fn.return_value = mock_pool
    with patch("ontokit.api.routes.normalization.Job") as MockJob:
        mock_job_instance = AsyncMock()
        mock_job_instance.status.return_value = _get_job_status("deferred")
        MockJob.return_value = mock_job_instance
        response = client.get(f"/api/v1/projects/{PROJECT_ID}/normalization/jobs/deferred-job")
    assert response.status_code == 200
    assert response.json()["status"] == "pending"

@patch("ontokit.api.routes.normalization.get_arq_pool", new_callable=AsyncMock)
def test_job_status_in_progress(
    self,
    mock_pool_fn: AsyncMock,
    authed_client: tuple[TestClient, AsyncMock],
    mock_project_service: AsyncMock,
) -> None:
    """Returns running status for in-progress jobs."""
    client, _ = authed_client
    mock_project_service.get = AsyncMock(return_value=_make_project_response())
    mock_pool = AsyncMock()
    mock_pool_fn.return_value = mock_pool
    with patch("ontokit.api.routes.normalization.Job") as MockJob:
        mock_job_instance = AsyncMock()
        mock_job_instance.status.return_value = _get_job_status("in_progress")
        MockJob.return_value = mock_job_instance
        response = client.get(f"/api/v1/projects/{PROJECT_ID}/normalization/jobs/running-job")
    assert response.status_code == 200
    assert response.json()["status"] == "running"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_normalization_routes.py` around lines 356 - 357, Add two unit tests to TestGetJobStatus to cover the missing status mappings: one for a "deferred" job that asserts the route returns "pending" and one for an "in_progress" job that asserts the route returns "running". In each test patch get_arq_pool (new_callable=AsyncMock), set mock_project_service.get to return _make_project_response(), create a mocked Job (patch "ontokit.api.routes.normalization.Job") whose status() returns _get_job_status("deferred") or _get_job_status("in_progress"), call client.get(...) for a representative job_id, and assert response.status_code == 200 and response.json()["status"] == "pending" or "running" respectively so the route's status mapping is fully covered.

tests/unit/test_bare_repository_service.py (3)
448-466: Verify that the fast-forward actually updates main.

result.success and merge_commit_hash alone do not prove the target branch moved. A merge implementation that reports success without updating the ref or file content would still satisfy this test.

♻️ Suggested assertion tightening
      repo = initialized_service.get_repository(project_id)
+     main_before = repo.get_branch_commit_hash("main")
      result = repo.merge_branch("feature", "main")

      assert result.success is True
      assert result.merge_commit_hash is not None
+     assert repo.get_branch_commit_hash("main") != main_before
+     assert repo.read_file("main", "ontology.ttl") == (
+         b"@prefix : <http://example.org/> .\n:A a :B ; :p 1 .\n"
+     )

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_bare_repository_service.py` around lines 448 - 466, The test test_fast_forward_merge currently asserts only result.success and result.merge_commit_hash; update it to verify the target branch actually moved and content changed: after calling initialized_service.get_repository(project_id) and result = repo.merge_branch("feature", "main"), resolve the new head of "main" (via repo.get_branch_head or repo.get_commit/get_branch methods) and assert it equals result.merge_commit_hash, and additionally read the file (e.g., via initialized_service.get_file or repo.read_blob at "main" / "ontology.ttl") to assert the ontology content matches the feature commit content that was added by commit_changes; use the existing create_branch, commit_changes, get_repository, and repo.merge_branch identifiers to locate where to add these assertions.
255-263: Assert UTC-awareness on history timestamps.

This checks ordering, but it would still pass if CommitInfo.timestamp regressed to a naive datetime. Add an explicit UTC/tz-aware assertion here so the test enforces the datetime invariant, not just sort order. As per coding guidelines, "Maintain UTC timezone-aware datetime fields throughout the codebase".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_bare_repository_service.py` around lines 255 - 263, The test currently only checks ordering of CommitInfo timestamps; update the test around initialized_service.get_history and the history list to assert that each CommitInfo.timestamp is timezone-aware and in UTC (i.e., tzinfo is not None and equals datetime.timezone.utc) for at least the three checked entries (history[0], history[1], history[2]); ensure the test imports datetime.timezone if needed and place the UTC-awareness assertions alongside the existing ordering assertions so the test enforces the UTC-aware datetime invariant for CommitInfo.timestamp.
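The tz-awareness assertion being requested can be sketched with a plain datetime standing in for CommitInfo.timestamp (the field name and history list are from the review; the code below is an illustrative fragment, not the project's test):

```python
from datetime import datetime, timezone

# Stand-in for history[i].timestamp from the test under review.
timestamp = datetime.now(timezone.utc)

# Naive datetimes carry tzinfo=None, so these assertions fail on any
# regression to naive timestamps while passing for UTC-aware ones.
assert timestamp.tzinfo is not None
assert timestamp.utcoffset() == timezone.utc.utcoffset(None)

# A naive datetime would trip the same check:
naive = datetime.now()
assert naive.tzinfo is None
```

Checking `utcoffset()` rather than `tzinfo is timezone.utc` also accepts equivalent UTC implementations (e.g. a zoneinfo UTC), which keeps the invariant about the offset rather than a specific tzinfo object.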
1072-1083: This doesn't currently test determinism.

The helper contract is deterministic serialization, but this only checks that some Turtle was produced. It would still pass if triple ordering changed across runs. Prefer comparing the exact output of two equivalent graphs built with different insertion orders.
♻️ Suggested stronger test
- from rdflib import OWL, RDF, Graph, URIRef
+ from rdflib import OWL, RDF, Graph, Literal, URIRef

- g = Graph()
  iri = URIRef("http://example.org/ontology")
- g.add((iri, RDF.type, OWL.Ontology))
- result = serialize_deterministic(g)
- assert isinstance(result, str)
- assert "Ontology" in result
+ predicate = URIRef("http://example.org/p")
+ g1 = Graph()
+ g2 = Graph()
+ for graph, values in ((g1, ["a", "b"]), (g2, ["b", "a"])):
+     graph.add((iri, RDF.type, OWL.Ontology))
+     for value in values:
+         graph.add((iri, predicate, Literal(value)))
+ assert serialize_deterministic(g1) == serialize_deterministic(g2)
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_bare_repository_service.py` around lines 1072 - 1083, The test test_serialize_deterministic currently only checks that Turtle is produced but not that serialization is stable; update it to construct two equivalent rdflib.Graph instances (e.g., using URIRef, RDF.type, OWL.Ontology and any additional triples) with different insertion orders, call serialize_deterministic on both graphs, and assert the returned strings are identical (and still assert they are str and contain "Ontology"); this verifies serialize_deterministic is deterministic. Reference serialize_deterministic and the test function name so the change is applied in that test.
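The determinism property the review wants tested can be shown with a stdlib-only stand-in for serialize_deterministic (the real helper is rdflib-based; this sketch just illustrates why sorting before serializing makes output independent of insertion order):

```python
def serialize_deterministic(triples):
    """Toy stand-in: sort triples before joining so the output string
    does not depend on the order the triples were added."""
    return "\n".join(f"{s} {p} {o} ." for s, p, o in sorted(triples))

# Same triples, opposite insertion orders.
t1 = [
    ("ex:A", "rdf:type", "owl:Ontology"),
    ("ex:A", "ex:p", '"a"'),
    ("ex:A", "ex:p", '"b"'),
]
t2 = list(reversed(t1))

# The strong assertion: equivalent graphs serialize identically.
assert serialize_deterministic(t1) == serialize_deterministic(t2)
# The original, weaker assertion still holds too.
assert "Ontology" in serialize_deterministic(t1)
```

A test built only on the weaker `"Ontology" in result` check passes for any serializer; the equality check across two insertion orders is what actually pins down determinism.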
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/unit/test_ontology_extractor.py`:
- Around line 191-208: The tests claim to exercise the global fallback for
title/description when ontology_iri is None but they use TURTLE_WITH_DC which
contains an owl:Ontology node, so the fallback branch in
OntologyMetadataExtractor.extract_metadata is never hit; update both
TestExtractTitleFallback.test_title_found_via_global_search and
TestExtractDescriptionFallback.test_description_found_via_global_search to
invoke extract_metadata so that ontology_iri is actually None (either by passing
ontology_iri=None if the API accepts it or by using a turtle fixture/string that
does NOT include an owl:Ontology node, e.g., a TURTLE_WITHOUT_ONTOLOGY variant),
ensuring the extractor must perform the global search to find title/description.
In `@tests/unit/test_projects_routes_coverage.py`:
- Line 1121: Tests are using AsyncMock instances but asserting with sync
assertions (e.g., mock_pool.enqueue_job.assert_called_once(),
mock_db.add.assert_called_once()), which will not fail if the coroutine was
never awaited; find the AsyncMock usages (e.g., mock_pool.enqueue_job,
mock_db.add and other async mocks in these tests) and replace the sync
assertions with the async variants (use assert_awaited_once() or
assert_awaited_once_with(...) as appropriate) so the tests will fail when the
async call is not awaited.
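The sync-vs-awaited distinction in the finding above can be demonstrated with a minimal AsyncMock (mock_db and the method names here are illustrative stand-ins, not the project's fixtures):

```python
import asyncio
from unittest.mock import AsyncMock

mock_db = AsyncMock()

# Calling an AsyncMock without awaiting still registers a *call*...
coro = mock_db.add("row")
mock_db.add.assert_called_once()  # passes even though nothing actually ran

# ...but the await count stays at zero, so the awaited variant catches it.
try:
    mock_db.add.assert_awaited_once()
    awaited_check_passed = True
except AssertionError:
    awaited_check_passed = False
assert awaited_check_passed is False

coro.close()  # discard the never-awaited coroutine

# Once the coroutine is actually awaited, the awaited assertions hold,
# and the *_with form also pins down the arguments.
asyncio.run(mock_db.enqueue_job("lint", "proj-1"))
mock_db.enqueue_job.assert_awaited_once_with("lint", "proj-1")
```

This is why swapping `assert_called_once()` for `assert_awaited_once()` (or `assert_awaited_once_with(...)`) makes the tests fail when a route builds the coroutine but never awaits it.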
---
Nitpick comments:
In `@tests/unit/test_bare_repository_service.py`:
- Around line 448-466: The test test_fast_forward_merge currently asserts only
result.success and result.merge_commit_hash; update it to verify the target
branch actually moved and content changed: after calling
initialized_service.get_repository(project_id) and result =
repo.merge_branch("feature", "main"), resolve the new head of "main" (via
repo.get_branch_head or repo.get_commit/get_branch methods) and assert it equals
result.merge_commit_hash, and additionally read the file (e.g., via
initialized_service.get_file or repo.read_blob at "main" / "ontology.ttl") to
assert the ontology content matches the feature commit content that was added by
commit_changes; use the existing create_branch, commit_changes, get_repository,
and repo.merge_branch identifiers to locate where to add these assertions.
- Around line 255-263: The test currently only checks ordering of CommitInfo
timestamps; update the test around initialized_service.get_history and the
history list to assert that each CommitInfo.timestamp is timezone-aware and in
UTC (i.e., tzinfo is not None and equals datetime.timezone.utc) for at least the
three checked entries (history[0], history[1], history[2]); ensure the test
imports datetime.timezone if needed and place the UTC-awareness assertions
alongside the existing ordering assertions so the test enforces the UTC-aware
datetime invariant for CommitInfo.timestamp.
- Around line 1072-1083: The test test_serialize_deterministic currently only
checks that Turtle is produced but not that serialization is stable; update it
to construct two equivalent rdflib.Graph instances (e.g., using URIRef,
RDF.type, OWL.Ontology and any additional triples) with different insertion
orders, call serialize_deterministic on both graphs, and assert the returned
strings are identical (and still assert they are str and contain "Ontology");
this verifies serialize_deterministic is deterministic. Reference
serialize_deterministic and the test function name so the change is applied in
that test.
In `@tests/unit/test_lint_routes_extended.py`:
- Around line 129-234: The two near-duplicate tests
test_get_issues_with_rule_id_filter and test_get_issues_with_subject_iri_filter
should be merged into a single parametrized test (e.g.,
test_get_issues_with_filter) using pytest.mark.parametrize to pass the query
string, expected response field ("rule_id" or "subject_iri") and expected value
("R005" or "http://example.org/SpecificClass"); keep the shared setup
(authed_client, mock_access, now, project_uuid, run_uuid, mock_run, mock_issue
template, mock_run_result, mock_count_result, mock_issues_result, and
mock_session.execute side_effect) and vary only the
mock_issue.rule_id/subject_iri and the client.get URL per parameter. Ensure the
new test still asserts response.status_code == 200, data["total"] == 1 and
data["items"][0][expected_field] == expected_value and keep the AsyncMock patch
target verify_project_access and use the same authed_client fixture.
In `@tests/unit/test_normalization_routes.py`:
- Around line 356-357: Add two unit tests to TestGetJobStatus to cover the
missing status mappings: one for a "deferred" job that asserts the route returns
"pending" and one for an "in_progress" job that asserts the route returns
"running". In each test patch get_arq_pool (new_callable=AsyncMock), set
mock_project_service.get to return _make_project_response(), create a mocked Job
(patch "ontokit.api.routes.normalization.Job") whose status() returns
_get_job_status("deferred") or _get_job_status("in_progress"), call
client.get(...) for a representative job_id, and assert response.status_code ==
200 and response.json()["status"] == "pending" or "running" respectively so the
route's status mapping is fully covered.
In `@tests/unit/test_ontology_extractor.py`:
- Around line 310-341: The tests test_update_metadata_no_existing_title and
test_update_metadata_no_existing_description currently only assert messages in
changes; update them to also parse/inspect the returned content from
OntologyMetadataUpdater.update_metadata (use the content variable) to ensure the
dc:title and dc:description triples were actually added: for
test_update_metadata_no_existing_title assert that the serialized turtle
(content) contains a dc:title triple with "Brand New Title" (and/or the dc:title
predicate URI), and for test_update_metadata_no_existing_description assert the
serialized turtle (content) contains a dc:description triple with "Brand New
Description"; keep the existing checks against changes but add these content
assertions to the tests that use turtle_no_title and turtle_no_desc
respectively.
- Around line 64-71: The parametrized test in test_ontology_extractor.py is
missing coverage for the .owx extension; update the pytest.mark.parametrize
tuple in the test (the list of ("ext","expected") values) to include (".owx",
"xml") (matching OntologyMetadataExtractor.FORMAT_MAP) so the test validates
that .owx is detected as "xml" by the extractor.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c675b015-d105-4ea7-8fe7-4cb2acac074a
📒 Files selected for processing (11)
- tests/unit/test_bare_repository_service.py
- tests/unit/test_github_sync.py
- tests/unit/test_indexed_ontology.py
- tests/unit/test_lint_routes_extended.py
- tests/unit/test_normalization_routes.py
- tests/unit/test_normalization_service.py
- tests/unit/test_ontology_extractor.py
- tests/unit/test_ontology_index_service.py
- tests/unit/test_projects_routes_coverage.py
- tests/unit/test_pull_request_service_extended.py
- tests/unit/test_worker.py
✅ Files skipped from review due to trivial changes (1)
- tests/unit/test_normalization_service.py
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/unit/test_github_sync.py
- Fix fallback tests to use turtle without owl:Ontology IRI so the global search path is actually exercised
- Add .owx to format detection parametrize
- Add content assertions to update_metadata tests
- Replace sync assert_called_once with assert_awaited_once for AsyncMock objects in projects routes tests

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
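The assert_awaited_once swap matters because AsyncMock tracks awaits separately from plain calls; a minimal illustration:

```python
import asyncio
from unittest.mock import AsyncMock

# AsyncMock records awaits separately from calls, so awaited-ness should be
# asserted with assert_awaited_once rather than assert_called_once.
mock = AsyncMock(return_value=42)

async def use_mock():
    return await mock()

result = asyncio.run(use_mock())
assert result == 42
mock.assert_awaited_once()  # raises AssertionError if the mock was never awaited
```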
@CodeRabbit review

✅ Actions performed: Review triggered.

@coderabbitai review

✅ Actions performed: Review triggered.

@CodeRabbit review

✅ Actions performed: Review triggered.
Summary
- Fix missing xsd: prefix in sample_ontology_turtle fixture
- HTTP_422_UNPROCESSABLE_ENTITY → HTTP_422_UNPROCESSABLE_CONTENT
- Fix MagicMock(name=...) bug in PR service test (sets repr, not .name attribute)

Coverage by module (current state)
collab/presence.py, collab/transform.py, collab/protocol.py, core/encryption.py, core/auth.py, core/config.py, core/beacon_token.py, services/notification_service.py, services/embedding_text_builder.py, services/rdf_utils.py, services/user_service.py, services/sitemap_notifier.py, services/remote_sync_service.py, services/change_event_service.py, services/search.py, services/consistency_service.py, services/join_request_service.py, services/ontology.py, services/github_service.py, services/linter.py, services/cross_reference_service.py, services/duplicate_detection_service.py, services/ontology_index.py, services/normalization_service.py, services/ontology_extractor.py, services/github_sync.py, services/pull_request_service.py, services/project_service.py, services/suggestion_service.py, services/embedding_service.py, git/bare_repository.py, worker.py, api/routes/auth.py, api/routes/analytics.py, api/routes/notifications.py, api/routes/search.py, api/routes/embeddings.py, api/routes/user_settings.py, api/routes/quality.py, api/routes/lint.py, api/routes/pull_requests.py, api/routes/normalization.py, api/routes/suggestions.py, api/routes/projects.py

CI changes
Added PostgreSQL (pgvector/pgvector:pg17) and Redis 7 service containers to the test job in .github/workflows/release.yml, with DATABASE_URL and REDIS_URL environment variables for integration tests.

Test plan
Closes #46
🤖 Generated with Claude Code
Summary by CodeRabbit
Bug Fixes
Tests
Chores
Style