Walkthrough

Replaces the SonarCloud-based CircleCI setup with a self-contained, Docker-driven CI pipeline; adds 30+ integration test suites with JSON schemas and a custom Jest sequencer; and applies multiple DB migration/distribution updates plus small package and test-config changes.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant CI as CircleCI
    participant Docker as Docker Compose
    participant Services as Services (mentoring, user, scheduler, comms)
    participant DB as PostgreSQL
    participant Tests as Jest Integration Suite
    CI->>Docker: checkout repos & pull/build images
    Docker->>Services: docker-compose up (start containers)
    Services->>DB: run migrations, enable extensions, apply seeds
    DB-->>Services: migrations/seeds complete
    Docker->>Services: health-check loop (wait for ready)
    Services-->>Docker: healthy responses
    Docker->>Tests: start Jest (integration tests)
    Tests->>Services: HTTP requests (auth flows, seed/setup)
    Services->>DB: queries / data operations
    DB-->>Services: data responses
    Services-->>Tests: API responses (validated against schemas)
    Tests-->>CI: test results & diagnostics (on failure)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 22
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/setupFileAfterEnv.js (1)
16-28: ⚠️ Potential issue | 🟠 Major

Race condition: database connection may not be ready when tests start.

`db.connect()` uses a fire-and-forget callback, and `beforeAll` is empty. Tests can begin executing before the connection is established, leading to flaky failures. Move the connection into `beforeAll` using the promise-based API:

Proposed fix

```diff
-db.connect((err) => {
-  if (err) {
-    console.error('Database connection error:', err)
-  } else {
-    console.log('Connected to DB')
-  }
-})
-
-global.db = db
-
 beforeAll(async () => {
-  // You can add any setup code you need here
+  await db.connect()
+  console.log('Connected to DB')
+  global.db = db
 })
```
🤖 Fix all issues with AI agents
In @.circleci/config.yml:
- Around line 112-130: The "Wait for services to be healthy" step's loop can
finish without containers being healthy and currently lets the pipeline
continue; after the for loop that checks the unhealthy and starting variables,
add an explicit health check that re-evaluates docker ps (or checks if
$unhealthy or $starting are non-empty) and, if any containers remain
unhealthy/starting, emit a clear error message and exit non-zero (e.g., exit 1)
to fail the job so migrations won't run against unhealthy services.
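The gate described above can be sketched as a small shell helper; `fail_if_unhealthy` is a hypothetical name, and the `$unhealthy`/`$starting` variables are assumed to hold the container names collected by the existing polling loop:

```shell
# Hypothetical helper for the "Wait for services to be healthy" step:
# given lists of unhealthy and still-starting container names, fail fast
# when any remain so migrations never run against broken services.
fail_if_unhealthy() {
  unhealthy="$1"
  starting="$2"
  if [ -n "$unhealthy" ] || [ -n "$starting" ]; then
    echo "ERROR: containers not healthy: ${unhealthy} ${starting}" >&2
    return 1
  fi
  echo "All containers healthy"
}

# In the CircleCI step, after the polling loop:
#   fail_if_unhealthy "$unhealthy" "$starting" || exit 1
fail_if_unhealthy "" ""   # prints "All containers healthy"
```

The `|| exit 1` at the call site is what actually fails the job, since a non-zero return from a function alone does not stop a multi-line `run` step unless `set -e` is in effect.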
In `@src/distributionColumns.psql`:
- Around line 1-3: Change the incorrect creation of sessions as a reference
table: replace the call to create_reference_table('sessions') with the
distributed table creation used by other operational tables (e.g.
session_attendees) by calling create_distributed_table('sessions', 'id'), and
ensure the sessions table uses the composite primary key (tenant_code, id) for
multi-tenant sharding consistent with session_request and post_session_details.
In `@src/integration-tests-new/config/config.spec.js`:
- Around line 1-17: The test files under integration-tests-new (e.g.,
config.spec.js, admin.spec.js, cloud-services.spec.js) are using the singular
extension ".spec.js" but the Jest config variable testMatch in
integrationJest.config.js expects the plural pattern
'<rootDir>/integration-tests-new/**/*.specs.js', so the files are skipped; fix
by either renaming the test files to use the plural suffix (e.g.,
config.specs.js) or modify the testMatch glob to include both patterns (e.g.,
add '**/*.spec.js' or use a combined pattern) so Jest discovers these tests.
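The combined-pattern option above can be sketched in one glob; the path mirrors the comment, and `?(s)` is an optional group supported by Jest's glob matching:

```javascript
// Hypothetical integrationJest.config.js excerpt: a single testMatch glob
// that matches both `.spec.js` and `.specs.js`, so neither naming
// convention is silently skipped.
const testMatch = ['<rootDir>/integration-tests-new/**/*.spec?(s).js']

module.exports = { testMatch }
```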
In `@src/integration-tests-new/default-rule/schemas/default-rule.schemas.json`:
- Around line 35-37: The read-schema in default-rule.schemas.json uses the wrong
field names; replace any occurrences of "matching_operator" with "operator" and
"role_config" with "requester_roles_config" within the read (response) schema so
the field names match the create/update schemas and the rest of the codebase
(validators/helpers/services/tests); ensure the value types for "operator" and
"requester_roles_config" match the existing definitions used by create/update
schemas (adjust object/array/string types accordingly) and update the read
schema property names wherever "matching_operator" or "role_config" appear.
In `@src/integration-tests-new/modules/modules.spec.js`:
- Around line 33-57: The tests use the literal `{id}` in the update/delete URLs
so they never hit a real resource; before calling the update or delete requests
in the tests inside the describe('POST /mentoring/v1/modules/update/{id}')
block, create a module via the create endpoint (capture the returned module id
from the create response), then replace `{id}` with that captured id when
constructing the URL for the update and delete requests (also ensure
Content-Type and auth headers are preserved and optionally clean up the created
module in a teardown or afterEach).
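The create-then-target flow reduces to building the URL from a captured id rather than the literal `{id}`; `moduleUrl` and the response shape below are illustrative, not the project's actual helpers:

```javascript
// Hypothetical sketch: interpolate a real id into update/delete URLs
// instead of sending the literal '{id}' placeholder.
function moduleUrl(action, id) {
  return `/mentoring/v1/modules/${action}/${id}`
}

// In the spec, the id would come from the create response, e.g.:
//   const createRes = await request(BASE).post('/mentoring/v1/modules/create').send(payload)
//   const moduleId = createRes.body.result.id
const moduleId = 42 // stand-in for the captured id
console.log(moduleUrl('update', moduleId)) // → /mentoring/v1/modules/update/42
```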
In `@src/integration-tests-new/org-admin/schemas/org-admin.schemas.json`:
- Around line 328-389: The POST_mentoring_v1_org-admin_uploadSampleCSV entry is
a concrete example, not a JSON Schema; replace it with a proper schema that
defines type: "object" and properties for responseCode (string, optional enum
like "OK"), message (string), result (array), and meta (object) where
meta.formsVersion is an array of objects with properties id (number), type
(string), version (number), plus correlation (string) and meetingPlatform
(string); ensure required fields mirror actual response and keep the top-level
entry named "POST_mentoring_v1_org-admin_uploadSampleCSV" so toMatchSchema calls
validate structure rather than literal values.
In `@src/integration-tests-new/permissions/permissions.spec.js`:
- Around line 38-56: The tests use the literal string "{id}" in URLs (url =
`/mentoring/v1/permissions/update/{id}` and delete variant) which hits no route;
fix by capturing the created permission ID from the create test response and
interpolating it into the update/delete URLs. Declare a shared variable (e.g.,
permissionId) in the spec scope, set permissionId = res.body.id (or the correct
ID field) in the create POST test (or create it in beforeAll), then replace
`/mentoring/v1/permissions/update/{id}` and
`/mentoring/v1/permissions/delete/{id}` with template strings using that
permissionId so updatePermission and deletePermission tests target the actual
resource; keep using BASE, request and schemas as before.
- Around line 1-84: The test file is named permissions.spec.js but Jest is
configured to run only *.specs.js, so it's being skipped; rename the test file
to permissions.specs.js and update any references. Specifically, rename the file
containing the describe block "permissions endpoints generated from
api-doc.yaml" from permissions.spec.js to permissions.specs.js so Jest's
testMatch (integrationJest.config.js) picks it up; also apply the same renaming
pattern to the other 18 test files using the singular .spec extension.
In `@src/integration-tests-new/question-set/question-set.spec.js`:
- Around line 1-74: Tests use the literal placeholder 'string' for the
x-auth-token in request setup (see the three places in question-set.spec.js
where req = req.set('x-auth-token', 'string')), so successful requests will
return 401/403; replace that with a real token fetched from the project's login
helper: add a beforeAll that calls the existing auth/login utility (e.g.,
login(), getAuthToken(), or global.signIn) to obtain a token and store it in a
variable (e.g., authToken), then update each request that calls
set('x-auth-token', 'string') to use that token (set('x-auth-token',
authToken)); ensure the helper uses valid credentials/environment and clean up
if required.
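The token wiring described above might look like this; `login()` is a stub standing in for whichever auth helper the project actually exposes, and the token value is invented for illustration:

```javascript
// Hypothetical token wiring: fetch a real token once in beforeAll,
// then reuse it on every request instead of the literal 'string'.
let authToken

async function login() {
  // Stub; the real helper would hit the user service with valid credentials.
  return { token: 'jwt-from-login' }
}

async function setupAuth() {
  authToken = (await login()).token
}

// In the spec:
//   beforeAll(setupAuth)
//   ...then per request: req = req.set('x-auth-token', authToken)
```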
In `@src/integration-tests-new/report-mapping/report-mapping.spec.js`:
- Line 10: Replace the hardcoded placeholder header assignment (req =
req.set('x-auth-token', 'bearer {{token}}')) with a real token obtained from the
shared helper: call await commonHelper.adminLogin() to get the token, then set
the request header using the correct Bearer formatting (e.g., req =
req.set('x-auth-token', `Bearer ${token}`)); ensure the adminLogin call is
awaited before any requests that use the req variable so authenticated tests use
a valid token.
In `@src/integration-tests-new/report-queries/report-queries.spec.js`:
- Around line 1-66: Add a beforeAll hook that calls the shared auth helper
(e.g., commonHelper.adminLogin() or whatever login helper is used in other
specs) to get a real token, store it in a variable (e.g., authToken) and replace
the placeholder header values in each test's req.set('x-auth-token', 'bearer
{{token}}') with the real header using the token (e.g., req.set('x-auth-token',
`bearer ${authToken}`)); update all occurrences in this file (tests under POST
/mentoring/v1/report-queries/create, GET /mentoring/v1/report-queries/read, POST
/mentoring/v1/report-queries/update, DELETE /mentoring/v1/report-queries/delete)
so requests use the obtained token.
In `@src/integration-tests-new/report-type/report-type.spec.js`:
- Line 10: The test is using a literal Postman-style placeholder in the
Authorization header (req.set('x-auth-token', 'bearer {{token}}')), causing
401s; replace that hardcoded string with a real token source by wiring the
request header to your test auth helper or environment variable (for example use
the login helper that returns a JWT or process.env.AUTH_TOKEN) wherever
req.set('x-auth-token', 'bearer {{token}}') appears (including the other
occurrences in this spec) so tests send a valid bearer token.
- Line 37: The tests reference an undefined TOKEN constant (used when setting
the request header via req.set('x-auth-token', TOKEN) on the read/update tests),
causing a ReferenceError; fix by declaring and initializing TOKEN at the top of
the spec (or make the read/update tests use the same literal header string used
in create/delete, e.g. 'bearer {{token}}') so that req.set('x-auth-token',
TOKEN) (and the other occurrence) receive a defined value.
- Around line 1-3: Add the Jest configuration to load the setup file so the
toMatchSchema matcher from src/setupFileAfterEnv.js is registered: update the
jest config module (the object exported from src/jest.config.js) to include the
setupFilesAfterEnv property pointing to the setup file (ensure the string
references './setupFileAfterEnv.js' or '<rootDir>/setupFileAfterEnv.js') so
expect.extend(matchers) in src/setupFileAfterEnv.js runs before tests that call
toMatchSchema in report-type.spec.js.
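A minimal sketch of that config addition, using one of the two path forms the comment suggests (the surrounding config keys are assumed, not copied from the repo):

```javascript
// Hypothetical excerpt of the Jest config object described above: register
// the setup file so expect.extend(matchers) runs before each test file,
// making toMatchSchema available in the specs.
const config = {
  setupFilesAfterEnv: ['<rootDir>/setupFileAfterEnv.js'],
}

module.exports = config
```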
In `@src/integration-tests-new/reports/reports.spec.js`:
- Line 10: The tests currently set the header using the literal placeholder
'bearer {{token}}' which causes 401/403; replace this with a real token fetched
at runtime (for example call a test helper like getAuthToken() or read
process.env.TEST_TOKEN) and set the header using string concatenation or
template interpolation (e.g., req.set('x-auth-token', 'bearer ' + token)), then
update every occurrence of req.set('x-auth-token', 'bearer {{token}}')
(including the repeated lines flagged) to use the runtime token value.
In `@src/integration-tests-new/requestSessions/requestSessions.specs.js`:
- Line 50: The org-id header is being set with the whole organization object for
mentee requests; update occurrences that use menteeDetails.organizations[0] to
access the id string instead by changing them to
menteeDetails.organizations[0].id.toString() so they match mentorDetails
usage—search for all .set('org-id', menteeDetails.organizations[0]) (e.g., the
requests around where menteeDetails is used) and replace with .set('org-id',
menteeDetails.organizations[0].id.toString()) to ensure consistent header values
across tests.
In `@src/integration-tests-new/role-extension/role-extension.spec.js`:
- Line 90: The test references an undeclared TOKEN variable in the request
header (req.set('x-auth-token', TOKEN)), causing a ReferenceError; replace TOKEN
with the intended token literal used elsewhere (e.g., 'bearer {{token}}' or
'test-token') so req.set('x-auth-token', ...) uses a defined string, or wire in
the correct test token variable if one exists, ensuring the header-setting call
in this test matches the other tests' token format.
In
`@src/integration-tests-new/rolePermissionMapping/rolePermissionMapping.spec.js`:
- Line 8: The test builds a URL with a literal "{role_id}" so requests hit a
non-existent route; replace the placeholder with a real role ID variable (e.g.,
use a test fixture or created roleId) wherever the URL is constructed (the const
url declarations in rolePermissionMapping.spec.js) so the path becomes
`/mentoring/v1/rolePermissionMapping/create/${roleId}` (apply the same change at
the four occurrences noted) and ensure the tests use that roleId when calling
the endpoint before asserting res.status === 201.
- Line 37: The tests use an undefined TOKEN constant which causes a
ReferenceError; update the two usages of req.set('x-auth-token', TOKEN) in
rolePermissionMapping.spec.js (both occurrences) to use a defined token source —
either import or define TOKEN at the top of the test file (preferably from a
shared test helper or process.env) or replace with the same literal used
elsewhere (e.g., 'string'); ensure both occurrences are changed so the test runs
without runtime errors.
In `@src/integration-tests-new/sessions/sessions.specs.js`:
- Around line 286-360: The bug is variable shadowing: the inner let userDetails
in the beforeAll callback (created via commonHelper.mentorLogIn()) hides the
outer describe-scoped userDetails so tests like the mentor feedback test use the
wrong token; fix by removing the inner declaration and assign the logged-in
mentor to the describe-scoped variable (i.e., replace "let userDetails = await
commonHelper.mentorLogIn()" inside beforeAll with "userDetails = await
commonHelper.mentorLogIn()"), ensuring the same userDetails.token is used in
start/completed and the mentor feedback POST.
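The shadowing bug reduces to a one-keyword difference; `mentorLogIn` here is a stub for `commonHelper.mentorLogIn()` and the token value is invented:

```javascript
// Hypothetical reduction of the bug: `let` inside the setup callback
// creates a new inner binding; plain assignment updates the outer one.
let userDetails // describe-scoped; later tests read userDetails.token

async function mentorLogIn() {
  return { token: 'mentor-token' } // stub for commonHelper.mentorLogIn()
}

async function beforeAllFixed() {
  // was: let userDetails = await mentorLogIn()  // shadows the outer binding
  userDetails = await mentorLogIn() // updates the describe-scoped binding
}
```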
In `@src/integration-tests-new/users/users.spec.js`:
- Around line 1-31: The tests use a literal placeholder token in the
x-auth-token header and lack a login step; add a beforeAll that performs
authentication (reusing the project's commonHelper login helper if available) to
obtain a real token and store it in a variable (e.g., authToken) and replace the
hardcoded 'string' in both tests with that authToken when calling
request(BASE).set('x-auth-token', authToken). Also add a couple of negative
tests for unauthorized access (e.g., missing or invalid token) to mirror other
spec files and ensure the endpoints return 401/403 as expected.
In `@src/integrationJest.config.js`:
- Line 22: The current Jest config's testMatch pattern uses
'<rootDir>/integration-tests-new/**/*.specs.js' which only matches files with
the plural '.specs.js' and misses existing tests named '*.spec.js'; update the
testMatch entry (the testMatch key in src/integrationJest.config.js) to broaden
the glob so it matches both '*.spec.js' and '*.specs.js' (for example by using a
pattern like '**/*.spec?(s).js' or an array with both globs) so the files
admin.spec.js, config.spec.js, cloud-services.spec.js, etc., are included in the
test run.
🟠 Major comments (37)
src/integration-tests-new/rolePermissionMapping/rolePermissionMapping.spec.js (1)

10-10: ⚠️ Potential issue | 🟠 Major

Auth token is a hardcoded placeholder `'string'` — requests will likely be rejected as unauthorized.

Setting `x-auth-token` to the literal value `'string'` won't pass any real auth middleware. For integration tests to meaningfully validate a 201 response, a valid token (or a mechanism to obtain one during test setup) is needed.

Also applies to: 26-27
src/integration-tests-new/rolePermissionMapping/schemas/rolePermissionMapping.schemas.json (1)

1-72: 🛠️ Refactor suggestion | 🟠 Major

Schemas lack `required` properties — they will match virtually any object, including `{}`.

Neither schema specifies `required` arrays at any level, so an empty object `{}` would pass validation. This largely defeats the purpose of response schema validation in the tests.

Consider adding `required` fields at minimum for the top-level properties and for `result` in the create schema:

Example for the create schema

```diff
 "POST_mentoring_v1_rolePermissionMapping_create_role_id": {
   "$schema": "http://json-schema.org/draft-04/schema#",
   "type": "object",
+  "required": ["responseCode", "message", "result"],
   "properties": {
     "responseCode": { "type": "string" },
```

Apply the same pattern to the delete schema and nested objects as appropriate.
src/integration-tests-new/issues/issues.spec.js (1)

10-10: ⚠️ Potential issue | 🟠 Major

Hardcoded placeholder auth token will cause test failures or false positives.

`'string'` is not a valid auth token. Against a real service this will return 401; if the test still passes with 200, it means the endpoint is unauthenticated and the header is meaningless. Consider sourcing a valid token from the test environment (e.g., `process.env.AUTH_TOKEN`).

src/integration-tests-new/issues/schemas/issues.schemas.json (1)
1-36: ⚠️ Potential issue | 🟠 Major

Schema has no `required` fields — validation will pass even for empty objects.

Without a `required` array, `toMatchSchema` will accept `{}` as valid. At minimum, add the fields the API must always return (e.g., `responseCode`, `message`) to a `required` list. Consider also setting `"additionalProperties": false` if the shape should be strict.

Proposed fix (example)

```diff
 "POST_mentoring_v1_issues_create": {
   "$schema": "http://json-schema.org/draft-04/schema#",
   "type": "object",
+  "required": ["responseCode", "message", "result", "meta"],
   "properties": {
     "responseCode": {
```

src/integration-tests-new/feedback/feedback.spec.js (1)
10-10: ⚠️ Potential issue | 🟠 Major

Auth token `'string'` is a placeholder — the "should return 200" test will likely fail against a real server.

If the API validates tokens, this request will be rejected. Use a valid token (e.g., from a login helper or environment variable) for the authenticated test cases to be meaningful.

src/integration-tests-new/feedback/schemas/feedback.schemas.json (1)
2-88: 🛠️ Refactor suggestion | 🟠 Major

No `required` arrays in the GET schema — any subset of fields (including none) passes validation.

Without `required` at the top level and inside the form item, the schema accepts `{}` as a valid response. Add `required` arrays to enforce the contract the test is meant to verify.

Example for the top-level and form item objects

```diff
 "GET_/mentoring/v1/feedback/forms/{SessionId}": {
   "type": "object",
   "properties": {
     "responseCode": { "type": "string" },
     "message": { "type": "string" },
     "result": { ... },
     "meta": { ... }
   },
+  "required": ["responseCode", "message", "result"]
 }
```

Similarly, add `"required"` inside the form item schema for fields that must always be present (e.g., `"id"`, `"name"`, `"type"`, `"status"`).

src/integration-tests-new/report-type/schemas/report-type.schemas.json (1)
1-61: 🛠️ Refactor suggestion | 🟠 Major

Schemas lack `required` arrays, making validation too permissive.

None of the four schemas declare `required` on the top-level properties (`responseCode`, `message`, `result`) or on `result` properties (`id`, `title`, etc.). This means a response with an empty body `{}` would pass validation, defeating the purpose of schema-based contract testing.

Also, `id` is typed as `"number"` — consider using `"integer"` if IDs are always whole numbers, to catch accidental floating-point values.

Example fix for the create schema (apply similarly to others)

```diff
 "POST_mentoring_v1_report-type_create": {
   "type": "object",
   "properties": {
     "responseCode": { "type": "string" },
     "message": { "type": "string" },
     "result": {
       "type": "object",
       "properties": {
         "created_at": { "type": "string" },
         "updated_at": { "type": "string" },
         "title": { "type": "string" },
         "id": {
-          "type": "number"
+          "type": "integer"
         },
         "deleted_at": { "type": "null" }
-      }
+      },
+      "required": ["id", "title", "created_at", "updated_at"]
     },
     ...
-  }
+  },
+  "required": ["responseCode", "message", "result"]
 },
```

src/integration-tests-new/role-extension/role-extension.spec.js (1)
5-104: ⚠️ Potential issue | 🟠 Major

Tests assume sequential execution order with no shared state.

The `read`, `update`, and `delete` tests query `?title=mentee`, presumably relying on the resource created by the `create` test. However:

- Jest does not guarantee execution order across `describe` blocks by default (especially with `--runInBand` not being the default).
- The `create` test sends `title: 'string'`, not `title: 'mentee'`, so even if ordering is preserved, the resource looked up in subsequent tests won't match what was created.

Consider using `beforeAll` to create the test resource and `afterAll` to clean it up, storing the created entity's identifier for use in subsequent tests.

src/integration-tests-new/role-extension/schemas/role-extension.schemas.json (1)
1-73: 🛠️ Refactor suggestion | 🟠 Major

Schemas lack `required` fields — validation is effectively a no-op.

None of the four schemas declare `required` at the top level (e.g., `["responseCode", "message", "result"]`) or within the `result` object. As a result, `toMatchSchema` will pass even for an empty `{}` response, defeating the purpose of schema validation.

Add `required` arrays at minimum on the top-level object and on `result`:

Example fix for the create schema (apply similarly to others)

```diff
 "POST_mentoring_v1_role-extension_create": {
   "type": "object",
+  "required": ["responseCode", "message", "result", "meta"],
   "properties": {
     "responseCode": { "type": "string" },
     ...
     "result": {
       "type": "object",
+      "required": ["id", "title", "label", "status"],
       "properties": {
```

src/integration-tests-new/role-extension/role-extension.spec.js (1)
9-11: ⚠️ Potential issue | 🟠 Major

Duplicate `x-auth-token` header — first value is silently overwritten.

Line 10 sets the header to `'bearer {{token}}'`, then line 11 immediately overwrites it with `'test-token'`. The same pattern repeats at lines 36–37. This looks like leftover scaffolding from a template. Pick one consistent token strategy and remove the dead set call.

Additionally, all token values across the file (`'bearer {{token}}'`, `'test-token'`) are non-functional placeholders. Against a real auth-protected API these tests will fail with 401/403 rather than exercising the happy path. Consider reading a valid token from the environment or a shared setup fixture (e.g., a login helper that runs before the suite).

Proposed fix (example using env variable)

```diff
+const TOKEN = process.env.AUTH_TOKEN || 'test-token'
+
 describe('role-extension endpoints generated from api-doc.yaml', () => {
   describe('POST /mentoring/v1/role-extension/create', () => {
     test('should return 201', async () => {
       const url = `/mentoring/v1/role-extension/create`
       let req = request(BASE).post(url)
-      req = req.set('x-auth-token', 'bearer {{token}}')
-      req = req.set('x-auth-token', 'test-token')
+      req = req.set('x-auth-token', TOKEN)
```

src/integration-tests-new/reports/reports.spec.js (1)
109-129: ⚠️ Potential issue | 🟠 Major

POST reportData endpoint should likely expect 200, not 201.

This is a query/reporting endpoint, not a resource creation endpoint. Expecting `201 Created` (Line 125) seems incorrect — `200 OK` is the standard response for data retrieval operations.

src/integration-tests-new/reports/reports.spec.js (1)
60-70: ⚠️ Potential issue | 🟠 Major

GET read endpoint should not expect HTTP 201.

A read/GET operation conventionally returns `200 OK`, not `201 Created`. Line 61's test description and Line 66's assertion both use 201. This will either always fail (if the API correctly returns 200) or masks an API design issue.

```diff
-    test('should return 201', async () => {
+    test('should return 200', async () => {
       const url = `/mentoring/v1/reports/read?id=1`
       let req = request(BASE).get(url)
       req = req.set('x-auth-token', 'bearer {{token}}')
       const res = await req
-      expect(res.status).toBe(201)
+      expect(res.status).toBe(200)
```

src/integration-tests-new/reports/reports.spec.js (1)
73-93: ⚠️ Potential issue | 🟠 Major

Update test uses literal `"string"` as path param and POST instead of PUT/PATCH.

Line 75 uses `/update/string` — this is a placeholder, not a valid resource ID. Also, `organization_id` is sent as a number (`1`) on Line 85, but the create payload and the schema (`reports.schemas.json` Line 101-103) define it as a string type. This type mismatch will cause validation issues or silent data coercion.

Additionally, Line 76 uses `.post()` for an update — typically this should be PUT or PATCH to follow REST conventions, unless the API intentionally uses POST.

```diff
-      const url = `/mentoring/v1/reports/update/string`
+      const url = `/mentoring/v1/reports/update/${createdReportId}`
```

```diff
-        organization_id: 1,
+        organization_id: 'string',
```

src/integration-tests-new/reports/schemas/reports.schemas.json (1)
1-139: ⚠️ Potential issue | 🟠 Major

Schemas lack top-level `required` fields — validation will pass even if `responseCode`, `message`, or `result` are missing.

Without a `required` array at the top level of each schema, `toMatchSchema` will accept a response body like `{}` as valid. This defeats the purpose of schema validation. The same applies to all six schemas in this file.

Example fix for the create schema

```diff
 "POST_mentoring_v1_reports_create": {
   "type": "object",
   "properties": {
     "responseCode": { "type": "string" },
     ...
   },
+  "required": ["responseCode", "message", "result"]
 },
```

src/integration-tests-new/questions/schemas/questions.schemas.json (1)
26-28: ⚠️ Potential issue | 🟠 Major

`options` and `category` are typed as `"null"` only — this will reject non-null values.

The schema enforces `"type": "null"` for both `options` and `category`. However, the PUT test (in `questions.spec.js`) sends `category: { evaluating: 'mentor' }` as part of the request, and if the API echoes it back the schema validation will fail. If these fields can hold objects or arrays in certain responses, use a union type like `["object", "null"]` or `["array", "null"]`.

Proposed fix (example for category in the PUT schema)

```diff
 "category": {
-  "type": "null"
+  "type": ["object", "null"]
 },
```

Apply the same pattern to `options` if it can return non-null values.

Also applies to: 35-37, 123-125, 132-134
src/integration-tests-new/questions/questions.spec.js (1)
10-10: ⚠️ Potential issue | 🟠 Major

Replace placeholder token with valid authentication.

Lines 10, 51, and 88 use the literal string `'string'` as the `x-auth-token` value. This is a placeholder that will cause authentication failures when tests run against a real service. The tests expect success responses (201, 202, 200) but cannot authenticate with an invalid token.

Replace with a valid token sourced from environment variables or user setup, matching the pattern used in other integration test files (e.g., `userDetails.token` as seen in sessions.spec.js).

src/integration-tests-new/org-admin/schemas/org-admin.schemas.json (1)
390-395: ⚠️ Potential issue | 🟠 Major

`updateTheme` and `themeDetails` schemas are `{"type": "string"}` — this asserts the entire response body is a string.

These endpoints return JSON objects, not strings. The schema should be `{"type": "object", "properties": {...}}` to validate the actual response structure. As-is, `toMatchSchema` will fail for any valid JSON response.

src/integration-tests-new/org-admin/org-admin.spec.js (1)
104-116: ⚠️ Potential issue | 🟠 Major

`uploadSampleCSV` sends a plain string instead of a file/CSV payload.

Line 109 sends `'string'` as the body with `Content-Type: application/json`. A CSV upload endpoint typically expects `multipart/form-data` with a file attachment. This test won't exercise the intended functionality.

src/integration-tests-new/org-admin/org-admin.spec.js (1)
5-23: ⚠️ Potential issue | 🟠 Major

All tests use hardcoded placeholder values — they will fail against a real server.

Every test sends `'string'` as the auth token and dummy payloads. These tests will receive 401/400 responses rather than the expected 200/201. Integration tests need valid auth tokens (e.g., obtained via a login step in `beforeAll`) and realistic request data to be useful. As-is, the entire suite will fail.

src/integration-tests-new/org-admin/org-admin.spec.js (1)
58-80: ⚠️ Potential issue | 🟠 Major

Request body for `updateRelatedOrgs` appears to be a copy of a response schema, not a valid request payload.

The body sent (`responseCode`, `message`, `result`, `meta`) matches the shape of an API response, not a plausible request to update related organizations. This test will likely get a 400/422 instead of the expected 200, and wouldn't test meaningful functionality even if the server ignored unknown fields.

src/integration-tests-new/org-admin/org-admin.spec.js (1)
118-137: ⚠️ Potential issue | 🟠 Major

Inconsistent auth header name and value — tests will likely fail or test the wrong thing.

Line 122 uses `X-auth-token` (capital `X`) with value `'bearer {{token}}'`, while all other tests use `x-auth-token` with value `'string'`. The `{{token}}` looks like an unresolved template placeholder (Postman/Insomnia style) that will be sent literally, not as a valid bearer token. The same issue applies to line 143.

HTTP header names are case-insensitive, so the casing difference is benign, but the `'bearer {{token}}'` value will not authenticate — it should either be a real token or the same placeholder `'string'` used elsewhere for consistency.

```diff
- req = req.set('X-auth-token', 'bearer {{token}}')
+ req = req.set('x-auth-token', 'string')
```

src/integration-tests-new/modules/schemas/modules.schemas.json (1)
15-16: ⚠️ Potential issue | 🟠 Major

Inconsistent property name: `Id` vs `id`.

The create schema uses `Id` (capital I) on line 15, while the update schema (line 52) and list schema (line 121) use lowercase `id`. The required array in the list schema also uses lowercase `id`, indicating this is the standard convention. This inconsistency will cause schema validation to fail when the API actually returns `id` in the create response.

Fix by changing line 15 to use lowercase `id`:

Fix

```diff
- "Id": {
+ "id": {
    "type": "number"
  },
```

src/integration-tests-new/mentoring/schemas/mentoring.schemas.json (1)
1-70: ⚠️ Potential issue | 🟠 Major

Schema lacks `required` arrays — validation is effectively a no-op.

Without `required` at the top-level object (lines 3–69), the `params` object (lines 15–32), and the `result` object (lines 38–68), `toMatchSchema` will pass for any object (including `{}`), since JSON Schema only validates types of properties that are present. This defeats the purpose of schema validation.

Proposed fix — add required arrays

```diff
 {
   "GET_mentoring_health": {
     "type": "object",
     "properties": {
       "id": { "type": "string" },
       ...
     },
+    "required": ["id", "ver", "ts", "params", "status", "result"]
   }
 }
```

Similarly, add `"required"` to `params` and `result`:

```json
"params": {
  "type": "object",
  "properties": { ... },
  "required": ["resmsgid", "msgid", "status"]
},
"result": {
  "type": "object",
  "properties": { ... },
  "required": ["version", "healthy", "checks"]
}
```

src/integration-tests-new/permissions/permissions.spec.js (1)
9-10: ⚠️ Potential issue | 🟠 Major

Placeholder `'string'` auth token will fail against a real API.

All requests use `x-auth-token: 'string'` which is not a valid authentication token. If these integration tests run against a real service (as the CI pipeline suggests), every authenticated request will be rejected. Other test suites in this project appear to use the same pattern — consider implementing a shared test helper that acquires a valid token (e.g., via a login flow or from environment variables) and injects it into requests.

```bash
#!/bin/bash
# Check if there's a shared test helper or setup that provides real auth tokens
fd -e js -p 'helper|setup|auth|token' src/integration-tests-new/ --exec head -30 {}
# Also check the test config for any global setup
fd -e js -p 'jest|config|sequencer' src/ -d 2 --exec cat {}
```

.circleci/config.yml (1)
134-139: ⚠️ Potential issue | 🟠 Major

Fragile container name resolution via `grep`.

`docker ps --format "{{.Names}}" | grep mentoring` (and `grep user`) can match multiple containers or partial names (e.g., `mentoring-db`, `user-cache`). If multiple matches occur, the `docker exec` commands will fail. Use a more specific pattern or the `--filter` flag.

🔧 Proposed fix: use grep with anchored pattern or docker-compose service names

- export MENTORING=$(docker ps --format "{{.Names}}" | grep mentoring)
+ export MENTORING=$(docker-compose -f dev-ops/docker-compose.yml ps -q mentoring)

Or if using `docker ps`, use a more specific grep pattern:

- export MENTORING=$(docker ps --format "{{.Names}}" | grep mentoring)
+ export MENTORING=$(docker ps --format "{{.Names}}" | grep -w mentoring)

Also applies to: 164-169, 171-176
.circleci/config.yml-178-197 (1)
178-197: ⚠️ Potential issue | 🟠 Major

Overwriting the `$USER` environment variable — a reserved Unix variable.

Lines 181, 188, and 195 assign `export USER=$(docker ps ...)`. `USER` is a standard Unix environment variable (`$USER` = current username). Overwriting it can break commands that depend on it (e.g., `whoami`, SSH operations, or any tool that reads `$USER`). Use a different variable name.

🐛 Proposed fix: rename to USER_CONTAINER

- export USER=$(docker ps --format "{{.Names}}" | grep user)
- echo "Running user migrations in $USER"
- docker exec "$USER" npm run db:init
+ export USER_CONTAINER=$(docker ps --format "{{.Names}}" | grep user)
+ echo "Running user migrations in $USER_CONTAINER"
+ docker exec "$USER_CONTAINER" npm run db:init

Apply the same rename in the seeders and startup steps.
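The same hazard shows up in any process environment, not just shell; a minimal sketch (the container name is hypothetical):

```javascript
// `USER` normally holds the login name; overwriting it is what the review flags.
const loginName = process.env.USER || 'ci-runner' // fall back for bare environments
process.env.USER = loginName

// Safe alternative: a dedicated, non-reserved variable name
process.env.USER_CONTAINER = 'mentoring_app_1' // hypothetical container name

console.log(process.env.USER === loginName) // → true: login name untouched
console.log(process.env.USER_CONTAINER) // → 'mentoring_app_1'
```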
.circleci/config.yml-225-231 (1)
225-231: ⚠️ Potential issue | 🟠 Major

The tag filter regex pattern needs proper anchoring, and a `branches` filter is needed to run on the intended branches.

CircleCI uses Java regex (not Go), so `\b` word boundaries are supported. However, the pattern `\b(dev|develop|main)\b` lacks the full-string anchors required by CircleCI's filter logic. Use `/^(dev|develop|main)$/` instead. Also, unquote the regex in YAML to avoid ambiguity (use single quotes: `only: '/^(dev|develop|main)$/'`).

More critically: this config only triggers on matching tags, not branches. To run on branches named dev, develop, or main, add a `branches` filter:

Suggested fix

workflows:
  build-and-test:
    jobs:
      - build:
          filters:
            branches:
              only: '/^(dev|develop|main)$/'
            tags:
              only: '/^(dev|develop|main)$/'

src/integration-tests-new/connections/connections.specs.js-8-11 (1)
8-11: ⚠️ Potential issue | 🟠 Major

`adminDetails` is never declared — implicit global assignment.

Line 10 assigns to `adminDetails` without a prior `let`/`const`/`var` declaration. In strict mode this throws a `ReferenceError`. Additionally, `adminDetails` is never used in any test, so the entire `beforeAll` login is unnecessary overhead.

Proposed fix — either declare it or remove it

If admin context is needed later:

+let adminDetails = null
+
 beforeAll(async () => {
 	console.log('Attempting to login...')
 	adminDetails = await commonHelper.adminLogin()
 })

If not needed, remove the `beforeAll` entirely to speed up test execution.

src/integration-tests-new/commonTests.js-22-55 (1)
22-55: ⚠️ Potential issue | 🟠 Major

Bug: returned `password` doesn't match the password actually used.

Line 23 generates a random password via `faker.internet.password()`, but lines 27 and 33 use the hardcoded `'PassworD@@@123'` for account creation and login. The returned `userDetails.password` (line 53) contains the faker-generated value, which is wrong. Any downstream test that uses `userDetails.password` to re-authenticate will fail.

The same issue exists in `mentorLogIn` (line 78 vs lines 83/90).

Proposed fix

- let password = faker.internet.password()
+ let password = 'PassworD@@@123'

Or alternatively, use the faker password everywhere:

 let password = faker.internet.password()
 let res = await request.post('/user/v1/account/create').set('origin', 'localhost').send({
 	name: 'adithya',
 	email: email,
-	password: 'PassworD@@@123',
+	password: password,
 	role: common.MENTEE_ROLE,
 })
 res = await request.post('/user/v1/account/login').set('origin', 'localhost').send({
 	identifier: email,
-	password: 'PassworD@@@123',
+	password: password,
 })

src/integration-tests-new/entity/entity.specs.js-23-24 (1)
23-24: ⚠️ Potential issue | 🟠 Major

Fix missing `bearer` prefix in the `x-auth-token` header.

The middleware (`src/middlewares/authenticator.js` lines 384-388) requires the `bearer` prefix when `IS_AUTH_TOKEN_BEARER` is `true` (the default). It splits the header on whitespace and validates that the auth type is `bearer`; if not, it throws an unauthorized error.

Current token usage in entity.specs.js (lines 23, 37, 67, 83, 92, 122, 137, 146, 169, 192) sets the raw token without the prefix, inconsistent with `commonTests.js` and the API scripts, which use the `bearer ${token}` format. These tests will fail with the default configuration.

Add the `bearer` prefix: `.set('x-auth-token', 'bearer ' + adminDetails.token)`.

src/integration-tests-new/commonTests.js-135-143 (1)
135-143: ⚠️ Potential issue | 🟠 Major

Use environment variable instead of hardcoded token string.

Line 137 should use `process.env.INTERNAL_ACCESS_TOKEN` instead of the hardcoded string `'internal_access_token'`. The middleware validates tokens against the environment variable (see `src/middlewares/authenticator.js:23`), and all production code uses `process.env.INTERNAL_ACCESS_TOKEN`. The test will only pass if the test environment explicitly sets `INTERNAL_ACCESS_TOKEN='internal_access_token'`, which doesn't follow the production pattern. Update to `.set('internal_access_token', process.env.INTERNAL_ACCESS_TOKEN)` and ensure the test environment has the token properly configured.

src/integration-tests-new/cloud-services/cloud-services.spec.js-10-10 (1)
10-10: ⚠️ Potential issue | 🟠 Major

`x-auth-token` is set to the literal string `'string'` — not a valid auth token.

Both tests use a hardcoded placeholder `'string'` as the auth token. Unless these endpoints are unauthenticated, the requests will fail with 401. Other test files (e.g., `entity-type.specs.js`, `mentees.specs.js`) properly log in via `commonHelper` and use real tokens. This file should do the same.

Also applies to: 23-23
src/integration-tests-new/admin/admin.spec.js-6-16 (1)

6-16: ⚠️ Potential issue | 🟠 Major

Admin tests use the placeholder `'string'` as the auth token, and `userDelete` has no target user specified.

All admin tests use `'string'` as the `x-auth-token`, which will either fail with 401 or, if the endpoint lacks proper auth, execute destructive operations. The `DELETE /userDelete` test is particularly concerning — it sends no request body or user ID, so if it somehow authenticates, the behavior is undefined.

Use real admin credentials via `commonHelper.adminLogin()` and provide a valid (test-created) user ID for deletion.

src/integration-tests-new/mentees/mentees.specs.js-105-105 (1)
105-105: ⚠️ Potential issue | 🟠 Major

`userDetails.id` is likely `undefined` — the property is `userId`.

Based on the `commonTests.js` helper (lines 47-54 in the snippet), the returned object uses `userId`, not `id`. This means the `mentorId` query parameter will be `undefined`, and the test may pass vacuously or test the wrong behavior.

Proposed fix

- const url = `/mentoring/v1/mentees/list?page=1&limit=5&search=&connected_mentees=true&mentorId=${userDetails.id}`
+ const url = `/mentoring/v1/mentees/list?page=1&limit=5&search=&connected_mentees=true&mentorId=${userDetails.userId}`

Apply to both Line 105 and Line 114.

Also applies to: 114-114
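The failure mode is easy to see in isolation — interpolating a property that doesn't exist yields the string `'undefined'` rather than an error (object shape taken from the helper's return value):

```javascript
// Shape returned by the commonTests.js helper: userId, not id
const userDetails = { userId: 42, token: 'abc' }

const wrongUrl = `/mentoring/v1/mentees/list?mentorId=${userDetails.id}`
const rightUrl = `/mentoring/v1/mentees/list?mentorId=${userDetails.userId}`

console.log(wrongUrl) // → '/mentoring/v1/mentees/list?mentorId=undefined'
console.log(rightUrl) // → '/mentoring/v1/mentees/list?mentorId=42'
```

Because the server just sees a `mentorId=undefined` filter, the request still returns 200 — which is why the test can pass vacuously.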
src/integration-tests-new/profile/profile.specs.js-7-11 (1)

7-11: ⚠️ Potential issue | 🟠 Major

Variable named `mentorDetails` but initialized via `commonHelper.logIn()` (mentee login).

In other test files, `commonHelper.logIn()` returns a mentee while `commonHelper.mentorLogIn()` returns a mentor. If these profile endpoints require a mentor role, this will fail. If a mentee is intentional, rename the variable to `menteeDetails` or `userDetails` to avoid confusion.

src/integration-tests-new/sessions/sessions.specs.js-15-17 (1)
15-17: ⚠️ Potential issue | 🟠 Major

Remove debug `console.log` statements.

These appear to be leftover debugging artifacts. They'll pollute test output with credential-bearing objects (`userDetails`, `menteeDetails`) on every run — both noisy and a potential PII/secret leak in CI logs.

src/integration-tests-new/testSequencer.js-43-50 (1)
43-50: ⚠️ Potential issue | 🟠 Major

Bug: inconsistent directory name in `path.relative` calls.

Lines 43 and 50 use `'integration-tests'` but the testMap is built (line 23) using `'integration-tests-new'`. This produces wrong relative paths in the warning and log output, breaking the diagnostic value of these messages.

🐛 Proposed fix

- remainingTests.map((t) => path.relative(path.join(process.cwd(), 'integration-tests'), t.path))
+ remainingTests.map((t) => path.relative(path.join(process.cwd(), 'integration-tests-new'), t.path))

- console.log(`${index + 1}. ${path.relative(path.join(process.cwd(), 'integration-tests'), test.path)}`)
+ console.log(`${index + 1}. ${path.relative(path.join(process.cwd(), 'integration-tests-new'), test.path)}`)
- run:
    name: Wait for services to be healthy
    command: |
      cd dev-ops
      echo "Waiting for containers to be healthy..."

      for i in {1..30}; do
        unhealthy=$(docker ps --filter "health=unhealthy" --format "{{.Names}}")
        starting=$(docker ps --filter "health=starting" --format "{{.Names}}")

        if [ -z "$unhealthy" ] && [ -z "$starting" ]; then
          echo "All containers are healthy"
          break
        fi

        echo "Waiting... attempt $i"
        docker ps --format "table {{.Names}}\t{{.Status}}"
        sleep 5
      done
Health check loop silently continues on failure — pipeline will proceed with unhealthy services.
If containers never become healthy within 30 attempts, the for loop completes without hitting break, and execution silently proceeds to the migration step. This will cause confusing downstream failures. Add an explicit failure check after the loop.
🐛 Proposed fix: fail the step if containers aren't healthy
echo "Waiting... attempt $i"
docker ps --format "table {{.Names}}\t{{.Status}}"
sleep 5
done
+
+ # Fail if any container is still not healthy
+ unhealthy=$(docker ps --filter "health=unhealthy" --format "{{.Names}}")
+ starting=$(docker ps --filter "health=starting" --format "{{.Names}}")
+ if [ -n "$unhealthy" ] || [ -n "$starting" ]; then
+ echo "ERROR: Containers not healthy after 30 attempts: $unhealthy $starting"
+ docker ps --format "table {{.Names}}\t{{.Status}}"
+ exit 1
+ fi🤖 Prompt for AI Agents
In @.circleci/config.yml around lines 112 - 130, The "Wait for services to be
healthy" step's loop can finish without containers being healthy and currently
lets the pipeline continue; after the for loop that checks the unhealthy and
starting variables, add an explicit health check that re-evaluates docker ps (or
checks if $unhealthy or $starting are non-empty) and, if any containers remain
unhealthy/starting, emit a clear error message and exit non‑zero (e.g., exit 1)
to fail the job so migrations won't run against unhealthy services.
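Stripped of the docker specifics, the poll-then-fail-explicitly pattern the fix describes looks like this (the health check is injected so the sketch is runnable standalone):

```javascript
// Polls `check` up to `attempts` times; throws instead of silently continuing,
// which is the behavior the review asks for after the shell loop.
async function waitForHealthy(check, attempts = 30, delayMs = 5000) {
	for (let i = 1; i <= attempts; i++) {
		if (await check()) return i // healthy on attempt i
		await new Promise((resolve) => setTimeout(resolve, delayMs))
	}
	throw new Error(`Services not healthy after ${attempts} attempts`)
}

// Example: a stubbed check that becomes healthy on the third attempt
let polls = 0
waitForHealthy(async () => ++polls >= 3, 5, 1).then((attempt) => {
	console.log(`healthy on attempt ${attempt}`) // → 'healthy on attempt 3'
})
```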
SELECT create_reference_table('entity_types');
SELECT create_reference_table('permissions');
SELECT create_reference_table('sessions');
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for sessions references in the distribution file
echo "=== Sessions references in distributionColumns.psql ==="
rg -n 'sessions' src/distributionColumns.psql
# Look for sessions table definition in migrations
echo -e "\n=== Searching for sessions table migrations ==="
fd -e sql -e js 'migrations' . --type f | head -20
# Search for sessions table definition patterns
echo -e "\n=== Sessions table definitions ==="
rg -n 'sessions' --type sql --type js | grep -i 'create table\|sessions' | head -30

Repository: ELEVATE-Project/mentoring
Length of output: 2950
🏁 Script executed:
#!/bin/bash
# Check the actual content of distributionColumns.psql to see all changes
echo "=== Full content of distributionColumns.psql ==="
cat -n src/distributionColumns.psql

Repository: ELEVATE-Project/mentoring
Length of output: 2013
🏁 Script executed:
#!/bin/bash
# Search for any foreign key or relationship context mentioning sessions
echo "=== Foreign key constraints involving sessions ==="
rg -n -B 2 -A 2 'sessions' src/ --type sql
# Look for references to sessions in application code
echo -e "\n=== Sessions usage in application code ==="
rg -n 'sessions' src/ --type js --type ts | head -20

Repository: ELEVATE-Project/mentoring
Length of output: 2273
🏁 Script executed:
#!/bin/bash
# Search for migration files that define the sessions table schema
echo "=== Database migrations directory structure ==="
find . -type d -name '*migrat*' 2>/dev/null | head -10
# Look for any database schema files
echo -e "\n=== Database schema files ==="
fd -e sql -e json | grep -i -E '(schema|migrat|sql)' | grep -v node_modules | head -20
# Search for sessions table creation
echo -e "\n=== Sessions table creation ==="
rg -n 'CREATE TABLE.*sessions' --type sql -A 10

Repository: ELEVATE-Project/mentoring
Length of output: 1565
🏁 Script executed:
#!/bin/bash
# Check if there are any migration tracking files
find . -type f -name '*.sql' | head -20
# Look for database initialization files
find . -type f \( -name '*init*' -o -name '*schema*' -o -name '*seed*' \) -type f | grep -i sql

Repository: ELEVATE-Project/mentoring
Length of output: 1450
🏁 Script executed:
#!/bin/bash
# List files in migrations directories
echo "=== Files in src/database/migrations ==="
ls -la src/database/migrations/ 2>/dev/null | head -20
echo -e "\n=== Files in src/migrations ==="
ls -la src/migrations/ 2>/dev/null | head -20
# Search for sessions table definition in any file
echo -e "\n=== Searching for sessions table definition ==="
rg -l 'sessions' src/database/migrations/ src/migrations/ 2>/dev/null

Repository: ELEVATE-Project/mentoring
Length of output: 5464
🏁 Script executed:
#!/bin/bash
# Check sample data to understand sessions data volume
echo "=== Sample sessions data ==="
rg -n "INSERT INTO.*sessions" documentation/ --type sql -A 2 | head -50
# Also check for any baseline schema
echo -e "\n=== Any baseline or schema files ==="
fd 'baseline\|schema' --type f | grep -v node_modules | head -10

Repository: ELEVATE-Project/mentoring
Length of output: 135
🏁 Script executed:
#!/bin/bash
# Read the sessions table creation migration
echo "=== Sessions table schema ==="
cat -n src/database/migrations/20230611225246-create-sessions.js
# Also check if there are any indices or constraints that might indicate volume
echo -e "\n=== Session-related tables and their relationships ==="
rg -n 'session' src/database/migrations/ -i | grep -i 'create\|foreign\|index' | head -20

Repository: ELEVATE-Project/mentoring
Length of output: 6327
🏁 Script executed:
#!/bin/bash
# Check sample data to understand typical session volume
echo "=== Sample data sessions volume ==="
rg "INSERT INTO.*sessions" documentation/ -c
# Check documentation version files for distribution columns consistency
echo -e "\n=== Distribution columns across versions ==="
head -5 documentation/*/distribution-columns/mentoring/distributionColumns.sql

Repository: ELEVATE-Project/mentoring
Length of output: 1820
🏁 Script executed:
#!/bin/bash
# Check migrations that update sessions table
echo "=== Migrations updating sessions table ==="
grep -l "sessions" src/database/migrations/*.js | while read f; do
echo "=== $(basename $f) ==="
head -50 "$f" | grep -E "(up:|down:|ALTER TABLE|ADD COLUMN|tenant_code|reference|distributed)" || true
done | head -100

Repository: ELEVATE-Project/mentoring
Length of output: 5062
🏁 Script executed:
#!/bin/bash
# Check for any migration that adds tenant_code to sessions
echo "=== Looking for tenant_code addition to sessions ==="
rg -n 'tenant_code' src/database/migrations/ -B 3 -A 3 | grep -E '(sessions|tenant_code|migration)' | head -40
# Check the latest distribution columns file
echo -e "\n=== Latest distribution columns ==="
cat -n documentation/latest/distribution-columns/mentoring/distributionColumns.sql 2>/dev/null || cat -n documentation/3.1.2/distribution-columns/mentoring/distributionColumns.sql

Repository: ELEVATE-Project/mentoring
Length of output: 6002
🏁 Script executed:
#!/bin/bash
# Check git status and recent changes to distributionColumns.psql
echo "=== Git status for distributionColumns.psql ==="
git log --oneline -n 20 src/distributionColumns.psql 2>/dev/null || echo "File not tracked"
# Check if file exists and its content
echo -e "\n=== Current content of src/distributionColumns.psql ==="
head -10 src/distributionColumns.psql 2>/dev/null || echo "File does not exist"
# Compare with latest documentation
echo -e "\n=== Latest documentation distribution columns ==="
cat documentation/latest/distribution-columns/mentoring/distributionColumns.sql 2>/dev/null | head -20

Repository: ELEVATE-Project/mentoring
Length of output: 1784
🏁 Script executed:
#!/bin/bash
# Check recent migrations that might have changed distribution strategy
echo "=== Migration 20251213000001 snippet about sessions ==="
sed -n '1,50p' src/database/migrations/20251213000001-remove-fks-manage-primary-keys.js | tail -20
# Verify the architecture intent for sessions
echo -e "\n=== Check if there's a migration undoing reference table for sessions ==="
rg -n 'sessions' src/database/migrations/20240716111200-undistribute-table-and-drop-views.js -A 5 -B 5 | head -30

Repository: ELEVATE-Project/mentoring
Length of output: 2267
Sessions should be a distributed table, not a reference table.
The change in line 3 contradicts the current distribution architecture. Recent migrations restructure `sessions` as a distributed table with a composite primary key (`tenant_code`, `id`) for multi-tenant data sharding, and the latest documentation specifies `create_distributed_table('sessions', 'id')`. Making `sessions` a reference table (fully replicated) conflicts with this design and with the established pattern for operational tables in the system. Revert to the distributed-table strategy, consistent with `session_attendees`, `session_request`, and `post_session_details`.
🤖 Prompt for AI Agents
In `@src/distributionColumns.psql` around lines 1 - 3, Change the incorrect
creation of sessions as a reference table: replace the call to
create_reference_table('sessions') with the distributed table creation used by
other operational tables (e.g. session_attendees) by calling
create_distributed_table('sessions', 'id'), and ensure the sessions table uses
the composite primary key (tenant_code, id) for multi-tenant sharding consistent
with session_request and post_session_details.
const request = require('supertest')
const BASE = process.env.BASE_URL || 'http://localhost:3000'
const schemas = require('./schemas/config.schemas.json')

describe('config endpoints generated from api-doc.yaml', () => {
	describe('GET /mentoring/v1/config/get', () => {
		test('should return 200', async () => {
			const url = `/mentoring/v1/config/get`
			let req = request(BASE).get(url)
			const res = await req
			expect(res.status).toBe(200)
			// validate response schema
			const schema = schemas['GET_mentoring_v1_config_get']
			expect(res.body).toMatchSchema(schema)
		})
	})
})
File will not be discovered by the test runner — wrong extension.
The Jest config at src/integrationJest.config.js Line 22 uses testMatch: ['<rootDir>/integration-tests-new/**/*.specs.js'] (note: plural .specs.js). This file is named config.spec.js (singular .spec.js), so it will be silently skipped. The same issue applies to admin/admin.spec.js, cloud-services/cloud-services.spec.js, and any other .spec.js files.
Rename to config.specs.js or update the testMatch glob to include both patterns.
🤖 Prompt for AI Agents
In `@src/integration-tests-new/config/config.spec.js` around lines 1 - 17, The
test files under integration-tests-new (e.g., config.spec.js, admin.spec.js,
cloud-services.spec.js) are using the singular extension ".spec.js" but the Jest
config variable testMatch in integrationJest.config.js expects the plural
pattern '<rootDir>/integration-tests-new/**/*.specs.js', so the files are
skipped; fix by either renaming the test files to use the plural suffix (e.g.,
config.specs.js) or modify the testMatch glob to include both patterns (e.g.,
add '**/*.spec.js' or use a combined pattern) so Jest discovers these tests.
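The glob mismatch can be illustrated with equivalent regular expressions (simplified stand-ins — Jest's real matching goes through micromatch):

```javascript
// Stand-in for the configured glob '**/*.specs.js'
const matchesConfigured = (file) => /\.specs\.js$/.test(file)
// Stand-in for a glob accepting both spellings, e.g. '**/*.spec?(s).js'
const matchesBoth = (file) => /\.specs?\.js$/.test(file)

console.log(matchesConfigured('config.specs.js')) // → true
console.log(matchesConfigured('config.spec.js')) // → false: silently skipped
console.log(matchesBoth('config.spec.js')) // → true
```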
"operator": {
	"type": "string"
},
🧩 Analysis chain
🏁 Script executed:
# First, let's see the schema file structure to confirm the locations mentioned
cat -n src/integration-tests-new/default-rule/schemas/default-rule.schemas.json | sed -n '30,50p; 130,150p; 235,255p'

Repository: ELEVATE-Project/mentoring
Length of output: 1760
🏁 Script executed:
# Search for actual usage of these field names outside schema definitions
rg -n 'matching_operator|operator|requester_roles_config|role_config' \
--type=js --type=json \
-g '!node_modules' \
-g '!**/node_modules' \
src/ | grep -v "default-rule.schemas.json" | head -50

Repository: ELEVATE-Project/mentoring
Length of output: 5108
🏁 Script executed:
# Also check if there are any test files or mock responses that reference these fields
fd -e 'js' -e 'json' . src/integration-tests-new/default-rule --type f | \
xargs grep -l 'operator\|role_config' 2>/dev/null | head -20

Repository: ELEVATE-Project/mentoring
Length of output: 202
Read schema has incorrect field names that don't match the actual API responses.
The create and update schemas correctly use operator (Lines 35, 241) and requester_roles_config (Lines 44, 250). However, the read schema incorrectly uses matching_operator (Line 135) and role_config (Line 144). The codebase—including helpers, validators, services, and tests—consistently use operator and requester_roles_config across all operations. The read schema will fail to validate actual API responses.
🤖 Prompt for AI Agents
In `@src/integration-tests-new/default-rule/schemas/default-rule.schemas.json`
around lines 35 - 37, The read-schema in default-rule.schemas.json uses the
wrong field names; replace any occurrences of "matching_operator" with
"operator" and "role_config" with "requester_roles_config" within the read
(response) schema so the field names match the create/update schemas and the
rest of the codebase (validators/helpers/services/tests); ensure the value types
for "operator" and "requester_roles_config" match the existing definitions used
by create/update schemas (adjust object/array/string types accordingly) and
update the read schema property names wherever "matching_operator" or
"role_config" appear.
describe('POST /mentoring/v1/modules/update/{id}', () => {
	test('should return 201', async () => {
		const url = `/mentoring/v1/modules/update/{id}`
		let req = request(BASE).post(url)
		req = req.set('x-auth-token', 'string')
		req = req
			.send({
				code: 'string',
			})
			.set('Content-Type', 'application/json')
		const res = await req
		expect(res.status).toBe(201)
		// validate response schema
		const schema = schemas['POST_mentoring_v1_modules_update_id']
		expect(res.body).toMatchSchema(schema)
	})

	test('should return 400/422 for invalid body', async () => {
		const url = `/mentoring/v1/modules/update/{id}`
		let req = request(BASE).post(url)
		req = req.set('x-auth-token', 'string')
		req = req.send({}).set('Content-Type', 'application/json')
		const res = await req
		expect([400, 422]).toContain(res.status)
	})
{id} placeholder is never substituted — tests hit a non-existent route.
On Lines 35 and 51, the URL contains the literal string {id} instead of an actual module ID. The server will likely return a 404 or route-mismatch error, not the expected 201 or 400/422. The same issue occurs in the DELETE test (Line 62).
These tests need a setup step: first create a module, capture its ID from the response, then use it for update and delete.
Proposed fix outline
describe('POST /mentoring/v1/modules/update/{id}', () => {
+ let moduleId
+
+ beforeAll(async () => {
+ const createRes = await request(BASE)
+ .post('/mentoring/v1/modules/create')
+ .set('x-auth-token', process.env.AUTH_TOKEN)
+ .send({ code: 'test-module-update' })
+ .set('Content-Type', 'application/json')
+ moduleId = createRes.body.result?.id
+ })
+
test('should return 201', async () => {
- const url = `/mentoring/v1/modules/update/{id}`
+ const url = `/mentoring/v1/modules/update/${moduleId}`
let req = request(BASE).post(url)Apply the same pattern for the DELETE test.
🤖 Prompt for AI Agents
In `@src/integration-tests-new/modules/modules.spec.js` around lines 33 - 57, The
tests use the literal `{id}` in the update/delete URLs so they never hit a real
resource; before calling the update or delete requests in the tests inside the
describe('POST /mentoring/v1/modules/update/{id}') block, create a module via
the create endpoint (capture the returned module id from the create response),
then replace `{id}` with that captured id when constructing the URL for the
update and delete requests (also ensure Content-Type and auth headers are
preserved and optionally clean up the created module in a teardown or
afterEach).
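The underlying JavaScript behavior: template-literal interpolation only fires on `${...}`, so a bare `{id}` travels to the server verbatim (the module ID below is made up):

```javascript
const moduleId = 7 // illustrative ID, as captured from a create response

const literal = `/mentoring/v1/modules/update/{id}` // '{id}' is plain text
const interpolated = `/mentoring/v1/modules/update/${moduleId}`

console.log(literal) // → '/mentoring/v1/modules/update/{id}'
console.log(interpolated) // → '/mentoring/v1/modules/update/7'
```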
describe('rolePermissionMapping endpoints generated from api-doc.yaml', () => {
	describe('POST /mentoring/v1/rolePermissionMapping/create/{role_id}', () => {
		test('should return 201', async () => {
			const url = `/mentoring/v1/rolePermissionMapping/create/{role_id}`
{role_id} path parameter is never substituted — requests hit a non-existent route.
The URL /mentoring/v1/rolePermissionMapping/create/{role_id} is sent verbatim. The literal {role_id} is not replaced with an actual numeric ID, so every request will target a meaningless path and likely return a 404, making the expect(res.status).toBe(201) assertion always fail.
Proposed fix
Replace the template placeholder with a real (or test-fixture) role ID:
- const url = `/mentoring/v1/rolePermissionMapping/create/{role_id}`
+ const roleId = process.env.TEST_ROLE_ID || 1
+ const url = `/mentoring/v1/rolePermissionMapping/create/${roleId}`Apply the same pattern to all four occurrences (lines 8, 24, 35, 51).
Also applies to: 24-24, 35-35, 51-51
🤖 Prompt for AI Agents
In
`@src/integration-tests-new/rolePermissionMapping/rolePermissionMapping.spec.js`
at line 8, The test builds a URL with a literal "{role_id}" so requests hit a
non-existent route; replace the placeholder with a real role ID variable (e.g.,
use a test fixture or created roleId) wherever the URL is constructed (the const
url declarations in rolePermissionMapping.spec.js) so the path becomes
`/mentoring/v1/rolePermissionMapping/create/${roleId}` (apply the same change at
the four occurrences noted) and ensure the tests use that roleId when calling
the endpoint before asserting res.status === 201.
test('should return 201', async () => {
	const url = `/mentoring/v1/rolePermissionMapping/delete/{role_id}`
	let req = request(BASE).post(url)
	req = req.set('x-auth-token', TOKEN)
TOKEN is not defined — tests will throw ReferenceError at runtime.
TOKEN is used on lines 37 and 53 but is never declared or imported in this file. Compare with lines 10 and 26 where the auth header is set to the literal string 'string'. This will cause the delete-endpoint tests to crash before any assertion runs.
Proposed fix — use a consistent token source
Either define TOKEN at the top of the file (ideally from an env variable or a shared test helper), or use the same pattern as the create tests:
- req = req.set('x-auth-token', TOKEN)
+ req = req.set('x-auth-token', process.env.AUTH_TOKEN || 'string')Apply the same change on both lines 37 and 53.
Also applies to: 53-53
🤖 Prompt for AI Agents
In
`@src/integration-tests-new/rolePermissionMapping/rolePermissionMapping.spec.js`
at line 37, The tests use an undefined TOKEN constant which causes a
ReferenceError; update the two usages of req.set('x-auth-token', TOKEN) in
rolePermissionMapping.spec.js (both occurrences) to use a defined token source —
either import or define TOKEN at the top of the test file (preferably from a
shared test helper or process.env) or replace with the same literal used
elsewhere (e.g., 'string'); ensure both occurrences are changed so the test runs
without runtime errors.
describe('Session Feedback Submission', () => {
	let feedbackSessionId

	beforeAll(async () => {
		let userDetails = await commonHelper.mentorLogIn()
		// Create a session that has passed to mark as complete
		const now = new Date()
		const startDate = new Date(now.getTime() - 2 * 60 * 60 * 1000) // 2 hours ago
		const startDateTimestamp = Math.floor(startDate.getTime() / 1000)
		const endDate = new Date(startDate.getTime() + 60 * 60 * 1000) // 1 hour duration
		const endDateTimestamp = Math.floor(endDate.getTime() / 1000)

		const createRes = await request(BASE)
			.post('/mentoring/v1/sessions/update')
			.set('x-auth-token', userDetails.token)
			.send({
				title: 'Session for Feedback Test',
				description: 'This session is created to test the feedback endpoint.',
				type: 'PUBLIC',
				start_date: startDateTimestamp,
				end_date: endDateTimestamp,
				time_zone: 'Asia/Calcutta',
				mentor_id: userDetails.userId.toString(),
			})

		expect(createRes.status).toBe(201)
		feedbackSessionId = createRes.body.result.id

		// Enroll mentee in the session
		const enrollUrl = `/mentoring/v1/sessions/enroll/${feedbackSessionId}`
		const enrollRes = await request(BASE)
			.post(enrollUrl)
			.set('x-auth-token', menteeDetails.token)
			.set('timezone', 'Asia/Calcutta')
		expect(enrollRes.status).toBe(201)

		// Start the session
		const startUrl = `/mentoring/v1/sessions/start/${feedbackSessionId}`
		const startRes = await request(BASE).get(startUrl).set('x-auth-token', userDetails.token)
		expect(startRes.status).toBe(200)

		// Complete the session
		const completeUrl = `/mentoring/v1/sessions/completed/${feedbackSessionId}`
		const completeRes = await request(BASE).patch(completeUrl).set('x-auth-token', userDetails.token)
		expect(completeRes.status).toBe(200)
	})

	test('POST /mentoring/v1/feedback/submit/{sessionId} - mentee feedback should return 200', async () => {
		const url = `/mentoring/v1/feedback/submit/${feedbackSessionId}`
		const menteeFeedbackPayload = {
			feedbacks: [
				{ question_id: 3, value: 3, label: 'How would you rate the host of the session?' },
				{ question_id: 4, value: 4, label: 'How would you rate the engagement in the session?' },
			],
			feedback_as: 'mentee',
		}
		const res = await request(BASE)
			.post(url)
			.set('x-auth-token', menteeDetails.token)
			.send(menteeFeedbackPayload)
		expect(res.status).toBe(200)
	})

	test('POST /mentoring/v1/feedback/submit/{sessionId} - mentor feedback should return 200', async () => {
		const url = `/mentoring/v1/feedback/submit/${feedbackSessionId}`
		const mentorFeedbackPayload = {
			feedbacks: [
				{ question_id: 1, value: 4, label: 'How would you rate the engagement in the session?' },
				{ question_id: 2, value: 3, label: 'How would you rate the Audio/Video quality?' },
			],
			feedback_as: 'mentor',
		}
		const res = await request(BASE).post(url).set('x-auth-token', userDetails.token).send(mentorFeedbackPayload)
		expect(res.status).toBe(200)
	})
Variable shadowing causes mentor feedback to use the wrong user's token.
At line 290, `let userDetails = await commonHelper.mentorLogIn()` shadows the outer `userDetails` (line 6), but this local variable is scoped to the `beforeAll` callback only. The mentor feedback test at line 358 uses `userDetails.token`, which resolves to the outer `userDetails` — a different mentor than the one who created and completed the session. This will likely cause the feedback submission to fail (wrong user context).
🐛 Proposed fix — use a describe-scoped variable

```diff
 describe('Session Feedback Submission', () => {
 	let feedbackSessionId
+	let feedbackMentorDetails
 	beforeAll(async () => {
-		let userDetails = await commonHelper.mentorLogIn()
+		feedbackMentorDetails = await commonHelper.mentorLogIn()
 		// Create a session that has passed to mark as complete
 		const now = new Date()
 		...
 		const createRes = await request(BASE)
 			.post('/mentoring/v1/sessions/update')
-			.set('x-auth-token', userDetails.token)
+			.set('x-auth-token', feedbackMentorDetails.token)
 			.send({
 				...
-				mentor_id: userDetails.userId.toString(),
+				mentor_id: feedbackMentorDetails.userId.toString(),
 			})
 		...
-		const startRes = await request(BASE).get(startUrl).set('x-auth-token', userDetails.token)
+		const startRes = await request(BASE).get(startUrl).set('x-auth-token', feedbackMentorDetails.token)
 		...
-		const completeRes = await request(BASE).patch(completeUrl).set('x-auth-token', userDetails.token)
+		const completeRes = await request(BASE).patch(completeUrl).set('x-auth-token', feedbackMentorDetails.token)
 	})
 	...
 	test('POST ... - mentor feedback should return 200', async () => {
 		...
-		const res = await request(BASE).post(url).set('x-auth-token', userDetails.token).send(mentorFeedbackPayload)
+		const res = await request(BASE).post(url).set('x-auth-token', feedbackMentorDetails.token).send(mentorFeedbackPayload)
 	})
```

🤖 Prompt for AI Agents
In `@src/integration-tests-new/sessions/sessions.specs.js` around lines 286 - 360,
The bug is variable shadowing: the inner let userDetails in the beforeAll
callback (created via commonHelper.mentorLogIn()) hides the outer
describe-scoped userDetails so tests like the mentor feedback test use the wrong
token; fix by removing the inner declaration and assign the logged-in mentor to
the describe-scoped variable (i.e., replace "let userDetails = await
commonHelper.mentorLogIn()" inside beforeAll with "userDetails = await
commonHelper.mentorLogIn()"), ensuring the same userDetails.token is used in
start/completed and the mentor feedback POST.
```js
const request = require('supertest')
const BASE = process.env.BASE_URL || 'http://localhost:3000'
const schemas = require('./schemas/users.schemas.json')

describe('users endpoints generated from api-doc.yaml', () => {
	describe('GET /mentoring/v1/users/pendingFeedbacks', () => {
		test('should return 200', async () => {
			const url = `/mentoring/v1/users/pendingFeedbacks`
			let req = request(BASE).get(url)
			req = req.set('x-auth-token', 'string')
			const res = await req
			expect(res.status).toBe(200)
			// validate response schema
			const schema = schemas['GET_mentoring_v1_users_pendingFeedbacks']
			expect(res.body).toMatchSchema(schema)
		})
	})

	describe('GET /mentoring/v1/users/list?type={userType}&page={page}&limit={limit}&search={search}', () => {
		test('should return 200', async () => {
			const url = `/mentoring/v1/users/list?type=mentor&page=1&limit=2&search=jhon`
			let req = request(BASE).get(url)
			req = req.set('x-auth-token', 'string')
			const res = await req
			expect(res.status).toBe(200)
			// validate response schema
			const schema = schemas['GET_mentoring_v1_users_list_type_userType_page_page_limit_limit_search_search']
			expect(res.body).toMatchSchema(schema)
		})
	})
})
```
Placeholder token `'string'` — tests will fail without real authentication.

Same issue as `report-queries.spec.js`: no `commonHelper` login is performed, and the `x-auth-token` header is set to the literal `'string'`. Both tests expecting HTTP 200 will fail. Additionally, unlike other test files, there are no unauthorized-access test cases.

Add authentication via `beforeAll` and consider adding 401/403 tests for consistency with the rest of the suite.
🤖 Prompt for AI Agents
In `@src/integration-tests-new/users/users.spec.js` around lines 1 - 31, The tests
use a literal placeholder token in the x-auth-token header and lack a login
step; add a beforeAll that performs authentication (reusing the project's
commonHelper login helper if available) to obtain a real token and store it in a
variable (e.g., authToken) and replace the hardcoded 'string' in both tests with
that authToken when calling request(BASE).set('x-auth-token', authToken). Also
add a couple of negative tests for unauthorized access (e.g., missing or invalid
token) to mirror other spec files and ensure the endpoints return 401/403 as
expected.
```diff
 		'@utils/(.*)': '<rootDir>/utils/$1',
 	},
-	testMatch: ['<rootDir>/integration-tests/**/*.spec.js'],
+	testMatch: ['<rootDir>/integration-tests-new/**/*.specs.js'],
```
`testMatch` pattern excludes several test files in this PR.

The glob `*.specs.js` (plural s) will not match files named `*.spec.js` (singular). At least three files in this PR use the singular form: `admin.spec.js`, `config.spec.js`, `cloud-services.spec.js`. Either standardize all filenames to `.specs.js` or broaden the pattern:

Option: broaden the glob

```diff
-	testMatch: ['<rootDir>/integration-tests-new/**/*.specs.js'],
+	testMatch: ['<rootDir>/integration-tests-new/**/*.spec?(s).js'],
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```diff
-	testMatch: ['<rootDir>/integration-tests-new/**/*.specs.js'],
+	testMatch: ['<rootDir>/integration-tests-new/**/*.spec.js', '<rootDir>/integration-tests-new/**/*.specs.js'],
```
🤖 Prompt for AI Agents
In `@src/integrationJest.config.js` at line 22, The current Jest config's
testMatch pattern uses '<rootDir>/integration-tests-new/**/*.specs.js' which
only matches files with the plural '.specs.js' and misses existing tests named
'*.spec.js'; update the testMatch entry (the testMatch key in
src/integrationJest.config.js) to broaden the glob so it matches both
'*.spec.js' and '*.specs.js' (for example by using a pattern like
'**/*.spec?(s).js' or an array with both globs) so the files admin.spec.js,
config.spec.js, cloud-services.spec.js, etc., are included in the test run.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/database/seeders/20230822124704-add_entity_types_and_entities.js (1)
3-6: ⚠️ Potential issue | 🟠 Major

No guard for `DEFAULT_ORGANISATION_CODE`/`DEFAULT_TENANT_CODE` env vars.

`defaultOrgId` is validated at lines 3–6, but `DEFAULT_ORGANIZATION_CODE` and `DEFAULT_TENANT_CODE` are used without any check. If either is unset, the seeder will silently insert `NULL`/`undefined` values into `organization_code` and `tenant_code` columns.

Proposed fix — add validation alongside the existing guard

```diff
 const defaultOrgId = queryInterface.sequelize.options.defaultOrgId
 if (!defaultOrgId) {
 	throw new Error('Default org ID is undefined. Please make sure it is set in sequelize options.')
 }
+if (!process.env.DEFAULT_ORGANISATION_CODE) {
+	throw new Error('DEFAULT_ORGANISATION_CODE is undefined. Please make sure it is set in environment variables.')
+}
+if (!process.env.DEFAULT_TENANT_CODE) {
+	throw new Error('DEFAULT_TENANT_CODE is undefined. Please make sure it is set in environment variables.')
+}
```

Also applies to: 123-124, 158-158
🤖 Fix all issues with AI agents
In `@src/database/seeders/20230822124704-add_entity_types_and_entities.js`:
- Around line 123-124: Replace the incorrect American-spelled environment
variable reference process.env.DEFAULT_ORGANIZATION_CODE used when setting
organization_code in the seeder with the codebase-standard British spelling
process.env.DEFAULT_ORGANISATION_CODE so the seeder inserts the actual
organization_code value (leave tenant_code unchanged); update the occurrence in
the file where organization_code is assigned to use DEFAULT_ORGANISATION_CODE to
match other uses like getDefaultOrgId.js and org-admin.js.
```js
organization_code: process.env.DEFAULT_ORGANIZATION_CODE,
tenant_code: process.env.DEFAULT_TENANT_CODE,
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Check which spelling of the env var is used across the codebase
echo "=== Occurrences of DEFAULT_ORGANIZATION_CODE (Z spelling) ==="
rg -n 'DEFAULT_ORGANIZATION_CODE' --type js || echo "No matches found"
echo ""
echo "=== Occurrences of DEFAULT_ORGANISATION_CODE (S spelling) ==="
rg -n 'DEFAULT_ORGANISATION_CODE' --type js || echo "No matches found"
echo ""
echo "=== Check .env and config files ==="
fd -e env -o -e example || echo "No .env files found"
echo ""
git ls-files | grep -E '\.(env|config)' | head -20
echo ""
echo "=== Search all files for both spellings ==="
rg 'DEFAULT_ORGANI[SZ]ATION_CODE' -i --type-list
echo "Z spelling across all files:"
rg 'DEFAULT_ORGANIZATION_CODE' --type-add 'text:!(node_modules|dist|build)' || echo "No matches"
echo ""
echo "S spelling across all files:"
rg 'DEFAULT_ORGANISATION_CODE' --type-add 'text:!(node_modules|dist|build)' || echo "No matches"
```

Repository: ELEVATE-Project/mentoring
Length of output: 9063
🏁 Script executed:

```bash
# Check envVariables.js around both lines
sed -n '200,220p' src/envVariables.js
```

Repository: ELEVATE-Project/mentoring
Length of output: 594
Fix environment variable spelling to match codebase standard: `DEFAULT_ORGANISATION_CODE` (British spelling with S).

Line 123 references `process.env.DEFAULT_ORGANIZATION_CODE` (American spelling with Z), but the codebase consistently uses `DEFAULT_ORGANISATION_CODE` (British spelling with S). Both spellings are defined in `src/envVariables.js`, but the S spelling is used throughout the actual code (40+ occurrences in `org-admin.js`, `cacheHelper.js`, migrations, `getDefaultOrgId.js`, and all documentation). Using the wrong spelling will resolve to `undefined`, inserting `NULL` for `organization_code`.

Proposed fix

```diff
-	organization_code: process.env.DEFAULT_ORGANIZATION_CODE,
+	organization_code: process.env.DEFAULT_ORGANISATION_CODE,
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```diff
-	organization_code: process.env.DEFAULT_ORGANIZATION_CODE,
+	organization_code: process.env.DEFAULT_ORGANISATION_CODE,
 	tenant_code: process.env.DEFAULT_TENANT_CODE,
```
🤖 Prompt for AI Agents
In `@src/database/seeders/20230822124704-add_entity_types_and_entities.js` around
lines 123 - 124, Replace the incorrect American-spelled environment variable
reference process.env.DEFAULT_ORGANIZATION_CODE used when setting
organization_code in the seeder with the codebase-standard British spelling
process.env.DEFAULT_ORGANISATION_CODE so the seeder inserts the actual
organization_code value (leave tenant_code unchanged); update the occurrence in
the file where organization_code is assigned to use DEFAULT_ORGANISATION_CODE to
match other uses like getDefaultOrgId.js and org-admin.js.