
Migration copy2 #1563

Open
borkarsaish65 wants to merge 86 commits into ELEVATE-Project:develop from borkarsaish65:migration_copy2

Conversation


@borkarsaish65 borkarsaish65 commented Feb 11, 2026

Release Notes

CI/CD Pipeline Enhancement

  • Migrated the CircleCI configuration from a lightweight SonarCloud-based setup to a comprehensive, self-contained multi-service integration pipeline
  • Added explicit Docker service orchestration with health checks, database migrations, and extensive diagnostics logging on failure
  • Implemented pre-steps for Git configuration, checkout management, and SSH rewrite disabling
  • Added caching logic for mentoring and user service dependencies

Database Schema Updates

  • Refactored session queries in migrations to use direct SQL via Sequelize QueryTypes instead of helper functions
  • Modified session_ownerships table structure: removed primary key on mentor_id, added composite key on (user_id, session_id, type), and introduced type column with CREATOR backfill for existing rows
  • Removed organization_code and tenant_code from orgEntity-type seeding
  • Transitioned entity_types, permissions, and sessions from distributed_table to reference_table declarations in Citus distribution configuration
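
The session_ownerships restructure above can be sketched as a sequence of SQL steps. This is a hedged illustration, not the PR's actual migration: the constraint name and column type are assumptions, and the `run` callback stands in for Sequelize's queryInterface so the step order is inspectable.

```javascript
// Hypothetical sketch of the session_ownerships restructure; the constraint
// name and VARCHAR type are assumptions, and `run` records each statement.
function restructureSessionOwnerships(run) {
	// Drop the old single-column primary key on mentor_id.
	run('ALTER TABLE session_ownerships DROP CONSTRAINT session_ownerships_pkey')
	// Rename mentor_id to the more general user_id.
	run('ALTER TABLE session_ownerships RENAME COLUMN mentor_id TO user_id')
	// Introduce the type column, then backfill existing rows as CREATOR.
	run('ALTER TABLE session_ownerships ADD COLUMN type VARCHAR(255)')
	run("UPDATE session_ownerships SET type = 'CREATOR' WHERE type IS NULL")
	// Install the composite key described in the release notes.
	run('ALTER TABLE session_ownerships ADD PRIMARY KEY (user_id, session_id, type)')
}

const statements = []
restructureSessionOwnerships((sql) => statements.push(sql))
console.log(statements.length) // 5
```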

Comprehensive Integration Test Suite

  • Added 50+ new integration test files covering all major API endpoints and functionality areas
  • Implemented test utilities for user provisioning, authentication flows, and profile extension readiness checks
  • Created structured JSON schemas for response validation across:
    • Admin operations (user deletion, view rebuild, periodic refresh)
    • Cloud services and file management
    • Connections and connection requests
    • Default rules and entity management
    • Feedback submission and forms
    • Mentor and mentee operations
    • Module, permission, and role management
    • Report generation and mapping
    • Session management and request handling
    • Profile and user operations
    • Questions and question sets
  • Introduced Jest test sequencer for deterministic test execution order
  • Updated integration test configuration with new module path mappings
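
The deterministic ordering the sequencer provides can be sketched as a plain sort. The suite names in ORDER below are hypothetical examples; the real CustomSequencer would extend Sequencer from '@jest/test-sequencer' and apply a comparator like this to its tests array.

```javascript
// Sketch: rank each test path against an assumed priority list, falling back
// to alphabetical order so runs are reproducible even for unlisted suites.
const ORDER = ['commonTests', 'profile', 'sessions', 'feedback'] // assumed names

function sortTests(paths) {
	const rank = (p) => {
		const i = ORDER.findIndex((name) => p.includes(name))
		return i === -1 ? ORDER.length : i // unknown suites run last
	}
	return [...paths].sort((a, b) => rank(a) - rank(b) || a.localeCompare(b))
}

console.log(sortTests(['feedback.specs.js', 'sessions.specs.js', 'profile.specs.js']))
```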

Development Tooling

  • Added "dev" npm script pointing to node app.js
  • Updated Jest from v28.1.1 to v30.2.0
  • Adjusted db:init script to ensure migrations run regardless of database creation outcome
  • Changed test database connection from "mentoring-local" to "elevate-mentoring"
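
Taken together, the script changes might look like the following package.json fragment. This is a sketch: the exact sequelize-cli invocations are assumptions, and the `|| echo` grouping is what lets db:migrate run even when db:create fails because the database already exists.

```json
{
	"scripts": {
		"dev": "node app.js",
		"db:init": "(npx sequelize-cli db:create || echo 'database already exists') && npx sequelize-cli db:migrate"
	}
}
```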

Author: borkarsaish65 (33,336 lines added, 0 removed)


coderabbitai bot commented Feb 11, 2026

Walkthrough

Replaces SonarCloud-based CircleCI with a self-contained Docker-driven CI pipeline, adds 30+ integration test suites with JSON schemas and a custom Jest sequencer, and applies multiple DB migration/distribution updates plus small package and test-config changes.

Changes

Cohort / File(s) Summary
CI/CD Pipeline
\.circleci/config.yml
Removed SonarCloud orb; replaced with a full CircleCI pipeline that checks out services, manages caches, starts docker-compose, waits for health checks, runs migrations/seeds, and executes integration tests with diagnostics.
Database Migrations & Seeders
src/database/migrations/...20231228184838-add-session-columns-and-update-mentor-names.js, src/database/migrations/...20240213074857-altering-session-ownership-tabe-and-data.js, src/database/migrations/...20251020081719-add-orgEntity-type.js, src/database/seeders/20230822124704-add_entity_types_and_entities.js
Replaced helper-based session retrieval with direct SQL select and early return; added a migration renaming mentor_id to user_id, introducing a type column with composite PK/backfill; removed org/tenant code fields from entity_type inserts; added env-driven tenant/org codes to seeder entities.
Distribution Schema
src/distributionColumns.psql
Converted entity_types, permissions, and sessions declarations from distributed tables to reference tables (adjusted distribution strategy).
Integration Tests & Schemas
src/integration-tests-new/..., src/integrationJest.config.js
Added 30+ integration test suites and many JSON schema files, updated Jest config (moduleNameMapper, testMatch), and added new test discovery pattern.
Test Orchestration & Utilities
src/integration-tests-new/commonTests.js, src/integration-tests-new/testSequencer.js
Introduced common login/auth helpers (logIn, mentorLogIn, adminLogin, logError) and a CustomSequencer to enforce test ordering.
Sessions Tests (large)
src/integration-tests-new/sessions/sessions.specs.js, src/integration-tests-new/sessions/schemas/sessions.schemas.json
Comprehensive session lifecycle tests (create, enroll, start, complete, feedback) with large schema definitions and extensive orchestration logic.
Package & Test Config
src/package.json, src/setupFileAfterEnv.js, src/integrationJest.config.js
Added dev script, adjusted db:init command grouping so migrate runs after create/echo, bumped jest to ^30.2.0, and switched Postgres connection string to elevate-mentoring; updated Jest module mappings and sequencer.
Misc — migrations/psql tweaks
src/distributionColumns.psql, src/database/migrations/...add-orgEntity-type.js
Removed organization_code and tenant_code from certain entity_type insert rows and adjusted reference/distribution declarations.

Sequence Diagram(s)

sequenceDiagram
    autonumber
    participant CI as CircleCI
    participant Docker as Docker Compose
    participant Services as Services (mentoring, user, scheduler, comms)
    participant DB as PostgreSQL
    participant Tests as Jest Integration Suite

    CI->>Docker: checkout repos & pull/build images
    Docker->>Services: docker-compose up (start containers)
    Services->>DB: run migrations, enable extensions, apply seeds
    DB-->>Services: migrations/seeds complete
    Docker->>Services: health-check loop (wait for ready)
    Services-->>Docker: healthy responses
    Docker->>Tests: start Jest (integration tests)
    Tests->>Services: HTTP requests (auth flows, seed/setup)
    Services->>DB: queries / data operations
    DB-->>Services: data responses
    Services-->>Tests: API responses (validated against schemas)
    Tests-->>CI: test results & diagnostics (on failure)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

  • Develop tenant changes #1334: Modifies the same session_ownership migration area; likely overlap on schema/backfill logic.
  • Tenant Optimized Script #1500: Touches tenant/distribution DB schema changes related to distributionColumns.psql and multi-tenant conversions.
  • entity type added #1422: Edits the same org-entity migration (entity_type insertion fields); directly related to removed org/tenant code fields.

Suggested reviewers

  • nevil-mathew

Poem

🐰 I hopped into CI and nudged the stack,
Brought up containers, then checked each hack.
Migrations and seeds in a tidy row,
Tests marched in order, ready to go.
Pipelines humming — carrots for the pack! 🥕

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)
  • Title check: ❓ Inconclusive. The PR title 'Migration copy2' is vague and generic, failing to convey the changeset's primary purpose or scope. Resolution: replace it with a descriptive title that captures the main change, such as 'Add integration test suite and database migration updates' or 'Implement comprehensive test infrastructure and schema changes'.

✅ Passed checks (2 passed)
  • Description check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Docstring coverage: ✅ Passed. No functions found in the changed files to evaluate; docstring coverage check skipped.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 22

Note

Due to the large number of review comments, Critical severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/setupFileAfterEnv.js (1)

16-28: ⚠️ Potential issue | 🟠 Major

Race condition: database connection may not be ready when tests start.

db.connect() uses a fire-and-forget callback, and beforeAll is empty. Tests can begin executing before the connection is established, leading to flaky failures. Move the connection into beforeAll using the promise-based API:

Proposed fix
-db.connect((err) => {
-	if (err) {
-		console.error('Database connection error:', err)
-	} else {
-		console.log('Connected to DB')
-	}
-})
-
-global.db = db
-
 beforeAll(async () => {
-	// You can add any setup code you need here
+	await db.connect()
+	console.log('Connected to DB')
+	global.db = db
 })
🤖 Fix all issues with AI agents
In @.circleci/config.yml:
- Around line 112-130: The "Wait for services to be healthy" step's loop can
finish without containers being healthy and currently lets the pipeline
continue; after the for loop that checks the unhealthy and starting variables,
add an explicit health check that re-evaluates docker ps (or checks if
$unhealthy or $starting are non-empty) and, if any containers remain
unhealthy/starting, emit a clear error message and exit non‑zero (e.g., exit 1)
to fail the job so migrations won't run against unhealthy services.
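
The explicit failure exit suggested here might look like the shell fragment below. The variable names and docker ps filters are assumptions about the existing step; the check is factored into a function so the exit logic is testable in isolation.

```shell
# Hypothetical tail for the "Wait for services to be healthy" step.
# $1 = unhealthy container names, $2 = still-starting container names.
check_health() {
  if [ -n "$1" ] || [ -n "$2" ]; then
    echo "Containers not healthy after timeout: $1 $2" >&2
    return 1
  fi
  return 0
}

# In the real step these would come from docker ps, e.g.:
#   unhealthy=$(docker ps --filter "health=unhealthy" --format '{{.Names}}')
#   starting=$(docker ps --filter "health=starting" --format '{{.Names}}')
unhealthy="${unhealthy:-}"
starting="${starting:-}"
check_health "$unhealthy" "$starting" || exit 1
echo "All containers healthy; safe to run migrations"
```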

In `@src/distributionColumns.psql`:
- Around line 1-3: Change the incorrect creation of sessions as a reference
table: replace the call to create_reference_table('sessions') with the
distributed table creation used by other operational tables (e.g.
session_attendees) by calling create_distributed_table('sessions', 'id'), and
ensure the sessions table uses the composite primary key (tenant_code, id) for
multi-tenant sharding consistent with session_request and post_session_details.

In `@src/integration-tests-new/config/config.spec.js`:
- Around line 1-17: The test files under integration-tests-new (e.g.,
config.spec.js, admin.spec.js, cloud-services.spec.js) are using the singular
extension ".spec.js" but the Jest config variable testMatch in
integrationJest.config.js expects the plural pattern
'<rootDir>/integration-tests-new/**/*.specs.js', so the files are skipped; fix
by either renaming the test files to use the plural suffix (e.g.,
config.specs.js) or modify the testMatch glob to include both patterns (e.g.,
add '**/*.spec.js' or use a combined pattern) so Jest discovers these tests.
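
The broadened glob might look like this sketch of integrationJest.config.js; the other existing keys (moduleNameMapper, testSequencer, and so on) are omitted here.

```javascript
// Sketch: an array of globs matching both *.spec.js and *.specs.js under
// integration-tests-new, so neither naming convention is silently skipped.
module.exports = {
	testMatch: [
		'<rootDir>/integration-tests-new/**/*.spec.js',
		'<rootDir>/integration-tests-new/**/*.specs.js',
	],
}
```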

In `@src/integration-tests-new/default-rule/schemas/default-rule.schemas.json`:
- Around line 35-37: The read-schema in default-rule.schemas.json uses the wrong
field names; replace any occurrences of "matching_operator" with "operator" and
"role_config" with "requester_roles_config" within the read (response) schema so
the field names match the create/update schemas and the rest of the codebase
(validators/helpers/services/tests); ensure the value types for "operator" and
"requester_roles_config" match the existing definitions used by create/update
schemas (adjust object/array/string types accordingly) and update the read
schema property names wherever "matching_operator" or "role_config" appear.

In `@src/integration-tests-new/modules/modules.spec.js`:
- Around line 33-57: The tests use the literal `{id}` in the update/delete URLs
so they never hit a real resource; before calling the update or delete requests
in the tests inside the describe('POST /mentoring/v1/modules/update/{id}')
block, create a module via the create endpoint (capture the returned module id
from the create response), then replace `{id}` with that captured id when
constructing the URL for the update and delete requests (also ensure
Content-Type and auth headers are preserved and optionally clean up the created
module in a teardown or afterEach).
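
The capture-and-interpolate pattern reads roughly as below. It is written synchronously against an in-memory stub so it runs standalone (the real specs would await supertest calls), and the `result.id` field path is an assumption about the create response.

```javascript
// Hypothetical sketch: create a module, capture its id, then target the
// update endpoint with that id instead of the literal '{id}' placeholder.
function updateModuleFlow(api) {
	const created = api.post('/mentoring/v1/modules/create', { code: 'demo_module' })
	const id = created.body.result.id // capture the real id from the create response
	return api.post(`/mentoring/v1/modules/update/${id}`, { code: 'demo_module_v2' })
}

// In-memory stand-in for the supertest client, recording the URLs hit.
function stubApi() {
	const calls = []
	return {
		calls,
		post(url, body) {
			calls.push(url)
			return { body: { result: { id: 42, ...body } } }
		},
	}
}

const api = stubApi()
updateModuleFlow(api)
console.log(api.calls[1]) // /mentoring/v1/modules/update/42
```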

In `@src/integration-tests-new/org-admin/schemas/org-admin.schemas.json`:
- Around line 328-389: The POST_mentoring_v1_org-admin_uploadSampleCSV entry is
a concrete example, not a JSON Schema; replace it with a proper schema that
defines type: "object" and properties for responseCode (string, optional enum
like "OK"), message (string), result (array), and meta (object) where
meta.formsVersion is an array of objects with properties id (number), type
(string), version (number), plus correlation (string) and meetingPlatform
(string); ensure required fields mirror actual response and keep the top-level
entry named "POST_mentoring_v1_org-admin_uploadSampleCSV" so toMatchSchema calls
validate structure rather than literal values.
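
A proper schema along the lines described might look like the fragment below. Field names mirror the comment above; the required list and enum are assumptions about the actual response shape.

```json
{
	"POST_mentoring_v1_org-admin_uploadSampleCSV": {
		"type": "object",
		"required": ["responseCode", "message", "result"],
		"properties": {
			"responseCode": { "type": "string", "enum": ["OK"] },
			"message": { "type": "string" },
			"result": { "type": "array" },
			"meta": {
				"type": "object",
				"properties": {
					"formsVersion": {
						"type": "array",
						"items": {
							"type": "object",
							"properties": {
								"id": { "type": "integer" },
								"type": { "type": "string" },
								"version": { "type": "integer" }
							}
						}
					},
					"correlation": { "type": "string" },
					"meetingPlatform": { "type": "string" }
				}
			}
		}
	}
}
```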

In `@src/integration-tests-new/permissions/permissions.spec.js`:
- Around line 38-56: The tests use the literal string "{id}" in URLs (url =
`/mentoring/v1/permissions/update/{id}` and delete variant) which hits no route;
fix by capturing the created permission ID from the create test response and
interpolating it into the update/delete URLs. Declare a shared variable (e.g.,
permissionId) in the spec scope, set permissionId = res.body.id (or the correct
ID field) in the create POST test (or create it in beforeAll), then replace
`/mentoring/v1/permissions/update/{id}` and
`/mentoring/v1/permissions/delete/{id}` with template strings using that
permissionId so updatePermission and deletePermission tests target the actual
resource; keep using BASE, request and schemas as before.
- Around line 1-84: The test file is named permissions.spec.js but Jest is
configured to run only *.specs.js, so it's being skipped; rename the test file
to permissions.specs.js and update any references. Specifically, rename the file
containing the describe block "permissions endpoints generated from
api-doc.yaml" from permissions.spec.js to permissions.specs.js so Jest's
testMatch (integrationJest.config.js) picks it up; also apply the same renaming
pattern to the other 18 test files using the singular .spec extension.

In `@src/integration-tests-new/question-set/question-set.spec.js`:
- Around line 1-74: Tests use the literal placeholder 'string' for the
x-auth-token in request setup (see the three places in question-set.spec.js
where req = req.set('x-auth-token', 'string')), so successful requests will
return 401/403; replace that with a real token fetched from the project's login
helper: add a beforeAll that calls the existing auth/login utility (e.g.,
login(), getAuthToken(), or global.signIn) to obtain a token and store it in a
variable (e.g., authToken), then update each request that calls
set('x-auth-token', 'string') to use that token (set('x-auth-token',
authToken)); ensure the helper uses valid credentials/environment and clean up
if required.

In `@src/integration-tests-new/report-mapping/report-mapping.spec.js`:
- Line 10: Replace the hardcoded placeholder header assignment (req =
req.set('x-auth-token', 'bearer {{token}}')) with a real token obtained from the
shared helper: call await commonHelper.adminLogin() to get the token, then set
the request header using the correct Bearer formatting (e.g., req =
req.set('x-auth-token', `Bearer ${token}`)); ensure the adminLogin call is
awaited before any requests that use the req variable so authenticated tests use
a valid token.

In `@src/integration-tests-new/report-queries/report-queries.spec.js`:
- Around line 1-66: Add a beforeAll hook that calls the shared auth helper
(e.g., commonHelper.adminLogin() or whatever login helper is used in other
specs) to get a real token, store it in a variable (e.g., authToken) and replace
the placeholder header values in each test's req.set('x-auth-token', 'bearer
{{token}}') with the real header using the token (e.g., req.set('x-auth-token',
`bearer ${authToken}`)); update all occurrences in this file (tests under POST
/mentoring/v1/report-queries/create, GET /mentoring/v1/report-queries/read, POST
/mentoring/v1/report-queries/update, DELETE /mentoring/v1/report-queries/delete)
so requests use the obtained token.

In `@src/integration-tests-new/report-type/report-type.spec.js`:
- Line 10: The test is using a literal Postman-style placeholder in the
Authorization header (req.set('x-auth-token', 'bearer {{token}}')), causing
401s; replace that hardcoded string with a real token source by wiring the
request header to your test auth helper or environment variable (for example use
the login helper that returns a JWT or process.env.AUTH_TOKEN) wherever
req.set('x-auth-token', 'bearer {{token}}') appears (including the other
occurrences in this spec) so tests send a valid bearer token.
- Line 37: The tests reference an undefined TOKEN constant (used when setting
the request header via req.set('x-auth-token', TOKEN) on the read/update tests),
causing a ReferenceError; fix by declaring and initializing TOKEN at the top of
the spec (or make the read/update tests use the same literal header string used
in create/delete, e.g. 'bearer {{token}}') so that req.set('x-auth-token',
TOKEN) (and the other occurrence) receive a defined value.
- Around line 1-3: Add the Jest configuration to load the setup file so the
toMatchSchema matcher from src/setupFileAfterEnv.js is registered: update the
jest config module (the object exported from src/jest.config.js) to include the
setupFilesAfterEnv property pointing to the setup file (ensure the string
references './setupFileAfterEnv.js' or '<rootDir>/setupFileAfterEnv.js') so
expect.extend(matchers) in src/setupFileAfterEnv.js runs before tests that call
toMatchSchema in report-type.spec.js.

In `@src/integration-tests-new/reports/reports.spec.js`:
- Line 10: The tests currently set the header using the literal placeholder
'bearer {{token}}' which causes 401/403; replace this with a real token fetched
at runtime (for example call a test helper like getAuthToken() or read
process.env.TEST_TOKEN) and set the header using string concatenation or
template interpolation (e.g., req.set('x-auth-token', 'bearer ' + token)), then
update every occurrence of req.set('x-auth-token', 'bearer {{token}}')
(including the repeated lines flagged) to use the runtime token value.

In `@src/integration-tests-new/requestSessions/requestSessions.specs.js`:
- Line 50: The org-id header is being set with the whole organization object for
mentee requests; update occurrences that use menteeDetails.organizations[0] to
access the id string instead by changing them to
menteeDetails.organizations[0].id.toString() so they match mentorDetails
usage—search for all .set('org-id', menteeDetails.organizations[0]) (e.g., the
requests around where menteeDetails is used) and replace with .set('org-id',
menteeDetails.organizations[0].id.toString()) to ensure consistent header values
across tests.

In `@src/integration-tests-new/role-extension/role-extension.spec.js`:
- Line 90: The test references an undeclared TOKEN variable in the request
header (req.set('x-auth-token', TOKEN)), causing a ReferenceError; replace TOKEN
with the intended token literal used elsewhere (e.g., 'bearer {{token}}' or
'test-token') so req.set('x-auth-token', ...) uses a defined string, or wire in
the correct test token variable if one exists, ensuring the header-setting call
in this test matches the other tests' token format.

In
`@src/integration-tests-new/rolePermissionMapping/rolePermissionMapping.spec.js`:
- Line 8: The test builds a URL with a literal "{role_id}" so requests hit a
non-existent route; replace the placeholder with a real role ID variable (e.g.,
use a test fixture or created roleId) wherever the URL is constructed (the const
url declarations in rolePermissionMapping.spec.js) so the path becomes
`/mentoring/v1/rolePermissionMapping/create/${roleId}` (apply the same change at
the four occurrences noted) and ensure the tests use that roleId when calling
the endpoint before asserting res.status === 201.
- Line 37: The tests use an undefined TOKEN constant which causes a
ReferenceError; update the two usages of req.set('x-auth-token', TOKEN) in
rolePermissionMapping.spec.js (both occurrences) to use a defined token source —
either import or define TOKEN at the top of the test file (preferably from a
shared test helper or process.env) or replace with the same literal used
elsewhere (e.g., 'string'); ensure both occurrences are changed so the test runs
without runtime errors.

In `@src/integration-tests-new/sessions/sessions.specs.js`:
- Around line 286-360: The bug is variable shadowing: the inner let userDetails
in the beforeAll callback (created via commonHelper.mentorLogIn()) hides the
outer describe-scoped userDetails so tests like the mentor feedback test use the
wrong token; fix by removing the inner declaration and assign the logged-in
mentor to the describe-scoped variable (i.e., replace "let userDetails = await
commonHelper.mentorLogIn()" inside beforeAll with "userDetails = await
commonHelper.mentorLogIn()"), ensuring the same userDetails.token is used in
start/completed and the mentor feedback POST.
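
The shadowing described can be reproduced in isolation:

```javascript
// Minimal reproduction of the describe-scoped vs beforeAll-scoped shadowing.
let userDetails = { token: 'outer-token' }

function beforeAllCallback() {
	let userDetails = { token: 'mentor-token' } // `let` creates a new binding,
	// so the outer userDetails is never updated
}
beforeAllCallback()
console.log(userDetails.token) // outer-token: tests still see the stale value

// Dropping `let` assigns to the outer binding instead:
function fixedBeforeAll() {
	userDetails = { token: 'mentor-token' }
}
fixedBeforeAll()
console.log(userDetails.token) // mentor-token
```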

In `@src/integration-tests-new/users/users.spec.js`:
- Around line 1-31: The tests use a literal placeholder token in the
x-auth-token header and lack a login step; add a beforeAll that performs
authentication (reusing the project's commonHelper login helper if available) to
obtain a real token and store it in a variable (e.g., authToken) and replace the
hardcoded 'string' in both tests with that authToken when calling
request(BASE).set('x-auth-token', authToken). Also add a couple of negative
tests for unauthorized access (e.g., missing or invalid token) to mirror other
spec files and ensure the endpoints return 401/403 as expected.

In `@src/integrationJest.config.js`:
- Line 22: The current Jest config's testMatch pattern uses
'<rootDir>/integration-tests-new/**/*.specs.js' which only matches files with
the plural '.specs.js' and misses existing tests named '*.spec.js'; update the
testMatch entry (the testMatch key in src/integrationJest.config.js) to broaden
the glob so it matches both '*.spec.js' and '*.specs.js' (for example by using a
pattern like '**/*.spec?(s).js' or an array with both globs) so the files
admin.spec.js, config.spec.js, cloud-services.spec.js, etc., are included in the
test run.
🟠 Major comments (37)
src/integration-tests-new/rolePermissionMapping/rolePermissionMapping.spec.js-10-10 (1)

10-10: ⚠️ Potential issue | 🟠 Major

Auth token is a hardcoded placeholder 'string' — requests will likely be rejected as unauthorized.

Setting x-auth-token to the literal value 'string' won't pass any real auth middleware. For integration tests to meaningfully validate a 201 response, a valid token (or a mechanism to obtain one during test setup) is needed.

Also applies to: 26-27

src/integration-tests-new/rolePermissionMapping/schemas/rolePermissionMapping.schemas.json-1-72 (1)

1-72: 🛠️ Refactor suggestion | 🟠 Major

Schemas lack required properties — they will match virtually any object, including {}.

Neither schema specifies required arrays at any level, so an empty object {} would pass validation. This largely defeats the purpose of response schema validation in the tests.

Consider adding required fields at minimum for the top-level properties and for result in the create schema:

Example for the create schema
 	"POST_mentoring_v1_rolePermissionMapping_create_role_id": {
 		"$schema": "http://json-schema.org/draft-04/schema#",
 		"type": "object",
+		"required": ["responseCode", "message", "result"],
 		"properties": {
 			"responseCode": {
 				"type": "string"
 			},

Apply the same pattern to the delete schema and nested objects as appropriate.

src/integration-tests-new/issues/issues.spec.js-10-10 (1)

10-10: ⚠️ Potential issue | 🟠 Major

Hardcoded placeholder auth token will cause test failures or false positives.

'string' is not a valid auth token. Against a real service this will return 401; if the test still passes with 200, it means the endpoint is unauthenticated and the header is meaningless. Consider sourcing a valid token from the test environment (e.g., process.env.AUTH_TOKEN).

src/integration-tests-new/issues/schemas/issues.schemas.json-1-36 (1)

1-36: ⚠️ Potential issue | 🟠 Major

Schema has no required fields — validation will pass even for empty objects.

Without a required array, toMatchSchema will accept {} as valid. At minimum, add the fields the API must always return (e.g., responseCode, message) to a required list. Consider also setting "additionalProperties": false if the shape should be strict.

Proposed fix (example)
 	"POST_mentoring_v1_issues_create": {
 		"$schema": "http://json-schema.org/draft-04/schema#",
 		"type": "object",
+		"required": ["responseCode", "message", "result", "meta"],
 		"properties": {
 			"responseCode": {
src/integration-tests-new/feedback/feedback.spec.js-10-10 (1)

10-10: ⚠️ Potential issue | 🟠 Major

Auth token 'string' is a placeholder — the "should return 200" test will likely fail against a real server.

If the API validates tokens, this request will be rejected. Use a valid token (e.g., from a login helper or environment variable) for the authenticated test cases to be meaningful.

src/integration-tests-new/feedback/schemas/feedback.schemas.json-2-88 (1)

2-88: 🛠️ Refactor suggestion | 🟠 Major

No required arrays in the GET schema — any subset of fields (including none) passes validation.

Without required at the top level and inside the form item, the schema accepts {} as a valid response. Add required arrays to enforce the contract the test is meant to verify.

Example for the top-level and form item objects
 	"GET_/mentoring/v1/feedback/forms/{SessionId}": {
 		"type": "object",
 		"properties": {
 			"responseCode": { "type": "string" },
 			"message": { "type": "string" },
 			"result": { ... },
 			"meta": { ... }
 		},
+		"required": ["responseCode", "message", "result"]
 	}

Similarly, add "required" inside the form item schema for fields that must always be present (e.g., "id", "name", "type", "status").

src/integration-tests-new/report-type/schemas/report-type.schemas.json-1-61 (1)

1-61: 🛠️ Refactor suggestion | 🟠 Major

Schemas lack required arrays, making validation too permissive.

None of the four schemas declare required on the top-level properties (responseCode, message, result) or on result properties (id, title, etc.). This means a response with an empty body {} would pass validation, defeating the purpose of schema-based contract testing.

Also, id is typed as "number" — consider using "integer" if IDs are always whole numbers, to catch accidental floating-point values.

Example fix for the create schema (apply similarly to others)
 	"POST_mentoring_v1_report-type_create": {
 		"type": "object",
 		"properties": {
 			"responseCode": {
 				"type": "string"
 			},
 			"message": {
 				"type": "string"
 			},
 			"result": {
 				"type": "object",
 				"properties": {
 					"created_at": {
 						"type": "string"
 					},
 					"updated_at": {
 						"type": "string"
 					},
 					"title": {
 						"type": "string"
 					},
 					"id": {
-						"type": "number"
+						"type": "integer"
 					},
 					"deleted_at": {
 						"type": "null"
 					}
-				}
+				},
+				"required": ["id", "title", "created_at", "updated_at"]
 			},
 			...
-		}
+		},
+		"required": ["responseCode", "message", "result"]
 	},
src/integration-tests-new/role-extension/role-extension.spec.js-5-104 (1)

5-104: ⚠️ Potential issue | 🟠 Major

Tests assume sequential execution order with no shared state.

The read, update, and delete tests query ?title=mentee, presumably relying on the resource created by the create test. However:

  1. Jest does not guarantee execution order across describe blocks by default (especially with --runInBand not being the default).
  2. The create test sends title: 'string', not title: 'mentee', so even if ordering is preserved, the resource looked up in subsequent tests won't match what was created.

Consider using beforeAll to create the test resource and afterAll to clean it up, storing the created entity's identifier for use in subsequent tests.

src/integration-tests-new/role-extension/schemas/role-extension.schemas.json-1-73 (1)

1-73: 🛠️ Refactor suggestion | 🟠 Major

Schemas lack required fields — validation is effectively a no-op.

None of the four schemas declare required at the top level (e.g., ["responseCode", "message", "result"]) or within the result object. As a result, toMatchSchema will pass even for an empty {} response, defeating the purpose of schema validation.

Add required arrays at minimum on the top-level object and on result:

Example fix for the create schema (apply similarly to others)
 "POST_mentoring_v1_role-extension_create": {
 	"type": "object",
+	"required": ["responseCode", "message", "result", "meta"],
 	"properties": {
 		"responseCode": {
 			"type": "string"
 		},
 		...
 		"result": {
 			"type": "object",
+			"required": ["id", "title", "label", "status"],
 			"properties": {
src/integration-tests-new/role-extension/role-extension.spec.js-9-11 (1)

9-11: ⚠️ Potential issue | 🟠 Major

Duplicate x-auth-token header — first value is silently overwritten.

Line 10 sets the header to 'bearer {{token}}', then line 11 immediately overwrites it with 'test-token'. The same pattern repeats at lines 36–37. This looks like leftover scaffolding from a template. Pick one consistent token strategy and remove the dead set call.

Additionally, all token values across the file ('bearer {{token}}', 'test-token') are non-functional placeholders. Against a real auth-protected API these tests will fail with 401/403 rather than exercising the happy path. Consider reading a valid token from the environment or a shared setup fixture (e.g., a login helper that runs before the suite).

Proposed fix (example using env variable)
+const TOKEN = process.env.AUTH_TOKEN || 'test-token'
+
 describe('role-extension endpoints generated from api-doc.yaml', () => {
 	describe('POST /mentoring/v1/role-extension/create', () => {
 		test('should return 201', async () => {
 			const url = `/mentoring/v1/role-extension/create`
 			let req = request(BASE).post(url)
-			req = req.set('x-auth-token', 'bearer {{token}}')
-			req = req.set('x-auth-token', 'test-token')
+			req = req.set('x-auth-token', TOKEN)
src/integration-tests-new/reports/reports.spec.js-109-129 (1)

109-129: ⚠️ Potential issue | 🟠 Major

POST reportData endpoint should likely expect 200, not 201.

This is a query/reporting endpoint, not a resource creation endpoint. Expecting 201 Created (Line 125) seems incorrect — 200 OK is the standard response for data retrieval operations.

src/integration-tests-new/reports/reports.spec.js-60-70 (1)

60-70: ⚠️ Potential issue | 🟠 Major

GET read endpoint should not expect HTTP 201.

A read/GET operation conventionally returns 200 OK, not 201 Created. Line 61's test description and Line 66's assertion both use 201. This will either always fail (if the API correctly returns 200) or masks an API design issue.

-		test('should return 201', async () => {
+		test('should return 200', async () => {
 			const url = `/mentoring/v1/reports/read?id=1`
 			let req = request(BASE).get(url)
 			req = req.set('x-auth-token', 'bearer {{token}}')
 			const res = await req
-			expect(res.status).toBe(201)
+			expect(res.status).toBe(200)
src/integration-tests-new/reports/reports.spec.js-73-93 (1)

73-93: ⚠️ Potential issue | 🟠 Major

Update test uses literal "string" as path param and POST instead of PUT/PATCH.

Line 75 uses /update/string — this is a placeholder, not a valid resource ID. Also, organization_id is sent as a number (1) on Line 85, but the create payload and the schema (reports.schemas.json Line 101-103) define it as a string type. This type mismatch will cause validation issues or silent data coercion.

Additionally, Line 76 uses .post() for an update — typically this should be PUT or PATCH to follow REST conventions, unless the API intentionally uses POST.

-			const url = `/mentoring/v1/reports/update/string`
+			const url = `/mentoring/v1/reports/update/${createdReportId}`
-				organization_id: 1,
+				organization_id: 'string',
src/integration-tests-new/reports/schemas/reports.schemas.json-1-139 (1)

1-139: ⚠️ Potential issue | 🟠 Major

Schemas lack top-level required fields — validation will pass even if responseCode, message, or result are missing.

Without a required array at the top level of each schema, toMatchSchema will accept a response body like {} as valid. This defeats the purpose of schema validation. The same applies to all six schemas in this file.

Example fix for the create schema
 	"POST_mentoring_v1_reports_create": {
 		"type": "object",
 		"properties": {
 			"responseCode": {
 				"type": "string"
 			},
 			...
 		},
+		"required": ["responseCode", "message", "result"]
 	},
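The vacuous-pass failure mode is easy to demonstrate with a stripped-down checker (a sketch, not a real JSON Schema validator; toMatchSchema presumably delegates to a full implementation):

```javascript
// Minimal sketch of type-only validation vs. required-key validation.
// It only illustrates why a schema without "required" accepts an empty body.
function validate(schema, body) {
	// Type checks only run for properties that are actually present...
	for (const [key, spec] of Object.entries(schema.properties || {})) {
		if (key in body && typeof body[key] !== spec.type) return false
	}
	// ...so only a "required" list can reject a missing property.
	return (schema.required || []).every((key) => key in body)
}

const lax = { type: 'object', properties: { responseCode: { type: 'string' } } }
const strict = { ...lax, required: ['responseCode', 'message', 'result'] }

console.log(validate(lax, {})) // true: empty body slips through
console.log(validate(strict, {})) // false: missing keys are now caught
```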
src/integration-tests-new/questions/schemas/questions.schemas.json-26-28 (1)

26-28: ⚠️ Potential issue | 🟠 Major

options and category are typed as "null" only — this will reject non-null values.

The schema enforces "type": "null" for both options and category. However, the PUT test (in questions.spec.js) sends category: { evaluating: 'mentor' } as part of the request, and if the API echoes it back the schema validation will fail. If these fields can hold objects or arrays in certain responses, use a union type like ["object", "null"] or ["array", "null"].

Proposed fix (example for category in the PUT schema)
 					"category": {
-						"type": "null"
+						"type": ["object", "null"]
 					},

Apply the same pattern to options if it can return non-null values.

Also applies to: 35-37, 123-125, 132-134

src/integration-tests-new/questions/questions.spec.js-10-10 (1)

10-10: ⚠️ Potential issue | 🟠 Major

Replace placeholder token with valid authentication.

Lines 10, 51, and 88 use the literal string 'string' as the x-auth-token value. This is a placeholder that will cause authentication failures when tests run against a real service. The tests expect success responses (201, 202, 200) but cannot authenticate with an invalid token.

Replace with a valid token sourced from environment variables or user setup, matching the pattern used in other integration test files (e.g., userDetails.token as seen in sessions.spec.js).
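A hedged sketch of that pattern (AUTH_TOKEN and the fixture shape are assumptions, not names taken from this repository):

```javascript
// Hypothetical token resolution for the suite: prefer an env var, fall
// back to a shared login fixture, and fail loudly instead of sending a
// placeholder that the server will reject with 401.
function resolveToken(env, loginFixture) {
	if (env.AUTH_TOKEN) return `bearer ${env.AUTH_TOKEN}`
	if (loginFixture && loginFixture.token) return `bearer ${loginFixture.token}`
	throw new Error('No auth token: set AUTH_TOKEN or run the login fixture first')
}

// In a spec file this might be wired up as:
//   const token = resolveToken(process.env, userDetails)
//   req = req.set('x-auth-token', token)
console.log(resolveToken({ AUTH_TOKEN: 'abc123' }, null)) // 'bearer abc123'
```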

src/integration-tests-new/org-admin/schemas/org-admin.schemas.json-390-395 (1)

390-395: ⚠️ Potential issue | 🟠 Major

updateTheme and themeDetails schemas are {"type": "string"} — this asserts the entire response body is a string.

These endpoints return JSON objects, not strings. The schema should be {"type": "object", "properties": {...}} to validate the actual response structure. As-is, toMatchSchema will fail for any valid JSON response.

src/integration-tests-new/org-admin/org-admin.spec.js-104-116 (1)

104-116: ⚠️ Potential issue | 🟠 Major

uploadSampleCSV sends a plain string instead of a file/CSV payload.

Line 109 sends 'string' as the body with Content-Type: application/json. A CSV upload endpoint typically expects multipart/form-data with a file attachment. This test won't exercise the intended functionality.
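A sketch of what the upload test could look like, assuming the endpoint accepts multipart/form-data; buildCsv and the field name 'file' are illustrative assumptions, not names from this repository:

```javascript
// Build a small CSV fixture in-memory; the supertest call (shown as a
// comment because it needs a live server) would attach it as a file part.
function buildCsv(rows) {
	return rows.map((row) => row.join(',')).join('\n')
}

const csv = buildCsv([
	['name', 'email'],
	['Test User', 'test@example.com'],
])

// In the spec (field name 'file' is an assumption):
//   req = request(BASE)
//     .post('/mentoring/v1/org-admin/uploadSampleCSV')
//     .set('x-auth-token', token)
//     .attach('file', Buffer.from(csv), 'sample.csv')
console.log(csv.split('\n').length) // 2
```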

src/integration-tests-new/org-admin/org-admin.spec.js-5-23 (1)

5-23: ⚠️ Potential issue | 🟠 Major

All tests use hardcoded placeholder values — they will fail against a real server.

Every test sends 'string' as the auth token and dummy payloads. These tests will receive 401/400 responses rather than the expected 200/201. Integration tests need valid auth tokens (e.g., obtained via a login step in beforeAll) and realistic request data to be useful. As-is, the entire suite will fail.

src/integration-tests-new/org-admin/org-admin.spec.js-58-80 (1)

58-80: ⚠️ Potential issue | 🟠 Major

Request body for updateRelatedOrgs appears to be a copy of a response schema, not a valid request payload.

The body sent (responseCode, message, result, meta) matches the shape of an API response, not a plausible request to update related organizations. This test will likely get a 400/422 instead of the expected 200, and wouldn't test meaningful functionality even if the server ignored unknown fields.

src/integration-tests-new/org-admin/org-admin.spec.js-118-137 (1)

118-137: ⚠️ Potential issue | 🟠 Major

Inconsistent auth header name and value — tests will likely fail or test the wrong thing.

Line 122 uses X-auth-token (capital X) with value 'bearer {{token}}', while all other tests use x-auth-token with value 'string'. The {{token}} looks like an unresolved template placeholder (Postman/Insomnia style) that will be sent literally, not as a valid bearer token. The same issue applies to line 143.

HTTP header names are case-insensitive, so the casing difference is benign, but the 'bearer {{token}}' value will not authenticate; it should be either a real token or the same placeholder 'string' used elsewhere for consistency.

-		req = req.set('X-auth-token', 'bearer {{token}}')
+		req = req.set('x-auth-token', 'string')
src/integration-tests-new/modules/schemas/modules.schemas.json-15-16 (1)

15-16: ⚠️ Potential issue | 🟠 Major

Inconsistent property name: Id vs id.

The create schema uses Id (capital I) on line 15, while the update schema (line 52) and list schema (line 121) use lowercase id. The required array in the list schema also uses lowercase id, indicating this is the standard convention. This inconsistency will cause schema validation to fail when the API actually returns id in the create response.

Fix by changing line 15 to use lowercase id:

Fix
-				"Id": {
+				"id": {
 					"type": "number"
 				},
src/integration-tests-new/mentoring/schemas/mentoring.schemas.json-1-70 (1)

1-70: ⚠️ Potential issue | 🟠 Major

Schema lacks required arrays — validation is effectively a no-op.

Without required at the top-level object (lines 3–69), the params object (lines 15–32), and the result object (lines 38–68), toMatchSchema will pass for any object (including {}), since JSON Schema only validates types of properties that are present. This defeats the purpose of schema validation.

Proposed fix — add required arrays
 {
 	"GET_mentoring_health": {
 		"type": "object",
 		"properties": {
 			"id": {
 				"type": "string"
 			},
 			...
 		},
+		"required": ["id", "ver", "ts", "params", "status", "result"]
 	}
 }

Similarly, add "required" to params and result:

"params": {
  "type": "object",
  "properties": { ... },
  "required": ["resmsgid", "msgid", "status"]
},
"result": {
  "type": "object",
  "properties": { ... },
  "required": ["version", "healthy", "checks"]
}
src/integration-tests-new/permissions/permissions.spec.js-9-10 (1)

9-10: ⚠️ Potential issue | 🟠 Major

Placeholder 'string' auth token will fail against a real API.

All requests use x-auth-token: 'string' which is not a valid authentication token. If these integration tests run against a real service (as the CI pipeline suggests), every authenticated request will be rejected. Other test suites in this project appear to use the same pattern — consider implementing a shared test helper that acquires a valid token (e.g., via a login flow or from environment variables) and injects it into requests.

#!/bin/bash
# Check if there's a shared test helper or setup that provides real auth tokens
fd -e js -p 'helper|setup|auth|token' src/integration-tests-new/ --exec head -30 {}
# Also check the test config for any global setup
fd -e js -p 'jest|config|sequencer' src/ -d 2 --exec cat {}
.circleci/config.yml-134-139 (1)

134-139: ⚠️ Potential issue | 🟠 Major

Fragile container name resolution via grep.

docker ps --format "{{.Names}}" | grep mentoring (and grep user) can match multiple containers or partial names (e.g., mentoring-db, user-cache). If multiple matches occur, the docker exec commands will fail. Use a more specific pattern or --filter flag.

🔧 Proposed fix: use grep with anchored pattern or docker-compose service names
-            export MENTORING=$(docker ps --format "{{.Names}}" | grep mentoring)
+            export MENTORING=$(docker-compose -f dev-ops/docker-compose.yml ps -q mentoring)

Or if using docker ps, use a more specific grep pattern:

-            export MENTORING=$(docker ps --format "{{.Names}}" | grep mentoring)
+            export MENTORING=$(docker ps --format "{{.Names}}" | grep -w mentoring)

Also applies to: 164-169, 171-176

.circleci/config.yml-178-197 (1)

178-197: ⚠️ Potential issue | 🟠 Major

Overwriting $USER environment variable — reserved Unix variable.

Lines 181, 188, and 195 assign export USER=$(docker ps ...). USER is a standard Unix environment variable ($USER = current username). Overwriting it can break tools that depend on it (e.g., SSH defaults to $USER as the login name, and many scripts read it). Use a different variable name.

🐛 Proposed fix: rename to USER_CONTAINER
-          export USER=$(docker ps --format "{{.Names}}" | grep user)
-          echo "Running user migrations in $USER"
-          docker exec "$USER" npm run db:init
+          export USER_CONTAINER=$(docker ps --format "{{.Names}}" | grep user)
+          echo "Running user migrations in $USER_CONTAINER"
+          docker exec "$USER_CONTAINER" npm run db:init

Apply the same rename in the seeders and startup steps.

.circleci/config.yml-225-231 (1)

225-231: ⚠️ Potential issue | 🟠 Major

The tag filter regex pattern needs proper anchoring, and a branches filter is needed to run on the intended branches.

CircleCI uses Java regex (not Go), so \b word boundaries are supported. However, the pattern \b(dev|develop|main)\b lacks the full-string anchors required by CircleCI's filter logic; use /^(dev|develop|main)$/ instead, and quote the regex in YAML to avoid ambiguity (single quotes: only: '/^(dev|develop|main)$/').

More critically: this config only triggers on matching tags, not branches. To run on branches named dev, develop, or main, add a branches filter:

Suggested fix
workflows:
  build-and-test:
    jobs:
      - build:
          filters:
            branches:
              only: '/^(dev|develop|main)$/'
            tags:
              only: '/^(dev|develop|main)$/'
src/integration-tests-new/connections/connections.specs.js-8-11 (1)

8-11: ⚠️ Potential issue | 🟠 Major

adminDetails is never declared — implicit global assignment.

Line 10 assigns to adminDetails without a prior let/const/var declaration. In strict mode this throws a ReferenceError. Additionally, adminDetails is never used in any test, so the entire beforeAll login is unnecessary overhead.

Proposed fix — either declare it or remove it

If admin context is needed later:

+let adminDetails = null
+
 beforeAll(async () => {
 	console.log('Attempting to login...')
 	adminDetails = await commonHelper.adminLogin()
 })

If not needed, remove the beforeAll entirely to speed up test execution.

src/integration-tests-new/commonTests.js-22-55 (1)

22-55: ⚠️ Potential issue | 🟠 Major

Bug: returned password doesn't match the password actually used.

Line 23 generates a random password via faker.internet.password(), but lines 27 and 33 use the hardcoded 'PassworD@@@123' for account creation and login. The returned userDetails.password (line 53) contains the faker-generated value, which is wrong. Any downstream test that uses userDetails.password to re-authenticate will fail.

The same issue exists in mentorLogIn (line 78 vs lines 83/90).

Proposed fix
-		let password = faker.internet.password()
+		let password = 'PassworD@@@123'

Or alternatively, use the faker password everywhere:

 		let password = faker.internet.password()
 		let res = await request.post('/user/v1/account/create').set('origin', 'localhost').send({
 			name: 'adithya',
 			email: email,
-			password: 'PassworD@@@123',
+			password: password,
 			role: common.MENTEE_ROLE,
 		})
 
 		res = await request.post('/user/v1/account/login').set('origin', 'localhost').send({
 			identifier: email,
-			password: 'PassworD@@@123',
+			password: password,
 		})
src/integration-tests-new/entity/entity.specs.js-23-24 (1)

23-24: ⚠️ Potential issue | 🟠 Major

Fix missing bearer prefix in x-auth-token header.

The middleware (src/middlewares/authenticator.js lines 384-388) requires the bearer prefix when IS_AUTH_TOKEN_BEARER is true (the default). It splits the header on whitespace and validates the auth type is bearer; if not, it throws an unauthorized error.

Current token usage in entity.specs.js (lines 23, 37, 67, 83, 92, 122, 137, 146, 169, 192) sets the raw token without the prefix, inconsistent with commonTests.js and the API scripts which use bearer ${token} format. These tests will fail with the default configuration.

Add the bearer prefix: .set('x-auth-token', 'bearer ' + adminDetails.token).
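The check can be paraphrased like this (a sketch of the described behavior, not the repository's exact middleware code):

```javascript
// Mirrors the described middleware behavior: split the header on
// whitespace and require the auth type to be 'bearer'.
function parseAuthToken(header) {
	const [authType, token] = String(header).trim().split(/\s+/)
	if (authType !== 'bearer' || !token) throw new Error('unauthorized')
	return token
}

console.log(parseAuthToken('bearer abc123')) // 'abc123'
// parseAuthToken('abc123') would throw: no 'bearer' prefix
```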

src/integration-tests-new/commonTests.js-135-143 (1)

135-143: ⚠️ Potential issue | 🟠 Major

Use environment variable instead of hardcoded token string.

Line 137 should use process.env.INTERNAL_ACCESS_TOKEN instead of the hardcoded string 'internal_access_token'. The middleware validates tokens against the environment variable (see src/middlewares/authenticator.js:23), and all production code uses process.env.INTERNAL_ACCESS_TOKEN. The test will only pass if the test environment explicitly sets INTERNAL_ACCESS_TOKEN='internal_access_token', which doesn't follow the production pattern. Update to .set('internal_access_token', process.env.INTERNAL_ACCESS_TOKEN) and ensure the test environment has the token properly configured.

src/integration-tests-new/cloud-services/cloud-services.spec.js-10-10 (1)

10-10: ⚠️ Potential issue | 🟠 Major

x-auth-token is set to the literal string 'string' — not a valid auth token.

Both tests use a hardcoded placeholder 'string' as the auth token. Unless these endpoints are unauthenticated, the requests will fail with 401. Other test files (e.g., entity-type.specs.js, mentees.specs.js) properly log in via commonHelper and use real tokens. This file should do the same.

Also applies to: 23-23

src/integration-tests-new/admin/admin.spec.js-6-16 (1)

6-16: ⚠️ Potential issue | 🟠 Major

Admin tests use placeholder 'string' as auth token, and userDelete has no target user specified.

All admin tests use 'string' as the x-auth-token, which will either fail with 401 or, if the endpoint lacks proper auth, execute destructive operations. The DELETE /userDelete test is particularly concerning — it sends no request body or user ID, so if it somehow authenticates, the behavior is undefined.

Use real admin credentials via commonHelper.adminLogin() and provide a valid (test-created) user ID for deletion.

src/integration-tests-new/mentees/mentees.specs.js-105-105 (1)

105-105: ⚠️ Potential issue | 🟠 Major

userDetails.id is likely undefined — the property is userId.

Based on the commonTests.js helper (lines 47-54 in the snippet), the returned object uses userId, not id. This means the mentorId query parameter will be undefined, and the test may pass vacuously or test the wrong behavior.

Proposed fix
-			const url = `/mentoring/v1/mentees/list?page=1&limit=5&search=&connected_mentees=true&mentorId=${userDetails.id}`
+			const url = `/mentoring/v1/mentees/list?page=1&limit=5&search=&connected_mentees=true&mentorId=${userDetails.userId}`

Apply to both Line 105 and Line 114.

Also applies to: 114-114

src/integration-tests-new/profile/profile.specs.js-7-11 (1)

7-11: ⚠️ Potential issue | 🟠 Major

Variable named mentorDetails but initialized via commonHelper.logIn() (mentee login).

In other test files, commonHelper.logIn() returns a mentee while commonHelper.mentorLogIn() returns a mentor. If these profile endpoints require a mentor role, this will fail. If a mentee is intentional, rename the variable to menteeDetails or userDetails to avoid confusion.

src/integration-tests-new/sessions/sessions.specs.js-15-17 (1)

15-17: ⚠️ Potential issue | 🟠 Major

Remove debug console.log statements.

These appear to be leftover debugging artifacts. They'll pollute test output with credential-bearing objects (userDetails, menteeDetails) on every run — both noisy and a potential PII/secret leak in CI logs.

src/integration-tests-new/testSequencer.js-43-50 (1)

43-50: ⚠️ Potential issue | 🟠 Major

Bug: Inconsistent directory name in path.relative calls.

Lines 43 and 50 use 'integration-tests' but the testMap is built (line 23) using 'integration-tests-new'. This produces wrong relative paths in the warning and log output, breaking the diagnostic value of these messages.

🐛 Proposed fix
-			remainingTests.map((t) => path.relative(path.join(process.cwd(), 'integration-tests'), t.path))
+			remainingTests.map((t) => path.relative(path.join(process.cwd(), 'integration-tests-new'), t.path))
-		console.log(`${index + 1}. ${path.relative(path.join(process.cwd(), 'integration-tests'), test.path)}`)
+		console.log(`${index + 1}. ${path.relative(path.join(process.cwd(), 'integration-tests-new'), test.path)}`)

Comment on lines +112 to +130
      - run:
          name: Wait for services to be healthy
          command: |
            cd dev-ops
            echo "Waiting for containers to be healthy..."

            for i in {1..30}; do
              unhealthy=$(docker ps --filter "health=unhealthy" --format "{{.Names}}")
              starting=$(docker ps --filter "health=starting" --format "{{.Names}}")

              if [ -z "$unhealthy" ] && [ -z "$starting" ]; then
                echo "All containers are healthy"
                break
              fi

              echo "Waiting... attempt $i"
              docker ps --format "table {{.Names}}\t{{.Status}}"
              sleep 5
            done

⚠️ Potential issue | 🔴 Critical

Health check loop silently continues on failure — pipeline will proceed with unhealthy services.

If containers never become healthy within 30 attempts, the for loop completes without hitting break, and execution silently proceeds to the migration step. This will cause confusing downstream failures. Add an explicit failure check after the loop.

🐛 Proposed fix: fail the step if containers aren't healthy
             echo "Waiting... attempt $i"
             docker ps --format "table {{.Names}}\t{{.Status}}"
             sleep 5
           done
+
+          # Fail if any container is still not healthy
+          unhealthy=$(docker ps --filter "health=unhealthy" --format "{{.Names}}")
+          starting=$(docker ps --filter "health=starting" --format "{{.Names}}")
+          if [ -n "$unhealthy" ] || [ -n "$starting" ]; then
+            echo "ERROR: Containers not healthy after 30 attempts: $unhealthy $starting"
+            docker ps --format "table {{.Names}}\t{{.Status}}"
+            exit 1
+          fi
🤖 Prompt for AI Agents
In @.circleci/config.yml around lines 112 - 130, The "Wait for services to be
healthy" step's loop can finish without containers being healthy and currently
lets the pipeline continue; after the for loop that checks the unhealthy and
starting variables, add an explicit health check that re-evaluates docker ps (or
checks if $unhealthy or $starting are non-empty) and, if any containers remain
unhealthy/starting, emit a clear error message and exit non‑zero (e.g., exit 1)
to fail the job so migrations won't run against unhealthy services.

Comment on lines +1 to +3
SELECT create_reference_table('entity_types');
SELECT create_reference_table('permissions');
SELECT create_reference_table('sessions');

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for sessions references in the distribution file
echo "=== Sessions references in distributionColumns.psql ==="
rg -n 'sessions' src/distributionColumns.psql

# Look for sessions table definition in migrations
echo -e "\n=== Searching for sessions table migrations ==="
fd -e sql -e js 'migrations' . --type f | head -20

# Search for sessions table definition patterns
echo -e "\n=== Sessions table definitions ==="
rg -n 'sessions' --type sql --type js | grep -i 'create table\|sessions' | head -30

Repository: ELEVATE-Project/mentoring

Length of output: 2950


🏁 Script executed:

#!/bin/bash
# Check the actual content of distributionColumns.psql to see all changes
echo "=== Full content of distributionColumns.psql ==="
cat -n src/distributionColumns.psql

Repository: ELEVATE-Project/mentoring

Length of output: 2013


🏁 Script executed:

#!/bin/bash
# Search for any foreign key or relationship context mentioning sessions
echo "=== Foreign key constraints involving sessions ==="
rg -n -B 2 -A 2 'sessions' src/ --type sql

# Look for references to sessions in application code
echo -e "\n=== Sessions usage in application code ==="
rg -n 'sessions' src/ --type js --type ts | head -20

Repository: ELEVATE-Project/mentoring

Length of output: 2273


🏁 Script executed:

#!/bin/bash
# Search for migration files that define the sessions table schema
echo "=== Database migrations directory structure ==="
find . -type d -name '*migrat*' 2>/dev/null | head -10

# Look for any database schema files
echo -e "\n=== Database schema files ==="
fd -e sql -e json | grep -i -E '(schema|migrat|sql)' | grep -v node_modules | head -20

# Search for sessions table creation
echo -e "\n=== Sessions table creation ==="
rg -n 'CREATE TABLE.*sessions' --type sql -A 10

Repository: ELEVATE-Project/mentoring

Length of output: 1565


🏁 Script executed:

#!/bin/bash
# Check if there are any migration tracking files
find . -type f -name '*.sql' | head -20

# Look for database initialization files
find . -type f \( -name '*init*' -o -name '*schema*' -o -name '*seed*' \) -type f | grep -i sql

Repository: ELEVATE-Project/mentoring

Length of output: 1450


🏁 Script executed:

#!/bin/bash
# List files in migrations directories
echo "=== Files in src/database/migrations ==="
ls -la src/database/migrations/ 2>/dev/null | head -20

echo -e "\n=== Files in src/migrations ==="
ls -la src/migrations/ 2>/dev/null | head -20

# Search for sessions table definition in any file
echo -e "\n=== Searching for sessions table definition ==="
rg -l 'sessions' src/database/migrations/ src/migrations/ 2>/dev/null

Repository: ELEVATE-Project/mentoring

Length of output: 5464


🏁 Script executed:

#!/bin/bash
# Check sample data to understand sessions data volume
echo "=== Sample sessions data ==="
rg -n "INSERT INTO.*sessions" documentation/ --type sql -A 2 | head -50

# Also check for any baseline schema
echo -e "\n=== Any baseline or schema files ==="
fd 'baseline\|schema' --type f | grep -v node_modules | head -10

Repository: ELEVATE-Project/mentoring

Length of output: 135


🏁 Script executed:

#!/bin/bash
# Read the sessions table creation migration
echo "=== Sessions table schema ==="
cat -n src/database/migrations/20230611225246-create-sessions.js

# Also check if there are any indices or constraints that might indicate volume
echo -e "\n=== Session-related tables and their relationships ==="
rg -n 'session' src/database/migrations/ -i | grep -i 'create\|foreign\|index' | head -20

Repository: ELEVATE-Project/mentoring

Length of output: 6327


🏁 Script executed:

#!/bin/bash
# Check sample data to understand typical session volume
echo "=== Sample data sessions volume ==="
rg "INSERT INTO.*sessions" documentation/ -c

# Check documentation version files for distribution columns consistency
echo -e "\n=== Distribution columns across versions ==="
head -5 documentation/*/distribution-columns/mentoring/distributionColumns.sql

Repository: ELEVATE-Project/mentoring

Length of output: 1820


🏁 Script executed:

#!/bin/bash
# Check migrations that update sessions table
echo "=== Migrations updating sessions table ==="
grep -l "sessions" src/database/migrations/*.js | while read f; do
    echo "=== $(basename $f) ==="
    head -50 "$f" | grep -E "(up:|down:|ALTER TABLE|ADD COLUMN|tenant_code|reference|distributed)" || true
done | head -100

Repository: ELEVATE-Project/mentoring

Length of output: 5062


🏁 Script executed:

#!/bin/bash
# Check for any migration that adds tenant_code to sessions
echo "=== Looking for tenant_code addition to sessions ==="
rg -n 'tenant_code' src/database/migrations/ -B 3 -A 3 | grep -E '(sessions|tenant_code|migration)' | head -40

# Check the latest distribution columns file
echo -e "\n=== Latest distribution columns ==="
cat -n documentation/latest/distribution-columns/mentoring/distributionColumns.sql 2>/dev/null || cat -n documentation/3.1.2/distribution-columns/mentoring/distributionColumns.sql

Repository: ELEVATE-Project/mentoring

Length of output: 6002


🏁 Script executed:

#!/bin/bash
# Check git status and recent changes to distributionColumns.psql
echo "=== Git status for distributionColumns.psql ==="
git log --oneline -n 20 src/distributionColumns.psql 2>/dev/null || echo "File not tracked"

# Check if file exists and its content
echo -e "\n=== Current content of src/distributionColumns.psql ==="
head -10 src/distributionColumns.psql 2>/dev/null || echo "File does not exist"

# Compare with latest documentation
echo -e "\n=== Latest documentation distribution columns ==="
cat documentation/latest/distribution-columns/mentoring/distributionColumns.sql 2>/dev/null | head -20

Repository: ELEVATE-Project/mentoring

Length of output: 1784


🏁 Script executed:

#!/bin/bash
# Check recent migrations that might have changed distribution strategy
echo "=== Migration 20251213000001 snippet about sessions ==="
sed -n '1,50p' src/database/migrations/20251213000001-remove-fks-manage-primary-keys.js | tail -20

# Verify the architecture intent for sessions
echo -e "\n=== Check if there's a migration undoing reference table for sessions ==="
rg -n 'sessions' src/database/migrations/20240716111200-undistribute-table-and-drop-views.js -A 5 -B 5 | head -30

Repository: ELEVATE-Project/mentoring

Length of output: 2267


Sessions should be a distributed table, not a reference table.

The change in line 3 contradicts the current distribution architecture. Recent migrations restructure sessions as a distributed table with composite primary key (tenant_code, id) for multi-tenant data sharding, and the latest documentation specifies create_distributed_table('sessions', 'id'). Making sessions a reference table (fully replicated) conflicts with this design and the established pattern for operational tables in the system. Revert to distributed distribution strategy consistent with session_attendees, session_request, and post_session_details.

🤖 Prompt for AI Agents
In `@src/distributionColumns.psql` around lines 1 - 3, Change the incorrect
creation of sessions as a reference table: replace the call to
create_reference_table('sessions') with the distributed table creation used by
other operational tables (e.g. session_attendees) by calling
create_distributed_table('sessions', 'id'), and ensure the sessions table uses
the composite primary key (tenant_code, id) for multi-tenant sharding consistent
with session_request and post_session_details.

Comment on lines +1 to +17
const request = require('supertest')
const BASE = process.env.BASE_URL || 'http://localhost:3000'
const schemas = require('./schemas/config.schemas.json')

describe('config endpoints generated from api-doc.yaml', () => {
	describe('GET /mentoring/v1/config/get', () => {
		test('should return 200', async () => {
			const url = `/mentoring/v1/config/get`
			let req = request(BASE).get(url)
			const res = await req
			expect(res.status).toBe(200)
			// validate response schema
			const schema = schemas['GET_mentoring_v1_config_get']
			expect(res.body).toMatchSchema(schema)
		})
	})
})

⚠️ Potential issue | 🔴 Critical

File will not be discovered by the test runner — wrong extension.

The Jest config at src/integrationJest.config.js Line 22 uses testMatch: ['<rootDir>/integration-tests-new/**/*.specs.js'] (note: plural .specs.js). This file is named config.spec.js (singular .spec.js), so it will be silently skipped. The same issue applies to admin/admin.spec.js, cloud-services/cloud-services.spec.js, and any other .spec.js files.

Rename to config.specs.js or update the testMatch glob to include both patterns.
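If updating the glob is preferred, the change is small (a sketch against the described config; only testMatch is shown, other options elided):

```javascript
// src/integrationJest.config.js (sketch)
module.exports = {
	testMatch: [
		'<rootDir>/integration-tests-new/**/*.specs.js',
		'<rootDir>/integration-tests-new/**/*.spec.js',
	],
}
```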

🤖 Prompt for AI Agents
In `@src/integration-tests-new/config/config.spec.js` around lines 1 - 17, The
test files under integration-tests-new (e.g., config.spec.js, admin.spec.js,
cloud-services.spec.js) are using the singular extension ".spec.js" but the Jest
config variable testMatch in integrationJest.config.js expects the plural
pattern '<rootDir>/integration-tests-new/**/*.specs.js', so the files are
skipped; fix by either renaming the test files to use the plural suffix (e.g.,
config.specs.js) or modify the testMatch glob to include both patterns (e.g.,
add '**/*.spec.js' or use a combined pattern) so Jest discovers these tests.

Comment on lines +35 to +37
"operator": {
"type": "string"
},

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, let's see the schema file structure to confirm the locations mentioned
cat -n src/integration-tests-new/default-rule/schemas/default-rule.schemas.json | sed -n '30,50p; 130,150p; 235,255p'

Repository: ELEVATE-Project/mentoring

Length of output: 1760


🏁 Script executed:

# Search for actual usage of these field names outside schema definitions
rg -n 'matching_operator|operator|requester_roles_config|role_config' \
  --type=js --type=json \
  -g '!node_modules' \
  -g '!**/node_modules' \
  src/ | grep -v "default-rule.schemas.json" | head -50

Repository: ELEVATE-Project/mentoring

Length of output: 5108


🏁 Script executed:

# Also check if there are any test files or mock responses that reference these fields
fd -e 'js' -e 'json' . src/integration-tests-new/default-rule --type f | \
  xargs grep -l 'operator\|role_config' 2>/dev/null | head -20

Repository: ELEVATE-Project/mentoring

Length of output: 202


Read schema has incorrect field names that don't match the actual API responses.

The create and update schemas correctly use operator (Lines 35, 241) and requester_roles_config (Lines 44, 250). However, the read schema incorrectly uses matching_operator (Line 135) and role_config (Line 144). The codebase—including helpers, validators, services, and tests—consistently use operator and requester_roles_config across all operations. The read schema will fail to validate actual API responses.

🤖 Prompt for AI Agents
In `@src/integration-tests-new/default-rule/schemas/default-rule.schemas.json`
around lines 35 - 37, The read-schema in default-rule.schemas.json uses the
wrong field names; replace any occurrences of "matching_operator" with
"operator" and "role_config" with "requester_roles_config" within the read
(response) schema so the field names match the create/update schemas and the
rest of the codebase (validators/helpers/services/tests); ensure the value types
for "operator" and "requester_roles_config" match the existing definitions used
by create/update schemas (adjust object/array/string types accordingly) and
update the read schema property names wherever "matching_operator" or
"role_config" appear.

Comment on lines +33 to +57
	describe('POST /mentoring/v1/modules/update/{id}', () => {
		test('should return 201', async () => {
			const url = `/mentoring/v1/modules/update/{id}`
			let req = request(BASE).post(url)
			req = req.set('x-auth-token', 'string')
			req = req
				.send({
					code: 'string',
				})
				.set('Content-Type', 'application/json')
			const res = await req
			expect(res.status).toBe(201)
			// validate response schema
			const schema = schemas['POST_mentoring_v1_modules_update_id']
			expect(res.body).toMatchSchema(schema)
		})

		test('should return 400/422 for invalid body', async () => {
			const url = `/mentoring/v1/modules/update/{id}`
			let req = request(BASE).post(url)
			req = req.set('x-auth-token', 'string')
			req = req.send({}).set('Content-Type', 'application/json')
			const res = await req
			expect([400, 422]).toContain(res.status)
		})

⚠️ Potential issue | 🔴 Critical

{id} placeholder is never substituted — tests hit a non-existent route.

On Lines 35 and 51, the URL contains the literal string {id} instead of an actual module ID. The server will likely return a 404 or route-mismatch error, not the expected 201 or 400/422. The same issue occurs in the DELETE test (Line 62).

These tests need a setup step: first create a module, capture its ID from the response, then use it for update and delete.

Proposed fix outline
```diff
 describe('POST /mentoring/v1/modules/update/{id}', () => {
+	let moduleId
+
+	beforeAll(async () => {
+		const createRes = await request(BASE)
+			.post('/mentoring/v1/modules/create')
+			.set('x-auth-token', process.env.AUTH_TOKEN)
+			.send({ code: 'test-module-update' })
+			.set('Content-Type', 'application/json')
+		moduleId = createRes.body.result?.id
+	})
+
 	test('should return 201', async () => {
-		const url = `/mentoring/v1/modules/update/{id}`
+		const url = `/mentoring/v1/modules/update/${moduleId}`
 		let req = request(BASE).post(url)
```

Apply the same pattern for the DELETE test.

🤖 Prompt for AI Agents
In `@src/integration-tests-new/modules/modules.spec.js` around lines 33 - 57, The
tests use the literal `{id}` in the update/delete URLs so they never hit a real
resource; before calling the update or delete requests in the tests inside the
describe('POST /mentoring/v1/modules/update/{id}') block, create a module via
the create endpoint (capture the returned module id from the create response),
then replace `{id}` with that captured id when constructing the URL for the
update and delete requests (also ensure Content-Type and auth headers are
preserved and optionally clean up the created module in a teardown or
afterEach).
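Beyond fixing these two specs, the whole class of unsubstituted-placeholder bugs could be prevented with a tiny path-template helper shared by the generated tests. This is a hedged sketch; the `buildUrl` name and its error message are assumptions, not existing repo code:

```javascript
// Hypothetical helper: substitute {param} placeholders in an OpenAPI-style
// path template, throwing immediately if a substitution is missing so the
// spec fails loudly instead of silently hitting a 404 route.
function buildUrl(template, params) {
	return template.replace(/\{(\w+)\}/g, (match, name) => {
		if (!(name in params)) throw new Error(`Missing path param: ${name}`)
		return encodeURIComponent(String(params[name]))
	})
}

// e.g. in a spec: const url = buildUrl('/mentoring/v1/modules/update/{id}', { id: moduleId })
const example = buildUrl('/mentoring/v1/modules/update/{id}', { id: 42 })
```

A generated spec that forgets to pass the ID would then throw at request-build time with a clear message, rather than asserting against a route-mismatch response.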

```javascript
describe('rolePermissionMapping endpoints generated from api-doc.yaml', () => {
	describe('POST /mentoring/v1/rolePermissionMapping/create/{role_id}', () => {
		test('should return 201', async () => {
			const url = `/mentoring/v1/rolePermissionMapping/create/{role_id}`
```

⚠️ Potential issue | 🔴 Critical

{role_id} path parameter is never substituted — requests hit a non-existent route.

The URL /mentoring/v1/rolePermissionMapping/create/{role_id} is sent verbatim. The literal {role_id} is not replaced with an actual numeric ID, so every request will target a meaningless path and likely return a 404, making the expect(res.status).toBe(201) assertion always fail.

Proposed fix

Replace the template placeholder with a real (or test-fixture) role ID:

```diff
-			const url = `/mentoring/v1/rolePermissionMapping/create/{role_id}`
+			const roleId = process.env.TEST_ROLE_ID || 1
+			const url = `/mentoring/v1/rolePermissionMapping/create/${roleId}`
```

Apply the same pattern to all four occurrences (lines 8, 24, 35, 51).

Also applies to: 24-24, 35-35, 51-51

🤖 Prompt for AI Agents
In
`@src/integration-tests-new/rolePermissionMapping/rolePermissionMapping.spec.js`
at line 8, The test builds a URL with a literal "{role_id}" so requests hit a
non-existent route; replace the placeholder with a real role ID variable (e.g.,
use a test fixture or created roleId) wherever the URL is constructed (the const
url declarations in rolePermissionMapping.spec.js) so the path becomes
`/mentoring/v1/rolePermissionMapping/create/${roleId}` (apply the same change at
the four occurrences noted) and ensure the tests use that roleId when calling
the endpoint before asserting res.status === 201.

```javascript
		test('should return 201', async () => {
			const url = `/mentoring/v1/rolePermissionMapping/delete/{role_id}`
			let req = request(BASE).post(url)
			req = req.set('x-auth-token', TOKEN)
```

⚠️ Potential issue | 🔴 Critical

TOKEN is not defined — tests will throw ReferenceError at runtime.

TOKEN is used on lines 37 and 53 but is never declared or imported in this file. Compare with lines 10 and 26 where the auth header is set to the literal string 'string'. This will cause the delete-endpoint tests to crash before any assertion runs.

Proposed fix — use a consistent token source

Either define TOKEN at the top of the file (ideally from an env variable or a shared test helper), or use the same pattern as the create tests:

```diff
-			req = req.set('x-auth-token', TOKEN)
+			req = req.set('x-auth-token', process.env.AUTH_TOKEN || 'string')
```

Apply the same change on both lines 37 and 53.

Also applies to: 53-53

🤖 Prompt for AI Agents
In
`@src/integration-tests-new/rolePermissionMapping/rolePermissionMapping.spec.js`
at line 37, The tests use an undefined TOKEN constant which causes a
ReferenceError; update the two usages of req.set('x-auth-token', TOKEN) in
rolePermissionMapping.spec.js (both occurrences) to use a defined token source —
either import or define TOKEN at the top of the test file (preferably from a
shared test helper or process.env) or replace with the same literal used
elsewhere (e.g., 'string'); ensure both occurrences are changed so the test runs
without runtime errors.
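A minimal sketch of such a shared token source, assuming an `AUTH_TOKEN` environment variable and the `'string'` placeholder already used elsewhere in the suite (the `resolveToken` name is illustrative, not repo code):

```javascript
// Hypothetical shared helper: resolve the test auth token once, with a
// documented fallback, so every spec sets the header the same way.
function resolveToken(env = process.env) {
	const token = (env.AUTH_TOKEN || '').trim()
	return token !== '' ? token : 'string' // placeholder matches the create tests
}

// e.g. at the top of each spec file:
// const TOKEN = resolveToken()
```

Defining `TOKEN` this way in one helper (or at the top of the file) removes both the ReferenceError and the inconsistency between the create and delete tests.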

Comment on lines +286 to +360
```javascript
describe('Session Feedback Submission', () => {
	let feedbackSessionId

	beforeAll(async () => {
		let userDetails = await commonHelper.mentorLogIn()
		// Create a session that has passed to mark as complete
		const now = new Date()
		const startDate = new Date(now.getTime() - 2 * 60 * 60 * 1000) // 2 hours ago
		const startDateTimestamp = Math.floor(startDate.getTime() / 1000)
		const endDate = new Date(startDate.getTime() + 60 * 60 * 1000) // 1 hour duration
		const endDateTimestamp = Math.floor(endDate.getTime() / 1000)

		const createRes = await request(BASE)
			.post('/mentoring/v1/sessions/update')
			.set('x-auth-token', userDetails.token)
			.send({
				title: 'Session for Feedback Test',
				description: 'This session is created to test the feedback endpoint.',
				type: 'PUBLIC',
				start_date: startDateTimestamp,
				end_date: endDateTimestamp,
				time_zone: 'Asia/Calcutta',
				mentor_id: userDetails.userId.toString(),
			})

		expect(createRes.status).toBe(201)
		feedbackSessionId = createRes.body.result.id

		// Enroll mentee in the session
		const enrollUrl = `/mentoring/v1/sessions/enroll/${feedbackSessionId}`
		const enrollRes = await request(BASE)
			.post(enrollUrl)
			.set('x-auth-token', menteeDetails.token)
			.set('timezone', 'Asia/Calcutta')
		expect(enrollRes.status).toBe(201)

		// Start the session
		const startUrl = `/mentoring/v1/sessions/start/${feedbackSessionId}`
		const startRes = await request(BASE).get(startUrl).set('x-auth-token', userDetails.token)
		expect(startRes.status).toBe(200)

		// Complete the session
		const completeUrl = `/mentoring/v1/sessions/completed/${feedbackSessionId}`
		const completeRes = await request(BASE).patch(completeUrl).set('x-auth-token', userDetails.token)
		expect(completeRes.status).toBe(200)
	})

	test('POST /mentoring/v1/feedback/submit/{sessionId} - mentee feedback should return 200', async () => {
		const url = `/mentoring/v1/feedback/submit/${feedbackSessionId}`
		const menteeFeedbackPayload = {
			feedbacks: [
				{ question_id: 3, value: 3, label: 'How would you rate the host of the session?' },
				{ question_id: 4, value: 4, label: 'How would you rate the engagement in the session?' },
			],
			feedback_as: 'mentee',
		}
		const res = await request(BASE)
			.post(url)
			.set('x-auth-token', menteeDetails.token)
			.send(menteeFeedbackPayload)
		expect(res.status).toBe(200)
	})

	test('POST /mentoring/v1/feedback/submit/{sessionId} - mentor feedback should return 200', async () => {
		const url = `/mentoring/v1/feedback/submit/${feedbackSessionId}`
		const mentorFeedbackPayload = {
			feedbacks: [
				{ question_id: 1, value: 4, label: 'How would you rate the engagement in the session?' },
				{ question_id: 2, value: 3, label: 'How would you rate the Audio/Video quality?' },
			],
			feedback_as: 'mentor',
		}
		const res = await request(BASE).post(url).set('x-auth-token', userDetails.token).send(mentorFeedbackPayload)
		expect(res.status).toBe(200)
	})
```

⚠️ Potential issue | 🔴 Critical

Variable shadowing causes mentor feedback to use the wrong user's token.

At line 290, let userDetails = await commonHelper.mentorLogIn() shadows the outer userDetails (line 6), but this local variable is scoped to the beforeAll callback only. The mentor feedback test at line 358 uses userDetails.token, which resolves to the outer userDetails — a different mentor than the one who created and completed the session. This will likely cause the feedback submission to fail (wrong user context).

🐛 Proposed fix — use a describe-scoped variable
```diff
 	describe('Session Feedback Submission', () => {
 		let feedbackSessionId
+		let feedbackMentorDetails
 
 		beforeAll(async () => {
-			let userDetails = await commonHelper.mentorLogIn()
+			feedbackMentorDetails = await commonHelper.mentorLogIn()
 			// Create a session that has passed to mark as complete
 			const now = new Date()
 			...
 			const createRes = await request(BASE)
 				.post('/mentoring/v1/sessions/update')
-				.set('x-auth-token', userDetails.token)
+				.set('x-auth-token', feedbackMentorDetails.token)
 				.send({
 					...
-					mentor_id: userDetails.userId.toString(),
+					mentor_id: feedbackMentorDetails.userId.toString(),
 				})
 			...
-			const startRes = await request(BASE).get(startUrl).set('x-auth-token', userDetails.token)
+			const startRes = await request(BASE).get(startUrl).set('x-auth-token', feedbackMentorDetails.token)
 			...
-			const completeRes = await request(BASE).patch(completeUrl).set('x-auth-token', userDetails.token)
+			const completeRes = await request(BASE).patch(completeUrl).set('x-auth-token', feedbackMentorDetails.token)
 		})
 		...
 		test('POST ... - mentor feedback should return 200', async () => {
 			...
-			const res = await request(BASE).post(url).set('x-auth-token', userDetails.token).send(mentorFeedbackPayload)
+			const res = await request(BASE).post(url).set('x-auth-token', feedbackMentorDetails.token).send(mentorFeedbackPayload)
 		})
```
🤖 Prompt for AI Agents
In `@src/integration-tests-new/sessions/sessions.specs.js` around lines 286 - 360,
The bug is variable shadowing: the inner let userDetails in the beforeAll
callback (created via commonHelper.mentorLogIn()) hides the outer
describe-scoped userDetails so tests like the mentor feedback test use the wrong
token; fix by removing the inner declaration and assign the logged-in mentor to
the describe-scoped variable (i.e., replace "let userDetails = await
commonHelper.mentorLogIn()" inside beforeAll with "userDetails = await
commonHelper.mentorLogIn()"), ensuring the same userDetails.token is used in
start/completed and the mentor feedback POST.
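The shadowing bug can be reproduced in isolation. The sketch below is illustrative code (not the spec itself); variable names mirror the spec, and the two setup functions contrast the buggy inner `let` with the intended assignment:

```javascript
// Minimal reproduction: the inner `let` creates a new binding scoped to the
// function, so the describe-scoped variable is never assigned.
let userDetails // describe-scoped in the real spec

function buggySetup() {
	let userDetails = { token: 'mentor-token' } // shadows the outer binding
	return userDetails.token
}

function fixedSetup() {
	userDetails = { token: 'mentor-token' } // assigns the outer binding
}

buggySetup()
const afterBuggy = userDetails // still undefined: outer binding untouched
fixedSetup()
const afterFixed = userDetails.token // outer binding now holds the token
```

Either the proposed `feedbackMentorDetails` variable or simply dropping the inner `let` resolves it; what matters is that the mentor feedback test reads the same binding the setup wrote.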

Comment on lines +1 to +31
```javascript
const request = require('supertest')
const BASE = process.env.BASE_URL || 'http://localhost:3000'
const schemas = require('./schemas/users.schemas.json')

describe('users endpoints generated from api-doc.yaml', () => {
	describe('GET /mentoring/v1/users/pendingFeedbacks', () => {
		test('should return 200', async () => {
			const url = `/mentoring/v1/users/pendingFeedbacks`
			let req = request(BASE).get(url)
			req = req.set('x-auth-token', 'string')
			const res = await req
			expect(res.status).toBe(200)
			// validate response schema
			const schema = schemas['GET_mentoring_v1_users_pendingFeedbacks']
			expect(res.body).toMatchSchema(schema)
		})
	})

	describe('GET /mentoring/v1/users/list?type={userType}&page={page}&limit={limit}&search={search}', () => {
		test('should return 200', async () => {
			const url = `/mentoring/v1/users/list?type=mentor&page=1&limit=2&search=jhon`
			let req = request(BASE).get(url)
			req = req.set('x-auth-token', 'string')
			const res = await req
			expect(res.status).toBe(200)
			// validate response schema
			const schema = schemas['GET_mentoring_v1_users_list_type_userType_page_page_limit_limit_search_search']
			expect(res.body).toMatchSchema(schema)
		})
	})
})
```

⚠️ Potential issue | 🔴 Critical

Placeholder token 'string' — tests will fail without real authentication.

Same issue as report-queries.spec.js: no commonHelper login is performed, and the x-auth-token header is set to the literal 'string'. Both tests expecting HTTP 200 will fail. Additionally, unlike other test files, there are no unauthorized-access test cases.

Add authentication via beforeAll and consider adding 401/403 tests for consistency with the rest of the suite.

🤖 Prompt for AI Agents
In `@src/integration-tests-new/users/users.spec.js` around lines 1 - 31, The tests
use a literal placeholder token in the x-auth-token header and lack a login
step; add a beforeAll that performs authentication (reusing the project's
commonHelper login helper if available) to obtain a real token and store it in a
variable (e.g., authToken) and replace the hardcoded 'string' in both tests with
that authToken when calling request(BASE).set('x-auth-token', authToken). Also
add a couple of negative tests for unauthorized access (e.g., missing or invalid
token) to mirror other spec files and ensure the endpoints return 401/403 as
expected.

```diff
 		'@utils/(.*)': '<rootDir>/utils/$1',
 	},
-	testMatch: ['<rootDir>/integration-tests/**/*.spec.js'],
+	testMatch: ['<rootDir>/integration-tests-new/**/*.specs.js'],
```

⚠️ Potential issue | 🔴 Critical

testMatch pattern excludes several test files in this PR.

The glob *.specs.js (plural s) will not match files named *.spec.js (singular). At least three files in this PR use the singular form: admin.spec.js, config.spec.js, cloud-services.spec.js. Either standardize all filenames to .specs.js or broaden the pattern:

Option: broaden the glob
```diff
-	testMatch: ['<rootDir>/integration-tests-new/**/*.specs.js'],
+	testMatch: ['<rootDir>/integration-tests-new/**/*.spec?(s).js'],
```
📝 Committable suggestion


Suggested change
```diff
-	testMatch: ['<rootDir>/integration-tests-new/**/*.specs.js'],
+	testMatch: ['<rootDir>/integration-tests-new/**/*.spec.js', '<rootDir>/integration-tests-new/**/*.specs.js'],
```
🤖 Prompt for AI Agents
In `@src/integrationJest.config.js` at line 22, The current Jest config's
testMatch pattern uses '<rootDir>/integration-tests-new/**/*.specs.js' which
only matches files with the plural '.specs.js' and misses existing tests named
'*.spec.js'; update the testMatch entry (the testMatch key in
src/integrationJest.config.js) to broaden the glob so it matches both
'*.spec.js' and '*.specs.js' (for example by using a pattern like
'**/*.spec?(s).js' or an array with both globs) so the files admin.spec.js,
config.spec.js, cloud-services.spec.js, etc., are included in the test run.
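The matching behaviour can be sanity-checked without running Jest, assuming the extglob `*.spec?(s).js` is equivalent to the regex `/\.spec(s)?\.js$/` applied to the basename (file names below are taken from this PR plus one illustrative non-match):

```javascript
// Regex equivalent of the broadened extglob pattern `*.spec?(s).js`:
// `.spec.js` or `.specs.js` at the end of the filename.
const specPattern = /\.spec(s)?\.js$/

const files = ['admin.spec.js', 'config.spec.js', 'sessions.specs.js', 'helper.js']
const matched = files.filter((f) => specPattern.test(f))
// Both singular and plural forms match; plain .js files do not.
```

With the original `*.specs.js` glob, only `sessions.specs.js` would survive the filter, which is exactly how the singular-form files get silently skipped.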


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/database/seeders/20230822124704-add_entity_types_and_entities.js (1)

3-6: ⚠️ Potential issue | 🟠 Major

No guard for DEFAULT_ORGANISATION_CODE / DEFAULT_TENANT_CODE env vars.

defaultOrgId is validated at lines 3–6, but DEFAULT_ORGANIZATION_CODE and DEFAULT_TENANT_CODE are used without any check. If either is unset, the seeder will silently insert NULL/undefined values into organization_code and tenant_code columns.

Proposed fix — add validation alongside the existing guard
```diff
 		const defaultOrgId = queryInterface.sequelize.options.defaultOrgId
 		if (!defaultOrgId) {
 			throw new Error('Default org ID is undefined. Please make sure it is set in sequelize options.')
 		}
+		if (!process.env.DEFAULT_ORGANISATION_CODE) {
+			throw new Error('DEFAULT_ORGANISATION_CODE is undefined. Please make sure it is set in environment variables.')
+		}
+		if (!process.env.DEFAULT_TENANT_CODE) {
+			throw new Error('DEFAULT_TENANT_CODE is undefined. Please make sure it is set in environment variables.')
+		}
```

Also applies to: 123-124, 158-158

🤖 Fix all issues with AI agents
In `@src/database/seeders/20230822124704-add_entity_types_and_entities.js`:
- Around line 123-124: Replace the incorrect American-spelled environment
variable reference process.env.DEFAULT_ORGANIZATION_CODE used when setting
organization_code in the seeder with the codebase-standard British spelling
process.env.DEFAULT_ORGANISATION_CODE so the seeder inserts the actual
organization_code value (leave tenant_code unchanged); update the occurrence in
the file where organization_code is assigned to use DEFAULT_ORGANISATION_CODE to
match other uses like getDefaultOrgId.js and org-admin.js.
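A reusable guard in the same spirit as the existing `defaultOrgId` check could cover both variables at once; this is a hedged sketch, and the `requireEnv` name is an assumption rather than existing repo code:

```javascript
// Hypothetical guard: fail the seeder loudly instead of silently inserting
// NULL/undefined values when a required env var is missing.
function requireEnv(name, env = process.env) {
	const value = env[name]
	if (value === undefined || value === '') {
		throw new Error(`${name} is undefined. Please make sure it is set in environment variables.`)
	}
	return value
}

// e.g. at the top of the seeder's up():
// const organizationCode = requireEnv('DEFAULT_ORGANISATION_CODE')
// const tenantCode = requireEnv('DEFAULT_TENANT_CODE')
```

Using the returned values in the insert rows (instead of reading `process.env` inline) also makes the misspelling impossible to miss, since the guard throws at startup.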

Comment on lines +123 to +124
```javascript
				organization_code: process.env.DEFAULT_ORGANIZATION_CODE,
				tenant_code: process.env.DEFAULT_TENANT_CODE,
```

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Check which spelling of the env var is used across the codebase
echo "=== Occurrences of DEFAULT_ORGANIZATION_CODE (Z spelling) ==="
rg -n 'DEFAULT_ORGANIZATION_CODE' --type js || echo "No matches found"

echo ""
echo "=== Occurrences of DEFAULT_ORGANISATION_CODE (S spelling) ==="
rg -n 'DEFAULT_ORGANISATION_CODE' --type js || echo "No matches found"

echo ""
echo "=== Check .env and config files ==="
fd -e env -o -e example || echo "No .env files found"
echo ""
git ls-files | grep -E '\.(env|config)' | head -20

echo ""
echo "=== Search all files for both spellings ==="
rg 'DEFAULT_ORGANI[SZ]ATION_CODE' -i --type-list
echo "Z spelling across all files:"
rg 'DEFAULT_ORGANIZATION_CODE' --type-add 'text:!(node_modules|dist|build)' || echo "No matches"
echo ""
echo "S spelling across all files:"
rg 'DEFAULT_ORGANISATION_CODE' --type-add 'text:!(node_modules|dist|build)' || echo "No matches"
```

Repository: ELEVATE-Project/mentoring

Length of output: 9063


🏁 Script executed:

```shell
# Check envVariables.js around both lines
sed -n '200,220p' src/envVariables.js
```

Repository: ELEVATE-Project/mentoring

Length of output: 594


Fix environment variable spelling to match codebase standard: DEFAULT_ORGANISATION_CODE (British spelling with S).

Line 123 references process.env.DEFAULT_ORGANIZATION_CODE (American spelling with Z), but the entire codebase consistently uses DEFAULT_ORGANISATION_CODE (British spelling with S). Both spellings are defined in src/envVariables.js, but the S spelling is used throughout the actual code (40+ occurrences in org-admin.js, cacheHelper.js, migrations, getDefaultOrgId.js, and all documentation). Using the wrong spelling will resolve to undefined, inserting NULL for organization_code.

Proposed fix
```diff
-				organization_code: process.env.DEFAULT_ORGANIZATION_CODE,
+				organization_code: process.env.DEFAULT_ORGANISATION_CODE,
```
📝 Committable suggestion


Suggested change
```diff
-				organization_code: process.env.DEFAULT_ORGANIZATION_CODE,
+				organization_code: process.env.DEFAULT_ORGANISATION_CODE,
 				tenant_code: process.env.DEFAULT_TENANT_CODE,
```
🤖 Prompt for AI Agents
In `@src/database/seeders/20230822124704-add_entity_types_and_entities.js` around
lines 123 - 124, Replace the incorrect American-spelled environment variable
reference process.env.DEFAULT_ORGANIZATION_CODE used when setting
organization_code in the seeder with the codebase-standard British spelling
process.env.DEFAULT_ORGANISATION_CODE so the seeder inserts the actual
organization_code value (leave tenant_code unchanged); update the occurrence in
the file where organization_code is assigned to use DEFAULT_ORGANISATION_CODE to
match other uses like getDefaultOrgId.js and org-admin.js.
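A minimal illustration of why the typo fails silently rather than loudly, assuming (as in this codebase) the deployment sets only the S spelling:

```javascript
// Env lookups are plain property access: a misspelled name quietly yields
// undefined, which Sequelize would then insert as NULL.
const env = { DEFAULT_ORGANISATION_CODE: 'default_code' } // S spelling, as deployed

const right = env.DEFAULT_ORGANISATION_CODE // the value actually set
const wrong = env.DEFAULT_ORGANIZATION_CODE // Z spelling: undefined, no error
```

Because nothing throws, the bug only surfaces later as NULL rows in `organization_code`, which is why the guard-style fix in the earlier comment is worth pairing with the spelling correction.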
