# Testing gaps in `core` (unit tests only, no infra/adapters)

## Scope / rules (agreed)

* **In scope:** code under [`core/`](core:1) only.
* **Unit tests only:** tests should validate business rules and orchestration using **ports mocked in-test** (e.g., `vi.fn()`), not real persistence, HTTP, frameworks, or adapters.
* **Out of scope:** any test that relies on real IO, real repositories, or infrastructure code (including [`core/**/infrastructure/`](core/rating/infrastructure:1)).

## How gaps were identified

1. An inventory of application and domain units was built from the file structure under [`core/`](core:1).
2. Existing tests were located via `describe(` occurrences in `*.test.ts` files and mapped to the corresponding production units.
3. Gaps were prioritized by:
   * **Business criticality:** identity/security, payments/money flows.
   * **Complex branching / invariants:** state machines, decision tables.
   * **Time-dependent logic:** `Date.now()`, `new Date()`, time windows.
   * **Error handling paths:** repository errors, partial failures.

---

## Highest-priority testing gaps (P0)

### 1) `rating` module has **no unit tests**

Why high risk: scoring/rating is a cross-cutting “truth source”, and the current implementations contain test-driven hacks and inconsistent error handling.

Targets:

* [`core/rating/application/use-cases/CalculateRatingUseCase.ts`](core/rating/application/use-cases/CalculateRatingUseCase.ts:1)
* [`core/rating/application/use-cases/CalculateTeamContributionUseCase.ts`](core/rating/application/use-cases/CalculateTeamContributionUseCase.ts:1)
* [`core/rating/application/use-cases/GetRatingLeaderboardUseCase.ts`](core/rating/application/use-cases/GetRatingLeaderboardUseCase.ts:1)
* [`core/rating/application/use-cases/SaveRatingUseCase.ts`](core/rating/application/use-cases/SaveRatingUseCase.ts:1)
* [`core/rating/domain/Rating.ts`](core/rating/domain/Rating.ts:1)

Proposed unit tests (Given/When/Then):
1. **CalculateRatingUseCase: driver missing**
   * Given `driverRepository.findById` returns `null`
   * When executing with `{ driverId, raceId }`
   * Then returns `Result.err` with message `Driver not found` and does not call `ratingRepository.save`.
2. **CalculateRatingUseCase: race missing**
   * Given the driver exists and `raceRepository.findById` returns `null`
   * When execute
   * Then returns `Result.err` with message `Race not found`.
3. **CalculateRatingUseCase: no results**
   * Given driver & race exist and `resultRepository.findByRaceId` returns `[]`
   * When execute
   * Then returns `Result.err` with message `No results found for race`.
4. **CalculateRatingUseCase: driver not present in results**
   * Given a results array without a matching `driverId`
   * When execute
   * Then returns `Result.err` with message `Driver not found in race results`.
5. **CalculateRatingUseCase: publishes event after save**
   * Given all repositories return happy-path objects
   * When execute
   * Then `ratingRepository.save` is called once, before `eventPublisher.publish`.
6. **CalculateRatingUseCase: component boundaries**
   * Given a result with `incidents = 0`
   * When execute
   * Then `components.cleanDriving === 100`.
   * Given `incidents >= 5`
   * Then `components.cleanDriving === 20`.
7. **CalculateRatingUseCase: time-dependent output**
   * Given frozen time (use `vi.setSystemTime`)
   * When execute
   * Then the emitted rating has a deterministic `timestamp`.
8. **CalculateTeamContributionUseCase: creates rating when missing**
   * Given `ratingRepository.findByDriverAndRace` returns `null`
   * When execute
   * Then `ratingRepository.save` is called with a rating whose `components.teamContribution` matches the calculation.
9. **CalculateTeamContributionUseCase: updates existing rating**
   * Given an existing rating with components set
   * When execute
   * Then only `components.teamContribution` is changed and other fields are preserved.
10. **GetRatingLeaderboardUseCase: pagination + sorting**
    * Given multiple drivers and multiple ratings per driver
    * When execute with `{ limit, offset }`
    * Then returns the latest rating per driver, sorted desc, sliced by pagination.
11. **SaveRatingUseCase: repository error wraps correctly**
    * Given `ratingRepository.save` throws
    * When execute
    * Then throws a `Failed to save rating:` prefixed error.

Ports to mock: `driverRepository`, `raceRepository`, `resultRepository`, `ratingRepository`, `eventPublisher`.

---

### 2) `dashboard` orchestration has no unit tests

Target:

* [`core/dashboard/application/use-cases/GetDashboardUseCase.ts`](core/dashboard/application/use-cases/GetDashboardUseCase.ts:1)

Why high risk: timeouts, parallelization, filtering/sorting, and “log but don’t fail” event publishing.

Proposed unit tests (Given/When/Then):

1. **Validation of driverId**
   * Given `driverId` is `''` or whitespace
   * When execute
   * Then throws [`ValidationError`](core/shared/errors/ValidationError.ts:1) (or the module’s equivalent) and does not hit repositories.
2. **Driver not found**
   * Given `driverRepository.findDriverById` returns `null`
   * When execute
   * Then throws [`DriverNotFoundError`](core/dashboard/domain/errors/DriverNotFoundError.ts:1).
3. **Filters invalid races**
   * Given `getUpcomingRaces` returns races missing `trackName` or with a past `scheduledDate`
   * When execute
   * Then `upcomingRaces` in the DTO excludes them.
4. **Limits upcoming races to 3 and sorts by date ascending**
   * Given 5 valid upcoming races out of order
   * When execute
   * Then the DTO contains only the 3 earliest.
5. **Activity is sorted newest-first**
   * Given activities with different timestamps
   * When execute
   * Then the DTO is sorted desc by timestamp.
6. **Repository failures are logged and rethrown**
   * Given one of the repositories rejects
   * When execute
   * Then `logger.error` is called and the error is rethrown.
7. **Event publishing failure is swallowed**
   * Given `eventPublisher.publishDashboardAccessed` throws
   * When execute
   * Then the use case still returns the DTO and `logger.error` was called.
8. **Timeout behavior** (if retained)
   * Given `raceRepository.getUpcomingRaces` never resolves
   * When using fake timers and advancing by TIMEOUT
   * Then `upcomingRaces` becomes `[]` and the use case completes.

Ports to mock: all repositories, the publisher, and [`Logger`](core/shared/domain/Logger.ts:1).

---

### 3) `leagues` module has multiple untested use-cases (time-dependent logic)

Targets likely missing tests:

* [`core/leagues/application/use-cases/JoinLeagueUseCase.ts`](core/leagues/application/use-cases/JoinLeagueUseCase.ts:1)
* [`core/leagues/application/use-cases/LeaveLeagueUseCase.ts`](core/leagues/application/use-cases/LeaveLeagueUseCase.ts:1)
* [`core/leagues/application/use-cases/ApproveMembershipRequestUseCase.ts`](core/leagues/application/use-cases/ApproveMembershipRequestUseCase.ts:1)
* plus others without `*.test.ts` siblings in [`core/leagues/application/use-cases/`](core/leagues/application/use-cases:1)

Proposed unit tests (Given/When/Then):

1. **JoinLeagueUseCase: league missing**
   * Given `leagueRepository.findById` returns `null`
   * When execute
   * Then throws `League not found`.
2. **JoinLeagueUseCase: driver missing**
   * Given the league exists and `driverRepository.findDriverById` returns `null`
   * Then throws `Driver not found`.
3. **JoinLeagueUseCase: approvalRequired path uses pending requests**
   * Given `league.approvalRequired === true`
   * When execute
   * Then `leagueRepository.addPendingRequests` is called with a request containing frozen `Date.now()` and `new Date()` values.
4. **JoinLeagueUseCase: no-approval path adds member**
   * Given `approvalRequired === false`
   * Then `leagueRepository.addLeagueMembers` is called with role `member`.
5. **ApproveMembershipRequestUseCase: request not found**
   * Given a pending-requests list without `requestId`
   * Then throws `Request not found`.
6. **ApproveMembershipRequestUseCase: happy path adds member then removes request**
   * Given the request exists
   * Then `addLeagueMembers` is called before `removePendingRequest`.
7. **LeaveLeagueUseCase: delegates to repository**
   * Given a repository mock
   * Then `removeLeagueMember` is called once with the inputs.

Note: these use cases currently ignore the injected `eventPublisher` in several places; tests should either (a) assert event publication and drive the implementation to match, or (b) motivate removing the unused port.

---

## Medium-priority gaps (P1)

### 4) “Contract tests” that don’t test behavior (replace or move)

These tests validate TypeScript shapes and the existence of mocked methods, but do not protect business behavior:

* [`core/ports/media/MediaResolverPort.test.ts`](core/ports/media/MediaResolverPort.test.ts:1)
* [`core/ports/media/MediaResolverPort.comprehensive.test.ts`](core/ports/media/MediaResolverPort.comprehensive.test.ts:1)
* [`core/notifications/domain/repositories/NotificationRepository.test.ts`](core/notifications/domain/repositories/NotificationRepository.test.ts:1)
* [`core/notifications/application/ports/NotificationService.test.ts`](core/notifications/application/ports/NotificationService.test.ts:1)

Recommended action:

* Either delete these (if they add noise), or replace them with **behavior tests of the code that consumes the port**.
* If explicit “contract tests” are wanted, keep them in a dedicated layer and ensure they test the *adapter implementation* (but that would violate the current constraint, so keep them out of this scope).

### 5) Racing and Notifications include “imports-only” tests

Several tests are effectively “module loads” checks with no business assertions.
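A hedged sketch of what an invariant-focused replacement could look like, using plain TypeScript with a hypothetical, simplified `Notification` shape (the real entity in `core/notifications/domain/entities/` will differ; in the real suite these checks become vitest `expect(() => ...).toThrow(...)` assertions):

```typescript
// Hypothetical, simplified entity shape — illustration only, not the real
// Notification entity from core/notifications/domain/entities/.
type NotificationStatus = "pending" | "sent" | "read";

class DomainError extends Error {
  constructor(public readonly code: string, message: string) {
    super(message);
  }
}

interface NotificationProps {
  id: string;
  recipientId: string;
  status: NotificationStatus;
}

// Invariant: status may only move pending -> sent -> read.
const TRANSITIONS: Record<NotificationStatus, NotificationStatus[]> = {
  pending: ["sent"],
  sent: ["read"],
  read: [],
};

// Invariant: ids must be non-empty.
function createNotification(props: NotificationProps): NotificationProps {
  if (props.id.trim() === "" || props.recipientId.trim() === "") {
    throw new DomainError("EMPTY_ID", "Notification ids must be non-empty");
  }
  return { ...props };
}

function transition(
  n: NotificationProps,
  next: NotificationStatus,
): NotificationProps {
  if (!TRANSITIONS[n.status].includes(next)) {
    throw new DomainError(
      "INVALID_TRANSITION",
      `${n.status} -> ${next} is not allowed`,
    );
  }
  return { ...n, status: next };
}
```

Unlike an imports-only test, each assertion here pins a specific error code to a specific invalid input, so a regression in the entity's rules fails the test rather than just a type check.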
Example patterns show up in:

* [`core/notifications/domain/entities/Notification.test.ts`](core/notifications/domain/entities/Notification.test.ts:1)
* [`core/notifications/domain/entities/NotificationPreference.test.ts`](core/notifications/domain/entities/NotificationPreference.test.ts:1)
* many files under [`core/racing/domain/entities/`](core/racing/domain/entities:1)

Replace these with invariant-focused tests:

* Given invalid props (empty IDs, invalid status transitions)
* When creating or transitioning state
* Then throws a domain error (or returns `Result.err`) with a specific code/kind.

### 6) Racing use-cases with no tests (spot list)

From a quick scan of [`core/racing/application/use-cases/`](core/racing/application/use-cases:1), some `.ts` files appear to have no matching `.test.ts` siblings:

* [`core/racing/application/use-cases/GetAllLeaguesWithCapacityUseCase.ts`](core/racing/application/use-cases/GetAllLeaguesWithCapacityUseCase.ts:1)
* [`core/racing/application/use-cases/GetRaceProtestsUseCase.ts`](core/racing/application/use-cases/GetRaceProtestsUseCase.ts:1)
* [`core/racing/application/use-cases/GetRaceRegistrationsUseCase.ts`](core/racing/application/use-cases/GetRaceRegistrationsUseCase.ts:1) (appears tested; confirm)
* [`core/racing/application/use-cases/GetSponsorsUseCase.ts`](core/racing/application/use-cases/GetSponsorsUseCase.ts:1) (no test file listed)
* [`core/racing/application/use-cases/GetLeagueAdminUseCase.ts`](core/racing/application/use-cases/GetLeagueAdminUseCase.ts:1)
* [`core/racing/application/use-cases/UnpublishLeagueSeasonScheduleUseCase.ts`](core/racing/application/use-cases/UnpublishLeagueSeasonScheduleUseCase.ts:1)
* [`core/racing/application/use-cases/SubmitProtestDefenseUseCase.test.ts`](core/racing/application/use-cases/SubmitProtestDefenseUseCase.test.ts:1) exists; confirm its content quality

Suggested scenarios depend on each use case’s branching, but the common minimum is:

* repository error → `Result.err` with a code
* happy path → updates the correct aggregates and publishes a domain event if applicable
* permission/invariant violations → domain error codes

---

## Lower-priority gaps (P2)

### 7) Coverage consistency and determinism

Patterns to standardize across modules:

* Tests that touch time should freeze it (`vi.setSystemTime`) rather than relying on a live `Date.now()`.
* Use cases should return `Result` consistently (some throw, some return `Result`). Tests should expose this inconsistency and drive convergence.

---

## Proposed execution plan (next step: implement tests)

1. Add the missing unit tests for the `rating` use-cases and `rating/domain/Rating`.
2. Add unit tests for `GetDashboardUseCase`, focusing on filtering/sorting, timeout, and publish-failure behavior.
3. Add unit tests for the `leagues` membership flow (`JoinLeagueUseCase`, `ApproveMembershipRequestUseCase`, `LeaveLeagueUseCase`).
4. Replace “imports-only” tests with invariant tests in the `notifications` entities, starting with the most-used aggregates.
5. Audit the remaining racing use-cases without tests and add the top 5 by branching complexity and business impact.