gridpilot.gg/plans/testing-gaps-core.md
2026-01-24 19:19:16 +01:00


Testing gaps in core (unit tests only, no infra/adapters)

Scope / rules (agreed)

  • In scope: code under core/ only.
  • Unit tests only: tests should validate business rules and orchestration using ports mocked in-test (e.g., vi.fn()), not real persistence, HTTP, frameworks, or adapters.
  • Out of scope: any test that relies on real IO, real repositories, or infrastructure code (including core/**/infrastructure/).

How gaps were identified

  1. Inventory of application and domain units was built from file structure under core/.
  2. Existing tests were located via describe( occurrences in *.test.ts and mapped to corresponding production units.
  3. Gaps were prioritized by:
    • Business criticality: identity/security, payments/money flows.
    • Complex branching / invariants: state machines, decision tables.
    • Time-dependent logic: Date.now(), new Date(), time windows.
    • Error handling paths: repository errors, partial failures.

Highest-priority testing gaps (P0)

1) rating module has no unit tests

Why high risk: scoring/rating is a cross-cutting “truth source”, and current implementations contain test-driven hacks and inconsistent error handling.

Targets:

Proposed unit tests (Given/When/Then):

  1. CalculateRatingUseCase: driver missing
    • Given driverRepository.findById returns null
    • When executing with { driverId, raceId }
    • Then returns Result.err with message Driver not found and does not call ratingRepository.save.
  2. CalculateRatingUseCase: race missing
    • Given driver exists, raceRepository.findById returns null
    • When execute
    • Then returns Result.err with message Race not found.
  3. CalculateRatingUseCase: no results
    • Given driver & race exist, resultRepository.findByRaceId returns []
    • When execute
    • Then returns Result.err with message No results found for race.
  4. CalculateRatingUseCase: driver not present in results
    • Given results array without matching driverId
    • When execute
    • Then returns Result.err with message Driver not found in race results.
  5. CalculateRatingUseCase: publishes event after save
    • Given all repositories return happy-path objects
    • When execute
    • Then ratingRepository.save is called once before eventPublisher.publish.
  6. CalculateRatingUseCase: component boundaries
    • Given a result with incidents = 0
    • When execute
    • Then components.cleanDriving === 100.
    • Given incidents >= 5
    • Then components.cleanDriving === 20.
  7. CalculateRatingUseCase: time-dependent output
    • Given frozen time (use vi.setSystemTime)
    • When execute
    • Then emitted rating has deterministic timestamp.
  8. CalculateTeamContributionUseCase: creates rating when missing
    • Given ratingRepository.findByDriverAndRace returns null
    • When execute
    • Then ratingRepository.save is called with a rating whose components.teamContribution matches calculation.
  9. CalculateTeamContributionUseCase: updates existing rating
    • Given existing rating with components set
    • When execute
    • Then only components.teamContribution is changed and other fields preserved.
  10. GetRatingLeaderboardUseCase: pagination + sorting
    • Given multiple drivers and multiple ratings per driver
    • When execute with { limit, offset }
    • Then returns latest per driver, sorted desc, sliced by pagination.
  11. SaveRatingUseCase: repository error wraps correctly
    • Given ratingRepository.save throws
    • When execute
    • Then throws an error prefixed with Failed to save rating:.

Ports to mock: driverRepository, raceRepository, resultRepository, ratingRepository, eventPublisher.


2) dashboard orchestration has no unit tests

Target:

Why high risk: timeouts, parallelization, filtering/sorting, and “log but don't fail” event publishing.

Proposed unit tests (Given/When/Then):

  1. Validation of driverId
    • Given driverId is '' or whitespace
    • When execute
    • Then throws ValidationError (or the module's equivalent) and does not hit repositories.
  2. Driver not found
    • Given driverRepository.findDriverById returns null
    • When execute
    • Then throws DriverNotFoundError.
  3. Filters invalid races
    • Given getUpcomingRaces returns races missing trackName or with past scheduledDate
    • When execute
    • Then upcomingRaces in DTO excludes them.
  4. Limits upcoming races to 3 and sorts by date ascending
    • Given 5 valid upcoming races out of order
    • When execute
    • Then DTO contains only 3 earliest.
  5. Activity is sorted newest-first
    • Given activities with different timestamps
    • When execute
    • Then DTO is sorted desc by timestamp.
  6. Repository failures are logged and rethrown
    • Given one of the repositories rejects
    • When execute
    • Then logger.error called and error is rethrown.
  7. Event publishing failure is swallowed
    • Given eventPublisher.publishDashboardAccessed throws
    • When execute
    • Then use case still returns DTO and logger.error was called.
  8. Timeout behavior (if retained)
    • Given raceRepository.getUpcomingRaces never resolves
    • When using fake timers and advancing by TIMEOUT
    • Then upcomingRaces becomes [] and use case completes.

Ports to mock: all repositories, publisher, and Logger.
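
Scenario 7 (publish failure swallowed) can be sketched like this. The use-case stand-in, port names, and DTO shape are assumptions; a real test would exercise GetDashboardUseCase with vi.fn() mocks, and the timeout scenario would additionally need vi.useFakeTimers(). Ports are synchronous for brevity.

```typescript
type Logger = { error: (msg: string, err?: unknown) => void };

// Hypothetical stand-in illustrating the "log but don't fail" publish path.
function makeGetDashboard(deps: {
  findDriverById: (id: string) => { id: string } | null;
  publishDashboardAccessed: (driverId: string) => void;
  logger: Logger;
}) {
  return {
    execute(driverId: string) {
      const driver = deps.findDriverById(driverId);
      if (driver === null) throw new Error('Driver not found');
      const dto = { driverId, upcomingRaces: [], recentActivity: [] };
      try {
        deps.publishDashboardAccessed(driverId);
      } catch (err) {
        // swallowed: the DTO is still returned to the caller
        deps.logger.error('failed to publish DashboardAccessed', err);
      }
      return dto;
    },
  };
}

// Given: the publisher throws and the logger records its calls
const loggedErrors: string[] = [];
const useCase = makeGetDashboard({
  findDriverById: (id) => ({ id }),
  publishDashboardAccessed: () => { throw new Error('broker down'); },
  logger: { error: (msg) => { loggedErrors.push(msg); } },
});
// When
const dto = useCase.execute('d1');
// Then: DTO returned, error logged exactly once
```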


3) leagues module has multiple untested use-cases (time-dependent logic)

Targets likely missing tests:

Proposed unit tests (Given/When/Then):

  1. JoinLeagueUseCase: league missing
    • Given leagueRepository.findById returns null
    • When execute
    • Then throws League not found.
  2. JoinLeagueUseCase: driver missing
    • Given league exists, driverRepository.findDriverById returns null
    • Then throws Driver not found.
  3. JoinLeagueUseCase: approvalRequired path uses pending requests
    • Given league.approvalRequired === true
    • When execute
    • Then leagueRepository.addPendingRequests called with a request containing frozen Date.now() and new Date().
  4. JoinLeagueUseCase: no-approval path adds member
    • Given approvalRequired === false
    • Then leagueRepository.addLeagueMembers called with role member.
  5. ApproveMembershipRequestUseCase: request not found
    • Given pending requests list without requestId
    • Then throws Request not found.
  6. ApproveMembershipRequestUseCase: happy path adds member then removes request
    • Given request exists
    • Then addLeagueMembers called before removePendingRequest.
  7. LeaveLeagueUseCase: delegates to repository
    • Given repository mock
    • Then removeLeagueMember is called once with inputs.

Note: these use cases currently ignore the injected eventPublisher in several places; tests should either (a) enforce event publication (driving the implementation), or (b) remove the unused port.
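
Scenario 6 hinges on a call-order assertion, which can be sketched as below. The repository shape and the approve stand-in are hypothetical; with vitest, the same effect is achieved by having each vi.fn() mock implementation push into a shared order array.

```typescript
// Shared log capturing the order in which the two repository methods fire.
const callOrder: string[] = [];

// Hypothetical repository stub; names match the plan's port vocabulary.
const leagueRepository = {
  findPendingRequests: (_leagueId: string) => [{ requestId: 'req-1', driverId: 'd1' }],
  addLeagueMembers: (_leagueId: string, _driverId: string, _role: string) => {
    callOrder.push('addLeagueMembers');
  },
  removePendingRequest: (_leagueId: string, _requestId: string) => {
    callOrder.push('removePendingRequest');
  },
};

// Hypothetical stand-in for ApproveMembershipRequestUseCase.execute
function approve(leagueId: string, requestId: string) {
  const request = leagueRepository
    .findPendingRequests(leagueId)
    .find((r) => r.requestId === requestId);
  if (!request) throw new Error('Request not found');
  leagueRepository.addLeagueMembers(leagueId, request.driverId, 'member');
  leagueRepository.removePendingRequest(leagueId, requestId);
}

// When
approve('league-1', 'req-1');
// Then: the member is added before the pending request is removed
```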


Medium-priority gaps (P1)

4) “Contract tests” that don't test behavior (replace or move)

These tests validate TypeScript shapes and mocked method existence, but do not protect business behavior:

Recommended action:

  • Either delete these (if they add noise), or replace with behavior tests of the code that consumes the port.
  • If you want explicit “contract tests”, keep them in a dedicated layer and ensure they test the adapter implementation (but that would violate the current constraint, so keep them out of this scope).

5) Racing and Notifications include “imports-only” tests

Several tests are effectively “module loads” checks (no business assertions). Example patterns show up in:

Replace with invariant-focused tests:

  • Given invalid props (empty IDs, invalid status transitions)
  • When creating or transitioning state
  • Then throws domain error (or returns Result.err) with specific code/kind.
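
An invariant-focused replacement can be sketched like this. The Notification entity, its props, and the error codes are hypothetical; the pattern is what matters — invalid props must throw a coded domain error, instead of a test that merely proves the module loads.

```typescript
// Hypothetical domain error carrying a machine-readable code.
class DomainError extends Error {
  constructor(public readonly code: string, message: string) {
    super(message);
  }
}

// Hypothetical entity enforcing its invariants in a factory method.
class Notification {
  private constructor(readonly id: string, readonly recipientId: string) {}
  static create(props: { id: string; recipientId: string }): Notification {
    if (props.id.trim() === '') {
      throw new DomainError('NOTIFICATION_ID_EMPTY', 'id must be non-empty');
    }
    if (props.recipientId.trim() === '') {
      throw new DomainError('RECIPIENT_ID_EMPTY', 'recipientId must be non-empty');
    }
    return new Notification(props.id, props.recipientId);
  }
}

// Given invalid props / When creating / Then a coded domain error is thrown
let thrownCode: string | null = null;
try {
  Notification.create({ id: '  ', recipientId: 'driver-1' });
} catch (err) {
  if (err instanceof DomainError) thrownCode = err.code;
}
```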

6) Racing use-cases with no tests (spot list)

From a quick scan of core/racing/application/use-cases/, some .ts appear without matching .test.ts siblings:

Suggested scenarios depend on each use case's branching, but the common minimum is:

  • repository error → Result.err with code
  • happy path → updates correct aggregates + publishes domain event if applicable
  • permission/invariant violations → domain error codes

Lower-priority gaps (P2)

7) Coverage consistency and determinism

Patterns to standardize across modules:

  • Tests that touch time should freeze time (vi.setSystemTime) rather than relying on Date.now().
  • Use cases should return Result consistently (some throw, some return Result). Testing should expose this inconsistency and drive convergence.
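
Where freezing the system clock is awkward, a complementary pattern is injecting a clock port so the use case never reads ambient time. The Clock type and request-builder below are assumptions; in vitest you would instead call vi.setSystemTime(new Date('…')) and keep the production Date.now() calls.

```typescript
// Hypothetical clock port injected into time-dependent code.
type Clock = { now: () => Date };

// Hypothetical request builder illustrating deterministic timestamps.
function makeJoinRequest(clock: Clock) {
  return (leagueId: string, driverId: string) => ({
    leagueId,
    driverId,
    requestedAt: clock.now().toISOString(),
  });
}

// Given: a frozen clock
const frozen: Clock = { now: () => new Date('2026-01-24T18:00:00Z') };
// When
const request = makeJoinRequest(frozen)('league-1', 'driver-1');
// Then: the timestamp is identical on every run
```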

Proposed execution plan (next step: implement tests)

  1. Add missing unit tests for rating use-cases and rating/domain/Rating.
  2. Add unit tests for GetDashboardUseCase focusing on filtering/sorting, timeout, and publish failure behavior.
  3. Add unit tests for leagues membership flow (JoinLeagueUseCase, ApproveMembershipRequestUseCase, LeaveLeagueUseCase).
  4. Replace “imports-only” tests with invariant tests in notifications entities, starting with the most used aggregates.
  5. Audit remaining racing use-cases without tests and add the top 5 based on branching and business impact.