gridpilot.gg/docs/TESTS.md
2025-12-26 20:54:20 +01:00


Testing Strategy

Overview

GridPilot employs a comprehensive BDD (Behavior-Driven Development) testing strategy across three distinct layers: Unit, Integration, and End-to-End (E2E). Each layer validates different aspects of the system while maintaining a consistent Given/When/Then approach that emphasizes behavior over implementation.

This document provides practical guidance on testing philosophy, test organization, tooling, and execution patterns for GridPilot.


BDD Philosophy

Why BDD for GridPilot?

GridPilot manages complex business rules around league management, team registration, event scheduling, result processing, and standings calculation. These rules must be:

  • Understandable by non-technical stakeholders (league admins, race organizers)
  • Verifiable through automated tests that mirror real-world scenarios
  • Maintainable as business requirements evolve

BDD provides a shared vocabulary (Given/When/Then) that bridges the gap between domain experts and developers, ensuring tests document expected behavior rather than technical implementation details.

Given/When/Then Format

All tests—regardless of layer—follow this structure:

// Given: Establish initial state/context
// When: Perform the action being tested
// Then: Assert the expected outcome

Example (Unit Test):

describe('League Domain Entity', () => {
  it('should add a team when team limit not reached', () => {
    // Given
    const league = new League('Summer Series', { maxTeams: 10 });
    const team = new Team('Racing Legends');
    
    // When
    const result = league.addTeam(team);
    
    // Then
    expect(result.isSuccess()).toBe(true);
    expect(league.teams).toContain(team);
  });
});

This pattern applies equally to integration tests (with real database operations) and E2E tests (with full UI workflows).


Test Types & Organization

Unit Tests (/tests/unit)

Scope: Domain entities, value objects, and application use cases with mocked ports (repositories, external services).

Tooling: Vitest (fast, TypeScript-native, ESM support)

Execution: Parallel, target <1 second total runtime

Purpose:

  • Validate business logic in isolation
  • Ensure domain invariants hold (e.g., team limits, scoring rules)
  • Test use case orchestration with mocked dependencies

Examples from Architecture:

  1. Domain Entity Test:

    // League.addTeam() validation
    Given a League with maxTeams=10 and 9 current teams
    When addTeam() is called with a valid Team
    Then the team is added successfully
    
    Given a League with maxTeams=10 and 10 current teams
    When addTeam() is called
    Then a DomainError is returned with "Team limit reached"
    
  2. Use Case Test:

    // GenerateStandingsUseCase
    Given a League with 5 teams and completed races
    When execute() is called
    Then LeagueRepository.findById() is invoked
    And ScoringRule.calculatePoints() is called for each team
    And sorted standings are returned
    
  3. Scoring Rule Test:

    // ScoringRule.calculatePoints()
    Given an F1-style scoring rule (25-18-15-12-10-8-6-4-2-1)
    When calculatePoints(position=1) is called
    Then 25 points are returned
    
    Given the same rule
    When calculatePoints(position=11) is called
    Then 0 points are returned
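
The scoring behavior above can be sketched as a small value object. The class shape here is hypothetical; the real ScoringRule API may differ:

```typescript
// Hypothetical ScoringRule value object (illustrative, not the real class).
class ScoringRule {
  // pointsTable[i] is the score for finishing position i + 1.
  constructor(private readonly pointsTable: readonly number[]) {}

  static f1(): ScoringRule {
    return new ScoringRule([25, 18, 15, 12, 10, 8, 6, 4, 2, 1]);
  }

  calculatePoints(position: number): number {
    // Positions outside the table (e.g. 11th under F1 rules) score zero.
    return this.pointsTable[position - 1] ?? 0;
  }
}

const rule = ScoringRule.f1();
console.log(rule.calculatePoints(1));  // 25
console.log(rule.calculatePoints(11)); // 0
```

Because the rule is a pure value object, tests like these need no mocks and run in microseconds.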
    

Key Practices:

  • Mock only at architecture boundaries (ports like ILeagueRepository)
  • Never mock domain entities or value objects
  • Keep tests fast (<10ms per test)
  • Use in-memory test doubles for simple cases
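
A minimal in-memory test double for a repository port might look like this (the port and entity shapes are illustrative, not the real interfaces):

```typescript
// Illustrative shapes; the real ILeagueRepository port may differ.
interface LeagueRecord {
  id: string;
  name: string;
}

interface ILeagueRepository {
  save(league: LeagueRecord): Promise<void>;
  findById(id: string): Promise<LeagueRecord | null>;
}

// A test double with real store/retrieve behavior — no mocking framework,
// so tests exercise observable behavior rather than call counts.
class InMemoryLeagueRepository implements ILeagueRepository {
  private readonly store = new Map<string, LeagueRecord>();

  async save(league: LeagueRecord): Promise<void> {
    this.store.set(league.id, league);
  }

  async findById(id: string): Promise<LeagueRecord | null> {
    return this.store.get(id) ?? null;
  }
}
```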

Integration Tests (/tests/integration)

Scope: Repository implementations, infrastructure adapters (PostgreSQL, Redis, OAuth clients, result importers).

Tooling: Vitest + Testcontainers (spins up real PostgreSQL/Redis in Docker)

Execution: Sequential, ~10 seconds per suite

Purpose:

  • Validate that infrastructure adapters correctly implement port interfaces
  • Test database queries, migrations, and transaction handling
  • Ensure external API clients handle authentication and error scenarios

Examples from Architecture:

  1. Repository Test:

    // PostgresLeagueRepository
    Given a PostgreSQL container is running
    When save() is called with a League entity
    Then the league is persisted to the database
    And findById() returns the same league with correct attributes
    
  2. OAuth Client Test:

    // IRacingOAuthClient
    Given valid iRacing credentials
    When authenticate() is called
    Then an access token is returned
    And the token is cached in Redis for 1 hour
    
    Given expired credentials
    When authenticate() is called
    Then an AuthenticationError is thrown
    
  3. Result Importer Test:

    // EventResultImporter
    Given an Event exists in the database
    When importResults() is called with iRacing session data
    Then Driver entities are created/updated
    And EventResult entities are persisted with correct positions/times
    And the Event status is updated to 'COMPLETED'
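
The token-caching behavior in the OAuth example can be sketched with a small TTL cache. Redis plays this role in the real adapter; the class and names below are illustrative:

```typescript
// Illustrative in-memory TTL cache standing in for the Redis token cache.
class TokenCache {
  private readonly entries = new Map<string, { value: string; expiresAt: number }>();

  // Injecting the clock makes expiry testable without real waiting.
  constructor(private readonly now: () => number = Date.now) {}

  set(key: string, value: string, ttlMs: number): void {
    this.entries.set(key, { value, expiresAt: this.now() + ttlMs });
  }

  get(key: string): string | null {
    const entry = this.entries.get(key);
    if (!entry || entry.expiresAt <= this.now()) return null;
    return entry.value;
  }
}

let fakeNow = 0;
const cache = new TokenCache(() => fakeNow);
cache.set('iracing:token', 'abc123', 60 * 60 * 1000); // 1-hour TTL
fakeNow = 30 * 60 * 1000;      // 30 minutes later: token still valid
fakeNow = 2 * 60 * 60 * 1000;  // 2 hours later: token expired
```

An integration test asserts the same behavior against a real Redis container instead of an injected clock.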
    

Key Practices:

  • Use Testcontainers to spin up real databases (not mocks)
  • Clean database state between tests (truncate tables or use transactions)
  • Seed minimal test data via SQL fixtures
  • Test both success and failure paths (network errors, constraint violations)

End-to-End Tests (/tests/e2e)

Scope: Full user workflows spanning web-client → web-api → database.

Tooling: Playwright + Docker Compose (orchestrates all services)

Execution: ~2 minutes per scenario

Purpose:

  • Validate complete user journeys from UI interactions to database changes
  • Ensure services integrate correctly in a production-like environment
  • Catch regressions in multi-service workflows

Examples from Architecture:

  1. League Creation Workflow:

    Given an authenticated league admin
    When they navigate to "Create League"
    And fill in league name, scoring system, and team limit
    And submit the form
    Then the league appears in the admin dashboard
    And the database contains the new league record
    And the league is visible to other users
    
  2. Team Registration Workflow:

    Given a published league with 5/10 team slots filled
    When a team captain navigates to the league page
    And clicks "Join League"
    And fills in team name and roster
    And submits the form
    Then the team appears in the league's team list
    And the team count updates to 6/10
    And the captain receives a confirmation email
    
  3. Automated Result Import:

    Given a League with an upcoming Event
    And iRacing OAuth credentials are configured
    When the scheduled import job runs
    Then the job authenticates with iRacing
    And fetches session results for the Event
    And creates EventResult records in the database
    And updates the Event status to 'COMPLETED'
    And triggers standings recalculation
    
  4. Companion App Login Automation:

    Given a League Admin enables companion app login automation
    When the companion app is launched
    Then the app polls for a generated login token from web-api
    And auto-fills iRacing credentials from the admin's profile
    And logs into iRacing automatically
    And confirms successful login to web-api
    

Key Practices:

  • Use Playwright's Page Object pattern for reusable UI interactions
  • Test both happy paths and error scenarios (validation errors, network failures)
  • Clean database state between scenarios (via API or direct SQL)
  • Run E2E tests in CI before merging to main branch

Test Data Strategy

Fixtures & Seeding

Unit Tests:

  • Use in-memory domain objects (no database)
  • Factory functions for common test entities:
    function createTestLeague(overrides?: Partial<LeagueProps>): League {
      return new League('Test League', { maxTeams: 10, ...overrides });
    }
    

Integration Tests:

  • Use Testcontainers to spin up fresh PostgreSQL instances
  • Seed minimal test data via SQL scripts:
    -- tests/integration/fixtures/leagues.sql
    INSERT INTO leagues (id, name, max_teams) VALUES
      ('league-1', 'Test League', 10);
    
  • Clean state between tests (truncate tables or rollback transactions)

E2E Tests:

  • Pre-seed database via migrations before Docker Compose starts
  • Use API endpoints to create test data when possible (validates API behavior)
  • Database cleanup between scenarios:
    // tests/e2e/support/database.ts
    export async function cleanDatabase() {
      await sql`TRUNCATE TABLE event_results CASCADE`;
      await sql`TRUNCATE TABLE events CASCADE`;
      await sql`TRUNCATE TABLE teams CASCADE`;
      await sql`TRUNCATE TABLE leagues CASCADE`;
    }
    

Docker-Based Tests (Website ↔ API Wiring)

This repo uses Docker in two different ways:

  1. Website ↔ API smoke (wiring validation)

    • Orchestrated by docker-compose.test.yml at the repo root.
    • Runs:
      • Website in Docker (Next.js dev server)
      • An API mock server in Docker (Node HTTP server)
    • Goal: catch misconfigured hostnames/ports/env vars and CORS issues that only show up in Dockerized setups.
  2. Hosted-session automation E2E (fixture-driven automation)

    • Orchestrated by docker/docker-compose.e2e.yml (separate stack; documented later in this file).
    • Goal: validate Playwright-driven automation against HTML fixtures.

Website ↔ API smoke: how to run

Run the Docker smoke suite from the repo root.

What it does (in order):

  • Installs deps in a dedicated Docker volume (npm run docker:test:deps)
  • Starts the test stack (npm run docker:test:up)
  • Waits for readiness (npm run docker:test:wait)
  • Runs Playwright smoke tests (npm run smoke:website:docker)

Ports used:

  • Website: http://localhost:3100
  • API mock: http://localhost:3101

Key contract:

  • Website must resolve the API base URL via getWebsiteApiBaseUrl().
  • The website's HTTP client uses credentials: 'include', so the API must support CORS-with-credentials (implemented for the real API in bootstrap()).

“Mock vs Real” (Website & API)

  • The Website does not have a runtime flag like AUTOMATION_MODE.
  • “Mock vs real” for the Website comes down to which API base URL it uses:
    • Browser: NEXT_PUBLIC_API_BASE_URL
    • Server: API_BASE_URL (preferred in Docker) or NEXT_PUBLIC_API_BASE_URL fallback

In the Docker smoke stack, “mock API” means the Node HTTP server in docker-compose.test.yml. In Docker dev/prod, “real API” means the NestJS app started from bootstrap(), and “real vs in-memory” persistence is controlled by GRIDPILOT_API_PERSISTENCE in AppModule.
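
The resolution order described above can be sketched as follows. This is a simplified stand-in for getWebsiteApiBaseUrl(); the real implementation may differ:

```typescript
// Simplified stand-in for getWebsiteApiBaseUrl(); env var names match the doc.
interface ApiEnv {
  API_BASE_URL?: string;             // server-side, preferred in Docker
  NEXT_PUBLIC_API_BASE_URL?: string; // browser, and server-side fallback
}

function resolveApiBaseUrl(env: ApiEnv, isServer: boolean): string {
  if (isServer && env.API_BASE_URL) return env.API_BASE_URL;
  if (env.NEXT_PUBLIC_API_BASE_URL) return env.NEXT_PUBLIC_API_BASE_URL;
  throw new Error('No API base URL configured');
}
```

Splitting the two variables matters in Docker: the server can reach the API via an internal service hostname while the browser must use a host-mapped localhost port.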


BDD Scenario Examples

1. League Creation (Success + Failure)

Scenario: Admin creates a new league
  Given an authenticated admin user
  When they submit a league form with:
    | name           | Summer Series 2024  |
    | maxTeams       | 12                  |
    | scoringSystem  | F1                  |
  Then the league is created successfully
  And the admin is redirected to the league dashboard
  And the database contains the new league

Scenario: League creation fails with duplicate name
  Given a league named "Summer Series 2024" already exists
  When an admin submits a league form with name "Summer Series 2024"
  Then the form displays error "League name already exists"
  And no new league is created in the database

2. Team Registration (Success + Failure)

Scenario: Team registers for a league
  Given a published league with 5/10 team slots
  When a team captain submits registration with:
    | teamName  | Racing Legends     |
    | drivers   | Alice, Bob, Carol  |
  Then the team is added to the league
  And the team count updates to 6/10
  And the captain receives a confirmation email

Scenario: Registration fails when league is full
  Given a published league with 10/10 team slots
  When a team captain attempts to register
  Then the form displays error "League is full"
  And the team is not added to the league

3. Automated Result Import (Success + Failure)

Scenario: Import results from iRacing
  Given a League with an Event scheduled for today
  And iRacing OAuth credentials are configured
  When the scheduled import job runs
  Then the job authenticates with iRacing API
  And fetches session results for the Event
  And creates EventResult records for each driver
  And updates the Event status to 'COMPLETED'
  And triggers standings recalculation

Scenario: Import fails with invalid credentials
  Given an Event with expired iRacing credentials
  When the import job runs
  Then an AuthenticationError is logged
  And the Event status remains 'SCHEDULED'
  And an admin notification is sent

4. Parallel Scoring Calculation

Scenario: Calculate standings for multiple leagues concurrently
  Given 5 active leagues with completed events
  When the standings recalculation job runs
  Then each league's standings are calculated in parallel
  And the process completes in <5 seconds
  And all standings are persisted correctly
  And no race conditions occur (validated via database integrity checks)
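
Because leagues are independent, the recalculation can be sketched with Promise.all. The shapes below are illustrative; the real job would load leagues through repositories:

```typescript
// Illustrative parallel recalculation across independent leagues.
interface LeagueStandings {
  leagueId: string;
  standings: string[]; // ordered team names, illustrative payload
}

async function recalculateAll(
  leagueIds: string[],
  calculate: (leagueId: string) => Promise<string[]>,
): Promise<LeagueStandings[]> {
  // Each league's calculation is independent, so they run concurrently;
  // Promise.all preserves input order and rejects on the first failure.
  return Promise.all(
    leagueIds.map(async (leagueId) => ({
      leagueId,
      standings: await calculate(leagueId),
    })),
  );
}
```

Writes per league should still go through separate transactions so one failing league cannot corrupt another's standings.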

5. Companion App Login Automation

Scenario: Companion app logs into iRacing automatically
  Given a League Admin enables companion app login automation
  And provides their iRacing credentials
  When the companion app is launched
  Then the app polls web-api for a login token
  And retrieves the admin's encrypted credentials
  And auto-fills the iRacing login form
  And submits the login request
  And confirms successful login to web-api
  And caches the session token for 24 hours

Coverage Goals

Target Coverage Levels

  • Domain/Application Layers: >90% (critical business logic)
  • Infrastructure Layer: >80% (repository implementations, adapters)
  • Presentation Layer: Smoke tests (basic rendering, no exhaustive UI coverage)
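
Thresholds along these lines can be enforced in the Vitest config so CI fails when coverage regresses. This is a sketch; exact option names should be checked against the installed Vitest version:

```typescript
// vitest.config.ts (sketch) — fail the run when coverage drops below targets
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8',
      thresholds: {
        lines: 90,
        functions: 90,
        branches: 80,
        statements: 90,
      },
    },
  },
});
```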

Running Coverage Reports

# Unit + Integration coverage
npm run test:coverage

# View HTML report
open coverage/index.html

# E2E coverage (via Istanbul)
npm run test:e2e:coverage

What to Prioritize

  1. Domain Entities: Invariants, validation rules, state transitions
  2. Use Cases: Orchestration logic, error handling, port interactions
  3. Repositories: CRUD operations, query builders, transaction handling
  4. Adapters: External API clients, OAuth flows, result importers

What NOT to prioritize:

  • Trivial getters/setters
  • Framework boilerplate (Express route handlers)
  • UI styling (covered by visual regression tests if needed)

Continuous Testing

Watch Mode (Development)

# Auto-run unit tests on file changes
npm run test:watch

# Auto-run integration tests (slower, but useful for DB work)
npm run test:integration:watch

CI/CD Pipeline

graph LR
  A[Code Push] --> B[Unit Tests]
  B --> C[Integration Tests]
  C --> D[E2E Tests]
  D --> E[Deploy to Staging]

Execution Order:

  1. Unit Tests (parallel, <1 second) — fail fast on logic errors
  2. Integration Tests (sequential, ~10 seconds) — catch infrastructure issues
  3. E2E Tests (sequential, ~2 minutes) — validate full workflows
  4. Deploy — only if all tests pass

Parallelization:

  • Unit tests run in parallel (Vitest default)
  • Integration tests run sequentially (avoid database conflicts)
  • E2E tests run sequentially (UI interactions are stateful)
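
The split can be encoded in a per-suite Vitest config. fileParallelism controls whether test files run concurrently; treat this as a sketch and verify against the Vitest version in use:

```typescript
// vitest.integration.config.ts (sketch) — integration files run one at a time
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    include: ['tests/integration/**/*.test.ts'],
    fileParallelism: false, // avoid concurrent access to the shared database
    testTimeout: 30_000,    // Testcontainers startup can be slow
  },
});
```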

Testing Best Practices

1. Test Behavior, Not Implementation

Bad (overfitted to implementation):

it('should call repository.save() once', () => {
  const repo = mock<ILeagueRepository>();
  const useCase = new CreateLeagueUseCase(repo);
  useCase.execute({ name: 'Test' });
  expect(repo.save).toHaveBeenCalledTimes(1);
});

Good (tests observable behavior):

it('should persist the league to the repository', async () => {
  const repo = new InMemoryLeagueRepository();
  const useCase = new CreateLeagueUseCase(repo);
  
  const result = await useCase.execute({ name: 'Test' });
  
  expect(result.isSuccess()).toBe(true);
  const league = await repo.findById(result.value.id);
  expect(league?.name).toBe('Test');
});

2. Mock Only at Architecture Boundaries

Ports (interfaces) should be mocked in use case tests:

const mockRepo = mock<ILeagueRepository>({
  save: vi.fn().mockResolvedValue(undefined),
});

Domain entities should NEVER be mocked:

// ❌ Don't do this
const mockLeague = mock<League>();

// ✅ Do this
const league = new League('Test League', { maxTeams: 10 });

3. Keep Tests Readable and Maintainable

Arrange-Act-Assert Pattern:

it('should calculate standings correctly', () => {
  // Arrange: Set up test data
  const league = createTestLeague();
  const teams = [createTestTeam('Team A'), createTestTeam('Team B')];
  const results = [createTestResult(teams[0], { position: 1 })];
  
  // Act: Perform the action
  const standings = league.calculateStandings(results);
  
  // Assert: Verify the outcome
  expect(standings[0].team).toBe(teams[0]);
  expect(standings[0].points).toBe(25);
});

4. Test Error Scenarios

Don't just test the happy path:

describe('League.addTeam()', () => {
  it('should add team successfully', () => { /* ... */ });
  
  it('should fail when team limit reached', () => {
    const league = createTestLeague({ maxTeams: 1 });
    league.addTeam(createTestTeam('Team A'));
    
    const result = league.addTeam(createTestTeam('Team B'));
    
    expect(result.isFailure()).toBe(true);
    expect(result.error.message).toBe('Team limit reached');
  });
  
  it('should fail when adding duplicate team', () => { /* ... */ });
});

Common Patterns

Setting Up Test Fixtures

Factory Functions:

// tests/support/factories.ts
export function createTestLeague(overrides?: Partial<LeagueProps>): League {
  return new League('Test League', {
    maxTeams: 10,
    scoringSystem: 'F1',
    ...overrides,
  });
}

export function createTestTeam(name: string): Team {
  return new Team(name, { drivers: ['Driver 1', 'Driver 2'] });
}

Mocking Ports in Use Case Tests

// tests/unit/application/CreateLeagueUseCase.test.ts
import { describe, it, expect, beforeEach, vi, type Mocked } from 'vitest';

describe('CreateLeagueUseCase', () => {
  let mockRepo: Mocked<ILeagueRepository>;
  let useCase: CreateLeagueUseCase;
  
  beforeEach(() => {
    mockRepo = {
      save: vi.fn().mockResolvedValue(undefined),
      findById: vi.fn().mockResolvedValue(null),
      findByName: vi.fn().mockResolvedValue(null),
    };
    useCase = new CreateLeagueUseCase(mockRepo);
  });
  
  it('should create a league when name is unique', async () => {
    const result = await useCase.execute({ name: 'New League' });
    
    expect(result.isSuccess()).toBe(true);
    expect(mockRepo.save).toHaveBeenCalledWith(
      expect.objectContaining({ name: 'New League' })
    );
  });
});

Database Cleanup Strategies

Integration Tests:

// tests/integration/setup.ts
import { sql } from './database';

export async function cleanDatabase() {
  await sql`TRUNCATE TABLE event_results CASCADE`;
  await sql`TRUNCATE TABLE events CASCADE`;
  await sql`TRUNCATE TABLE teams CASCADE`;
  await sql`TRUNCATE TABLE leagues CASCADE`;
}

beforeEach(async () => {
  await cleanDatabase();
});

E2E Tests:

// tests/e2e/support/hooks.ts
import { test as base } from '@playwright/test';

export const test = base.extend({
  page: async ({ page }, use) => {
    // Clean database before each test
    await fetch('http://localhost:3000/test/cleanup', { method: 'POST' });
    await use(page);
  },
});

Playwright Page Object Pattern

// tests/e2e/pages/LeaguePage.ts
export class LeaguePage {
  constructor(private page: Page) {}
  
  async navigateToCreateLeague() {
    await this.page.goto('/leagues/create');
  }
  
  async fillLeagueForm(data: { name: string; maxTeams: number }) {
    await this.page.fill('[name="name"]', data.name);
    await this.page.fill('[name="maxTeams"]', data.maxTeams.toString());
  }
  
  async submitForm() {
    await this.page.click('button[type="submit"]');
  }
  
  async getSuccessMessage() {
    return this.page.textContent('.success-message');
  }
}

// Usage in test
test('should create league', async ({ page }) => {
  const leaguePage = new LeaguePage(page);
  await leaguePage.navigateToCreateLeague();
  await leaguePage.fillLeagueForm({ name: 'Test', maxTeams: 10 });
  await leaguePage.submitForm();
  
  expect(await leaguePage.getSuccessMessage()).toBe('League created');
});

Real E2E Testing Strategy (No Mocks)

GridPilot focuses its real E2E testing strategy on browser-driven automation:

  1. Strategy A (Docker): Test BrowserDevToolsAdapter with Playwright or similar browser tooling against a fixture server
  2. Strategy B (Native macOS, legacy): Historical native OS-level automation on real hardware (now removed)

Constraint: iRacing Terms of Service

  • Production: Native OS-level automation only (no browser DevTools/CDP for actual iRacing automation)
  • Testing: Playwright-driven automation CAN be used against static HTML fixtures

Test Architecture Overview

graph TB
    subgraph Docker E2E - CI
        FX[Static HTML Fixtures] --> FS[Fixture Server Container]
        FS --> HC[Headless Chrome Container]
        HC --> BDA[BrowserDevToolsAdapter Tests]
    end
    
    %% Legacy native OS-level automation tests have been removed.

Strategy A: Docker-Based E2E Tests

Purpose

Test the complete 18-step workflow using BrowserDevToolsAdapter against real HTML fixtures without mocks.

Architecture

# docker/docker-compose.e2e.yml
services:
  # Headless Chrome with remote debugging enabled
  chrome:
    image: browserless/chrome:latest
    ports:
      - "9222:3000"
    environment:
      - CONNECTION_TIMEOUT=600000
      - MAX_CONCURRENT_SESSIONS=1
      - PREBOOT_CHROME=true
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/json/version"]
      interval: 5s
      timeout: 10s
      retries: 3

  # Static server for iRacing HTML fixtures
  fixture-server:
    build:
      context: ./fixture-server
      dockerfile: Dockerfile
    ports:
      - "3456:80"
    volumes:
      - ../resources/iracing-hosted-sessions:/usr/share/nginx/html:ro
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:80/01-hosted-racing.html"]
      interval: 5s
      timeout: 10s
      retries: 3

Fixture Server Configuration

# docker/fixture-server/Dockerfile
FROM nginx:alpine

# Configure nginx for static HTML serving
COPY nginx.conf /etc/nginx/conf.d/default.conf

EXPOSE 80
# docker/fixture-server/nginx.conf
server {
    listen 80;
    server_name localhost;
    root /usr/share/nginx/html;
    
    location / {
        try_files $uri $uri/ =404;
        add_header Access-Control-Allow-Origin *;
    }
}

BDD Scenarios for Docker E2E

Feature: BrowserDevToolsAdapter Workflow Automation
  As the automation engine
  I want to execute the 18-step hosted session workflow
  So that I can verify browser automation against real HTML fixtures

  Background:
    Given the Docker E2E environment is running
    And the fixture server is serving iRacing HTML pages
    And the headless Chrome container is connected

  Scenario: Complete workflow navigation through all 18 steps
    Given the BrowserDevToolsAdapter is connected to Chrome
    When I execute step 2 HOSTED_RACING
    Then the adapter should navigate to the hosted racing page
    And the page should contain the create race button
    
    When I execute step 3 CREATE_RACE
    Then the wizard modal should open
    
    When I execute step 4 RACE_INFORMATION
    And I fill the session name field with "Test Race"
    Then the form field should contain "Test Race"
    
    # ... steps 5-17 follow same pattern
    
    When I execute step 18 TRACK_CONDITIONS
    Then the automation should stop at the safety checkpoint
    And the checkout button should NOT be clicked

  Scenario: Modal step handling - Add Car modal
    Given the automation is at step 8 SET_CARS
    When I click the "Add Car" button
    Then the ADD_CAR modal should open
    When I search for "Dallara F3"
    And I select the first result
    Then the modal should close
    And the car should be added to the selection

  Scenario: Form field validation with real selectors
    Given I am on the RACE_INFORMATION page
    Then the selector "input[name='sessionName']" should exist
    And the selector '.form-group:has(label:has-text("Session Name")) input' should exist
    
  Scenario: Error handling when element not found
    Given I am on a blank page
    When I try to click selector "#nonexistent-element"
    Then the result should indicate failure
    And the error message should contain "not found"

Test Implementation Structure

// tests/e2e/docker/browserDevToolsAdapter.e2e.test.ts
import { describe, it, expect, beforeAll, afterAll } from 'vitest';
import { BrowserDevToolsAdapter } from '@infrastructure/adapters/automation/BrowserDevToolsAdapter';
import { StepId } from '@domain/value-objects/StepId';

describe('E2E: BrowserDevToolsAdapter - Docker Environment', () => {
  let adapter: BrowserDevToolsAdapter;
  const CHROME_WS_ENDPOINT = process.env.CHROME_WS_ENDPOINT || 'ws://localhost:9222';
  const FIXTURE_BASE_URL = process.env.FIXTURE_BASE_URL || 'http://localhost:3456';

  beforeAll(async () => {
    adapter = new BrowserDevToolsAdapter({
      browserWSEndpoint: CHROME_WS_ENDPOINT,
      defaultTimeout: 30000,
    });
    await adapter.connect();
  });

  afterAll(async () => {
    await adapter.disconnect();
  });

  describe('Step Workflow Execution', () => {
    it('should navigate to hosted racing page - step 2', async () => {
      const result = await adapter.navigateToPage(`${FIXTURE_BASE_URL}/01-hosted-racing.html`);
      expect(result.success).toBe(true);
    });

    it('should fill race information form - step 4', async () => {
      await adapter.navigateToPage(`${FIXTURE_BASE_URL}/03-race-information.html`);
      const stepId = StepId.create(4);
      const result = await adapter.executeStep(stepId, {
        sessionName: 'E2E Test Session',
        password: 'testpass123',
        description: 'Automated E2E test session',
      });
      expect(result.success).toBe(true);
    });

    // ... additional step tests
  });

  describe('Modal Operations', () => {
    it('should handle ADD_CAR modal - step 9', async () => {
      await adapter.navigateToPage(`${FIXTURE_BASE_URL}/09-add-a-car.html`);
      const stepId = StepId.create(9);
      const result = await adapter.handleModal(stepId, 'open');
      expect(result.success).toBe(true);
    });
  });

  describe('Safety Checkpoint', () => {
    it('should stop at step 18 without clicking checkout', async () => {
      await adapter.navigateToPage(`${FIXTURE_BASE_URL}/18-track-conditions.html`);
      const stepId = StepId.create(18);
      const result = await adapter.executeStep(stepId, {});
      expect(result.success).toBe(true);
      expect(result.metadata?.safetyStop).toBe(true);
    });
  });
});

Strategy B: Native macOS E2E Tests (Legacy)

Purpose

Historically, these tests exercised OS-level screen automation on real hardware. They could not run in Docker because native automation requires actual display access. The strategy has since been retired; the setup below is kept for reference.

Requirements

  • macOS CI runner with display access
  • Screen recording permissions granted
  • Accessibility permissions enabled
  • Real Chrome/browser window visible

BDD Scenarios for Native E2E (Legacy)

Historical note: previous native OS-level automation scenarios have been retired. Real-world coverage is now provided by Playwright-based workflows and fixture-backed automation; native OS-level adapters are no longer part of the supported stack.

Test Implementation Structure (Legacy)

Previous native OS-level adapter tests have been removed. The current E2E coverage relies on Playwright-driven automation and fixture-backed flows as described in the Docker-based strategy above.


Test File Structure

tests/
├── e2e/
│   ├── docker/                          # Docker-based E2E tests
│   │   ├── browserDevToolsAdapter.e2e.test.ts
│   │   ├── workflowSteps.e2e.test.ts
│   │   ├── modalHandling.e2e.test.ts
│   │   └── selectorValidation.e2e.test.ts
│   ├── native/                          # Native OS automation tests
│   │   ├── nutJsAdapter.e2e.test.ts
│   │   ├── screenCapture.e2e.test.ts
│   │   ├── templateMatching.e2e.test.ts
│   │   └── windowFocus.e2e.test.ts
│   ├── automation.e2e.test.ts           # Existing selector validation
│   └── features/                         # Gherkin feature files
│       └── hosted-session-automation.feature
├── integration/
│   └── infrastructure/
│       └── BrowserDevToolsAdapter.test.ts
└── unit/
    └── ...

docker/
├── docker-compose.e2e.yml              # E2E test environment
└── fixture-server/
    ├── Dockerfile
    └── nginx.conf

.github/
└── workflows/
    ├── e2e-docker.yml                  # Docker E2E workflow
    └── e2e-macos.yml                   # macOS native E2E workflow

CI/CD Integration

Docker E2E Workflow

# .github/workflows/e2e-docker.yml
name: E2E Tests - Docker

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

jobs:
  e2e-docker:
    runs-on: ubuntu-latest
    
    steps:
      - uses: actions/checkout@v4
      
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      
      - name: Install dependencies
        run: npm ci
      
      - name: Start Docker E2E environment
        run: |
          docker compose -f docker/docker-compose.e2e.yml up -d
          docker compose -f docker/docker-compose.e2e.yml ps
      
      - name: Wait for services to be healthy
        run: |
          timeout 60 bash -c 'until curl -s http://localhost:9222/json/version; do sleep 2; done'
          timeout 60 bash -c 'until curl -s http://localhost:3456/01-hosted-racing.html; do sleep 2; done'
      
      - name: Run Docker E2E tests
        run: npm run test:e2e:docker
        env:
          CHROME_WS_ENDPOINT: ws://localhost:9222
          FIXTURE_BASE_URL: http://localhost:3456
      
      - name: Stop Docker environment
        if: always()
        run: docker compose -f docker/docker-compose.e2e.yml down -v

macOS Native E2E Workflow

# .github/workflows/e2e-macos.yml
name: E2E Tests - macOS Native

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

jobs:
  e2e-macos:
    runs-on: macos-latest
    
    steps:
      - uses: actions/checkout@v4
      
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      
      - name: Install dependencies
        run: npm ci
      
      - name: Grant screen recording permissions
        run: |
          # Note: GitHub Actions macOS runners have limited permission support
          # Some tests may be skipped if permissions cannot be granted
          sudo sqlite3 /Library/Application\ Support/com.apple.TCC/TCC.db \
            "INSERT OR REPLACE INTO access VALUES('kTCCServiceScreenCapture','com.apple.Terminal',0,2,0,1,NULL,NULL,0,'UNUSED',NULL,0,$(date +%s));" 2>/dev/null || true
      
      - name: Run native E2E tests
        run: npm run test:e2e:native
        env:
          DISPLAY_AVAILABLE: "true"
      
      - name: Upload screenshots on failure
        if: failure()
        uses: actions/upload-artifact@v4
        with:
          name: e2e-screenshots
          path: tests/e2e/native/screenshots/

NPM Scripts

{
  "scripts": {
    "test:e2e": "vitest run --config vitest.e2e.config.ts",
    "test:e2e:docker": "vitest run --config vitest.e2e.config.ts tests/e2e/docker/",
    "test:e2e:native": "vitest run --config vitest.e2e.config.ts tests/e2e/native/",
    "docker:e2e:up": "docker compose -f docker/docker-compose.e2e.yml up -d",
    "docker:e2e:down": "docker compose -f docker/docker-compose.e2e.yml down -v",
    "docker:e2e:logs": "docker compose -f docker/docker-compose.e2e.yml logs -f"
  }
}

Environment Configuration

# .env.test.example
# Docker E2E Configuration
CHROME_WS_ENDPOINT=ws://localhost:9222
FIXTURE_BASE_URL=http://localhost:3456
E2E_TIMEOUT=120000

# Native E2E Configuration (legacy)
DISPLAY_AVAILABLE=true

Cross-References

- **ARCHITECTURE.md** — Layer boundaries, port definitions, and dependency rules that guide test structure
- **TECH.md** — Detailed tooling specifications (Vitest, Playwright, Testcontainers configuration)
- **package.json** — Test scripts and commands (`test:unit`, `test:integration`, `test:e2e`, `test:coverage`)

## Summary

GridPilot's testing strategy ensures:

- **Business logic is correct** (unit tests for domain/application layers)
- **Infrastructure works reliably** (integration tests for repositories/adapters)
- **User workflows function end-to-end** (E2E tests for the full stack)
- **Browser automation works correctly** (Docker E2E tests with real fixtures)
- **OS-level automation works correctly** (native macOS E2E tests with display access)

## Hosted Session Automation Test Pyramid

For the iRacing hosted-session automation, confidence is provided by the concrete suites described in this section.

### Confidence expectations

- For normal changes to hosted-session automation (selectors, step logic, overlay behavior, authentication, or confirmation flows), the following suites must pass to claim "high confidence":
  - All relevant unit tests in `tests/unit` that touch the changed domain/use-case code.
  - All relevant integration tests in `tests/integration` for the affected adapters.
  - All step E2E tests under `tests/e2e/steps`.
  - All workflow E2E tests under `tests/e2e/workflows`.
- The real-world smoke suite in `tests/e2e/automation.e2e.test.ts` remains as historical documentation and should not be relied upon for validating changes; instead, update and extend the Playwright-based step and workflow suites.
- When adding new behavior:
  - Prefer unit tests for domain/application changes.
  - Add or extend integration tests when introducing new adapters or external integrations.
  - Add step E2E tests when changing DOM/step behavior for a specific wizard step.
  - Add or extend workflow E2E tests when behavior spans multiple steps, touches authentication/session lifecycle, or affects confirmation/checkout behavior end-to-end.
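These expectations can be encoded as a tiny lookup; the sketch below is purely illustrative (the `ChangeKind` names and the `requiredSuites` helper are hypothetical, not repo code) and returns all four required suites with the layer to extend first at the front:

```typescript
// Hypothetical helper mirroring the confidence expectations above: for a given
// kind of change, list the suites that must pass for "high confidence".

type ChangeKind = 'domain' | 'adapter' | 'step-dom' | 'multi-step-workflow';

const BASE_SUITES = [
  'tests/unit',
  'tests/integration',
  'tests/e2e/steps',
  'tests/e2e/workflows',
];

export function requiredSuites(kind: ChangeKind): string[] {
  // All four layers must pass for hosted-session changes; the change kind
  // only decides which layer to extend with new coverage first.
  const addFirst: Record<ChangeKind, string> = {
    domain: 'tests/unit',
    adapter: 'tests/integration',
    'step-dom': 'tests/e2e/steps',
    'multi-step-workflow': 'tests/e2e/workflows',
  };
  return [addFirst[kind], ...BASE_SUITES.filter((s) => s !== addFirst[kind])];
}
```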

By following BDD principles and maintaining clear test organization, the team can confidently evolve GridPilot while preserving correctness and stability, with a dedicated, layered confidence story for hosted-session automation.

### Hosted-session automation layers

The hosted-session automation stack is covered by layered suites that balance real-site confidence with fast, deterministic fixture runs:

- **Real-site hosted smoke (opt-in)**
  - `login-and-wizard-smoke.e2e.test.ts`
  - Gated by `HOSTED_REAL_E2E=1`; exercises the real members.iracing.com login, the Hosted Racing landing page, and "Create a Race" wizard entry.
  - Fails loudly if authentication, the Hosted DOM, or wizard entry regress.
- **Fixture-backed auto-navigation workflows**
  - `full-hosted-session.autonav.workflow.e2e.test.ts`
  - Uses the real Playwright stack (adapter + WizardStepOrchestrator + FixtureServer) with auto navigation enabled (`__skipFixtureNavigation` forbidden).
  - Drives a representative subset of steps (e.g., 1 → 3 → 7 → 9 → 13 → 17) and asserts that each step lands on the expected wizard container via `IRACING_SELECTORS`.
- **Step-level fixture E2Es with explicit mismatch path**
  - Existing step suites under `tests/e2e/steps` now have two execution paths via `StepHarness`:
    - `executeStepWithFixtureMismatch()` explicitly sets `__skipFixtureNavigation` for selector/state-mismatch tests (e.g., cars/track validation).
    - `executeStepWithAutoNavigation()` uses the adapter's normal auto-navigation, forbidding `__skipFixtureNavigation`.
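A simplified stand-in for the two `StepHarness` paths shows the contract between them (the real harness drives Playwright; this sketch only models how the `__skipFixtureNavigation` flag is set on one path and rejected on the other):

```typescript
// Simplified stand-in for the StepHarness idea (not the real API): one path
// deliberately sets __skipFixtureNavigation for mismatch tests, the other
// rejects it so the adapter's real auto-navigation is always exercised.

interface StepContext {
  __skipFixtureNavigation?: boolean;
}

type StepRunner = (ctx: StepContext) => string;

export function executeStepWithFixtureMismatch(
  run: StepRunner,
  ctx: StepContext = {},
): string {
  // Mismatch tests pin the fixture page themselves, so navigation is skipped.
  return run({ ...ctx, __skipFixtureNavigation: true });
}

export function executeStepWithAutoNavigation(
  run: StepRunner,
  ctx: StepContext = {},
): string {
  // Guardrail: auto-navigation tests must never carry the skip flag at all.
  if ('__skipFixtureNavigation' in ctx) {
    throw new Error('__skipFixtureNavigation is forbidden in auto-navigation tests');
  }
  return run(ctx);
}
```

Rejecting the flag outright (rather than ignoring it) is what keeps auto-navigation regressions from being silently masked.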

### `__skipFixtureNavigation` guardrails

To avoid silently masking regressions in auto navigation, `__skipFixtureNavigation` is confined to the explicit mismatch path: `executeStepWithFixtureMismatch()` may set it, while the auto-navigation path rejects any context that carries the flag.

### Hosted-session behavior coverage matrix (initial slice)

| Behavior | Real-site smoke | Fixture step E2Es | Fixture workflows |
| --- | --- | --- | --- |
| Real login + Hosted landing | `login-and-wizard-smoke.e2e.test.ts` | (fixtures only) | (fixtures only) |
| Step 3 Race Information DOM/fields | 🔍 via hosted wizard modal in real smoke (presence only) | `step-03-race-information.e2e.test.ts` | via step 3 in `full-hosted-session.autonav.workflow.e2e.test.ts` |
| Cars / Add Car flow (steps 8–9) | 🔍 via Hosted page + Create Race modal only | `step-08-cars.e2e.test.ts`, `step-09-add-car.e2e.test.ts` | steps 7–9 in `steps-07-09-cars-flow.e2e.test.ts` and the autonav slice workflow |

### Real-site hosted and companion workflows (opt-in)

Real iRacing and companion-hosted workflows are never part of the default `npm test` run. They are gated behind explicit environment variables and npm scripts so they can be used in local runs or optional CI jobs without impacting day-to-day feedback loops.

#### Real-site hosted smoke and focused flows

Run them locally with:

```bash
HOSTED_REAL_E2E=1 npm run test:hosted-real
```

Intended CI usage:

- Optional nightly/weekly workflow (not per-commit).
- Example job shape:
  1. Checkout
  2. `npm ci`
  3. `HOSTED_REAL_E2E=1 npm run test:hosted-real`
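One possible nightly workflow following that job shape (illustrative only; the workflow name and cron schedule are placeholders, not existing repo files):

```yaml
# .github/workflows/hosted-real-nightly.yml (illustrative sketch)
name: hosted-real-nightly
on:
  schedule:
    - cron: '0 3 * * *'   # nightly; adjust to taste
  workflow_dispatch: {}    # allow on-demand runs

jobs:
  hosted-real:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - run: npm ci
      - run: npm run test:hosted-real
        env:
          HOSTED_REAL_E2E: "1"
```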

#### Companion fixture-hosted workflow (opt-in)

Run it locally with:

```bash
COMPANION_FIXTURE_HOSTED=1 npm run test:companion-hosted
```

Intended CI usage:

- Optional companion-centric workflow (nightly or on-demand).
- Example job shape:
  1. Checkout
  2. `npm ci`
  3. `COMPANION_FIXTURE_HOSTED=1 npm run test:companion-hosted`

These suites assume the same fixture server and Playwright wiring as the rest of the hosted-session tests and are explicitly opt-in so `npm test` remains fast and deterministic.

### Selector ↔ fixture ↔ real DOM guardrail

For hosted-session automation, `IRACING_SELECTORS` must stay aligned with both the optimized DOM fixtures and the real iRacing DOM.

Manual workflow when the iRacing DOM changes:

1. Detect the failure:
   - A hosted-real test fails because a selector no longer matches, or
   - a fixture-backed step/workflow test fails in a way that suggests large DOM drift.
2. Refresh the DOM fixtures:

   ```bash
   npm run export-html-dumps
   ```

   This script runs `exportHtmlDumps.ts` to regenerate `html-dumps-optimized` from the raw HTML under `html-dumps`.
3. Re-align selectors and tests:
   - Update `IRACING_SELECTORS` to reflect the new DOM shape.
   - Fix any failing step/workflow E2Es under `tests/e2e/steps` and `tests/e2e/workflows` so they again describe the canonical behavior.
   - Re-run:
     - `npm test`
     - `HOSTED_REAL_E2E=1 npm run test:hosted-real` (if you have access to real iRacing)
     - `COMPANION_FIXTURE_HOSTED=1 npm run test:companion-hosted` (optional)

This keeps fixtures, selectors, and real-site behavior aligned without forcing real-site tests into every CI run.
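For intuition, an optimization pass of this kind boils down to stripping non-structural content from the raw dumps; the toy sketch below is purely illustrative (the real `exportHtmlDumps.ts` may work quite differently, and `optimizeDump` is a hypothetical name):

```typescript
// Toy illustration of a dump-optimization pass: strip scripts, styles, and
// comments from a raw HTML dump so the optimized fixture keeps only the DOM
// structure that selectors are asserted against. The real exportHtmlDumps.ts
// may do more (or entirely different) work.

export function optimizeDump(rawHtml: string): string {
  return rawHtml
    .replace(/<script\b[\s\S]*?<\/script>/gi, '') // drop inline scripts
    .replace(/<style\b[\s\S]*?<\/style>/gi, '')   // drop inline styles
    .replace(/<!--[\s\S]*?-->/g, '')              // drop HTML comments
    .replace(/\n{3,}/g, '\n\n');                  // collapse leftover blank runs
}
```

Keeping only structural markup is what makes the optimized fixtures small, stable, and safe to assert `IRACING_SELECTORS` against.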

The intent for new hosted-session work is:

- Use fixture-backed step E2Es to lock DOM and per-step behavior.
- Use fixture-backed auto-navigation workflows to guard `WizardStepOrchestrator` and `PlaywrightAutomationAdapter.executeStep()` across multiple steps.
- Use opt-in real-site smoke to catch drift in authentication and the Hosted Racing DOM without impacting default CI.