# Rating BDD E2E Tests

This directory contains BDD (Behavior-Driven Development) E2E tests for the GridPilot Rating system.

## Overview

The GridPilot Rating system is a competition rating system designed specifically for league racing. Unlike iRating (which is designed for matchmaking), GridPilot Rating measures:

- **Results Strength**: How well you finish relative to field strength
- **Consistency**: Stability of finishing positions over a season
- **Clean Driving**: Incidents per race, weighted by severity
- **Racecraft**: Positions gained/lost vs. incident involvement
- **Reliability**: Attendance and DNS/DNF record
- **Team Contribution**: Points earned for your team; lineup efficiency
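
The components above can be sketched as a weighted composite. This is purely illustrative: the component names mirror the list, but the weights, the 0-100 scale, and the `compositeRating` function are assumptions for test-data reasoning, not the actual GridPilot formula.

```typescript
// Illustrative only: weights and scale are assumptions, not the real formula.
interface RatingComponents {
  resultsStrength: number;   // 0-100
  consistency: number;       // 0-100
  cleanDriving: number;      // 0-100
  racecraft: number;         // 0-100
  reliability: number;       // 0-100
  teamContribution: number;  // 0-100
}

// Hypothetical weights (sum to 1.0); the real system may combine differently.
const WEIGHTS: Record<keyof RatingComponents, number> = {
  resultsStrength: 0.3,
  consistency: 0.15,
  cleanDriving: 0.2,
  racecraft: 0.15,
  reliability: 0.1,
  teamContribution: 0.1,
};

// Weighted sum of components, rounded to one decimal place.
function compositeRating(c: RatingComponents): number {
  const total = (Object.keys(WEIGHTS) as (keyof RatingComponents)[])
    .reduce((sum, key) => sum + WEIGHTS[key] * c[key], 0);
  return Math.round(total * 10) / 10;
}
```

A model like this helps tests predict direction of change (e.g. more incidents lower `cleanDriving`, which lowers the composite), even when the exact numbers differ.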
## Test Files

### [`rating-profile.spec.ts`](rating-profile.spec.ts)

Tests the driver profile rating display, including:

- Current GridPilot Rating value
- Rating breakdown by component (results, consistency, clean driving, etc.)
- Rating trend over time (seasons)
- Rating comparison with peers
- Rating impact on team contribution

**Key Scenarios:**

- Driver sees their current GridPilot Rating on profile
- Driver sees rating breakdown by component
- Driver sees rating trend over multiple seasons
- Driver sees how rating compares to league peers
- Driver sees rating impact on team contribution
- Driver sees rating explanation/tooltip
- Driver sees rating update after race completion

### [`rating-calculation.spec.ts`](rating-calculation.spec.ts)

Tests the rating calculation logic and updates:

- Rating calculation after race completion
- Rating update based on finishing position
- Rating update based on field strength
- Rating update based on incidents
- Rating update based on consistency
- Rating update based on team contribution
- Rating update based on season performance

**Key Scenarios:**

- Rating increases after a strong finish against a strong field
- Rating decreases after a poor finish or incidents
- Rating reflects consistency over multiple races
- Rating accounts for team contribution
- Rating updates immediately after results are processed
- Rating calculation is transparent and understandable

### [`rating-leaderboard.spec.ts`](rating-leaderboard.spec.ts)

Tests the rating-based leaderboards:

- Global driver rankings by GridPilot Rating
- League-specific driver rankings
- Team rankings based on driver ratings
- Rating-based filtering and sorting
- Rating-based search functionality

**Key Scenarios:**

- User sees drivers ranked by GridPilot Rating
- User can filter drivers by rating range
- User can search for drivers by rating
- User can sort drivers by different rating components
- User sees team rankings based on driver ratings
- User sees rating-based leaderboards with accurate data

## Test Structure

Each test file follows this pattern:

```typescript
import { test, expect } from '@playwright/test';

test.describe('GridPilot Rating System', () => {
  test.beforeEach(async ({ page }) => {
    // TODO: Implement authentication setup
  });

  test('Driver sees their GridPilot Rating on profile', async ({ page }) => {
    // TODO: Implement test
    // Scenario: Driver views their rating
    //   Given I am a registered driver "John Doe"
    //   And I have completed several races
    //   When I am on my profile page
    //   Then I should see my GridPilot Rating
    //   And I should see the rating breakdown
  });
});
```

## Test Philosophy

These tests follow the BDD E2E testing approach:

- **Focus on outcomes, not visual implementation**: Tests validate what the user sees and can verify, not how it is rendered
- **Use Gherkin syntax**: Tests are written in Given/When/Then format
- **Validate final user outcomes**: Tests serve as acceptance criteria for the rating functionality
- **Use Playwright**: Tests are implemented with Playwright for browser automation

## TODO Implementation

All tests are currently placeholders with TODO comments. The actual test implementations should:

1. Set up authentication (log in as a test driver)
2. Navigate to the appropriate page
3. Verify the expected outcomes using Playwright assertions
4. Handle loading states, error states, and edge cases
5. Use test data that matches the expected behavior
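
Following those steps, a first implemented profile test might look like the sketch below. The routes (`/login`, `/profile`), the form labels, and the `data-testid` values are assumptions about the app under test, not confirmed selectors; this fragment runs only under the Playwright test runner.

```typescript
import { test, expect } from '@playwright/test';

// Sketch only: routes, labels, and test IDs below are assumptions.
test('Driver sees their GridPilot Rating on profile', async ({ page }) => {
  // 1. Authentication: log in as a seeded test driver.
  await page.goto('/login');
  await page.getByLabel('Email').fill('john.doe@example.com');
  await page.getByLabel('Password').fill('test-password');
  await page.getByRole('button', { name: 'Sign in' }).click();

  // 2. Navigate to the profile page.
  await page.goto('/profile');

  // 3. Verify the outcome: the rating is visible and shows a numeric value.
  const rating = page.getByTestId('gridpilot-rating');
  await expect(rating).toBeVisible();
  await expect(rating).toHaveText(/\d+(\.\d+)?/);

  // 4. Handle the loading state: web-first assertions wait until the
  //    breakdown has rendered instead of asserting on a spinner.
  await expect(page.getByTestId('rating-breakdown')).toBeVisible();
});
```

Once authentication is shared across specs, step 1 would typically move into `test.beforeEach` or a Playwright storage-state fixture.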
## Test Data

Tests should use realistic test data that matches the expected behavior:

- Driver: "John Doe" or a similar test driver with varying performance
- Races: Completed races with different results (wins, podiums, DNFs)
- Fields: Races with varying field strength (strong vs. weak fields)
- Incidents: Races with different incident counts
- Teams: Teams with multiple drivers contributing to the team score
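
One way to keep that data consistent across specs is a small factory with overridable defaults. The `makeDriver` helper and its field names below are hypothetical, not the real schema:

```typescript
// Hypothetical shapes: field names are illustrative, not the real schema.
interface RaceResult {
  finishPosition: number;
  fieldStrength: number;  // e.g. average rating of entrants
  incidents: number;
  dnf: boolean;
}

interface TestDriver {
  name: string;
  team: string;
  races: RaceResult[];
}

// Factory with overridable defaults, so each test tweaks only what it needs.
function makeDriver(overrides: Partial<TestDriver> = {}): TestDriver {
  return {
    name: 'John Doe',
    team: 'Test Racing',
    races: [
      { finishPosition: 1, fieldStrength: 1800, incidents: 0, dnf: false },  // win
      { finishPosition: 3, fieldStrength: 2100, incidents: 1, dnf: false },  // podium, strong field
      { finishPosition: 12, fieldStrength: 1500, incidents: 4, dnf: true },  // DNF with incidents
    ],
    ...overrides,
  };
}
```

A consistency-focused test could then call `makeDriver({ races: [...] })` with near-identical finishes, while an incident test swaps in high-incident races, without duplicating the rest of the driver setup.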
## Future Enhancements

- Add test data factories/fixtures for consistent test data
- Add helper functions for common actions (login, navigation, etc.)
- Add visual regression tests for rating display
- Add performance tests for rating calculation
- Add accessibility tests for rating pages
- Add cross-browser compatibility testing