Initial project setup: monorepo structure and documentation

Files in this commit:

- docs/ARCHITECTURE.md (1048 lines, new file; diff suppressed because it is too large)
- docs/CONCEPT.md (229 lines, new file)

# GridPilot Concept

## Problem Statement

iRacing league management today is fragmented and manual:

- **Communication Chaos**: League organizers juggle Discord channels, Google Sheets, and manual messaging to coordinate everything
- **No Visibility**: Leagues operate in isolation without a central platform for discovery or branding
- **Manual Burden**: Admins spend hours manually entering race results, managing registrations, and creating sessions in iRacing
- **Team Racing Limitations**: No native support for team-based racing with parallel scoring (one driver per car slot, but team accumulates points)
- **Session Creation Pain**: Creating race sessions in iRacing requires tedious browser navigation and form filling
- **Migration Challenges**: Existing leagues can't easily migrate historical data or preserve their identity

Based on feedback from Reddit and Discord communities, league organizers are overwhelmed by administrative tasks when they'd rather focus on building community and running great racing events.

## Target Users

### League Organizers & Admins
**What they need:**
- Automated race result processing
- Easy session creation without manual browser work
- Tools to manage seasons, sign-ups, and standings
- Professional branding and identity for their league
- Custom domains to strengthen league identity
- Migration support to bring existing league history

### Team Captains
**What they need:**
- Create and manage racing teams
- Register teams for league seasons
- Track team standings alongside driver standings
- Coordinate with team drivers
- View team performance history

### Solo Drivers
**What they need:**
- Browse and discover active leagues
- Easy registration and sign-up flows
- Personal statistics and race history
- Track standings and points
- Connect with the racing community

## Core Features

### For Solo Drivers

**League Discovery**
- Browse active leagues across different series and skill levels
- Filter by time zones, competitiveness, and racing format
- Join leagues with simple registration flows

**Personal Racing Stats**
- Automatic race result tracking from iRacing
- Historical performance data
- Personal standings in each league
- Progress tracking across seasons

### For Teams

**Team Management**
- Create and name racing teams
- Invite and manage team drivers
- Register teams for league seasons

**Parallel Racing Format**
- One driver per car slot in each race
- Team points accumulate from all drivers' results
- Both team standings and individual driver standings
- Flexibility for different drivers each race

**Team Identity**
- Team branding and profiles
- Historical team performance tracking
- Team communication tools
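
The parallel scoring model above can be sketched in a few lines of TypeScript. This is an illustration only; the result shape and point values are assumptions, not GridPilot's actual schema:

```typescript
// Minimal sketch of parallel scoring: drivers score individually,
// and a team's total is the sum of its drivers' points.
// Field names and point values are hypothetical.
interface RaceResult {
  driverId: string;
  teamId?: string;      // undefined for solo drivers
  points: number;       // points awarded for finishing position
}

function driverStandings(results: RaceResult[]): Map<string, number> {
  const table = new Map<string, number>();
  for (const r of results) {
    table.set(r.driverId, (table.get(r.driverId) ?? 0) + r.points);
  }
  return table;
}

function teamStandings(results: RaceResult[]): Map<string, number> {
  const table = new Map<string, number>();
  for (const r of results) {
    if (!r.teamId) continue; // solo drivers don't feed a team total
    table.set(r.teamId, (table.get(r.teamId) ?? 0) + r.points);
  }
  return table;
}

const season: RaceResult[] = [
  { driverId: "alice", teamId: "apex", points: 25 },
  { driverId: "bob",   teamId: "apex", points: 18 },
  { driverId: "cara",  points: 15 },
];
```

The same result rows feed both tables, which is what "parallel scoring" means in practice: drivers keep individual standings while their points also accumulate for their team.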

### For League Organizers

**League Identity & Branding**
- Professional league pages with custom branding
- Custom domain support (e.g., your-league.racing)
- League logos, colors, and identity
- Public-facing presence for member recruitment

**Race Management**
- Automated result importing from iRacing
- No manual CSV uploads or data entry
- Session result processing tied to league structure
- Point calculations handled automatically

**Season Administration**
- Create and manage racing seasons
- Define scoring rules and formats
- Handle sign-ups and registrations
- Configure team vs solo racing formats

**Authentication & Security**
- iRacing OAuth integration
- Verify driver identities automatically
- Secure access control for league admin functions
- No separate account creation needed

### Migration Support

**For Existing Leagues**
- Import historical season data
- Preserve league identity and history
- Maintain continuity for established communities
- Smooth transition without losing context

## User Journeys

### Admin Creating a League

1. Sign in with iRacing credentials
2. Create new league with name and branding
3. Choose racing series and car/track combinations
4. Configure season format (team vs solo, point system)
5. Set up custom domain (optional)
6. Open registration for drivers/teams
7. Publish league page for discovery

### Team Registering for a Season

1. Team captain browses available leagues
2. Reviews league format and schedule
3. Registers team for upcoming season
4. Invites or confirms team drivers
5. Receives confirmation and season details
6. Team appears in league roster

### Driver Viewing Standings

1. Driver logs into GridPilot
2. Navigates to their league dashboard
3. Views current season standings (team and driver)
4. Reviews recent race results
5. Checks upcoming race schedule
6. Accesses historical performance data

### Organizer Managing Race Day

1. Admin creates race session through GridPilot
2. Session automatically appears in iRacing
3. Drivers join and race in iRacing
4. Race completes in iRacing
5. GridPilot automatically imports results
6. Points calculated and standings updated
7. Admin reviews and publishes results
8. Drivers see updated standings immediately

## Automation Vision

### Why Browser Automation?

iRacing doesn't provide public APIs for session creation or comprehensive result data. League admins currently face:
- Repetitive browser navigation to create each race session
- Manual form filling for every session detail
- Time-consuming workflows that scale poorly with league size
- Error-prone manual processes

### What Automation Solves

**Session Creation Pain**
- Eliminate manual browser work
- Create sessions from GridPilot with one click
- Batch session creation for full seasons
- Consistent configuration without human error

**Result Processing**
- Automatic result imports from iRacing
- No manual CSV downloads or uploads
- Real-time standings updates
- Accurate point calculations

### Assistant-Style Approach

GridPilot acts as an admin assistant, not a bot:
- Automation runs on the admin's behalf with their authorization
- Clear opt-in for automation features
- Admin maintains full control and visibility
- Automation handles tedious tasks, not gameplay

### Important Boundary

**We automate admin tasks, not gameplay.**

GridPilot automates league management workflows: creating sessions, processing results, and managing registrations. We never touch actual racing gameplay, driver behavior, or in-race activities. This is administrative automation to free organizers from manual work.

## Future Vision

### Monetization Approach

GridPilot will introduce optional monetization features after the core platform is stable:

**League Operation Fees**
- Organizers can charge season entry fees
- Both one-time and per-race payment options
- Revenue split between league and GridPilot platform
- Support for league sustainability and prizes

**Platform Position**
- GridPilot takes a percentage of collected fees
- No fees for free leagues
- Transparent pricing structure
- Revenue supports platform development and hosting

### When Monetization Arrives

Monetization features will be added only after:
- Core functionality is proven stable
- User base is established and growing
- League organizers are successfully using the platform
- Feedback confirms value justifies pricing

The focus now is delivering a great product that solves real problems. Monetization comes later, when the platform has earned it.

### Potential Expansion

Beyond iRacing, GridPilot's approach could extend to:
- Other sim racing platforms
- Different racing series and formats
- Broader motorsport league management
- Cross-platform racing communities

But first: nail the iRacing league management experience.

---

GridPilot exists to make league racing accessible and professional for organizers of all sizes, eliminating manual work so communities can focus on what matters: great racing and strong communities.

docs/ROADMAP.md (465 lines, new file)

# GridPilot Implementation Roadmap

## Overview

This roadmap provides a phased implementation plan for GridPilot, an automated league management platform for iRacing. Each phase builds upon the previous one, with clear success criteria and actionable todos.

**Purpose:**
- Guide iterative development from technical validation to public launch and monetization
- Track progress through checkable todos
- Validate assumptions before investing in full implementation
- Ensure architectural integrity throughout each phase

**How to Use:**
- Check off todos as they are completed (replace `[ ]` with `[x]`)
- Review success criteria before moving to the next phase
- Refer to [ARCHITECTURE.md](./ARCHITECTURE.md) for component boundaries and patterns
- Consult [TESTS.md](./TESTS.md) for testing approach and BDD scenario structure
- See [CONCEPT.md](./CONCEPT.md) for product vision and user needs

**Relationship to MVP:**
- **Phase 0-1:** Pre-MVP validation (technical feasibility, market validation)
- **Phase 2:** MVP (core league management with automated results)
- **Phase 3-4:** Enhanced MVP (automation layer, branding)
- **Phase 5-6:** Public launch and monetization

---

## Phase 0: Foundation (Automation Testing - Internal)

**Goal:** Validate technical feasibility of browser automation and establish testing infrastructure.

### Infrastructure Setup
- [ ] Initialize monorepo with npm workspaces (`/src/apps`, `/src/packages`)
- [ ] Set up TypeScript configuration (strict mode, path aliases)
- [ ] Configure ESLint and Prettier (no warnings tolerated)
- [ ] Create basic domain models (`League`, `Team`, `Event`, `Driver`, `Result`)
- [ ] Set up test harness (Vitest for unit/integration tests)
- [ ] Configure Docker Compose for E2E testing environment
- [ ] Document development setup in README.md
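
As a sketch of what the listed domain models might look like as plain TypeScript types (field names are assumptions, not a final schema):

```typescript
// Hypothetical first cut of the core domain models named above.
// Field names are illustrative only.
interface Driver { id: string; iracingId: number; displayName: string }
interface Team   { id: string; name: string; driverIds: string[] }
interface League { id: string; name: string; ownerId: string; teamIds: string[] }
interface Event  { id: string; leagueId: string; trackName: string; startsAt: Date }
interface Result {
  eventId: string;
  driverId: string;
  finishPosition: number;
  incidents: number;
}

// Example instance to show the shapes compose.
const league: League = { id: "l1", name: "Midnight GT", ownerId: "d1", teamIds: [] };
```

Starting from plain types keeps the domain layer free of framework or database concerns, which matches the Clean Architecture layering referenced throughout this roadmap.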
### Automation Validation
|
||||
- [ ] Install and configure Nut.js for browser automation
|
||||
- [ ] Test iRacing session creation page detection
|
||||
- [ ] Test session ID extraction from URL or page elements
|
||||
- [ ] Validate server-side result polling from iRacing API
|
||||
- [ ] Create proof-of-concept automation script
|
||||
- [ ] Document automation approach and limitations
|
||||
- [ ] Identify automation failure modes and mitigation strategies
|
||||
|
||||
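
The session ID extraction step can be prototyped against URL query parameters. The URL shape below is a stand-in for illustration; the real iRacing pages may encode the ID elsewhere, which is why the todo above also mentions page elements:

```typescript
// Sketch of extracting a session ID from a URL query parameter.
// The "sessionid" parameter name and example host are assumptions.
function extractSessionId(rawUrl: string): string | null {
  const url = new URL(rawUrl);
  return url.searchParams.get("sessionid");
}

const withId = "https://example.invalid/session?sessionid=12345&tab=results";
const withoutId = "https://example.invalid/session";
```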
### Testing Foundation
|
||||
- [ ] Write example BDD scenarios (Given/When/Then format)
|
||||
- [ ] Set up Dockerized E2E test environment
|
||||
- [ ] Create fixture data for test scenarios
|
||||
- [ ] Validate test isolation and repeatability
|
||||
- [ ] Document testing strategy in [TESTS.md](./TESTS.md)
|
||||
|
||||
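
A Given/When/Then scenario can be sketched even without a framework; the structure below is what the Vitest-based BDD scenarios would mirror. The registration type and `approve` step are hypothetical:

```typescript
// Given/When/Then expressed as plain code, framework-free.
// In the real suite this structure would live in Vitest describe/it blocks.
type Registration = { driverId: string; status: "pending" | "approved" };

function approve(reg: Registration): Registration {
  return { ...reg, status: "approved" };
}

// Given a pending registration
const given: Registration = { driverId: "d42", status: "pending" };
// When the organizer approves it
const when = approve(given);
// Then the registration is approved and the original is untouched
const scenarioPassed = when.status === "approved" && given.status === "pending";
```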

**Success Criteria:**
- Technical feasibility confirmed (browser automation reliable)
- Test infrastructure operational (unit, integration, E2E)
- Development environment documented and reproducible
- No blockers identified for MVP implementation

**Note:** This phase is internal validation only; no user-facing features.

---

## Phase 1: Landing Page & Market Validation

**Goal:** Validate product-market fit before building the full application.

### Marketing Website
- [ ] Build static marketing website (Next.js or similar)
- [ ] Create compelling copy addressing league organizer pain points
- [ ] Design product mockups and fake screenshots
- [ ] Add email collection form (waitlist integration)
- [ ] Implement privacy policy and terms of service
- [ ] Set up analytics (signups, page views, engagement)

### Community Engagement
- [ ] Post to r/iRacing subreddit with mockups
- [ ] Share in iRacing Discord communities
- [ ] Reach out to league organizers directly
- [ ] Collect feedback on pain points and feature requests
- [ ] Conduct user interviews with interested organizers
- [ ] Document feedback in product backlog

### Analysis
- [ ] Analyze email signup metrics
- [ ] Review qualitative feedback themes
- [ ] Validate assumptions about organizer pain points
- [ ] Assess willingness to pay (surveys, conversations)
- [ ] Document findings and adjust roadmap if needed

**Success Criteria:**
- 100+ email signups from target users
- Positive feedback from league organizers
- Validated demand for automated result import
- Confirmed interest in team-based scoring
- Product-market fit assumptions validated

**Note:** No application is built in this phase; validation only. Pivot or proceed based on feedback.

---

## Phase 2: MVP (League-Focused)

**Goal:** Build a functional league management platform with automated result import (no fees, no companion app yet).

### Authentication & User Management
- [ ] Implement iRacing OAuth authentication flow
- [ ] Create user registration and profile system
- [ ] Build user role system (organizer, driver, spectator)
- [ ] Implement session management and token refresh
- [ ] Write BDD scenarios for authentication flows
- [ ] Achieve test coverage for auth domain

### League Management (Core Domain)
- [ ] Implement `CreateLeagueUseCase` (see [ARCHITECTURE.md](./ARCHITECTURE.md#application-layer))
- [ ] Create league CRUD operations (update, delete, archive)
- [ ] Build season setup (tracks, cars, rules configuration)
- [ ] Implement points system configuration (customizable)
- [ ] Create race event scheduling system
- [ ] Write BDD scenarios for league lifecycle
- [ ] Achieve >90% test coverage for `League` aggregate

### Driver & Team Registration
- [ ] Build driver registration system (join league/season)
- [ ] Implement team registration system (optional parallel scoring)
- [ ] Create team roster management (add/remove drivers)
- [ ] Build approval workflow for registrations
- [ ] Write BDD scenarios for registration flows
- [ ] Test team scoring calculation logic

### Automated Result Import
- [ ] Implement PostgreSQL schema (repositories pattern)
- [ ] Create server-side iRacing API integration
- [ ] Build automated result polling service
- [ ] Implement result parsing and validation
- [ ] Create `ImportResultUseCase` (see [ARCHITECTURE.md](./ARCHITECTURE.md#application-layer))
- [ ] Handle edge cases (DNS, penalties, disconnects)
- [ ] Write BDD scenarios for result import
- [ ] Test result import reliability and error handling
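
A testable shape for the import step is to inject the HTTP dependency, so parsing and validation can be exercised against fixtures. The payload shape below is hypothetical, since iRacing's real response format is not public:

```typescript
// Sketch of a result-import step with the transport injected, so the
// parsing logic is testable without the network. The payload shape
// (name/pos fields) is an assumption for illustration.
interface ImportedResult { driverName: string; finishPosition: number }

type FetchJson = (url: string) => Promise<unknown>;

async function importResults(sessionUrl: string, fetchJson: FetchJson): Promise<ImportedResult[]> {
  const raw = await fetchJson(sessionUrl);
  if (!Array.isArray(raw)) throw new Error("unexpected payload shape");
  return raw.map((row) => {
    const r = row as { name?: unknown; pos?: unknown };
    if (typeof r.name !== "string" || typeof r.pos !== "number") {
      throw new Error("malformed result row");
    }
    return { driverName: r.name, finishPosition: r.pos };
  });
}

// Fake transport standing in for the real API client.
const fakeFetch: FetchJson = async () => [
  { name: "Alice", pos: 1 },
  { name: "Bob", pos: 2 },
];
```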

### Standings & Results
- [ ] Generate driver standings (individual points calculation)
- [ ] Generate team standings (parallel scoring model)
- [ ] Build race result pages (lap times, incidents, finishing position)
- [ ] Implement historical standings view (by race)
- [ ] Create standings export functionality (CSV)
- [ ] Write BDD scenarios for standings calculation
- [ ] Test edge cases (ties, dropped races, penalties)
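
The "dropped races" and tie edge cases are worth pinning down early: only a driver's best N scores count toward the season total. A sketch with placeholder rules (the drop count and win-based tie-break are assumptions, not GridPilot's final scoring rules):

```typescript
// Sketch of per-driver season points with "dropped races":
// only the best `counted` scores contribute to the total.
function seasonTotal(raceScores: number[], counted: number): number {
  return [...raceScores]
    .sort((a, b) => b - a)     // best scores first
    .slice(0, counted)         // drop the rest
    .reduce((sum, s) => sum + s, 0);
}

// Tie-break example: equal totals resolved by most wins.
// A negative return means `a` ranks ahead of `b`.
function rank(a: { total: number; wins: number }, b: { total: number; wins: number }): number {
  return b.total - a.total || b.wins - a.wins;
}
```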

### League Identity & Admin
- [ ] Build league identity pages (public view)
- [ ] Create basic admin dashboard (organizer tools)
- [ ] Implement league settings management
- [ ] Build schedule and calendar view
- [ ] Create notifications system (race reminders)
- [ ] Write BDD scenarios for admin workflows

### Quality Assurance
- [ ] Run full test suite (unit, integration, E2E)
- [ ] Achieve >90% test coverage for domain/application layers
- [ ] Perform manual testing with real iRacing data
- [ ] Fix all critical bugs and edge cases
- [ ] Document known limitations

**Success Criteria:**
- Functional platform for league management
- Automated result import working reliably
- Driver and team standings calculated correctly
- No manual result uploads required
- Test coverage >90% for core domain
- Ready for closed beta testing

**Note:** No fees, no payouts, no companion app in this phase. Focus on core league management.

**Cross-References:**
- See [ARCHITECTURE.md](./ARCHITECTURE.md) for component boundaries
- See [TESTS.md](./TESTS.md) for BDD scenario examples

---

## Phase 3: Companion App (Automation Layer)

**Goal:** Build an Electron companion app to automate session creation and reduce organizer workload.

### Companion App Foundation
- [ ] Set up Electron application structure
- [ ] Implement Nut.js browser automation framework
- [ ] Create IPC bridge for backend communication
- [ ] Build auto-updater mechanism
- [ ] Set up application signing and packaging
- [ ] Document installation and setup process

### Session Creation Automation
- [ ] Build session creation assistance workflow
- [ ] Implement iRacing session page detection
- [ ] Create session ID extraction mechanism
- [ ] Build form auto-fill functionality (track, cars, rules)
- [ ] Implement session URL capture and sync
- [ ] Handle automation failure cases gracefully
- [ ] Write E2E tests for automation flows
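
Handling automation failures gracefully suggests wrapping each flaky step in a bounded retry that surfaces a typed failure instead of crashing the companion app. A minimal sketch, with illustrative names:

```typescript
// Sketch of bounded retries around a flaky automation step.
// Step names and error messages are illustrative.
type StepOutcome<T> = { ok: true; value: T } | { ok: false; reason: string };

async function withRetries<T>(
  step: () => Promise<T>,
  attempts = 3,
): Promise<StepOutcome<T>> {
  let lastError = "unknown";
  for (let i = 0; i < attempts; i++) {
    try {
      return { ok: true, value: await step() };
    } catch (err) {
      lastError = err instanceof Error ? err.message : String(err);
    }
  }
  return { ok: false, reason: `failed after ${attempts} attempts: ${lastError}` };
}

// Demo: a step that fails twice, then succeeds.
let calls = 0;
const flakyStep = async (): Promise<string> => {
  calls++;
  if (calls < 3) throw new Error("UI element not found");
  return "session-created";
};
```

Returning a typed outcome instead of throwing lets the UI report "automation failed, please create the session manually", which is the fallback behavior the reliability items above call for.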

### OAuth & Credential Handoff
- [ ] Implement OAuth handoff from companion to web
- [ ] Create secure credential storage (encrypted)
- [ ] Build IPC bridge for authentication state
- [ ] Handle token refresh in companion app
- [ ] Write E2E tests for OAuth handoff flow
- [ ] Test cross-process credential security

### Organizer Utilities
- [ ] Create session creation guidance (step-by-step)
- [ ] Build pre-race checklist functionality
- [ ] Implement session status monitoring
- [ ] Add quick access to league settings
- [ ] Create notifications for upcoming races

### Testing & Reliability
- [ ] Test session creation automation reliability (>95% success rate)
- [ ] Validate automation across different iRacing UI versions
- [ ] Handle iRacing website changes gracefully
- [ ] Create fallback mechanisms for automation failures
- [ ] Document troubleshooting guide

**Success Criteria:**
- Companion app reduces session creation time by 80%+
- Automation success rate >95%
- OAuth handoff secure and seamless
- Auto-updater working reliably
- Comprehensive E2E test coverage

**Note:** Companion app is optional but highly valuable for organizers. Focus on reliability over features.

---

## Phase 4: Branding & Public Pages

**Goal:** Enable professional league identity and public discoverability.

### Asset Management
- [ ] Implement S3-compatible asset storage (logos, images)
- [ ] Add league logo upload functionality
- [ ] Create image optimization pipeline
- [ ] Implement asset CDN integration
- [ ] Build asset management UI (upload, delete, replace)

### Custom Branding
- [ ] Create custom CSS/theming system (colors, fonts)
- [ ] Build theme preview functionality
- [ ] Implement logo placement customization
- [ ] Add custom header/footer options
- [ ] Create branding guidelines documentation

### Public League Directory
- [ ] Build public league directory (browse and discover)
- [ ] Implement search and filtering (game type, region, skill level)
- [ ] Create league detail pages (public view)
- [ ] Add league statistics (active seasons, drivers, races)
- [ ] Implement privacy settings (public/private leagues)

### External Integrations
- [ ] Implement optional custom domain support (CNAME)
- [ ] Create embeddable widgets (standings iframe, schedule)
- [ ] Add Discord/TeamSpeak integration links
- [ ] Implement YouTube/Twitch VOD linking (external only, no uploads)
- [ ] Build social sharing functionality (Twitter, Reddit)

### Public Result Pages
- [ ] Create public race result pages (shareable links)
- [ ] Build driver profile pages (career statistics)
- [ ] Implement team profile pages (roster, history)
- [ ] Add historical standings archive
- [ ] Create race replay link integration (if available)

### Testing & Documentation
- [ ] Write BDD scenarios for branding features
- [ ] Test public pages with various league configurations
- [ ] Validate custom domain setup process
- [ ] Create user guide for branding customization
- [ ] Test embeddable widgets in external sites

**Success Criteria:**
- Leagues have professional identity and branding
- Public directory drives league discovery
- Custom domains working reliably
- Embeddable widgets functional
- External integrations (Discord, Twitch) operational

**Note:** Branding features are optional but enhance league professionalism and discoverability.

---

## Phase 5: Public Launch

**Goal:** Launch GridPilot publicly with production-grade infrastructure and stability.

### Security & Compliance
- [ ] Perform security audit (OAuth, credentials, API security)
- [ ] Implement rate limiting and DDoS protection
- [ ] Add CSRF and XSS protection
- [ ] Conduct penetration testing
- [ ] Review GDPR compliance (user data handling)
- [ ] Implement data export functionality (user request)
- [ ] Create incident response plan
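
The rate-limiting item could start with a token bucket. This in-memory sketch is illustrative only; production would likely back it with Redis so limits hold across horizontally scaled instances:

```typescript
// Minimal in-memory token-bucket rate limiter. Capacity and refill
// rate here are placeholders, not production values.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private readonly capacity: number,
    private readonly refillPerSecond: number,
    now: number = Date.now(),
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  // Returns true if the request is allowed, false if rate-limited.
  allow(now: number = Date.now()): boolean {
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSeconds * this.refillPerSecond);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```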

### Performance & Scalability
- [ ] Load testing and performance optimization
- [ ] Implement database query optimization
- [ ] Add caching layers (Redis for sessions, API responses)
- [ ] Configure CDN for static assets
- [ ] Optimize Docker images for production
- [ ] Set up horizontal scaling strategy

### Production Infrastructure
- [ ] Set up production hosting (AWS/GCP/Azure)
- [ ] Configure production database (PostgreSQL with replication)
- [ ] Implement database backup strategy (automated, tested)
- [ ] Set up monitoring and alerting (logs, errors, uptime)
- [ ] Configure error tracking (Sentry or similar)
- [ ] Implement log aggregation and analysis
- [ ] Create disaster recovery plan

### Documentation & Support
- [ ] Write comprehensive user documentation
- [ ] Create organizer onboarding guide
- [ ] Build driver user guide
- [ ] Document API endpoints (if public)
- [ ] Create FAQ and troubleshooting guide
- [ ] Set up support system (email, Discord)

### Launch Preparation
- [ ] Prepare launch marketing materials
- [ ] Coordinate Reddit/Discord announcements
- [ ] Create launch video/demo
- [ ] Set up social media presence
- [ ] Prepare press kit (if applicable)
- [ ] Plan launch timeline and milestones

### Beta Onboarding
- [ ] Onboard first 10 beta leagues (closed beta)
- [ ] Collect feedback from beta users
- [ ] Fix critical bugs identified in beta
- [ ] Validate production stability under real load
- [ ] Document lessons learned

**Success Criteria:**
- Platform publicly available and stable
- Security audit passed with no critical issues
- Production infrastructure operational
- Monitoring and alerting functional
- User documentation complete
- First 10+ leagues successfully onboarded
- Platform stable under real-world load

**Note:** Public launch is a major milestone. Ensure stability and security before opening access.

---

## Phase 6: Monetization & Expansion

**Goal:** Generate revenue and expand platform capabilities.

### Monetization Features
- [ ] Implement league creation fee system
- [ ] Add optional driver entry fee per season
- [ ] Build revenue split mechanism (organizer/GridPilot)
- [ ] Create billing and invoicing system
- [ ] Implement payment processing (Stripe or similar)
- [ ] Add subscription management (for premium features)
- [ ] Create payout system for organizers
- [ ] Implement refund and dispute handling
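
The revenue split mechanism is easy to get wrong with floating point; a sketch that works in integer cents, with the platform share expressed in basis points. The values are placeholders, not GridPilot's actual pricing:

```typescript
// Sketch of an organizer/platform revenue split computed in integer
// cents to avoid floating-point drift. The share value used in any
// caller is a placeholder, not real pricing.
function splitFee(
  totalCents: number,
  platformShareBps: number, // basis points: 1000 = 10%
): { organizerCents: number; platformCents: number } {
  if (!Number.isInteger(totalCents) || totalCents < 0) {
    throw new Error("fee must be a non-negative integer amount of cents");
  }
  const platformCents = Math.floor((totalCents * platformShareBps) / 10_000);
  return { organizerCents: totalCents - platformCents, platformCents };
}
```

Flooring the platform share and giving the remainder to the organizer guarantees the two parts always sum exactly to the original fee.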

### Premium Features
- [ ] Create premium league features (advanced analytics)
- [ ] Build driver/team performance metrics over time
- [ ] Implement historical trend analysis
- [ ] Add advanced race strategy tools
- [ ] Create custom report generation
- [ ] Build league comparison and benchmarking

### Analytics & Insights
- [ ] Add analytics dashboards for leagues
- [ ] Implement driver consistency metrics
- [ ] Create incident rate analysis
- [ ] Build lap time comparison tools
- [ ] Add race pace analysis
- [ ] Implement predictive performance modeling

### Platform Expansion
- [ ] Explore expansion to other simulators (ACC, rFactor 2)
- [ ] Evaluate additional automation features
- [ ] Research multi-game league support
- [ ] Investigate community-requested features
- [ ] Assess partnership opportunities (teams, sponsors)

### Business Intelligence
- [ ] Implement revenue tracking and reporting
- [ ] Create user engagement metrics
- [ ] Build churn analysis and retention tools
- [ ] Add A/B testing framework
- [ ] Implement feature adoption tracking

**Success Criteria:**
- Revenue generation active and growing
- Premium features adopted by target segment
- Payment processing reliable and secure
- Organizer payouts working correctly
- Platform expansion feasibility validated
- Positive unit economics demonstrated

**Note:** Monetization should not compromise core user experience. Ensure value delivery justifies pricing.

---

## Dependencies & Sequencing

**Critical Path:**
1. Phase 0 must be completed before any development begins
2. Phase 1 validation should gate investment in Phase 2
3. Phase 2 MVP is required before Phase 3 (companion app depends on API)
4. Phase 4 can be developed in parallel with Phase 3 (independent features)
5. Phase 5 (public launch) requires Phases 2-4 to be complete and stable
6. Phase 6 (monetization) requires Phase 5 (public user base)

**Optional Paths:**
- Phase 3 (companion app) can be delayed if organizers tolerate manual session creation
- Phase 4 (branding) can be simplified for MVP launch
- Phase 6 features can be prioritized based on user demand

**Iteration Strategy:**
- Complete each phase fully before moving to the next
- Validate success criteria before proceeding
- Adjust roadmap based on feedback and learnings
- Maintain architectural integrity throughout

---

## Living Document

This roadmap is a living document and will be updated as the project evolves. Key updates will include:
- Completed todos (checked off)
- Lessons learned from each phase
- Adjusted priorities based on user feedback
- New features discovered during development
- Changes to success criteria or scope

**Maintenance:**
- Review and update quarterly (or after each phase)
- Archive completed phases for reference
- Document deviations from original plan
- Track velocity and estimate remaining work

**Cross-References:**
- [CONCEPT.md](./CONCEPT.md) - Product vision and user needs
- [ARCHITECTURE.md](./ARCHITECTURE.md) - Technical design and component boundaries
- [TESTS.md](./TESTS.md) - Testing strategy and BDD scenarios
- [TECH.md](./TECH.md) - Technology decisions and rationale

---

**Last Updated:** 2025-11-21
**Current Phase:** Phase 0 (Foundation)
**Overall Progress:** 0% (not started)

docs/TECH.md (274 lines, new file)

# Technology Stack

This document outlines GridPilot's technology choices and their rationale. For architectural patterns and layer organization, see [ARCHITECTURE.md](./ARCHITECTURE.md).

## 1. Language & Runtime

### TypeScript (Strict Mode)
- **Version:** Latest stable (5.x+)
- **Configuration:** `strict: true`, no `any` types permitted
- **Rationale:** Type safety catches errors at compile time, improves IDE support, and serves as living documentation. Strict mode eliminates common type-related bugs and enforces explicit handling of null/undefined.

### Node.js LTS
- **Version:** >=20.0.0
- **Rationale:** Long-term support ensures stability and security patches. Modern features (fetch API, native test runner) reduce dependency overhead. Version 20+ provides performance improvements critical for real-time race monitoring.

## 2. Backend Framework

### Current Status: Under Evaluation
Three candidates align with Clean Architecture principles:

**Option A: Express**
- **Pros:** Mature ecosystem, extensive middleware, proven at scale
- **Cons:** Slower than modern alternatives, callback-heavy patterns
- **Use Case:** Best if stability and middleware availability are priorities

**Option B: Fastify**
- **Pros:** High performance, schema-based validation, modern async/await
- **Cons:** Smaller ecosystem than Express
- **Use Case:** Best for performance-critical endpoints (real-time race data)

**Option C: Hono**
- **Pros:** Ultra-lightweight, edge-ready, excellent TypeScript support
- **Cons:** Newest option, smaller community
- **Use Case:** Best for modern deployment targets (Cloudflare Workers, edge functions)

**Requirements (All Options):**
- HTTP server with middleware support
- OpenAPI/Swagger compatibility
- JSON schema validation
- WebSocket support (for real-time features)

**Decision Timeline:** Deferred to implementation phase based on deployment target selection.

## 3. Frontend Framework

### Current Status: Under Evaluation
Two candidates meet accessibility and performance requirements:

**Option A: React 18+ with Vite**
- **Pros:** Maximum flexibility, fast HMR, lightweight bundle
- **Cons:** Manual SEO optimization, client-side routing complexity
- **Use Case:** Best for dashboard-heavy, interactive UI (primary use case)

**Option B: Next.js 14+**
- **Pros:** Built-in SSR/SSG, file-based routing, image optimization
- **Cons:** Larger bundle, more opinionated
- **Use Case:** Best if public league pages require SEO

**Shared Dependencies:**
- **State Management:**
  - TanStack Query: Server state (caching, optimistic updates, real-time sync)
  - Zustand: Client state (UI preferences, form state)
  - Rationale: Separation of concerns prevents state management spaghetti
- **UI Library:**
  - Tailwind CSS: Utility-first styling, design system consistency
  - shadcn/ui: Accessible components (WCAG 2.1 AA), copy-paste philosophy
  - Radix UI primitives: Unstyled, accessible foundations
  - Rationale: Rapid development without sacrificing accessibility or customization
- **Forms:** React Hook Form + Zod schemas (type-safe validation)
- **Routing:** React Router (Option A) or Next.js file-based routing (Option B)

**Decision Timeline:** Deferred to implementation phase. Leaning toward Option A (React + Vite) given dashboard-centric use case.

## 4. Database

### PostgreSQL 15+
- **Rationale:**
  - Complex relationships (leagues → seasons → races → drivers → teams) require relational integrity
  - JSONB columns handle flexible metadata (iRacing session results, custom league rules)
  - Full-text search for driver/team lookups
  - Battle-tested at scale, excellent TypeScript ORM support
- **Features Used:**
  - Foreign keys with cascading deletes
  - Check constraints (business rule validation)
  - Indexes on frequently queried fields (league_id, race_date)
  - Row-level security (multi-tenant data isolation)

### ORM: Under Evaluation

**Option A: Prisma**
- **Pros:** Type-safe queries, automatic migrations, excellent DX
- **Cons:** Additional build step, limited raw SQL for complex queries
- **Use Case:** Best for rapid development, type safety priority

**Option B: TypeORM**
- **Pros:** Decorators, Active Record pattern, SQL flexibility
- **Cons:** Older API design, less type-safe than Prisma
- **Use Case:** Best if complex SQL queries are frequent

**Decision Timeline:** Deferred to implementation phase. Both integrate cleanly with Clean Architecture (repository pattern at infrastructure layer).

## 5. Authentication

### iRacing OAuth Flow
- **Provider:** iRacing official OAuth 2.0
- **Rationale:** Official integration ensures compliance with iRacing Terms of Service. Users trust official credentials over third-party passwords.
- **Flow:** Authorization Code with PKCE (Proof Key for Code Exchange)

### Session Management
- **JWT:** Stateless tokens for API authentication
- **Storage:** HTTP-only cookies (XSS protection), encrypted at rest
- **Refresh Strategy:** Short-lived access tokens (15 min), long-lived refresh tokens (7 days)
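
The refresh strategy above can be sketched as follows. The TTL values come from this document; every name in the sketch (`issueTokens`, `refresh`, the opaque token format) is illustrative rather than GridPilot's actual auth code, and real tokens would be signed JWTs:

```typescript
type TokenPair = {
  accessToken: string;
  accessExpiresAt: number;
  refreshToken: string;
  refreshExpiresAt: number;
};

const ACCESS_TTL_MS = 15 * 60 * 1000;           // 15 minutes
const REFRESH_TTL_MS = 7 * 24 * 60 * 60 * 1000; // 7 days

// In production these would be signed JWTs; random opaque strings keep the sketch self-contained.
const newToken = (): string => Math.random().toString(36).slice(2);

const validRefreshTokens = new Map<string, number>(); // refresh token -> expiry timestamp

function issueTokens(now: number = Date.now()): TokenPair {
  const pair: TokenPair = {
    accessToken: newToken(),
    accessExpiresAt: now + ACCESS_TTL_MS,
    refreshToken: newToken(),
    refreshExpiresAt: now + REFRESH_TTL_MS,
  };
  validRefreshTokens.set(pair.refreshToken, pair.refreshExpiresAt);
  return pair;
}

// Rotation: a refresh token may be used once, before its expiry, to mint a new pair.
function refresh(refreshToken: string, now: number = Date.now()): TokenPair | null {
  const expiry = validRefreshTokens.get(refreshToken);
  if (expiry === undefined || expiry <= now) return null;
  validRefreshTokens.delete(refreshToken); // single-use: old token is invalidated
  return issueTokens(now);
}
```

Single-use rotation means a stolen refresh token stops working as soon as the legitimate client refreshes, which limits the damage window.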

### Implementation
- **Passport.js:** OAuth strategy handling, pluggable architecture
- **bcrypt:** Fallback password hashing (if local accounts added later)
- **Rationale:** Passport's strategy pattern aligns with Clean Architecture (adapter layer). Well-tested, extensive documentation.

## 6. Automation (Companion App)

### Electron
- **Version:** Latest stable (28.x+)
- **Rationale:** Cross-platform desktop framework (Windows, macOS, Linux). Native OS integration (system tray, notifications, auto-start).
- **Security:** Isolated renderer processes, context bridge for IPC

### Nut.js
- **Purpose:** Keyboard/mouse control for browser automation
- **Rationale:** Simulates human interaction with the iRacing web UI when no official API is available. Not gameplay automation, but an assistant for data-entry tasks.
- **Constraints:** Windows-only initially (iRacing's primary platform)

### Electron IPC
- **Main ↔ Renderer:** Type-safe message passing via preload scripts
- **Rationale:** Security (no direct Node.js access in renderer), type safety (Zod schemas for IPC contracts)

### Auto-Updates
- **electron-updater:** Handles signed updates, delta downloads
- **Rationale:** Critical for security patches, seamless user experience

**Why This Approach:**
- Assistant-style automation (user-initiated), not gameplay bots
- Complements web app (handles tasks iRacing API doesn't expose)
- Desktop integration (notifications for upcoming races, quick access via system tray)

## 7. Testing Tools

### Unit & Integration: Vitest
- **Rationale:** Native TypeScript support, fast execution (ESM, watch mode), compatible with Vite ecosystem
- **Coverage:** Built-in coverage reports (Istanbul), enforces 80% threshold

### E2E: Playwright
- **Rationale:** Reliable browser automation, cross-browser testing (Chromium, Firefox, WebKit), built-in wait strategies
- **Features:** Screenshot/video on failure, network mocking, parallel execution

### Test Containers (Docker)
- **Purpose:** Isolated test databases, Redis instances
- **Rationale:** Prevents test pollution, matches production environment, automatic cleanup
- **Services:** PostgreSQL, Redis, S3 (MinIO)

**Testing Strategy:**
- Unit tests: Core domain logic (pure functions, business rules)
- Integration tests: Repository implementations, API endpoints
- E2E tests: Critical user flows (create league, register for race, view results)

## 8. DevOps & Infrastructure

### Docker & Docker Compose
- **Purpose:** Local development, E2E testing, consistent environments
- **Services:**
  - PostgreSQL (primary database)
  - Redis (caching, rate limiting, job queue)
  - MinIO (S3-compatible storage for local dev)
- **Rationale:** Developers get a production-like environment instantly

### Redis
- **Use Cases:**
  - Caching (league standings, frequently accessed driver stats)
  - Rate limiting (API throttling, abuse prevention)
  - Bull queue (background jobs: race result processing, email notifications)
- **Rationale:** Proven performance, simple key-value model, pub/sub for real-time features
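
The rate-limiting use case can be illustrated with a fixed-window counter. In production the counter would live in Redis (`INCR` plus `EXPIRE`); the in-memory `Map` below is a stand-in that keeps the sketch self-contained, and all names are illustrative:

```typescript
// One window record per client: how many requests so far, and when the window resets.
const windows = new Map<string, { count: number; resetAt: number }>();

function allowRequest(
  clientId: string,
  limit: number,
  windowMs: number,
  now: number = Date.now()
): boolean {
  const w = windows.get(clientId);
  if (!w || now >= w.resetAt) {
    // New or expired window: first request always passes.
    windows.set(clientId, { count: 1, resetAt: now + windowMs });
    return true;
  }
  if (w.count >= limit) return false; // over budget for this window
  w.count += 1;
  return true;
}
```

The injectable `now` parameter makes the limiter deterministic in tests without sleeping.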

### Object Storage
- **Production:** AWS S3 (logos, exported reports)
- **Development:** MinIO (S3-compatible, Docker-based)
- **Rationale:** Cost-effective, scalable, CDN integration

### Bull Queue (Redis-backed)
- **Jobs:** Process race results, send notifications, generate reports
- **Rationale:** Offloads heavy tasks from HTTP requests, retry logic, job prioritization
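
The retry behaviour Bull provides can be sketched in-process. This is not Bull's API, just a minimal illustration of re-enqueueing a failed job until `maxAttempts` is exhausted:

```typescript
type Job<T> = { data: T; attempts: number };

// Drains a queue of jobs, retrying each failed job up to maxAttempts times
// before parking it in the failed list (Bull's "failed" state).
async function processWithRetry<T>(
  jobs: T[],
  handler: (data: T) => Promise<void>,
  maxAttempts = 3
): Promise<{ done: T[]; failed: T[] }> {
  const queue: Job<T>[] = jobs.map((data) => ({ data, attempts: 0 }));
  const done: T[] = [];
  const failed: T[] = [];
  while (queue.length > 0) {
    const job = queue.shift()!;
    try {
      await handler(job.data);
      done.push(job.data);
    } catch {
      job.attempts += 1;
      if (job.attempts >= maxAttempts) failed.push(job.data);
      else queue.push(job); // re-enqueue for another attempt
    }
  }
  return { done, failed };
}
```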

### CI/CD: Placeholder
- **Options:** GitHub Actions, GitLab CI
- **Rationale:** TBD based on hosting choice (GitHub vs. self-hosted GitLab)

## 9. Monorepo Tooling

### npm Workspaces
- **Rationale:** Built-in, zero configuration, dependency hoisting
- **Structure:** `/src/apps/*`, `/src/packages/*`, `/tests/*`

### Build Orchestration: Under Evaluation

**Option A: Turborepo**
- **Pros:** Fast incremental builds, remote caching, simple config
- **Cons:** Vercel-owned (vendor lock-in risk)

**Option B: Nx**
- **Pros:** Advanced dependency graph, affected commands, plugins
- **Cons:** Steeper learning curve, more complex config

**Decision Timeline:** Start with npm workspaces alone. Evaluate Turborepo/Nx if build times become a bottleneck (unlikely at current scale).

## 10. Development Tools

### Code Quality
- **ESLint:** Enforce coding standards, catch common mistakes
- **Prettier:** Consistent formatting (no debates on tabs vs. spaces)
- **Rationale:** Automated code reviews reduce friction and onboarding time

### Pre-Commit Hooks
- **Husky:** Git hook management
- **lint-staged:** Run linters only on changed files
- **Rationale:** Fast feedback loop, prevents broken commits reaching CI

### TypeScript Configuration
- **Strict Mode:** All strict flags enabled
- **No Implicit Any:** Forces explicit types
- **Rationale:** Type safety as a first-class citizen, not an opt-in feature

### Runtime Validation
- **Zod:** Schema definition, runtime validation, type inference
- **Use Cases:**
  - API request/response validation
  - Environment variable parsing
  - Form validation (shared between frontend/backend)
- **Rationale:** Single source of truth for data shapes, generates TypeScript types automatically
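
Environment variable parsing is a good example of the fail-fast validation Zod enables. With Zod this would be a declarative `z.object(...)` schema parsed against `process.env`; the dependency-free sketch below shows the same pattern, with illustrative variable names:

```typescript
type Env = { DATABASE_URL: string; PORT: number };

// Validate and narrow raw environment variables at startup, so bad config
// fails the process immediately instead of surfacing mid-request.
function parseEnv(raw: Record<string, string | undefined>): Env {
  const { DATABASE_URL, PORT } = raw;
  if (!DATABASE_URL) throw new Error('DATABASE_URL is required');
  const port = Number(PORT ?? '3000'); // default is illustrative
  if (!Number.isInteger(port) || port <= 0) throw new Error(`Invalid PORT: ${PORT}`);
  return { DATABASE_URL, PORT: port };
}
```

The payoff is the same as with Zod: everything downstream consumes a fully typed `Env`, never `string | undefined`.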

---

## Decision Status Summary

**Finalized:**
- Language: TypeScript (strict mode)
- Runtime: Node.js 20+
- Database: PostgreSQL 15+
- Auth: iRacing OAuth + JWT
- Companion: Electron + Nut.js
- Testing: Vitest + Playwright + Test Containers
- Infra: Docker + Redis + S3/MinIO
- Monorepo: npm workspaces
- Dev Tools: ESLint + Prettier + Husky + Zod

**Under Evaluation (Deferred to Implementation):**
- Backend framework: Express vs. Fastify vs. Hono
- Frontend framework: React + Vite vs. Next.js
- ORM: Prisma vs. TypeORM
- Build orchestration: Turborepo vs. Nx (if needed)
- CI/CD: GitHub Actions vs. GitLab CI

**Deferred Decisions Rationale:**
- Backend/frontend frameworks: Choice depends on deployment target (cloud vs. edge vs. self-hosted)
- ORM: Both options integrate cleanly with Clean Architecture; decision based on team preference during implementation
- Build tools: Optimize when a bottleneck is proven (YAGNI principle)

---

## Cross-References

- **Architecture Patterns:** See [ARCHITECTURE.md](./ARCHITECTURE.md) for how these technologies map to Clean Architecture layers
- **Project Overview:** See [CONCEPT.md](./CONCEPT.md) for business context driving technology choices
- **Setup Instructions:** See [README.md](../README.md) for installation and getting started

---

*Last Updated: 2025-11-21*

docs/TESTS.md (new file, 716 lines)

# Testing Strategy

## Overview

GridPilot employs a comprehensive BDD (Behavior-Driven Development) testing strategy across three distinct layers: **Unit**, **Integration**, and **End-to-End (E2E)**. Each layer validates different aspects of the system while maintaining a consistent Given/When/Then approach that emphasizes behavior over implementation.

This document provides practical guidance on testing philosophy, test organization, tooling, and execution patterns for GridPilot.

---

## BDD Philosophy

### Why BDD for GridPilot?

GridPilot manages complex business rules around league management, team registration, event scheduling, result processing, and standings calculation. These rules must be:

- **Understandable** by non-technical stakeholders (league admins, race organizers)
- **Verifiable** through automated tests that mirror real-world scenarios
- **Maintainable** as business requirements evolve

BDD provides a shared vocabulary (Given/When/Then) that bridges the gap between domain experts and developers, ensuring tests document expected behavior rather than technical implementation details.

### Given/When/Then Format

All tests—regardless of layer—follow this structure:

```typescript
// Given: Establish initial state/context
// When: Perform the action being tested
// Then: Assert the expected outcome
```

**Example (Unit Test):**
```typescript
describe('League Domain Entity', () => {
  it('should add a team when team limit not reached', () => {
    // Given
    const league = new League('Summer Series', { maxTeams: 10 });
    const team = new Team('Racing Legends');

    // When
    const result = league.addTeam(team);

    // Then
    expect(result.isSuccess()).toBe(true);
    expect(league.teams).toContain(team);
  });
});
```

This pattern applies equally to integration tests (with real database operations) and E2E tests (with full UI workflows).

---

## Test Types & Organization

### Unit Tests (`/tests/unit`)

**Scope:** Domain entities, value objects, and application use cases with mocked ports (repositories, external services).

**Tooling:** Vitest (fast, TypeScript-native, ESM support)

**Execution:** Parallel, target <1 second total runtime

**Purpose:**
- Validate business logic in isolation
- Ensure domain invariants hold (e.g., team limits, scoring rules)
- Test use case orchestration with mocked dependencies

**Examples from Architecture:**

1. **Domain Entity Test:**
   ```gherkin
   # League.addTeam() validation
   Given a League with maxTeams=10 and 9 current teams
   When addTeam() is called with a valid Team
   Then the team is added successfully

   Given a League with maxTeams=10 and 10 current teams
   When addTeam() is called
   Then a DomainError is returned with "Team limit reached"
   ```

2. **Use Case Test:**
   ```gherkin
   # GenerateStandingsUseCase
   Given a League with 5 teams and completed races
   When execute() is called
   Then LeagueRepository.findById() is invoked
   And ScoringRule.calculatePoints() is called for each team
   And sorted standings are returned
   ```

3. **Scoring Rule Test:**
   ```gherkin
   # ScoringRule.calculatePoints()
   Given an F1-style scoring rule (25-18-15-12-10-8-6-4-2-1)
   When calculatePoints(position=1) is called
   Then 25 points are returned

   Given the same rule
   When calculatePoints(position=11) is called
   Then 0 points are returned
   ```
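
The scoring-rule scenario above maps onto a very small implementation. A sketch, assuming only the points table stated in the scenario (the function name matches the scenario; the rest is illustrative):

```typescript
// F1-style points table: positions 1-10 score, everything below scores 0.
const F1_POINTS = [25, 18, 15, 12, 10, 8, 6, 4, 2, 1];

function calculatePoints(position: number): number {
  if (!Number.isInteger(position) || position < 1) {
    throw new Error(`Invalid finishing position: ${position}`);
  }
  return F1_POINTS[position - 1] ?? 0; // 11th and below score nothing
}
```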

**Key Practices:**
- Mock only at architecture boundaries (ports like `ILeagueRepository`)
- Never mock domain entities or value objects
- Keep tests fast (<10ms per test)
- Use in-memory test doubles for simple cases
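
An in-memory test double for a repository port might look like the following. The `ILeagueRepository` shape here is assumed for illustration; the real port is defined in the application layer:

```typescript
interface LeagueRecord { id: string; name: string }

interface ILeagueRepository {
  save(league: LeagueRecord): Promise<void>;
  findById(id: string): Promise<LeagueRecord | null>;
}

// A Map-backed double: same contract as the PostgreSQL implementation, no I/O.
class InMemoryLeagueRepository implements ILeagueRepository {
  private readonly byId = new Map<string, LeagueRecord>();

  async save(league: LeagueRecord): Promise<void> {
    this.byId.set(league.id, { ...league }); // copy to avoid shared mutable state
  }

  async findById(id: string): Promise<LeagueRecord | null> {
    return this.byId.get(id) ?? null;
  }
}
```

Because the double implements the same port, a use case under test cannot tell it apart from the real adapter.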

---

### Integration Tests (`/tests/integration`)

**Scope:** Repository implementations, infrastructure adapters (PostgreSQL, Redis, OAuth clients, result importers).

**Tooling:** Vitest + Testcontainers (spins up real PostgreSQL/Redis in Docker)

**Execution:** Sequential, ~10 seconds per suite

**Purpose:**
- Validate that infrastructure adapters correctly implement port interfaces
- Test database queries, migrations, and transaction handling
- Ensure external API clients handle authentication and error scenarios

**Examples from Architecture:**

1. **Repository Test:**
   ```gherkin
   # PostgresLeagueRepository
   Given a PostgreSQL container is running
   When save() is called with a League entity
   Then the league is persisted to the database
   And findById() returns the same league with correct attributes
   ```

2. **OAuth Client Test:**
   ```gherkin
   # IRacingOAuthClient
   Given valid iRacing credentials
   When authenticate() is called
   Then an access token is returned
   And the token is cached in Redis for 1 hour

   Given expired credentials
   When authenticate() is called
   Then an AuthenticationError is thrown
   ```

3. **Result Importer Test:**
   ```gherkin
   # EventResultImporter
   Given an Event exists in the database
   When importResults() is called with iRacing session data
   Then Driver entities are created/updated
   And EventResult entities are persisted with correct positions/times
   And the Event status is updated to 'COMPLETED'
   ```

**Key Practices:**
- Use Testcontainers to spin up real databases (not mocks)
- Clean database state between tests (truncate tables or use transactions)
- Seed minimal test data via SQL fixtures
- Test both success and failure paths (network errors, constraint violations)

---

### End-to-End Tests (`/tests/e2e`)

**Scope:** Full user workflows spanning web-client → web-api → database.

**Tooling:** Playwright + Docker Compose (orchestrates all services)

**Execution:** ~2 minutes per scenario

**Purpose:**
- Validate complete user journeys from UI interactions to database changes
- Ensure services integrate correctly in a production-like environment
- Catch regressions in multi-service workflows

**Examples from Architecture:**

1. **League Creation Workflow:**
   ```gherkin
   Given an authenticated league admin
   When they navigate to "Create League"
   And fill in league name, scoring system, and team limit
   And submit the form
   Then the league appears in the admin dashboard
   And the database contains the new league record
   And the league is visible to other users
   ```

2. **Team Registration Workflow:**
   ```gherkin
   Given a published league with 5/10 team slots filled
   When a team captain navigates to the league page
   And clicks "Join League"
   And fills in team name and roster
   And submits the form
   Then the team appears in the league's team list
   And the team count updates to 6/10
   And the captain receives a confirmation email
   ```

3. **Automated Result Import:**
   ```gherkin
   Given a League with an upcoming Event
   And iRacing OAuth credentials are configured
   When the scheduled import job runs
   Then the job authenticates with iRacing
   And fetches session results for the Event
   And creates EventResult records in the database
   And updates the Event status to 'COMPLETED'
   And triggers standings recalculation
   ```

4. **Companion App Login Automation:**
   ```gherkin
   Given a League Admin enables companion app login automation
   When the companion app is launched
   Then the app polls for a generated login token from web-api
   And auto-fills iRacing credentials from the admin's profile
   And logs into iRacing automatically
   And confirms successful login to web-api
   ```

**Key Practices:**
- Use Playwright's Page Object pattern for reusable UI interactions
- Test both happy paths and error scenarios (validation errors, network failures)
- Clean database state between scenarios (via API or direct SQL)
- Run E2E tests in CI before merging to main branch

---

## Test Data Strategy

### Fixtures & Seeding

**Unit Tests:**
- Use in-memory domain objects (no database)
- Factory functions for common test entities:
  ```typescript
  function createTestLeague(overrides?: Partial<LeagueProps>): League {
    return new League('Test League', { maxTeams: 10, ...overrides });
  }
  ```

**Integration Tests:**
- Use Testcontainers to spin up fresh PostgreSQL instances
- Seed minimal test data via SQL scripts:
  ```sql
  -- tests/integration/fixtures/leagues.sql
  INSERT INTO leagues (id, name, max_teams) VALUES
    ('league-1', 'Test League', 10);
  ```
- Clean state between tests (truncate tables or rollback transactions)

**E2E Tests:**
- Pre-seed database via migrations before Docker Compose starts
- Use API endpoints to create test data when possible (validates API behavior)
- Database cleanup between scenarios:
  ```typescript
  // tests/e2e/support/database.ts
  export async function cleanDatabase() {
    await sql`TRUNCATE TABLE event_results CASCADE`;
    await sql`TRUNCATE TABLE events CASCADE`;
    await sql`TRUNCATE TABLE teams CASCADE`;
    await sql`TRUNCATE TABLE leagues CASCADE`;
  }
  ```

---

## Docker E2E Setup

### Architecture

E2E tests run against a full stack orchestrated by `docker-compose.test.yml`:

```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: gridpilot_test
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test

  redis:
    image: redis:7-alpine

  web-api:
    build: ./src/apps/web-api
    depends_on:
      - postgres
      - redis
    environment:
      DATABASE_URL: postgres://test:test@postgres:5432/gridpilot_test
      REDIS_URL: redis://redis:6379
    ports:
      - "3000:3000"
```

### Execution Flow

1. **Start Services:** `docker compose -f docker-compose.test.yml up -d`
2. **Run Migrations:** `npm run migrate:test` (seeds database)
3. **Execute Tests:** Playwright targets `http://localhost:3000`
4. **Teardown:** `docker compose -f docker-compose.test.yml down -v`

### Environment Setup

```typescript
// tests/e2e/setup.ts
export async function globalSetup() {
  // Wait for web-api to be ready
  await waitForService('http://localhost:3000/health');

  // Run database migrations
  await runMigrations();
}

export async function globalTeardown() {
  // Stop Docker Compose services
  await exec('docker compose -f docker-compose.test.yml down -v');
}
```
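
The `waitForService` helper referenced above is not defined in this document; a minimal sketch could poll the health endpoint until it responds, with an injectable fetch function so the helper itself is testable (the timeout and interval defaults are illustrative):

```typescript
// Poll a URL until it returns HTTP 200 or the timeout elapses.
// fetchFn defaults to the global fetch (Node 20+); tests can pass a stub.
async function waitForService(
  url: string,
  timeoutMs = 30_000,
  intervalMs = 500,
  fetchFn: (url: string) => Promise<{ ok: boolean }> = fetch
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    try {
      const res = await fetchFn(url);
      if (res.ok) return; // service is up
    } catch {
      // connection refused while the container is still starting; keep polling
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Service at ${url} not ready after ${timeoutMs}ms`);
}
```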

---

## BDD Scenario Examples

### 1. League Creation (Success + Failure)

```gherkin
Scenario: Admin creates a new league
  Given an authenticated admin user
  When they submit a league form with:
    | name          | Summer Series 2024 |
    | maxTeams      | 12                 |
    | scoringSystem | F1                 |
  Then the league is created successfully
  And the admin is redirected to the league dashboard
  And the database contains the new league

Scenario: League creation fails with duplicate name
  Given a league named "Summer Series 2024" already exists
  When an admin submits a league form with name "Summer Series 2024"
  Then the form displays error "League name already exists"
  And no new league is created in the database
```

### 2. Team Registration (Success + Failure)

```gherkin
Scenario: Team registers for a league
  Given a published league with 5/10 team slots
  When a team captain submits registration with:
    | teamName | Racing Legends    |
    | drivers  | Alice, Bob, Carol |
  Then the team is added to the league
  And the team count updates to 6/10
  And the captain receives a confirmation email

Scenario: Registration fails when league is full
  Given a published league with 10/10 team slots
  When a team captain attempts to register
  Then the form displays error "League is full"
  And the team is not added to the league
```

### 3. Automated Result Import (Success + Failure)

```gherkin
Scenario: Import results from iRacing
  Given a League with an Event scheduled for today
  And iRacing OAuth credentials are configured
  When the scheduled import job runs
  Then the job authenticates with iRacing API
  And fetches session results for the Event
  And creates EventResult records for each driver
  And updates the Event status to 'COMPLETED'
  And triggers standings recalculation

Scenario: Import fails with invalid credentials
  Given an Event with expired iRacing credentials
  When the import job runs
  Then an AuthenticationError is logged
  And the Event status remains 'SCHEDULED'
  And an admin notification is sent
```

### 4. Parallel Scoring Calculation

```gherkin
Scenario: Calculate standings for multiple leagues concurrently
  Given 5 active leagues with completed events
  When the standings recalculation job runs
  Then each league's standings are calculated in parallel
  And the process completes in <5 seconds
  And all standings are persisted correctly
  And no race conditions occur (validated via database integrity checks)
```
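
The parallel recalculation this scenario describes boils down to independent per-league computations fanned out with `Promise.all`. A sketch with illustrative types (the real `calculateStandings` would apply each league's scoring rules):

```typescript
type Standing = { team: string; points: number };
type LeagueData = { id: string; results: Standing[] };

// Placeholder for the real per-league scoring logic: here we just sort by points.
async function calculateStandings(league: LeagueData): Promise<Standing[]> {
  return [...league.results].sort((a, b) => b.points - a.points);
}

// Each league is independent, so all calculations run concurrently.
async function recalculateAll(leagues: LeagueData[]): Promise<Map<string, Standing[]>> {
  const entries = await Promise.all(
    leagues.map(async (l) => [l.id, await calculateStandings(l)] as const)
  );
  return new Map(entries);
}
```

Because no two leagues share mutable state, the concurrency is race-free by construction, which is what the scenario's integrity check is asserting.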

### 5. Companion App Login Automation

```gherkin
Scenario: Companion app logs into iRacing automatically
  Given a League Admin enables companion app login automation
  And provides their iRacing credentials
  When the companion app is launched
  Then the app polls web-api for a login token
  And retrieves the admin's encrypted credentials
  And auto-fills the iRacing login form
  And submits the login request
  And confirms successful login to web-api
  And caches the session token for 24 hours
```

---

## Coverage Goals

### Target Coverage Levels

- **Domain/Application Layers:** >90% (critical business logic)
- **Infrastructure Layer:** >80% (repository implementations, adapters)
- **Presentation Layer:** Smoke tests (basic rendering, no exhaustive UI coverage)

### Running Coverage Reports

```bash
# Unit + Integration coverage
npm run test:coverage

# View HTML report
open coverage/index.html

# E2E coverage (via Istanbul)
npm run test:e2e:coverage
```
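
These thresholds can be enforced in the Vitest configuration. A sketch, assuming Vitest 1.x coverage options (the 80% floor comes from TECH.md; per-directory overrides could raise domain/application coverage toward the >90% target):

```typescript
// vitest.config.ts (sketch)
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      provider: 'istanbul',        // matches the Istanbul reports mentioned in TECH.md
      reporter: ['text', 'html'],
      thresholds: {
        // Global floor; the build fails if coverage drops below these.
        lines: 80,
        functions: 80,
        branches: 80,
        statements: 80,
      },
    },
  },
});
```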

### What to Prioritize

1. **Domain Entities:** Invariants, validation rules, state transitions
2. **Use Cases:** Orchestration logic, error handling, port interactions
3. **Repositories:** CRUD operations, query builders, transaction handling
4. **Adapters:** External API clients, OAuth flows, result importers

**What NOT to prioritize:**
- Trivial getters/setters
- Framework boilerplate (Express route handlers)
- UI styling (covered by visual regression tests if needed)

---

## Continuous Testing

### Watch Mode (Development)

```bash
# Auto-run unit tests on file changes
npm run test:watch

# Auto-run integration tests (slower, but useful for DB work)
npm run test:integration:watch
```

### CI/CD Pipeline

```mermaid
graph LR
    A[Code Push] --> B[Unit Tests]
    B --> C[Integration Tests]
    C --> D[E2E Tests]
    D --> E[Deploy to Staging]
```

**Execution Order:**
1. **Unit Tests** (parallel, <1 second) — fail fast on logic errors
2. **Integration Tests** (sequential, ~10 seconds) — catch infrastructure issues
3. **E2E Tests** (sequential, ~2 minutes) — validate full workflows
4. **Deploy** — only if all tests pass

**Parallelization:**
- Unit tests run in parallel (Vitest default)
- Integration tests run sequentially (avoid database conflicts)
- E2E tests run sequentially (UI interactions are stateful)

---

## Testing Best Practices

### 1. Test Behavior, Not Implementation

**❌ Bad (overfitted to implementation):**
```typescript
it('should call repository.save() once', () => {
  const repo = mock<ILeagueRepository>();
  const useCase = new CreateLeagueUseCase(repo);
  useCase.execute({ name: 'Test' });
  expect(repo.save).toHaveBeenCalledTimes(1);
});
```

**✅ Good (tests observable behavior):**
```typescript
it('should persist the league to the repository', async () => {
  const repo = new InMemoryLeagueRepository();
  const useCase = new CreateLeagueUseCase(repo);

  const result = await useCase.execute({ name: 'Test' });

  expect(result.isSuccess()).toBe(true);
  const league = await repo.findById(result.value.id);
  expect(league?.name).toBe('Test');
});
```

### 2. Mock Only at Architecture Boundaries

**Ports (interfaces)** should be mocked in use case tests:
```typescript
const mockRepo = mock<ILeagueRepository>({
  save: vi.fn().mockResolvedValue(undefined),
});
```

**Domain entities** should NEVER be mocked:
```typescript
// ❌ Don't do this
const mockLeague = mock<League>();

// ✅ Do this
const league = new League('Test League', { maxTeams: 10 });
```

### 3. Keep Tests Readable and Maintainable

**Arrange-Act-Assert Pattern:**
```typescript
it('should calculate standings correctly', () => {
  // Arrange: Set up test data
  const league = createTestLeague();
  const teams = [createTestTeam('Team A'), createTestTeam('Team B')];
  const results = [createTestResult(teams[0], { position: 1 })];

  // Act: Perform the action
  const standings = league.calculateStandings(results);

  // Assert: Verify the outcome
  expect(standings[0].team).toBe(teams[0]);
  expect(standings[0].points).toBe(25);
});
```
### 4. Test Error Scenarios
|
||||
|
||||
Don't just test the happy path:
|
||||
```typescript
|
||||
describe('League.addTeam()', () => {
|
||||
it('should add team successfully', () => { /* ... */ });
|
||||
|
||||
it('should fail when team limit reached', () => {
|
||||
const league = createTestLeague({ maxTeams: 1 });
|
||||
league.addTeam(createTestTeam('Team A'));
|
||||
|
||||
const result = league.addTeam(createTestTeam('Team B'));
|
||||
|
||||
expect(result.isFailure()).toBe(true);
|
||||
expect(result.error.message).toBe('Team limit reached');
|
||||
});
|
||||
|
||||
it('should fail when adding duplicate team', () => { /* ... */ });
|
||||
});
|
||||
```
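
The examples in this section assert against `isSuccess()`, `isFailure()`, `result.value`, and `result.error` without showing the wrapper behind them. A minimal `Result` type consistent with those assertions might look like the sketch below — an illustrative assumption, not GridPilot's actual implementation:

```typescript
// Illustrative sketch only: a minimal Result wrapper matching the assertions
// used in these examples. The real type in the codebase may differ.
export class Result<T, E = Error> {
  private constructor(
    private readonly ok: boolean,
    public readonly value: T | undefined,
    public readonly error: E | undefined,
  ) {}

  static success<T, E = Error>(value: T): Result<T, E> {
    return new Result<T, E>(true, value, undefined);
  }

  static failure<T, E = Error>(error: E): Result<T, E> {
    return new Result<T, E>(false, undefined, error);
  }

  isSuccess(): boolean {
    return this.ok;
  }

  isFailure(): boolean {
    return !this.ok;
  }
}
```

A wrapper like this lets use cases return failures as values instead of throwing, which is what makes assertions such as `expect(result.isFailure()).toBe(true)` possible.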

---

## Common Patterns

### Setting Up Test Fixtures

**Factory Functions:**

```typescript
// tests/support/factories.ts
export function createTestLeague(overrides?: Partial<LeagueProps>): League {
  return new League('Test League', {
    maxTeams: 10,
    scoringSystem: 'F1',
    ...overrides,
  });
}

export function createTestTeam(name: string): Team {
  return new Team(name, { drivers: ['Driver 1', 'Driver 2'] });
}
```
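
The `Partial<...>` overrides idiom above works for any props object: spread the defaults first and the overrides last, so later entries win and each test spells out only the property it cares about. A self-contained sketch with simplified stand-in types (not GridPilot's real `LeagueProps`):

```typescript
// Simplified stand-in type to show the overrides pattern in isolation.
interface LeagueProps {
  maxTeams: number;
  scoringSystem: string;
}

function makeLeagueProps(overrides?: Partial<LeagueProps>): LeagueProps {
  // Defaults first, overrides last: later spread entries win.
  return {
    maxTeams: 10,
    scoringSystem: 'F1',
    ...overrides,
  };
}

// Each test states only the property under test; the rest stay at defaults.
const tiny = makeLeagueProps({ maxTeams: 1 });
// tiny.maxTeams === 1, tiny.scoringSystem === 'F1'
```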

### Mocking Ports in Use Case Tests

```typescript
// tests/unit/application/CreateLeagueUseCase.test.ts
describe('CreateLeagueUseCase', () => {
  let mockRepo: jest.Mocked<ILeagueRepository>;
  let useCase: CreateLeagueUseCase;

  beforeEach(() => {
    mockRepo = {
      save: jest.fn().mockResolvedValue(undefined),
      findById: jest.fn().mockResolvedValue(null),
      findByName: jest.fn().mockResolvedValue(null),
    };
    useCase = new CreateLeagueUseCase(mockRepo);
  });

  it('should create a league when name is unique', async () => {
    const result = await useCase.execute({ name: 'New League' });

    expect(result.isSuccess()).toBe(true);
    expect(mockRepo.save).toHaveBeenCalledWith(
      expect.objectContaining({ name: 'New League' })
    );
  });
});
```
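
The test above mocks the port; the earlier "good" example used an `InMemoryLeagueRepository` fake instead. As a sketch, such a fake can implement the same three methods the mock stubs out. The port shape is inferred from the mocked methods and the `LeagueRecord` type is a simplified placeholder, not the real domain entity:

```typescript
// Simplified placeholder for the domain entity; the real League class is richer.
interface LeagueRecord {
  id: string;
  name: string;
}

// Port shape inferred from the mocks above (an assumption, not the
// definitive ILeagueRepository definition).
interface ILeagueRepository {
  save(league: LeagueRecord): Promise<void>;
  findById(id: string): Promise<LeagueRecord | null>;
  findByName(name: string): Promise<LeagueRecord | null>;
}

// In-memory fake: real lookup logic, no database, suitable for fast unit tests.
class InMemoryLeagueRepository implements ILeagueRepository {
  private readonly byId = new Map<string, LeagueRecord>();

  async save(league: LeagueRecord): Promise<void> {
    this.byId.set(league.id, league);
  }

  async findById(id: string): Promise<LeagueRecord | null> {
    return this.byId.get(id) ?? null;
  }

  async findByName(name: string): Promise<LeagueRecord | null> {
    for (const league of this.byId.values()) {
      if (league.name === name) return league;
    }
    return null;
  }
}
```

A fake like this lets state-based tests (did the league actually end up in the repository?) coexist with the interaction-based mock tests shown above.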

### Database Cleanup Strategies

**Integration Tests:**

```typescript
// tests/integration/setup.ts
import { sql } from './database';

export async function cleanDatabase() {
  await sql`TRUNCATE TABLE event_results CASCADE`;
  await sql`TRUNCATE TABLE events CASCADE`;
  await sql`TRUNCATE TABLE teams CASCADE`;
  await sql`TRUNCATE TABLE leagues CASCADE`;
}

beforeEach(async () => {
  await cleanDatabase();
});
```

**E2E Tests:**

```typescript
// tests/e2e/support/hooks.ts
import { test as base } from '@playwright/test';

export const test = base.extend({
  page: async ({ page }, use) => {
    // Clean database before each test
    await fetch('http://localhost:3000/test/cleanup', { method: 'POST' });
    await use(page);
  },
});
```

### Playwright Page Object Pattern

```typescript
// tests/e2e/pages/LeaguePage.ts
import type { Page } from '@playwright/test';

export class LeaguePage {
  constructor(private page: Page) {}

  async navigateToCreateLeague() {
    await this.page.goto('/leagues/create');
  }

  async fillLeagueForm(data: { name: string; maxTeams: number }) {
    await this.page.fill('[name="name"]', data.name);
    await this.page.fill('[name="maxTeams"]', data.maxTeams.toString());
  }

  async submitForm() {
    await this.page.click('button[type="submit"]');
  }

  async getSuccessMessage() {
    return this.page.textContent('.success-message');
  }
}

// Usage in a test
test('should create league', async ({ page }) => {
  const leaguePage = new LeaguePage(page);
  await leaguePage.navigateToCreateLeague();
  await leaguePage.fillLeagueForm({ name: 'Test', maxTeams: 10 });
  await leaguePage.submitForm();

  expect(await leaguePage.getSuccessMessage()).toBe('League created');
});
```

---

## Cross-References

- **[`ARCHITECTURE.md`](./ARCHITECTURE.md)** — Layer boundaries, port definitions, and dependency rules that guide test structure
- **[`TECH.md`](./TECH.md)** — Detailed tooling specifications (Vitest, Playwright, Testcontainers configuration)
- **[`package.json`](../package.json)** — Test scripts and commands (`test:unit`, `test:integration`, `test:e2e`, `test:coverage`)


---

## Summary

GridPilot's testing strategy ensures:

- **Business logic is correct** (unit tests for domain/application layers)
- **Infrastructure works reliably** (integration tests for repositories/adapters)
- **User workflows function end-to-end** (E2E tests for the full stack)

By following BDD principles and maintaining clear test organization, the team can confidently evolve GridPilot while preserving correctness and stability.