# GridPilot Implementation Roadmap

## Overview

This roadmap provides a phased implementation plan for GridPilot, an automated league management platform for iRacing. Each phase builds upon the previous one, with clear success criteria and actionable todos.

**Purpose:**

- Guide iterative development from technical validation to public launch and monetization
- Track progress through checkable todos
- Validate assumptions before investing in full implementation
- Ensure architectural integrity throughout each phase

**How to Use:**

- Check off todos as they are completed (replace `[ ]` with `[x]`)
- Review success criteria before moving to the next phase
- Refer to [ARCHITECTURE.md](./ARCHITECTURE.md) for component boundaries and patterns
- Consult [TESTS.md](./TESTS.md) for testing approach and BDD scenario structure
- See [CONCEPT.md](./CONCEPT.md) for product vision and user needs

**Relationship to MVP:**

- **Phase 0-1:** Pre-MVP validation (technical feasibility, market validation)
- **Phase 2:** MVP (core league management with automated results)
- **Phase 3-4:** Enhanced MVP (automation layer, branding)
- **Phase 5-6:** Public launch and monetization

## 1. Big Picture and Scope

GridPilot is the **competition layer** for iRacing leagues, as described in:

- [`CONCEPT.md`](docs/concept/CONCEPT.md)
- [`ADMINS.md`](docs/concept/ADMINS.md)
- [`DRIVERS.md`](docs/concept/DRIVERS.md)
- [`COMPETITION.md`](docs/concept/COMPETITION.md)
- [`RACING.md`](docs/concept/RACING.md)
- [`STATS.md`](docs/concept/STATS.md)
- [`RATING.md`](docs/concept/RATING.md)
- [`SOCIAL.md`](docs/concept/SOCIAL.md)
- [`TEAMS.md`](docs/concept/TEAMS.md)

Those docs describe the full platform: leagues, seasons, standings, stats, rating, complaints, social, teams, discovery, monetization.

This **repository** currently implements a **narrow, ToS-safe slice** of that vision:

- A desktop Electron companion running on the admin’s machine.
- A hosted-session automation engine that drives the iRacing web UI with Playwright.
- Domain and application logic for:
  - hosted wizard steps
  - authentication and cookie/session reuse
  - overlays and lifecycle events
  - checkout safety and confirmation.

For the technical slice implemented here, see:

- [`ARCHITECTURE.md`](docs/ARCHITECTURE.md)
- [`TECH.md`](docs/TECH.md)
- [`TESTS.md`](docs/TESTS.md)

Everything else from the concept docs (league/season management, stats, social, complaints, team identity, discovery) is **future or external** to this repo and will live in other services.

This roadmap is therefore split into two levels:

- **Automation & Companion Roadmap** – implementation-level, this repo.
- **Core Platform Roadmap** – high-level, future/external services guided by the concept docs.

## 2. How to Use This Roadmap

- Treat Automation & Companion items as work **inside this repo**.
- Treat Core Platform items as **future/external services** that will integrate with this automation slice later.
- Use checklists for near-term Automation & Companion work only.
- Use the concept docs plus [`ARCHITECTURE.md`](docs/ARCHITECTURE.md) as the source of truth for scope boundaries.
- Keep success criteria **testable**, using patterns in [`TESTS.md`](docs/TESTS.md).

---
## Phase 0: Foundation (Automation Testing - Internal)

**Goal:** Validate technical feasibility of browser automation and establish testing infrastructure.

### Infrastructure Setup

- [ ] Initialize monorepo with npm workspaces (`/src/apps`, `/src/packages`)
- [ ] Set up TypeScript configuration (strict mode, path aliases)
- [ ] Configure ESLint and Prettier (no warnings tolerated)
- [ ] Create basic domain models (`League`, `Team`, `Event`, `Driver`, `Result`)
- [ ] Set up test harness (Vitest for unit/integration tests)
- [ ] Configure Docker Compose for E2E testing environment
- [ ] Document development setup in README.md

### Automation Validation

- [ ] Install and configure Playwright for browser automation
- [ ] Test iRacing website navigation and authentication flow
- [ ] Test session creation page detection on members.iracing.com
- [ ] Test session ID extraction from URL or page elements
- [ ] Validate server-side result polling from iRacing API
- [ ] Create proof-of-concept automation script using Playwright
- [ ] Document browser automation approach and iRacing automation rules
- [ ] Identify automation failure modes and mitigation strategies

### Testing Foundation

- [ ] Write example BDD scenarios (Given/When/Then format)
- [ ] Set up Dockerized E2E test environment
- [ ] Create fixture data for test scenarios
- [ ] Validate test isolation and repeatability
- [ ] Document testing strategy in [TESTS.md](./TESTS.md)

**Success Criteria:**

- Technical feasibility confirmed (browser automation reliable)
- Test infrastructure operational (unit, integration, E2E)
- Development environment documented and reproducible
- No blockers identified for MVP implementation

**Note:** This phase is internal validation only—no user-facing features.

---

## 3. Automation & Companion Roadmap (This Repo)

This track is grounded in the existing code and architecture:

- Hosted wizard flow and step orchestration (see `tests/e2e/steps/*` and `tests/e2e/workflows/*`).
- Auth and cookie/session management.
- Overlay lifecycle via [`IAutomationLifecycleEmitter`](packages/infrastructure/adapters/IAutomationLifecycleEmitter.ts:1) and [`OverlaySyncService`](packages/application/services/OverlaySyncService.ts:1).
- Checkout safety via [`CheckoutPriceExtractor`](packages/infrastructure/adapters/automation/CheckoutPriceExtractor.ts:1), [`ConfirmCheckoutUseCase`](packages/application/use-cases/ConfirmCheckoutUseCase.ts:1), [`ElectronCheckoutConfirmationAdapter`](packages/infrastructure/adapters/ipc/ElectronCheckoutConfirmationAdapter.ts:1) and the renderer dialog.
- Electron companion UI and IPC wiring.

### Phase A: Solid Hosted-Session Engine & Companion Baseline

**Goal:** Make the existing hosted-session automation and Electron companion reliable, observable, and easy to run on an admin’s machine.
**Automation (this repo)**

- [ ] Stabilize wizard step orchestration:
  - [ ] Review and align wizard-step domain rules with [`StepTransitionValidator`](packages/domain/services/StepTransitionValidator.ts:1).
  - [ ] Ensure `tests/e2e/steps/*` cover all 18 hosted wizard steps end to end.
  - [ ] Harden [`WizardStepOrchestrator`](packages/infrastructure/adapters/automation/core/PlaywrightAutomationAdapter.ts:1) behavior for retries and timeouts.
- [ ] Strengthen page validation:
  - [ ] Extend [`PageStateValidator`](packages/domain/services/PageStateValidator.ts:1) to cover edge cases found in real-hosted tests.
  - [ ] Ensure selector sets in `packages/infrastructure/adapters/automation/dom/*` match current iRacing UI.
- [ ] Tighten auth/session flows:
  - [ ] Verify [`CheckAuthenticationUseCase`](packages/application/use-cases/CheckAuthenticationUseCase.ts:1), [`InitiateLoginUseCase`](packages/application/use-cases/InitiateLoginUseCase.ts:1), and [`VerifyAuthenticatedPageUseCase`](packages/application/use-cases/VerifyAuthenticatedPageUseCase.ts:1) match the constraints in [`CONCEPT.md`](docs/concept/CONCEPT.md) and [`RACING.md`](docs/concept/RACING.md).
  - [ ] Confirm cookie handling in `automation/auth/*` matches the lifecycle described in [`ARCHITECTURE.md`](docs/ARCHITECTURE.md).
- [ ] Companion baseline:
  - [ ] Ensure the Electron app boots and connects reliably on supported platforms (see smoke tests in `tests/smoke/*`).
  - [ ] Keep the renderer minimal but clear: session creation, auth state, progress, checkout confirmation.

**Success criteria**

- All unit, integration and E2E tests for existing flows are green (see [`TESTS.md`](docs/TESTS.md)).
- Full hosted-session workflows (fixture-based and real-hosted where enabled) complete without intermittent failures.
- Auth/login flow is ToS-safe, matches the “helper, not cheat” model in [`CONCEPT.md`](docs/concept/CONCEPT.md), and remains visible to the admin.
- Companion can run a full hosted-session creation with no manual DOM clicks beyond login.
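The "retries and timeouts" item above has a simple invariant behind it. As a minimal sketch (the names below are illustrative, not the real `StepTransitionValidator` API): wizard steps advance strictly one at a time through 1–18, and staying on the current step models a retry.

```typescript
// Illustrative invariant for wizard-step orchestration, not the actual
// StepTransitionValidator interface: sequential advance only, retries allowed.
type StepResult =
  | { ok: true; next: number }
  | { ok: false; reason: string };

function validateTransition(current: number, target: number, totalSteps = 18): StepResult {
  if (target < 1 || target > totalSteps) {
    return { ok: false, reason: `step ${target} is outside 1..${totalSteps}` };
  }
  if (target === current) {
    return { ok: true, next: current }; // retrying the current step is allowed
  }
  if (target !== current + 1) {
    return { ok: false, reason: `cannot jump from step ${current} to ${target}` };
  }
  return { ok: true, next: target };
}
```

Encoding the rule as a pure function keeps it trivially unit-testable, independent of any Playwright session.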
### Phase B: Overlay & Lifecycle Clarity

**Goal:** Make the automation lifecycle and overlay behavior predictable and trustworthy for admins.

**Automation (this repo)**

- [ ] Lifecycle events:
  - [ ] Review events emitted by [`IAutomationLifecycleEmitter`](packages/infrastructure/adapters/IAutomationLifecycleEmitter.ts:1) and consumed by [`OverlaySyncService`](packages/application/services/OverlaySyncService.ts:1).
  - [ ] Ensure all critical state transitions of [`AutomationSession`](packages/domain/entities/AutomationSession.ts:1) are reflected in overlay events.
- [ ] Overlay UX:
  - [ ] Ensure [`SessionProgressMonitor`](apps/companion/renderer/components/SessionProgressMonitor.tsx:1) clearly maps steps 1–18 to admin-understandable labels.
  - [ ] Align overlay messaging with admin QoL themes in [`ADMINS.md`](docs/concept/ADMINS.md) (less repetitive work, more transparency).
- [ ] Error surfacing:
  - [ ] Standardize how validation and runtime errors are propagated from domain → application → companion UI.
  - [ ] Ensure failures are actionable (what failed, where, and what the admin can retry).

**Success criteria**

- Overlay and progress UI always reflect the underlying session state without lag or missing steps.
- Admin can see where automation stopped and why, without reading logs.
- Lifecycle behavior is fully covered in tests (overlay integration, companion workflow E2E), as referenced from [`TESTS.md`](docs/TESTS.md).
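One way to keep the overlay honest about session state is a pure mapping from internal state to display text, so the UI never invents state of its own. This is a sketch only; the state names below are hypothetical and not the actual `AutomationSession` states.

```typescript
// Hypothetical session states for illustration; the real AutomationSession
// entity defines its own transitions.
type SessionState =
  | "authenticating"
  | "filling-step"
  | "awaiting-confirmation"
  | "completed"
  | "failed";

// Single source of truth for overlay text: internal state in, label out.
function overlayLabel(state: SessionState, step?: number): string {
  switch (state) {
    case "authenticating": return "Signing in to iRacing";
    case "filling-step": return `Configuring session (step ${step ?? "?"} of 18)`;
    case "awaiting-confirmation": return "Waiting for admin confirmation";
    case "completed": return "Session created";
    case "failed": return "Automation stopped (see details)";
  }
}
```

Because the switch is exhaustive over the union, adding a new state becomes a compile error until the overlay handles it.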
### Phase C: Checkout Safety Path

**Goal:** Make every credit/checkout-like action go through an explicit, traceable confirmation path that admins can trust.

**Automation (this repo)**

- [ ] Enrich checkout detection:
  - [ ] Validate selector logic and price parsing in [`CheckoutPriceExtractor`](packages/infrastructure/adapters/automation/CheckoutPriceExtractor.ts:1) against current iRacing UI.
  - [ ] Ensure [`CheckoutState`](packages/domain/value-objects/CheckoutState.ts:1) covers all relevant button states.
- [ ] Harden confirmation logic:
  - [ ] Confirm [`ConfirmCheckoutUseCase`](packages/application/use-cases/ConfirmCheckoutUseCase.ts:1) is the *only* entry point for automation that proceeds past a non-zero price.
  - [ ] Ensure [`ElectronCheckoutConfirmationAdapter`](packages/infrastructure/adapters/ipc/ElectronCheckoutConfirmationAdapter.ts:1) and [`CheckoutConfirmationDialog`](apps/companion/renderer/components/CheckoutConfirmationDialog.tsx:1) enforce explicit admin confirmation and timeouts.
- [ ] Failure paths:
  - [ ] Verify that any parsing failure or ambiguous state results in a safe stop, not a blind click.
  - [ ] Add tests to cover “weird but possible” UI states observed via fixtures.

**Success criteria**

- No automation path can perform a checkout-like action without an on-screen confirmation dialog.
- All credit-related flows are covered in tests (unit, integration, and companion E2E) with failure-path assertions.
- Behavior matches the safety and trust requirements in [`ADMINS.md`](docs/concept/ADMINS.md) and [`RACING.md`](docs/concept/RACING.md).
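The "safe stop, not a blind click" rule can be captured as a total decision function: anything other than an unambiguous parse halts automation. This is a sketch under an assumed `$D.CC` price format, not the actual `CheckoutPriceExtractor` logic.

```typescript
// A checkout decision is one of three outcomes; "stop" is the default for
// anything ambiguous, so a parsing bug can never turn into a purchase.
type CheckoutDecision =
  | { action: "confirm"; priceCents: number } // non-zero price: ask the admin first
  | { action: "proceed" }                     // provably free
  | { action: "stop"; reason: string };       // ambiguous: halt automation

function decideCheckout(priceText: string | null): CheckoutDecision {
  if (priceText === null) {
    return { action: "stop", reason: "price element not found" };
  }
  // Assumed format "$D.CC" for illustration; anything else is treated as unknown.
  const match = priceText.trim().match(/^\$(\d+)\.(\d{2})$/);
  if (!match) {
    return { action: "stop", reason: `unrecognized price format: "${priceText}"` };
  }
  const cents = Number(match[1]) * 100 + Number(match[2]);
  return cents === 0 ? { action: "proceed" } : { action: "confirm", priceCents: cents };
}
```

The failure-path tests listed above then reduce to asserting that every malformed fixture yields `action: "stop"`.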
### Phase D: Additional Hosted Workflows & Admin QoL

**Goal:** Extend automation beyond the initial hosted-session wizard happy path while staying within the same ToS-safe browser automation model.

**Automation (this repo)**

- [ ] Map additional hosted workflows:
  - [ ] Identify additional iRacing hosted flows that align with admin QoL needs from [`ADMINS.md`](docs/concept/ADMINS.md) (e.g. practice-only, league-specific hosted sessions).
  - [ ] Encode them as configurations on top of [`HostedSessionConfig`](packages/domain/entities/HostedSessionConfig.ts:1) where feasible.
- [ ] Workflow templates:
  - [ ] Provide a small set of reusable presets (e.g. “standard league race”, “test session”) that can later be populated by external services.
- [ ] Resilience work:
  - [ ] Improve behavior under partial UI changes (selectors, labels) using the validation patterns from [`PageStateValidator`](packages/domain/services/PageStateValidator.ts:1).

**Success criteria**

- At least one additional hosted workflow beyond the baseline wizard is supported end to end.
- Admins can choose between a small number of well-tested presets that reflect league use-cases from [`COMPETITION.md`](docs/concept/COMPETITION.md).
- Automation remains fully ToS-safe (no gameplay-affecting automation, no desktop/sim process interference), as reiterated in [`ARCHITECTURE.md`](docs/ARCHITECTURE.md).
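One way to read "configurations on top of `HostedSessionConfig`" is presets as partial configs merged over defaults, with per-session overrides winning. The field names here are invented for illustration; the real entity has its own shape.

```typescript
// Illustrative config shape only; not the actual HostedSessionConfig fields.
interface SessionConfig {
  sessionName: string;
  practiceMinutes: number;
  raceLaps: number;
  passwordProtected: boolean;
}

const defaults: SessionConfig = {
  sessionName: "",
  practiceMinutes: 30,
  raceLaps: 20,
  passwordProtected: true,
};

// Presets are partial overrides; later spreads win, so:
// explicit overrides > preset > defaults.
const presets: Record<string, Partial<SessionConfig>> = {
  "standard-league-race": { practiceMinutes: 45, raceLaps: 30 },
  "test-session": { practiceMinutes: 120, raceLaps: 0, passwordProtected: false },
};

function fromPreset(name: string, overrides: Partial<SessionConfig> = {}): SessionConfig {
  const preset = presets[name];
  if (!preset) throw new Error(`unknown preset: ${name}`);
  return { ...defaults, ...preset, ...overrides };
}
```

An external service could later supply the `overrides` object per event while the preset layer stays in this repo.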
### Phase E: Operationalization & Packaging

**Goal:** Make it realistic for real league admins to install, configure and operate the companion and automation engine.

**Automation (this repo)**

- [ ] Packaging & configuration:
  - [ ] Ensure Electron packaging, browser mode configuration and logging settings match the expectations in [`TECH.md`](docs/TECH.md).
  - [ ] Provide a minimal operator-facing configuration story (environment, headless vs headed, fixture vs live).
- [ ] Observability:
  - [ ] Ensure logs and failure artifacts are sufficient for debugging issues without code changes.
- [ ] Documentation:
  - [ ] Keep operator-focused docs short and aligned with admin benefits from [`ADMINS.md`](docs/concept/ADMINS.md).

**Success criteria**

- A technically inclined admin can install the companion, configure automation mode, and run a full hosted-session workflow using only the documentation in this repo.
- Most operational issues can be diagnosed via logs and failure artifacts without code-level changes.
- Existing tests remain the primary safety net for refactors (see [`TESTS.md`](docs/TESTS.md)).

---
## Phase 1: Landing Page & Market Validation

**Goal:** Validate product-market fit before building the full application.

### Marketing Website

- [ ] Build static marketing website (Next.js or similar)
- [ ] Create compelling copy addressing league organizer pain points
- [ ] Design product mockups and fake screenshots
- [ ] Add email collection form (waitlist integration)
- [ ] Implement privacy policy and terms of service
- [ ] Set up analytics (signups, page views, engagement)

### Community Engagement

- [ ] Post to r/iRacing subreddit with mockups
- [ ] Share in iRacing Discord communities
- [ ] Reach out to league organizers directly
- [ ] Collect feedback on pain points and feature requests
- [ ] Conduct user interviews with interested organizers
- [ ] Document feedback in product backlog

### Analysis

- [ ] Analyze email signup metrics
- [ ] Review qualitative feedback themes
- [ ] Validate assumptions about organizer pain points
- [ ] Assess willingness to pay (surveys, conversations)
- [ ] Document findings and adjust roadmap if needed

**Success Criteria:**

- 100+ email signups from target users
- Positive feedback from league organizers
- Validated demand for automated result import
- Confirmed interest in team-based scoring
- Product-market fit assumptions validated

**Note:** No application built in this phase—validation only. Pivot or proceed based on feedback.

---

## 4. Core Platform Roadmap (Future / External Services)

This track covers the broader GridPilot competition platform from the concept docs. It is **not** implemented in this repo and will likely live in separate services/apps that integrate with the automation engine described in [`ARCHITECTURE.md`](docs/ARCHITECTURE.md).

Each phase is intentionally high-level to avoid going stale; details belong in future architecture docs for those services.

### Phase P1: League Identity and Seasons

**Core Platform (future/external)**

- Provide a clean league home, as described in:
  - [`CONCEPT.md`](docs/concept/CONCEPT.md)
  - [`ADMINS.md`](docs/concept/ADMINS.md)
- Implement league identity, schedules and season configuration:
  - public league pages, schedules, rules, rosters (see sections 3 and 4 in [`CONCEPT.md`](docs/concept/CONCEPT.md)).
  - admin tools for creating seasons, calendars, formats (mirroring [`RACING.md`](docs/concept/RACING.md)).
- Model leagues, seasons and events as first-class entities that can later produce [`HostedSessionConfig`](packages/domain/entities/HostedSessionConfig.ts:1) instances for this repo’s automation engine.

**Success criteria**

- Leagues can exist, configure seasons and publish schedules independent of automation.
- Competition structure (points presets, drop weeks, team vs solo) matches the expectations in [`COMPETITION.md`](docs/concept/COMPETITION.md).
- There is a clear integration point for calling the automation engine with derived hosted-session configurations (described in the future platform’s own architecture docs).
### Phase P2: Results, Stats, Rating v1, and Team Competition

**Core Platform (future/external)**

- Result ingestion & standings:
  - Implement automated result import and standings as described in [`CONCEPT.md`](docs/concept/CONCEPT.md) and [`STATS.md`](docs/concept/STATS.md).
  - Combine imported results into per-season standings for drivers and teams.
- Team system:
  - Implement team profiles and constructors-style championships as in [`TEAMS.md`](docs/concept/TEAMS.md) and team sections of [`RACING.md`](docs/concept/RACING.md).
- Stats and inputs for rating:
  - Structure league and season stats so that league results, incidents, team points and attendance are reliably captured as described in [`STATS.md`](docs/concept/STATS.md).
- GridPilot Rating v1 (platform-side service):
  - Deliver a first usable GridPilot Rating capability consistent with [`RATING.md`](docs/concept/RATING.md), computed entirely in core platform services.
  - Treat this repo’s automation slice as a producer of trusted, structured session configs and results; do not move any rating logic into the automation engine.

**Success criteria**

- For a league connected to GridPilot, standings and stats update automatically based on iRacing results and provide the inputs required by the rating model in [`RATING.md`](docs/concept/RATING.md).
- Teams and drivers have persistent identity and history across seasons, matching the narratives in [`DRIVERS.md`](docs/concept/DRIVERS.md) and [`TEAMS.md`](docs/concept/TEAMS.md).
- The automation engine in this repo can be treated as a “session executor” feeding reliable results into the platform’s scoring and rating engines, while rating computation remains in Core Platform services.
### Phase P3: Complaints, Penalties, Transparency, and Rating Fairness

**Core Platform (future/external)**

- Complaint intake:
  - Structured complaint flows as defined in [`CONCEPT.md`](docs/concept/CONCEPT.md) and [`RACING.md`](docs/concept/RACING.md) (race, drivers, timestamps, evidence).
- Penalty tools:
  - Admin-facing review and decision tools described in [`ADMINS.md`](docs/concept/ADMINS.md) and [`RACING.md`](docs/concept/RACING.md).
- Classification updates:
  - Automatic recalculation of results and standings after penalties, aligned with the classification and penalty handling in [`STATS.md`](docs/concept/STATS.md).
- Rating dependencies:
  - Ensure that penalty-aware classification, incident handling and transparency from this phase directly feed into GridPilot Rating as incident and season factors, consistent with [`RATING.md`](docs/concept/RATING.md).
  - Keep rating computation fully within Core Platform services; this repo continues to provide only the structured competition data that rating consumes.

**Success criteria**

- Complaints and penalties are no longer handled via ad-hoc Discord and spreadsheets.
- Standings, stats, histories and rating signals remain consistent and penalty-aware.
- The platform exposes a transparent, auditable record of decisions, supporting the fairness and rating trust goals from the concept docs.
### Phase P4: Social, Discovery, and Monetization

**Core Platform (future/external)**

- Social and discovery:
  - Implement the lightweight social and discovery features from [`SOCIAL.md`](docs/concept/SOCIAL.md) and league/team profile extensions in [`TEAMS.md`](docs/concept/TEAMS.md).
- League and driver discovery:
  - Make it easy for drivers to find leagues and teams, and for leagues to find drivers, as described in [`DRIVERS.md`](docs/concept/DRIVERS.md) and [`COMPETITION.md`](docs/concept/COMPETITION.md).
- Monetization (later phase):
  - Add monetization and premium features only after the core competition and trust layers are stable, following the MVP philosophy in [`CONCEPT.md`](docs/concept/CONCEPT.md).

**Success criteria**

- Drivers, teams and leagues can discover each other through GridPilot, with identity and history driving trust.
- Social features remain lightweight and purpose-driven, complementing community tools like Discord instead of replacing them.
- Any monetization respects the “clarity, fairness, and admin control” principles in the concept docs.

---
## Phase 2: MVP (League-Focused)

**Goal:** Build a functional league management platform with automated result import (no fees, no companion app yet).

### Authentication & User Management

- [ ] Implement iRacing OAuth authentication flow
- [ ] Create user registration and profile system
- [ ] Build user role system (organizer, driver, spectator)
- [ ] Implement session management and token refresh
- [ ] Write BDD scenarios for authentication flows
- [ ] Achieve test coverage for auth domain

### League Management (Core Domain)

- [ ] Implement `CreateLeagueUseCase` (see [ARCHITECTURE.md](./ARCHITECTURE.md#application-layer))
- [ ] Create league CRUD operations (update, delete, archive)
- [ ] Build season setup (tracks, cars, rules configuration)
- [ ] Implement points system configuration (customizable)
- [ ] Create race event scheduling system
- [ ] Write BDD scenarios for league lifecycle
- [ ] Achieve >90% test coverage for `League` aggregate

### Driver & Team Registration

- [ ] Build driver registration system (join league/season)
- [ ] Implement team registration system (optional parallel scoring)
- [ ] Create team roster management (add/remove drivers)
- [ ] Build approval workflow for registrations
- [ ] Write BDD scenarios for registration flows
- [ ] Test team scoring calculation logic

### Automated Result Import

- [ ] Implement PostgreSQL schema (repositories pattern)
- [ ] Create server-side iRacing API integration
- [ ] Build automated result polling service
- [ ] Implement result parsing and validation
- [ ] Create `ImportResultUseCase` (see [ARCHITECTURE.md](./ARCHITECTURE.md#application-layer))
- [ ] Handle edge cases (DNS, penalties, disconnects)
- [ ] Write BDD scenarios for result import
- [ ] Test result import reliability and error handling
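A result-polling service needs an explicit retry policy so transient API failures don't hammer the server or stall imports. A common sketch is capped exponential backoff; the numbers below are illustrative, not a spec for this service.

```typescript
// Capped exponential backoff: 5s, 10s, 20s, ... up to a 5-minute ceiling.
// A polling loop would sleep backoffMs(attempt) after each failed poll
// and reset attempt to 0 on success.
function backoffMs(attempt: number, baseMs = 5_000, capMs = 300_000): number {
  return Math.min(capMs, baseMs * 2 ** Math.max(0, Math.floor(attempt)));
}
```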
### Standings & Results

- [ ] Generate driver standings (individual points calculation)
- [ ] Generate team standings (parallel scoring model)
- [ ] Build race result pages (lap times, incidents, finishing position)
- [ ] Implement historical standings view (by race)
- [ ] Create standings export functionality (CSV)
- [ ] Write BDD scenarios for standings calculation
- [ ] Test edge cases (ties, dropped races, penalties)
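Of the edge cases listed above, dropped races have the most mechanical definition: a driver's season total counts only their best scores once the configured number of drop weeks is removed. A minimal sketch:

```typescript
// Season total with drop weeks: keep the best (races - drops) scores.
// Tie-breaking between drivers on equal points is a separate concern.
function seasonTotal(raceScores: number[], dropWeeks: number): number {
  const kept = Math.max(0, raceScores.length - Math.max(0, dropWeeks));
  return [...raceScores]
    .sort((a, b) => b - a) // best scores first
    .slice(0, kept)
    .reduce((sum, s) => sum + s, 0);
}
```

For example, with scores `[25, 18, 0, 12]` and one drop week, the 0-point race is discarded and the total is 55.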
### League Identity & Admin

- [ ] Build league identity pages (public view)
- [ ] Create basic admin dashboard (organizer tools)
- [ ] Implement league settings management
- [ ] Build schedule and calendar view
- [ ] Create notifications system (race reminders)
- [ ] Write BDD scenarios for admin workflows

### Quality Assurance

- [ ] Run full test suite (unit, integration, E2E)
- [ ] Achieve >90% test coverage for domain/application layers
- [ ] Perform manual testing with real iRacing data
- [ ] Fix all critical bugs and edge cases
- [ ] Document known limitations

**Success Criteria:**

- Functional platform for league management
- Automated result import working reliably
- Driver and team standings calculated correctly
- No manual result uploads required
- Test coverage >90% for core domain
- Ready for closed beta testing

**Note:** No fees, no payouts, no companion app in this phase. Focus on core league management.

**Cross-References:**

- See [ARCHITECTURE.md](./ARCHITECTURE.md) for component boundaries
- See [TESTS.md](./TESTS.md) for BDD scenario examples

---
## Phase 3: Companion App (Automation Layer)

**Goal:** Build an Electron companion app to automate session creation and reduce organizer workload.

### Companion App Foundation

- [ ] Set up Electron application structure
- [ ] Implement Playwright browser automation framework
- [ ] Create IPC bridge for backend communication
- [ ] Build auto-updater mechanism
- [ ] Set up application signing and packaging
- [ ] Document installation and setup process

### Session Creation Automation

- [ ] Build session creation assistance workflow
- [ ] Implement iRacing session page detection
- [ ] Create session ID extraction mechanism
- [ ] Build form auto-fill functionality (track, cars, rules)
- [ ] Implement session URL capture and sync
- [ ] Handle automation failure cases gracefully
- [ ] Write E2E tests for automation flows
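For the session ID extraction item, the robust shape is a pure parser that returns `null` for anything it cannot prove rather than guessing. The query-parameter name below is an assumption for illustration; the real page may expose the ID differently (URL path, page element, etc.).

```typescript
// Extract a numeric session id from a captured URL, assuming (for
// illustration only) it appears as a "subsessionid" query parameter.
// Malformed URLs and non-numeric ids yield null, never a guess.
function extractSessionId(url: string): string | null {
  try {
    const id = new URL(url).searchParams.get("subsessionid");
    return id !== null && /^\d+$/.test(id) ? id : null;
  } catch {
    return null; // not a valid URL at all
  }
}
```

Keeping the parser pure makes the "handle failure cases gracefully" item testable without a browser in the loop.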
### OAuth & Credential Handoff

- [ ] Implement OAuth handoff from companion to web
- [ ] Create secure credential storage (encrypted)
- [ ] Build IPC bridge for authentication state
- [ ] Handle token refresh in companion app
- [ ] Write E2E tests for OAuth handoff flow
- [ ] Test cross-process credential security

### Organizer Utilities

- [ ] Create session creation guidance (step-by-step)
- [ ] Build pre-race checklist functionality
- [ ] Implement session status monitoring
- [ ] Add quick access to league settings
- [ ] Create notifications for upcoming races

### Testing & Reliability

- [ ] Test session creation automation reliability (>95% success rate)
- [ ] Validate automation across different iRacing UI versions
- [ ] Handle iRacing website changes gracefully
- [ ] Create fallback mechanisms for automation failures
- [ ] Document troubleshooting guide

**Success Criteria:**

- Companion app reduces session creation time by 80%+
- Automation success rate >95%
- OAuth handoff secure and seamless
- Auto-updater working reliably
- Comprehensive E2E test coverage

**Note:** Companion app is optional but highly valuable for organizers. Focus on reliability over features.

---
## Phase 4: Branding & Public Pages

**Goal:** Enable professional league identity and public discoverability.

### Asset Management

- [ ] Implement S3-compatible asset storage (logos, images)
- [ ] Add league logo upload functionality
- [ ] Create image optimization pipeline
- [ ] Implement asset CDN integration
- [ ] Build asset management UI (upload, delete, replace)

### Custom Branding

- [ ] Create custom CSS/theming system (colors, fonts)
- [ ] Build theme preview functionality
- [ ] Implement logo placement customization
- [ ] Add custom header/footer options
- [ ] Create branding guidelines documentation

### Public League Directory

- [ ] Build public league directory (browse and discover)
- [ ] Implement search and filtering (game type, region, skill level)
- [ ] Create league detail pages (public view)
- [ ] Add league statistics (active seasons, drivers, races)
- [ ] Implement privacy settings (public/private leagues)

### External Integrations

- [ ] Implement optional custom domain support (CNAME)
- [ ] Create embeddable widgets (standings iframe, schedule)
- [ ] Add Discord/TeamSpeak integration links
- [ ] Implement YouTube/Twitch VOD linking (external only, no uploads)
- [ ] Build social sharing functionality (Twitter, Reddit)

### Public Result Pages

- [ ] Create public race result pages (shareable links)
- [ ] Build driver profile pages (career statistics)
- [ ] Implement team profile pages (roster, history)
- [ ] Add historical standings archive
- [ ] Create race replay link integration (if available)

### Testing & Documentation

- [ ] Write BDD scenarios for branding features
- [ ] Test public pages with various league configurations
- [ ] Validate custom domain setup process
- [ ] Create user guide for branding customization
- [ ] Test embeddable widgets in external sites

**Success Criteria:**

- Leagues have professional identity and branding
- Public directory drives league discovery
- Custom domains working reliably
- Embeddable widgets functional
- External integrations (Discord, Twitch) operational

**Note:** Branding features are optional but enhance league professionalism and discoverability.

---
## Phase 5: Public Launch
|
||||
|
||||
**Goal:** Launch GridPilot publicly with production-grade infrastructure and stability.
|
||||
|
||||
### Security & Compliance

- [ ] Perform security audit (OAuth, credentials, API security)
- [ ] Implement rate limiting and DDoS protection
- [ ] Add CSRF and XSS protection
- [ ] Conduct penetration testing
- [ ] Review GDPR compliance (user data handling)
- [ ] Implement data export functionality (on user request)
- [ ] Create incident response plan

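The rate-limiting item above is commonly implemented as a token bucket: each client key holds up to `capacity` tokens that refill at a steady rate, and a request is allowed only if a token is available. A minimal sketch (class name, limits, and the injectable clock are assumptions for testability, not a committed API):

```typescript
// Illustrative token-bucket rate limiter; capacity/refill values are
// placeholders, and `nowMs` is injectable so the logic can be tested
// without real time passing.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,
    private refillPerSec: number,
    nowMs: number = Date.now()
  ) {
    this.tokens = capacity;
    this.lastRefill = nowMs;
  }

  tryConsume(nowMs: number = Date.now()): boolean {
    // Refill proportionally to elapsed time, capped at capacity.
    const elapsedSec = (nowMs - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.lastRefill = nowMs;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

In production this state would live in shared storage (e.g. the Redis layer planned below) keyed per user or IP, rather than in process memory.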
### Performance & Scalability

- [ ] Load testing and performance optimization
- [ ] Implement database query optimization
- [ ] Add caching layers (Redis for sessions, API responses)
- [ ] Configure CDN for static assets
- [ ] Optimize Docker images for production
- [ ] Set up horizontal scaling strategy

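The caching item boils down to "store a computed response under a key with a time-to-live, and recompute after it expires." A minimal in-process sketch of that contract (an illustrative stand-in for the Redis layer; key names and TTLs are assumptions):

```typescript
// Minimal TTL cache sketch for API responses. `nowMs` is injectable
// so expiry can be tested deterministically.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  get(key: string, nowMs: number = Date.now()): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (nowMs >= entry.expiresAt) {
      this.store.delete(key); // lazily evict expired entries
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V, nowMs: number = Date.now()): void {
    this.store.set(key, { value, expiresAt: nowMs + this.ttlMs });
  }
}
```

With Redis the same shape maps onto `SET key value PX ttl` and `GET key`; the design question that remains is choosing TTLs per endpoint (standings can tolerate longer staleness than live session state).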
### Production Infrastructure

- [ ] Set up production hosting (AWS/GCP/Azure)
- [ ] Configure production database (PostgreSQL with replication)
- [ ] Implement database backup strategy (automated, tested)
- [ ] Set up monitoring and alerting (logs, errors, uptime)
- [ ] Configure error tracking (Sentry or similar)
- [ ] Implement log aggregation and analysis
- [ ] Create disaster recovery plan

### Documentation & Support

- [ ] Write comprehensive user documentation
- [ ] Create organizer onboarding guide
- [ ] Build driver user guide
- [ ] Document API endpoints (if public)
- [ ] Create FAQ and troubleshooting guide
- [ ] Set up support system (email, Discord)

### Launch Preparation

- [ ] Prepare launch marketing materials
- [ ] Coordinate Reddit/Discord announcements
- [ ] Create launch video/demo
- [ ] Set up social media presence
- [ ] Prepare press kit (if applicable)
- [ ] Plan launch timeline and milestones

### Beta Onboarding

- [ ] Onboard first 10 beta leagues (closed beta)
- [ ] Collect feedback from beta users
- [ ] Fix critical bugs identified in beta
- [ ] Validate production stability under real load
- [ ] Document lessons learned

**Success Criteria:**

- Platform publicly available and stable
- Security audit passed with no critical issues
- Production infrastructure operational
- Monitoring and alerting functional
- User documentation complete
- First 10+ leagues successfully onboarded
- Platform stable under real-world load

**Note:** Public launch is a major milestone. Ensure stability and security before opening access.

---

## Phase 6: Monetization & Expansion

**Goal:** Generate revenue and expand platform capabilities.

### Monetization Features

- [ ] Implement league creation fee system
- [ ] Add optional driver entry fee per season
- [ ] Build revenue split mechanism (organizer/GridPilot)
- [ ] Create billing and invoicing system
- [ ] Implement payment processing (Stripe or similar)
- [ ] Add subscription management (for premium features)
- [ ] Create payout system for organizers
- [ ] Implement refund and dispute handling

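The revenue split mechanism is worth pinning down early because money math must be exact. A hedged sketch of the core calculation (the actual organizer/GridPilot percentages are a business decision, not fixed here; the function name is hypothetical) — working in integer cents avoids floating-point rounding errors:

```typescript
// Illustrative entry-fee split in integer cents. `platformShare` is a
// fraction in [0, 1]; the real percentage is an open business decision.
interface Split {
  organizerCents: number;
  platformCents: number;
}

function splitEntryFee(totalCents: number, platformShare: number): Split {
  if (!Number.isInteger(totalCents) || totalCents < 0) {
    throw new Error("totalCents must be a non-negative integer");
  }
  if (platformShare < 0 || platformShare > 1) {
    throw new Error("platformShare must be in [0, 1]");
  }
  // Round the platform cut down so the organizer receives any odd cent,
  // and so the two parts always sum exactly to the total.
  const platformCents = Math.floor(totalCents * platformShare);
  return { organizerCents: totalCents - platformCents, platformCents };
}
```

Stripe's API works in the same smallest-currency-unit representation, so this shape transfers directly to transfer/payout amounts.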
### Premium Features

- [ ] Create premium league features (advanced analytics)
- [ ] Build driver/team performance metrics over time
- [ ] Implement historical trend analysis
- [ ] Add advanced race strategy tools
- [ ] Create custom report generation
- [ ] Build league comparison and benchmarking

### Analytics & Insights

- [ ] Add analytics dashboards for leagues
- [ ] Implement driver consistency metrics
- [ ] Create incident rate analysis
- [ ] Build lap time comparison tools
- [ ] Add race pace analysis
- [ ] Implement predictive performance modeling

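One plausible definition for the consistency metric above (an assumption for illustration, not the committed formula) is the coefficient of variation of a driver's lap times: standard deviation divided by mean, where lower means more consistent. It is scale-free, so it compares drivers across tracks with different lap lengths:

```typescript
// Hypothetical consistency metric: coefficient of variation of lap times.
// Lower is better; 0 means every lap was identical.
function lapTimeConsistency(lapTimesSec: number[]): number {
  if (lapTimesSec.length < 2) {
    throw new Error("need at least two laps");
  }
  const mean = lapTimesSec.reduce((a, b) => a + b, 0) / lapTimesSec.length;
  const variance =
    lapTimesSec.reduce((acc, t) => acc + (t - mean) ** 2, 0) / lapTimesSec.length;
  return Math.sqrt(variance) / mean;
}
```

A real implementation would also need an outlier policy (pit laps, incident laps) before the statistic is meaningful.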
### Platform Expansion

- [ ] Explore expansion to other simulators (ACC, rFactor 2)
- [ ] Evaluate additional automation features
- [ ] Research multi-game league support
- [ ] Investigate community-requested features
- [ ] Assess partnership opportunities (teams, sponsors)

### Business Intelligence

- [ ] Implement revenue tracking and reporting
- [ ] Create user engagement metrics
- [ ] Build churn analysis and retention tools
- [ ] Add A/B testing framework
- [ ] Implement feature adoption tracking

**Success Criteria:**

- Revenue generation active and growing
- Premium features adopted by target segment
- Payment processing reliable and secure
- Organizer payouts working correctly
- Platform expansion feasibility validated
- Positive unit economics demonstrated

**Note:** Monetization should not compromise core user experience. Ensure value delivery justifies pricing.

---

## Dependencies & Sequencing

**Critical Path:**

1. Phase 0 must be completed before any development begins
2. Phase 1 validation should gate investment in Phase 2
3. Phase 2 MVP is required before Phase 3 (the companion app depends on the API)
4. Phase 4 can be developed in parallel with Phase 3 (independent features)
5. Phase 5 (public launch) requires Phases 2-4 to be complete and stable
6. Phase 6 (monetization) requires Phase 5 (public user base)

**Optional Paths:**

- Phase 3 (companion app) can be delayed if organizers tolerate manual session creation
- Phase 4 (branding) can be simplified for the MVP launch
- Phase 6 features can be prioritized based on user demand

**Iteration Strategy:**

- Complete each phase fully before moving to the next
- Validate success criteria before proceeding
- Adjust the roadmap based on feedback and learnings
- Maintain architectural integrity throughout

---

## Living Document

This roadmap is a living document and will be updated as the project evolves. Key updates will include:

- Completed todos (checked off)
- Lessons learned from each phase
- Adjusted priorities based on user feedback
- New features discovered during development
- Changes to success criteria or scope

**Maintenance:**

- Review and update quarterly (or after each phase)
- Archive completed phases for reference
- Document deviations from the original plan
- Track velocity and estimate remaining work

**Cross-References:**

- [CONCEPT.md](./CONCEPT.md) - Product vision and user needs
- [ARCHITECTURE.md](./ARCHITECTURE.md) - Technical design and component boundaries
- [TESTS.md](./TESTS.md) - Testing strategy and BDD scenarios
- [TECH.md](./TECH.md) - Technology decisions and rationale

---

**Last Updated:** 2025-11-23

**Current Phase:** Phase 0 (Foundation)

**Overall Progress:** In Progress (browser automation implemented)

## 5. Dependencies and Evolution

- Automation & Companion phases (A–E) are largely **independent** of the Core Platform phases and can be completed first.
- Core Platform phases (P1–P4) depend on:
  - A stable automation engine and companion (this repo).
  - Clear API/IPC integration contracts, documented by future platform services with reference to [`ARCHITECTURE.md`](docs/ARCHITECTURE.md).
- The automation slice should remain **small and robust**, so that multiple future services can treat it as a shared "session engine."

Use this roadmap as a **living, checkable** guide:

- Update checklists under Automation & Companion as work lands.
- Keep Core Platform phases at the level of concept alignment, not implementation detail.
- When new services are built, they should introduce their own roadmaps and link back to:
  - [`CONCEPT.md`](docs/concept/CONCEPT.md) and related concept docs.
  - [`ARCHITECTURE.md`](docs/ARCHITECTURE.md) for the automation slice.
  - [`TESTS.md`](docs/TESTS.md) for testing strategy and coverage expectations.

**Last Updated:** 2025-12-01

**Tracks:** Automation & Companion (this repo) / Core Platform (future/external)

**Current Focus:** Phase A (solid hosted-session engine & companion baseline)