# GridPilot Implementation Roadmap
## 1. Big Picture and Scope

GridPilot is the competition layer for iRacing leagues, as described in the concept docs. Those docs describe the full platform: leagues, seasons, standings, stats, rating, complaints, social, teams, discovery, and monetization.
This repository currently implements a narrow, ToS-safe slice of that vision:
- A desktop Electron companion running on the admin’s machine.
- A hosted-session automation engine that drives the iRacing web UI with Playwright.
- Domain and application logic for:
  - hosted wizard steps
  - authentication and cookie/session reuse
  - overlays and lifecycle events
  - checkout safety and confirmation
For the technical slice implemented here, see `ARCHITECTURE.md`.
Everything else from the concept docs (league/season management, stats, social, complaints, team identity, discovery) is future or external to this repo and will live in other services.
This roadmap is therefore split into two levels:
- Automation & Companion Roadmap – implementation-level, this repo.
- Core Platform Roadmap – high-level, future/external services guided by the concept docs.
## 2. How to Use This Roadmap
- Treat Automation & Companion items as work inside this repo.
- Treat Core Platform items as future/external services that will integrate with this automation slice later.
- Use checklists for near-term Automation & Companion work only.
- Use the concept docs plus `ARCHITECTURE.md` as the source of truth for scope boundaries.
- Keep success criteria testable, using patterns in `TESTS.md`.
## 3. Automation & Companion Roadmap (This Repo)
This track is grounded in the existing code and architecture:
- Hosted wizard flow and step orchestration (see `tests/e2e/steps/*` and `tests/e2e/workflows/*`).
- Auth and cookie/session management.
- Overlay lifecycle via `IAutomationLifecycleEmitter` and `OverlaySyncService`.
- Checkout safety via `CheckoutPriceExtractor`, `ConfirmCheckoutUseCase`, `ElectronCheckoutConfirmationAdapter`, and the renderer dialog.
- Electron companion UI and IPC wiring.
### Phase A: Solid Hosted-Session Engine & Companion Baseline

**Goal:** Make the existing hosted-session automation and Electron companion reliable, observable, and easy to run on an admin’s machine.

#### Automation (this repo)
- Stabilize wizard step orchestration:
  - Review and align wizard-step domain rules with `StepTransitionValidator`.
  - Ensure `tests/e2e/steps/*` cover all 18 hosted wizard steps end to end.
  - Harden `WizardStepOrchestrator` behavior for retries and timeouts.
- Strengthen page validation:
  - Extend `PageStateValidator` to cover edge cases found in real-hosted tests.
  - Ensure selector sets in `core/infrastructure/adapters/automation/dom/*` match the current iRacing UI.
- Tighten auth/session flows:
  - Verify `CheckAuthenticationUseCase`, `InitiateLoginUseCase`, and `VerifyAuthenticatedPageUseCase` match the constraints in `CONCEPT.md` and `RACING.md`.
  - Confirm cookie handling in `automation/auth/*` matches the lifecycle described in `ARCHITECTURE.md`.
- Companion baseline:
  - Ensure the Electron app boots and connects reliably on supported platforms (see smoke tests in `tests/smoke/*`).
  - Keep the renderer minimal but clear: session creation, auth state, progress, checkout confirmation.
#### Success criteria

- All unit, integration, and E2E tests for existing flows are green (see `TESTS.md`).
- Full hosted-session workflows (fixture-based and real-hosted where enabled) complete without intermittent failures.
- The auth/login flow is ToS-safe, matches the “helper, not cheat” model in `CONCEPT.md`, and remains visible to the admin.
- The companion can run a full hosted-session creation with no manual DOM clicks beyond login.
### Phase B: Overlay & Lifecycle Clarity

**Goal:** Make the automation lifecycle and overlay behavior predictable and trustworthy for admins.

#### Automation (this repo)
- Lifecycle events:
  - Review events emitted by `IAutomationLifecycleEmitter` and consumed by `OverlaySyncService`.
  - Ensure all critical state transitions of `AutomationSession` are reflected in overlay events.
- Overlay UX:
  - Ensure `SessionProgressMonitor` clearly maps steps 1–18 to admin-understandable labels.
  - Align overlay messaging with the admin QoL themes in `ADMINS.md` (less repetitive work, more transparency).
- Error surfacing:
  - Standardize how validation and runtime errors are propagated from domain → application → companion UI.
  - Ensure failures are actionable (what failed, where, and what the admin can retry).
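The lifecycle-to-overlay mapping described above might look like the following sketch. The state names and event payload are assumptions; the real contract is defined by `IAutomationLifecycleEmitter` and `OverlaySyncService`:

```typescript
// Sketch of mapping hypothetical session lifecycle states to
// admin-facing overlay messages. States and fields are illustrative.

type SessionState =
  | "authenticating"
  | "wizard-step"
  | "awaiting-checkout-confirmation"
  | "completed"
  | "failed";

interface OverlayEvent {
  label: string;       // admin-facing text
  step?: number;       // 1–18 while inside the wizard
  actionable: boolean; // whether the admin is expected to do something
}

function toOverlayEvent(state: SessionState, step?: number, error?: string): OverlayEvent {
  switch (state) {
    case "authenticating":
      return { label: "Waiting for iRacing login…", actionable: true };
    case "wizard-step":
      return { label: `Configuring session (step ${step} of 18)`, step, actionable: false };
    case "awaiting-checkout-confirmation":
      return { label: "Confirm checkout to continue", actionable: true };
    case "completed":
      return { label: "Hosted session created", actionable: false };
    case "failed":
      // The failure label carries the reason so the admin never
      // has to open logs just to learn why automation stopped.
      return { label: `Automation stopped: ${error ?? "unknown error"}`, actionable: true };
  }
}
```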
#### Success criteria

- Overlay and progress UI always reflect the underlying session state without lag or missing steps.
- The admin can see where automation stopped and why, without reading logs.
- Lifecycle behavior is fully covered in tests (overlay integration, companion workflow E2E), as referenced from `TESTS.md`.
### Phase C: Checkout Safety Path

**Goal:** Make every credit/checkout-like action go through an explicit, traceable confirmation path that admins can trust.

#### Automation (this repo)
- Enrich checkout detection:
  - Validate selector logic and price parsing in `CheckoutPriceExtractor` against the current iRacing UI.
  - Ensure `CheckoutState` covers all relevant button states.
- Harden confirmation logic:
  - Confirm `ConfirmCheckoutUseCase` is the only entry point for automation that proceeds past a non-zero price.
  - Ensure `ElectronCheckoutConfirmationAdapter` and `CheckoutConfirmationDialog` enforce explicit admin confirmation and timeouts.
- Failure paths:
  - Verify that any parsing failure or ambiguous state results in a safe stop, not a blind click.
  - Add tests to cover “weird but possible” UI states observed via fixtures.
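The fail-closed behavior above can be sketched as a single decision function: anything unparseable or ambiguous becomes a safe stop, never an automatic click. The price formats and type names here are assumptions; the real logic belongs to `CheckoutPriceExtractor` and `ConfirmCheckoutUseCase`:

```typescript
// Sketch of a fail-closed checkout decision. The "$12.34" price format
// is an assumption for illustration.

type CheckoutDecision =
  | { kind: "proceed-free" }                        // $0.00, safe to continue
  | { kind: "needs-confirmation"; priceCents: number }
  | { kind: "safe-stop"; reason: string };          // never click blindly

function decideCheckout(rawPriceText: string | null): CheckoutDecision {
  if (rawPriceText === null) {
    return { kind: "safe-stop", reason: "price element not found" };
  }
  const match = rawPriceText.trim().match(/^\$(\d+)\.(\d{2})$/);
  if (!match) {
    // Unrecognized text is treated as ambiguous, not as free.
    return { kind: "safe-stop", reason: `unrecognized price text: "${rawPriceText}"` };
  }
  const priceCents = Number(match[1]) * 100 + Number(match[2]);
  return priceCents === 0
    ? { kind: "proceed-free" }
    : { kind: "needs-confirmation", priceCents };
}
```

The three-way result makes the invariant testable: only `proceed-free` may continue without a dialog, and only `needs-confirmation` may open one.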
#### Success criteria

- No automation path can perform a checkout-like action without an on-screen confirmation dialog.
- All credit-related flows are covered in tests (unit, integration, and companion E2E) with failure-path assertions.
- Behavior matches the safety and trust requirements in `ADMINS.md` and `RACING.md`.
### Phase D: Additional Hosted Workflows & Admin QoL

**Goal:** Extend automation beyond the initial hosted-session wizard happy path while staying within the same ToS-safe browser automation model.

#### Automation (this repo)
- Map additional hosted workflows:
  - Identify additional iRacing hosted flows that align with admin QoL needs from `ADMINS.md` (e.g. practice-only or league-specific hosted sessions).
  - Encode them as configurations on top of `HostedSessionConfig` where feasible.
- Workflow templates:
  - Provide a small set of reusable presets (e.g. “standard league race”, “test session”) that can later be populated by external services.
- Resilience work:
  - Improve behavior under partial UI changes (selectors, labels) using the validation patterns from `PageStateValidator`.
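The preset idea can be sketched as partial configs layered over a base config. The `HostedSessionConfig` shape shown here is a simplified stand-in for the real type in this repo, and the preset contents are illustrative:

```typescript
// Sketch of workflow presets as partial overrides of a base config.
// Field names and values are assumptions for illustration.

interface HostedSessionConfig {
  sessionName: string;
  practiceMinutes: number;
  raceLaps: number;        // 0 = no race session
  passwordProtected: boolean;
}

const presets: Record<string, Partial<HostedSessionConfig>> = {
  // "standard league race": practice plus a race, locked to members
  "standard-league-race": { practiceMinutes: 30, raceLaps: 20, passwordProtected: true },
  // "test session": practice only, open to anyone with the link
  "test-session": { practiceMinutes: 60, raceLaps: 0, passwordProtected: false },
};

function applyPreset(base: HostedSessionConfig, presetName: string): HostedSessionConfig {
  const preset = presets[presetName];
  if (!preset) throw new Error(`unknown preset: ${presetName}`);
  return { ...base, ...preset }; // preset fields override the base config
}
```

Because presets are plain data, external services could later supply them without the automation engine needing new code paths.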
#### Success criteria

- At least one additional hosted workflow beyond the baseline wizard is supported end to end.
- Admins can choose between a small number of well-tested presets that reflect league use cases from `COMPETITION.md`.
- Automation remains fully ToS-safe (no gameplay-affecting automation, no desktop/sim process interference), as reiterated in `ARCHITECTURE.md`.
### Phase E: Operationalization & Packaging

**Goal:** Make it realistic for real league admins to install, configure, and operate the companion and automation engine.

#### Automation (this repo)
- Packaging & configuration:
  - Ensure Electron packaging, browser-mode configuration, and logging settings match the expectations in `TECH.md`.
  - Provide a minimal operator-facing configuration story (environment, headless vs. headed, fixture vs. live).
- Observability:
  - Ensure logs and failure artifacts are sufficient for debugging issues without code changes.
- Documentation:
  - Keep operator-focused docs short and aligned with the admin benefits from `ADMINS.md`.
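A minimal operator configuration story might resolve a few environment variables into a typed config with safe defaults. The variable names below are hypothetical, not the repo's actual settings; the point is the defaulting policy:

```typescript
// Sketch of operator-facing configuration. Variable names are illustrative.

interface OperatorConfig {
  browserMode: "headed" | "headless";
  dataSource: "fixture" | "live";
  logLevel: "info" | "debug";
}

function loadOperatorConfig(env: Record<string, string | undefined>): OperatorConfig {
  return {
    // Default to headed: the admin should be able to see what automation does.
    browserMode: env.GRIDPILOT_BROWSER_MODE === "headless" ? "headless" : "headed",
    // Default to fixtures so a misconfigured install never touches live iRacing.
    dataSource: env.GRIDPILOT_DATA_SOURCE === "live" ? "live" : "fixture",
    logLevel: env.GRIDPILOT_LOG_LEVEL === "debug" ? "debug" : "info",
  };
}
```

Defaulting toward headed and fixture modes means every unrecognized or missing value degrades to the safest, most visible behavior.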
#### Success criteria

- A technically inclined admin can install the companion, configure automation mode, and run a full hosted-session workflow using only the documentation in this repo.
- Most operational issues can be diagnosed via logs and failure artifacts without code-level changes.
- Existing tests remain the primary safety net for refactors (see `TESTS.md`).
## 4. Core Platform Roadmap (Future / External Services)

This track covers the broader GridPilot competition platform from the concept docs. It is not implemented in this repo and will likely live in separate services/apps that integrate with the automation engine described in `ARCHITECTURE.md`.

Each phase is intentionally high-level to avoid going stale; details belong in future architecture docs for those services.
### Phase P1: League Identity and Seasons

#### Core Platform (future/external)

- Provide a clean league home, as described in the concept docs.
- Implement league identity, schedules, and season configuration:
  - public league pages, schedules, rules, rosters (see sections 3 and 4 in `CONCEPT.md`).
  - admin tools for creating seasons, calendars, and formats (mirroring `RACING.md`).
- Model leagues, seasons, and events as first-class entities that can later produce `HostedSessionConfig` instances for this repo’s automation engine.
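The integration point can be sketched as a pure mapping from a platform-side season event to an automation config. Both shapes below are simplified stand-ins for the real types:

```typescript
// Sketch of the platform → automation handoff. Field names are
// illustrative, not the actual HostedSessionConfig contract.

interface SeasonEvent {
  leagueName: string;
  round: number;
  track: string;
  raceLaps: number;
}

interface HostedSessionConfig {
  sessionName: string;
  track: string;
  raceLaps: number;
}

function toHostedSessionConfig(event: SeasonEvent): HostedSessionConfig {
  return {
    // Derive a human-readable session name from league identity.
    sessionName: `${event.leagueName} – Round ${event.round}`,
    track: event.track,
    raceLaps: event.raceLaps,
  };
}
```

Keeping this mapping on the platform side preserves the boundary: the automation engine only ever receives a finished config, never league or season logic.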
#### Success criteria

- Leagues can exist, configure seasons, and publish schedules independent of automation.
- Competition structure (points presets, drop weeks, team vs. solo) matches the expectations in `COMPETITION.md`.
- There is a clear integration point for calling the automation engine with derived hosted-session configurations (described in the future platform’s own architecture docs).
### Phase P2: Results, Stats, Rating v1, and Team Competition

#### Core Platform (future/external)
- Result ingestion & standings:
  - Implement automated result import and standings as described in `CONCEPT.md` and `STATS.md`.
  - Combine imported results into per-season standings for drivers and teams.
- Team system:
- Stats and inputs for rating:
  - Structure league and season stats so that league results, incidents, team points, and attendance are reliably captured, as described in `STATS.md`.
- GridPilot Rating v1 (platform-side service):
  - Deliver a first usable GridPilot Rating capability consistent with `RATING.md`, computed entirely in Core Platform services.
  - Treat this repo’s automation slice as a producer of trusted, structured session configs and results; do not move any rating logic into the automation engine.
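Combining imported results into per-season standings with drop weeks might look like the following sketch. The data shape and the drop-week rule are illustrative; the real rules come from the platform's competition configuration:

```typescript
// Sketch of per-season standings with drop weeks. Points presets and
// the drop rule are assumptions for illustration.

interface RaceResult {
  driver: string;
  points: number;
}

function seasonStandings(
  rounds: RaceResult[][],
  dropWeeks: number,
): Map<string, number> {
  // Collect each driver's per-round scores.
  const perDriver = new Map<string, number[]>();
  for (const round of rounds) {
    for (const result of round) {
      const scores = perDriver.get(result.driver) ?? [];
      scores.push(result.points);
      perDriver.set(result.driver, scores);
    }
  }
  // Drop the N worst scores per driver, then sum the rest.
  const totals = new Map<string, number>();
  for (const [driver, scores] of perDriver) {
    const kept = [...scores]
      .sort((a, b) => b - a)
      .slice(0, Math.max(scores.length - dropWeeks, 0));
    totals.set(driver, kept.reduce((sum, p) => sum + p, 0));
  }
  return totals;
}
```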
#### Success criteria

- For a league connected to GridPilot, standings and stats update automatically based on iRacing results and provide the inputs required by the rating model in `RATING.md`.
- Teams and drivers have persistent identity and history across seasons, matching the narratives in `DRIVERS.md` and `TEAMS.md`.
- The automation engine in this repo can be treated as a “session executor” feeding reliable results into the platform’s scoring and rating engines, while rating computation remains in Core Platform services.
### Phase P3: Complaints, Penalties, Transparency, and Rating Fairness

#### Core Platform (future/external)
- Complaint intake:
  - Structured complaint flows as defined in `CONCEPT.md` and `RACING.md` (race, drivers, timestamps, evidence).
- Penalty tools:
- Classification updates:
  - Automatic recalculation of results and standings after penalties, aligned with the classification and penalty handling in `STATS.md`.
- Rating dependencies:
  - Ensure that penalty-aware classification, incident handling, and transparency from this phase feed directly into GridPilot Rating as incident and season factors, consistent with `RATING.md`.
  - Keep rating computation fully within Core Platform services; this repo continues to provide only the structured competition data that rating consumes.
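Penalty-aware reclassification can be sketched as applying time penalties and re-sorting the finishing order. The data shape and the time-penalty model are assumptions; the real handling is defined platform-side per `STATS.md`:

```typescript
// Sketch of reclassification after time penalties. A richer model
// (grid drops, disqualifications) is out of scope for this sketch.

interface Classified {
  driver: string;
  totalTimeMs: number; // raw race time
  penaltyMs: number;   // accumulated time penalties
}

function reclassify(results: Classified[]): string[] {
  // Sort by penalty-adjusted time; the input array is left untouched
  // so the pre-penalty classification remains auditable.
  return [...results]
    .sort((a, b) => a.totalTimeMs + a.penaltyMs - (b.totalTimeMs + b.penaltyMs))
    .map((r) => r.driver);
}
```

Keeping the original result array immutable matters for the transparency goal: both the pre-penalty and post-penalty orders stay available for the audit record.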
#### Success criteria
- Complaints and penalties are no longer handled via ad-hoc Discord and spreadsheets.
- Standings, stats, histories and rating signals remain consistent and penalty-aware.
- The platform exposes a transparent, auditable record of decisions, supporting the fairness and rating trust goals from the concept docs.
### Phase P4: Social, Discovery, and Monetization

#### Core Platform (future/external)
- Social and discovery:
- League and driver discovery:
  - Make it easy for drivers to find leagues and teams, and for leagues to find drivers, as described in `DRIVERS.md` and `COMPETITION.md`.
- Monetization (later phase):
  - Add monetization and premium features only after the core competition and trust layers are stable, following the MVP philosophy in `CONCEPT.md`.
#### Success criteria
- Drivers, teams and leagues can discover each other through GridPilot, with identity and history driving trust.
- Social features remain lightweight and purpose-driven, complementing community tools like Discord instead of replacing them.
- Any monetization respects the “clarity, fairness, and admin control” principles in the concept docs.
## 5. Dependencies and Evolution
- Automation & Companion phases (A–E) are largely independent of Core Platform phases and can be completed first.
- Core Platform phases (P1–P4) depend on:
  - A stable automation engine and companion (this repo).
  - Clear APIs/IPC or integration contracts, documented in future platform services with reference to `ARCHITECTURE.md`.
- The automation slice should remain small and robust, so that multiple future services can treat it as a shared “session engine.”
Use this roadmap as a living, checkable guide:
- Update checklists under Automation & Companion as work lands.
- Keep Core Platform phases at the level of concept alignment, not implementation detail.
- When new services are built, they should introduce their own roadmaps and link back to:
  - `CONCEPT.md` and related concept docs.
  - `ARCHITECTURE.md` for the automation slice.
  - `TESTS.md` for testing strategy and coverage expectations.
**Last Updated:** 2025-12-01
**Tracks:** Automation & Companion (this repo) / Core Platform (future/external)
**Current Focus:** Phase A (Solid hosted-session engine & companion baseline)