gridpilot.gg/docs/ROADMAP.md
2025-12-15 13:46:07 +01:00

GridPilot Implementation Roadmap

1. Big Picture and Scope

GridPilot is the competition layer for iRacing leagues, as described in the concept docs.

Those docs describe the full platform: leagues, seasons, standings, stats, rating, complaints, social, teams, discovery, and monetization.

This repository currently implements a narrow, ToS-safe slice of that vision:

  • A desktop Electron companion running on the admin's machine.
  • A hosted-session automation engine that drives the iRacing web UI with Playwright.
  • Domain and application logic for:
    • hosted wizard steps
    • authentication and cookie/session reuse
    • overlays and lifecycle events
    • checkout safety and confirmation.

For the technical slice implemented here, see ARCHITECTURE.md.

Everything else from the concept docs (league/season management, stats, social, complaints, team identity, discovery) is future or external to this repo and will live in other services.

This roadmap is therefore split into two levels:

  • Automation & Companion Roadmap: implementation-level, this repo.
  • Core Platform Roadmap: high-level, future/external services guided by the concept docs.

2. How to Use This Roadmap

  • Treat Automation & Companion items as work inside this repo.
  • Treat Core Platform items as future/external services that will integrate with this automation slice later.
  • Use checklists for near-term Automation & Companion work only.
  • Use the concept docs plus ARCHITECTURE.md as the source of truth for scope boundaries.
  • Keep success criteria testable, using patterns in TESTS.md.

3. Automation & Companion Roadmap (This Repo)

This track is grounded in the existing code and architecture.

Phase A: Solid Hosted-Session Engine & Companion Baseline

Goal: Make the existing hosted-session automation and Electron companion reliable, observable, and easy to run on an admin's machine.

Automation (this repo)

  • Stabilize wizard step orchestration:
  • Strengthen page validation:
    • Extend PageStateValidator to cover edge cases found in real-hosted tests.
    • Ensure selector sets in core/infrastructure/adapters/automation/dom/* match current iRacing UI.
  • Tighten auth/session flows:
  • Companion baseline:
    • Ensure the Electron app boots and connects reliably on supported platforms (see smoke tests in tests/smoke/*).
    • Keep the renderer minimal but clear: session creation, auth state, progress, checkout confirmation.
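To make the stabilization goal concrete, the wizard orchestration described above can be sketched as a pure state machine: each step validates the page before acting and the run stops at the first failed validation. This is a minimal illustrative sketch, not the repo's actual code; `PageStateValidator` and the wizard steps exist in the repo, but the shapes below (`PageState`, `WizardStep`, `runWizard`) are assumptions for illustration.

```typescript
// Illustrative sketch only: a pure stand-in for Playwright-driven wizard steps.
type PageState = { url: string; visibleSelectors: string[] };

interface WizardStep {
  name: string;
  requiredSelectors: string[];          // selectors that must be present before acting
  run: (page: PageState) => PageState;  // pure stand-in for a Playwright action
}

// Stand-in for the repo's PageStateValidator: all required selectors must be visible.
function validate(step: WizardStep, page: PageState): boolean {
  return step.requiredSelectors.every(s => page.visibleSelectors.includes(s));
}

// Runs steps in order; reports which steps completed and where orchestration stopped.
function runWizard(
  steps: WizardStep[],
  initial: PageState
): { completed: string[]; stoppedAt?: string; page: PageState } {
  const completed: string[] = [];
  let page = initial;
  for (const step of steps) {
    if (!validate(step, page)) {
      return { completed, stoppedAt: step.name, page }; // stop instead of clicking blind
    }
    page = step.run(page);
    completed.push(step.name);
  }
  return { completed, page };
}
```

The point of the sketch is the stopping rule: a failed validation halts the run with a named step, which is what makes intermittent failures diagnosable rather than silent.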

Success criteria

  • All unit, integration and E2E tests for existing flows are green (see TESTS.md).
  • Full hosted-session workflows (fixture-based and real-hosted where enabled) complete without intermittent failures.
  • Auth/login flow is ToS-safe, matches the “helper, not cheat” model in CONCEPT.md, and remains visible to the admin.
  • Companion can run a full hosted-session creation with no manual DOM clicks beyond login.

Phase B: Overlay & Lifecycle Clarity

Goal: Make the automation lifecycle and overlay behavior predictable and trustworthy for admins.

Automation (this repo)

  • Lifecycle events:
  • Overlay UX:
    • Ensure SessionProgressMonitor clearly maps steps 1–18 to admin-understandable labels.
    • Align overlay messaging with admin QoL themes in ADMINS.md (less repetitive work, more transparency).
  • Error surfacing:
    • Standardize how validation and runtime errors are propagated from domain → application → companion UI.
    • Ensure failures are actionable (what failed, where, and what the admin can retry).
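One way to standardize error propagation across layers, as described above, is a single error envelope that every layer fills in before handing the failure upward. This is a hypothetical sketch; the field names (`layer`, `step`, `reason`, `retryable`) are assumptions, not the repo's actual types.

```typescript
// Hypothetical error envelope propagated from domain -> application -> companion UI.
type AutomationError = {
  layer: "domain" | "application" | "companion";
  step: string;       // which wizard step or lifecycle phase failed
  reason: string;     // human-readable cause
  retryable: boolean; // whether the admin can simply retry
};

// Maps an error to the actionable message an admin would see in the overlay:
// what failed, where, and what they can do about it.
function toAdminMessage(err: AutomationError): string {
  const action = err.retryable
    ? "You can retry this step."
    : "Manual intervention is required.";
  return `Step "${err.step}" failed (${err.reason}). ${action}`;
}
```

Keeping the envelope identical in every layer means the companion UI never has to guess at error shapes, which is the prerequisite for "actionable without reading logs."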

Success criteria

  • Overlay and progress UI always reflect the underlying session state without lag or missing steps.
  • Admin can see where automation stopped and why, without reading logs.
  • Lifecycle behavior is fully covered in tests (overlay integration, companion workflow E2E), as referenced from TESTS.md.

Phase C: Checkout Safety Path

Goal: Make every credit/checkout-like action go through an explicit, traceable confirmation path that admins can trust.

Automation (this repo)

  • Enrich checkout detection:
  • Harden confirmation logic:
  • Failure paths:
    • Verify that any parsing failure or ambiguous state results in a safe stop, not a blind click.
    • Add tests to cover “weird but possible” UI states observed via fixtures.
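The "safe stop, not a blind click" rule above can be expressed as a single decision function: checkout proceeds only when the parsed state is unambiguous and the admin has confirmed on screen; everything else stops. A minimal sketch, with assumed type names (`CheckoutState`, `decideCheckout`):

```typescript
// Hypothetical checkout gate: any ambiguity or missing confirmation -> safe stop.
type CheckoutState =
  | { kind: "parsed"; credits: number } // price parsed successfully
  | { kind: "ambiguous"; raw: string }; // anything we could not interpret

type Decision = "proceed" | "safe-stop";

function decideCheckout(state: CheckoutState, adminConfirmed: boolean): Decision {
  if (state.kind !== "parsed") return "safe-stop"; // parsing failure -> never click
  if (!adminConfirmed) return "safe-stop";         // no on-screen confirmation -> never click
  return "proceed";
}
```

Encoding the rule as a pure function makes the failure-path assertions mentioned above trivial to test exhaustively.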

Success criteria

  • No automation path can perform a checkout-like action without an on-screen confirmation dialog.
  • All credit-related flows are covered in tests (unit, integration, and companion E2E) with failure-path assertions.
  • Behavior matches the safety and trust requirements in ADMINS.md and RACING.md.

Phase D: Additional Hosted Workflows & Admin QoL

Goal: Extend automation beyond the initial hosted-session wizard happy path while staying within the same ToS-safe browser automation model.

Automation (this repo)

  • Map additional hosted workflows:
    • Identify additional iRacing hosted flows that align with admin QoL needs from ADMINS.md (e.g. practice-only, league-specific hosted sessions).
    • Encode them as configurations on top of HostedSessionConfig where feasible.
  • Workflow templates:
    • Provide a small set of reusable presets (e.g. “standard league race”, “test session”) that can later be populated by external services.
  • Resilience work:
    • Improve behavior under partial UI changes (selectors, labels) using the validation patterns from PageStateValidator.
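The preset idea above can be sketched as named partial overrides merged onto a base configuration. `HostedSessionConfig` is the repo's config type, but the fields and preset names below are illustrative assumptions, not its actual shape:

```typescript
// Illustrative stand-in for the repo's HostedSessionConfig; fields are assumptions.
interface HostedSessionConfig {
  sessionName: string;
  practiceMinutes: number;
  raceLaps: number;
  passwordProtected: boolean;
}

const basePreset: HostedSessionConfig = {
  sessionName: "GridPilot Session",
  practiceMinutes: 30,
  raceLaps: 20,
  passwordProtected: true,
};

// A preset is a named partial override of the base configuration, so external
// services can later populate the same shape.
const presets: Record<string, Partial<HostedSessionConfig>> = {
  "standard league race": { raceLaps: 25 },
  "test session": { sessionName: "Test Session", raceLaps: 0, practiceMinutes: 60 },
};

function buildConfig(presetName: string): HostedSessionConfig {
  const override = presets[presetName];
  if (!override) throw new Error(`Unknown preset: ${presetName}`);
  return { ...basePreset, ...override };
}
```

Because presets are data rather than code paths, adding a new hosted workflow becomes a configuration change instead of new orchestration logic.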

Success criteria

  • At least one additional hosted workflow beyond the baseline wizard is supported end to end.
  • Admins can choose between a small number of well-tested presets that reflect league use-cases from COMPETITION.md.
  • Automation remains fully ToS-safe (no gameplay-affecting automation, no desktop/sim process interference), as reiterated in ARCHITECTURE.md.

Phase E: Operationalization & Packaging

Goal: Make it realistic for real league admins to install, configure and operate the companion and automation engine.

Automation (this repo)

  • Packaging & configuration:
    • Ensure Electron packaging, browser mode configuration and logging settings match the expectations in TECH.md.
    • Provide a minimal operator-facing configuration story (environment, headless vs headed, fixture vs live).
  • Observability:
    • Ensure logs and failure artifacts are sufficient for debugging issues without code changes.
  • Documentation:
    • Keep operator-focused docs short and aligned with admin benefits from ADMINS.md.
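The operator-facing configuration story named above (environment, headless vs headed, fixture vs live) could be as small as a single function over environment variables. The variable names (`GRIDPILOT_MODE`, `GRIDPILOT_HEADLESS`, `GRIDPILOT_LOG_LEVEL`) are hypothetical, chosen here only to illustrate safe defaults:

```typescript
// Hypothetical operator configuration; env variable names are assumptions.
interface CompanionConfig {
  headless: boolean;
  mode: "fixture" | "live";
  logLevel: "debug" | "info" | "warn" | "error";
}

function loadConfig(env: Record<string, string | undefined>): CompanionConfig {
  return {
    // Headed mode only when explicitly requested; headless is the default.
    headless: env.GRIDPILOT_HEADLESS !== "false",
    // Default to fixture mode so nothing touches the live iRacing UI by accident.
    mode: env.GRIDPILOT_MODE === "live" ? "live" : "fixture",
    logLevel:
      (env.GRIDPILOT_LOG_LEVEL as CompanionConfig["logLevel"] | undefined) ?? "info",
  };
}
```

Defaulting to fixture mode keeps a fresh install harmless until the operator opts into live automation, which matches the safety posture of the rest of this roadmap.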

Success criteria

  • A technically inclined admin can install the companion, configure automation mode, and run a full hosted-session workflow using only the documentation in this repo.
  • Most operational issues can be diagnosed via logs and failure artifacts without code-level changes.
  • Existing tests remain the primary safety net for refactors (see TESTS.md).

4. Core Platform Roadmap (Future / External Services)

This track covers the broader GridPilot competition platform from the concept docs. It is not implemented in this repo and will likely live in separate services/apps that integrate with the automation engine described in ARCHITECTURE.md.

Each phase is intentionally high-level to avoid going stale; details belong in future architecture docs for those services.

Phase P1: League Identity and Seasons

Core Platform (future/external)

  • Provide a clean league home, as described in the concept docs.
  • Implement league identity, schedules and season configuration:
    • public league pages, schedules, rules, rosters (see sections 3 and 4 in CONCEPT.md).
    • admin tools for creating seasons, calendars, formats (mirroring RACING.md).
  • Model leagues, seasons and events as first-class entities that can later produce HostedSessionConfig instances for this repo's automation engine.
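The integration point described in the last bullet is essentially a mapping from a platform-side season event to the config shape the automation engine consumes. A minimal sketch under assumed shapes; neither `SeasonEvent` nor this reduced `HostedSessionConfig` is the real type:

```typescript
// Hypothetical platform-side entity; fields are illustrative assumptions.
interface SeasonEvent {
  leagueName: string;
  round: number;
  track: string;
  laps: number;
}

// Reduced illustrative stand-in for this repo's HostedSessionConfig.
interface HostedSessionConfig {
  sessionName: string;
  track: string;
  raceLaps: number;
}

// Derives an automation-ready session config from a scheduled season event.
function toHostedSessionConfig(ev: SeasonEvent): HostedSessionConfig {
  return {
    sessionName: `${ev.leagueName} Round ${ev.round}`,
    track: ev.track,
    raceLaps: ev.laps,
  };
}
```

Keeping this mapping in the platform (not the automation engine) preserves the "session executor" boundary described in Section 5.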

Success criteria

  • Leagues can exist, configure seasons and publish schedules independent of automation.
  • Competition structure (points presets, drop weeks, team vs solo) matches the expectations in COMPETITION.md.
  • There is a clear integration point for calling the automation engine with derived hosted-session configurations (described in the future platform's own architecture docs).

Phase P2: Results, Stats, Rating v1, and Team Competition

Core Platform (future/external)

  • Result ingestion & standings:
    • Implement automated result import and standings as described in CONCEPT.md and STATS.md.
    • Combine imported results into per-season standings for drivers and teams.
  • Team system:
    • Implement team profiles and constructors-style championships as in TEAMS.md and team sections of RACING.md.
  • Stats and inputs for rating:
    • Structure league and season stats so that league results, incidents, team points and attendance are reliably captured as described in STATS.md.
  • GridPilot Rating v1 (platform-side service):
    • Deliver a first usable GridPilot Rating capability consistent with RATING.md, computed entirely in core platform services.
    • Treat this repo's automation slice as a producer of trusted, structured session configs and results; do not move any rating logic into the automation engine.
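The result-ingestion and standings work above reduces to an aggregation: per-driver points across rounds, with the lowest scores discarded when drop weeks apply (a concept the roadmap attributes to COMPETITION.md). A hedged sketch; the scoring shape is an assumption:

```typescript
// Illustrative per-season standings aggregation with drop weeks.
type RaceResult = { driver: string; points: number };

// rounds: one RaceResult[] per race; dropWeeks: number of lowest scores discarded per driver.
function standings(rounds: RaceResult[][], dropWeeks: number): Map<string, number> {
  const perDriver = new Map<string, number[]>();
  for (const round of rounds) {
    for (const r of round) {
      const scores = perDriver.get(r.driver) ?? [];
      scores.push(r.points);
      perDriver.set(r.driver, scores);
    }
  }
  const totals = new Map<string, number>();
  for (const [driver, scores] of perDriver) {
    // Keep the best (n - dropWeeks) scores and sum them.
    const kept = [...scores]
      .sort((a, b) => b - a)
      .slice(0, Math.max(0, scores.length - dropWeeks));
    totals.set(driver, kept.reduce((sum, p) => sum + p, 0));
  }
  return totals;
}
```

The same aggregation over team identifiers instead of drivers would yield the constructors-style championship mentioned for the team system.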

Success criteria

  • For a league connected to GridPilot, standings and stats update automatically based on iRacing results and provide the inputs required by the rating model in RATING.md.
  • Teams and drivers have persistent identity and history across seasons, matching the narratives in DRIVERS.md and TEAMS.md.
  • The automation engine in this repo can be treated as a “session executor” feeding reliable results into the platform's scoring and rating engines, while rating computation remains in Core Platform services.

Phase P3: Complaints, Penalties, Transparency, and Rating Fairness

Core Platform (future/external)

  • Complaint intake:
    • Structured complaint flows as defined in CONCEPT.md and RACING.md (race, drivers, timestamps, evidence).
  • Penalty tools:
  • Classification updates:
    • Automatic recalculation of results and standings after penalties, aligned with the classification and penalty handling in STATS.md.
  • Rating dependencies:
    • Ensure that penalty-aware classification, incident handling and transparency from this phase directly feed into GridPilot Rating as incident and season factors, consistent with RATING.md.
    • Keep rating computation fully within Core Platform services; this repo continues to provide only the structured competition data that rating consumes.
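Penalty-aware reclassification, as described above, amounts to applying rulings to raw results and recomputing positions so standings stay consistent. A minimal sketch using time penalties; the field names are illustrative assumptions:

```typescript
// Hypothetical classified result with an applied time penalty.
type Classified = { driver: string; totalTimeMs: number; penaltyMs: number };

// Recomputes the finishing order after penalties: lowest penalized time wins.
function reclassify(results: Classified[]): string[] {
  return [...results]
    .sort((a, b) => (a.totalTimeMs + a.penaltyMs) - (b.totalTimeMs + b.penaltyMs))
    .map(r => r.driver);
}
```

Because the function is pure over (raw time, penalty), every ruling leaves an auditable input/output pair, which supports the transparency goal of this phase.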

Success criteria

  • Complaints and penalties are no longer handled via ad-hoc Discord and spreadsheets.
  • Standings, stats, histories and rating signals remain consistent and penalty-aware.
  • The platform exposes a transparent, auditable record of decisions, supporting the fairness and rating trust goals from the concept docs.

Phase P4: Social, Discovery, and Monetization

Core Platform (future/external)

  • Social and discovery:
    • Implement the lightweight social and discovery features from SOCIAL.md and league/team profile extensions in TEAMS.md.
  • League and driver discovery:
    • Make it easy for drivers to find leagues and teams, and for leagues to find drivers, as described in DRIVERS.md and COMPETITION.md.
  • Monetization (later phase):
    • Add monetization and premium features only after the core competition and trust layers are stable, following the MVP philosophy in CONCEPT.md.

Success criteria

  • Drivers, teams and leagues can discover each other through GridPilot, with identity and history driving trust.
  • Social features remain lightweight and purpose-driven, complementing community tools like Discord instead of replacing them.
  • Any monetization respects the “clarity, fairness, and admin control” principles in the concept docs.

5. Dependencies and Evolution

  • Automation & Companion phases (A–E) are largely independent of Core Platform phases and can be completed first.
  • Core Platform phases (P1–P4) depend on:
    • A stable automation engine and companion (this repo).
    • Clear APIs/IPC or integration contracts that should be documented in future platform services, referencing ARCHITECTURE.md.
  • The automation slice should remain small and robust, so that multiple future services can treat it as a shared “session engine.”

Use this roadmap as a living, checkable guide:

  • Update checklists under Automation & Companion as work lands.
  • Keep Core Platform phases at the level of concept alignment, not implementation detail.
  • When new services are built, they should introduce their own roadmaps and link back to this roadmap and ARCHITECTURE.md.

Last Updated: 2025-12-01
Tracks: Automation & Companion (this repo) / Core Platform (future/external)
Current Focus: Phase A (Solid hosted-session engine & companion baseline)