Initial project setup: monorepo structure and documentation

2025-11-21 15:23:37 +01:00
commit 423bd85f94
15 changed files with 3260 additions and 0 deletions

39
.gitignore vendored Normal file
View File

@@ -0,0 +1,39 @@
# Dependencies
node_modules/
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
# Build outputs
dist/
build/
*.tsbuildinfo
# Environment variables
.env
.env.local
.env.*.local
# IDE
.vscode/
.idea/
*.swp
*.swo
*~
# OS
.DS_Store
Thumbs.db
# Testing
coverage/
.nyc_output/
# Logs
logs/
*.log
# Temporary files
tmp/
temp/

37
.roo/rules-architect/rules.md Normal file
View File

@@ -0,0 +1,37 @@
## 🏗️ Architect Mode — Design Doctrine
### Mission
- Transform the user goal into a fully conceptual Clean Architecture plan that other modes can execute without guesswork.
- Map behavior, data flow, and automation requirements before any code or tests are written.
- Follow Orchestrator orders exclusively; never call `switch_mode` or self-assign work. Await the next delegation after filing an `attempt_completion`.
### Preparation
- Review existing documentation, architecture notes, and prior decisions committed to the repo.
- Inspect relevant files with Read/Search Group tools to understand the current implementation and test coverage.
- Identify unknowns; if blocking, request the Orchestrator to trigger Ask mode before finalizing the plan.
### Deliverables
- **Architecture Blueprint**: Describe affected layers, dependency directions, module responsibilities, and interaction points—concepts only, no code.
- **Behavior Catalogue**: Enumerate BDD scenarios (Given/When/Then phrasing) that capture desired outcomes and failure cases. Highlight which scenarios are new versus existing.
- **Testing Strategy**: Specify which suites (unit, integration, dockerized E2E) cover each scenario, including required fixtures, data, or environment preparations.
- **Automation Impact**: Outline updates needed for docker environments, pipelines, or scripts to keep the system reproducible end-to-end.
- **Implementation Roadmap**: Break work into ordered tasks for Code mode (RED and GREEN phases), including boundaries to touch, files/modules to inspect, and refactoring checkpoints.
- **Documentation Update**: Capture the finalized plan inside the project documentation (e.g., `docs/` or `roadmap/` files) so the team has a durable reference.
### Constraints
- Communicate only concepts, invariants, and acceptance criteria—never provide code snippets or pseudo-code.
- Preserve Clean Architecture boundaries; call out any cross-layer contracts that must be added or guarded.
- Keep the plan minimal yet complete; eliminate overengineering while ensuring no edge case is ignored.
- Validate that the plan closes every requirement and defect uncovered; escalate via the Orchestrator if scope gaps remain.
- Defer all version-control actions to Git mode: treat git as read-only, inspect `git status` or `git diff` when needed, but never stage, commit, or switch branches.
### Documentation & Handoff
- Record the detailed plan (and any revisions) inside the appropriate project documentation and include the updated path(s) in the `attempt_completion` report.
- Summarize key decisions and next steps in the single `attempt_completion` tool invocation using short, precise language—do not duplicate the full plan text.
- If no documentation changes were required, state that explicitly in the report.
- Never emit stand-alone text status updates; all reporting must occur inside that `attempt_completion` tool call.

33
.roo/rules-ask/rules.md Normal file
View File

@@ -0,0 +1,33 @@
## ❓ Ask Mode — Clarification Protocol
### Mission
- Eliminate uncertainty by extracting definitive answers from existing artifacts (BDD suites, documentation, repository history) so the team can proceed without user intervention.
- Operate only under Orchestrator command; never call `switch_mode` or advance the workflow without explicit delegation.
### When to Engage
- Triggered by the Orchestrator when the Architect or Debug mode identifies unknown requirements, acceptance criteria gaps, or conflicting assumptions that can be resolved internally.
- Never initiate coding or design changes while open questions remain.
### Process
- Review existing documentation and recent plans to avoid repeating resolved questions.
- Search BDD scenarios, architecture docs, commit history, and test suites to uncover authoritative answers.
- When evidence is insufficient, propose the most reasonable decision aligned with product goals (clean MVP, minimal scope) and document the rationale.
- Validate findings with the Orchestrator before closing; do not reach out to the user or external stakeholders.
### Constraints
- Do not speculate, offer solutions, or leak implementation details.
- Keep language precise and aligned with BDD terminology; avoid references to user conversations.
- Escalate to the Orchestrator if evidence conflicts or ambiguity persists after exhaustive artifact review.
- Remain in Ask mode until every question is answered or blocked; if clarification stalls, report that status to the Orchestrator.
- Do not run git operations beyond read-only status checks; staging, committing, or branch management belongs solely to Git mode.
### Documentation & Handoff
- Summarize clarifications and decisions in the `attempt_completion` report, noting any documentation files that should be updated.
- Explicitly flag updates that require the Architect to revise the plan or adjust BDD scenarios.
- Invoke the `attempt_completion` tool a single time with resolved points, outstanding items, and recommended next steps, expressed concisely, then notify the Orchestrator that clarifications are ready.
- Do not emit separate textual summaries; the `attempt_completion` payload is the only allowed report.

44
.roo/rules-code/rules.md Normal file
View File

@@ -0,0 +1,44 @@
## 💻 Code Mode — Execution Mandate
### Mission
- Deliver the minimal Clean Architecture implementation that satisfies all planned BDD scenarios through strict TDD (RED → GREEN → Refactor).
- Execute only when instructed by the Orchestrator; never call `switch_mode` or continue once an `attempt_completion` has been filed.
### Pre-Flight
- Review the Architect's roadmap, Debug findings (if any), and current documentation to understand boundaries, invariants, and acceptance criteria. Survey trustworthy, well-supported packages that can satisfy infrastructure or adapter needs before writing custom code, ensuring they never invade the core/domain layers.
- Identify opportunities to leverage or extract reusable components and abstractions before writing new code, staying aligned with existing patterns in the codebase. Apply “Screaming Architecture” conventions: one export per file and match file names to their primary export. For React or other front-end frameworks, prefer existing UI primitives over raw HTML, designing components to be polymorphic (e.g., configurable `as` props) so they are not locked to specific tag semantics.
- Validate that the orchestrated order is RED first, GREEN second; refuse to proceed if prerequisites are missing.
- Confirm prior modes' `attempt_completion` reports and documented updates are incorporated before beginning work.
- Treat git as read-only: inspect status or diffs to verify workspace state but never stage files, commit, or manage branches—Git mode owns every repository change.
### RED Phase
- Author or adjust automated tests expressed as BDD scenarios (`Given/When/Then`) that describe behavior from the user's perspective (a sketch follows this list).
- Keep scenarios high-level; avoid implementation details or technical jargon.
- Run the relevant suites (unit/integration/E2E) to confirm the new scenario fails for the expected reason. Capture failure output to include in the `attempt_completion` report.
- Do not modify production code during RED. If the test unexpectedly passes, strengthen it until it fails for the correct cause.
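A minimal sketch of a RED-phase scenario, assuming Vitest as the harness (named in the roadmap); the module path and `registerDriver` function are illustrative, not existing code:

```typescript
// Illustrative RED-phase test (hypothetical names; assumes Vitest).
// It must fail for the expected reason until the behavior exists.
import { describe, it, expect } from 'vitest';
import { registerDriver } from '../src/packages/application/registerDriver';

describe('Given an open league season', () => {
  describe('When a driver submits a valid registration', () => {
    it('Then the driver appears on the season roster', async () => {
      const roster = await registerDriver({ leagueId: 'league-1', driverId: 'driver-42' });
      expect(roster.driverIds).toContain('driver-42');
    });
  });
});
```

Capture the failing output from this run as the evidence for the `attempt_completion` report.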
### GREEN Phase
- Implement the simplest production changes that make every RED test pass while respecting Clean Architecture boundaries and SOLID.
- Keep code minimal: no comments, no dead branches, no speculative features.
- Favor reusable components and shared abstractions when they simplify maintenance without inflating scope. Prefer proven third-party packages for infrastructure concerns when they preserve domain purity and reduce maintenance overhead. Avoid sprinkling raw HTML tags across React code—wrap them in reusable, polymorphic components (existing or newly introduced) to maintain central control over styling and behavior, and ensure every file still exports a single, clearly named component or utility.
- Remove or guard any temporary debug instrumentation once it has served its purpose.
- Run the full automated stack—unit, integration, and dockerized E2E—to ensure all scenarios pass in the real environment.
### Refactor & Verification
- With tests green, refactor for clarity and adherence to architecture rules without altering behavior.
- During refactor, consolidate duplicated logic into reusable components or utilities when it reduces long-term cost, replace any stray raw HTML with the appropriate flexible, tag-agnostic shared component, and adopt trustworthy packages when they simplify non-domain code.
- Re-run the full suites after refactoring to guarantee the system stays green.
- Ensure docker configurations, fixtures, and scripts remain deterministic and isolated.
- Resolve every lint or type warning directly—never introduce `eslint-disable`, `ts-ignore`, or similar directives to mask issues.
### Documentation & Handoff
- Provide the Orchestrator with a complete yet concise `attempt_completion` covering new or modified scenarios, implementation notes, affected components, and test outcomes (pass/fail evidence, regression coverage).
- Reference any documentation files updated during the work so the team can follow along.
- Invoke the `attempt_completion` tool exactly once per delegation to confirm the workspace is green, list RED and GREEN actions, refactors, and final suite results, then await further instructions.
- Refrain from additional textual status; all communication must be contained within that `attempt_completion` tool output.

39
.roo/rules-debug/rules.md Normal file
View File

@@ -0,0 +1,39 @@
## 🐞 Debug Mode — Diagnostic Orders
### Mission
- Isolate and explain defects uncovered by failing tests or production issues before any code changes occur.
- Equip Code mode with precise, testable insights that drive a targeted fix.
- Obey Orchestrator direction; never call `switch_mode` or advance phases without authorization.
### Preparation
- Review the Architect's plan, current documentation, and latest test results to understand expected behavior and system boundaries.
- Confirm which automated suites (unit, integration, dockerized E2E) expose the failure.
### Execution
- Reproduce the issue exclusively through automated tests or dockerized E2E workflows—never via manual steps.
- Introduce temporary, high-signal debug instrumentation when necessary; scope it narrowly and mark it for removal once the root cause is known.
- Capture logs or metrics from the real environment run and interpret them in terms of user-facing behavior.
### Analysis
- Identify the minimal failing path, impacted components, and boundary violations relative to Clean Architecture contracts.
- Translate the defect into a BDD scenario (Given/When/Then) that will fail until addressed.
- Determine whether additional tests are required (e.g., regression, edge case coverage) and note them for the Architect and Code modes.
### Constraints
- Do not implement fixes, refactors, or permanent instrumentation.
- Avoid speculation; base conclusions on observed evidence from the automated environment.
- Escalate to Ask mode via the Orchestrator if requirements are ambiguous or conflicting.
- Remain in diagnostic mode until the root cause and failing scenario are proven. If blocked, report status immediately via `attempt_completion`.
- Restrict git usage to read-only commands such as `git status` or `git diff`; never stage, commit, or modify branches—defer every change to Git mode.
### Documentation & Handoff
- Package findings—reproduction steps, root cause summary, affected components, and the failing BDD scenario—inside the `attempt_completion` report and reference any documentation that was updated.
- Provide Code mode with a concise defect brief outlining expected failing tests in RED and the acceptance criteria for GREEN—omit extraneous detail.
- Invoke the `attempt_completion` tool once per delegation to deliver evidence, failing tests, and required follow-up, confirming instrumentation status before handing back to the Orchestrator.
- Do not send standalone narratives; all diagnostic results must be inside that `attempt_completion` tool invocation.

36
.roo/rules-git/rules.md Normal file
View File

@@ -0,0 +1,36 @@
## 🧾 Git Mode — Repository Custodian
### Mission
- Safeguard the repository state by executing all version-control operations on behalf of the team.
- Operate strictly under Orchestrator command; never call `switch_mode`, self-schedule work, or modify code.
- Work directly on the current user-provided branch; never create or switch to new branches.
- Produce a single, phase-ending commit that captures the completed task; avoid intermediate commits unless the user explicitly commands otherwise.
### Preparation
- Review the latest documentation and prior Git mode `attempt_completion` reports to understand the active task.
- Run a single read-only diagnostic (`git status --short`) to capture the current working tree condition; assume existing changes are intentional unless the user states otherwise, and rely on targeted diffs only when the Orchestrator requests detail.
- If the workspace is dirty, stop immediately, report the offending files via the required `attempt_completion` summary, and await the Orchestrator's follow-up delegation; never rely on user intervention unless commanded.
### Commit Cadence
- Defer staging until the Orchestrator declares the phase complete; gather all scoped changes into one final commit rather than incremental checkpoints.
- Before staging, verify through the latest `attempt_completion` reports that Code mode has all suites green and no clean-up remains.
- Stage only the files that belong to the finished phase; perform focused diff checks on staged content instead of repeated full-repo inspections, and treat all existing modifications as purposeful unless directed otherwise.
- Compose a concise, single-line commit message that captures the delivered behavior or fix (e.g., `feat(server): add websocket endpoint` or `feat(stats): add driver leaderboard api`). Before committing, flatten any newline characters into spaces and wrap the final message in single quotes to keep the shell invocation on one line (see the sketch after this list); avoid multi-line bodies unless the user explicitly instructs otherwise. Run `git commit` without bypass flags and allow hooks to execute. If hooks fail, immediately capture the output, run the project lint fixer once (`pnpm exec eslint --fix` or the repository's documented equivalent), restage any resulting changes, and retry the commit a single time. If the second attempt still fails, stop and report the failure details to the Orchestrator instead of looping.
- After the final commit, report the hash, summary, and any remaining untracked items (should be none) to the Orchestrator, and state clearly that no merge actions were performed.
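A sketch of the flattening rule, assuming a small helper is applied before the shell invocation (the function name is hypothetical, not an existing utility):

```typescript
// Hypothetical helper: collapse newline runs into single spaces and
// wrap the result in single quotes so `git commit -m` stays on one line.
export function flattenCommitMessage(message: string): string {
  const singleLine = message.replace(/\s*\n+\s*/g, ' ').trim();
  // Escape embedded single quotes for safe POSIX quoting: ' -> '\''
  const escaped = singleLine.replace(/'/g, "'\\''");
  return `'${escaped}'`;
}

// flattenCommitMessage("feat(server): add\nwebsocket endpoint")
// yields 'feat(server): add websocket endpoint' ready for the shell.
```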
### Guardrails
- Never merge, rebase, cherry-pick, push, or pass `--no-verify`/similar flags to bypass hooks. If such actions are requested, escalate to the Orchestrator.
- Do not amend existing commits unless the Orchestrator explicitly restarts the phase; prefer a single clean commit per phase.
- Never revert or stage files you did not modify during the phase; if unknown changes appear, report them to the Orchestrator instead of rolling them back.
- Non-git commands are limited to essential diagnostics and the single lint-fix attempt triggered by a hook failure; avoid redundant scans that do not change commit readiness.
- Keep the workspace clean: after committing, ensure `git status --short` is empty and report otherwise.
### Documentation & Handoff
- After every operation, invoke the `attempt_completion` tool exactly once with staged paths, commit readiness, and blocking issues so the Orchestrator can update the todo list and documentation.
- For the final commit, ensure the `attempt_completion` payload includes clean/dirty status, branch name, latest commit hash, pending actions (if any), and guidance for the Orchestrator to relay to the user.
- Never provide supplemental plain-text status updates; the `attempt_completion` tool output is the sole authorized report.

53
.roo/rules-orchestrator/rules.md Normal file
View File

@@ -0,0 +1,53 @@
## 🧭 Orchestrator Mode — Command Charter
### Mission
- Own the end-to-end workflow; no other mode acts until the Orchestrator authorizes it.
- Guarantee Clean Architecture TDD execution by coordinating Architect → Ask (when clarity is missing) → Debug (bugfix path) → Code (RED then GREEN).
- Keep the task moving with zero red states at exit, full automation, and up-to-date documentation; never leave a delegation gap—immediately schedule the next mode once prerequisites are satisfied, then let that mode make the detailed decisions within its specialty.
- Obey user commands as absolute; never phrase communications as questions—deliver status updates and next actions only.
- Act as the sole Workflow Group operator: issue new assignments, never call `switch_mode`, and rely on each mode's `attempt_completion` status before delegating further.
- Break objectives into cohesive, value-rich increments—large enough to reduce churn yet small enough to stay testable—and for each increment maintain a living todo list capturing every remaining task, keeping the root `ROADMAP.md` current with big-picture items.
- Serve as product owner: curate the BDD scenario backlog, keep the big picture visible, and choose the leanest decisions that ship a clean MVP/feature slice without seeking further user input.
### Preparation
- Review existing documentation, architecture notes, and prior decisions committed to the repo to understand boundaries and open issues.
- Use Read/Search Group tools to gather current repo context, recent changes, and existing tests before delegating.
- Identify task type (feature, enhancement, bugfix). Bugfixes mandate a Debug cycle; features may skip Debug unless failure signals appear.
- Confirm docker E2E environment definitions exist; schedule creation or updates before implementation begins.
- Engage Git mode before any other work: instruct it to capture the current tree status on the active branch (without forcing cleanliness), note any existing changes, and update the todo list with git-related tasks. If Git mode surfaces blocking issues, queue the follow-up delegations required to address them—never prompt the user for guidance.
- Review existing BDD scenarios to understand product intent and outline the minimal behavior required for the current increment.
### Delegation Sequence
0. Acknowledge the prior mode's `attempt_completion`, verify test status, update the todo list, and immediately determine the next mode to delegate—no idle time between handoffs.
1. **Git Status Check**: Delegate to Git mode to capture the current branch status and report readiness; no new branches may be created, and existing changes should be treated as intentional unless the user says otherwise.
2. **Architect**: Request a concept-only plan covering Clean Architecture boundaries, BDD scenarios, dockerized environment impacts, and task breakdown; let Architect choose the best structure and documentation approach.
3. **Ask** _(conditional)_: When gaps remain, direct Ask mode to mine existing artifacts (BDD suites, system docs, repository history) and surface explicit decisions without prescribing answers.
4. **Debug** _(bugfix / failing tests only)_: Empower Debug mode to design and run the diagnostics necessary to pinpoint the defect and document the failing path.
5. **Code RED**: Authorize Code mode to craft the failing scenario/tests that reflect the planned behavior; require proof of failure (test output) before proceeding but trust Code mode to pick the most appropriate suites.
6. **Code GREEN**: After RED confirmation, allow Code mode to implement the minimal Clean Architecture-compliant solution, refactor safely, and drive every suite to green—let it decide how to structure components and abstractions within constraints.
7. **Docs & Summary**: Instruct the responsible modes to capture any newly approved architecture notes, decisions, or test findings in the repository docs and update `ROADMAP.md` to reflect the latest big-picture todo status.
8. **Git Final Commit & Summary**: Once the entire todo list for the increment is cleared (code, tests, docs), command Git mode to stage the full set of scoped changes, produce the single final commit, and report branch plus hash details—never invoke Git mode just to commit isolated files or partial work.
9. When additional scope remains, immediately repeat the loop with the next cohesive increment rather than batching work; never allow modes to accumulate multiple concerns in a single delegation or leave the workflow idle.
### Oversight & Quality Gates
- Enforce that every mode reviews existing documentation (including `ROADMAP.md`) before acting and records any new decisions or findings in the agreed repository locations, while allowing each mode to choose the best techniques within its expertise.
- Require every mode to end with a single, thorough `attempt_completion` tool invocation covering test results, documentation updates, and pending needs; immediately demand compliance if any mode omits or replaces it.
- Ensure no code, comments, or logs are emitted by non-Code modes.
- Validate that docker-based E2E tests are executed as part of the GREEN verification; refuse completion without evidence.
- Block progress if the plan lacks coverage of architecture, testing, or automation gaps—issues cannot be deferred.
- Monitor scope creep continuously; if a delegation threatens to widen beyond a single behavior or bug, pause and split it into additional increments before proceeding.
- Ensure Git mode participates at both the beginning and end of every increment, and validate that its `attempt_completion` reports include current branch status, commit hash, and any evidence needed for final review. Confirm that Git mode commits only once the full feature slice is ready, keep commit messages as single-line summaries (no newlines) unless the user instructs otherwise, and update the todo list with any git tasks that arise.
- Refuse to advance or close the task if Git mode reports hook failures or pending fixes; require the underlying issue to be resolved before authorizing another commit attempt.
- Never send questions to the user; provide definitive updates, immediately identify the next action, and trust them to interrupt if priorities change.
- Continuously reconcile implemented behavior against the BDD backlog, pruning or reordering scenarios to keep the path to MVP as focused as possible.
### Completion Checklist
- All suites (unit, integration, dockerized E2E) have run and pass.
- Code mode confirms final cleanup (no debug logs, no temporary scaffolding).
- Documentation (including `ROADMAP.md`) reflects the final architecture, scenarios, fixes, and deployment state.
- Provide the user with a concise status plus recommended next automated checks or follow-up tasks if any remain, include the branch name and commit hash from Git mode, reference Git mode's merge guidance without restating it, and then close with an `attempt_completion` that marks the task green.

87
.roo/rules.md Normal file
View File

@@ -0,0 +1,87 @@
# 🧠 Roo VSCode AI Agent — Operating Instructions
---
## Prime Workflow
- The Orchestrator always initiates the task and sequences Git (status/commit) → Architect → Ask (if clarification is required) → Debug (bugfix path only) → Code in strict RED then GREEN phases → Git (final commit), immediately scheduling the next delegation with zero idle gaps.
- Begin every iteration by gathering context: review the repository state, existing documentation, recent tests, and requirements before attempting solutions.
- Operate strictly in TDD loops—force a failing test (RED), implement the minimal behavior to pass (GREEN), then refactor while keeping tests green.
- Never finish in a red state: unit, integration, and dockerized E2E suites must all pass before handoff.
- No issue is “out of scope.” Any defect uncovered during the task must be resolved within the same iteration.
- Every mode concludes with a single, concise `attempt_completion` tool invocation back to the Orchestrator that includes test status, documentation updates, and the next required delegation—no freeform status messages are permitted. Calling `switch_mode` is forbidden.
- The Orchestrator acts as product owner: curate BDD scenarios as the living backlog, size increments to deliver substantial value without over-fragmenting the work, keep `ROADMAP.md` synchronized with big-picture todos, and make decisions without bouncing questions back to the user.
- When the user says `move on`, proceed immediately: consult the roadmap, take the next logical task that advances the goal as quickly and cleanly as possible, and add any newly discovered tasks to the todo list.
## Clean Architecture & Design Discipline
- Enforce Clean Architecture boundaries without exception: presentation, application, domain, and infrastructure layers communicate only through explicit contracts that point inward (a contract sketch follows this list).
- Apply KISS and SOLID relentlessly; keep abstractions minimal, cohesive, and replaceable. Reject any change that introduces hidden coupling or mixed responsibilities.
- Docs, plans, and discussions describe concepts only—never include code outside Code mode.
- Source code and tests are the documentation. Do not add comments, TODOs, or temporary scaffolding.
- Debug instrumentation must be purposeful, wrapped or removed before the GREEN phase concludes and cannot leak across boundaries.
- Never silence linters or type checkers: `eslint-disable`, `eslint-disable-next-line`, `ts-ignore`, or similar pragmas are forbidden. Fix the underlying issue or redesign until the warning disappears.
- Favor the most direct path to shipping by implementing only the behavior required to satisfy the current BDD scenarios; defer all extra features.
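A minimal sketch of an inward-pointing contract under these rules; all names are hypothetical:

```typescript
// Hypothetical contract owned by the inner layers. An infrastructure
// adapter (e.g., a PostgreSQL repository) implements it, so the
// dependency arrow points inward toward the domain.
export interface RaceResult {
  sessionId: string;
  driverId: string;
  finishingPosition: number;
}

export interface ResultRepository {
  findBySessionId(sessionId: string): Promise<RaceResult[]>;
  save(result: RaceResult): Promise<void>;
}
```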
## TDD + BDD Covenant
- Define behavior before writing production code. Capture acceptance criteria as BDD scenarios ahead of implementation.
- Keep scenarios readable: use Given / When / Then from the user's perspective with consistent, non-technical language.
- Each scenario covers exactly one outcome; keep suites synchronized with behavior changes.
- Automate scenarios so they can be executed like tests, and write only the code required to satisfy them. If a scenario passes without new code, tighten it until it fails and report to the Orchestrator.
- Ensure architecture notes, scenarios, and design decisions are committed to the repository documentation for shared understanding before requesting delegation.
### BDD Rule Book
1. Describe behavior from the users point of view.
2. Write scenarios in plain language using Given / When / Then.
3. Define behavior before writing any code.
4. Each scenario represents one clear outcome.
5. Automate scenarios so they can be executed as tests (see the sketch after this list).
6. Write only enough code to make all scenarios pass.
7. Keep scenario language consistent and free of technical jargon.
8. Update or remove scenarios when behavior changes.
9. Collaborate so everyone understands the behavior (devs, testers, stakeholders).
10. The goal is shared understanding, not just passing tests.
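One way rule 5 might look in practice, assuming Vitest; the feature and names are illustrative only:

```typescript
// Scenario: Driver views current standings (hypothetical feature).
//   Given a season with published results
//   When the driver opens the standings for that season
//   Then the current standings are listed
import { it, expect } from 'vitest';
import { getStandings } from '../src/packages/application/getStandings';

it('lists current standings for a season with published results', async () => {
  const standings = await getStandings('season-1');
  expect(standings.length).toBeGreaterThan(0);
});
```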
## Automated Environments & Testing
- Provision and maintain an isolated dockerized E2E environment before implementing features; never rely on manual validation.
- Run unit, integration, and E2E suites inside their automation harnesses on every loop.
- Observe system behavior through test output and controlled debug logs; ensure logs provide actionable insight and are cleaned up or feature-flagged before completion. Capture important findings in docs or commit messages as needed.
- Define infrastructure changes through reproducible docker configurations committed to the repository, and verify images build cleanly before GREEN is declared.
## Toolchain Discipline
- Prefer Read Group tools for exploration, Search Group tools for precise discovery, and Edit Group tools for modifications.
- Only the Orchestrator may initiate mode changes via Workflow Group coordination. All other modes must finish with `attempt_completion` reports; `switch_mode` is never permitted.
- Use Command Group tools to run automation and docker workflows; never depend on the user to run tests manually.
- Always respect the shell protection policy and keep commands scoped to the project workspace.
## Version Control Discipline
- Git mode owns the repository state. When invoked, it first captures the working-tree status; if unexpected changes appear, it halts the workflow and reports the offending files to the Orchestrator, which schedules the follow-up delegations needed to resolve them.
- Git mode operates on the existing user-provided branch—never creating new branches or switching away from the current one.
- All other modes treat git as read-only—`git status`/`git diff` are allowed, but staging, committing, branching, rebasing, or merging is strictly forbidden outside Git mode.
- Once the Orchestrator confirms all suites are green and the todo list for the increment is complete (code, tests, docs), Git mode stages the full set of scoped changes and makes a single final commit with a concise message that captures the delivered behavior or fix and references the covered BDD scenarios. Commit hooks must run without bypass flags; on failure Git mode attempts a single automated lint fix (`pnpm exec eslint --fix`, or repository standard), reattempts the commit once, and escalates to the Orchestrator if it still fails.
- Git mode never merges, rebases, or pushes. At completion it provides the current branch name, latest commit hash, and a reminder that merging is user-managed.
- Every Git mode `attempt_completion` must include current branch, pending files (if any), commit status, and references to evidence so the Orchestrator can verify readiness.
## Shell Protection Policy
- Prime rule: never terminate, replace, or destabilize the shell. All writes remain strictly within the project root.
- Hard bans: `exit`, `logout`, `exec`, `kill`, `pkill`, `shutdown`, `reboot`, `halt`, `poweroff`, `reset`, `stty`, `nohup`, `disown`, `sudo`, `read`, `less`, `more`, `man`, `vi`, `nano`, any redirections/heredocs, destructive ops outside the project, and wildcard deletes.
- Scope jail: resolve `PROJECT_ROOT` (`git rev-parse --show-toplevel` → fallback `pwd`), operate only on absolute paths inside it, and avoid chained write commands (see the sketch after this list).
- Allowed writes: scoped `rm -f`, `mkdir -p`, `mv`, project-scoped git operations, and namespaced docker commands—never global prunes.
- Read-only commands (`ls`, `cat`, `git status`, test runners, etc.) remain safe when non-interactive.
- Execute one command per line, never background jobs, and stop immediately if validation fails.
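A sketch of how the scope jail might be enforced before any write, assuming Node's built-in modules; the guard name is hypothetical:

```typescript
// Hypothetical scope-jail guard: resolve PROJECT_ROOT via git, fall
// back to the working directory, and refuse paths that escape it.
import { execSync } from 'node:child_process';
import { resolve, sep } from 'node:path';

function projectRoot(): string {
  try {
    return execSync('git rev-parse --show-toplevel', { encoding: 'utf8' }).trim();
  } catch {
    return process.cwd(); // fallback: pwd
  }
}

export function assertInsideProject(targetPath: string): void {
  const root = projectRoot();
  const absolute = resolve(root, targetPath);
  if (absolute !== root && !absolute.startsWith(root + sep)) {
    throw new Error(`Write blocked: ${targetPath} is outside the project root`);
  }
}
```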
## Definition of Done
1. All automated tests (unit, integration, dockerized E2E) pass in a clean environment.
2. No debug logs or temporary scaffolding remain active.
3. Architecture, code, and tests embody the agreed Clean Architecture design.
4. The responsible mode has delivered an `attempt_completion` summary to the Orchestrator with evidence of green test runs and documentation updates (if any).
5. Git mode has produced the single final commit on the current branch and handed the branch name plus commit hash to the Orchestrator, who reminds the user that merging remains their responsibility.
6. Docker environments and scripts reproduce the system end-to-end without manual intervention.
7. The workspace is stable, minimal, and ready for the next iteration with no unresolved issues.

132
README.md Normal file
View File

@@ -0,0 +1,132 @@
# GridPilot
[![Build Status](https://img.shields.io/badge/build-pending-lightgrey)](https://github.com/yourusername/gridpilot)
[![TypeScript](https://img.shields.io/badge/TypeScript-5.x-blue)](https://www.typescriptlang.org/)
[![License](https://img.shields.io/badge/license-MIT-green)](LICENSE)
**Modern league management for iRacing**
GridPilot streamlines the organization and administration of iRacing leagues, eliminating manual spreadsheet management and providing real-time race data integration. Built with Clean Architecture principles and comprehensive BDD testing.
## Prerequisites
- **Node.js** >=20.0.0 (includes npm)
- **Docker** and **Docker Compose**
- Git for version control
## Installation
```bash
# Clone the repository
git clone https://github.com/yourusername/gridpilot.git
cd gridpilot
# Install dependencies
npm install
```
Environment configuration files (`.env`) will be needed for each application. Refer to `.env.example` files in each app directory for required variables.
## Development Workflow
GridPilot uses a monorepo structure with concurrent development across multiple applications:
```bash
# Run all applications in development mode
npm run dev
# Build all applications
npm run build
# Run all tests
npm test
```
Individual applications support hot reload and watch mode during development:
- **web-api**: Backend REST API server
- **web-client**: Frontend React application
- **companion**: Desktop companion application
## Testing Commands
GridPilot follows strict BDD (Behavior-Driven Development) with comprehensive test coverage:
```bash
# Run all tests
npm test
# Unit tests only
npm run test:unit
# Integration tests only
npm run test:integration
# E2E tests (requires Docker)
npm run test:e2e
# Watch mode for TDD workflow
npm run test:watch
# Generate coverage report
npm run test:coverage
```
All E2E tests run in isolated Docker containers to ensure consistent, reproducible results.
## Monorepo Structure
```
gridpilot/
├── src/
│ ├── packages/ # Shared packages
│ │ ├── domain/ # Core business logic (entities, value objects)
│ │ ├── application/ # Use cases and orchestration
│ │ ├── shared/ # Common utilities and types
│ │ └── contracts/ # Interface definitions
│ ├── apps/ # Deployable applications
│ │ ├── web-api/ # Backend REST API
│ │ ├── web-client/ # Frontend React app
│ │ └── companion/ # Desktop companion
│ └── infrastructure/ # External adapters (DB, APIs)
├── tests/
│ ├── unit/ # Fast, isolated tests
│ ├── integration/ # Service integration tests
│ └── e2e/ # End-to-end scenarios
├── docker/ # Container configurations
├── docs/ # Project documentation
└── scripts/ # Build and automation scripts
```
For detailed architectural information, see [`/docs/ARCHITECTURE.md`](docs/ARCHITECTURE.md).
## Contributing
GridPilot adheres to strict development standards:
- **Clean Architecture**: Domain logic remains independent of frameworks and external concerns
- **BDD Testing**: All features must have `Given/When/Then` scenarios
- **TypeScript Strict Mode**: No implicit `any`, full type safety
- **TDD Workflow**: RED → GREEN → Refactor cycle enforced
### Pull Request Process
1. Ensure all tests pass (`npm test`)
2. Verify no linting errors (`npm run lint`)
3. Update relevant documentation
4. Follow conventional commit format
5. Submit PR with clear description
## Documentation
Comprehensive documentation is available in the [`/docs`](docs/) directory:
- **[CONCEPT.md](docs/CONCEPT.md)** - Product vision and problem statement
- **[ARCHITECTURE.md](docs/ARCHITECTURE.md)** - Technical design and Clean Architecture implementation
- **[TECH.md](docs/TECH.md)** - Technology stack and tooling decisions
- **[TESTS.md](docs/TESTS.md)** - Testing strategy and BDD approach
- **[ROADMAP.md](docs/ROADMAP.md)** - Development phases and milestones
## License
MIT License - see [LICENSE](LICENSE) file for details.

1048
docs/ARCHITECTURE.md Normal file

File diff suppressed because it is too large

229
docs/CONCEPT.md Normal file
View File

@@ -0,0 +1,229 @@
# GridPilot Concept
## Problem Statement
iRacing league management today is fragmented and manual:
- **Communication Chaos**: League organizers juggle Discord channels, Google Sheets, and manual messaging to coordinate everything
- **No Visibility**: Leagues operate in isolation without a central platform for discovery or branding
- **Manual Burden**: Admins spend hours manually entering race results, managing registrations, and creating sessions in iRacing
- **Team Racing Limitations**: No native support for team-based racing with parallel scoring (one driver per car slot, while the team accumulates points from all drivers)
- **Session Creation Pain**: Creating race sessions in iRacing requires tedious browser navigation and form filling
- **Migration Challenges**: Existing leagues can't easily migrate historical data or preserve their identity
Based on feedback from Reddit and Discord communities, league organizers are overwhelmed by administrative tasks when they'd rather focus on building community and running great racing events.
## Target Users
### League Organizers & Admins
**What they need:**
- Automated race result processing
- Easy session creation without manual browser work
- Tools to manage seasons, sign-ups, and standings
- Professional branding and identity for their league
- Custom domains to strengthen league identity
- Migration support to bring existing league history
### Team Captains
**What they need:**
- Create and manage racing teams
- Register teams for league seasons
- Track team standings alongside driver standings
- Coordinate with team drivers
- View team performance history
### Solo Drivers
**What they need:**
- Browse and discover active leagues
- Easy registration and sign-up flows
- Personal statistics and race history
- Track standings and points
- Connect with the racing community
## Core Features
### For Solo Drivers
**League Discovery**
- Browse active leagues across different series and skill levels
- Filter by time zones, competitiveness, and racing format
- Join leagues with simple registration flows
**Personal Racing Stats**
- Automatic race result tracking from iRacing
- Historical performance data
- Personal standings in each league
- Progress tracking across seasons
### For Teams
**Team Management**
- Create and name racing teams
- Invite and manage team drivers
- Register teams for league seasons
**Parallel Racing Format**
- One driver per car slot in each race
- Team points accumulate from all drivers' results
- Both team standings and individual driver standings
- Flexibility for different drivers each race (a scoring sketch follows this list)
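A minimal sketch of the parallel scoring model, assuming team points are simply the sum of its drivers' race points; all names are illustrative:

```typescript
// Hypothetical parallel scoring: each driver scores individually,
// and a team's total accumulates from all of its drivers' results.
interface RaceScore {
  driverId: string;
  teamId: string;
  points: number;
}

function teamTotals(scores: RaceScore[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const { teamId, points } of scores) {
    totals.set(teamId, (totals.get(teamId) ?? 0) + points);
  }
  return totals;
}

// Two drivers on one team scoring 25 and 18 yield a team total of 43,
// while each driver keeps an individual score.
```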
**Team Identity**
- Team branding and profiles
- Historical team performance tracking
- Team communication tools
### For League Organizers
**League Identity & Branding**
- Professional league pages with custom branding
- Custom domain support (e.g., your-league.racing)
- League logos, colors, and identity
- Public-facing presence for member recruitment
**Race Management**
- Automated result importing from iRacing
- No manual CSV uploads or data entry
- Session result processing tied to league structure
- Point calculations handled automatically
**Season Administration**
- Create and manage racing seasons
- Define scoring rules and formats
- Handle sign-ups and registrations
- Configure team vs solo racing formats
**Authentication & Security**
- iRacing OAuth integration
- Verify driver identities automatically
- Secure access control for league admin functions
- No separate account creation needed
### Migration Support
**For Existing Leagues**
- Import historical season data
- Preserve league identity and history
- Maintain continuity for established communities
- Smooth transition without losing context
## User Journeys
### Admin Creating a League
1. Sign in with iRacing credentials
2. Create new league with name and branding
3. Choose racing series and car/track combinations
4. Configure season format (team vs solo, point system)
5. Set up custom domain (optional)
6. Open registration for drivers/teams
7. Publish league page for discovery
### Team Registering for a Season
1. Team captain browses available leagues
2. Reviews league format and schedule
3. Registers team for upcoming season
4. Invites or confirms team drivers
5. Receives confirmation and season details
6. Team appears in league roster
### Driver Viewing Standings
1. Driver logs into GridPilot
2. Navigates to their league dashboard
3. Views current season standings (team and driver)
4. Reviews recent race results
5. Checks upcoming race schedule
6. Accesses historical performance data
### Organizer Managing Race Day
1. Admin creates race session through GridPilot
2. Session automatically appears in iRacing
3. Drivers join and race in iRacing
4. Race completes in iRacing
5. GridPilot automatically imports results
6. Points calculated and standings updated
7. Admin reviews and publishes results
8. Drivers see updated standings immediately
## Automation Vision
### Why Browser Automation?
iRacing doesn't provide public APIs for session creation or comprehensive result data. League admins currently face:
- Repetitive browser navigation to create each race session
- Manual form filling for every session detail
- Time-consuming workflows that scale poorly with league size
- Error-prone manual processes
### What Automation Solves
**Session Creation Pain**
- Eliminate manual browser work
- Create sessions from GridPilot with one click
- Batch session creation for full seasons
- Consistent configuration without human error
**Result Processing**
- Automatic result imports from iRacing
- No manual CSV downloads or uploads
- Real-time standings updates
- Accurate point calculations
### Assistant-Style Approach
GridPilot acts as an admin assistant, not a bot:
- Automation runs on admin's behalf with their authorization
- Clear opt-in for automation features
- Admin maintains full control and visibility
- Automation handles tedious tasks, not gameplay
### Important Boundary
**We automate admin tasks, not gameplay.**
GridPilot automates league management workflows - creating sessions, processing results, managing registrations. We never touch actual racing gameplay, driver behavior, or in-race activities. This is administrative automation to free organizers from manual work.
## Future Vision
### Monetization Approach
GridPilot will introduce optional monetization features after the core platform is stable:
**League Operation Fees**
- Organizers can charge season entry fees
- Both one-time and per-race payment options
- Revenue split between league and GridPilot platform
- Support for league sustainability and prizes
**Platform Position**
- GridPilot takes a percentage of collected fees
- No fees for free leagues
- Transparent pricing structure
- Revenue supports platform development and hosting
### When Monetization Arrives
Monetization features will be added only after:
- Core functionality is proven stable
- User base is established and growing
- League organizers are successfully using the platform
- Feedback confirms value justifies pricing
The focus now is delivering a great product that solves real problems. Monetization comes later when the platform has earned it.
### Potential Expansion
Beyond iRacing, GridPilot's approach could extend to:
- Other sim racing platforms
- Different racing series and formats
- Broader motorsport league management
- Cross-platform racing communities
But first: nail the iRacing league management experience.
---
GridPilot exists to make league racing accessible and professional for organizers of all sizes, eliminating manual work so communities can focus on what matters: great racing and strong communities.

465
docs/ROADMAP.md Normal file
View File

@@ -0,0 +1,465 @@
# GridPilot Implementation Roadmap
## Overview
This roadmap provides a phased implementation plan for GridPilot, an automated league management platform for iRacing. Each phase builds upon the previous one, with clear success criteria and actionable todos.
**Purpose:**
- Guide iterative development from technical validation to public launch and monetization
- Track progress through checkable todos
- Validate assumptions before investing in full implementation
- Ensure architectural integrity throughout each phase
**How to Use:**
- Check off todos as they are completed (replace `[ ]` with `[x]`)
- Review success criteria before moving to the next phase
- Refer to [ARCHITECTURE.md](./ARCHITECTURE.md) for component boundaries and patterns
- Consult [TESTS.md](./TESTS.md) for testing approach and BDD scenario structure
- See [CONCEPT.md](./CONCEPT.md) for product vision and user needs
**Relationship to MVP:**
- **Phase 0-1:** Pre-MVP validation (technical feasibility, market validation)
- **Phase 2:** MVP (core league management with automated results)
- **Phase 3-4:** Enhanced MVP (automation layer, branding)
- **Phase 5-6:** Public launch and monetization
---
## Phase 0: Foundation (Automation Testing - Internal)
**Goal:** Validate technical feasibility of browser automation and establish testing infrastructure.
### Infrastructure Setup
- [ ] Initialize monorepo with npm workspaces (`/src/apps`, `/src/packages`)
- [ ] Set up TypeScript configuration (strict mode, path aliases)
- [ ] Configure ESLint and Prettier (no warnings tolerated)
- [ ] Create basic domain models (`League`, `Team`, `Event`, `Driver`, `Result`); shapes are sketched after this list
- [ ] Set up test harness (Vitest for unit/integration tests)
- [ ] Configure Docker Compose for E2E testing environment
- [ ] Document development setup in README.md
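A sketch of what these first-pass model shapes might look like; the fields are assumptions, not final contracts:

```typescript
// Hypothetical Phase 0 domain model shapes (illustrative fields only).
export interface Driver { id: string; iracingId: string; displayName: string; }
export interface Team { id: string; name: string; driverIds: string[]; }
export interface League { id: string; name: string; seasonIds: string[]; }
export interface Event { id: string; leagueId: string; track: string; scheduledAt: Date; }
export interface Result { eventId: string; driverId: string; finishingPosition: number; points: number; }
```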
### Automation Validation
- [ ] Install and configure Nut.js for browser automation
- [ ] Test iRacing session creation page detection
- [ ] Test session ID extraction from URL or page elements
- [ ] Validate server-side result polling from the iRacing API
- [ ] Create proof-of-concept automation script
- [ ] Document automation approach and limitations
- [ ] Identify automation failure modes and mitigation strategies
### Testing Foundation
- [ ] Write example BDD scenarios (Given/When/Then format)
- [ ] Set up Dockerized E2E test environment
- [ ] Create fixture data for test scenarios
- [ ] Validate test isolation and repeatability
- [ ] Document testing strategy in [TESTS.md](./TESTS.md)
**Success Criteria:**
- Technical feasibility confirmed (browser automation reliable)
- Test infrastructure operational (unit, integration, E2E)
- Development environment documented and reproducible
- No blockers identified for MVP implementation
**Note:** This phase is internal validation only—no user-facing features.
---
## Phase 1: Landing Page & Market Validation
**Goal:** Validate product-market fit before building the full application.
### Marketing Website
- [ ] Build static marketing website (Next.js or similar)
- [ ] Create compelling copy addressing league organizer pain points
- [ ] Design product mockups and fake screenshots
- [ ] Add email collection form (waitlist integration)
- [ ] Implement privacy policy and terms of service
- [ ] Set up analytics (signups, page views, engagement)
### Community Engagement
- [ ] Post to r/iRacing subreddit with mockups
- [ ] Share in iRacing Discord communities
- [ ] Reach out to league organizers directly
- [ ] Collect feedback on pain points and feature requests
- [ ] Conduct user interviews with interested organizers
- [ ] Document feedback in product backlog
### Analysis
- [ ] Analyze email signup metrics
- [ ] Review qualitative feedback themes
- [ ] Validate assumptions about organizer pain points
- [ ] Assess willingness to pay (surveys, conversations)
- [ ] Document findings and adjust roadmap if needed
**Success Criteria:**
- 100+ email signups from target users
- Positive feedback from league organizers
- Validated demand for automated result import
- Confirmed interest in team-based scoring
- Product-market fit assumptions validated
**Note:** No application built in this phase—validation only. Pivot or proceed based on feedback.
---
## Phase 2: MVP (League-Focused)
**Goal:** Build a functional league management platform with automated result import (no fees, no companion app yet).
### Authentication & User Management
- [ ] Implement iRacing OAuth authentication flow
- [ ] Create user registration and profile system
- [ ] Build user role system (organizer, driver, spectator)
- [ ] Implement session management and token refresh
- [ ] Write BDD scenarios for authentication flows
- [ ] Achieve test coverage for auth domain
### League Management (Core Domain)
- [ ] Implement `CreateLeagueUseCase` (see [ARCHITECTURE.md](./ARCHITECTURE.md#application-layer); a boundary sketch follows this list)
- [ ] Create league CRUD operations (update, delete, archive)
- [ ] Build season setup (tracks, cars, rules configuration)
- [ ] Implement points system configuration (customizable)
- [ ] Create race event scheduling system
- [ ] Write BDD scenarios for league lifecycle
- [ ] Achieve >90% test coverage for `League` aggregate
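One possible shape for the use-case boundary; [ARCHITECTURE.md](./ARCHITECTURE.md#application-layer) remains the authority, and these names are illustrative:

```typescript
// Hypothetical application-layer boundary for league creation.
export interface CreateLeagueCommand {
  organizerId: string;
  name: string;
  series: string;
}

export interface CreateLeagueUseCase {
  execute(command: CreateLeagueCommand): Promise<{ leagueId: string }>;
}
```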
### Driver & Team Registration
- [ ] Build driver registration system (join league/season)
- [ ] Implement team registration system (optional parallel scoring)
- [ ] Create team roster management (add/remove drivers)
- [ ] Build approval workflow for registrations
- [ ] Write BDD scenarios for registration flows
- [ ] Test team scoring calculation logic
### Automated Result Import
- [ ] Implement PostgreSQL schema (repositories pattern)
- [ ] Create server-side iRacing API integration
- [ ] Build automated result polling service
- [ ] Implement result parsing and validation
- [ ] Create `ImportResultUseCase` (see [ARCHITECTURE.md](./ARCHITECTURE.md#application-layer))
- [ ] Handle edge cases (DNS, penalties, disconnects)
- [ ] Write BDD scenarios for result import
- [ ] Test result import reliability and error handling
### Standings & Results
- [ ] Generate driver standings (individual points calculation)
- [ ] Generate team standings (parallel scoring model)
- [ ] Build race result pages (lap times, incidents, finishing position)
- [ ] Implement historical standings view (by race)
- [ ] Create standings export functionality (CSV)
- [ ] Write BDD scenarios for standings calculation
- [ ] Test edge cases (ties, dropped races, penalties); a dropped-races sketch follows this list
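A sketch of one such edge case, the dropped-races rule, assuming a season total that discards each driver's worst N results before summing:

```typescript
// Hypothetical dropped-races calculation: keep the best results,
// discard the worst `droppedRaces`, and sum what remains.
function seasonTotal(racePoints: number[], droppedRaces: number): number {
  return [...racePoints]
    .sort((a, b) => b - a) // best results first
    .slice(0, Math.max(0, racePoints.length - droppedRaces))
    .reduce((sum, points) => sum + points, 0);
}

// seasonTotal([25, 18, 0, 12], 1) drops the 0 and returns 55.
```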
### League Identity & Admin
- [ ] Build league identity pages (public view)
- [ ] Create basic admin dashboard (organizer tools)
- [ ] Implement league settings management
- [ ] Build schedule and calendar view
- [ ] Create notifications system (race reminders)
- [ ] Write BDD scenarios for admin workflows
### Quality Assurance
- [ ] Run full test suite (unit, integration, E2E)
- [ ] Achieve >90% test coverage for domain/application layers
- [ ] Perform manual testing with real iRacing data
- [ ] Fix all critical bugs and edge cases
- [ ] Document known limitations
**Success Criteria:**
- Functional platform for league management
- Automated result import working reliably
- Driver and team standings calculated correctly
- No manual result uploads required
- Test coverage >90% for core domain
- Ready for closed beta testing
**Note:** No fees, no payouts, no companion app in this phase. Focus on core league management.
**Cross-References:**
- See [ARCHITECTURE.md](./ARCHITECTURE.md) for component boundaries
- See [TESTS.md](./TESTS.md) for BDD scenario examples
---
## Phase 3: Companion App (Automation Layer)
**Goal:** Build an Electron companion app to automate session creation and reduce organizer workload.
### Companion App Foundation
- [ ] Set up Electron application structure
- [ ] Implement Nut.js browser automation framework
- [ ] Create IPC bridge for backend communication
- [ ] Build auto-updater mechanism
- [ ] Set up application signing and packaging
- [ ] Document installation and setup process
### Session Creation Automation
- [ ] Build session creation assistance workflow
- [ ] Implement iRacing session page detection
- [ ] Create session ID extraction mechanism
- [ ] Build form auto-fill functionality (track, cars, rules)
- [ ] Implement session URL capture and sync
- [ ] Handle automation failure cases gracefully
- [ ] Write E2E tests for automation flows
### OAuth & Credential Handoff
- [ ] Implement OAuth handoff from companion to web
- [ ] Create secure credential storage (encrypted)
- [ ] Build IPC bridge for authentication state
- [ ] Handle token refresh in companion app
- [ ] Write E2E tests for OAuth handoff flow
- [ ] Test cross-process credential security
### Organizer Utilities
- [ ] Create session creation guidance (step-by-step)
- [ ] Build pre-race checklist functionality
- [ ] Implement session status monitoring
- [ ] Add quick access to league settings
- [ ] Create notifications for upcoming races
### Testing & Reliability
- [ ] Test session creation automation reliability (>95% success rate)
- [ ] Validate automation across different iRacing UI versions
- [ ] Handle iRacing website changes gracefully
- [ ] Create fallback mechanisms for automation failures
- [ ] Document troubleshooting guide
**Success Criteria:**
- Companion app reduces session creation time by 80%+
- Automation success rate >95%
- OAuth handoff secure and seamless
- Auto-updater working reliably
- Comprehensive E2E test coverage
**Note:** Companion app is optional but highly valuable for organizers. Focus on reliability over features.
---
## Phase 4: Branding & Public Pages
**Goal:** Enable professional league identity and public discoverability.
### Asset Management
- [ ] Implement S3-compatible asset storage (logos, images)
- [ ] Add league logo upload functionality
- [ ] Create image optimization pipeline
- [ ] Implement asset CDN integration
- [ ] Build asset management UI (upload, delete, replace)
### Custom Branding
- [ ] Create custom CSS/theming system (colors, fonts)
- [ ] Build theme preview functionality
- [ ] Implement logo placement customization
- [ ] Add custom header/footer options
- [ ] Create branding guidelines documentation
### Public League Directory
- [ ] Build public league directory (browse and discover)
- [ ] Implement search and filtering (game type, region, skill level)
- [ ] Create league detail pages (public view)
- [ ] Add league statistics (active seasons, drivers, races)
- [ ] Implement privacy settings (public/private leagues)
### External Integrations
- [ ] Implement optional custom domain support (CNAME)
- [ ] Create embeddable widgets (standings iframe, schedule)
- [ ] Add Discord/TeamSpeak integration links
- [ ] Implement YouTube/Twitch VOD linking (external only, no uploads)
- [ ] Build social sharing functionality (Twitter, Reddit)
### Public Result Pages
- [ ] Create public race result pages (shareable links)
- [ ] Build driver profile pages (career statistics)
- [ ] Implement team profile pages (roster, history)
- [ ] Add historical standings archive
- [ ] Create race replay link integration (if available)
### Testing & Documentation
- [ ] Write BDD scenarios for branding features
- [ ] Test public pages with various league configurations
- [ ] Validate custom domain setup process
- [ ] Create user guide for branding customization
- [ ] Test embeddable widgets in external sites
**Success Criteria:**
- Leagues have professional identity and branding
- Public directory drives league discovery
- Custom domains working reliably
- Embeddable widgets functional
- External integrations (Discord, Twitch) operational
**Note:** Branding features are optional but enhance league professionalism and discoverability.
---
## Phase 5: Public Launch
**Goal:** Launch GridPilot publicly with production-grade infrastructure and stability.
### Security & Compliance
- [ ] Perform security audit (OAuth, credentials, API security)
- [ ] Implement rate limiting and DDoS protection
- [ ] Add CSRF and XSS protection
- [ ] Conduct penetration testing
- [ ] Review GDPR compliance (user data handling)
- [ ] Implement data export functionality (user request)
- [ ] Create incident response plan
### Performance & Scalability
- [ ] Load testing and performance optimization
- [ ] Implement database query optimization
- [ ] Add caching layers (Redis for sessions, API responses)
- [ ] Configure CDN for static assets
- [ ] Optimize Docker images for production
- [ ] Set up horizontal scaling strategy
### Production Infrastructure
- [ ] Set up production hosting (AWS/GCP/Azure)
- [ ] Configure production database (PostgreSQL with replication)
- [ ] Implement database backup strategy (automated, tested)
- [ ] Set up monitoring and alerting (logs, errors, uptime)
- [ ] Configure error tracking (Sentry or similar)
- [ ] Implement log aggregation and analysis
- [ ] Create disaster recovery plan
### Documentation & Support
- [ ] Write comprehensive user documentation
- [ ] Create organizer onboarding guide
- [ ] Build driver user guide
- [ ] Document API endpoints (if public)
- [ ] Create FAQ and troubleshooting guide
- [ ] Set up support system (email, Discord)
### Launch Preparation
- [ ] Prepare launch marketing materials
- [ ] Coordinate Reddit/Discord announcements
- [ ] Create launch video/demo
- [ ] Set up social media presence
- [ ] Prepare press kit (if applicable)
- [ ] Plan launch timeline and milestones
### Beta Onboarding
- [ ] Onboard first 10 beta leagues (closed beta)
- [ ] Collect feedback from beta users
- [ ] Fix critical bugs identified in beta
- [ ] Validate production stability under real load
- [ ] Document lessons learned
**Success Criteria:**
- Platform publicly available and stable
- Security audit passed with no critical issues
- Production infrastructure operational
- Monitoring and alerting functional
- User documentation complete
- First 10+ leagues successfully onboarded
- Platform stable under real-world load
**Note:** Public launch is a major milestone. Ensure stability and security before opening access.
---
## Phase 6: Monetization & Expansion
**Goal:** Generate revenue and expand platform capabilities.
### Monetization Features
- [ ] Implement league creation fee system
- [ ] Add optional driver entry fee per season
- [ ] Build revenue split mechanism (organizer/GridPilot)
- [ ] Create billing and invoicing system
- [ ] Implement payment processing (Stripe or similar)
- [ ] Add subscription management (for premium features)
- [ ] Create payout system for organizers
- [ ] Implement refund and dispute handling
### Premium Features
- [ ] Create premium league features (advanced analytics)
- [ ] Build driver/team performance metrics over time
- [ ] Implement historical trend analysis
- [ ] Add advanced race strategy tools
- [ ] Create custom report generation
- [ ] Build league comparison and benchmarking
### Analytics & Insights
- [ ] Add analytics dashboards for leagues
- [ ] Implement driver consistency metrics
- [ ] Create incident rate analysis
- [ ] Build lap time comparison tools
- [ ] Add race pace analysis
- [ ] Implement predictive performance modeling
### Platform Expansion
- [ ] Explore expansion to other simulators (ACC, rFactor 2)
- [ ] Evaluate additional automation features
- [ ] Research multi-game league support
- [ ] Investigate community-requested features
- [ ] Assess partnership opportunities (teams, sponsors)
### Business Intelligence
- [ ] Implement revenue tracking and reporting
- [ ] Create user engagement metrics
- [ ] Build churn analysis and retention tools
- [ ] Add A/B testing framework
- [ ] Implement feature adoption tracking
**Success Criteria:**
- Revenue generation active and growing
- Premium features adopted by target segment
- Payment processing reliable and secure
- Organizer payouts working correctly
- Platform expansion feasibility validated
- Positive unit economics demonstrated
**Note:** Monetization should not compromise core user experience. Ensure value delivery justifies pricing.
---
## Dependencies & Sequencing
**Critical Path:**
1. Phase 0 must be completed before any development begins
2. Phase 1 validation should gate investment in Phase 2
3. Phase 2 MVP is required before Phase 3 (companion app depends on API)
4. Phase 4 can be developed in parallel with Phase 3 (independent features)
5. Phase 5 (public launch) requires Phases 2-4 to be complete and stable
6. Phase 6 (monetization) requires Phase 5 (public user base)
**Optional Paths:**
- Phase 3 (companion app) can be delayed if organizers tolerate manual session creation
- Phase 4 (branding) can be simplified for MVP launch
- Phase 6 features can be prioritized based on user demand
**Iteration Strategy:**
- Complete each phase fully before moving to the next
- Validate success criteria before proceeding
- Adjust roadmap based on feedback and learnings
- Maintain architectural integrity throughout
---
## Living Document
This roadmap is a living document and will be updated as the project evolves. Key updates will include:
- Completed todos (checked off)
- Lessons learned from each phase
- Adjusted priorities based on user feedback
- New features discovered during development
- Changes to success criteria or scope
**Maintenance:**
- Review and update quarterly (or after each phase)
- Archive completed phases for reference
- Document deviations from original plan
- Track velocity and estimate remaining work
**Cross-References:**
- [CONCEPT.md](./CONCEPT.md) - Product vision and user needs
- [ARCHITECTURE.md](./ARCHITECTURE.md) - Technical design and component boundaries
- [TESTS.md](./TESTS.md) - Testing strategy and BDD scenarios
- [TECH.md](./TECH.md) - Technology decisions and rationale
---
**Last Updated:** 2025-11-21
**Current Phase:** Phase 0 (Foundation)
**Overall Progress:** 0% (not started)

274
docs/TECH.md Normal file
View File

@@ -0,0 +1,274 @@
# Technology Stack
This document outlines GridPilot's technology choices and their rationale. For architectural patterns and layer organization, see [ARCHITECTURE.md](./ARCHITECTURE.md).
## 1. Language & Runtime
### TypeScript (Strict Mode)
- **Version:** Latest stable (5.x+)
- **Configuration:** `strict: true`, no `any` types permitted
- **Rationale:** Type safety catches errors at compile time, improves IDE support, and serves as living documentation. Strict mode eliminates common type-related bugs and enforces explicit handling of null/undefined.
### Node.js LTS
- **Version:** >=20.0.0
- **Rationale:** Long-term support ensures stability and security patches. Modern features (fetch API, native test runner) reduce dependency overhead. Version 20+ provides performance improvements critical for real-time race monitoring.
## 2. Backend Framework
### Current Status: Under Evaluation
Three candidates align with Clean Architecture principles:
**Option A: Express**
- **Pros:** Mature ecosystem, extensive middleware, proven at scale
- **Cons:** Slower than modern alternatives, callback-heavy patterns
- **Use Case:** Best if stability and middleware availability are priorities
**Option B: Fastify**
- **Pros:** High performance, schema-based validation, modern async/await
- **Cons:** Smaller ecosystem than Express
- **Use Case:** Best for performance-critical endpoints (real-time race data)
**Option C: Hono**
- **Pros:** Ultra-lightweight, edge-ready, excellent TypeScript support
- **Cons:** Newest option, smaller community
- **Use Case:** Best for modern deployment targets (Cloudflare Workers, edge functions)
**Requirements (All Options):**
- HTTP server with middleware support
- OpenAPI/Swagger compatibility
- JSON schema validation
- WebSocket support (for real-time features)
**Decision Timeline:** Deferred to implementation phase based on deployment target selection.
## 3. Frontend Framework
### Current Status: Under Evaluation
Two candidates meet accessibility and performance requirements:
**Option A: React 18+ with Vite**
- **Pros:** Maximum flexibility, fast HMR, lightweight bundle
- **Cons:** Manual SEO optimization, client-side routing complexity
- **Use Case:** Best for dashboard-heavy, interactive UI (primary use case)
**Option B: Next.js 14+**
- **Pros:** Built-in SSR/SSG, file-based routing, image optimization
- **Cons:** Larger bundle, more opinionated
- **Use Case:** Best if public league pages require SEO
**Shared Dependencies:**
- **State Management:**
- TanStack Query: Server state (caching, optimistic updates, real-time sync)
- Zustand: Client state (UI preferences, form state)
- Rationale: Separation of concerns prevents state management spaghetti
- **UI Library:**
- Tailwind CSS: Utility-first styling, design system consistency
- shadcn/ui: Accessible components (WCAG 2.1 AA), copy-paste philosophy
- Radix UI primitives: Unstyled, accessible foundations
- Rationale: Rapid development without sacrificing accessibility or customization
- **Forms:** React Hook Form + Zod schemas (type-safe validation)
- **Routing:** React Router (Option A) or Next.js file-based routing (Option B)
**Decision Timeline:** Deferred to implementation phase. Leaning toward Option A (React + Vite) given dashboard-centric use case.
## 4. Database
### PostgreSQL 15+
- **Rationale:**
- Complex relationships (leagues → seasons → races → drivers → teams) require relational integrity
- JSONB columns handle flexible metadata (iRacing session results, custom league rules)
- Full-text search for driver/team lookups
- Battle-tested at scale, excellent TypeScript ORM support
- **Features Used:**
- Foreign keys with cascading deletes
- Check constraints (business rule validation)
- Indexes on frequently queried fields (league_id, race_date)
- Row-level security (multi-tenant data isolation)
### ORM: Under Evaluation
**Option A: Prisma**
- **Pros:** Type-safe queries, automatic migrations, excellent DX
- **Cons:** Additional build step, limited raw SQL for complex queries
- **Use Case:** Best for rapid development, type safety priority
**Option B: TypeORM**
- **Pros:** Decorators, Active Record pattern, SQL flexibility
- **Cons:** Older API design, less type-safe than Prisma
- **Use Case:** Best if complex SQL queries are frequent
**Decision Timeline:** Deferred to implementation phase. Both integrate cleanly with Clean Architecture (repository pattern at infrastructure layer).
## 5. Authentication
### iRacing OAuth Flow
- **Provider:** iRacing official OAuth 2.0
- **Rationale:** Official integration ensures compliance with iRacing Terms of Service. Users trust official credentials over third-party passwords.
- **Flow:** Authorization Code with PKCE (Proof Key for Code Exchange)
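As a concrete reference, here is a minimal sketch of the PKCE pieces using only Node's crypto module; the helper name is illustrative, and the iRacing-specific endpoints are omitted:

```typescript
import { createHash, randomBytes } from 'node:crypto';

// PKCE: the client keeps the verifier secret and sends only its hashed challenge.
export function createPkcePair() {
  const verifier = randomBytes(32).toString('base64url');
  const challenge = createHash('sha256').update(verifier).digest('base64url');
  return { verifier, challenge, method: 'S256' as const };
}
```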
### Session Management
- **JWT:** Stateless tokens for API authentication
- **Storage:** HTTP-only cookies (XSS protection), encrypted at rest
- **Refresh Strategy:** Short-lived access tokens (15 min), long-lived refresh tokens (7 days)
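A minimal issuance sketch with the `jsonwebtoken` package, mirroring the TTLs above; the function and claim names are assumptions for illustration:

```typescript
import jwt from 'jsonwebtoken';

// Secrets come from environment config; TTLs mirror the refresh strategy above.
export function issueSessionTokens(userId: string, secret: string) {
  const accessToken = jwt.sign({ sub: userId }, secret, { expiresIn: '15m' });
  const refreshToken = jwt.sign({ sub: userId, kind: 'refresh' }, secret, { expiresIn: '7d' });
  return { accessToken, refreshToken };
}
```

The access token would then travel in an HTTP-only cookie rather than being exposed to client-side JavaScript.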
### Implementation
- **Passport.js:** OAuth strategy handling, pluggable architecture
- **bcrypt:** Fallback password hashing (if local accounts added later)
- **Rationale:** Passport's strategy pattern aligns with Clean Architecture (adapter layer). Well-tested, extensive documentation.
## 6. Automation (Companion App)
### Electron
- **Version:** Latest stable (28.x+)
- **Rationale:** Cross-platform desktop framework (Windows, macOS, Linux). Native OS integration (system tray, notifications, auto-start).
- **Security:** Isolated renderer processes, context bridge for IPC
### Nut.js
- **Purpose:** Keyboard/mouse control for browser automation
- **Rationale:** Simulates human interaction with iRacing web UI when official API unavailable. Not gameplay automation—assistant for data entry tasks.
- **Constraints:** Windows-only initially (iRacing primary platform)
### Electron IPC
- **Main ↔ Renderer:** Type-safe message passing via preload scripts
- **Rationale:** Security (no direct Node.js access in renderer), type safety (Zod schemas for IPC contracts)
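A minimal preload sketch of that pattern; the channel name and schema are illustrative, not finalized IPC contracts:

```typescript
// preload.ts
import { contextBridge, ipcRenderer } from 'electron';
import { z } from 'zod';

// Hypothetical contract: validated at the process boundary, typed everywhere else.
const RaceSchema = z.object({ id: z.string(), name: z.string() });

contextBridge.exposeInMainWorld('gridpilot', {
  getRace: async (id: string) => {
    const raw = await ipcRenderer.invoke('race:get', id);
    return RaceSchema.parse(raw); // reject malformed data crossing the bridge
  },
});
```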
### Auto-Updates
- **electron-updater:** Handles signed updates, delta downloads
- **Rationale:** Critical for security patches, seamless user experience
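A minimal main-process sketch, assuming electron-updater's default publish configuration:

```typescript
// main.ts excerpt
import { app } from 'electron';
import { autoUpdater } from 'electron-updater';

app.whenReady().then(() => {
  // Checks the update feed, downloads signed updates, and notifies the user.
  autoUpdater.checkForUpdatesAndNotify();
});
```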
**Why This Approach:**
- Assistant-style automation (user-initiated), not gameplay bots
- Complements web app (handles tasks iRacing API doesn't expose)
- Desktop integration (notifications for upcoming races, quick access via system tray)
## 7. Testing Tools
### Unit & Integration: Vitest
- **Rationale:** Native TypeScript support, fast execution (ESM, watch mode), compatible with Vite ecosystem
- **Coverage:** Built-in coverage reports (Istanbul), enforces 80% threshold
### E2E: Playwright
- **Rationale:** Reliable browser automation, cross-browser testing (Chromium, Firefox, WebKit), built-in wait strategies
- **Features:** Screenshot/video on failure, network mocking, parallel execution
### Test Containers (Docker)
- **Purpose:** Isolated test databases, Redis instances
- **Rationale:** Prevents test pollution, matches production environment, automatic cleanup
- **Services:** PostgreSQL, Redis, S3 (MinIO)
**Testing Strategy:**
- Unit tests: Core domain logic (pure functions, business rules)
- Integration tests: Repository implementations, API endpoints
- E2E tests: Critical user flows (create league, register for race, view results)
## 8. DevOps & Infrastructure
### Docker & Docker Compose
- **Purpose:** Local development, E2E testing, consistent environments
- **Services:**
- PostgreSQL (primary database)
- Redis (caching, rate limiting, job queue)
- MinIO (S3-compatible storage for local dev)
- **Rationale:** Developers get production-like environment instantly
### Redis
- **Use Cases:**
- Caching (league standings, frequently accessed driver stats)
- Rate limiting (API throttling, abuse prevention)
- Bull queue (background jobs: race result processing, email notifications)
- **Rationale:** Proven performance, simple key-value model, pub/sub for real-time features
### Object Storage
- **Production:** AWS S3 (logos, exported reports)
- **Development:** MinIO (S3-compatible, Docker-based)
- **Rationale:** Cost-effective, scalable, CDN integration
### Bull Queue (Redis-backed)
- **Jobs:** Process race results, send notifications, generate reports
- **Rationale:** Offloads heavy tasks from HTTP requests, retry logic, job prioritization
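A sketch of the intended shape; the queue name, payload, and `importResultsForEvent` use case are assumptions for illustration:

```typescript
import Queue from 'bull';

const resultQueue = new Queue<{ eventId: string }>(
  'race-results',
  process.env.REDIS_URL ?? 'redis://localhost:6379'
);

// Worker: heavy processing stays off the HTTP request path.
resultQueue.process(async (job) => {
  await importResultsForEvent(job.data.eventId); // hypothetical use case call
});

// Producer: retries with backoff let transient failures self-heal.
export async function enqueueResultImport(eventId: string) {
  await resultQueue.add(
    { eventId },
    { attempts: 3, backoff: { type: 'exponential', delay: 5000 } }
  );
}
```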
### CI/CD: Placeholder
- **Options:** GitHub Actions, GitLab CI
- **Rationale:** TBD based on hosting choice (GitHub vs. self-hosted GitLab)
## 9. Monorepo Tooling
### npm Workspaces
- **Rationale:** Built-in, zero configuration, dependency hoisting
- **Structure:** `/src/apps/*`, `/src/packages/*`, `/tests/*`
### Build Orchestration: Under Evaluation
**Option A: Turborepo**
- **Pros:** Fast incremental builds, remote caching, simple config
- **Cons:** Vercel-owned (vendor lock-in risk)
**Option B: Nx**
- **Pros:** Advanced dependency graph, affected commands, plugins
- **Cons:** Steeper learning curve, more complex config
**Decision Timeline:** Start with npm workspaces alone. Evaluate Turborepo/Nx if build times become a bottleneck (unlikely at current scale).
## 10. Development Tools
### Code Quality
- **ESLint:** Enforce coding standards, catch common mistakes
- **Prettier:** Consistent formatting (no debates on tabs vs. spaces)
- **Rationale:** Automated code reviews reduce friction, onboarding time
### Pre-Commit Hooks
- **Husky:** Git hook management
- **lint-staged:** Run linters only on changed files
- **Rationale:** Fast feedback loop, prevents broken commits reaching CI
### TypeScript Configuration
- **Strict Mode:** All strict flags enabled
- **No Implicit Any:** Forces explicit types
- **Rationale:** Type safety as first-class citizen, not opt-in feature
### Runtime Validation
- **Zod:** Schema definition, runtime validation, type inference
- **Use Cases:**
- API request/response validation
- Environment variable parsing
- Form validation (shared between frontend/backend)
- **Rationale:** Single source of truth for data shapes, generates TypeScript types automatically
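A small sketch of that single source of truth; the schema fields are illustrative:

```typescript
import { z } from 'zod';

// One schema drives both runtime validation and the static type.
export const CreateLeagueSchema = z.object({
  name: z.string().min(3),
  maxTeams: z.number().int().positive(),
});

export type CreateLeagueInput = z.infer<typeof CreateLeagueSchema>;

// e.g. validating a request body at the API boundary:
const parsed = CreateLeagueSchema.safeParse({ name: 'Summer Series', maxTeams: 12 });
if (!parsed.success) {
  console.error(parsed.error.flatten());
}
```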
---
## Decision Status Summary
**Finalized:**
- Language: TypeScript (strict mode)
- Runtime: Node.js 20+
- Database: PostgreSQL 15+
- Auth: iRacing OAuth + JWT
- Companion: Electron + Nut.js
- Testing: Vitest + Playwright + Test Containers
- Infra: Docker + Redis + S3/MinIO
- Monorepo: npm workspaces
- Dev Tools: ESLint + Prettier + Husky + Zod
**Under Evaluation (Deferred to Implementation):**
- Backend framework: Express vs. Fastify vs. Hono
- Frontend framework: React + Vite vs. Next.js
- ORM: Prisma vs. TypeORM
- Build orchestration: Turborepo vs. Nx (if needed)
- CI/CD: GitHub Actions vs. GitLab CI
**Deferred Decisions Rationale:**
- Backend/frontend frameworks: Choice depends on deployment target (cloud vs. edge vs. self-hosted)
- ORM: Both options integrate cleanly with Clean Architecture; decision based on team preference during implementation
- Build tools: Optimize when proven bottleneck (YAGNI principle)
---
## Cross-References
- **Architecture Patterns:** See [ARCHITECTURE.md](./ARCHITECTURE.md) for how these technologies map to Clean Architecture layers
- **Project Overview:** See [CONCEPT.md](./CONCEPT.md) for business context driving technology choices
- **Setup Instructions:** See [README.md](../README.md) for installation and getting started
---
*Last Updated: 2025-11-21*

716
docs/TESTS.md Normal file
View File

@@ -0,0 +1,716 @@
# Testing Strategy
## Overview
GridPilot employs a comprehensive BDD (Behavior-Driven Development) testing strategy across three distinct layers: **Unit**, **Integration**, and **End-to-End (E2E)**. Each layer validates different aspects of the system while maintaining a consistent Given/When/Then approach that emphasizes behavior over implementation.
This document provides practical guidance on testing philosophy, test organization, tooling, and execution patterns for GridPilot.
---
## BDD Philosophy
### Why BDD for GridPilot?
GridPilot manages complex business rules around league management, team registration, event scheduling, result processing, and standings calculation. These rules must be:
- **Understandable** by non-technical stakeholders (league admins, race organizers)
- **Verifiable** through automated tests that mirror real-world scenarios
- **Maintainable** as business requirements evolve
BDD provides a shared vocabulary (Given/When/Then) that bridges the gap between domain experts and developers, ensuring tests document expected behavior rather than technical implementation details.
### Given/When/Then Format
All tests—regardless of layer—follow this structure:
```typescript
// Given: Establish initial state/context
// When: Perform the action being tested
// Then: Assert the expected outcome
```
**Example (Unit Test):**
```typescript
describe('League Domain Entity', () => {
  it('should add a team when team limit not reached', () => {
    // Given
    const league = new League('Summer Series', { maxTeams: 10 });
    const team = new Team('Racing Legends');

    // When
    const result = league.addTeam(team);

    // Then
    expect(result.isSuccess()).toBe(true);
    expect(league.teams).toContain(team);
  });
});
```
This pattern applies equally to integration tests (with real database operations) and E2E tests (with full UI workflows).
---
## Test Types & Organization
### Unit Tests (`/tests/unit`)
**Scope:** Domain entities, value objects, and application use cases with mocked ports (repositories, external services).
**Tooling:** Vitest (fast, TypeScript-native, ESM support)
**Execution:** Parallel, target <1 second total runtime
**Purpose:**
- Validate business logic in isolation
- Ensure domain invariants hold (e.g., team limits, scoring rules)
- Test use case orchestration with mocked dependencies
**Examples from Architecture:**
1. **Domain Entity Test:**
```gherkin
# League.addTeam() validation
Given a League with maxTeams=10 and 9 current teams
When addTeam() is called with a valid Team
Then the team is added successfully
Given a League with maxTeams=10 and 10 current teams
When addTeam() is called
Then a DomainError is returned with "Team limit reached"
```
2. **Use Case Test:**
```gherkin
# GenerateStandingsUseCase
Given a League with 5 teams and completed races
When execute() is called
Then LeagueRepository.findById() is invoked
And ScoringRule.calculatePoints() is called for each team
And sorted standings are returned
```
3. **Scoring Rule Test:**
```gherkin
# ScoringRule.calculatePoints()
Given a F1-style scoring rule (25-18-15-12-10-8-6-4-2-1)
When calculatePoints(position=1) is called
Then 25 points are returned
Given the same rule
When calculatePoints(position=11) is called
Then 0 points are returned
```
**Key Practices:**
- Mock only at architecture boundaries (ports like `ILeagueRepository`)
- Never mock domain entities or value objects
- Keep tests fast (<10ms per test)
- Use in-memory test doubles for simple cases
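A sketch of such an in-memory double for the league port; the interface shape is inferred from the examples in this document rather than a finalized contract:

```typescript
// League stands in for the real domain entity in this sketch.
type League = { id: string; name: string };

interface ILeagueRepository {
  save(league: League): Promise<void>;
  findById(id: string): Promise<League | null>;
  findByName(name: string): Promise<League | null>;
}

export class InMemoryLeagueRepository implements ILeagueRepository {
  private leagues = new Map<string, League>();

  async save(league: League): Promise<void> {
    this.leagues.set(league.id, league);
  }

  async findById(id: string): Promise<League | null> {
    return this.leagues.get(id) ?? null;
  }

  async findByName(name: string): Promise<League | null> {
    for (const league of this.leagues.values()) {
      if (league.name === name) return league;
    }
    return null;
  }
}
```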
---
### Integration Tests (`/tests/integration`)
**Scope:** Repository implementations, infrastructure adapters (PostgreSQL, Redis, OAuth clients, result importers).
**Tooling:** Vitest + Testcontainers (spins up real PostgreSQL/Redis in Docker)
**Execution:** Sequential, ~10 seconds per suite
**Purpose:**
- Validate that infrastructure adapters correctly implement port interfaces
- Test database queries, migrations, and transaction handling
- Ensure external API clients handle authentication and error scenarios
**Examples from Architecture:**
1. **Repository Test:**
```gherkin
# PostgresLeagueRepository
Given a PostgreSQL container is running
When save() is called with a League entity
Then the league is persisted to the database
And findById() returns the same league with correct attributes
```
2. **OAuth Client Test:**
```gherkin
# IRacingOAuthClient
Given valid iRacing credentials
When authenticate() is called
Then an access token is returned
And the token is cached in Redis for 1 hour
Given expired credentials
When authenticate() is called
Then an AuthenticationError is thrown
```
3. **Result Importer Test:**
```gherkin
# EventResultImporter
Given an Event exists in the database
When importResults() is called with iRacing session data
Then Driver entities are created/updated
And EventResult entities are persisted with correct positions/times
And the Event status is updated to 'COMPLETED'
```
**Key Practices:**
- Use Testcontainers to spin up real databases (not mocks); see the setup sketch below
- Clean database state between tests (truncate tables or use transactions)
- Seed minimal test data via SQL fixtures
- Test both success and failure paths (network errors, constraint violations)
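A minimal suite-setup sketch with `@testcontainers/postgresql`; the migration step is a project-specific placeholder:

```typescript
import { afterAll, beforeAll } from 'vitest';
import { PostgreSqlContainer, StartedPostgreSqlContainer } from '@testcontainers/postgresql';

let container: StartedPostgreSqlContainer;

beforeAll(async () => {
  container = await new PostgreSqlContainer('postgres:16').start();
  process.env.DATABASE_URL = container.getConnectionUri();
  // Run migrations against the fresh instance here (project-specific).
}, 60_000);

afterAll(async () => {
  await container.stop();
});
```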
---
### End-to-End Tests (`/tests/e2e`)
**Scope:** Full user workflows spanning web-client → web-api → database.
**Tooling:** Playwright + Docker Compose (orchestrates all services)
**Execution:** ~2 minutes per scenario
**Purpose:**
- Validate complete user journeys from UI interactions to database changes
- Ensure services integrate correctly in a production-like environment
- Catch regressions in multi-service workflows
**Examples from Architecture:**
1. **League Creation Workflow:**
```gherkin
Given an authenticated league admin
When they navigate to "Create League"
And fill in league name, scoring system, and team limit
And submit the form
Then the league appears in the admin dashboard
And the database contains the new league record
And the league is visible to other users
```
2. **Team Registration Workflow:**
```gherkin
Given a published league with 5/10 team slots filled
When a team captain navigates to the league page
And clicks "Join League"
And fills in team name and roster
And submits the form
Then the team appears in the league's team list
And the team count updates to 6/10
And the captain receives a confirmation email
```
3. **Automated Result Import:**
```gherkin
Given a League with an upcoming Event
And iRacing OAuth credentials are configured
When the scheduled import job runs
Then the job authenticates with iRacing
And fetches session results for the Event
And creates EventResult records in the database
And updates the Event status to 'COMPLETED'
And triggers standings recalculation
```
4. **Companion App Login Automation:**
```gherkin
Given a League Admin enables companion app login automation
When the companion app is launched
Then the app polls for a generated login token from web-api
And auto-fills iRacing credentials from the admin's profile
And logs into iRacing automatically
And confirms successful login to web-api
```
**Key Practices:**
- Use Playwright's Page Object pattern for reusable UI interactions
- Test both happy paths and error scenarios (validation errors, network failures)
- Clean database state between scenarios (via API or direct SQL)
- Run E2E tests in CI before merging to main branch
---
## Test Data Strategy
### Fixtures & Seeding
**Unit Tests:**
- Use in-memory domain objects (no database)
- Factory functions for common test entities:
```typescript
function createTestLeague(overrides?: Partial<LeagueProps>): League {
  return new League('Test League', { maxTeams: 10, ...overrides });
}
```
**Integration Tests:**
- Use Testcontainers to spin up fresh PostgreSQL instances
- Seed minimal test data via SQL scripts:
```sql
-- tests/integration/fixtures/leagues.sql
INSERT INTO leagues (id, name, max_teams) VALUES
('league-1', 'Test League', 10);
```
- Clean state between tests (truncate tables or rollback transactions)
**E2E Tests:**
- Pre-seed database via migrations before Docker Compose starts
- Use API endpoints to create test data when possible (validates API behavior)
- Database cleanup between scenarios:
```typescript
// tests/e2e/support/database.ts
export async function cleanDatabase() {
  await sql`TRUNCATE TABLE event_results CASCADE`;
  await sql`TRUNCATE TABLE events CASCADE`;
  await sql`TRUNCATE TABLE teams CASCADE`;
  await sql`TRUNCATE TABLE leagues CASCADE`;
}
```
---
## Docker E2E Setup
### Architecture
E2E tests run against a full stack orchestrated by `docker-compose.test.yml`:
```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: gridpilot_test
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test

  redis:
    image: redis:7-alpine

  web-api:
    build: ./src/apps/web-api
    depends_on:
      - postgres
      - redis
    environment:
      DATABASE_URL: postgres://test:test@postgres:5432/gridpilot_test
      REDIS_URL: redis://redis:6379
    ports:
      - "3000:3000"
```
### Execution Flow
1. **Start Services:** `docker compose -f docker-compose.test.yml up -d`
2. **Run Migrations:** `npm run migrate:test` (seeds database)
3. **Execute Tests:** Playwright targets `http://localhost:3000`
4. **Teardown:** `docker compose -f docker-compose.test.yml down -v`
### Environment Setup
```typescript
// tests/e2e/setup.ts
import { exec as execCallback } from 'node:child_process';
import { promisify } from 'node:util';

const exec = promisify(execCallback);

// waitForService and runMigrations are project-local helpers (not shown here).
export async function globalSetup() {
  // Wait for web-api to be ready
  await waitForService('http://localhost:3000/health');
  // Run database migrations
  await runMigrations();
}

export async function globalTeardown() {
  // Stop Docker Compose services
  await exec('docker compose -f docker-compose.test.yml down -v');
}
```
---
## BDD Scenario Examples
### 1. League Creation (Success + Failure)
```gherkin
Scenario: Admin creates a new league
Given an authenticated admin user
When they submit a league form with:
| name | Summer Series 2024 |
| maxTeams | 12 |
| scoringSystem | F1 |
Then the league is created successfully
And the admin is redirected to the league dashboard
And the database contains the new league
Scenario: League creation fails with duplicate name
Given a league named "Summer Series 2024" already exists
When an admin submits a league form with name "Summer Series 2024"
Then the form displays error "League name already exists"
And no new league is created in the database
```
### 2. Team Registration (Success + Failure)
```gherkin
Scenario: Team registers for a league
Given a published league with 5/10 team slots
When a team captain submits registration with:
| teamName | Racing Legends |
| drivers | Alice, Bob, Carol |
Then the team is added to the league
And the team count updates to 6/10
And the captain receives a confirmation email
Scenario: Registration fails when league is full
Given a published league with 10/10 team slots
When a team captain attempts to register
Then the form displays error "League is full"
And the team is not added to the league
```
### 3. Automated Result Import (Success + Failure)
```gherkin
Scenario: Import results from iRacing
Given a League with an Event scheduled for today
And iRacing OAuth credentials are configured
When the scheduled import job runs
Then the job authenticates with iRacing API
And fetches session results for the Event
And creates EventResult records for each driver
And updates the Event status to 'COMPLETED'
And triggers standings recalculation
Scenario: Import fails with invalid credentials
Given an Event with expired iRacing credentials
When the import job runs
Then an AuthenticationError is logged
And the Event status remains 'SCHEDULED'
And an admin notification is sent
```
### 4. Parallel Scoring Calculation
```gherkin
Scenario: Calculate standings for multiple leagues concurrently
Given 5 active leagues with completed events
When the standings recalculation job runs
Then each league's standings are calculated in parallel
And the process completes in <5 seconds
And all standings are persisted correctly
And no race conditions occur (validated via database integrity checks)
```
### 5. Companion App Login Automation
```gherkin
Scenario: Companion app logs into iRacing automatically
Given a League Admin enables companion app login automation
And provides their iRacing credentials
When the companion app is launched
Then the app polls web-api for a login token
And retrieves the admin's encrypted credentials
And auto-fills the iRacing login form
And submits the login request
And confirms successful login to web-api
And caches the session token for 24 hours
```
---
## Coverage Goals
### Target Coverage Levels
- **Domain/Application Layers:** >90% (critical business logic)
- **Infrastructure Layer:** >80% (repository implementations, adapters)
- **Presentation Layer:** Smoke tests (basic rendering, no exhaustive UI coverage)
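To enforce the targets above, Vitest can fail any run that drops below configured thresholds; a minimal config sketch (the values shown mirror the infrastructure goal and would be tuned per layer):

```typescript
// vitest.config.ts (sketch)
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8',
      thresholds: {
        lines: 80,
        functions: 80,
        branches: 80,
        statements: 80,
      },
    },
  },
});
```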
### Running Coverage Reports
```bash
# Unit + Integration coverage
npm run test:coverage
# View HTML report
open coverage/index.html
# E2E coverage (via Istanbul)
npm run test:e2e:coverage
```
### What to Prioritize
1. **Domain Entities:** Invariants, validation rules, state transitions
2. **Use Cases:** Orchestration logic, error handling, port interactions
3. **Repositories:** CRUD operations, query builders, transaction handling
4. **Adapters:** External API clients, OAuth flows, result importers
**What NOT to prioritize:**
- Trivial getters/setters
- Framework boilerplate (Express route handlers)
- UI styling (covered by visual regression tests if needed)
---
## Continuous Testing
### Watch Mode (Development)
```bash
# Auto-run unit tests on file changes
npm run test:watch
# Auto-run integration tests (slower, but useful for DB work)
npm run test:integration:watch
```
### CI/CD Pipeline
```mermaid
graph LR
A[Code Push] --> B[Unit Tests]
B --> C[Integration Tests]
C --> D[E2E Tests]
D --> E[Deploy to Staging]
```
**Execution Order:**
1. **Unit Tests** (parallel, <1 second) — fail fast on logic errors
2. **Integration Tests** (sequential, ~10 seconds) — catch infrastructure issues
3. **E2E Tests** (sequential, ~2 minutes) — validate full workflows
4. **Deploy** — only if all tests pass
**Parallelization:**
- Unit tests run in parallel (Vitest default)
- Integration tests run sequentially (avoid database conflicts)
- E2E tests run sequentially (UI interactions are stateful)
---
## Testing Best Practices
### 1. Test Behavior, Not Implementation
**❌ Bad (overfitted to implementation):**
```typescript
it('should call repository.save() once', async () => {
  const repo = { save: vi.fn(), findById: vi.fn(), findByName: vi.fn() };
  const useCase = new CreateLeagueUseCase(repo);
  await useCase.execute({ name: 'Test' });
  expect(repo.save).toHaveBeenCalledTimes(1);
});
```
**✅ Good (tests observable behavior):**
```typescript
it('should persist the league to the repository', async () => {
  const repo = new InMemoryLeagueRepository();
  const useCase = new CreateLeagueUseCase(repo);

  const result = await useCase.execute({ name: 'Test' });

  expect(result.isSuccess()).toBe(true);
  const league = await repo.findById(result.value.id);
  expect(league?.name).toBe('Test');
});
```
### 2. Mock Only at Architecture Boundaries
**Ports (interfaces)** should be mocked in use case tests:
```typescript
const mockRepo: ILeagueRepository = {
  save: vi.fn().mockResolvedValue(undefined),
  findById: vi.fn().mockResolvedValue(null),
  findByName: vi.fn().mockResolvedValue(null),
};
```
**Domain entities** should NEVER be mocked:
```typescript
// ❌ Don't do this
const mockLeague = mock<League>();
// ✅ Do this
const league = new League('Test League', { maxTeams: 10 });
```
### 3. Keep Tests Readable and Maintainable
**Arrange-Act-Assert Pattern:**
```typescript
it('should calculate standings correctly', () => {
  // Arrange: Set up test data
  const league = createTestLeague();
  const teams = [createTestTeam('Team A'), createTestTeam('Team B')];
  const results = [createTestResult(teams[0], { position: 1 })];

  // Act: Perform the action
  const standings = league.calculateStandings(results);

  // Assert: Verify the outcome
  expect(standings[0].team).toBe(teams[0]);
  expect(standings[0].points).toBe(25);
});
```
### 4. Test Error Scenarios
Don't just test the happy path:
```typescript
describe('League.addTeam()', () => {
  it('should add team successfully', () => { /* ... */ });

  it('should fail when team limit reached', () => {
    const league = createTestLeague({ maxTeams: 1 });
    league.addTeam(createTestTeam('Team A'));

    const result = league.addTeam(createTestTeam('Team B'));

    expect(result.isFailure()).toBe(true);
    expect(result.error.message).toBe('Team limit reached');
  });

  it('should fail when adding duplicate team', () => { /* ... */ });
});
```
---
## Common Patterns
### Setting Up Test Fixtures
**Factory Functions:**
```typescript
// tests/support/factories.ts
export function createTestLeague(overrides?: Partial<LeagueProps>): League {
  return new League('Test League', {
    maxTeams: 10,
    scoringSystem: 'F1',
    ...overrides,
  });
}

export function createTestTeam(name: string): Team {
  return new Team(name, { drivers: ['Driver 1', 'Driver 2'] });
}
```
### Mocking Ports in Use Case Tests
```typescript
// tests/unit/application/CreateLeagueUseCase.test.ts
describe('CreateLeagueUseCase', () => {
  let mockRepo: ILeagueRepository;
  let useCase: CreateLeagueUseCase;

  beforeEach(() => {
    mockRepo = {
      save: vi.fn().mockResolvedValue(undefined),
      findById: vi.fn().mockResolvedValue(null),
      findByName: vi.fn().mockResolvedValue(null),
    };
    useCase = new CreateLeagueUseCase(mockRepo);
  });

  it('should create a league when name is unique', async () => {
    const result = await useCase.execute({ name: 'New League' });

    expect(result.isSuccess()).toBe(true);
    expect(mockRepo.save).toHaveBeenCalledWith(
      expect.objectContaining({ name: 'New League' })
    );
  });
});
```
### Database Cleanup Strategies
**Integration Tests:**
```typescript
// tests/integration/setup.ts
import { sql } from './database';
export async function cleanDatabase() {
  await sql`TRUNCATE TABLE event_results CASCADE`;
  await sql`TRUNCATE TABLE events CASCADE`;
  await sql`TRUNCATE TABLE teams CASCADE`;
  await sql`TRUNCATE TABLE leagues CASCADE`;
}

beforeEach(async () => {
  await cleanDatabase();
});
```
**E2E Tests:**
```typescript
// tests/e2e/support/hooks.ts
import { test as base } from '@playwright/test';
export const test = base.extend({
  page: async ({ page }, use) => {
    // Clean database before each test
    await fetch('http://localhost:3000/test/cleanup', { method: 'POST' });
    await use(page);
  },
});
```
### Playwright Page Object Pattern
```typescript
// tests/e2e/pages/LeaguePage.ts
import type { Page } from '@playwright/test';

export class LeaguePage {
  constructor(private page: Page) {}

  async navigateToCreateLeague() {
    await this.page.goto('/leagues/create');
  }

  async fillLeagueForm(data: { name: string; maxTeams: number }) {
    await this.page.fill('[name="name"]', data.name);
    await this.page.fill('[name="maxTeams"]', data.maxTeams.toString());
  }

  async submitForm() {
    await this.page.click('button[type="submit"]');
  }

  async getSuccessMessage() {
    return this.page.textContent('.success-message');
  }
}

// Usage in test
test('should create league', async ({ page }) => {
  const leaguePage = new LeaguePage(page);
  await leaguePage.navigateToCreateLeague();
  await leaguePage.fillLeagueForm({ name: 'Test', maxTeams: 10 });
  await leaguePage.submitForm();
  expect(await leaguePage.getSuccessMessage()).toBe('League created');
});
```
---
## Cross-References
- **[`ARCHITECTURE.md`](./ARCHITECTURE.md)** — Layer boundaries, port definitions, and dependency rules that guide test structure
- **[`TECH.md`](./TECH.md)** — Detailed tooling specifications (Vitest, Playwright, Testcontainers configuration)
- **[`package.json`](../package.json)** — Test scripts and commands (`test:unit`, `test:integration`, `test:e2e`, `test:coverage`)
---
## Summary
GridPilot's testing strategy ensures:
- **Business logic is correct** (unit tests for domain/application layers)
- **Infrastructure works reliably** (integration tests for repositories/adapters)
- **User workflows function end-to-end** (E2E tests for full stack)
By following BDD principles and maintaining clear test organization, the team can confidently evolve GridPilot while preserving correctness and stability.

28
package.json Normal file
View File

@@ -0,0 +1,28 @@
{
  "name": "gridpilot",
  "version": "0.1.0",
  "private": true,
  "description": "GridPilot - Clean Architecture monorepo for web platform and Electron companion app",
  "engines": {
    "node": ">=20.0.0"
  },
  "workspaces": [
    "src/packages/*",
    "src/apps/*"
  ],
  "scripts": {
    "dev": "echo 'Development server placeholder - to be configured'",
    "build": "echo 'Build all packages placeholder - to be configured'",
    "test": "vitest run",
    "test:unit": "vitest run tests/unit",
    "test:integration": "vitest run tests/integration",
    "test:e2e": "vitest run tests/e2e",
    "test:watch": "vitest watch",
    "typecheck": "tsc --noEmit"
  },
  "devDependencies": {
    "@types/node": "^22.10.2",
    "typescript": "^5.7.2",
    "vitest": "^2.1.8"
  }
}