bdd tests

`plans/bdd_testing_concept.md` (new file):

# BDD E2E Testing Concept - GridPilot

## 1. Vision

Our BDD (Behavior-Driven Development) E2E tests serve as the **final source of truth** for the GridPilot platform. They define the "Final Expectations" by describing user journeys in plain English (Gherkin), ensuring that the core value proposition—automation and unified league management—is delivered.

We focus on **outcomes**, not visual implementation.

## 2. Core Scenarios (Gherkin)
### Feature: Driver Onboarding and League Participation

**Scenario: Driver joins a league and prepares for a race**

    Given I am a registered driver "John Doe"
    And I am on the "Leagues Discovery" page
    When I select the "European GT League"
    And I click "Join League"
    Then I should see myself in the "Roster"
    And the "Dashboard" should show the next race at "Monza"
### Feature: Admin League Management and Automation

**Scenario: Admin schedules a race via Companion and verifies on Website**

    Given I am a league admin for "European GT League"
    When the Companion App schedules a "GT3 Monza" race
    Then the Website "Schedule" page should display the "Monza" race
    And drivers should be able to "Register" for this race
### Feature: Results and Standings

**Scenario: Admin imports results and standings update automatically**

    Given a completed race "Monza GP" exists in "European GT League"
    And I am an admin on the "Import Results" page
    When I upload the iRacing results CSV
    Then the "Race Results" page should show "John Doe" in P1
    And the "Standings" should reflect the new points for "John Doe"
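Each scenario above maps one-to-one onto a Gherkin `.feature` file. A sketch of the first scenario in actual Gherkin syntax (only the `Feature:`/`Scenario:` keywords and indentation are added; the step wording is taken verbatim from above):

```gherkin
Feature: Driver Onboarding and League Participation

  Scenario: Driver joins a league and prepares for a race
    Given I am a registered driver "John Doe"
    And I am on the "Leagues Discovery" page
    When I select the "European GT League"
    And I click "Join League"
    Then I should see myself in the "Roster"
    And the "Dashboard" should show the next race at "Monza"
```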
## 3. Testing Hierarchy & Cleanup

### Layer 1: Unit Tests (STAY)

- **Location:** Adjacent to code (`*.test.ts`)
- **Focus:** Domain logic, value objects, pure business rules.
- **Status:** Keep as is. They are fast and catch logic errors early.
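As an illustration of what stays at this layer: a pure domain rule like the hypothetical `championshipPoints` below is synchronous, dependency-free, and testable in microseconds. The function name and scoring table are assumptions for the example, not GridPilot code.

```typescript
// Hypothetical domain rule: championship points for a finishing position.
// The scoring table is an illustrative assumption.
function championshipPoints(position: number): number {
  if (!Number.isInteger(position) || position < 1) {
    throw new RangeError("position must be a positive integer");
  }
  const table = [25, 18, 15, 12, 10, 8, 6, 4, 2, 1];
  // Positions outside the scoring table earn zero points.
  return table[position - 1] ?? 0;
}
```

A Vitest test for such a rule lives next to the code (e.g. as `championship-points.test.ts`) and never touches the database or UI.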
### Layer 2: Integration Tests (REFACTOR/REDUCE)

- **Location:** `tests/integration/`
- **Focus:** Adapter wiring and Use Case orchestration.
- **Cleanup:** Many integration tests that currently "smoke test" UI data flows (e.g., `standings-data-flow.integration.test.ts`) will be superseded by BDD E2E tests. We will keep only those that test complex infrastructure boundaries (e.g., database constraints, external API adapters).
### Layer 3: BDD E2E Tests (NEW CORE)

- **Location:** `tests/e2e/bdd/` (using Playwright)
- **Focus:** Final outcome validation.
- **Status:** These become the primary "Acceptance Tests".
## 4. Obsolete and Redundant Tests Audit

Based on the new BDD E2E focus, the following tests are candidates for removal or refactoring:

### E2E Layer

- `tests/e2e/website/league-pages.e2e.test.ts`: **Refactor**. This file contains many "Verify X is visible" tests. These should be absorbed into the BDD scenarios (e.g., "Then I should see the Monza race").
- `tests/e2e/website/route-coverage.e2e.test.ts`: **Keep**. This is a technical smoke test ensuring no 404s, which is different from behavioral testing.

### Integration Layer

- `tests/integration/league/standings-data-flow.integration.test.ts`: **Obsolete**. The BDD scenario "Admin imports results and standings update automatically" covers this end-to-end.
- `tests/integration/league/schedule-data-flow.integration.test.ts`: **Obsolete**. Covered by the "Admin schedules a race" BDD scenario.
- `tests/integration/league/stats-data-flow.integration.test.ts`: **Obsolete**. Should be part of the "Results and Standings" BDD feature.
- `tests/integration/database/*`: **Keep**. These ensure data integrity at the persistence layer, which E2E tests might miss (e.g., unique constraints).
## 5. Testing Hierarchy at a Glance

| Layer | Tool | Responsibility | Data |
|-------|------|----------------|------|
| **Unit** | Vitest | Business Rules / Domain Invariants | Mocks / In-memory |
| **Integration** | Vitest | Infrastructure Boundaries (DB, API Contracts) | In-memory / Docker DB |
| **BDD E2E** | Playwright | Final User Outcomes / Acceptance Criteria | Full Stack (Docker) |
## 6. Implementation Plan

1. **Infrastructure Setup:** Configure Playwright to support Gherkin-style reporting or use a lightweight wrapper.
2. **Scenario Implementation:**
   - Implement "Driver Journey" (Discovery -> Join -> Dashboard).
   - Implement "Admin Journey" (Schedule -> Import -> Standings).
3. **Cleanup:**
   - Migrate logic from `league-pages.e2e.test.ts` to BDD steps.
   - Remove redundant `*-data-flow.integration.test.ts` files.
---

`plans/clean_integration_strategy.md` (new file):

# Concept: Breaking Down Complexity via Clean Integration Testing
## 1. The Problem: The "Big Bang" Implementation Trap

Complex features like "Standings Recalculation" or "Automated Race Scheduling" often fail because developers try to implement the entire flow at once. This leads to:

- Massive PRs that are impossible to review.
- Brittle code that is hard to debug.
- "Big Bang" E2E tests that fail for obscure reasons.
## 2. The Solution: The "Use Case First" Integration Strategy

We break down complex tasks by focusing on the **Application Use Case** as the unit of integration. We don't wait for the UI or the real database to be ready. We use **Clean Integration Tests** to prove the orchestration logic in isolation.

### 2.1 The "Vertical Slice" Breakdown

Instead of implementing by layer (DB first, then API, then UI), we implement by **Behavioral Slice**:

1. **Slice A:** The core logic (Domain + Use Case).
2. **Slice B:** The persistence (Repository Adapter).
3. **Slice C:** The delivery (API Controller + Presenter).
4. **Slice D:** The UI (React Component).
## 3. The Clean Integration Test Pattern

A "Clean Integration Test" is a test that verifies a Use Case's interaction with its Ports using **In-Memory Adapters**.

### 3.1 Why In-Memory?

- **Speed:** Runs in milliseconds.
- **Determinism:** No external state or network issues.
- **Focus:** Tests the *orchestration* (e.g., "Does the Use Case call the Repository and then the Event Publisher?").
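A sketch of what such an in-memory adapter looks like. The `Race` shape and port name are assumptions for illustration, not the actual GridPilot interfaces:

```typescript
// A port (interface) and its in-memory double. The double honours the
// same contract as the real Postgres adapter but keeps data in a Map.
interface Race {
  id: string;
  status: "Scheduled" | "Completed";
}

interface RaceRepository {
  save(race: Race): Promise<void>;
  findById(id: string): Promise<Race | undefined>;
}

class InMemoryRaceRepository implements RaceRepository {
  private readonly races = new Map<string, Race>();

  async save(race: Race): Promise<void> {
    this.races.set(race.id, { ...race }); // copy to avoid shared mutable state
  }

  async findById(id: string): Promise<Race | undefined> {
    return this.races.get(id);
  }
}
```

Because the use case depends only on the `RaceRepository` port, the same orchestration test can later run against the real adapter unchanged.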
### 3.2 Example: Breaking Down "Import Race Results"

**Task:** Implement CSV Result Import.

**Step 1: Integration Test for the Use Case (The Orchestrator)**

- **Given:** An `InMemoryRaceRepository` with a scheduled race.
- **And:** An `InMemoryStandingRepository`.
- **When:** `ImportRaceResultsUseCase` is executed with valid CSV data.
- **Then:** The `RaceRepository` should mark the race as "Completed".
- **And:** The `StandingRepository` should contain updated points.
- **And:** An `EventPublisher` should emit `ResultsImportedEvent`.
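The Step 1 orchestration can be sketched against in-memory doubles like this. The CSV shape, points table, and exact class responsibilities are assumptions for the example, not the production implementation:

```typescript
// Sketch of the orchestration under test: parse CSV rows, update
// standings, and emit a domain event. All collaborators are in-memory.
class InMemoryStandingRepository {
  readonly points = new Map<string, number>();
  award(driver: string, pts: number): void {
    this.points.set(driver, (this.points.get(driver) ?? 0) + pts);
  }
}

class InMemoryEventPublisher {
  readonly emitted: string[] = [];
  publish(event: string): void { this.emitted.push(event); }
}

class ImportRaceResultsUseCase {
  constructor(
    private readonly standings: InMemoryStandingRepository,
    private readonly events: InMemoryEventPublisher,
  ) {}

  // Assumed CSV shape: one "driver,finishingPosition" pair per line.
  execute(csv: string): void {
    const pointsTable = [25, 18, 15]; // illustrative top-3 scoring
    for (const line of csv.trim().split("\n")) {
      const [driver, pos] = line.split(",");
      this.standings.award(driver, pointsTable[Number(pos) - 1] ?? 0);
    }
    this.events.publish("ResultsImportedEvent");
  }
}
```

The test then asserts only on the doubles: points awarded, event emitted. No database, no UI.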
**Step 2: Integration Test for the Repository (The Persistence)**

- **Given:** A real Docker-based PostgreSQL database.
- **When:** `PostgresRaceRepository.save()` is called with a completed race.
- **Then:** The database record should reflect the status change.
- **And:** All related `RaceResult` entities should be persisted.
**Step 3: Integration Test for the API (The Contract)**

- **Given:** A running API server.
- **When:** A `POST /leagues/:id/results` request is made with a CSV file.
- **Then:** The response should be `201 Created`.
- **And:** The returned DTO should match the `RaceResultsDTO` contract.
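Stripped of the HTTP server, the contract Step 3 asserts boils down to a handler returning a status code and a DTO. A hypothetical sketch (the real route would sit behind the API framework and a running server; the DTO fields here are illustrative):

```typescript
// Hypothetical response contract for POST /leagues/:id/results.
interface RaceResultsDTO {
  leagueId: string;
  importedRows: number;
}

function handlePostResults(
  leagueId: string,
  csv: string,
): { status: number; body?: RaceResultsDTO } {
  const rows = csv.trim().split("\n").filter((r) => r.length > 0);
  if (rows.length === 0) {
    return { status: 400 }; // reject an empty upload
  }
  return { status: 201, body: { leagueId, importedRows: rows.length } };
}
```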
## 4. The "Task Breakdown" Workflow for Developers

When faced with a complex task, follow this workflow:

1. **Define the Use Case Port:** What does the system *need* to do? (e.g., `IImportResultsPort`).
2. **Write the Use Case Integration Test:** Use `InMemory` doubles to define the success path.
3. **Implement the Use Case:** Make the integration test pass.
4. **Implement the Infrastructure:** Create the real `Postgres` or `API` adapters.
5. **Verify with BDD E2E:** Finally, connect the UI and verify the "Final Expectation."
## 5. Benefits of this Approach

- **Early Feedback:** You know the logic is correct before you even touch the UI.
- **Parallel Development:** One developer can work on the Use Case while another works on the UI, both using the same Port definition.
- **Debuggability:** If the E2E test fails, you can check the Integration tests to see if the failure is in the *logic* or the *wiring*.
- **PR Quality:** PRs can be broken down by slice (e.g., "PR 1: Use Case + Integration Tests", "PR 2: Repository Implementation").

---

*This concept ensures that complexity is managed through strict architectural boundaries and fast, reliable feedback loops.*