gridpilot.gg/plans/clean_integration_strategy.md
2026-01-21 23:46:48 +01:00

# Concept: Breaking Down Complexity via Clean Integration Testing
## 1. The Problem: The "Big Bang" Implementation Trap
Complex features like "Standings Recalculation" or "Automated Race Scheduling" often fail because developers try to implement the entire flow at once. This leads to:
- Massive PRs that are impossible to review.
- Brittle code that is hard to debug.
- "Big Bang" E2E tests that fail for obscure reasons.
## 2. The Solution: The "Use Case First" Integration Strategy
We break down complex tasks by focusing on the **Application Use Case** as the unit of integration. We don't wait for the UI or the real database to be ready. We use **Clean Integration Tests** to prove the orchestration logic in isolation.
### 2.1 The "Vertical Slice" Breakdown
Instead of implementing by layer (DB first, then API, then UI), we implement by **Behavioral Slice**:
1. **Slice A:** The core logic (Domain + Use Case).
2. **Slice B:** The persistence (Repository Adapter).
3. **Slice C:** The delivery (API Controller + Presenter).
4. **Slice D:** The UI (React Component).
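As a rough sketch, the slices map onto distinct code artifacts that meet at a shared Port. All names below (`Race`, `ImportResultsPort`, `RaceRepository`, `makeResultsHandler`) are illustrative, not taken from the actual codebase:

```typescript
// Hypothetical mapping of the four slices to artifacts.

// Slice A: domain model + the use case port (the behavior the system needs).
interface Race { id: string; status: "Scheduled" | "Completed" }
interface ImportResultsPort {
  execute(raceId: string, csv: string): void;
}

// Slice B: the persistence port the use case drives; later implemented
// by a real Postgres-backed repository.
interface RaceRepository {
  findById(id: string): Race | undefined;
  save(race: Race): void;
}

// Slice C: the delivery layer depends only on the port, never on a
// concrete use case class, so it can be built and tested in parallel.
function makeResultsHandler(useCase: ImportResultsPort) {
  return (raceId: string, csvBody: string) => {
    useCase.execute(raceId, csvBody);
    return { status: 201 };
  };
}

// Slice D (the React component) would call the same endpoint; omitted here.
```

Because Slices B, C, and D all depend only on the interfaces from Slice A, each can be implemented and merged independently.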
## 3. The Clean Integration Test Pattern
A "Clean Integration Test" is a test that verifies a Use Case's interaction with its Ports using **In-Memory Adapters**.
### 3.1 Why In-Memory?
- **Speed:** Runs in milliseconds.
- **Determinism:** No external state or network issues.
- **Focus:** Tests the *orchestration* (e.g., "Does the Use Case call the Repository and then the Event Publisher?").
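In practice an in-memory adapter is little more than a `Map` behind the same port the production adapter will implement. A minimal sketch, assuming an illustrative `RaceRepository` port (the real project's interface may differ):

```typescript
// Illustrative port; the production PostgresRaceRepository would
// implement this same interface.
interface Race { id: string; status: "Scheduled" | "Completed" }

interface RaceRepository {
  findById(id: string): Race | undefined;
  save(race: Race): void;
}

// In-memory adapter: deterministic, millisecond-fast, no I/O.
class InMemoryRaceRepository implements RaceRepository {
  private races = new Map<string, Race>();

  findById(id: string): Race | undefined {
    return this.races.get(id);
  }

  save(race: Race): void {
    // Store a copy so callers cannot mutate the "persisted" state.
    this.races.set(race.id, { ...race });
  }
}
```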
### 3.2 Example: Breaking Down "Import Race Results"
**Task:** Implement CSV Result Import.
**Step 1: Integration Test for the Use Case (The Orchestrator)**
- **Given:** An `InMemoryRaceRepository` with a scheduled race.
- **And:** An `InMemoryStandingRepository`.
- **When:** `ImportRaceResultsUseCase` is executed with valid CSV data.
- **Then:** The `RaceRepository` should contain the race marked as "Completed".
- **And:** The `StandingRepository` should contain updated points.
- **And:** An `EventPublisher` should emit `ResultsImportedEvent`.
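Step 1 could look like the following runnable sketch. Every name and the simplified `driverId,points` CSV format are illustrative stand-ins, not the project's real contracts:

```typescript
// Ports the use case orchestrates (illustrative signatures).
interface Race { id: string; status: "Scheduled" | "Completed" }

interface RaceRepository {
  findById(id: string): Race | undefined;
  save(race: Race): void;
}
interface StandingRepository {
  pointsFor(driverId: string): number;
  addPoints(driverId: string, points: number): void;
}
interface EventPublisher {
  publish(event: { type: string; raceId: string }): void;
}

// In-memory adapters used only by the test.
class InMemoryRaceRepository implements RaceRepository {
  private races = new Map<string, Race>();
  findById(id: string) { return this.races.get(id); }
  save(race: Race) { this.races.set(race.id, { ...race }); }
}
class InMemoryStandingRepository implements StandingRepository {
  private points = new Map<string, number>();
  pointsFor(driverId: string) { return this.points.get(driverId) ?? 0; }
  addPoints(driverId: string, pts: number) {
    this.points.set(driverId, this.pointsFor(driverId) + pts);
  }
}
class RecordingEventPublisher implements EventPublisher {
  events: { type: string; raceId: string }[] = [];
  publish(event: { type: string; raceId: string }) { this.events.push(event); }
}

// The orchestrator under test: parses "driverId,points" lines,
// updates standings, completes the race, and emits the event.
class ImportRaceResultsUseCase {
  constructor(
    private races: RaceRepository,
    private standings: StandingRepository,
    private events: EventPublisher,
  ) {}

  execute(raceId: string, csv: string): void {
    const race = this.races.findById(raceId);
    if (!race) throw new Error(`Unknown race: ${raceId}`);
    for (const line of csv.trim().split("\n")) {
      const [driverId, points] = line.split(",");
      this.standings.addPoints(driverId, Number(points));
    }
    this.races.save({ ...race, status: "Completed" });
    this.events.publish({ type: "ResultsImportedEvent", raceId });
  }
}
```

The test then wires the three fakes, calls `execute`, and asserts on each port in turn, which is exactly the Given/When/Then above, with no database or network involved.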
**Step 2: Integration Test for the Repository (The Persistence)**
- **Given:** A real Docker-based PostgreSQL database.
- **When:** `PostgresRaceRepository.save()` is called with a completed race.
- **Then:** The database record should reflect the status change.
- **And:** All related `RaceResult` entities should be persisted.
**Step 3: Integration Test for the API (The Contract)**
- **Given:** A running API server.
- **When:** A `POST /leagues/:id/results` request is made with a CSV file.
- **Then:** The response should be `201 Created`.
- **And:** The returned DTO should match the `RaceResultsDTO` contract.
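The shape of such a contract test can be sketched with only Node built-ins. The route, the trivial stub server, and the DTO fields are stand-ins for the real API, which the source does not specify:

```typescript
import { createServer } from "node:http";

// Minimal stand-in for the real API server; a real test would boot
// the actual application instead.
const server = createServer((req, res) => {
  if (req.method === "POST" && /^\/leagues\/[^/]+\/results$/.test(req.url ?? "")) {
    res.writeHead(201, { "Content-Type": "application/json" });
    // Stand-in for the RaceResultsDTO contract.
    res.end(JSON.stringify({ raceId: "race-1", results: [] }));
  } else {
    res.writeHead(404);
    res.end();
  }
});

// Start the server on a random free port, POST a CSV body, and
// return the status and parsed body for assertion.
async function runContractTest(): Promise<{ status: number; body: any }> {
  await new Promise<void>((resolve) => server.listen(0, resolve));
  const { port } = server.address() as { port: number };
  const response = await fetch(`http://127.0.0.1:${port}/leagues/league-1/results`, {
    method: "POST",
    headers: { "Content-Type": "text/csv" },
    body: "driver-1,25",
  });
  const body = await response.json();
  server.close();
  return { status: response.status, body };
}
```

The assertions then check the status code and that the response matches the DTO contract's required fields, keeping the test focused on the HTTP boundary rather than business logic.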
## 4. The "Task Breakdown" Workflow for Developers
When faced with a complex task, follow this workflow:
1. **Define the Use Case Port:** What does the system *need* to do? (e.g., `IImportResultsPort`).
2. **Write the Use Case Integration Test:** Use `InMemory` doubles to define the success path.
3. **Implement the Use Case:** Make the integration test pass.
4. **Implement the Infrastructure:** Create the real `Postgres` or `API` adapters.
5. **Verify with BDD E2E:** Finally, connect the UI and verify the "Final Expectation."
## 5. Benefits of this Approach
- **Early Feedback:** You know the logic is correct before you even touch the UI.
- **Parallel Development:** One developer can work on the Use Case while another works on the UI, both using the same Port definition.
- **Debuggability:** If the E2E test fails, you can check the Integration tests to see if the failure is in the *logic* or the *wiring*.
- **PR Quality:** PRs can be broken down by slice (e.g., "PR 1: Use Case + Integration Tests", "PR 2: Repository Implementation").
---
*This concept ensures that complexity is managed through strict architectural boundaries and fast, reliable feedback loops.*