# API Smoke Tests

This directory (`tests/e2e/api/`) contains true end-to-end API smoke tests that make direct HTTP requests to the running API server to validate endpoint functionality and detect issues such as "Presenter not presented" errors.

## Overview

The API smoke tests are designed to:

1. **Test all public API endpoints** - Make requests to discover and validate endpoints
2. **Detect presenter errors** - Identify use cases that return errors without calling `this.output.present()`
3. **Validate response formats** - Ensure endpoints return proper data structures
4. **Test error handling** - Verify graceful handling of invalid inputs
5. **Generate detailed reports** - Create JSON and Markdown reports of findings

## Files

- `api-smoke.test.ts` - Main Playwright test file
- `README.md` - This documentation

## Usage

### Local Testing

Run the API smoke tests against a locally running API:

```bash
# Start the API server (in one terminal)
npm run docker:dev:up

# Run the smoke tests (in another terminal)
npm run test:api:smoke
```

### Docker Testing (Recommended)

Run the tests in the full Docker e2e environment:

```bash
# Start the complete e2e environment
npm run docker:e2e:up

# Run the smoke tests in Docker
npm run test:api:smoke:docker

# Or use the unified command, which runs all e2e tests including the API smoke tests
npm run test:e2e:website
```

### CI/CD Integration

Add to your CI pipeline:

```yaml
# GitHub Actions example
- name: Start E2E Environment
  run: npm run docker:e2e:up

- name: Run API Smoke Tests
  run: npm run test:api:smoke:docker

- name: Upload Test Reports
  uses: actions/upload-artifact@v3
  with:
    name: api-smoke-reports
    path: |
      api-smoke-report.json
      api-smoke-report.md
      playwright-report/
```

## Test Coverage

The smoke tests cover:

### Race Endpoints
- `/races/all` - Get all races
- `/races/total-races` - Get total count
- `/races/page-data` - Get paginated data
- `/races/reference/penalty-types` - Reference data
- `/races/{id}` - Race details (with invalid IDs)
- `/races/{id}/results` - Race results
- `/races/{id}/sof` - Strength of field
- `/races/{id}/protests` - Protests
- `/races/{id}/penalties` - Penalties

### League Endpoints
- `/leagues/all` - All leagues
- `/leagues/available` - Available leagues
- `/leagues/{id}` - League details
- `/leagues/{id}/standings` - Standings
- `/leagues/{id}/schedule` - Schedule

### Team Endpoints
- `/teams/all` - All teams
- `/teams/{id}` - Team details
- `/teams/{id}/members` - Team members

### Driver Endpoints
- `/drivers/leaderboard` - Leaderboard
- `/drivers/total-drivers` - Total count
- `/drivers/{id}` - Driver details

### Media Endpoints
- `/media/avatar/{id}` - Avatar retrieval
- `/media/{id}` - Media retrieval

### Sponsor Endpoints
- `/sponsors/pricing` - Sponsorship pricing
- `/sponsors/dashboard` - Sponsor dashboard
- `/sponsors/{id}` - Sponsor details

### Auth Endpoints
- `/auth/login` - Login
- `/auth/signup` - Signup
- `/auth/session` - Session info

### Dashboard Endpoints
- `/dashboard/overview` - Overview
- `/dashboard/feed` - Activity feed

### Analytics Endpoints
- `/analytics/metrics` - Metrics
- `/analytics/dashboard` - Dashboard data

### Admin Endpoints
- `/admin/users` - User management

### Protest Endpoints
- `/protests/race/{id}` - Race protests

### Payment Endpoints
- `/payments/wallet` - Wallet info

### Notification Endpoints
- `/notifications/unread` - Unread notifications

### Feature Flags
- `/features` - Feature flag configuration
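
The endpoint groups above are driven from test arrays in `api-smoke.test.ts`. A minimal sketch of what such a table could look like (the `SmokeEndpoint` shape, the entries, and `resolvePath` are illustrative assumptions, not the actual contents of the test file):

```typescript
// Hypothetical shape for an entry in the smoke-test endpoint table.
interface SmokeEndpoint {
  path: string;         // request path; may contain an {id} placeholder
  expectStatus: number; // status code expected for this probe
}

// A few illustrative entries from the race group.
const raceEndpoints: SmokeEndpoint[] = [
  { path: "/races/all", expectStatus: 200 },
  { path: "/races/total-races", expectStatus: 200 },
  { path: "/races/{id}", expectStatus: 404 }, // probed with an invalid id
];

// Substitute the {id} placeholder before issuing a request.
function resolvePath(endpoint: SmokeEndpoint, id: string): string {
  return endpoint.path.replace("{id}", id);
}
```

Each entry can then be looped through Playwright's `request` fixture, asserting on the returned status.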

## Reports

After running tests, three reports are generated:

1. **`api-smoke-report.json`** - Detailed JSON report with all test results
2. **`api-smoke-report.md`** - Human-readable Markdown report
3. **Playwright HTML report** - Interactive test report (in `playwright-report/`)

### Report Structure

```json
{
  "timestamp": "2024-01-07T22:00:00Z",
  "summary": {
    "total": 50,
    "success": 45,
    "failed": 5,
    "presenterErrors": 3,
    "avgResponseTime": 45.2
  },
  "results": [...],
  "failures": [...]
}
```
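
The summary fields above can be derived mechanically from the per-endpoint results. A sketch of that computation, assuming a per-result shape with `ok`, `presenterError`, and `ms` fields (the `EndpointResult` shape and `summarize` helper are illustrative; only the summary field names come from the sample report):

```typescript
// Field names mirror the sample report above.
interface SmokeSummary {
  total: number;
  success: number;
  failed: number;
  presenterErrors: number;
  avgResponseTime: number; // milliseconds
}

// Assumed shape of one endpoint's outcome.
interface EndpointResult {
  ok: boolean;             // request succeeded with the expected status
  presenterError: boolean; // response matched the "Presenter not presented" pattern
  ms: number;              // measured response time
}

function summarize(results: EndpointResult[]): SmokeSummary {
  const success = results.filter((r) => r.ok).length;
  const totalMs = results.reduce((sum, r) => sum + r.ms, 0);
  return {
    total: results.length,
    success,
    failed: results.length - success,
    presenterErrors: results.filter((r) => r.presenterError).length,
    avgResponseTime: results.length ? totalMs / results.length : 0,
  };
}
```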

## Detecting Presenter Errors

The test specifically looks for the "Presenter not presented" error pattern, matching any of:

- "Presenter not presented"
- "presenter not presented"
- Error messages containing these phrases

When found, these are flagged as **presenter errors** and require immediate attention.
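
A case-insensitive substring match covers all three variants above; a minimal sketch of such a check (the helper name is illustrative):

```typescript
// Flag any error message containing the phrase, regardless of case.
function isPresenterError(message: string): boolean {
  return /presenter not presented/i.test(message);
}
```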

## Troubleshooting

### API Not Ready

If tests fail because the API isn't ready:

```bash
# Check API health
curl http://localhost:3101/health

# Wait longer in the test setup (increase the timeout in the test file)
```
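
Instead of raising the timeout by hand, readiness can also be polled programmatically. A sketch of such a helper, with the probe injected so it can wrap `fetch` against the health endpoint in real use (the function and its defaults are assumptions, not part of the test file):

```typescript
// Poll a health probe until it reports ready or the deadline passes.
async function waitForApi(
  probe: () => Promise<boolean>,
  timeoutMs = 30_000,
  intervalMs = 500,
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await probe().catch(() => false)) return true;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return false;
}
```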

### Port Conflicts

```bash
# Stop conflicting services
npm run docker:e2e:down

# Check what's running
docker-compose -f docker-compose.e2e.yml ps
```

### Missing Data

The tests expect seeded data. If you see 404s:

```bash
# Ensure bootstrap is enabled
export GRIDPILOT_API_BOOTSTRAP=1

# Restart the services
npm run docker:e2e:clean && npm run docker:e2e:up
```

## Integration with Existing Tests

These smoke tests complement the existing test suite:

- **Unit tests** (`apps/api/src/**/*Service.test.ts`) - Test individual services
- **Integration tests** (`tests/integration/`) - Test component interactions
- **E2E website tests** (`tests/e2e/website/`) - Test website functionality
- **API smoke tests** (this directory) - Test API endpoints directly

## Best Practices

1. **Run before deployments** - Catch presenter errors before they reach production
2. **Run in CI/CD** - Automated regression testing
3. **Review reports** - Always check the generated reports
4. **Fix presenter errors immediately** - They indicate missing `.present()` calls
5. **Keep tests updated** - Add new endpoints as they're created

## Performance

- Typical runtime: 30-60 seconds
- Parallel execution: Playwright runs tests in parallel by default
- Response time tracking: all requests are timed, and the average response time is recorded in the reports

## Maintenance

When adding new endpoints:

1. Add them to the test arrays in `api-smoke.test.ts`
2. Test locally first: `npm run test:api:smoke`
3. Verify that the reports show the expected results
4. Commit the updated test file

When fixing presenter errors:

1. Run the smoke tests to identify failing endpoints
2. Check the specific error messages
3. Fix the use case to call `this.output.present()` before returning
4. Re-run the smoke tests to verify the fix
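
The fix in step 3 usually looks like the following. This is a hypothetical use case (the `GetRaceUseCase` name, `Output` interface, and repository callback are illustrative, not taken from the codebase); the point is that the error path must also call `this.output.present()` before returning:

```typescript
interface Output<T> {
  present(result: T): void;
}

type GetRaceResult =
  | { ok: true; race: { id: string } }
  | { ok: false; error: string };

class GetRaceUseCase {
  constructor(
    private output: Output<GetRaceResult>,
    private findRace: (id: string) => { id: string } | undefined,
  ) {}

  execute(id: string): void {
    const race = this.findRace(id);
    if (!race) {
      // Returning here WITHOUT presenting is what produces the
      // "Presenter not presented" error. Present the error first:
      this.output.present({ ok: false, error: `race ${id} not found` });
      return;
    }
    this.output.present({ ok: true, race });
  }
}
```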