refactor use cases
tests/e2e/api/README.md (new file, 244 lines)
@@ -0,0 +1,244 @@
# API Smoke Tests

This directory contains true end-to-end API smoke tests that make direct HTTP requests to the running API server to validate endpoint functionality and detect issues such as "Presenter not presented" errors.

## Overview

The API smoke tests are designed to:

1. **Test all public API endpoints** - Make requests to discover and validate endpoints
2. **Detect presenter errors** - Identify use cases that return errors without calling `this.output.present()`
3. **Validate response formats** - Ensure endpoints return proper data structures
4. **Test error handling** - Verify graceful handling of invalid inputs
5. **Generate detailed reports** - Create JSON and Markdown reports of findings

## Files

- `api-smoke.test.ts` - Main Playwright test file
- `README.md` - This documentation

## Usage

### Local Testing

Run the API smoke tests against a locally running API:

```bash
# Start the API server (in one terminal)
npm run docker:dev:up

# Run smoke tests (in another terminal)
npm run test:api:smoke
```

### Docker Testing (Recommended)

Run the tests in the full Docker e2e environment:

```bash
# Start the complete e2e environment
npm run docker:e2e:up

# Run smoke tests in Docker
npm run test:api:smoke:docker

# Or use the unified command, which runs all e2e tests including API smoke
npm run test:e2e:website
```

### CI/CD Integration

Add to your CI pipeline:

```yaml
# GitHub Actions example
- name: Start E2E Environment
  run: npm run docker:e2e:up

- name: Run API Smoke Tests
  run: npm run test:api:smoke:docker

- name: Upload Test Reports
  uses: actions/upload-artifact@v3
  with:
    name: api-smoke-reports
    path: |
      api-smoke-report.json
      api-smoke-report.md
      playwright-report/
```

## Test Coverage

The smoke tests cover:

### Race Endpoints
- `/races/all` - Get all races
- `/races/total-races` - Get total count
- `/races/page-data` - Get paginated data
- `/races/reference/penalty-types` - Reference data
- `/races/{id}` - Race details (with invalid IDs)
- `/races/{id}/results` - Race results
- `/races/{id}/sof` - Strength of field
- `/races/{id}/protests` - Protests
- `/races/{id}/penalties` - Penalties

### League Endpoints
- `/leagues/all` - All leagues
- `/leagues/available` - Available leagues
- `/leagues/{id}` - League details
- `/leagues/{id}/standings` - Standings
- `/leagues/{id}/schedule` - Schedule

### Team Endpoints
- `/teams/all` - All teams
- `/teams/{id}` - Team details
- `/teams/{id}/members` - Team members

### Driver Endpoints
- `/drivers/leaderboard` - Leaderboard
- `/drivers/total-drivers` - Total count
- `/drivers/{id}` - Driver details

### Media Endpoints
- `/media/avatar/{id}` - Avatar retrieval
- `/media/{id}` - Media retrieval

### Sponsor Endpoints
- `/sponsors/pricing` - Sponsorship pricing
- `/sponsors/dashboard` - Sponsor dashboard
- `/sponsors/{id}` - Sponsor details

### Auth Endpoints
- `/auth/login` - Login
- `/auth/signup` - Signup
- `/auth/session` - Session info

### Dashboard Endpoints
- `/dashboard/overview` - Overview
- `/dashboard/feed` - Activity feed

### Analytics Endpoints
- `/analytics/metrics` - Metrics
- `/analytics/dashboard` - Dashboard data

### Admin Endpoints
- `/admin/users` - User management

### Protest Endpoints
- `/protests/race/{id}` - Race protests

### Payment Endpoints
- `/payments/wallet` - Wallet info

### Notification Endpoints
- `/notifications/unread` - Unread notifications

### Feature Flags
- `/features` - Feature flag configuration

## Reports

After running the tests, three reports are generated:

1. **`api-smoke-report.json`** - Detailed JSON report with all test results
2. **`api-smoke-report.md`** - Human-readable Markdown report
3. **Playwright HTML report** - Interactive test report (in `playwright-report/`)

### Report Structure

```json
{
  "timestamp": "2024-01-07T22:00:00Z",
  "summary": {
    "total": 50,
    "success": 45,
    "failed": 5,
    "presenterErrors": 3,
    "avgResponseTime": 45.2
  },
  "results": [...],
  "failures": [...]
}
```

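The JSON report can also be consumed programmatically, for example to gate a CI deployment step. A minimal sketch, assuming the summary structure shown above (`reportIsHealthy` and its pass-rate threshold are illustrative, not part of the test suite):

```typescript
// Shape of the "summary" block in api-smoke-report.json (see structure above)
interface SmokeSummary {
  total: number;
  success: number;
  failed: number;
  presenterErrors: number;
  avgResponseTime: number;
}

// Hypothetical CI gate: reject any presenter error, and require a minimum pass rate.
function reportIsHealthy(summary: SmokeSummary, minPassRate = 0.9): boolean {
  if (summary.total === 0) return false;
  if (summary.presenterErrors > 0) return false;
  return summary.success / summary.total >= minPassRate;
}

// The sample numbers from the structure above: 45/50 passed, but 3 presenter errors
const sample: SmokeSummary = { total: 50, success: 45, failed: 5, presenterErrors: 3, avgResponseTime: 45.2 };
console.log(reportIsHealthy(sample)); // false: presenter errors present
```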
## Detecting Presenter Errors

The tests specifically look for the "Presenter not presented" error pattern. The following patterns are detected:

- `"Presenter not presented"`
- `"presenter not presented"`
- Error messages containing these phrases

When found, these are flagged as **presenter errors** and require immediate attention.

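The detection can be sketched as a small helper; this is an illustrative re-implementation of the patterns listed above, not the exact code from `api-smoke.test.ts`:

```typescript
// Illustrative helper mirroring the detection patterns above.
// Because JSON error fields (message, error) are embedded in the raw body text,
// a single case-insensitive scan of the body covers all three patterns.
function hasPresenterError(bodyText: string): boolean {
  return bodyText.toLowerCase().includes('presenter not presented');
}

console.log(hasPresenterError('{"message":"Presenter not presented"}')); // true
console.log(hasPresenterError('{"data":[]}')); // false
```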
## Troubleshooting

### API Not Ready

If tests fail because the API isn't ready:

```bash
# Check API health
curl http://localhost:3101/health

# If the API is healthy but tests still fail, increase the startup timeout in the test file
```

### Port Conflicts

```bash
# Stop conflicting services
npm run docker:e2e:down

# Check what's running
docker-compose -f docker-compose.e2e.yml ps
```

### Missing Data

The tests expect seeded data. If you see 404s:

```bash
# Ensure bootstrap is enabled
export GRIDPILOT_API_BOOTSTRAP=1

# Restart services
npm run docker:e2e:clean && npm run docker:e2e:up
```

## Integration with Existing Tests

This smoke test complements the existing test suite:

- **Unit tests** (`apps/api/src/**/*Service.test.ts`) - Test individual services
- **Integration tests** (`tests/integration/`) - Test component interactions
- **E2E website tests** (`tests/e2e/website/`) - Test website functionality
- **API smoke tests** (this directory) - Test API endpoints directly

## Best Practices

1. **Run before deployments** - Catch presenter errors before they reach production
2. **Run in CI/CD** - Automated regression testing
3. **Review reports** - Always check the generated reports
4. **Fix presenter errors immediately** - They indicate missing `.present()` calls
5. **Keep tests updated** - Add new endpoints as they're created

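For context on what fixing a presenter error looks like, here is a hedged sketch; the `Output` interface and `GetRaceUseCase` are illustrative stand-ins, not the project's actual types:

```typescript
// Illustrative presenter/output types; the real project's types may differ.
interface Output<T> {
  present(result: T): void;
}

type RaceResult = { ok: true; raceId: string } | { ok: false; error: string };

class GetRaceUseCase {
  constructor(private output: Output<RaceResult>) {}

  execute(raceId: string | undefined): void {
    if (!raceId) {
      // The buggy pattern is returning here WITHOUT presenting, which triggers
      // "Presenter not presented". The fix: present the error result first.
      this.output.present({ ok: false, error: 'raceId is required' });
      return;
    }
    this.output.present({ ok: true, raceId });
  }
}

// Minimal usage: capture what was presented on the error path
let presented: RaceResult | undefined;
new GetRaceUseCase({ present: r => { presented = r; } }).execute(undefined);
console.log(presented);
```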
## Performance

- Typical runtime: 30-60 seconds
- Parallel execution: Playwright runs tests in parallel by default
- Response time tracking: all requests are timed, and the average response time is included in the reports

## Maintenance

When adding new endpoints:

1. Add them to the test arrays in `api-smoke.test.ts`
2. Test locally first: `npm run test:api:smoke`
3. Verify the reports show the expected results
4. Commit the updated test file

When fixing presenter errors:

1. Run the smoke test to identify failing endpoints
2. Check the specific error messages
3. Fix the use case to call `this.output.present()` before returning
4. Re-run the smoke test to verify the fix

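Entries in the test arrays all share one shape, so adding an endpoint is a one-line change. A sketch, with `EndpointSpec` as an illustrative name for the inline type used in `api-smoke.test.ts`:

```typescript
// Illustrative name for the shape of an entry in the endpoint test arrays
interface EndpointSpec {
  method: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH';
  path: string;
  name?: string;
  body?: unknown; // only used by POST/PUT/PATCH entries
}

// Example: an entry like those in the public-endpoints array
const newEndpoint: EndpointSpec = {
  method: 'GET',
  path: '/races/reference/penalty-types',
  name: 'Get penalty types reference',
};

console.log(`${newEndpoint.method} ${newEndpoint.path}`); // GET /races/reference/penalty-types
```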
tests/e2e/api/api-smoke.test.ts (new file, 332 lines)
@@ -0,0 +1,332 @@
/**
 * API Smoke Test
 *
 * This test performs true e2e testing of all API endpoints by making direct HTTP requests
 * to the running API server. It tests for:
 * - Basic connectivity and response codes
 * - Presenter errors ("Presenter not presented")
 * - Response format validation
 * - Error handling
 *
 * This test is designed to run in the Docker e2e environment and can be executed with:
 *   npm run test:e2e:website (which runs everything in Docker)
 */

import { test, expect } from '@playwright/test';
import * as fs from 'fs/promises';
import * as path from 'path';

interface EndpointTestResult {
  endpoint: string;
  method: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH';
  status: number;
  success: boolean;
  error?: string;
  response?: unknown;
  hasPresenterError: boolean;
  responseTime: number;
}

interface TestReport {
  timestamp: string;
  summary: {
    total: number;
    success: number;
    failed: number;
    presenterErrors: number;
    avgResponseTime: number;
  };
  results: EndpointTestResult[];
  failures: EndpointTestResult[];
}

const API_BASE_URL = process.env.API_BASE_URL ?? process.env.NEXT_PUBLIC_API_BASE_URL ?? 'http://localhost:3101';

test.describe('API Smoke Tests', () => {
  const testResults: EndpointTestResult[] = [];

  test.beforeAll(async ({ request }) => {
    console.log(`[API SMOKE] Testing API at: ${API_BASE_URL}`);

    // Wait for the API to become ready, polling the health endpoint once per second
    const maxAttempts = 30;
    let apiReady = false;

    for (let i = 0; i < maxAttempts; i++) {
      try {
        const response = await request.get(`${API_BASE_URL}/health`);
        if (response.ok()) {
          apiReady = true;
          console.log(`[API SMOKE] API is ready after ${i + 1} attempt(s)`);
          break;
        }
      } catch {
        // Not reachable yet; keep polling
      }
      await new Promise(resolve => setTimeout(resolve, 1000));
    }

    if (!apiReady) {
      throw new Error(`API at ${API_BASE_URL} failed to become ready after ${maxAttempts} attempts`);
    }
  });

  test.afterAll(async () => {
    await generateReport();
  });

  test('all public GET endpoints respond correctly', async ({ request }) => {
    const endpoints = [
      // Race endpoints
      { method: 'GET' as const, path: '/races/all', name: 'Get all races' },
      { method: 'GET' as const, path: '/races/total-races', name: 'Get total races count' },
      { method: 'GET' as const, path: '/races/page-data', name: 'Get races page data' },
      { method: 'GET' as const, path: '/races/all/page-data', name: 'Get all races page data' },
      { method: 'GET' as const, path: '/races/reference/penalty-types', name: 'Get penalty types reference' },

      // League endpoints
      { method: 'GET' as const, path: '/leagues/all', name: 'Get all leagues' },
      { method: 'GET' as const, path: '/leagues/available', name: 'Get available leagues' },

      // Team endpoints
      { method: 'GET' as const, path: '/teams/all', name: 'Get all teams' },

      // Driver endpoints
      { method: 'GET' as const, path: '/drivers/leaderboard', name: 'Get driver leaderboard' },
      { method: 'GET' as const, path: '/drivers/total-drivers', name: 'Get total drivers count' },

      // Dashboard endpoints (may require auth, but should handle it gracefully)
      { method: 'GET' as const, path: '/dashboard/overview', name: 'Get dashboard overview' },

      // Analytics endpoints
      { method: 'GET' as const, path: '/analytics/metrics', name: 'Get analytics metrics' },

      // Sponsor endpoints
      { method: 'GET' as const, path: '/sponsors/pricing', name: 'Get sponsorship pricing' },

      // Payments endpoints
      { method: 'GET' as const, path: '/payments/wallet', name: 'Get wallet (may require auth)' },

      // Notifications endpoints
      { method: 'GET' as const, path: '/notifications/unread', name: 'Get unread notifications' },

      // Features endpoint
      { method: 'GET' as const, path: '/features', name: 'Get feature flags' },
    ];

    console.log(`\n[API SMOKE] Testing ${endpoints.length} public endpoints...`);

    for (const endpoint of endpoints) {
      await testEndpoint(request, endpoint);
    }

    // Report and assert on presenter errors
    const presenterErrors = testResults.filter(r => r.hasPresenterError);
    if (presenterErrors.length > 0) {
      console.log('\n❌ PRESENTER ERRORS FOUND:');
      presenterErrors.forEach(r => {
        console.log(`  ${r.method} ${r.endpoint} - ${r.error}`);
      });
    }

    expect(presenterErrors.length).toBe(0);
  });

  test('POST endpoints handle requests gracefully', async ({ request }) => {
    const endpoints = [
      { method: 'POST' as const, path: '/auth/login', name: 'Login', body: { email: 'test@example.com', password: 'test' } },
      { method: 'POST' as const, path: '/auth/signup', name: 'Signup', body: { email: 'test@example.com', password: 'test', name: 'Test User' } },
      { method: 'POST' as const, path: '/races/123/register', name: 'Register for race', body: { driverId: 'test-driver' } },
      { method: 'POST' as const, path: '/races/protests/file', name: 'File protest', body: { raceId: '123', driverId: '456', description: 'Test protest' } },
      { method: 'POST' as const, path: '/leagues/123/join', name: 'Join league', body: { driverId: 'test-driver' } },
      { method: 'POST' as const, path: '/teams/123/join', name: 'Join team', body: { driverId: 'test-driver' } },
    ];

    console.log(`\n[API SMOKE] Testing ${endpoints.length} POST endpoints...`);

    for (const endpoint of endpoints) {
      await testEndpoint(request, endpoint);
    }

    // Assert no presenter errors
    const presenterErrors = testResults.filter(r => r.hasPresenterError);
    expect(presenterErrors.length).toBe(0);
  });

  test('parameterized endpoints handle missing IDs gracefully', async ({ request }) => {
    const endpoints = [
      { method: 'GET' as const, path: '/races/non-existent-id', name: 'Get non-existent race' },
      { method: 'GET' as const, path: '/races/non-existent-id/results', name: 'Get non-existent race results' },
      { method: 'GET' as const, path: '/leagues/non-existent-id', name: 'Get non-existent league' },
      { method: 'GET' as const, path: '/teams/non-existent-id', name: 'Get non-existent team' },
      { method: 'GET' as const, path: '/drivers/non-existent-id', name: 'Get non-existent driver' },
      { method: 'GET' as const, path: '/media/avatar/non-existent-id', name: 'Get non-existent avatar' },
    ];

    console.log(`\n[API SMOKE] Testing ${endpoints.length} parameterized endpoints with invalid IDs...`);

    for (const endpoint of endpoints) {
      await testEndpoint(request, endpoint);
    }

    // Assert no presenter errors
    const presenterErrors = testResults.filter(r => r.hasPresenterError);
    expect(presenterErrors.length).toBe(0);
  });

  async function testEndpoint(
    request: import('@playwright/test').APIRequestContext,
    endpoint: { method: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH'; path: string; name?: string; body?: unknown }
  ): Promise<void> {
    const startTime = Date.now();
    const fullUrl = `${API_BASE_URL}${endpoint.path}`;

    console.log(`\n[TEST] ${endpoint.method} ${endpoint.path} (${endpoint.name ?? 'Unknown'})`);

    try {
      let response;

      switch (endpoint.method) {
        case 'GET':
          response = await request.get(fullUrl);
          break;
        case 'POST':
          response = await request.post(fullUrl, { data: endpoint.body ?? {} });
          break;
        case 'PUT':
          response = await request.put(fullUrl, { data: endpoint.body ?? {} });
          break;
        case 'DELETE':
          response = await request.delete(fullUrl);
          break;
        case 'PATCH':
          response = await request.patch(fullUrl, { data: endpoint.body ?? {} });
          break;
        default:
          throw new Error(`Unsupported method: ${endpoint.method}`);
      }

      const responseTime = Date.now() - startTime;
      const status = response.status();

      // Read the body once as text, then try to parse it as JSON
      const bodyText = await response.text().catch(() => '');
      let body: any = null;
      try {
        body = JSON.parse(bodyText);
      } catch {
        // Non-JSON response; keep the raw text
      }

      // Check for presenter errors (case-insensitive; JSON error fields are
      // also covered because they are embedded in the raw body text)
      const hasPresenterError =
        bodyText.toLowerCase().includes('presenter not presented') ||
        (typeof body?.message === 'string' && body.message.includes('Presenter not presented')) ||
        (typeof body?.error === 'string' && body.error.includes('Presenter not presented'));

      const success = status < 400 && !hasPresenterError;

      const result: EndpointTestResult = {
        endpoint: endpoint.path,
        method: endpoint.method,
        status,
        success,
        hasPresenterError,
        responseTime,
        response: body ?? bodyText.substring(0, 200),
      };

      if (!success) {
        result.error = body?.message || bodyText.substring(0, 200);
      }

      testResults.push(result);

      if (hasPresenterError) {
        console.log(`  ❌ PRESENTER ERROR: ${status} - ${body?.message || bodyText.substring(0, 100)}`);
      } else if (success) {
        console.log(`  ✅ ${status} (${responseTime}ms)`);
      } else {
        console.log(`  ⚠️ ${status} (${responseTime}ms) - ${body?.message || 'Error'}`);
      }

    } catch (error: unknown) {
      const responseTime = Date.now() - startTime;
      const errorString = error instanceof Error ? error.message : String(error);

      const result: EndpointTestResult = {
        endpoint: endpoint.path,
        method: endpoint.method,
        status: 0,
        success: false,
        hasPresenterError: false,
        responseTime,
        error: errorString,
      };

      // The thrown exception itself may carry a presenter error
      if (errorString.includes('Presenter not presented')) {
        result.hasPresenterError = true;
        console.log(`  ❌ PRESENTER ERROR (exception): ${errorString}`);
      } else {
        console.log(`  ❌ EXCEPTION: ${errorString}`);
      }

      testResults.push(result);
    }
  }

  async function generateReport(): Promise<void> {
    const summary = {
      total: testResults.length,
      success: testResults.filter(r => r.success).length,
      failed: testResults.filter(r => !r.success).length,
      presenterErrors: testResults.filter(r => r.hasPresenterError).length,
      avgResponseTime: testResults.length > 0
        ? testResults.reduce((sum, r) => sum + r.responseTime, 0) / testResults.length
        : 0,
    };

    const report: TestReport = {
      timestamp: new Date().toISOString(),
      summary,
      results: testResults,
      failures: testResults.filter(r => !r.success),
    };

    // Write JSON report at the repository root
    const jsonPath = path.join(__dirname, '../../../api-smoke-report.json');
    await fs.writeFile(jsonPath, JSON.stringify(report, null, 2));

    // Write Markdown report
    const mdPath = path.join(__dirname, '../../../api-smoke-report.md');
    let md = `# API Smoke Test Report\n\n`;
    md += `**Generated:** ${report.timestamp}\n`;
    md += `**API Base URL:** ${API_BASE_URL}\n\n`;

    md += `## Summary\n\n`;
    md += `- **Total Endpoints:** ${summary.total}\n`;
    md += `- **✅ Success:** ${summary.success}\n`;
    md += `- **❌ Failed:** ${summary.failed}\n`;
    md += `- **⚠️ Presenter Errors:** ${summary.presenterErrors}\n`;
    md += `- **Avg Response Time:** ${summary.avgResponseTime.toFixed(2)}ms\n\n`;

    if (summary.presenterErrors > 0) {
      md += `## Presenter Errors\n\n`;
      const presenterFailures = testResults.filter(r => r.hasPresenterError);
      presenterFailures.forEach((r, i) => {
        md += `${i + 1}. **${r.method} ${r.endpoint}**\n`;
        md += `   - Status: ${r.status}\n`;
        md += `   - Error: ${r.error || 'No error message'}\n\n`;
      });
    }

    // Failures that are not presenter errors
    if (summary.failed > summary.presenterErrors) {
      md += `## Other Failures\n\n`;
      const otherFailures = testResults.filter(r => !r.success && !r.hasPresenterError);
      otherFailures.forEach((r, i) => {
        md += `${i + 1}. **${r.method} ${r.endpoint}**\n`;
        md += `   - Status: ${r.status}\n`;
        md += `   - Error: ${r.error || 'No error message'}\n\n`;
      });
    }

    await fs.writeFile(mdPath, md);

    console.log(`\n📊 Reports generated:`);
    console.log(`  JSON: ${jsonPath}`);
    console.log(`  Markdown: ${mdPath}`);
    console.log(`\nSummary: ${summary.success}/${summary.total} passed, ${summary.presenterErrors} presenter errors`);
  }
});