
Docker Setup for GridPilot

This document describes the Docker setup for local development and production deployment of GridPilot.

Quick Start

Development

Start all services with hot-reloading:

npm run docker:dev:build

This will:

  • Start PostgreSQL database on port 5432
  • Start API on port 3001 (container port 3000, debugger 9229)
  • Start Website on port 3000
  • Enable hot-reloading for both apps

Access:

  • Website: http://localhost:3000
  • API: http://localhost:3001

Production

Start all services in production mode:

npm run docker:prod:build

This will:

  • Build optimized Docker images
  • Start PostgreSQL, Redis, API, Website, and Nginx
  • Enable health checks, auto-restart, and resource limits
  • Configure caching and performance optimizations

Access:

  • Website and API via Nginx on ports 80 (HTTP) and 443 (HTTPS)

Available Commands

Development

  • npm run docker:dev - Start dev environment (alias of docker:dev:up)
  • npm run docker:dev:up - Start dev environment
  • npm run docker:dev:postgres - Start dev environment with GRIDPILOT_API_PERSISTENCE=postgres
  • npm run docker:dev:inmemory - Start dev environment with GRIDPILOT_API_PERSISTENCE=inmemory
  • npm run docker:dev:build - Rebuild and start
  • npm run docker:dev:restart - Restart services
  • npm run docker:dev:ps - Show service status
  • npm run docker:dev:down - Stop services
  • npm run docker:dev:logs - View logs
  • npm run docker:dev:clean - Stop and remove volumes

Production

  • npm run docker:prod - Start prod environment
  • npm run docker:prod:build - Rebuild and start
  • npm run docker:prod:down - Stop services
  • npm run docker:prod:logs - View logs
  • npm run docker:prod:clean - Stop and remove volumes

Testing (Docker)

The goal of Docker-backed tests is to catch wiring issues between Website ↔ API (wrong hostnames/ports/env vars, missing CORS for credentialed requests, etc.) in a deterministic environment.
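One of the wiring issues these tests catch is CORS for credentialed requests: when the browser sends cookies, the API must echo the exact request origin (a wildcard * is rejected by browsers) and set Access-Control-Allow-Credentials. A minimal sketch of that rule as a pure function — the function name and shape are illustrative, not taken from the GridPilot codebase:

```typescript
// Build CORS response headers for a credentialed request.
// With credentials, browsers reject "*" — the exact origin must be echoed.
function corsHeadersFor(
  requestOrigin: string,
  allowedOrigins: string[],
): Record<string, string> {
  if (!allowedOrigins.includes(requestOrigin)) {
    // Origin not allowed: send no CORS headers; the browser blocks the response.
    return {};
  }
  return {
    'Access-Control-Allow-Origin': requestOrigin, // exact origin, never "*"
    'Access-Control-Allow-Credentials': 'true',   // required for cookie-bearing requests
    Vary: 'Origin',                               // caches must key on the origin
  };
}
```

A misconfigured allowed-origin list (e.g. the container hostname instead of the host-mapped port) is exactly the kind of mismatch these Docker-backed tests surface.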

  • npm run test:docker:website - Start API/DB in Docker, run website locally via Playwright, and execute e2e tests.
    • Uses docker-compose.test.yml for API and PostgreSQL.
    • Playwright starts the website locally via webServer config (not in Docker).
    • Tests run against http://localhost:3000 (website) talking to http://localhost:3101 (API).
    • Validates that pages render, middleware works, and API connections succeed.

Important: The website runs locally (not in Docker) to avoid Next.js SWC/compilation issues in containers.

Supporting scripts:

  • npm run docker:test:deps - Verify monorepo dependencies are installed.
  • npm run docker:test:up - Start API and PostgreSQL containers.
  • npm run docker:test:wait - Wait for API health check at http://localhost:3101/health.
  • npm run docker:test:down - Stop containers and clean up.
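The docker:test:wait step is essentially a polling loop against the health endpoint. A hedged sketch of that loop — the attempt count and interval are assumptions, not the script's actual values, and the probe is injectable so the loop itself can be exercised without a live API:

```typescript
// Poll until `check` resolves true, or attempts run out.
async function waitForHealth(
  check: () => Promise<boolean>,
  attempts = 30,
  delayMs = 1000,
): Promise<boolean> {
  for (let i = 0; i < attempts; i++) {
    try {
      if (await check()) return true;
    } catch {
      // connection refused while the container boots — keep polling
    }
    if (i < attempts - 1) await new Promise((r) => setTimeout(r, delayMs));
  }
  return false; // caller should abort the test run
}

// Real probe: HTTP 200 from the health endpoint means the API is ready.
const probe = () =>
  fetch('http://localhost:3101/health').then((res) => res.ok);
```

Usage: `await waitForHealth(probe)` before handing control to Playwright.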

Environment Variables

"Mock vs Real" (Website & API)

There is no AUTOMATION_MODE equivalent for the Website/API runtime.

  • Website "mock vs real" is controlled purely by which API base URL you point it at via getWebsiteApiBaseUrl():

    • Browser calls use NEXT_PUBLIC_API_BASE_URL
    • Server/Next.js calls use API_BASE_URL ?? NEXT_PUBLIC_API_BASE_URL
  • API "mock vs real" is controlled by API runtime env:

    • Persistence: GRIDPILOT_API_PERSISTENCE=postgres|inmemory in AppModule
    • Optional bootstrapping: GRIDPILOT_API_BOOTSTRAP=0|1 in AppModule

Practical presets:

  • Website + real API (Docker dev): npm run docker:dev:build (Website 3000, API 3001, Postgres required).
    • Website browser → API: NEXT_PUBLIC_API_BASE_URL=http://localhost:3001
    • Website container → API container: API_BASE_URL=http://api:3000
  • Website + test API (Docker smoke): npm run test:docker:website (Website 3000 via Playwright, API 3101).

    • The test API and PostgreSQL services are defined in docker-compose.test.yml
    • Website browser → test API: NEXT_PUBLIC_API_BASE_URL=http://localhost:3101
    • Playwright webServer → test API: API_BASE_URL=http://localhost:3101

Website ↔ API Connection

The website talks to the API via fetch() in BaseApiClient, and it always includes cookies (credentials: 'include'). That means:

  • The browser must be pointed at a host-accessible API URL via NEXT_PUBLIC_API_BASE_URL
  • The server (Next.js / Node) must be pointed at a container-network API URL via API_BASE_URL (when running in Docker)
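The shape of those calls can be sketched as follows. This is an illustrative stand-in for BaseApiClient, not its actual code:

```typescript
// Minimal stand-in for BaseApiClient: joins the resolved base URL with a
// path and always sends cookies along with the request.
class ApiClient {
  constructor(private readonly baseUrl: string) {}

  url(path: string): string {
    // Normalize so "base/" + "/path" does not produce a double slash.
    return `${this.baseUrl.replace(/\/+$/, '')}/${path.replace(/^\/+/, '')}`;
  }

  async get<T>(path: string): Promise<T> {
    const res = await fetch(this.url(path), {
      credentials: 'include', // cookies ride along on every request
    });
    if (!res.ok) throw new Error(`API ${res.status} for ${path}`);
    return res.json() as Promise<T>;
  }
}
```

Because cookies are always included, every request is a credentialed request — which is why the CORS and base-URL wiring below matters.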

The single source of truth for "what base URL should I use?" is getWebsiteApiBaseUrl():

  • Browser: reads NEXT_PUBLIC_API_BASE_URL
  • Server: reads API_BASE_URL ?? NEXT_PUBLIC_API_BASE_URL
  • In Docker/CI/test: throws if missing (no silent localhost fallback)
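That resolution order can be modeled as a pure function. This is a simplified sketch of getWebsiteApiBaseUrl, with the Docker/CI/test detection reduced to a `strict` flag; the dev-only fallback value is an assumption for illustration:

```typescript
interface ApiEnv {
  NEXT_PUBLIC_API_BASE_URL?: string;
  API_BASE_URL?: string;
}

// Browser code only sees NEXT_PUBLIC_*; server code prefers API_BASE_URL.
function resolveApiBaseUrl(
  env: ApiEnv,
  opts: { isBrowser: boolean; strict: boolean }, // strict ≈ Docker/CI/test
): string {
  const url = opts.isBrowser
    ? env.NEXT_PUBLIC_API_BASE_URL
    : env.API_BASE_URL ?? env.NEXT_PUBLIC_API_BASE_URL;
  if (!url) {
    if (opts.strict) {
      // No silent localhost fallback in Docker/CI/test — fail loudly.
      throw new Error('API base URL is not configured');
    }
    return 'http://localhost:3001'; // assumed dev-only convenience default
  }
  return url;
}
```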

Dev Docker defaults (docker-compose.dev.yml)

  • Website: http://localhost:3000
  • API: http://localhost:3001 (maps to container api:3000)
  • NEXT_PUBLIC_API_BASE_URL=http://localhost:3001 (browser → host port)
  • API_BASE_URL=http://api:3000 (website container → api container)

Test Docker defaults (docker-compose.test.yml)

This stack is intended for deterministic smoke tests and uses different host ports to avoid colliding with docker:dev:

  • Website: http://localhost:3000 (started by Playwright webServer, not Docker)
  • API: http://localhost:3101 (maps to container api:3000)
  • PostgreSQL: localhost:5433 (maps to container 5432)
  • NEXT_PUBLIC_API_BASE_URL=http://localhost:3101 (browser → host port)
  • API_BASE_URL=http://localhost:3101 (Playwright webServer → host port)

Important:

  • The website runs locally via Playwright's webServer config to avoid Next.js SWC compilation issues in Docker.
  • The API is a real TypeORM/PostgreSQL server (not a mock) for testing actual database interactions.
  • Playwright automatically starts the website server before running tests.
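The webServer wiring looks roughly like this in playwright.website.config.ts. This is a sketch: the actual start command, timeout, and options in the repo may differ.

```typescript
// Sketch of the Playwright config's webServer section: Playwright boots the
// website itself, pointed at the Dockerized test API on port 3101.
const webServer = {
  command: 'npm run dev --workspace apps/website', // assumed start command
  url: 'http://localhost:3000',     // Playwright waits until this responds
  reuseExistingServer: !process.env.CI, // reuse a running dev server locally
  timeout: 120_000,                 // Next.js cold start can be slow
  env: {
    NEXT_PUBLIC_API_BASE_URL: 'http://localhost:3101', // browser → test API
    API_BASE_URL: 'http://localhost:3101',             // server → test API
  },
};
```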

Troubleshooting (Docker Tests)

  • Port conflicts: If docker:dev is running, use npm run docker:dev:down before npm run test:docker:website to avoid port conflicts (dev uses 3001, test uses 3101).
  • Website not starting: Playwright's webServer may fail if dependencies are missing. Run npm install first.
  • Cookie errors: The WebsiteAuthManager requires both url and path properties for cookies. Check Playwright version compatibility.
  • Docker volumes stuck: Run npm run docker:test:down (uses --remove-orphans + rm -f).
  • SWC compilation issues: If website fails to start in Docker, use the local webServer approach (already configured in playwright.website.config.ts).

API "Real vs In-Memory" Mode

The API can run in one of two persistence modes:

  • postgres: loads DatabaseModule (requires Postgres)
  • inmemory: does not load DatabaseModule (no Postgres required)

Control it with:

  • GRIDPILOT_API_PERSISTENCE=postgres|inmemory (defaults to postgres if DATABASE_URL is set, otherwise inmemory)
  • Optional: GRIDPILOT_API_BOOTSTRAP=0 to skip BootstrapModule
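The default-selection rule can be modeled as a small pure function. This is a sketch of the AppModule decision, not its actual code:

```typescript
type PersistenceMode = 'postgres' | 'inmemory';

// An explicit env var wins; otherwise the presence of DATABASE_URL selects
// postgres, and inmemory is the fallback.
function resolvePersistence(env: {
  GRIDPILOT_API_PERSISTENCE?: string;
  DATABASE_URL?: string;
}): PersistenceMode {
  const explicit = env.GRIDPILOT_API_PERSISTENCE;
  if (explicit === 'postgres' || explicit === 'inmemory') return explicit;
  return env.DATABASE_URL ? 'postgres' : 'inmemory';
}
```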

Development (.env.development)

Copy and customize as needed. Default values work out of the box.

Production (.env.production)

IMPORTANT: Update these before deploying:

  • Database credentials (POSTGRES_PASSWORD, DATABASE_URL)
  • Website/API URLs (NEXT_PUBLIC_API_BASE_URL, NEXT_PUBLIC_SITE_URL)
  • Vercel KV credentials (KV_REST_API_URL, KV_REST_API_TOKEN) (required for production email signups/rate limit)

Architecture

Development Setup

  • Hot-reloading enabled via volume mounts
  • Source code changes reflect immediately
  • Database persisted in named volume
  • Debug port exposed for API (9229)

Production Setup

  • Multi-stage builds for optimized images
  • Only production dependencies included
  • Nginx reverse proxy for both services
  • Health checks for all services
  • Auto-restart on failure

Docker Services

API (NestJS)

  • Dev: apps/api/Dockerfile.dev
  • Prod: apps/api/Dockerfile.prod
  • Port: 3000
  • Debug: 9229 (dev only)

Website (Next.js)

  • Dev: apps/website/Dockerfile.dev
  • Prod: apps/website/Dockerfile.prod
  • Port: 3000 (dev and prod)

Database (PostgreSQL)

  • Image: postgres:15-alpine
  • Port: 5432 (internal)
  • Data: Persisted in Docker volume
  • Optimized with performance tuning parameters

Redis (Production only)

  • Image: redis:7-alpine
  • Port: 6379 (internal)
  • Configured with:
    • LRU eviction policy
    • 512MB max memory
    • AOF persistence
    • Password protection

Nginx (Production only)

  • Reverse proxy for website + API
  • Features:
    • Rate limiting (API: 10r/s, General: 30r/s)
    • Security headers (XSS, CSP, Frame-Options)
    • Gzip compression
    • Static asset caching
    • Connection pooling
    • Request buffering
  • Port: 80, 443

Troubleshooting

Services won't start

# Clean everything and rebuild
npm run docker:dev:clean
npm run docker:dev:build

Hot-reloading not working

Check that volume mounts are correct in docker-compose.dev.yml

Database connection issues

Ensure DATABASE_URL in .env matches the database service configuration

Check logs

# All services
npm run docker:dev:logs

# Specific service
docker-compose -f docker-compose.dev.yml logs -f api
docker-compose -f docker-compose.dev.yml logs -f website
docker-compose -f docker-compose.dev.yml logs -f db

Database Migration for Media References

If you have existing seeded data with old URL formats (e.g., /api/avatar/{id}, /api/media/teams/{id}/logo), you need to migrate to the new MediaReference format.

Option 1: Migration Script (Preserve Data)

Run the migration script to convert old URLs to proper MediaReference objects:

# Test mode (dry run - shows what would change)
npm run migrate:media:test

# Execute migration (applies changes)
npm run migrate:media:exec

The script handles:

  • Driver avatars: /api/avatar/{id} → system-default (deterministic variant)
  • Team logos: /api/media/teams/{id}/logo → generated
  • League logos: /api/media/leagues/{id}/logo → generated
  • Unknown formats → none
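The conversion rules above can be sketched as a classifier. The MediaReference shape here is an assumption for illustration; the real entity may carry more fields:

```typescript
type MediaReference =
  | { type: 'system-default'; sourceId: string } // deterministic variant per id
  | { type: 'generated'; sourceId: string }
  | { type: 'none' };

// Map a legacy media URL to the new MediaReference format, mirroring the
// table above: avatars become system-default, team/league logos become
// generated, anything unrecognized becomes none.
function migrateMediaUrl(url: string): MediaReference {
  const avatar = url.match(/^\/api\/avatar\/([^/]+)$/);
  if (avatar) return { type: 'system-default', sourceId: avatar[1] };

  const logo = url.match(/^\/api\/media\/(?:teams|leagues)\/([^/]+)\/logo$/);
  if (logo) return { type: 'generated', sourceId: logo[1] };

  return { type: 'none' };
}
```

Running the migration in test mode first (migrate:media:test) is the safe way to confirm how each existing URL would be classified.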

Option 2: Wipe and Reseed (Clean Slate)

For development environments, you can wipe all data and start fresh:

# Stop services and remove volumes
npm run docker:dev:clean

# Rebuild and start fresh
npm run docker:dev:build

This will:

  • Delete all existing data
  • Run fresh seed with correct MediaReference format
  • No migration needed

When to Use Each Option

Use Migration Script when:

  • You have production data you want to preserve
  • You want to understand what changes will be made
  • You need a controlled, reversible process

Use Wipe and Reseed when:

  • You're in development/testing
  • You don't care about existing data
  • You want the fastest path to a clean state

Tips

  1. First time setup: Use docker:dev:build to ensure images are built
  2. Clean slate: Use docker:dev:clean to remove all data and start fresh
  3. Production testing: Test prod setup locally before deploying
  4. Database access: Use any PostgreSQL client with credentials from .env file
  5. Debugging: Attach debugger to port 9229 for API debugging

Production Deployment

Before deploying to production:

  1. Update .env.production with real credentials
  2. Configure SSL certificates in nginx/ssl/
  3. Update Nginx configuration for HTTPS
  4. Set proper domain names in environment variables
  5. Consider using Docker secrets for sensitive data

File Structure

.
├── docker-compose.dev.yml       # Development orchestration
├── docker-compose.prod.yml      # Production orchestration
├── .env.development             # Dev environment variables
├── .env.production              # Prod environment variables
├── apps/
│   ├── api/
│   │   ├── Dockerfile.dev       # API dev image
│   │   ├── Dockerfile.prod      # API prod image
│   │   └── .dockerignore
│   └── website/
│       ├── Dockerfile.dev       # Website dev image
│       ├── Dockerfile.prod      # Website prod image
│       └── .dockerignore
└── nginx/
    └── nginx.conf               # Nginx configuration