Docker Deployment Guide
This comprehensive guide covers deploying MCP Client Tester using Docker and Docker Compose, from development environments to production deployments.
Architecture Overview
MCP Client Tester uses a multi-service Docker architecture:
```mermaid
graph TB
    A[Caddy Reverse Proxy] --> B[Frontend - Astro]
    A --> C[Backend - FastAPI]
    A --> D[Documentation - Starlight]
    A --> E[Reports - Static Files]
    C --> F[PostgreSQL Database]
    C --> G[Procrastinate Worker]
    G --> F
    H[External Caddy Network] --> A
```

Services:
- Frontend: Astro-based web interface
- Backend: FastAPI server with MCP implementation
- Documentation: Starlight documentation site
- Reports: Static file server for test reports
- Database: PostgreSQL for session and task data
- Worker: Procrastinate background task processor
Development Deployment
Prerequisites
1. Install Docker & Docker Compose

   Linux:

   ```bash
   # Install Docker
   curl -fsSL https://get.docker.com -o get-docker.sh
   sudo sh get-docker.sh

   # Install Docker Compose
   sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
   sudo chmod +x /usr/local/bin/docker-compose

   # Add user to docker group
   sudo usermod -aG docker $USER
   newgrp docker
   ```

   macOS:

   ```bash
   # Install Docker Desktop
   brew install --cask docker

   # Start Docker Desktop application
   open /Applications/Docker.app
   ```

   Windows:

   ```powershell
   # Install Docker Desktop
   winget install Docker.DockerDesktop

   # Or download from https://www.docker.com/products/docker-desktop
   ```
2. Verify Installation

   ```bash
   docker --version
   docker-compose --version

   # Test Docker
   docker run hello-world
   ```
3. Set Up External Network

   ```bash
   # Create the external Caddy network
   docker network create caddy 2>/dev/null || echo "Network already exists"

   # Verify network creation
   docker network ls | grep caddy
   ```

   The caddy.* labels used throughout this guide are read by a reverse proxy attached to this network; if your host does not already run one, see the sketch after this list.
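The external caddy network is normally consumed by a shared reverse-proxy container. If one is not already running, a minimal sketch follows; the container name caddy-proxy is illustrative, and the image assumed here is lucaslorentz/caddy-docker-proxy, which matches the label syntax used later in this guide.

```bash
# Minimal sketch: run a shared caddy-docker-proxy instance on the external "caddy" network.
# Skip this if your host already provides a labeled reverse proxy; names and tags are illustrative.
docker volume create caddy_data
docker run -d \
  --name caddy-proxy \
  --network caddy \
  -p 80:80 -p 443:443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v caddy_data:/data \
  lucaslorentz/caddy-docker-proxy:latest
```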
Project Setup
1. Clone Repository

   ```bash
   git clone https://github.com/your-org/mcp-client-test.git
   cd mcp-client-test
   ```
2. Configure Environment

   ```bash
   # Copy example environment file
   cp .env.example .env

   # Edit configuration
   nano .env
   ```

   Development Configuration (.env):

   ```bash
   # Project Configuration
   COMPOSE_PROJECT_NAME=mcp-client-tester
   DOMAIN=mcp-tester.local

   # Server Configuration
   MCP_SERVER_NAME=MCP Client Tester
   MCP_SERVER_VERSION=1.0.0
   MCP_LOG_LEVEL=DEBUG
   MCP_SESSION_TIMEOUT=3600

   # Database
   DATABASE_URL=sqlite:///app/data/mcp_tester.db
   PROCRASTINATE_DATABASE_URL=postgresql://procrastinate:procrastinate@procrastinate-db:5432/procrastinate

   # Frontend
   PUBLIC_DOMAIN=mcp-tester.local
   PUBLIC_API_URL=https://api.mcp-tester.local
   PUBLIC_WS_URL=wss://api.mcp-tester.local

   # Development
   ENVIRONMENT=development
   DEBUG=true
   ```
3. Configure Local DNS

   ```bash
   # Add entries to /etc/hosts (Linux/macOS)
   sudo tee -a /etc/hosts << EOF
   127.0.0.1 mcp-tester.local
   127.0.0.1 api.mcp-tester.local
   127.0.0.1 docs.mcp-tester.local
   127.0.0.1 reports.mcp-tester.local
   EOF

   # Or use localhost instead
   # sed -i 's/mcp-tester.local/localhost/g' .env
   ```

   A quick resolution check is shown after this list.
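Before building anything, it is worth confirming the new host entries actually resolve. A quick check on Linux (on macOS, dscacheutil -q host -a name <hostname> gives the same answer):

```bash
# Confirm each local hostname resolves to 127.0.0.1 via /etc/hosts
for host in mcp-tester.local api.mcp-tester.local docs.mcp-tester.local reports.mcp-tester.local; do
  getent hosts "${host}"
done
```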
Building and Starting Services
1. Build All Services

   ```bash
   # Build all Docker images
   docker-compose build

   # Build with no cache (if needed)
   docker-compose build --no-cache
   ```
2. Start Services

   ```bash
   # Start all services in background
   docker-compose up -d

   # View logs during startup
   docker-compose logs -f
   ```
3. Verify Deployment

   ```bash
   # Check service status
   docker-compose ps

   # Should show all services as "Up" and healthy
   ```
4. Test Access

   ```bash
   # Test main application
   curl -I https://mcp-tester.local

   # Test API
   curl https://api.mcp-tester.local/health

   # Test documentation
   curl -I https://docs.mcp-tester.local
   ```

   A single loop that checks all four hostnames is shown after this list.
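If you prefer one smoke test over separate curls, a small loop works; the -k flag is included because a local Caddy instance typically serves an internally signed certificate for .local domains:

```bash
# Smoke-test every public hostname; prints OK/FAIL per host
for host in mcp-tester.local api.mcp-tester.local docs.mcp-tester.local reports.mcp-tester.local; do
  if curl -skIf "https://${host}" > /dev/null; then
    echo "OK   ${host}"
  else
    echo "FAIL ${host}"
  fi
done
```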
Production Deployment
Production Environment Configuration
Production .env file:
```bash
# Project Configuration
COMPOSE_PROJECT_NAME=mcp-client-tester-prod
DOMAIN=your-domain.com

# Server Configuration
MCP_SERVER_NAME=MCP Client Tester
MCP_SERVER_VERSION=1.0.0
MCP_LOG_LEVEL=INFO
MCP_SESSION_TIMEOUT=7200

# Database (Use external PostgreSQL in production)
DATABASE_URL=postgresql://mcp_user:secure_password@db.your-domain.com:5432/mcp_tester
PROCRASTINATE_DATABASE_URL=postgresql://procrastinate_user:secure_password@db.your-domain.com:5432/procrastinate

# Security
SECRET_KEY=your-very-secure-secret-key-here
API_KEYS=prod_key_1,prod_key_2,admin_key_3

# Frontend
PUBLIC_DOMAIN=your-domain.com
PUBLIC_API_URL=https://api.your-domain.com
PUBLIC_WS_URL=wss://api.your-domain.com

# Production Settings
ENVIRONMENT=production
DEBUG=false
ENABLE_CORS=false
ALLOWED_ORIGINS=https://your-domain.com,https://api.your-domain.com

# Monitoring
SENTRY_DSN=https://your-sentry-dsn
ENABLE_METRICS=true
LOG_FORMAT=json
```
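SECRET_KEY and the API_KEYS entries should be generated rather than hand-written; one common approach (any cryptographically secure generator is fine):

```bash
# Generate a strong secret key and API keys for the production .env
openssl rand -hex 32   # value for SECRET_KEY
openssl rand -hex 24   # one API key; repeat and join with commas for API_KEYS
```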
Production Docker Compose

Create docker-compose.prod.yml:
```yaml
name: ${COMPOSE_PROJECT_NAME}

networks:
  caddy:
    external: true
  internal:
    driver: bridge

volumes:
  postgres_data:
  mcp_data:
  static_files:
  redis_data: # required by the optional redis service below

services:
  # FastAPI Backend - Production
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
      target: production
    volumes:
      - mcp_data:/app/data
      - static_files:/app/static
    environment:
      - DOMAIN=${DOMAIN}
      - DATABASE_URL=${DATABASE_URL}
      - PROCRASTINATE_DATABASE_URL=${PROCRASTINATE_DATABASE_URL}
      - MCP_SERVER_NAME=${MCP_SERVER_NAME}
      - MCP_SERVER_VERSION=${MCP_SERVER_VERSION}
      - MCP_LOG_LEVEL=${MCP_LOG_LEVEL}
      - ENVIRONMENT=${ENVIRONMENT}
      - DEBUG=${DEBUG}
      - SECRET_KEY=${SECRET_KEY}
      - API_KEYS=${API_KEYS}
    expose:
      - 8000
    networks:
      - caddy
      - internal
    labels:
      caddy: api.${DOMAIN}
      caddy.@ws.0_header: Connection *Upgrade*
      caddy.@ws.1_header: Upgrade websocket
      caddy.0_reverse_proxy: "@ws {{upstreams 8000}}"
      caddy.1_reverse_proxy: "{{upstreams 8000}}"
    depends_on:
      - postgres
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 1G
          cpus: '0.5'
        reservations:
          memory: 512M
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  # PostgreSQL Database
  postgres:
    image: postgres:16-alpine
    environment:
      - POSTGRES_DB=mcp_tester
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_MULTIPLE_DATABASES=procrastinate
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./scripts/init-db.sh:/docker-entrypoint-initdb.d/init-db.sh:ro
    networks:
      - internal
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 2G
          cpus: '1.0'
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER}"]
      interval: 10s
      timeout: 5s
      retries: 5

  # Procrastinate Worker - Production
  procrastinate-worker:
    build:
      context: ./backend
      dockerfile: Dockerfile
      target: production
    command: /app/.venv/bin/python -m procrastinate worker
    volumes:
      - mcp_data:/app/data
    environment:
      - PROCRASTINATE_DATABASE_URL=${PROCRASTINATE_DATABASE_URL}
      - DATABASE_URL=${DATABASE_URL}
      - ENVIRONMENT=${ENVIRONMENT}
      - LOG_LEVEL=${MCP_LOG_LEVEL}
    networks:
      - internal
    depends_on:
      - postgres
    restart: unless-stopped
    deploy:
      replicas: 2
      resources:
        limits:
          memory: 512M
          cpus: '0.25'

  # Frontend - Production
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
      target: production
    environment:
      - PUBLIC_DOMAIN=${DOMAIN}
      - PUBLIC_API_URL=https://api.${DOMAIN}
      - PUBLIC_WS_URL=wss://api.${DOMAIN}
    networks:
      - caddy
    labels:
      caddy: ${DOMAIN}
      caddy.reverse_proxy: "{{upstreams 80}}"
      caddy.header: /static/* Cache-Control "public, max-age=31536000"
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 256M
          cpus: '0.25'

  # Documentation - Production
  docs:
    build:
      context: ./docs
      dockerfile: Dockerfile
      target: production
    environment:
      - PUBLIC_DOMAIN=${DOMAIN}
    networks:
      - caddy
    labels:
      caddy: docs.${DOMAIN}
      caddy.reverse_proxy: "{{upstreams 80}}"
      caddy.header: Cache-Control "public, max-age=3600"
    restart: unless-stopped

  # Reports Server
  reports:
    image: nginx:alpine
    volumes:
      - ./backend/reports:/usr/share/nginx/html:ro
      - ./nginx-reports.conf:/etc/nginx/conf.d/default.conf:ro
    networks:
      - caddy
    labels:
      caddy: reports.${DOMAIN}
      caddy.reverse_proxy: "{{upstreams 80}}"
      caddy.basicauth: /admin/*
      caddy.basicauth.admin: ${ADMIN_PASSWORD_HASH}
    restart: unless-stopped

  # Redis Cache (Optional)
  redis:
    image: redis:7-alpine
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data
    networks:
      - internal
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 256M
```
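The compose file above mounts ./scripts/init-db.sh and sets POSTGRES_MULTIPLE_DATABASES, but the stock postgres image does not read that variable, so the init script has to create the extra database itself. The repository's actual script may differ; a minimal sketch:

```bash
#!/bin/bash
# scripts/init-db.sh -- hypothetical sketch; executed once on first container start
# via /docker-entrypoint-initdb.d. Creates every database named in
# POSTGRES_MULTIPLE_DATABASES (comma-separated), which the stock image ignores.
set -e

for db in $(echo "${POSTGRES_MULTIPLE_DATABASES:-}" | tr ',' ' '); do
  echo "Creating database: ${db}"
  psql -v ON_ERROR_STOP=1 --username "${POSTGRES_USER}" <<EOSQL
CREATE DATABASE ${db};
GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${POSTGRES_USER};
EOSQL
done
```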
SSL/TLS Configuration

For production, use proper SSL certificates:
Automatic SSL with Caddy:
```yaml
# Add to backend service labels
caddy: api.${DOMAIN}
caddy.tls: your-email@domain.com # For Let's Encrypt
```

Or use wildcard certificates:
caddy: "*.${DOMAIN}"caddy.tls: your-email@domain.comMount custom certificates:
```yaml
# Add volume mounts
volumes:
  - ./certs:/etc/ssl/certs:ro
```
```yaml
# Configure in Caddy
caddy: ${DOMAIN}
caddy.tls: /etc/ssl/certs/domain.crt /etc/ssl/certs/domain.key
```

Security Hardening
Security Configuration:
```yaml
# Add security labels to all services
caddy.header: Strict-Transport-Security "max-age=31536000"
caddy.header: X-Content-Type-Options nosniff
caddy.header: X-Frame-Options DENY
caddy.header: X-XSS-Protection "1; mode=block"
caddy.header: Referrer-Policy strict-origin-when-cross-origin
caddy.header: Content-Security-Policy "default-src 'self'"
```
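After redeploying, confirm the headers are actually present in responses, for example:

```bash
# Spot-check the security headers on the live site (DOMAIN as set in the production .env)
curl -sI "https://${DOMAIN}" | grep -iE "strict-transport-security|x-frame-options|x-content-type-options|content-security-policy"
```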
```yaml
# Add basic auth for admin endpoints
caddy.basicauth: /admin/*
caddy.basicauth.admin: ${ADMIN_PASSWORD_HASH}
```

Generate password hash:
```bash
# Using Caddy container
docker run --rm caddy:2-alpine caddy hash-password --plaintext "your-password"
```

Container Configuration
Multi-Stage Dockerfiles
Backend Dockerfile (Production Optimized):
```dockerfile
# Backend Dockerfile
FROM python:3.13-slim AS base

# Install system dependencies
RUN apt-get update && apt-get install -y \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Install uv
COPY --from=ghcr.io/astral-sh/uv:latest /uv /bin/uv

WORKDIR /app

# Development stage
FROM base AS development

# Copy dependency files
COPY pyproject.toml uv.lock ./

# Install dependencies including dev
RUN uv sync --frozen

# Copy source code
COPY . .

# Expose port and start dev server
EXPOSE 8000
CMD ["uv", "run", "uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]

# Production build stage
FROM base AS build

# Copy dependency files
COPY pyproject.toml uv.lock ./

# Install dependencies (production only)
RUN uv sync --frozen --no-dev

# Copy source code
COPY . .

# Production stage
FROM python:3.13-slim AS production

# Install system dependencies
RUN apt-get update && apt-get install -y \
    curl \
    && rm -rf /var/lib/apt/lists/* \
    && useradd --create-home --shell /bin/bash app

# Copy built application
COPY --from=build --chown=app:app /app /app

# Switch to non-root user
USER app
WORKDIR /app

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:8000/health || exit 1

# Expose port
EXPOSE 8000

# Start production server
CMD ["/app/.venv/bin/uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "4"]
```
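To exercise the multi-stage build locally before wiring it into Compose, something like the following works; the image names are arbitrary:

```bash
# Build the development and production stages separately
docker build --target development -t mcp-backend:dev ./backend
docker build --target production -t mcp-backend:prod ./backend

# The production image is typically smaller, since dev dependencies are excluded
docker image ls | grep mcp-backend
```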
Frontend Dockerfile (Production Optimized):

```dockerfile
# Frontend Dockerfile
FROM node:20-alpine AS base

# Install dependencies only when needed
FROM base AS deps
RUN apk add --no-cache libc6-compat
WORKDIR /app

# Install dependencies
COPY package.json package-lock.json* ./
RUN npm ci --only=production && npm cache clean --force

# Development stage
FROM base AS development
WORKDIR /app
COPY package.json package-lock.json* ./
RUN npm ci

COPY . .

EXPOSE 4321
CMD ["npm", "run", "dev", "--", "--host", "0.0.0.0", "--port", "4321"]

# Production build stage
FROM base AS builder
WORKDIR /app
COPY package.json package-lock.json* ./
RUN npm ci

COPY . .
RUN npm run build

# Production runtime
FROM nginx:alpine AS production

# Copy built assets
COPY --from=builder /app/dist /usr/share/nginx/html

# Copy nginx configuration
COPY nginx.conf /etc/nginx/nginx.conf

# Add health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD curl -f http://localhost/ || exit 1

EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```
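The production stage copies an nginx.conf that is not shown in this guide. The repository's real file may differ; a minimal single-page-app style configuration would look roughly like this (written as a heredoc for convenience):

```bash
# Hypothetical minimal nginx.conf for the frontend image (adjust to the real file in the repo)
cat > frontend/nginx.conf <<'EOF'
events {}

http {
  include       /etc/nginx/mime.types;
  default_type  application/octet-stream;

  server {
    listen 80;
    root  /usr/share/nginx/html;
    index index.html;

    # Serve static assets directly, fall back to index.html for client-side routes
    location / {
      try_files $uri $uri/ /index.html;
    }
  }
}
EOF
```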
Monitoring & Logging

Production Logging
Logging Configuration:
```yaml
# Add to docker-compose.prod.yml
services:
  backend:
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

  # Centralized logging with Fluentd
  fluentd:
    image: fluentd:v1.16-1
    volumes:
      - ./fluentd.conf:/fluentd/etc/fluent.conf
      - /var/log:/var/log
    ports:
      - "24224:24224"
    environment:
      - FLUENTD_CONF=fluent.conf
```

fluentd.conf:
```
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match *.**>
  @type elasticsearch
  host elasticsearch
  port 9200
  logstash_format true
  logstash_prefix mcp-client-tester
</match>
```

Health Checks
Comprehensive Health Checks:
```yaml
services:
  backend:
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:8000/health || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  postgres:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER} -d ${DB_NAME}"]
      interval: 10s
      timeout: 5s
      retries: 5

  frontend:
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost/ || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 3
```
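When a health check flaps, the recorded probe history is often more useful than the service logs. It can be read straight from the container metadata; the container name below follows Compose's default project-service-index pattern and is illustrative:

```bash
# Show the recorded health-check probes for the backend container
# (pipe to `python3 -m json.tool` if jq is not installed)
docker inspect --format '{{json .State.Health}}' mcp-client-tester-backend-1 | jq .
```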
Scaling & High Availability

Horizontal Scaling
Scale Backend Services:
```bash
# Scale backend to 3 replicas
docker-compose up -d --scale backend=3

# Scale workers
docker-compose up -d --scale procrastinate-worker=5
```

Load Balancer Configuration:
```yaml
# Caddy automatically load balances upstreams
labels:
  caddy: api.${DOMAIN}
  caddy.reverse_proxy: "{{upstreams 8000}}" # Auto load balancing
  caddy.lb_policy: least_conn # Load balancing algorithm
```
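A quick way to confirm the scaled replicas are up and answering through the proxy; the commands assume the production compose file and the DOMAIN value from the environment:

```bash
# List the backend replicas, then fire a handful of requests through the proxy
docker-compose -f docker-compose.prod.yml ps backend
for i in $(seq 1 10); do
  curl -s -o /dev/null -w "%{http_code}\n" "https://api.${DOMAIN}/health"
done
```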
Database High Availability

PostgreSQL with Replication:
```yaml
# Primary database
postgres-primary:
  image: postgres:16-alpine
  environment:
    - POSTGRES_REPLICATION_USER=replicator
    - POSTGRES_REPLICATION_PASSWORD=replicator_password
  command: |
    postgres
    -c wal_level=replica
    -c max_wal_senders=3
    -c max_replication_slots=3

# Read replica
postgres-replica:
  image: postgres:16-alpine
  environment:
    - PGUSER=postgres
    - POSTGRES_PASSWORD=${DB_PASSWORD}
    - POSTGRES_MASTER_SERVICE=postgres-primary
    - POSTGRES_REPLICATION_USER=replicator
    - POSTGRES_REPLICATION_PASSWORD=replicator_password
  command: |
    bash -c "
    until pg_basebackup -h postgres-primary -D /var/lib/postgresql/data -U replicator -W; do
      echo 'Waiting for primary to be available...'
      sleep 1s
    done
    echo 'host replication replicator 0.0.0.0/0 md5' >> /var/lib/postgresql/data/pg_hba.conf
    postgres -c hot_standby=on"
```
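Once both containers are running, replication status can be checked from either side; the service names match the snippet above and the queries are standard PostgreSQL:

```bash
# On the primary: list connected replicas and their streaming state
docker-compose exec postgres-primary psql -U postgres -c "SELECT client_addr, state FROM pg_stat_replication;"

# On the replica: should return 't' while it is in standby mode
docker-compose exec postgres-replica psql -U postgres -c "SELECT pg_is_in_recovery();"
```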
Deployment Scripts

Automated Deployment
Deploy Script (deploy.sh):
```bash
#!/bin/bash
set -e

# Configuration
ENVIRONMENT=${1:-production}
COMPOSE_FILE="docker-compose.${ENVIRONMENT}.yml"

echo "Deploying MCP Client Tester - ${ENVIRONMENT}"

# Verify prerequisites
command -v docker >/dev/null 2>&1 || { echo "Docker is required but not installed."; exit 1; }
command -v docker-compose >/dev/null 2>&1 || { echo "Docker Compose is required but not installed."; exit 1; }

# Load environment variables
if [ -f ".env.${ENVIRONMENT}" ]; then
  source ".env.${ENVIRONMENT}"
else
  echo "Environment file .env.${ENVIRONMENT} not found"
  exit 1
fi

# Create external network if it doesn't exist
docker network create caddy 2>/dev/null || echo "Caddy network already exists"

# Pre-deployment checks
echo "Running pre-deployment checks..."
docker-compose -f ${COMPOSE_FILE} config --quiet

# Pull latest images
echo "Pulling latest images..."
docker-compose -f ${COMPOSE_FILE} pull

# Build services
echo "Building services..."
docker-compose -f ${COMPOSE_FILE} build

# Deploy with zero downtime
echo "Starting deployment..."

# Start new services
docker-compose -f ${COMPOSE_FILE} up -d

# Wait for health checks
echo "Waiting for services to be healthy..."
timeout 120s bash -c 'while [ "$(docker-compose -f '${COMPOSE_FILE}' ps -q | xargs docker inspect -f "{{ .State.Health.Status }}" | grep -c healthy)" -ne "$(docker-compose -f '${COMPOSE_FILE}' ps -q | wc -l)" ]; do
  echo "Waiting for all services to be healthy..."
  sleep 5
done'

# Verify deployment
echo "Verifying deployment..."
curl -f "https://api.${DOMAIN}/health" || { echo "Health check failed"; exit 1; }

echo "Deployment completed successfully!"

# Clean up old images
docker image prune -f

echo "MCP Client Tester deployed to ${ENVIRONMENT}"
```

Make script executable:
```bash
chmod +x deploy.sh

# Deploy to production
./deploy.sh production
```

Backup Script
Database Backup (backup.sh):
```bash
#!/bin/bash
BACKUP_DIR="/backups/mcp-client-tester"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)

mkdir -p ${BACKUP_DIR}

# Backup PostgreSQL
docker-compose exec -T postgres pg_dump -U ${DB_USER} mcp_tester | gzip > ${BACKUP_DIR}/postgres_${TIMESTAMP}.sql.gz

# Backup application data
docker run --rm -v mcp-client-tester_mcp_data:/data -v ${BACKUP_DIR}:/backup alpine tar czf /backup/data_${TIMESTAMP}.tar.gz -C /data .

# Clean old backups (keep last 7 days)
find ${BACKUP_DIR} -name "*.gz" -mtime +7 -delete

echo "Backup completed: ${BACKUP_DIR}"
```
Troubleshooting Deployment

Common Issues
1. Services Won’t Start

   ```bash
   # Check service logs
   docker-compose logs backend

   # Check resource usage
   docker stats

   # Verify configuration
   docker-compose config --quiet
   ```
2. Network Issues

   ```bash
   # Recreate network
   docker network rm caddy
   docker network create caddy

   # Restart services
   docker-compose down && docker-compose up -d
   ```
3. SSL Certificate Problems

   ```bash
   # Force certificate renewal (Caddy)
   docker-compose exec caddy caddy reload

   # Check certificate status
   openssl s_client -connect your-domain.com:443 -servername your-domain.com
   ```

   If certificates still fail, check the proxy's own logs; see the snippet after this list.
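ACME failures are usually spelled out in the Caddy proxy's own logs. Assuming the shared proxy container from the prerequisites sketch (adjust the name to whatever your host runs):

```bash
# Look for ACME/certificate errors in the reverse proxy's logs
docker logs --tail 100 caddy-proxy 2>&1 | grep -iE "acme|certificate|error"
```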
Ready for production? Continue with Production Setup for advanced deployment strategies or review Environment Variables for configuration details.