Docker Deployment Guide

This guide covers deploying MCP Client Tester with Docker and Docker Compose, from local development environments to production.

Architecture Overview

MCP Client Tester uses a multi-service Docker architecture:

graph TB
A[Caddy Reverse Proxy] --> B[Frontend - Astro]
A --> C[Backend - FastAPI]
A --> D[Documentation - Starlight]
A --> E[Reports - Static Files]
C --> F[PostgreSQL Database]
C --> G[Procrastinate Worker]
G --> F
H[External Caddy Network] --> A

Services:

  • Frontend: Astro-based web interface
  • Backend: FastAPI server with MCP implementation
  • Documentation: Starlight documentation site
  • Reports: Static file server for test reports
  • Database: PostgreSQL for session and task data
  • Worker: Procrastinate background task processor

Development Deployment

Prerequisites

  1. Install Docker & Docker Compose

    Terminal window
    # Install Docker
    curl -fsSL https://get.docker.com -o get-docker.sh
    sudo sh get-docker.sh
    # Install Docker Compose
    sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    sudo chmod +x /usr/local/bin/docker-compose
    # Add user to docker group
    sudo usermod -aG docker $USER
    newgrp docker
  2. Verify Installation

    Terminal window
    docker --version
    docker-compose --version
    # Test Docker
    docker run hello-world
  3. Setup External Network

    Terminal window
    # Create the external Caddy network
    docker network create caddy 2>/dev/null || echo "Network already exists"
    # Verify network creation
    docker network ls | grep caddy

Project Setup

  1. Clone Repository

    Terminal window
    git clone https://github.com/your-org/mcp-client-test.git
    cd mcp-client-test
  2. Configure Environment

    Terminal window
    # Copy example environment file
    cp .env.example .env
    # Edit configuration
    nano .env

    Development Configuration (.env):

    Terminal window
    # Project Configuration
    COMPOSE_PROJECT_NAME=mcp-client-tester
    DOMAIN=mcp-tester.local
    # Server Configuration
    MCP_SERVER_NAME=MCP Client Tester
    MCP_SERVER_VERSION=1.0.0
    MCP_LOG_LEVEL=DEBUG
    MCP_SESSION_TIMEOUT=3600
    # Database
    DATABASE_URL=sqlite:///app/data/mcp_tester.db
    PROCRASTINATE_DATABASE_URL=postgresql://procrastinate:procrastinate@procrastinate-db:5432/procrastinate
    # Frontend
    PUBLIC_DOMAIN=mcp-tester.local
    PUBLIC_API_URL=https://api.mcp-tester.local
    PUBLIC_WS_URL=wss://api.mcp-tester.local
    # Development
    ENVIRONMENT=development
    DEBUG=true
  3. Configure Local DNS

    Terminal window
    # Add entries to /etc/hosts (Linux/macOS)
    sudo tee -a /etc/hosts << EOF
    127.0.0.1 mcp-tester.local
    127.0.0.1 api.mcp-tester.local
    127.0.0.1 docs.mcp-tester.local
    127.0.0.1 reports.mcp-tester.local
    EOF
    # Or use localhost instead
    # sed -i 's/mcp-tester.local/localhost/g' .env
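
Once the host entries are in place, a quick lookup confirms they resolve locally (getent is available on most Linux systems; on macOS, use dscacheutil instead):

Terminal window
# Confirm the local hostnames resolve to 127.0.0.1
getent hosts mcp-tester.local api.mcp-tester.local docs.mcp-tester.local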

Building and Starting Services

  1. Build All Services

    Terminal window
    # Build all Docker images
    docker-compose build
    # Build with no cache (if needed)
    docker-compose build --no-cache
  2. Start Services

    Terminal window
    # Start all services in background
    docker-compose up -d
    # View logs during startup
    docker-compose logs -f
  3. Verify Deployment

    Terminal window
    # Check service status
    docker-compose ps
    # Should show all services as "Up" and healthy
  4. Test Access

    Terminal window
    # Test main application (-k accepts Caddy's locally-issued dev certificate)
    curl -kI https://mcp-tester.local
    # Test API
    curl -k https://api.mcp-tester.local/health
    # Test documentation
    curl -kI https://docs.mcp-tester.local

Production Deployment

Production Environment Configuration

Production .env file:

Terminal window
# Project Configuration
COMPOSE_PROJECT_NAME=mcp-client-tester-prod
DOMAIN=your-domain.com
# Server Configuration
MCP_SERVER_NAME=MCP Client Tester
MCP_SERVER_VERSION=1.0.0
MCP_LOG_LEVEL=INFO
MCP_SESSION_TIMEOUT=7200
# Database (use external PostgreSQL in production)
DATABASE_URL=postgresql://mcp_user:secure_password@db.your-domain.com:5432/mcp_tester
PROCRASTINATE_DATABASE_URL=postgresql://procrastinate_user:secure_password@db.your-domain.com:5432/procrastinate
# Credentials referenced by docker-compose.prod.yml
DB_USER=mcp_user
DB_PASSWORD=secure_password
ADMIN_PASSWORD_HASH=change-me # see "Generate password hash" below
# Security
SECRET_KEY=your-very-secure-secret-key-here
API_KEYS=prod_key_1,prod_key_2,admin_key_3
# Frontend
PUBLIC_DOMAIN=your-domain.com
PUBLIC_API_URL=https://api.your-domain.com
PUBLIC_WS_URL=wss://api.your-domain.com
# Production Settings
ENVIRONMENT=production
DEBUG=false
ENABLE_CORS=false
ALLOWED_ORIGINS=https://your-domain.com,https://api.your-domain.com
# Monitoring
SENTRY_DSN=https://your-sentry-dsn
ENABLE_METRICS=true
LOG_FORMAT=json
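
Secrets such as SECRET_KEY and the API keys should be generated rather than hand-written; one option, assuming openssl is installed:

Terminal window
# Generate random values for SECRET_KEY and API_KEYS entries
openssl rand -hex 32 # use as SECRET_KEY
openssl rand -hex 16 # use as one entry in API_KEYS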

Production Docker Compose

Create docker-compose.prod.yml:

name: ${COMPOSE_PROJECT_NAME}

networks:
  caddy:
    external: true
  internal:
    driver: bridge

volumes:
  postgres_data:
  mcp_data:
  static_files:
  redis_data: # used by the optional redis service below

services:
  # FastAPI Backend - Production
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
      target: production
    volumes:
      - mcp_data:/app/data
      - static_files:/app/static
    environment:
      - DOMAIN=${DOMAIN}
      - DATABASE_URL=${DATABASE_URL}
      - PROCRASTINATE_DATABASE_URL=${PROCRASTINATE_DATABASE_URL}
      - MCP_SERVER_NAME=${MCP_SERVER_NAME}
      - MCP_SERVER_VERSION=${MCP_SERVER_VERSION}
      - MCP_LOG_LEVEL=${MCP_LOG_LEVEL}
      - ENVIRONMENT=${ENVIRONMENT}
      - DEBUG=${DEBUG}
      - SECRET_KEY=${SECRET_KEY}
      - API_KEYS=${API_KEYS}
    expose:
      - 8000
    networks:
      - caddy
      - internal
    labels:
      caddy: api.${DOMAIN}
      caddy.@ws.0_header: Connection *Upgrade*
      caddy.@ws.1_header: Upgrade websocket
      caddy.0_reverse_proxy: "@ws {{upstreams 8000}}"
      caddy.1_reverse_proxy: "{{upstreams 8000}}"
    depends_on:
      - postgres
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 1G
          cpus: '0.5'
        reservations:
          memory: 512M
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  # PostgreSQL Database
  postgres:
    image: postgres:16-alpine
    environment:
      - POSTGRES_DB=mcp_tester
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_MULTIPLE_DATABASES=procrastinate
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./scripts/init-db.sh:/docker-entrypoint-initdb.d/init-db.sh:ro
    networks:
      - internal
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 2G
          cpus: '1.0'
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER}"]
      interval: 10s
      timeout: 5s
      retries: 5

  # Procrastinate Worker - Production
  procrastinate-worker:
    build:
      context: ./backend
      dockerfile: Dockerfile
      target: production
    command: /app/.venv/bin/python -m procrastinate worker
    volumes:
      - mcp_data:/app/data
    environment:
      - PROCRASTINATE_DATABASE_URL=${PROCRASTINATE_DATABASE_URL}
      - DATABASE_URL=${DATABASE_URL}
      - ENVIRONMENT=${ENVIRONMENT}
      - LOG_LEVEL=${MCP_LOG_LEVEL}
    networks:
      - internal
    depends_on:
      - postgres
    restart: unless-stopped
    deploy:
      replicas: 2
      resources:
        limits:
          memory: 512M
          cpus: '0.25'

  # Frontend - Production
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
      target: production
    environment:
      - PUBLIC_DOMAIN=${DOMAIN}
      - PUBLIC_API_URL=https://api.${DOMAIN}
      - PUBLIC_WS_URL=wss://api.${DOMAIN}
    networks:
      - caddy
    labels:
      caddy: ${DOMAIN}
      caddy.reverse_proxy: "{{upstreams 80}}"
      caddy.header: /static/* Cache-Control "public, max-age=31536000"
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 256M
          cpus: '0.25'

  # Documentation - Production
  docs:
    build:
      context: ./docs
      dockerfile: Dockerfile
      target: production
    environment:
      - PUBLIC_DOMAIN=${DOMAIN}
    networks:
      - caddy
    labels:
      caddy: docs.${DOMAIN}
      caddy.reverse_proxy: "{{upstreams 80}}"
      caddy.header: Cache-Control "public, max-age=3600"
    restart: unless-stopped

  # Reports Server
  reports:
    image: nginx:alpine
    volumes:
      - ./backend/reports:/usr/share/nginx/html:ro
      - ./nginx-reports.conf:/etc/nginx/conf.d/default.conf:ro
    networks:
      - caddy
    labels:
      caddy: reports.${DOMAIN}
      caddy.reverse_proxy: "{{upstreams 80}}"
      caddy.basicauth: /admin/*
      caddy.basicauth.admin: ${ADMIN_PASSWORD_HASH}
    restart: unless-stopped

  # Redis Cache (Optional)
  redis:
    image: redis:7-alpine
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data
    networks:
      - internal
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 256M
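
With the file saved, the production stack can be started by pointing Compose at it explicitly (file and env-file names as above):

Terminal window
# Start the production stack and confirm all services are up
docker-compose -f docker-compose.prod.yml --env-file .env up -d
docker-compose -f docker-compose.prod.yml ps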

SSL/TLS Configuration

For production, use proper SSL certificates:

Automatic SSL with Caddy:

# Add to backend service labels
caddy: api.${DOMAIN}
caddy.tls: your-email@domain.com # For Let's Encrypt

Or use wildcard certificates (note that Let's Encrypt issues wildcards only via the DNS-01 challenge, so Caddy must be built with a DNS provider module and configured for it):

caddy: "*.${DOMAIN}"
caddy.tls: your-email@domain.com

Security Hardening

Security Configuration:

# Add security headers to all services (YAML label keys must be unique, so each header field gets its own label)
caddy.header.Strict-Transport-Security: "max-age=31536000"
caddy.header.X-Content-Type-Options: nosniff
caddy.header.X-Frame-Options: DENY
caddy.header.X-XSS-Protection: "1; mode=block"
caddy.header.Referrer-Policy: strict-origin-when-cross-origin
caddy.header.Content-Security-Policy: "default-src 'self'"
# Add basic auth for admin endpoints
caddy.basicauth: /admin/*
caddy.basicauth.admin: ${ADMIN_PASSWORD_HASH}

Generate password hash:

Terminal window
# Using Caddy container
docker run --rm caddy:2-alpine caddy hash-password --plaintext "your-password"
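
After setting ADMIN_PASSWORD_HASH and restarting the reports service, the protection can be verified from the command line (hostname as configured earlier):

Terminal window
# Anonymous request should be rejected with 401...
curl -I https://reports.your-domain.com/admin/
# ...while the configured credentials are accepted
curl -I -u admin:your-password https://reports.your-domain.com/admin/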

Container Configuration

Multi-Stage Dockerfiles

Backend Dockerfile (Production Optimized):

# Backend Dockerfile
FROM python:3.13-slim AS base
# Install system dependencies
RUN apt-get update && apt-get install -y \
        curl \
    && rm -rf /var/lib/apt/lists/*
# Install uv
COPY --from=ghcr.io/astral-sh/uv:latest /uv /bin/uv
WORKDIR /app
# Development stage
FROM base AS development
# Copy dependency files
COPY pyproject.toml uv.lock ./
# Install dependencies including dev
RUN uv sync --frozen
# Copy source code
COPY . .
# Expose port and start dev server
EXPOSE 8000
CMD ["uv", "run", "uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]
# Production build stage
FROM base AS build
# Copy dependency files
COPY pyproject.toml uv.lock ./
# Install dependencies (production only)
RUN uv sync --frozen --no-dev
# Copy source code
COPY . .
# Production stage
FROM python:3.13-slim AS production
# Install system dependencies
RUN apt-get update && apt-get install -y \
        curl \
    && rm -rf /var/lib/apt/lists/* \
    && useradd --create-home --shell /bin/bash app
# Copy built application
COPY --from=build --chown=app:app /app /app
# Switch to non-root user
USER app
WORKDIR /app
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:8000/health || exit 1
# Expose port
EXPOSE 8000
# Start production server
CMD ["/app/.venv/bin/uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "4"]

Frontend Dockerfile (Production Optimized):

# Frontend Dockerfile
FROM node:20-alpine AS base
# Install dependencies only when needed
FROM base AS deps
RUN apk add --no-cache libc6-compat
WORKDIR /app
# Install dependencies
COPY package.json package-lock.json* ./
RUN npm ci --only=production && npm cache clean --force
# Development stage
FROM base AS development
WORKDIR /app
COPY package.json package-lock.json* ./
RUN npm ci
COPY . .
EXPOSE 4321
CMD ["npm", "run", "dev", "--", "--host", "0.0.0.0", "--port", "4321"]
# Production build stage
FROM base AS builder
WORKDIR /app
COPY package.json package-lock.json* ./
RUN npm ci
COPY . .
RUN npm run build
# Production runtime
FROM nginx:alpine AS production
# Copy built assets
COPY --from=builder /app/dist /usr/share/nginx/html
# Copy nginx configuration
COPY nginx.conf /etc/nginx/nginx.conf
# Add health check
# nginx:alpine ships BusyBox wget rather than curl
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD wget -q --spider http://localhost/ || exit 1
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Monitoring & Logging

Production Logging

Logging Configuration:

# Add to docker-compose.prod.yml
services:
  backend:
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

  # Centralized logging with Fluentd
  fluentd:
    image: fluentd:v1.16-1
    volumes:
      - ./fluentd.conf:/fluentd/etc/fluent.conf
      - /var/log:/var/log
    ports:
      - "24224:24224"
    environment:
      - FLUENTD_CONF=fluent.conf

Fluent.conf:

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match *.**>
  @type elasticsearch
  host elasticsearch
  port 9200
  logstash_format true
  logstash_prefix mcp-client-tester
</match>
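
A quick way to confirm Fluentd is accepting forwarded logs is to run a throwaway container with Docker's fluentd log driver pointed at the exposed port (adjust the address if Fluentd runs elsewhere):

Terminal window
# Send a test log line through the fluentd driver
docker run --rm --log-driver=fluentd \
  --log-opt fluentd-address=localhost:24224 \
  alpine echo "fluentd logging test"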

Health Checks

Comprehensive Health Checks:

services:
  backend:
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:8000/health || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  postgres:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER} -d ${DB_NAME}"]
      interval: 10s
      timeout: 5s
      retries: 5

  frontend:
    healthcheck:
      # nginx:alpine ships BusyBox wget rather than curl
      test: ["CMD-SHELL", "wget -q --spider http://localhost/ || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 3
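
Health state can be read back per container with docker inspect (the container name below is illustrative and depends on your Compose project name):

Terminal window
# Show the current health status and the most recent probe results
docker inspect --format '{{ .State.Health.Status }}' mcp-client-tester-prod-backend-1
docker inspect --format '{{ json .State.Health.Log }}' mcp-client-tester-prod-backend-1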

Scaling & High Availability

Horizontal Scaling

Scale Backend Services:

Terminal window
# Scale backend to 3 replicas
docker-compose up -d --scale backend=3
# Scale workers
docker-compose up -d --scale procrastinate-worker=5
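
Scaled services must not set container_name or bind fixed host ports, or the extra replicas will fail to start. The replica count can be confirmed afterwards:

Terminal window
# List running replicas per service
docker-compose ps backend
docker-compose ps procrastinate-worker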

Load Balancer Configuration:

# Caddy automatically load balances upstreams
labels:
  caddy: api.${DOMAIN}
  caddy.reverse_proxy: "{{upstreams 8000}}" # Auto load balancing
  caddy.reverse_proxy.lb_policy: least_conn # Load balancing algorithm (subdirective of reverse_proxy)

Database High Availability

PostgreSQL with Replication:

# Primary database
postgres-primary:
  image: postgres:16-alpine
  environment:
    - POSTGRES_PASSWORD=${DB_PASSWORD} # required by the postgres image
    - POSTGRES_REPLICATION_USER=replicator
    - POSTGRES_REPLICATION_PASSWORD=replicator_password
  command: |
    postgres -c wal_level=replica
             -c max_wal_senders=3
             -c max_replication_slots=3

# Read replica
postgres-replica:
  image: postgres:16-alpine
  environment:
    - PGUSER=postgres
    - POSTGRES_PASSWORD=${DB_PASSWORD}
    - POSTGRES_MASTER_SERVICE=postgres-primary
    - POSTGRES_REPLICATION_USER=replicator
    - POSTGRES_REPLICATION_PASSWORD=replicator_password
  command: |
    bash -c "
      until pg_basebackup -h postgres-primary -D /var/lib/postgresql/data -U replicator -W; do
        echo 'Waiting for primary to be available...'
        sleep 1s
      done
      echo 'host replication replicator 0.0.0.0/0 md5' >> /var/lib/postgresql/data/pg_hba.conf
      postgres -c hot_standby=on"
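
Whether the replica actually came up in standby mode can be checked by querying each instance (service names as above; pg_is_in_recovery() returns t on a standby):

Terminal window
# Expect 'f' on the primary and 't' on the replica
docker-compose exec postgres-primary psql -U postgres -c "SELECT pg_is_in_recovery();"
docker-compose exec postgres-replica psql -U postgres -c "SELECT pg_is_in_recovery();"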

Deployment Scripts

Automated Deployment

Deploy Script (deploy.sh):

#!/bin/bash
set -e

# Configuration
ENVIRONMENT=${1:-production}
COMPOSE_FILE="docker-compose.${ENVIRONMENT}.yml"

echo "Deploying MCP Client Tester - ${ENVIRONMENT}"

# Verify prerequisites
command -v docker >/dev/null 2>&1 || { echo "Docker is required but not installed."; exit 1; }
command -v docker-compose >/dev/null 2>&1 || { echo "Docker Compose is required but not installed."; exit 1; }

# Load environment variables
if [ -f ".env.${ENVIRONMENT}" ]; then
  source ".env.${ENVIRONMENT}"
else
  echo "Environment file .env.${ENVIRONMENT} not found"
  exit 1
fi

# Create external network if it doesn't exist
docker network create caddy 2>/dev/null || echo "Caddy network already exists"

# Pre-deployment checks
echo "Running pre-deployment checks..."
docker-compose -f "${COMPOSE_FILE}" config --quiet

# Pull latest images
echo "Pulling latest images..."
docker-compose -f "${COMPOSE_FILE}" pull

# Build services
echo "Building services..."
docker-compose -f "${COMPOSE_FILE}" build

# Deploy (up -d recreates only changed containers)
echo "Starting deployment..."
docker-compose -f "${COMPOSE_FILE}" up -d

# Wait for health checks
echo "Waiting for services to be healthy..."
timeout 120s bash -c '
  while [ "$(docker-compose -f '"${COMPOSE_FILE}"' ps -q | xargs docker inspect -f "{{ .State.Health.Status }}" | grep -c healthy)" -ne "$(docker-compose -f '"${COMPOSE_FILE}"' ps -q | wc -l)" ]; do
    echo "Waiting for all services to be healthy..."
    sleep 5
  done'

# Verify deployment
echo "Verifying deployment..."
curl -f "https://api.${DOMAIN}/health" || { echo "Health check failed"; exit 1; }

echo "Deployment completed successfully!"

# Clean up old images
docker image prune -f

echo "MCP Client Tester deployed to ${ENVIRONMENT}"

Make script executable:

Terminal window
chmod +x deploy.sh
# Deploy to production
./deploy.sh production

Backup Script

Database Backup (backup.sh):

#!/bin/bash
BACKUP_DIR="/backups/mcp-client-tester"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
mkdir -p ${BACKUP_DIR}
# Backup PostgreSQL
docker-compose exec -T postgres pg_dump -U ${DB_USER} mcp_tester | gzip > ${BACKUP_DIR}/postgres_${TIMESTAMP}.sql.gz
# Backup application data
docker run --rm -v mcp-client-tester_mcp_data:/data -v ${BACKUP_DIR}:/backup alpine tar czf /backup/data_${TIMESTAMP}.tar.gz -C /data .
# Clean old backups (keep last 7 days)
find ${BACKUP_DIR} -name "*.gz" -mtime +7 -delete
echo "Backup completed: ${BACKUP_DIR}"

Troubleshooting Deployment

Common Issues

  1. Services Won’t Start

    Terminal window
    # Check service logs
    docker-compose logs backend
    # Check resource usage
    docker stats
    # Verify configuration
    docker-compose config --quiet
  2. Network Issues

    Terminal window
    # Recreate network
    docker network rm caddy
    docker network create caddy
    # Restart services
    docker-compose down && docker-compose up -d
  3. SSL Certificate Problems

    Terminal window
    # Reload Caddy config (Caddy renews certificates automatically; a reload re-triggers certificate management)
    docker-compose exec caddy caddy reload
    # Check certificate status
    openssl s_client -connect your-domain.com:443 -servername your-domain.com

Ready for production? Continue with Production Setup for advanced deployment strategies or review Environment Variables for configuration details.