Basic Usage Examples

This guide provides practical, step-by-step examples of common MCP Client Tester workflows. Whether you’re testing your first MCP server or debugging complex client interactions, these examples will help you get started quickly.

Example 1: Testing a Simple MCP Server

Let’s start with testing a basic MCP server that provides a few tools and resources.

  1. Start MCP Client Tester

    Terminal window
    cd mcp-client-test
    docker-compose up -d

    Wait for all services to be healthy:

    Terminal window
    docker-compose ps
  2. Create a Test Session

    Using the web interface (an equivalent API call is sketched after these steps):

    • Navigate to https://mcp-tester.local
    • Click “New Session”
    • Name: “Basic Server Test”
    • Transport: “HTTP”
    • Click “Create Session”
  3. Connect Your MCP Server

    Point your MCP server to the provided endpoint. For this example, let’s use a simple Python server:

    simple_mcp_server.py
    from fastmcp import FastMCP
    from pydantic import BaseModel

    app = FastMCP("Simple Test Server")

    class SearchRequest(BaseModel):
        query: str
        limit: int = 10

    @app.tool()
    def search_data(request: SearchRequest) -> dict:
        """Search for data in our simple database"""
        # Simulate a database search
        results = [
            {"id": 1, "title": "Sample Result 1", "content": "..."},
            {"id": 2, "title": "Sample Result 2", "content": "..."},
        ]
        return {
            "results": results[:request.limit],
            "total": len(results),
            "query": request.query,
        }

    @app.resource("file://{path}")
    def read_file(path: str) -> str:
        """Read a file from the local filesystem"""
        try:
            with open(path, "r") as f:
                return f.read()
        except FileNotFoundError:
            raise ValueError(f"File not found: {path}")

    if __name__ == "__main__":
        # FastMCP's HTTP transport serves the protocol at /mcp by default,
        # which is why step 5 targets http://localhost:8001/mcp.
        app.run(transport="http", host="0.0.0.0", port=8001)
  4. Run the Test Server

    Terminal window
    python simple_mcp_server.py
  5. Configure Connection

    In MCP Client Tester, configure the connection to your server:

    • Session endpoint: The URL provided when you created the session
    • Target server: http://localhost:8001/mcp
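
If you prefer to script the setup, the session from step 2 can also be created through the REST API (the same /api/v1/sessions endpoint used in Example 5). A minimal sketch; the exact fields in the response depend on your deployment:

import requests

resp = requests.post(
    "https://api.mcp-tester.local/api/v1/sessions",
    json={"name": "Basic Server Test", "transport": "http"},
)
session = resp.json()
# Expect the response to include the session id and the endpoint URL
# used in step 5 (field names may differ in your deployment).
print(session)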

Observing the Test Results

Once connected, you’ll see:

Initial Handshake

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "1.0.0",
    "capabilities": {},
    "clientInfo": {
      "name": "MCP Client Tester",
      "version": "1.0.0"
    }
  }
}

Server Response

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "1.0.0",
    "capabilities": {
      "tools": {},
      "resources": {}
    },
    "serverInfo": {
      "name": "Simple Test Server",
      "version": "1.0.0"
    }
  }
}

The web interface will show:

  • Connection Status: Active and healthy
  • Client Detection: Your server capabilities and version
  • Available Tools: search_data tool with parameters
  • Available Resources: File resource handler

Example 2: Interactive Tool Testing

Let’s test the tools your server provides interactively.

  1. Discover Available Tools

    In the web interface:

    • Go to the “Tools” tab
    • Click “Refresh Tools List”
    • You should see the search_data tool

    Or via API:

    Terminal window
    curl -X GET "https://api.mcp-tester.local/api/v1/sessions/{session_id}/tools"
  2. Test Tool Execution

    Use the interactive tool tester:

    • Select “search_data” tool
    • Enter parameters:
      {
        "query": "sample data",
        "limit": 5
      }
    • Click “Execute Tool”
  3. View Results

    The response will show:

    {
      "results": [
        {"id": 1, "title": "Sample Result 1", "content": "..."},
        {"id": 2, "title": "Sample Result 2", "content": "..."}
      ],
      "total": 2,
      "query": "sample data"
    }
  4. Test Error Conditions

    Try invalid parameters:

    {
      "query": "",
      "limit": -1
    }

    This should generate a validation error that you can observe in the protocol log.
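
The same valid and invalid calls can also be driven from a script rather than the UI. Below is a minimal sketch that POSTs raw JSON-RPC tools/call requests to the session endpoint, following the pattern from Example 6; the session URL is a placeholder to replace with your own:

import requests

session_url = "https://api.mcp-tester.local/mcp/session/your-session-id"  # placeholder

# Valid call: mirrors the interactive test above
valid = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "search_data",
               "arguments": {"query": "sample data", "limit": 5}},
}
print(requests.post(session_url, json=valid).json())

# Invalid call: should surface the same validation error seen in the protocol log
invalid = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "search_data",
               "arguments": {"query": "", "limit": -1}},
}
print(requests.post(session_url, json=invalid).json())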

Example 3: Resource Access Testing

Test how your server handles resource requests.

  1. Create Test Files

    Terminal window
    echo "This is test content" > /tmp/test.txt
    echo '{"name": "test", "value": 123}' > /tmp/data.json
  2. Test Resource Reading

    In the web interface:

    • Go to “Resources” tab
    • Enter URI: file:///tmp/test.txt
    • Click “Read Resource”

    Expected response:

    {
      "contents": [
        {
          "uri": "file:///tmp/test.txt",
          "mimeType": "text/plain",
          "text": "This is test content\n"
        }
      ]
    }
  3. Test Different Resource Types

    Try reading the JSON file:

    • URI: file:///tmp/data.json
    • Should return JSON content with appropriate MIME type
  4. Test Error Handling

    Try accessing a non-existent file:

    • URI: file:///tmp/nonexistent.txt
    • Should return appropriate error response
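
These checks can also be scripted. A short sketch using the resources/read JSON-RPC method from Example 6, covering the happy path, the JSON file, and the missing file in one loop (the session URL is a placeholder):

import requests

session_url = "https://api.mcp-tester.local/mcp/session/your-session-id"  # placeholder

for i, uri in enumerate(["file:///tmp/test.txt",
                         "file:///tmp/data.json",
                         "file:///tmp/nonexistent.txt"], start=1):
    request = {
        "jsonrpc": "2.0",
        "id": i,
        "method": "resources/read",
        "params": {"uri": uri},
    }
    # The last URI should come back as an error rather than contents
    print(uri, "->", requests.post(session_url, json=request).json())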

Example 4: Testing with Claude Desktop

Here’s how to test your MCP server with Claude Desktop.

  1. Create STDIO Session

    In MCP Client Tester:

    • Create new session with transport “STDIO”
    • Note the session command provided
  2. Configure Claude Desktop

    Edit your Claude Desktop configuration (claude_desktop_config.json; on macOS it typically lives under ~/Library/Application Support/Claude/, on Windows under %APPDATA%\Claude). Note that the --stdio and --session-id arguments below assume your server accepts them; the minimal server from Example 1 would need to be extended to parse these flags:

    {
      "mcpServers": {
        "test-server": {
          "command": "python",
          "args": [
            "/path/to/your/simple_mcp_server.py",
            "--stdio",
            "--session-id", "your-session-id"
          ]
        }
      }
    }
  3. Start Claude Desktop

    Launch Claude Desktop and verify the MCP server connection in the settings.

  4. Test in Conversation

    In Claude Desktop, try using your tools:

    Can you search for "sample data" using the search_data tool?

    Claude should:

    • Recognize the available tool
    • Call it with appropriate parameters
    • Display the results
  5. Monitor in MCP Client Tester

    Watch the real-time protocol messages in your test session:

    • See Claude’s tool discovery requests
    • Monitor tool execution calls
    • Observe response handling

Example 5: Performance Testing

Let’s test how your server performs under load.

  1. Create Performance Test Session

    Terminal window
    curl -X POST "https://api.mcp-tester.local/api/v1/sessions" \
      -H "Content-Type: application/json" \
      -d '{
        "name": "Performance Test",
        "transport": "http",
        "config": {
          "enable_metrics": true,
          "detailed_timing": true
        }
      }'
  2. Run Load Test

    Use the built-in load testing tool:

    Terminal window
    curl -X POST "https://api.mcp-tester.local/api/v1/test/load" \
      -H "Content-Type: application/json" \
      -d '{
        "session_id": "your-session-id",
        "test_config": {
          "duration_seconds": 60,
          "requests_per_second": 10,
          "tool_name": "search_data",
          "tool_args": {"query": "load test", "limit": 5}
        }
      }'
  3. Monitor Results

    Watch the performance metrics in real-time:

    • Average response time
    • Request success rate
    • Error frequency
    • Throughput statistics
  4. Analyze Performance Data

    Export the session data for detailed analysis:

    Terminal window
    curl -X GET "https://api.mcp-tester.local/api/v1/sessions/{session_id}/export?format=json" \
      --output performance_results.json

Performance Analysis

The results will show metrics like:

{
  "performance_summary": {
    "total_requests": 600,
    "successful_requests": 598,
    "failed_requests": 2,
    "success_rate": 99.67,
    "avg_response_time_ms": 45.2,
    "p95_response_time_ms": 89.1,
    "p99_response_time_ms": 156.7,
    "requests_per_second": 9.97,
    "errors": [
      {
        "error_type": "timeout",
        "count": 2,
        "percentage": 0.33
      }
    ]
  }
}
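
Once exported, the summary is straightforward to post-process, for example as a pass/fail gate in CI. A sketch, assuming the exported file contains the performance_summary object shown above:

import json

with open("performance_results.json") as f:
    data = json.load(f)

summary = data["performance_summary"]
print(f"success rate: {summary['success_rate']}%")
print(f"p95 latency:  {summary['p95_response_time_ms']} ms")

# Example thresholds; tune these to your server's targets
assert summary["success_rate"] >= 99.0, "success rate below threshold"
assert summary["p95_response_time_ms"] <= 100, "p95 latency above threshold"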

Example 6: Error Scenario Testing

Test how your server handles various error conditions.

  1. Test Invalid Tool Calls

    # Create test script for error scenarios
    import requests

    session_url = "https://api.mcp-tester.local/mcp/session/test-123"

    # Test 1: Invalid tool name
    invalid_tool_request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "nonexistent_tool",
            "arguments": {}
        }
    }
    response = requests.post(session_url, json=invalid_tool_request)
    print("Invalid tool response:", response.json())

    # Test 2: Invalid parameters
    invalid_params_request = {
        "jsonrpc": "2.0",
        "id": 2,
        "method": "tools/call",
        "params": {
            "name": "search_data",
            "arguments": {
                "query": 123,       # Should be a string
                "limit": "invalid"  # Should be an integer
            }
        }
    }
    response = requests.post(session_url, json=invalid_params_request)
    print("Invalid params response:", response.json())
  2. Test Resource Errors

    # Test resource not found
    resource_request = {
        "jsonrpc": "2.0",
        "id": 3,
        "method": "resources/read",
        "params": {
            "uri": "file:///nonexistent/path.txt"
        }
    }
    response = requests.post(session_url, json=resource_request)
    print("Resource not found:", response.json())
  3. Test Protocol Errors

    # Test malformed JSON-RPC
    malformed_request = {
        "jsonrpc": "1.0",  # Wrong version
        "method": "tools/list"
        # Missing required id field
    }
    response = requests.post(session_url, json=malformed_request)
    print("Malformed request:", response.json())

Example 7: Multi-Transport Testing

Test the same server across different transport protocols.

  1. Create Multiple Sessions

    import asyncio
    import aiohttp

    async def create_test_sessions():
        transports = ["stdio", "http", "sse", "http-streaming"]
        sessions = {}
        for transport in transports:
            session_data = {
                "name": f"Multi-transport Test - {transport}",
                "transport": transport,
                "config": {"enable_comparison": True}
            }
            async with aiohttp.ClientSession() as client:
                async with client.post(
                    "https://api.mcp-tester.local/api/v1/sessions",
                    json=session_data
                ) as resp:
                    sessions[transport] = await resp.json()
        return sessions
  2. Run Identical Tests

    Execute the same test suite across all transports:

    async def test_all_transports(sessions):
        # The test_* helpers are placeholders; Example 8 shows one possible
        # implementation of each.
        test_results = {}
        for transport, session in sessions.items():
            print(f"Testing {transport} transport...")
            # Test tool discovery
            tools_result = await test_tool_discovery(session)
            # Test tool execution
            execution_result = await test_tool_execution(session)
            # Test resource access
            resource_result = await test_resource_access(session)
            test_results[transport] = {
                "tools": tools_result,
                "execution": execution_result,
                "resources": resource_result
            }
        return test_results
  3. Compare Results

    Analyze differences between transports:

    def compare_transport_results(results):
        # The calculate_* helpers are placeholders for your own aggregation logic
        comparison = {}
        for transport, data in results.items():
            comparison[transport] = {
                "success_rate": calculate_success_rate(data),
                "avg_response_time": calculate_avg_response_time(data),
                "features_supported": count_supported_features(data)
            }
        return comparison
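
A sketch of how these pieces fit together; the test_* helpers referenced in step 2 are assumed stubs (Example 8 shows one way to implement them):

async def run_comparison():
    sessions = await create_test_sessions()
    results = await test_all_transports(sessions)
    print(compare_transport_results(results))

asyncio.run(run_comparison())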

Example 8: Automated Testing Script

Create a reusable test script for your MCP server.

#!/usr/bin/env python3
"""
Automated MCP Server Testing Script

Usage:
    python test_mcp_server.py --server-url http://localhost:8001 --tests basic,performance,errors
"""
import argparse
import asyncio
import json
from datetime import datetime

import aiohttp


class MCPServerTester:
    def __init__(self, tester_api_url, server_url):
        self.tester_api_url = tester_api_url
        self.server_url = server_url
        self.session = None

    async def create_session(self, name="Automated Test"):
        """Create a new test session"""
        session_data = {
            "name": f"{name} - {datetime.now().isoformat()}",
            "transport": "http",
            "config": {
                "timeout_seconds": 300,
                "enable_progress": True,
                "detailed_logging": True
            }
        }
        async with aiohttp.ClientSession() as client:
            async with client.post(
                f"{self.tester_api_url}/api/v1/sessions",
                json=session_data
            ) as resp:
                self.session = await resp.json()
                return self.session

    async def send_raw_request(self, request):
        """POST a raw JSON-RPC message to the session's MCP endpoint.

        Assumes the session object returned at creation exposes that URL
        under an "endpoint" key; adjust to match your deployment.
        """
        async with aiohttp.ClientSession() as client:
            async with client.post(self.session["endpoint"], json=request) as resp:
                return await resp.json()

    async def test_tool_discovery(self):
        """List the tools visible in the session (endpoint as in Example 2)."""
        async with aiohttp.ClientSession() as client:
            async with client.get(
                f"{self.tester_api_url}/api/v1/sessions/{self.session['id']}/tools"
            ) as resp:
                return await resp.json()

    async def test_tool_execution(self):
        """Call search_data via JSON-RPC, as in Example 6."""
        return await self.send_raw_request({
            "jsonrpc": "2.0",
            "id": 100,
            "method": "tools/call",
            "params": {
                "name": "search_data",
                "arguments": {"query": "automated test", "limit": 5}
            }
        })

    async def test_resource_access(self):
        """Read a file resource via JSON-RPC, as in Example 3."""
        return await self.send_raw_request({
            "jsonrpc": "2.0",
            "id": 101,
            "method": "resources/read",
            "params": {"uri": "file:///tmp/test.txt"}
        })

    async def run_basic_tests(self):
        """Run basic functionality tests"""
        print("Running basic functionality tests...")
        results = {}
        # Test 1: Tool discovery
        results['tool_discovery'] = await self.test_tool_discovery()
        # Test 2: Tool execution
        results['tool_execution'] = await self.test_tool_execution()
        # Test 3: Resource access
        results['resource_access'] = await self.test_resource_access()
        return results

    async def run_performance_tests(self):
        """Run performance tests"""
        print("Running performance tests...")
        load_test_config = {
            "session_id": self.session['id'],
            "test_config": {
                "duration_seconds": 30,
                "requests_per_second": 5,
                "tool_name": "search_data",
                "tool_args": {"query": "performance test", "limit": 10}
            }
        }
        async with aiohttp.ClientSession() as client:
            async with client.post(
                f"{self.tester_api_url}/api/v1/test/load",
                json=load_test_config
            ) as resp:
                return await resp.json()

    async def run_error_tests(self):
        """Run error handling tests"""
        print("Running error handling tests...")
        error_tests = [
            # Invalid tool name
            {
                "name": "invalid_tool",
                "request": {
                    "jsonrpc": "2.0",
                    "id": 1,
                    "method": "tools/call",
                    "params": {"name": "nonexistent", "arguments": {}}
                }
            },
            # Invalid parameters
            {
                "name": "invalid_params",
                "request": {
                    "jsonrpc": "2.0",
                    "id": 2,
                    "method": "tools/call",
                    "params": {
                        "name": "search_data",
                        "arguments": {"query": 123, "limit": "invalid"}
                    }
                }
            }
        ]
        results = {}
        for test in error_tests:
            result = await self.send_raw_request(test["request"])
            results[test["name"]] = result
        return results

    async def generate_report(self, all_results):
        """Generate a comprehensive test report"""
        # Export session data
        async with aiohttp.ClientSession() as client:
            async with client.get(
                f"{self.tester_api_url}/api/v1/sessions/{self.session['id']}/export?format=json"
            ) as resp:
                session_data = await resp.json()
        report = {
            "test_summary": {
                "session_id": self.session['id'],
                "timestamp": datetime.now().isoformat(),
                "server_url": self.server_url
            },
            "test_results": all_results,
            "session_data": session_data
        }
        # Save report
        report_filename = f"mcp_test_report_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json"
        with open(report_filename, 'w') as f:
            json.dump(report, f, indent=2)
        print(f"Test report saved to: {report_filename}")
        return report


async def main():
    parser = argparse.ArgumentParser(description="Automated MCP Server Testing")
    parser.add_argument("--tester-url", default="https://api.mcp-tester.local",
                        help="MCP Client Tester API URL")
    parser.add_argument("--server-url", required=True,
                        help="MCP Server URL to test")
    parser.add_argument("--tests", default="basic,performance,errors",
                        help="Comma-separated list of test suites to run")
    args = parser.parse_args()

    tester = MCPServerTester(args.tester_url, args.server_url)

    # Create test session
    await tester.create_session("Automated Test Suite")
    print(f"Created test session: {tester.session['id']}")

    # Run requested tests
    test_suites = args.tests.split(',')
    all_results = {}
    if 'basic' in test_suites:
        all_results['basic'] = await tester.run_basic_tests()
    if 'performance' in test_suites:
        all_results['performance'] = await tester.run_performance_tests()
    if 'errors' in test_suites:
        all_results['errors'] = await tester.run_error_tests()

    # Generate report (also saves it to disk)
    await tester.generate_report(all_results)

    print("\nTest Summary:")
    print(f"Session ID: {tester.session['id']}")
    print(f"Tests Run: {', '.join(test_suites)}")


if __name__ == "__main__":
    asyncio.run(main())

Running the Examples

To run these examples:

  1. Save the scripts to your local machine
  2. Install dependencies:
    Terminal window
    pip install aiohttp requests fastmcp
  3. Start MCP Client Tester:
    Terminal window
    docker-compose up -d
  4. Run the examples:
    Terminal window
    python test_mcp_server.py --server-url http://localhost:8001

Next Steps

Once you’re comfortable with these basic examples, you’re ready for more advanced testing. Continue with Custom Tools Testing to learn about testing complex MCP tools and workflows.