# Basic Usage Examples
This guide provides practical, step-by-step examples of common MCP Client Tester workflows. Whether you’re testing your first MCP server or debugging complex client interactions, these examples will help you get started quickly.
## Example 1: Testing a Simple MCP Server
Let’s start with testing a basic MCP server that provides a few tools and resources.
**1. Start MCP Client Tester**

```bash
cd mcp-client-test
docker-compose up -d
```

Wait for all services to be healthy:

```bash
docker-compose ps
```
**2. Create a Test Session**

Using the web interface:

- Navigate to `https://mcp-tester.local`
- Click "New Session"
- Name: "Basic Server Test"
- Transport: "HTTP"
- Click "Create Session"

Or via the API:

```bash
curl -X POST "https://api.mcp-tester.local/api/v1/sessions" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Basic Server Test",
    "transport": "http",
    "config": {
      "timeout_seconds": 300,
      "enable_progress": true
    }
  }'
```
**3. Connect Your MCP Server**

Point your MCP server to the provided endpoint. For this example, let's use a simple Python server:

```python
# simple_mcp_server.py
from fastmcp import FastMCP
from pydantic import BaseModel

app = FastMCP("Simple Test Server")

class SearchRequest(BaseModel):
    query: str
    limit: int = 10

@app.tool()
def search_data(request: SearchRequest) -> dict:
    """Search for data in our simple database"""
    # Simulate a database search
    results = [
        {"id": 1, "title": "Sample Result 1", "content": "..."},
        {"id": 2, "title": "Sample Result 2", "content": "..."},
    ]
    return {
        "results": results[:request.limit],
        "total": len(results),
        "query": request.query,
    }

@app.resource("file://{path}")
def read_file(path: str) -> str:
    """Read a file from the local filesystem"""
    try:
        with open(path, "r") as f:
            return f.read()
    except FileNotFoundError:
        raise ValueError(f"File not found: {path}")

if __name__ == "__main__":
    # Serve over HTTP; FastMCP exposes the JSON-RPC endpoint at /mcp by default
    app.run(transport="http", host="0.0.0.0", port=8001)
```
**4. Run the Test Server**

```bash
python simple_mcp_server.py
```
**5. Configure Connection**

In MCP Client Tester, configure the connection to your server:

- Session endpoint: the URL provided when you created the session
- Target server: `http://localhost:8001/mcp`
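Before wiring the server into the tester, you can optionally sanity-check that it answers JSON-RPC directly. This is a minimal sketch, not part of MCP Client Tester; it assumes the server speaks MCP's streamable HTTP transport at `/mcp` and responds to an `initialize` request:

```python
# sanity_check.py - optional smoke test against the example server
# Assumption: FastMCP's default streamable HTTP endpoint at /mcp.
import requests

init_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "smoke-test", "version": "0.1"},
    },
}

resp = requests.post(
    "http://localhost:8001/mcp",
    json=init_request,
    # Streamable HTTP servers expect clients to accept both content types
    headers={"Accept": "application/json, text/event-stream"},
)
print(resp.status_code)
print(resp.text)
```

A 200 response with an `initialize` result means the server is ready for the tester to connect.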
### Observing the Test Results

Once connected, the web interface will show:

- **Connection Status**: Active and healthy
- **Client Detection**: Your server's capabilities and version
- **Available Tools**: the `search_data` tool with its parameters
- **Available Resources**: the file resource handler
## Example 2: Interactive Tool Testing
Let’s test the tools your server provides interactively.
**1. Discover Available Tools**

In the web interface:

- Go to the "Tools" tab
- Click "Refresh Tools List"
- You should see the `search_data` tool
Or via the API:

```bash
curl -X GET "https://api.mcp-tester.local/api/v1/sessions/{session_id}/tools"
```
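The exact response shape depends on the tester's API, but for the example server you should see the tool's name, description, and input schema, roughly along these lines (field names follow MCP's `tools/list` result; treat this as illustrative):

```json
{
  "tools": [
    {
      "name": "search_data",
      "description": "Search for data in our simple database",
      "inputSchema": {
        "type": "object",
        "properties": {
          "query": {"type": "string"},
          "limit": {"type": "integer", "default": 10}
        },
        "required": ["query"]
      }
    }
  ]
}
```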
**2. Test Tool Execution**

Use the interactive tool tester:

- Select the `search_data` tool
- Enter parameters:

  ```json
  {
    "query": "sample data",
    "limit": 5
  }
  ```

- Click "Execute Tool"
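You can also drive the same tool call from a script using the raw JSON-RPC pattern shown in Example 6. A minimal sketch, where `session_url` is a placeholder for the endpoint issued when you created the session:

```python
# call_search_data.py - scripted version of the interactive tool call
import requests

# Placeholder: use the session endpoint URL from your own session
session_url = "https://api.mcp-tester.local/mcp/session/your-session-id"

tool_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_data",
        "arguments": {"query": "sample data", "limit": 5},
    },
}

response = requests.post(session_url, json=tool_call)
print(response.json())
```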
**3. View Results**

The response will show:

```json
{
  "results": [
    {"id": 1, "title": "Sample Result 1", "content": "..."},
    {"id": 2, "title": "Sample Result 2", "content": "..."}
  ],
  "total": 2,
  "query": "sample data"
}
```
**4. Test Error Conditions**

Try invalid parameters:

```json
{
  "query": "",
  "limit": -1
}
```

This should generate a validation error that you can observe in the protocol log.
## Example 3: Resource Access Testing
Test how your server handles resource requests.
**1. Create Test Files**

```bash
echo "This is test content" > /tmp/test.txt
echo '{"name": "test", "value": 123}' > /tmp/data.json
```
**2. Test Resource Reading**

In the web interface:

- Go to the "Resources" tab
- Enter URI: `file:///tmp/test.txt`
- Click "Read Resource"

Expected response:

```json
{
  "contents": [
    {
      "uri": "file:///tmp/test.txt",
      "mimeType": "text/plain",
      "text": "This is test content\n"
    }
  ]
}
```
**3. Test Different Resource Types**

Try reading the JSON file:

- URI: `file:///tmp/data.json`
- Should return JSON content with the appropriate MIME type
**4. Test Error Handling**

Try accessing a non-existent file:

- URI: `file:///tmp/nonexistent.txt`
- Should return an appropriate error response
## Example 4: Testing with Claude Desktop
Here’s how to test your MCP server with Claude Desktop.
**1. Create a STDIO Session**

In MCP Client Tester:

- Create a new session with transport "STDIO"
- Note the session command provided
**2. Configure Claude Desktop**

Edit your Claude Desktop configuration:

```json
{
  "mcpServers": {
    "test-server": {
      "command": "python",
      "args": [
        "/path/to/your/simple_mcp_server.py",
        "--stdio",
        "--session-id", "your-session-id"
      ]
    }
  }
}
```
**3. Start Claude Desktop**

Launch Claude Desktop and verify the MCP server connection in the settings.
**4. Test in Conversation**

In Claude Desktop, try using your tools:

> Can you search for "sample data" using the search_data tool?

Claude should:

- Recognize the available tool
- Call it with appropriate parameters
- Display the results
**5. Monitor in MCP Client Tester**

Watch the real-time protocol messages in your test session:

- See Claude's tool discovery requests
- Monitor tool execution calls
- Observe response handling
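What appears in the log is ordinary MCP JSON-RPC traffic. For the conversation above, expect a discovery request followed by a tool call, roughly like this (illustrative; ids and formatting will differ):

```json
{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
{"jsonrpc": "2.0", "id": 2, "method": "tools/call",
 "params": {"name": "search_data",
            "arguments": {"query": "sample data", "limit": 10}}}
```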
## Example 5: Performance Testing
Let’s test how your server performs under load.
**1. Create a Performance Test Session**

```bash
curl -X POST "https://api.mcp-tester.local/api/v1/sessions" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Performance Test",
    "transport": "http",
    "config": {
      "enable_metrics": true,
      "detailed_timing": true
    }
  }'
```
**2. Run a Load Test**

Use the built-in load testing tool:

```bash
curl -X POST "https://api.mcp-tester.local/api/v1/test/load" \
  -H "Content-Type: application/json" \
  -d '{
    "session_id": "your-session-id",
    "test_config": {
      "duration_seconds": 60,
      "requests_per_second": 10,
      "tool_name": "search_data",
      "tool_args": {"query": "load test", "limit": 5}
    }
  }'
```
**3. Monitor Results**

Watch the performance metrics in real time:

- Average response time
- Request success rate
- Error frequency
- Throughput statistics
**4. Analyze Performance Data**

Export the session data for detailed analysis:

```bash
curl -X GET "https://api.mcp-tester.local/api/v1/sessions/{session_id}/export?format=json" \
  --output performance_results.json
```
### Performance Analysis
The results will show metrics like:
{ "performance_summary": { "total_requests": 600, "successful_requests": 598, "failed_requests": 2, "success_rate": 99.67, "avg_response_time_ms": 45.2, "p95_response_time_ms": 89.1, "p99_response_time_ms": 156.7, "requests_per_second": 9.97, "errors": [ { "error_type": "timeout", "count": 2, "percentage": 0.33 } ] }}Example 6: Error Scenario Testing
## Example 6: Error Scenario Testing

Test how your server handles various error conditions.
**1. Test Invalid Tool Calls**

```python
# Create a test script for error scenarios
import requests

session_url = "https://api.mcp-tester.local/mcp/session/test-123"

# Test 1: Invalid tool name
invalid_tool_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "nonexistent_tool",
        "arguments": {},
    },
}

response = requests.post(session_url, json=invalid_tool_request)
print("Invalid tool response:", response.json())

# Test 2: Invalid parameters
invalid_params_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_data",
        "arguments": {
            "query": 123,        # Should be a string
            "limit": "invalid",  # Should be a number
        },
    },
}

response = requests.post(session_url, json=invalid_params_request)
print("Invalid params response:", response.json())
```
**2. Test Resource Errors**

```python
# Test resource not found
resource_request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "resources/read",
    "params": {
        "uri": "file:///nonexistent/path.txt",
    },
}

response = requests.post(session_url, json=resource_request)
print("Resource not found:", response.json())
```
**3. Test Protocol Errors**

```python
# Test malformed JSON-RPC
malformed_request = {
    "jsonrpc": "1.0",  # Wrong version
    "method": "tools/list",
    # Missing required id field
}

response = requests.post(session_url, json=malformed_request)
print("Malformed request:", response.json())
```
## Example 7: Multi-Transport Testing
Test the same server across different transport protocols.
**1. Create Multiple Sessions**

```python
import asyncio
import aiohttp

async def create_test_sessions():
    transports = ["stdio", "http", "sse", "http-streaming"]
    sessions = {}

    # Reuse one client session for all requests
    async with aiohttp.ClientSession() as client:
        for transport in transports:
            session_data = {
                "name": f"Multi-transport Test - {transport}",
                "transport": transport,
                "config": {"enable_comparison": True},
            }
            async with client.post(
                "https://api.mcp-tester.local/api/v1/sessions",
                json=session_data,
            ) as resp:
                sessions[transport] = await resp.json()

    return sessions
```
**2. Run Identical Tests**

Execute the same test suite across all transports (the `test_*` helpers are yours to define; a sketch of one follows below):

```python
async def test_all_transports(sessions):
    test_results = {}

    for transport, session in sessions.items():
        print(f"Testing {transport} transport...")

        # Test tool discovery
        tools_result = await test_tool_discovery(session)

        # Test tool execution
        execution_result = await test_tool_execution(session)

        # Test resource access
        resource_result = await test_resource_access(session)

        test_results[transport] = {
            "tools": tools_result,
            "execution": execution_result,
            "resources": resource_result,
        }

    return test_results
```
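As a concrete example of one of those helpers, here is a minimal sketch of `test_tool_discovery`. It assumes each session object carries an `endpoint_url` field pointing at its JSON-RPC session endpoint; that field name is hypothetical, so use whatever your session payload actually contains:

```python
import aiohttp

async def test_tool_discovery(session):
    """Send a raw tools/list request to the session endpoint.

    Hypothetical: assumes the session payload exposes its JSON-RPC
    endpoint under 'endpoint_url'; adjust to your actual schema.
    """
    request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

    async with aiohttp.ClientSession() as client:
        async with client.post(session["endpoint_url"], json=request) as resp:
            body = await resp.json()

    tools = body.get("result", {}).get("tools", [])
    return {"tool_count": len(tools), "tools": [t["name"] for t in tools]}
```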
**3. Compare Results**

Analyze differences between transports (again, the `calculate_*` helpers are placeholders; one possible implementation follows):

```python
def compare_transport_results(results):
    comparison = {}

    for transport, data in results.items():
        comparison[transport] = {
            "success_rate": calculate_success_rate(data),
            "avg_response_time": calculate_avg_response_time(data),
            "features_supported": count_supported_features(data),
        }

    return comparison
```
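For instance, `calculate_success_rate` might look like this. It is only a sketch: it assumes each per-transport result is a dict of sub-test results that record an `"error"` key on failure, which depends entirely on how you wrote the `test_*` helpers:

```python
def calculate_success_rate(data):
    """Percentage of sub-tests that completed without an error key.

    Assumption: each value in `data` is a dict and failed tests record
    an "error" entry; adapt to your helpers' actual return shape.
    """
    results = [r for r in data.values() if isinstance(r, dict)]
    if not results:
        return 0.0
    successes = sum(1 for r in results if "error" not in r)
    return successes / len(results) * 100
```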
## Example 8: Automated Testing Script
Create a reusable test script for your MCP server.
#!/usr/bin/env python3"""Automated MCP Server Testing Script
Usage: python test_mcp_server.py --server-url http://localhost:8001 --tests basic,performance,errors"""
import argparseimport asyncioimport aiohttpimport jsonfrom datetime import datetime
class MCPServerTester: def __init__(self, tester_api_url, server_url): self.tester_api_url = tester_api_url self.server_url = server_url self.session = None
async def create_session(self, name="Automated Test"): """Create a new test session""" session_data = { "name": f"{name} - {datetime.now().isoformat()}", "transport": "http", "config": { "timeout_seconds": 300, "enable_progress": True, "detailed_logging": True } }
async with aiohttp.ClientSession() as client: async with client.post( f"{self.tester_api_url}/api/v1/sessions", json=session_data ) as resp: self.session = await resp.json() return self.session
async def run_basic_tests(self): """Run basic functionality tests""" print("Running basic functionality tests...")
results = {}
# Test 1: Tool discovery results['tool_discovery'] = await self.test_tool_discovery()
# Test 2: Tool execution results['tool_execution'] = await self.test_tool_execution()
# Test 3: Resource access results['resource_access'] = await self.test_resource_access()
return results
async def run_performance_tests(self): """Run performance tests""" print("Running performance tests...")
load_test_config = { "session_id": self.session['id'], "test_config": { "duration_seconds": 30, "requests_per_second": 5, "tool_name": "search_data", "tool_args": {"query": "performance test", "limit": 10} } }
async with aiohttp.ClientSession() as client: async with client.post( f"{self.tester_api_url}/api/v1/test/load", json=load_test_config ) as resp: return await resp.json()
async def run_error_tests(self): """Run error handling tests""" print("Running error handling tests...")
error_tests = [ # Invalid tool name { "name": "invalid_tool", "request": { "jsonrpc": "2.0", "id": 1, "method": "tools/call", "params": {"name": "nonexistent", "arguments": {}} } }, # Invalid parameters { "name": "invalid_params", "request": { "jsonrpc": "2.0", "id": 2, "method": "tools/call", "params": { "name": "search_data", "arguments": {"query": 123, "limit": "invalid"} } } } ]
results = {} for test in error_tests: result = await self.send_raw_request(test["request"]) results[test["name"]] = result
return results
async def generate_report(self, all_results): """Generate a comprehensive test report"""
# Export session data async with aiohttp.ClientSession() as client: async with client.get( f"{self.tester_api_url}/api/v1/sessions/{self.session['id']}/export?format=json" ) as resp: session_data = await resp.json()
report = { "test_summary": { "session_id": self.session['id'], "timestamp": datetime.now().isoformat(), "server_url": self.server_url }, "test_results": all_results, "session_data": session_data }
# Save report report_filename = f"mcp_test_report_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json" with open(report_filename, 'w') as f: json.dump(report, f, indent=2)
print(f"Test report saved to: {report_filename}") return report
async def main(): parser = argparse.ArgumentParser(description="Automated MCP Server Testing") parser.add_argument("--tester-url", default="https://api.mcp-tester.local", help="MCP Client Tester API URL") parser.add_argument("--server-url", required=True, help="MCP Server URL to test") parser.add_argument("--tests", default="basic,performance,errors", help="Comma-separated list of test suites to run")
args = parser.parse_args()
tester = MCPServerTester(args.tester_url, args.server_url)
# Create test session await tester.create_session("Automated Test Suite") print(f"Created test session: {tester.session['id']}")
# Run requested tests test_suites = args.tests.split(',') all_results = {}
if 'basic' in test_suites: all_results['basic'] = await tester.run_basic_tests()
if 'performance' in test_suites: all_results['performance'] = await tester.run_performance_tests()
if 'errors' in test_suites: all_results['errors'] = await tester.run_error_tests()
# Generate report report = await tester.generate_report(all_results)
print("\nTest Summary:") print(f"Session ID: {tester.session['id']}") print(f"Tests Run: {', '.join(test_suites)}") print(f"Report: {report}")
if __name__ == "__main__": asyncio.run(main())Running the Examples
To run these examples:
1. Save the scripts to your local machine
2. Install dependencies:

   ```bash
   pip install aiohttp requests fastmcp
   ```

3. Start MCP Client Tester:

   ```bash
   docker-compose up -d
   ```

4. Run the examples:

   ```bash
   python test_mcp_server.py --server-url http://localhost:8001
   ```
## Next Steps
Once you’re comfortable with these basic examples:
- Explore Custom Tools Testing for advanced scenarios
- Learn about Transport Testing for protocol-specific testing
- Check out Client Integration for end-to-end testing
- Review Advanced Scenarios for complex testing patterns
Ready for more advanced testing? Continue with Custom Tools Testing to learn about testing complex MCP tools and workflows.