A FastMCP server for managing and testing webhooks via the webhook-test.com API.
https://github.com/alimo7amed93/webhook-tester-mcp

Testing webhooks shouldn't require spinning up ngrok, writing custom endpoints, or manually inspecting request logs. This MCP server turns webhook testing into a conversation with your AI assistant.
You're building an integration that receives webhooks. The usual workflow involves spinning up ngrok to expose a local endpoint, writing a throwaway handler just to capture requests, copying URLs between services by hand, and then manually inspecting request logs to see what actually arrived.
Meanwhile, your webhook-test.com dashboard sits unused because integrating its API means writing more throwaway code.
This server bridges webhook-test.com's API directly into your AI workflow. Instead of context-switching between tools, you manage webhooks through natural conversation:
"Create a webhook for testing Stripe payment notifications"
"Show me the last 5 payloads received on the user-signup webhook"
"Delete the old webhooks from yesterday's testing session"
The MCP server handles the API calls, payload formatting, and response parsing. You get immediate webhook insights without leaving your development context.
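Under the hood, each of those requests maps to a tool exposed by the FastMCP server. As a rough illustration, a webhook-creation tool could look something like the sketch below; the base URL, endpoint path, auth header, and response fields are assumptions made for illustration, not the documented webhook-test.com API or the exact code in this repo.

```python
# Rough sketch of a webhook-creation tool with FastMCP. The base URL,
# endpoint path, auth header, and response fields are assumptions for
# illustration, not the documented webhook-test.com API.
import os

import httpx
from fastmcp import FastMCP

mcp = FastMCP("webhook-tester-mcp")

API_BASE = "https://webhook-test.com/api"              # assumed base URL
API_KEY = os.environ["WEBHOOK_TEST_API_KEY"]           # documented env var


@mcp.tool()
def create_webhook(name: str) -> dict:
    """Create a named test webhook and return its id and URL."""
    resp = httpx.post(
        f"{API_BASE}/webhooks",                        # assumed endpoint
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"name": name},
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    mcp.run()
```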
Faster Integration Testing: Create test webhooks instantly during development sessions. No ngrok setup, no custom server code, no manual URL copying between services.
Real-time Debugging: Inspect webhook payloads immediately when something breaks. Ask "what was the last payload structure?" and get formatted JSON responses instantly (see the payload-inspection sketch below).
Cleaner Development Flow: Your AI assistant becomes your webhook testing interface. No switching to browser dashboards or parsing curl responses in terminal.
Reproducible Test Scenarios: Create specific webhook endpoints for different test cases, manage them conversationally, and clean up automatically.
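To make the payload-inspection side concrete, a companion tool might fetch the most recent requests for a webhook and return them as pretty-printed JSON. This continues the sketch above (it assumes `mcp`, `httpx`, `API_BASE`, and `API_KEY` are already defined there); the `/requests` path and `token_id` parameter are again illustrative assumptions.

```python
# Companion payload-inspection tool. Continues the sketch above: assumes
# mcp, httpx, API_BASE, and API_KEY are already defined there. The
# /requests path and token_id parameter are illustrative assumptions.
import json


@mcp.tool()
def get_latest_payloads(token_id: str, limit: int = 5) -> str:
    """Return the most recent payloads received on a webhook as formatted JSON."""
    resp = httpx.get(
        f"{API_BASE}/webhooks/{token_id}/requests",    # assumed endpoint
        headers={"Authorization": f"Bearer {API_KEY}"},
        params={"limit": limit},
    )
    resp.raise_for_status()
    return json.dumps(resp.json(), indent=2)
```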
Third-party Integration Development: You're building a Shopify app that processes order webhooks. Create test webhooks for different order states, inspect the payload structures, and iterate on your parsing logic without manual dashboard navigation.
API Testing Workflows: During development, you need to verify that your webhook handler correctly processes different payload formats. Instead of manually triggering test events through external dashboards, create targeted test webhooks and examine the exact payloads your handler will receive.
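For example, once the assistant has shown you a captured payload, you can paste it straight into a test for your own handler. Everything in this snippet is hypothetical: `handle_order_webhook` and the payload shape stand in for your application's code and whatever the upstream service actually sends.

```python
# Hypothetical example: replay a payload captured on a test webhook against
# your own handler. handle_order_webhook and the payload shape stand in for
# your application's code and whatever the upstream service actually sends.
import json


def handle_order_webhook(payload: dict) -> str:
    # Your parsing logic under test (hypothetical).
    return payload["order"]["status"]


# Payload pasted from the assistant's get_latest_payloads response.
captured = json.loads("""
{
  "order": {"id": "1001", "status": "paid"}
}
""")

assert handle_order_webhook(captured) == "paid"
```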
Debugging Production Issues: A webhook integration breaks in production. Rather than reproducing the exact scenario locally, create a test webhook with the same endpoint configuration, trigger the problematic payload, and analyze the request structure to identify the parsing issue.
Team Collaboration: Share webhook testing results directly in your development context. Instead of taking screenshots of webhook dashboards or copying payloads into chat, your AI assistant provides formatted, discussable webhook data.
Add the server to your Claude Desktop configuration:
```json
{
  "mcpServers": {
    "webhook-tester-mcp": {
      "command": "fastmcp",
      "args": ["run", "/path/to/webhook-tester-mcp/server.py"]
    }
  }
}
```
Set your webhook-test.com API key:
```bash
export WEBHOOK_TEST_API_KEY="your_api_key_here"
```
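If you'd rather not rely on your shell environment, the Claude Desktop config format also accepts a per-server env block, so the key can live alongside the server entry instead (same variable name as the export above):

```json
{
  "mcpServers": {
    "webhook-tester-mcp": {
      "command": "fastmcp",
      "args": ["run", "/path/to/webhook-tester-mcp/server.py"],
      "env": {
        "WEBHOOK_TEST_API_KEY": "your_api_key_here"
      }
    }
  }
}
```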
Now webhook management becomes conversational. Create webhooks during code reviews, inspect payloads while debugging, and clean up test resources without leaving your development environment.
The server eliminates the API integration work you'd otherwise write yourself, while keeping webhook testing integrated into your existing AI-assisted development workflow.