MCP server for interacting with RabbitMQ message brokers. Exposes RabbitMQ admin APIs and message-level operations as Model Context Protocol tools.
https://github.com/kenliao94/mcp-server-rabbitmq

Stop context-switching between RabbitMQ admin tools and your AI conversations. This MCP server puts RabbitMQ operations directly into your AI workflow, giving you conversational access to queue management, message inspection, and broker administration.
You're debugging a production issue. Messages are backing up in a queue, and you need to find out why and fix it fast.
Right now, that means bouncing between Claude, the RabbitMQ management UI, command-line tools, and maybe some custom scripts. Each context switch breaks your debugging flow and slows down incident response.
This MCP server eliminates the tool-switching overhead by exposing RabbitMQ's admin APIs and message operations as MCP tools. Your AI can now:
Diagnose queue issues in real-time:
"Check the depth of all queues in the user-notifications exchange and show me the consumer count for any queues over 1000 messages"
Perform surgical queue operations:
"Purge the failed-payment-retries queue but leave failed-payment-dlq untouched, then show me the last 5 messages that were in there"
Audit your topology setup:
"Compare the binding configuration between our staging and production payment-processing exchanges"
The AI gets full context about your message broker state and can execute operations without you having to describe what you're seeing in the management UI.
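To make the queue-depth example concrete, here is a rough sketch of the kind of call a tool like this makes under the hood, using RabbitMQ's management plugin REST API (the real `/api/queues` endpoint on the default management port 15672). The host, credentials, threshold, and helper names are illustrative, not the server's actual implementation:

```python
import json
import urllib.request

def find_backed_up(queues, threshold=1000):
    """Return (name, depth, consumer count) for queues deeper than threshold."""
    return [
        (q["name"], q.get("messages", 0), q.get("consumers", 0))
        for q in queues
        if q.get("messages", 0) > threshold
    ]

def fetch_queues(host, user, password, port=15672):
    """Fetch queue stats from the RabbitMQ management API (/api/queues)."""
    url = f"http://{host}:{port}/api/queues"
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, url, user, password)
    opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(mgr))
    with opener.open(url) as resp:
        return json.load(resp)

# The filtering logic works on any queue listing shaped like the API response:
sample = [
    {"name": "user-notifications", "messages": 4200, "consumers": 0},
    {"name": "audit-log", "messages": 12, "consumers": 3},
]
print(find_backed_up(sample))  # [('user-notifications', 4200, 0)]
```

A queue with thousands of messages and zero consumers, as in the sample, is exactly the pattern the "check the depth of all queues" prompt above is hunting for.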
Available on PyPI, so no local builds or dependency hell:
```json
{
  "mcpServers": {
    "rabbitmq": {
      "command": "uvx",
      "args": [
        "mcp-server-rabbitmq@latest",
        "--rabbitmq-host", "your-broker.com",
        "--username", "admin",
        "--password", "secure-password"
      ]
    }
  }
}
```
Or install via Smithery for Claude Desktop:
npx -y @smithery/cli install @kenliao94/mcp-server-rabbitmq --client claude
Incident Response: Instead of alt-tabbing through multiple tools while on a production call, ask your AI to check queue depths, identify backed-up consumers, and execute fixes. The AI maintains full context about what it's seeing and what actions it's taking.
Development Debugging: When your local integration tests fail because of message timing issues, have the AI inspect your test queues, show you exactly what messages are there, and help you understand why your consumer isn't picking them up.
Infrastructure Auditing: Need to verify your disaster recovery setup? Ask the AI to compare queue configurations between environments and flag any differences in routing rules or durability settings.
Capacity Planning: Get the AI to analyze message flow patterns across your topology and identify bottlenecks before they become problems.
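The auditing use case above largely comes down to diffing queue definitions between environments. A minimal sketch of that comparison, assuming dicts shaped like the management API's queue definitions (the queue names, environments, and the `diff_queue_configs` helper are hypothetical):

```python
def diff_queue_configs(env_a, env_b, keys=("durable", "auto_delete", "arguments")):
    """Compare queue definitions (name -> config dict) between two environments.

    Returns {queue_name: {key: (value_in_a, value_in_b)}} for mismatched
    settings, and flags queues present in only one environment."""
    diffs = {}
    for name in sorted(set(env_a) | set(env_b)):
        a, b = env_a.get(name), env_b.get(name)
        if a is None or b is None:
            diffs[name] = {"present": (a is not None, b is not None)}
            continue
        mismatched = {k: (a.get(k), b.get(k)) for k in keys if a.get(k) != b.get(k)}
        if mismatched:
            diffs[name] = mismatched
    return diffs

staging = {"payments": {"durable": True, "auto_delete": False, "arguments": {}}}
production = {"payments": {"durable": True, "auto_delete": False,
                           "arguments": {"x-dead-letter-exchange": "payments-dlx"}}}
print(diff_queue_configs(staging, production))
# {'payments': {'arguments': ({}, {'x-dead-letter-exchange': 'payments-dlx'})}}
```

Here the diff surfaces a dead-letter routing argument that exists only in production, which is precisely the kind of drift a disaster-recovery audit needs to flag.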
Switch between different RabbitMQ instances mid-conversation. Start by checking your production cluster, then compare with staging, then look at your development environment - all without reconfiguring anything.
The server handles connection management and keeps your AI context flowing smoothly across different broker environments.
FastMCP Integration: Built on FastMCP with bearer auth support, so you can deploy this as a remote service with proper authentication rather than running everything locally.
Message-Level Operations: Goes beyond just admin APIs. Using Pika under the hood, it can publish, consume, and inspect individual messages - giving your AI fine-grained control over your message flows.
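To show what "inspect individual messages" can look like with Pika, here is a sketch of peeking at a queue without consuming it: pull messages with `basic_get` under manual acks, then `basic_nack` with `requeue=True` so they stay on the queue. The broker address, queue name, and both helper functions are assumptions for illustration, not the server's actual tool code:

```python
def peek_messages(host, user, password, queue, count=5):
    """Fetch up to `count` messages without consuming them: basic_get with
    manual acks, then nack + requeue so every message stays on the queue."""
    import pika  # third-party dependency; imported lazily inside the function

    creds = pika.PlainCredentials(user, password)
    conn = pika.BlockingConnection(
        pika.ConnectionParameters(host=host, credentials=creds))
    channel = conn.channel()
    bodies, tags = [], []
    for _ in range(count):
        method, _props, body = channel.basic_get(queue=queue, auto_ack=False)
        if method is None:  # queue drained before we hit `count`
            break
        bodies.append(body)
        tags.append(method.delivery_tag)
    for tag in tags:
        channel.basic_nack(tag, requeue=True)  # put everything back
    conn.close()
    return [summarize(b) for b in bodies]

def summarize(body, limit=80):
    """Render a raw message body as a short printable preview."""
    text = body.decode("utf-8", errors="replace")
    return text if len(text) <= limit else text[:limit] + "..."

print(summarize(b'{"order_id": 42, "status": "retrying"}'))
# {"order_id": 42, "status": "retrying"}
```

The nack-and-requeue step is what makes this safe to run against a live queue during the "show me the last 5 messages" style prompts above: the consumer offsets and message contents are left untouched.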
Production Ready: Supports TLS connections and all the authentication patterns you need for real broker environments.
This isn't just another API wrapper. It's designed specifically to make AI conversations about message queues as natural and powerful as talking to a senior infrastructure engineer who has your entire RabbitMQ topology memorized.
Your AI debugging sessions just got a lot more productive.