A Model Context Protocol (MCP) server that exposes Prometheus metrics through standardized MCP tools so AI assistants can run PromQL queries, list metrics, fetch metadata and more.
https://github.com/pab1it0/prometheus-mcp-server

Stop context-switching between your AI chat and Prometheus dashboards. This MCP server gives your AI assistant direct access to your Prometheus metrics, so you can troubleshoot performance issues, analyze trends, and investigate incidents without leaving your conversation.
Your AI assistant can now run complex PromQL queries for you. Ask it to "find services with high error rates in the last hour" or "show me memory usage trends for my backend pods" and get actual data instead of generic advice.
Instead of manually crafting queries like:

```promql
rate(http_requests_total{status=~"5.."}[5m]) / rate(http_requests_total[5m]) > 0.01
```

just tell your assistant: "Find services with error rates above 1% in the last 5 minutes."
The server exposes five focused tools that transform how you interact with your metrics, covering instant and range queries, metric discovery, and metadata lookup.
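Under the hood, a query tool of this kind presumably wraps Prometheus's standard HTTP API (`/api/v1/query`). The sketch below illustrates that round trip; the helper names `build_query_url` and `execute_query` are illustrative, not the server's actual tool names.

```python
# Minimal sketch of an instant PromQL query against Prometheus's HTTP API.
# The endpoint and JSON response shape come from the Prometheus docs; the
# function names here are illustrative, not this MCP server's API.
import json
from urllib import parse, request


def build_query_url(base_url: str, promql: str) -> str:
    """Build an instant-query URL for the /api/v1/query endpoint."""
    return f"{base_url}/api/v1/query?{parse.urlencode({'query': promql})}"


def execute_query(base_url: str, promql: str) -> list:
    """Run the query and return the result vector from the JSON response."""
    with request.urlopen(build_query_url(base_url, promql)) as resp:
        return json.loads(resp.read())["data"]["result"]
```

For example, `execute_query("http://your-prometheus:9090", "up")` returns one sample per scrape target, which is roughly the raw material the assistant reasons over.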
Your assistant can now correlate metrics across services, identify patterns in your infrastructure, and suggest optimizations based on actual data patterns.
Built for real environments with authentication support (basic auth and bearer tokens), multi-tenant setups (Cortex/Mimir/Thanos), and configurable tool sets to avoid cluttering your AI's context window.
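Authentication and tenancy are typically wired in through environment variables in the same config block. The variable names below are assumptions based on common conventions; check the repository README for the exact ones the server reads.

```json
{
  "env": {
    "PROMETHEUS_URL": "https://prometheus.example.com",
    "PROMETHEUS_USERNAME": "metrics-reader",
    "PROMETHEUS_PASSWORD": "changeme",
    "ORG_ID": "tenant-1"
  }
}
```

For bearer-token setups you would substitute a token variable for the username/password pair; for Cortex/Mimir/Thanos, the tenant header is what the org-ID value feeds.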
The Docker container approach means zero local Python dependencies:
```json
{
  "mcpServers": {
    "prometheus": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-e", "PROMETHEUS_URL",
        "ghcr.io/pab1it0/prometheus-mcp-server:latest"
      ],
      "env": {
        "PROMETHEUS_URL": "http://your-prometheus:9090"
      }
    }
  }
}
```
During outages, your assistant can immediately pull relevant metrics, correlate across services, and help identify root causes while you focus on fixes rather than query syntax.
Ask plain-English questions, like the examples above, and get actionable data instead of having to remember PromQL syntax under pressure.
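For instance, a question like "what's the p99 request latency per service?" might map to a query such as the following (illustrative, assuming a conventional histogram metric named `http_request_duration_seconds_bucket` with a `service` label):

```promql
histogram_quantile(
  0.99,
  sum by (service, le) (rate(http_request_duration_seconds_bucket[15m]))
)
```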
Your monitoring data becomes conversational, making your AI assistant as knowledgeable about your infrastructure as your most experienced SRE.