Model Context Protocol (MCP) server that exposes Grafana, Prometheus, Loki, Alerting and OnCall operations as structured tools.
https://github.com/grafana/mcp-grafana

Stop copying dashboard URLs and alert snippets into chat windows. The Grafana MCP server connects AI assistants directly to your observability stack, turning conversations into actionable insights across dashboards, metrics, logs, and incidents.
You're debugging an outage at 2 AM. Your AI assistant can help analyze patterns and suggest fixes, but it can't see your dashboards, query your metrics, or check who's on-call. You end up switching between tools, copying data, and losing context in the process.
Meanwhile, your monitoring setup contains everything needed to understand system behavior - if only your AI could access it directly.
This MCP server makes your AI assistant a first-class member of your observability workflow. Instead of describing what you see in Grafana, your assistant can:

- Search and inspect dashboards
- Query Prometheus metrics and Loki logs
- Examine alert rules and their current state
- Look up Grafana OnCall schedules to see who is on call
- Create and manage incidents
Your conversations become data-driven immediately, with real metrics backing every suggestion.
Incident Response: "What's causing high memory usage on the payment service?" Your assistant queries the relevant Prometheus metrics, checks related dashboards, searches logs for errors, and identifies the root cause - all in one conversation.
Alert Investigation: When an alert fires, ask "Why is this alerting and who should handle it?" Your assistant examines the alert rule configuration, checks current metric values, identifies the on-call engineer, and can even create an incident if needed.
Performance Analysis: "Find slow database queries from the last hour." Your assistant uses Sift to analyze traces, correlates with relevant dashboards, and surfaces the specific queries causing issues.
Dashboard Building: "Create a dashboard for monitoring our new API endpoints." Your assistant can examine existing dashboards for patterns, suggest relevant metrics, and even generate the dashboard configuration.
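Under the hood, each of these scenarios boils down to a series of MCP tool calls against the server. As a rough sketch, the metric lookup in the incident-response example might arrive as a JSON-RPC request like the one below; the tool name and argument fields here are illustrative, so consult the server's published tool schemas for the exact shapes:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "query_prometheus",
    "arguments": {
      "datasourceUid": "prometheus",
      "expr": "sum(container_memory_working_set_bytes{service=\"payments\"})"
    }
  }
}

The assistant chains several such calls - a dashboard search, a log query, an on-call lookup - and synthesizes the results into a single answer.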
Create a Grafana service account with appropriate permissions for the tools you need
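If you want to script this step, one approach is Grafana's service account HTTP API. A minimal sketch, assuming a local instance with basic-auth admin credentials (swap in your own auth, and pick a role that matches the tools you plan to enable):

# Create the service account (Viewer covers the read-only tools)
curl -s -u admin:admin -X POST http://localhost:3000/api/serviceaccounts \
  -H "Content-Type: application/json" \
  -d '{"name": "mcp-grafana", "role": "Viewer"}'

# Mint a token for it, substituting the "id" returned above
curl -s -u admin:admin -X POST http://localhost:3000/api/serviceaccounts/<id>/tokens \
  -H "Content-Type: application/json" \
  -d '{"name": "mcp-grafana-token"}'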
Choose your deployment method:
# Docker (most common)
docker run --rm -i \
  -e GRAFANA_URL=http://localhost:3000 \
  -e GRAFANA_API_KEY=your_token \
  mcp/grafana -t stdio
# Or install the binary
go install github.com/grafana/mcp-grafana/cmd/mcp-grafana@latest
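The binary reads the same environment variables as the Docker image, so a local stdio run looks like this:

# Run the installed binary directly (ensure $(go env GOPATH)/bin is on your PATH)
GRAFANA_URL=http://localhost:3000 \
GRAFANA_API_KEY=your_token \
mcp-grafana -t stdio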
Configure your AI client (Claude Desktop example):
{
  "mcpServers": {
    "grafana": {
      "command": "mcp-grafana",
      "env": {
        "GRAFANA_URL": "http://localhost:3000",
        "GRAFANA_API_KEY": "your_service_account_token"
      }
    }
  }
}
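If you'd rather not install the binary, the client config can launch the Docker image instead. A sketch along the same lines as the Docker command above (verify the exact args against your client's docs):

{
  "mcpServers": {
    "grafana": {
      "command": "docker",
      "args": [
        "run", "--rm", "-i",
        "-e", "GRAFANA_URL",
        "-e", "GRAFANA_API_KEY",
        "mcp/grafana", "-t", "stdio"
      ],
      "env": {
        "GRAFANA_URL": "http://localhost:3000",
        "GRAFANA_API_KEY": "your_service_account_token"
      }
    }
  }
}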
The server connects to your Grafana instance and automatically discovers available datasources, dashboards, and configurations.
The server includes 40+ tools across different categories. You can disable entire categories you don't use:
# Skip OnCall tools if you don't use Grafana OnCall
mcp-grafana --disable-oncall
# Focus on just metrics and dashboards
mcp-grafana --disable-oncall --disable-incident --disable-sift
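These flags work the same way in a container, assuming the image passes its arguments through to the binary as the -t stdio example above suggests:

docker run --rm -i \
  -e GRAFANA_URL=http://localhost:3000 \
  -e GRAFANA_API_KEY=your_token \
  mcp/grafana -t stdio --disable-oncall --disable-incident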
This keeps your AI conversations focused and reduces context window usage.
Works with your existing setup: No changes to Grafana configurations or datasources required. The server uses standard Grafana APIs.
Secure by design: Uses service account tokens with configurable permissions. You control exactly what data the AI can access.
Multiple deployment options: Run it as a standalone binary, as a Docker container, or in SSE mode for multi-client scenarios.
Debug mode: Built-in request/response logging for troubleshooting API interactions.
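As a sketch of those last two points - assuming -t sse selects the SSE transport, mirroring the -t stdio flag above, and a -debug flag turns on the request/response logging:

# Serve multiple clients over SSE instead of stdio
GRAFANA_URL=http://localhost:3000 \
GRAFANA_API_KEY=your_token \
mcp-grafana -t sse

# Log Grafana API requests and responses while troubleshooting
# (with the same env vars set)
mcp-grafana -t stdio -debug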
Whether you're debugging production issues, building new dashboards, or investigating performance problems, this MCP server transforms your AI assistant from a generic helper into an observability expert that understands your specific infrastructure and data.
Your next incident response starts with a conversation, not a dashboard hunt.