Model Context Protocol server that lets Honeycomb Enterprise customers query and analyze observability data (datasets, SLOs, triggers, etc.) directly from LLMs.
https://github.com/honeycombio/honeycomb-mcp

Stop switching between your AI assistant and Honeycomb's web interface. This MCP server connects your LLM directly to your observability data, letting you debug production issues and analyze system behavior through conversation.
You're already context-switching between your code editor, monitoring dashboards, and AI chat. Every time you need to check error rates, analyze slow queries, or investigate an incident, you break flow to open another browser tab, remember filter syntax, and manually correlate data points.
The Honeycomb MCP server eliminates that friction. Ask your AI assistant "What's the P95 latency for the payment service in the last hour?" and get immediate answers from your actual production data.
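To make that concrete, here is roughly the kind of query spec that question reduces to before it hits Honeycomb's Query API. This is an illustrative sketch only: the column names (duration_ms, service.name) are assumptions about your schema, and the server builds the real query for you:

// Illustrative sketch: roughly the query behind "P95 latency for the payment
// service in the last hour". duration_ms and service.name are assumed column
// names, not fields the server requires.
const p95PaymentLatency = {
  time_range: 3600, // trailing hour, in seconds
  calculations: [{ op: "P95", column: "duration_ms" }],
  filters: [{ column: "service.name", op: "=", value: "payment" }],
};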
Direct Data Access: Query any Honeycomb dataset through natural language. Your AI assistant becomes a first-class interface to your observability stack.
Multi-Environment Support: Seamlessly switch between production, staging, and development environments without changing tools or context.
Rich Analytics: Run complex queries with breakdowns, filters, percentiles, and time-based analysis. The same power as Honeycomb's query builder, but through conversation.
SLO and Alert Monitoring: Check SLO burn rates, trigger status, and alert configurations without leaving your development environment.
Intelligent Caching: Built-in TTL-based caching reduces API calls and improves response times for metadata queries.
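The caching idea is conceptually simple: metadata responses are stored with an expiry timestamp and reused until they go stale. A minimal TTL cache sketch (an illustration, not the server's actual implementation) looks like this:

// Minimal TTL cache sketch (illustrative only): entries expire after ttlMs
// and are silently dropped on the next lookup.
type Entry<T> = { value: T; expiresAt: number };

class TtlCache<T> {
  private store = new Map<string, Entry<T>>();
  constructor(private ttlMs: number) {}

  get(key: string): T | undefined {
    const entry = this.store.get(key);
    if (!entry || Date.now() > entry.expiresAt) {
      this.store.delete(key); // expired or missing: force a fresh API call
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}

// Hypothetical usage: cache dataset listings for five minutes.
const datasetCache = new TtlCache<string[]>(5 * 60_000);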
Incident Response: "Show me error rates by service for the last 30 minutes" → immediate breakdown without dashboard navigation (sketched after this list)
Performance Investigation: "What's the slowest endpoint in production right now?" → sorted P95 latencies with actual values
Deployment Validation: "Compare API response times before and after the 2 PM deploy" → side-by-side analysis
Capacity Planning: "Show me database connection pool utilization trends over the past week" → historical analysis with breakdowns
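As a sketch of the incident-response case above, the question maps to a query with a breakdown over a short time window. Again, service.name and the error column are assumptions about your schema, and the shape below only approximates Honeycomb's query spec:

// Illustrative sketch of "error rates by service for the last 30 minutes".
// The server translates the natural-language question into something like this.
const errorRatesByService = {
  time_range: 1800, // trailing 30 minutes, in seconds
  breakdowns: ["service.name"],
  calculations: [{ op: "COUNT" }],
  filters: [{ column: "error", op: "=", value: true }],
  orders: [{ op: "COUNT", order: "descending" }],
};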
It's compatible with the MCP clients you're already using. Configuration is straightforward: add your Honeycomb API key to your MCP client's environment variables and start querying.
Built specifically for Honeycomb Enterprise customers with full API access. The server runs locally on your machine using Node.js 18+, communicating with your MCP client over STDIO.
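Under the hood this is the standard MCP pattern: a local Node process exposes tools and speaks the protocol over stdin/stdout. A stripped-down sketch using the TypeScript MCP SDK follows; the tool name, parameters, and response are hypothetical stand-ins, not the server's actual tool surface:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Illustrative MCP server over STDIO; the real honeycomb-mcp registers many more tools.
const server = new McpServer({ name: "honeycomb-sketch", version: "0.0.1" });

// Hypothetical tool: a real implementation would call the Honeycomb API
// using HONEYCOMB_API_KEY and return actual dataset metadata.
server.tool(
  "list_datasets",
  { environment: z.string() },
  async ({ environment }) => ({
    content: [{ type: "text", text: `datasets for ${environment}: ...` }],
  })
);

const transport = new StdioServerTransport();
await server.connect(transport);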
Its tool surface spans three areas: query capabilities (breakdowns, filters, percentiles, and time-based analysis), resource management (datasets, environments, and metadata), and enterprise features (SLOs, triggers, and multi-environment support). Installation takes a few commands:
git clone https://github.com/honeycombio/honeycomb-mcp.git
cd honeycomb-mcp
pnpm install && pnpm run build
Add to your MCP client configuration:
{
  "mcpServers": {
    "honeycomb": {
      "command": "node",
      "args": ["/path/to/honeycomb-mcp/build/index.mjs"],
      "env": {
        "HONEYCOMB_API_KEY": "your_api_key"
      }
    }
  }
}
Start asking your AI assistant about your production systems instead of hunting through dashboards. Your observability data becomes as accessible as your codebase.