A Model Context Protocol (MCP) server that lets LLMs query Logfire OpenTelemetry traces and metrics via tools such as `find_exceptions`, `find_exceptions_in_file`, and `arbitrary_query`.
https://github.com/pydantic/logfire-mcp

Stop context-switching between your IDE and Logfire's web interface every time you need to investigate a production issue. The Logfire MCP Server connects your AI assistant directly to your OpenTelemetry traces and metrics, letting you debug with natural-language queries.
You're deep in a coding session when alerts start firing. Now you need to switch to Logfire's web interface, click through trace views, and piece together what went wrong by hand. Meanwhile, your AI assistant sits idle, unable to help with the most time-consuming part of debugging.
With Logfire MCP, your AI assistant becomes your debugging partner. Instead of manual trace hunting, you can ask for, say, all exceptions from `app/api.py` with their trace context. Your assistant executes these queries directly against your Logfire data and returns actionable insights without you leaving your development environment.
`find_exceptions`: Get exception counts grouped by file path. Perfect for spotting which parts of your codebase throw errors most frequently.
`find_exceptions_in_file`: Drill down into specific files to see detailed trace information, including stack traces, attributes, and trace IDs.
`arbitrary_query`: Run custom SQL queries against your OpenTelemetry data. Full access to your traces and metrics with the flexibility of SQL.
`get_logfire_records_schema`: Understand your data structure to craft better queries.
Each tool works with a simple time window parameter (up to 7 days back), so you can focus on recent issues or investigate historical patterns.
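To make the time-window idea concrete, here is a minimal sketch of clamping a requested lookback to the 7-day limit. It assumes the window is expressed in whole minutes; the actual parameter name and unit in the server's tool schema may differ.

```python
from datetime import timedelta

# Assumed limit: the 7-day lookback mentioned above, expressed in minutes.
MAX_AGE_MINUTES = 7 * 24 * 60  # 10080

def window_minutes(delta: timedelta) -> int:
    """Convert a lookback window to whole minutes, clamped to the 7-day limit."""
    minutes = int(delta.total_seconds() // 60)
    return max(1, min(minutes, MAX_AGE_MINUTES))

# "last 30 minutes" for a fire-drill query
print(window_minutes(timedelta(minutes=30)))   # 30
# "last month" gets clamped to the 7-day maximum
print(window_minutes(timedelta(days=30)))      # 10080
```

The clamp mirrors what the server would otherwise reject, so the assistant never sends a window it cannot serve.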
Production Fire Drill: When alerts fire at 2 AM, ask "What new exceptions appeared in the last 30 minutes?" Get immediate insight into what broke without clicking through dozens of trace views.
Performance Investigation: Run queries like `SELECT AVG(duration_ms), service_name FROM traces WHERE operation_name = 'database_query' AND created_at > NOW() - INTERVAL '24 hours' GROUP BY service_name` to identify slow services.
Code Review Context: Before deploying, ask "Show me all errors from the payment_processor.py file in the last week" to understand the stability of code you're about to modify.
Post-Mortem Analysis: Query specific time ranges around incidents with full SQL flexibility: `SELECT trace_id, message, attributes FROM records WHERE severity_text = 'ERROR' AND created_at BETWEEN '2024-03-20 10:00:00' AND '2024-03-20 11:00:00'`.
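If you script post-mortems, it can help to validate the incident window before handing the SQL to `arbitrary_query`. A small hypothetical helper, reusing the table and column names from the example query above:

```python
from datetime import datetime

def postmortem_query(start: str, end: str) -> str:
    """Build the post-mortem SQL shown above for a given incident window.

    `records`, `severity_text`, and `created_at` come from the example
    query in the text; timestamps are checked before interpolation.
    """
    fmt = "%Y-%m-%d %H:%M:%S"
    t0, t1 = datetime.strptime(start, fmt), datetime.strptime(end, fmt)
    if t0 >= t1:
        raise ValueError("start must precede end")
    return (
        "SELECT trace_id, message, attributes FROM records "
        "WHERE severity_text = 'ERROR' "
        f"AND created_at BETWEEN '{start}' AND '{end}'"
    )

print(postmortem_query("2024-03-20 10:00:00", "2024-03-20 11:00:00"))
```

Parsing the timestamps first catches a reversed or malformed window early, which matters when the query string is assembled by an assistant rather than typed by hand.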
```shell
uvx logfire-mcp --read-token=YOUR_TOKEN
```

That's it. No complex authentication flows, no additional infrastructure, no new query languages to learn.
For Cursor users, just add this to `.cursor/mcp.json`:

```json
{
  "mcpServers": {
    "logfire": {
      "command": "uvx",
      "args": ["logfire-mcp", "--read-token=YOUR-TOKEN"]
    }
  }
}
```
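Other MCP clients follow the same pattern. For example, Claude Desktop accepts the same `mcpServers` shape in its `claude_desktop_config.json` (this assumes your client's config file uses the standard MCP schema; the file location varies by client and OS):

```json
{
  "mcpServers": {
    "logfire": {
      "command": "uvx",
      "args": ["logfire-mcp", "--read-token=YOUR-TOKEN"]
    }
  }
}
```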
Built by the Pydantic team, this server integrates with any MCP-compatible client, including Cursor. And this isn't a proof-of-concept: it's built to handle real production workloads.
Your observability data is already structured and queryable. Your AI assistant is already capable of complex reasoning. The Logfire MCP Server is the missing link that turns hours of manual investigation into seconds of natural language queries.
Install it, configure it, and start debugging with the efficiency you've been waiting for.