An MCP (Model Context Protocol) server that exposes Hydrolix clusters to LLMs through a small set of SQL-based tools: run_select_query, list_databases, and list_tables.
https://github.com/hydrolix/mcp-hydrolix

Stop switching between chat interfaces and SQL clients. The Hydrolix MCP server connects your LLM directly to your time-series data, letting you analyze observability metrics, IoT streams, and event data through natural conversation.
Working with time-series data means constantly jumping between tools. You ask questions in Slack, then pivot to a SQL client, write complex time-range queries, export results, and paste them back into your discussion. This server eliminates that friction entirely.
Your LLM can now query your Hydrolix cluster directly, write optimized time-series queries, and deliver insights without you touching SQL. Ask "What were our API error rates during the incident last Tuesday?" and get actual data, not hallucinated responses.
Conversational Analytics: Transform natural language questions into optimized SQL queries against your time-series data. No more context switching between chat and database clients.
Built-in Safety: Every query runs with readonly = 1, so you can confidently let your LLM explore your data without worrying about accidental modifications.
Time-Series Optimization: The server guides LLMs to write queries that take advantage of Hydrolix's primary key optimizations, especially for time-range queries that are common in observability workflows.
Zero Infrastructure Overhead: Runs as a lightweight MCP server using uv for isolated dependency management. No containers or complex deployments required.
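The safety and time-series points above come down to two things at the query layer: the session is forced to readonly = 1, and queries carry an explicit filter on the primary timestamp column. Here is a minimal sketch of that pattern using the clickhouse-connect driver against Hydrolix's ClickHouse-compatible interface. It is not the server's own code; the host, port, credentials, and table names are placeholders, and the port is an assumption.

# Minimal sketch (not the server's code) of the readonly + time-range pattern.
# Host, port, credentials, and table names are placeholders.
import clickhouse_connect

client = clickhouse_connect.get_client(
    host="your-cluster-host",
    port=8088,                      # assumption: your cluster's query endpoint
    username="your-username",
    password="your-password",
    secure=True,
    settings={"readonly": 1},       # the server rejects anything that modifies data
)

# A bounded filter on the primary timestamp column lets Hydrolix apply its
# primary key optimizations instead of scanning the whole table.
rows = client.query(
    "SELECT toStartOfMinute(timestamp) AS minute, count() AS errors "
    "FROM logs.api_requests "
    "WHERE status >= 500 AND timestamp >= now() - INTERVAL 1 HOUR "
    "GROUP BY minute ORDER BY minute"
).result_rows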
Incident Response: During an outage, ask your LLM to correlate error rates across services, identify when specific metrics spiked, and trace the timeline of events across your observability data.
Performance Analysis: "Show me the 95th percentile response times for our payment API over the last 30 days, broken down by region." Your LLM writes the time-series aggregation query and explains the trends.
Capacity Planning: Analyze resource utilization patterns, identify peak usage periods, and forecast capacity needs by querying historical metrics data through natural conversation.
Business Intelligence: Transform raw event streams into business insights. Ask about user behavior patterns, feature adoption rates, or revenue metrics across time periods.
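For the performance-analysis case above, the generated query is typically a time-bucketed aggregation. A hedged sketch of what it might look like, again via clickhouse-connect with placeholder table and column names:

# "95th percentile response times for the payment API over the last 30 days,
# broken down by region" as a ClickHouse-dialect aggregation (names are placeholders).
import clickhouse_connect

client = clickhouse_connect.get_client(
    host="your-cluster-host", username="your-username",
    password="your-password", secure=True, settings={"readonly": 1},
)

p95_by_region = client.query(
    "SELECT region, toStartOfDay(timestamp) AS day, "
    "       quantile(0.95)(response_time_ms) AS p95_ms "
    "FROM metrics.payment_api "
    "WHERE timestamp >= now() - INTERVAL 30 DAY "
    "GROUP BY region, day ORDER BY day, region"
).result_rows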
The server exposes three focused tools that your LLM can use intelligently:
run_select_query - Execute SQL queries with automatic safety constraints
list_databases - Discover available data sources
list_tables - Explore schema and structure

Setup takes minutes with uv:
{
  "command": "uv",
  "args": ["run", "--with", "mcp-hydrolix", "--python", "3.13", "mcp-hydrolix"],
  "env": {
    "HYDROLIX_HOST": "your-cluster-host",
    "HYDROLIX_USER": "your-username",
    "HYDROLIX_PASSWORD": "your-password"
  }
}
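This entry goes wherever your MCP client expects server definitions (for Claude Desktop, typically under a named server inside the mcpServers map). If you want to poke at the three tools outside a chat client, here is a hedged sketch using the MCP Python SDK over stdio with the same uv command; the tool argument names ("query", "database") are assumptions, so check the repository's tool schemas.

# Hedged sketch: calling the server's tools directly via the MCP Python SDK.
# Tool argument names ("query", "database") and table names are assumptions.
import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="uv",
    args=["run", "--with", "mcp-hydrolix", "--python", "3.13", "mcp-hydrolix"],
    env={
        **os.environ,               # keep PATH etc. so uv can be found
        "HYDROLIX_HOST": "your-cluster-host",
        "HYDROLIX_USER": "your-username",
        "HYDROLIX_PASSWORD": "your-password",
    },
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discovery tools: what the LLM sees before writing any SQL.
            print(await session.call_tool("list_databases", {}))
            print(await session.call_tool("list_tables", {"database": "your_db"}))

            # A read-only, time-bounded query through run_select_query.
            result = await session.call_tool(
                "run_select_query",
                {"query": "SELECT count() FROM your_db.your_table "
                          "WHERE timestamp >= now() - INTERVAL 1 DAY"},
            )
            print(result)

asyncio.run(main())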
The server includes specific guidance that helps LLMs write better time-series queries. When you mention time ranges in your questions, the LLM is steered toward filtering directly on the table's primary timestamp column and constraining each query to an explicit time window, so it benefits from Hydrolix's primary key optimizations (see the sketch below). This means faster queries and more efficient resource usage without you needing to know the optimization details.
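As an illustration of the difference this guidance makes (placeholder table and column names, not taken from the server's prompt text):

# Without guidance: nothing stops an unbounded scan of the whole table.
unbounded = "SELECT count() FROM logs.api_requests WHERE status >= 500"

# With guidance: an explicit window on the primary timestamp column, which is
# what Hydrolix's time-based primary key optimizations can act on.
bounded = (
    "SELECT count() FROM logs.api_requests "
    "WHERE status >= 500 AND timestamp >= now() - INTERVAL 1 DAY"
)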
If you're working with observability data, IoT telemetry, financial time series, or any high-volume event streams in Hydrolix, this server transforms how you interact with that data. Instead of writing SQL queries manually, you can have natural conversations about your data and get real results.
The combination of Hydrolix's powerful time-series capabilities with conversational AI creates a new way to explore and understand your data. Your domain expertise stays focused on the insights, not the query mechanics.