Model Context Protocol (MCP) server that lets an LLM save tabular data and render it with Vega-Lite, returning either a full spec (text) or a PNG image.
https://github.com/isaacwasserman/mcp-vegalite-server

Stop settling for text descriptions when your AI could be generating actual charts. This MCP server bridges the gap between language models and visual data representation, letting your LLM save datasets and render them as proper Vega-Lite visualizations.
You're building with LLMs that can analyze data and suggest visualizations, but they're stuck describing what charts should look like instead of creating them. Meanwhile, you're context-switching between AI conversations and manual chart creation in separate tools. The disconnect kills your flow and limits what's possible in AI-powered data analysis workflows.
Direct Visualization Pipeline: Your LLM saves tabular data with save_data, then renders it with visualize_data using proper Vega-Lite specifications. No more "here's what your chart should look like" - you get actual charts (see the sketch after this list).
Flexible Output Options: Choose between complete Vega-Lite JSON specs (perfect for embedding in web apps) or base64-encoded PNG images (ideal for reports and documentation). Switch between modes with a simple configuration flag.
Production-Ready Stack: Built on battle-tested libraries - Altair for Vega-Lite generation, Pandas for data handling, FastAPI for the server layer. The dependency list reads like a standard data science toolkit because it is one.
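To make the pipeline concrete, here's a minimal sketch of the two tool calls as an MCP client would issue them. The save_data and visualize_data tool names come from the server; the argument names shown here (name, data, data_name, vegalite_specification) are assumptions based on typical MCP tool schemas, so check the repo's tool definitions before relying on them:

{
  "tool": "save_data",
  "arguments": {
    "name": "sales",
    "data": [
      {"region": "East", "revenue": 120},
      {"region": "West", "revenue": 95}
    ]
  }
}

{
  "tool": "visualize_data",
  "arguments": {
    "data_name": "sales",
    "vegalite_specification": {
      "mark": "bar",
      "encoding": {
        "x": {"field": "region", "type": "nominal"},
        "y": {"field": "revenue", "type": "quantitative"}
      }
    }
  }
}

The split matters: the data is saved once under a name, and any number of specs can then be rendered against it without resending the table.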
Data Analysis Workflows: Ask Claude to analyze your CSV, and instead of getting a description of trends, get actual scatter plots, histograms, and correlation matrices rendered immediately in the conversation.
Report Generation: Build LLM-powered reporting tools where natural language requests like "show me quarterly sales by region" generate embedded charts, not just text summaries.
Exploratory Data Analysis: Feed datasets to your LLM and let it generate multiple visualization perspectives automatically - distribution plots, time series charts, categorical breakdowns - all rendered and ready to use.
API Integration: The MCP protocol means this plugs into any MCP-compatible client. Start with Claude Desktop for prototyping, then integrate into custom AI applications.
Add it to your Claude Desktop config and you're immediately working with an AI that can create real visualizations:
{
  "mcpServers": {
    "datavis": {
      "command": "uv",
      "args": [
        "--directory", "/path/to/mcp-vegalite-server",
        "run", "mcp_server_datavis",
        "--output_type", "png"
      ]
    }
  }
}
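With --output_type png as above, a rendered chart comes back as an image; in spec mode you get the JSON text instead. A rough sketch of the two response shapes, assuming the server returns standard MCP content items (the exact framing is the server's choice, and the angle-bracketed values are placeholders):

{
  "content": [
    {"type": "image", "data": "<base64-encoded PNG>", "mimeType": "image/png"}
  ]
}

{
  "content": [
    {"type": "text", "text": "<the full Vega-Lite JSON spec>"}
  ]
}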
The server handles the complexity of Vega-Lite specification generation while your LLM focuses on understanding your data and visualization intent. You describe what you want to see; it creates the chart specification and renders the result.
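For example, a request like "show me quarterly sales by region" might yield a spec along these lines - a hand-written illustration, not captured server output, with field names assumed from a hypothetical saved table:

{
  "mark": "bar",
  "encoding": {
    "x": {"field": "quarter", "type": "ordinal"},
    "y": {"field": "sales", "type": "quantitative", "aggregate": "sum"},
    "color": {"field": "region", "type": "nominal"}
  }
}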
This isn't about replacing your existing visualization tools - it's about making your LLM workflows more complete. When you're working with data in an AI conversation, you shouldn't have to break flow to create charts manually. Your AI should handle that step too.