MCP server that lets any MCP-compatible client (Claude Desktop, Cursor, etc.) interact with a ZenML MLOps/LLMOps backend.
https://github.com/zenml-io/mcp-zenml

Stop switching between your IDE, chat interface, and ZenML dashboard. This MCP server connects Claude Desktop, Cursor, and other MCP clients directly to your ZenML MLOps backend, so you can query pipeline status, inspect artifacts, and trigger runs without leaving your development environment.
You're deep in a coding session, working on a new ML feature. You need to check if your latest training pipeline completed, grab some artifact metadata, or kick off a new experiment. Instead of staying in flow, you're tab-switching to the ZenML dashboard, losing context and momentum.
This MCP server solves that by putting your entire ZenML backend at your fingertips through natural language commands in your AI assistant.
Direct Pipeline Access: Query pipeline runs, step details, and execution logs without leaving your chat interface. Ask "What's the status of my latest training run?" and get real-time answers.
Artifact Intelligence: Inspect model artifacts, datasets, and experiment metadata. Your AI assistant can now understand your ML pipeline context and provide better suggestions.
One-Command Execution: Trigger new pipeline runs directly from conversation. "Run the preprocessing pipeline with the new dataset" becomes a single command instead of a dashboard workflow.
Live Debugging: Check step logs and failure details instantly. When something breaks, you can investigate without context switching.
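Under the hood, each of these requests becomes a standard MCP tool call that your client sends to the server as JSON-RPC. As a rough sketch of the wire format, using a hypothetical `get_pipeline_runs` tool name (the actual tool names are defined in the repo):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_pipeline_runs",
    "arguments": { "status": "running", "limit": 5 }
  }
}
```

Your AI assistant generates and parses these calls for you; your side of the conversation stays in plain English.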
During Model Development: You're iterating on a training script and want to check if your hyperparameter sweep finished. Instead of opening the ZenML dashboard, you ask Claude "Show me the results from the last hyperparameter run" and get structured data right in your chat.
Pipeline Monitoring: Your deployment pipeline is running in production. You can monitor its progress and check artifact quality metrics without leaving your coding environment.
Quick Experiments: You need to test a new data preprocessing step. Ask your AI assistant to trigger a pipeline run with specific parameters, then continue coding while it runs in the background.
Collaborative Debugging: When a teammate asks about a failed pipeline, you can instantly pull up logs, step details, and error messages to debug together.
This isn't about replacing your ZenML dashboard - it's about reducing friction in your ML development workflow. The server provides read access to all your ZenML resources (users, stacks, pipelines, runs, artifacts, services) plus the ability to trigger new runs when templates exist.
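Triggering a run works the same way, provided a run template exists for the pipeline. A sketch with a hypothetical `trigger_pipeline` tool name and illustrative arguments:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "trigger_pipeline",
    "arguments": {
      "template_name": "preprocessing",
      "parameters": { "dataset_version": "v2" }
    }
  }
}
```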
The setup is straightforward: point the MCP server at your ZenML backend with your server URL and an API key, configure it in Claude Desktop or Cursor, and start chatting. No complex authentication flows or additional infrastructure needed.
For Claude Desktop: Add the server to your MCP config and restart. Your AI assistant immediately gains access to your entire ZenML context.
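A sketch of the relevant claude_desktop_config.json entry, assuming you launch the server script with uv and authenticate via ZenML's ZENML_STORE_URL and ZENML_STORE_API_KEY environment variables (the script path is a placeholder; see the repo README for the canonical version):

```json
{
  "mcpServers": {
    "zenml": {
      "command": "uv",
      "args": ["run", "path/to/zenml_server.py"],
      "env": {
        "ZENML_STORE_URL": "https://your-zenml-server.example.com",
        "ZENML_STORE_API_KEY": "your-api-key"
      }
    }
  }
}
```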
For Cursor: Configure per-repository so each project gets its own ZenML context. Perfect for teams with multiple ML projects.
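The server definition takes the same shape; it just lives in a project-local .cursor/mcp.json, so each repository can point at its own ZenML deployment (placeholder values again):

```json
{
  "mcpServers": {
    "zenml": {
      "command": "uv",
      "args": ["run", "path/to/zenml_server.py"],
      "env": {
        "ZENML_STORE_URL": "https://project-a-zenml.example.com",
        "ZENML_STORE_API_KEY": "project-a-api-key"
      }
    }
  }
}
```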
The real power emerges when your AI assistant understands your ML pipeline context. It can suggest optimizations based on current run patterns, help troubleshoot failures using historical data, and even recommend next steps based on your artifact metadata.
Your conversations become more productive because the AI has full visibility into your ML operations context - not just your code, but your actual running systems.
This is particularly valuable for LLMOps workflows where you're frequently iterating on model deployments, monitoring performance, and adjusting based on real-world feedback.
You need a deployed ZenML server (ZenML Pro offers free trials) and either Claude Desktop or Cursor installed. Clone the repository, configure your ZenML credentials in the MCP config, and start chatting with your pipelines.
The server includes automated testing and handles the MCP protocol complexity, so you can focus on your ML work instead of integration details.
Ready to stop tab-switching and keep your ML operations in the same conversation as your code? Your ZenML backend is one config file away from being part of your development flow.