Go-based Model Context Protocol (MCP) server that natively talks to Kubernetes / OpenShift, offering CRUD operations for any resource plus Helm, metrics, exec, logs, etc. Distributed as a single native binary and as npm and PyPI packages.
https://github.com/manusa/kubernetes-mcp-server

Stop wrapping kubectl commands in scripts and giving your AI assistant slow, brittle access to your clusters. This MCP server gives Claude, GitHub Copilot, and other AI tools direct, native access to the Kubernetes API - no external dependencies, no kubectl overhead.
Most Kubernetes MCP implementations are just kubectl command wrappers. Every operation means spawning a process, parsing text output, and dealing with inconsistent formatting. This server talks directly to the Kubernetes API using the same client-go library that powers kubectl itself.
Real performance difference: A kubectl wrapper takes 200-500ms per operation. Direct API calls? Sub-50ms. When your AI is debugging a failing deployment by checking pods, services, and events, that's the difference between a snappy conversation and waiting around.
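To make that concrete, here's a minimal client-go sketch of the direct-call pattern - illustrative only, not code from this server; the kubeconfig path and the "default" namespace are assumptions:

package main

import (
    "context"
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Illustrative sketch, not from kubernetes-mcp-server: resolve credentials
    // from the default kubeconfig location (~/.kube/config), as kubectl does.
    config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }

    // One in-process HTTPS request to the API server: no subprocess to spawn,
    // no text output to parse - the result arrives as typed Go structs.
    start := time.Now()
    pods, err := clientset.CoreV1().Pods("default").List(context.Background(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Printf("listed %d pods in %v\n", len(pods.Items), time.Since(start))
}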
Full Resource Management: create, read, update, and delete any Kubernetes or OpenShift resource.
Pod Operations That Work: logs, exec, and live metrics, straight from the API.
Helm Integration: manage Helm releases alongside raw resources.
Cluster Intelligence: events and resource usage for fast diagnosis.
Incident Response: Your AI can instantly pull logs from failing pods, check resource usage, examine recent events, and even exec into containers to run diagnostic commands - all in one conversation, without you switching contexts.
Development Workflow: "Deploy this containerized app to the dev namespace and expose it on port 8080" becomes a single AI interaction. The server handles pod creation and service configuration, then reports back the actual endpoint.
Configuration Debugging: Instead of manually checking if your deployment has the right labels, resource limits, and environment variables, your AI can inspect the actual resource definitions and spot misconfigurations immediately.
Multi-Environment Management: Switch between clusters and namespaces seamlessly. Your AI maintains context about which environment it's working with and can compare configurations across environments.
For Claude Desktop (fastest start):
{
  "mcpServers": {
    "kubernetes": {
      "command": "npx",
      "args": ["-y", "kubernetes-mcp-server@latest"]
    }
  }
}
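Save the file and restart Claude Desktop; the npx invocation downloads the server on first use, so there's no separate install step.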
For VS Code/GitHub Copilot: One-click installation via the extension marketplace or command line.
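If you'd rather wire it up by hand, the sketch below assumes VS Code's workspace-level .vscode/mcp.json format with a top-level "servers" key - check your editor's current MCP documentation if the schema has moved:

{
  "servers": {
    "kubernetes": {
      "command": "npx",
      "args": ["-y", "kubernetes-mcp-server@latest"]
    }
  }
}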
For Production: Native binaries for Linux, macOS, and Windows. No Node.js, Python, or container runtime required.
Unlike hobby projects, this server is built for real clusters. It includes comprehensive safety features: a read-only mode for production clusters, controls that gate destructive operations, and proper error handling that doesn't leave your cluster in an inconsistent state.
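As a sketch of a locked-down setup, the Claude Desktop entry below adds a read-only switch; the exact flag name (--read-only here) is an assumption on my part, so confirm it against the server's --help output before relying on it:

{
  "mcpServers": {
    "kubernetes": {
      "command": "npx",
      "args": ["-y", "kubernetes-mcp-server@latest", "--read-only"]
    }
  }
}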
Claude Desktop debugging session: "Check why my nginx deployment isn't starting" → AI examines the deployment, checks pod status, pulls container logs, identifies the configuration issue, and suggests the fix.
VS Code development flow: "Deploy this FastAPI app to my staging cluster" → AI creates the deployment manifest, applies it, creates a service, and reports back the access URL.
Incident response: "What's wrong with the payment service?" → AI checks all payment-related pods, examines recent events, pulls error logs, and identifies that the database connection is failing due to a ConfigMap change.
The server's native performance means these interactions happen in real-time, not after waiting for multiple kubectl commands to complete.
Ready to give your AI actual Kubernetes superpowers instead of slow CLI access? The setup takes under 2 minutes, and you'll immediately notice the difference in response time and capability.