A Golang-based Model Context Protocol (MCP) server that connects to a Kubernetes cluster and exposes cluster operations (list/get resources, logs, exec, etc.) as MCP tools.

https://github.com/strowk/mcp-k8s-go

Stop copying and pasting kubectl outputs into Claude. This MCP server connects your AI directly to your Kubernetes clusters, turning every conversation into a live debugging session.
You know the drill: a production pod is failing, you run `kubectl get pods`, copy the output, paste it into Claude, explain the context, wait for the analysis, then run more commands based on the suggestions. Rinse and repeat.
mcp-k8s-go eliminates that entire workflow. Your AI can now directly query your clusters, analyze logs, inspect resources, and even execute commands inside pods—all within a single conversation context.
Once connected, your AI becomes a Kubernetes-native assistant that can:

- list and inspect resources across contexts and namespaces
- pull pod logs and cluster events
- exec commands inside running pods

Think of it as having a senior SRE who never needs to ask "can you show me the pod logs?" because they can just look.
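Under the hood, each of those capabilities is an MCP tool that the client invokes over JSON-RPC. Here is a rough sketch of what such a call looks like on the wire; the tool name and argument fields are illustrative assumptions, the exact schema comes from the server itself:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "list-k8s-pods",
    "arguments": { "context": "staging", "namespace": "default" }
  }
}
```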
Incident Response: Instead of frantically switching between terminal windows and AI chat, you can have a conversation like: "The checkout service is down, what's wrong?" Your AI examines the pods, checks events, pulls logs, identifies the failing database connection, and suggests fixes—all automatically.
Capacity Planning: "Which namespaces are consuming the most memory?" gets you actual data analysis, not just raw kubectl output to interpret yourself.
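For comparison, here is roughly the manual legwork that question replaces, assuming you have the metrics server installed (this is plain kubectl, not something mcp-k8s-go provides):

```bash
# Per-pod memory usage across the cluster; aggregating it per namespace is still on you
kubectl top pods --all-namespaces --sort-by=memory
```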
Security Auditing: "Show me all pods running as root across the cluster" becomes a comprehensive security review with context and recommendations.
Deployment Validation: After a deployment, your AI can verify everything is healthy by checking pod status, resource usage, and service connectivity without you writing monitoring scripts.
The fastest path is npm, if you already have Node.js. You can take the server for a spin with the MCP Inspector:

```bash
npx @modelcontextprotocol/inspector npx @strowk/mcp-k8s
```
For Claude Desktop, add this to your `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "kubernetes": {
      "command": "npx",
      "args": ["@strowk/mcp-k8s"]
    }
  }
}
```
The server automatically uses your existing kubeconfig, so if kubectl works on your machine, this will too.
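If your kubeconfig lives somewhere non-standard, you can point the server at it with the standard `KUBECONFIG` environment variable in the same entry. This is a sketch, assuming the server follows client-go's default kubeconfig loading rules; the path is a placeholder:

```json
{
  "mcpServers": {
    "kubernetes": {
      "command": "npx",
      "args": ["@strowk/mcp-k8s"],
      "env": { "KUBECONFIG": "/path/to/your/kubeconfig" }
    }
  }
}
```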
For production use, restrict cluster access with the `--allowed-contexts` flag:

```bash
mcp-k8s --allowed-contexts=staging,development
```
This prevents your AI from accidentally touching production clusters when you meant to debug staging.
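In a Claude Desktop setup, the same flag goes into the args array of the config shown earlier (a sketch; the context names are placeholders for your own):

```json
{
  "mcpServers": {
    "kubernetes": {
      "command": "npx",
      "args": ["@strowk/mcp-k8s", "--allowed-contexts=staging,development"]
    }
  }
}
```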
The server respects your existing RBAC permissions—if your kubeconfig can't delete deployments, neither can your AI. No additional security model to learn.
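A quick way to preview what the AI will and won't be allowed to do is to ask kubectl about your own permissions, since those are exactly what the server inherits (the namespace here is just an example):

```bash
# Can the current context delete deployments in this namespace?
kubectl auth can-i delete deployments --namespace production

# Full list of what the current context is allowed to do there
kubectl auth can-i --list --namespace production
```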
This isn't just another Node.js wrapper around kubectl. Built with the official Kubernetes Go client libraries, it speaks directly to the K8s API with minimal overhead. The result: faster responses when you're troubleshooting critical issues, and reliable performance under load.
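To give a sense of what "speaking directly to the K8s API" looks like, here is a minimal client-go sketch of the kind of call involved in listing pods. It is a simplified illustration, not the project's actual code:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig kubectl uses ($KUBECONFIG, falling back to ~/.kube/config).
	config, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{},
	).ClientConfig()
	if err != nil {
		panic(err)
	}

	// A typed clientset talks to the API server directly; no kubectl subprocess involved.
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// List pods in the "default" namespace, the same data `kubectl get pods` would show.
	pods, err := clientset.CoreV1().Pods("default").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		fmt.Println(pod.Name, pod.Status.Phase)
	}
}
```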
Multiple deployment options mean you can run it locally during development, in CI/CD pipelines, or as a cluster service for team-wide access.
| Method | Best For | Setup |
|--------|----------|-------|
| NPM | Individual developers | `npm install -g @strowk/mcp-k8s` |
| Docker | Team deployments | `docker run -v ~/.kube:/home/nonroot/.kube mcpk8s/server` |
| Go install | Go developers | `go install github.com/strowk/mcp-k8s-go@latest` |
| Smithery | Auto-configuration | `npx @smithery/cli install @strowk/mcp-k8s --client claude` |
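If you go the Docker route and wire it into an MCP client, remember that the server speaks MCP over stdio, so the container needs an interactive stdin. Here is a sketch based on the image and mount path from the table above; adjust the kubeconfig path for your machine:

```bash
# -i keeps stdin open for the MCP stdio transport; --rm cleans up the container afterwards
docker run -i --rm -v "$HOME/.kube:/home/nonroot/.kube" mcpk8s/server
```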
Ready to stop being a human kubectl proxy for your AI? Pick your installation method and start having actual technical conversations about your Kubernetes infrastructure.