A Go-based MCP (Model Context Protocol) server that lets LLM/MCP clients perform fine-grained Kubernetes and Helm operations over stdio or SSE transports.
https://github.com/silenceper/mcp-k8s

Managing Kubernetes clusters through kubectl commands and YAML manifests gets tedious fast. You know the drill: remembering resource syntax, debugging failed deployments, and context-switching between documentation and terminal. What if you could just describe what you want in plain English?
mcp-k8s bridges your LLM assistant directly to your Kubernetes cluster, turning natural language into precise cluster operations. No more memorizing kubectl flags or hunting down that specific YAML structure.
You're juggling multiple contexts, namespaces, and resource types daily. Quick tasks become multi-step processes: one kubectl describe after another across different resources.

Meanwhile, your LLM assistant sits idle, capable of understanding complex requirements but unable to act on your cluster.
mcp-k8s connects your LLM directly to your Kubernetes API, enabling conversations like:
You: "Show me all pods in the default namespace that are failing"
LLM: Lists failing pods with status details and suggests troubleshooting steps
You: "Create a deployment for nginx with 3 replicas, expose it on port 80"
LLM: Creates the deployment and service, confirms successful rollout
You: "Scale down all deployments in the staging namespace"
LLM: Identifies deployments, scales them down, reports new replica counts
The server handles the kubectl complexity while you focus on the actual problem you're solving.
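For a sense of what that conversation replaces, here's a rough sketch of the manual kubectl work behind the first request; the field selector and grep patterns are illustrative, not what mcp-k8s runs internally:

# Roughly the manual equivalent of "show me all failing pods in default" (illustrative)
kubectl get pods -n default --field-selector=status.phase=Failed
kubectl get pods -n default | grep -E 'CrashLoopBackOff|Error|ImagePullBackOff'
kubectl describe pod <some-failing-pod> -n default   # then read through Events by hand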
Unlike tools that require broad cluster access, mcp-k8s uses granular permission controls:
# Read-only mode for safe exploration
mcp-k8s -kubeconfig ~/.kube/config
# Selective write operations
mcp-k8s -enable-create -enable-list -kubeconfig ~/.kube/config
# Full operations for trusted environments
mcp-k8s -enable-create -enable-update -enable-delete -enable-helm-install
Each operation type (create, update, delete, Helm installs) can be independently enabled or disabled. Your production clusters stay protected while development environments get full flexibility.
The server exposes your entire cluster through natural language:
Resource Operations: list, create, update, and delete Kubernetes resources across namespaces, each gated by its matching -enable-* flag.
Helm Integration: Helm operations such as chart installs, gated by flags like -enable-helm-install, so releases can be managed from the same conversation.
Real Example: "Install the ingress-nginx chart from the ingress-nginx repo, configure it for an AWS load balancer, and show me the external IP once it's ready."
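For comparison, here's roughly the manual Helm and kubectl work that one sentence stands in for; the release name, namespace, and NLB annotation are illustrative choices, not something mcp-k8s dictates:

# Roughly the manual equivalent of that request (illustrative)
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-type"=nlb
# Watch for the AWS load balancer to get an external address
kubectl get svc -n ingress-nginx ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'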
Stdio mode for development:
{
  "mcpServers": {
    "mcp-k8s": {
      "command": "mcp-k8s",
      "args": ["-kubeconfig", "~/.kube/config", "-enable-create", "-enable-list"]
    }
  }
}
SSE mode for team deployments:
# Run as HTTP service
mcp-k8s -transport=sse -port=8080 -enable-create -enable-list
# Team members connect via URL
"url": "http://k8s-mcp.company.com:8080/sse"
Docker deployment for consistent environments:
docker run -p 8080:8080 -v ~/.kube/config:/root/.kube/config \
ghcr.io/silenceper/mcp-k8s:latest -transport=sse
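The kubeconfig should only need to be read inside the container, so mounting it read-only and spelling out the enabled operations keeps the containerized server as constrained as the CLI examples above; a hedged variant (the flag selection is illustrative):

docker run -p 8080:8080 -v ~/.kube/config:/root/.kube/config:ro \
  ghcr.io/silenceper/mcp-k8s:latest -transport=sse -port=8080 -enable-list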
Download the binary from releases or install via Go:
go install github.com/silenceper/mcp-k8s/cmd/mcp-k8s@latest
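go install drops the binary into your Go bin directory, which may not be on your PATH yet. A quick sanity check, assuming the standard Go toolchain layout and flag parsing:

# Put the Go bin directory on PATH, then confirm the binary and its flags are visible
export PATH="$PATH:$(go env GOPATH)/bin"
mcp-k8s -h   # should list the -kubeconfig and -enable-* flags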
Configure your MCP client to use mcp-k8s:
{
  "mcpServers": {
    "mcp-k8s": {
      "command": "mcp-k8s",
      "args": ["-kubeconfig", "~/.kube/config", "-enable-list"]
    }
  }
}
Start with read-only operations to explore safely, then enable write operations as needed.
The server respects your existing kubeconfig and RBAC permissions, so it works within your current security model. Your LLM assistant becomes a natural language interface to your existing Kubernetes access.
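If you want a guardrail underneath the -enable-* flags, point the server at a kubeconfig tied to a read-only identity. A sketch using the built-in view ClusterRole; the account name and namespace are illustrative:

# Create a read-only service account and bind it to the built-in "view" cluster role
kubectl create serviceaccount mcp-k8s-readonly -n default
kubectl create clusterrolebinding mcp-k8s-readonly-view \
  --clusterrole=view --serviceaccount=default:mcp-k8s-readonly
# Mint a token for it (Kubernetes 1.24+) and reference it from a dedicated kubeconfig
kubectl create token mcp-k8s-readonly -n default

With -kubeconfig pointed at credentials for that account, even a fully-flagged server can only do what "view" allows.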
Your cluster management just got conversational. Instead of translating requirements into kubectl commands, describe what you need and let your LLM handle the implementation details.