kom – SDK-level wrapper around kubectl / client-go that can also run as a multi-cluster MCP (Model Context Protocol) server. Provides high-level Go APIs and 48+ built-in tools for creating, updating, deleting, describing and querying (even via SQL) any Kubernetes or CRD resource, plus pod file ops, logs, exec, HPA/rollout helpers and more.
https://github.com/weibaohui/kom

Stop switching between your AI chat and kubectl. With kom's MCP server, your AI assistant becomes a native Kubernetes operator that can query, manage, and troubleshoot your clusters directly from the conversation.
You're debugging a production issue. You ask Claude about pod restart patterns, then tab over to kubectl to check actual pod status, then back to Claude to interpret the results, then back to kubectl to grab logs. This context switching kills your flow and slows down incident response.
Most AI tools can't see your actual cluster state. They give generic advice when you need specific insights about your deployments, your resource consumption, your failing pods.
kom is a dual-purpose tool: a Go SDK that wraps kubectl/client-go with a fluent API, and an MCP server that exposes 48+ Kubernetes operations directly to AI tools.
For AI Integration: your assistant runs the equivalent of kubectl get pods for you.

For Go Development: query clusters like a database, e.g. SELECT * FROM pods WHERE namespace='prod' ORDER BY creationTimestamp DESC.

Incident Response: Ask your AI "What pods are failing in production?" and get real-time data from your cluster. Follow up with "Show me the logs from the failing pod" and get actual log output, not generic troubleshooting steps.
Resource Planning: "Which nodes are running hot?" pulls real CPU/memory usage. "What's using the most storage?" queries your PVCs and shows actual consumption.
Deployment Management: "Scale the frontend deployment to 5 replicas" executes the change. "Rollback the API deployment" handles the rollout for you.
Cross-Resource Analysis: "Show me all services without corresponding pods" runs complex queries across multiple resource types that would take several kubectl commands to piece together.
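Under the hood that last question boils down to a couple of list calls plus a join in Go. A rough sketch of the services-without-pods check, using only the List API shown later in this post (the namespace and names are illustrative):

// Find services in "prod" whose selector matches no pod.
import (
	"fmt"

	corev1 "k8s.io/api/core/v1"

	"github.com/weibaohui/kom"
)

var svcs []corev1.Service
var pods []corev1.Pod
_ = kom.DefaultCluster().Resource(&corev1.Service{}).Namespace("prod").List(&svcs).Error
_ = kom.DefaultCluster().Resource(&corev1.Pod{}).Namespace("prod").List(&pods).Error

for _, svc := range svcs {
	if len(svc.Spec.Selector) == 0 {
		continue // skip services without selectors (e.g. ExternalName)
	}
	backed := false
	for _, pod := range pods {
		match := true
		for k, v := range svc.Spec.Selector {
			if pod.Labels[k] != v {
				match = false
				break
			}
		}
		if match {
			backed = true
			break
		}
	}
	if !backed {
		fmt.Printf("service %s has no backing pods\n", svc.Name)
	}
}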
kom provides comprehensive cluster management through its MCP interface. Setup takes just a few steps.
For AI Integration:
# Build the binary
go build -o kom cmd/kom/main.go
# Set your kubeconfig
export KUBECONFIG=/path/to/your/kubeconfig
# Run the MCP server
./kom
Add to your Claude Desktop config:
{
  "mcpServers": {
    "kubernetes": {
      "command": "/path/to/kom",
      "args": []
    }
  }
}
For Go Development:
import "github.com/weibaohui/kom"
// Register clusters
callbacks.RegisterInit()
kom.Clusters().RegisterByPathWithID(kubeconfig, "production")
// Query with SQL
var pods []v1.Pod
err := kom.DefaultCluster().
Sql("SELECT * FROM pod WHERE namespace='kube-system' ORDER BY creationTimestamp DESC").
List(&pods).Error
// Chain operations
err := kom.DefaultCluster().
Resource(&pod).
Namespace("production").
Name("api-server").
Ctl().Pod().
GetLogs(&stream, &v1.PodLogOptions{})
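The same chain style extends to the rest of the CRUD surface. A minimal sketch, assuming kom exposes GORM-style Get and Delete verbs alongside List (the names follow the pattern above; check the project README for exact signatures):

// Fetch a single typed resource, then delete it.
// Assumes Get/Delete sit on the same fluent chain as List above.
import (
	appsv1 "k8s.io/api/apps/v1"

	"github.com/weibaohui/kom"
)

var deploy appsv1.Deployment
err := kom.DefaultCluster().
	Resource(&deploy).
	Namespace("production").
	Name("frontend").
	Get(&deploy).Error

err = kom.DefaultCluster().
	Resource(&deploy).
	Namespace("production").
	Name("frontend").
	Delete().Error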
Manage multiple clusters from a single interface:
// Register multiple clusters
kom.Clusters().RegisterByPathWithID("/Users/me/.kube/prod", "production")
kom.Clusters().RegisterByPathWithID("/Users/me/.kube/staging", "staging")
// Query across clusters
var prodPods, stagingPods []v1.Pod
err := kom.Cluster("production").Resource(&v1.Pod{}).List(&prodPods).Error
err = kom.Cluster("staging").Resource(&v1.Pod{}).List(&stagingPods).Error
Your AI assistant can now switch between clusters contextually: "Show me production pods" vs "Check staging deployment status."
Stop writing complex kubectl commands. Query your cluster like a database:
-- Find pods requesting the most memory
SELECT * FROM pod WHERE status.phase='Running'
ORDER BY spec.containers.resources.requests.memory DESC LIMIT 10
-- Cross-namespace service discovery
SELECT * FROM service WHERE metadata.namespace IN ('prod', 'staging')
AND spec.selector.app = 'api'
-- Node capacity analysis
SELECT * FROM node WHERE
status.allocatable.cpu > '4' AND
metadata.labels.zone = 'us-west-2a'
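These statements drop straight into the Sql() chain from the Go Development section; for example, the node query can be executed from Go and scanned into typed objects (a minimal sketch reusing the API shown above):

// Run one of the SQL queries above through the SDK and scan the
// results into typed Node objects.
import (
	corev1 "k8s.io/api/core/v1"

	"github.com/weibaohui/kom"
)

var nodes []corev1.Node
err := kom.DefaultCluster().
	Sql("SELECT * FROM node WHERE status.allocatable.cpu > '4' AND metadata.labels.zone = 'us-west-2a'").
	List(&nodes).Error
if err != nil {
	// bad SQL, unreachable cluster, etc.
}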
kom handles the tedious file operations that kubectl makes awkward:
// Upload configuration
kom.DefaultCluster().
	Namespace("prod").Name("api-pod").
	Ctl().Pod().ContainerName("app").
	SaveFile("/app/config.json", configData)

// Download logs for analysis
kom.DefaultCluster().
	Namespace("prod").Name("api-pod").
	Ctl().Pod().ContainerName("app").
	DownloadFile("/var/log/app.log")

// Execute maintenance commands
var result string // receives the command output
kom.DefaultCluster().
	Namespace("prod").Name("api-pod").
	Ctl().Pod().ContainerName("app").
	Command("systemctl", "restart", "nginx").
	ExecuteCommand(&result)
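As with the query chains, the exec chain can report failures through the same .Error field used elsewhere in this post. A hedged sketch, assuming ExecuteCommand joins the chain the way List and GetLogs do:

// Run a read-only command and check both the error and the output.
import (
	"fmt"

	"github.com/weibaohui/kom"
)

var out string // receives the command's stdout
err := kom.DefaultCluster().
	Namespace("prod").Name("api-pod").
	Ctl().Pod().ContainerName("app").
	Command("df", "-h").
	ExecuteCommand(&out).Error
if err != nil {
	// exec failed: container missing, command not found, RBAC, ...
}
fmt.Println(out)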
Built-in caching prevents API server overload during batch operations:
// Cache results for 5 seconds during bulk operations
kom.DefaultCluster().
	Resource(&v1.Pod{}).
	WithCache(5 * time.Second).
	List(&pods)
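The cache matters most when the same query is issued repeatedly, for example while reconciling a batch of work items. A minimal sketch (same API as above; the loop bound and TTL are illustrative):

// With a 5-second TTL, only the first List in each window hits the
// API server; the rest of the loop is served from kom's cache.
import (
	"time"

	corev1 "k8s.io/api/core/v1"

	"github.com/weibaohui/kom"
)

for i := 0; i < 100; i++ {
	var pods []corev1.Pod
	err := kom.DefaultCluster().
		Resource(&corev1.Pod{}).
		Namespace("prod").
		WithCache(5 * time.Second).
		List(&pods).Error
	if err != nil {
		break
	}
	// ... match each work item against the current pod list ...
}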
Clone and build:
git clone https://github.com/weibaohui/kom.git
cd kom && go build -o kom cmd/kom/main.go
Configure your cluster:
export KUBECONFIG=/path/to/your/kubeconfig
Start the MCP server:
./kom # Runs on http://localhost:9096/sse
Integrate with your AI tool using the configuration examples above
Now ask your AI assistant: "What pods are running in the kube-system namespace?" and watch it query your actual cluster.
kom bridges the gap between AI assistance and real infrastructure management. Your AI becomes a true operational partner, not just a documentation lookup tool.