Model Context Protocol (MCP) server that lets LLMs query Nutanix Prism Central resources (VMs, clusters, hosts, etc.) via the Prism Go API.
https://github.com/thunderboltsid/mcp-nutanix

Stop context-switching between your AI assistant and Prism Central dashboards. This MCP server connects any compatible LLM directly to your Nutanix infrastructure, letting you query VMs, clusters, and resources using natural language.
You're troubleshooting a performance issue. Instead of asking your AI assistant for general advice, switching to Prism Central to gather the actual data, and then pasting screenshots back into the chat, just ask: "Show me all VMs on cluster prod-hci-01 with high CPU usage." Your AI gets real infrastructure data instantly.
Direct Infrastructure Queries: Your LLM can list VMs, clusters, hosts, images, and subnets without you touching Prism Central. Ask "Which VMs are running on the finance cluster?" and get structured data back immediately.
Resource Deep-Dives: Beyond simple lists, drill into specific resources with URI-based access. Your AI can examine individual VM configurations, cluster health metrics, or network topology details.
Natural Language Infrastructure: Transform infrastructure queries from point-and-click navigation into conversational commands. "What's the storage utilization across my development clusters?" becomes a simple prompt.
Incident Response: During outages, ask your AI to quickly inventory affected resources, check cluster health, and identify related infrastructure without manual dashboard navigation.
Capacity Planning: "Show me VMs with less than 20% CPU utilization in the last week" - get immediate data for rightsizing decisions.
Compliance Auditing: Generate infrastructure reports by asking your AI to correlate VM configurations, network policies, and resource allocations across multiple clusters.
Change Management: Before maintenance windows, query current infrastructure state to understand dependencies and plan rollbacks.
The server runs as a standard Go binary that connects to your Prism Central instance. Set your credentials via environment variables, start the server, and configure your MCP-compatible AI client to use it.
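For example, credentials can be exported before starting the binary. The variable names below are illustrative placeholders, not necessarily the exact names this server reads; check the repository README for the documented ones:

```bash
# Point the server at Prism Central (variable names are assumptions for illustration)
export NUTANIX_ENDPOINT="prism-central.example.com:9440"
export NUTANIX_USERNAME="admin"
export NUTANIX_PASSWORD="********"
```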
```bash
# Build and run
make build
./bin/mcp-nutanix

# Your AI can now run queries like:
# - List all VMs: "vms"
# - Get cluster details: "cluster://{uuid}"
# - Check host status: "hosts"
```
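Wiring the binary into a client is a one-time configuration change. The sketch below targets Claude Desktop's claude_desktop_config.json as one example of an MCP-compatible client; the "nutanix" entry name, binary path, and environment variable names are assumptions to adapt to your setup:

```bash
# Print an example MCP client entry; merge it into your client's config file
# (for Claude Desktop, that file is claude_desktop_config.json).
cat <<'EOF'
{
  "mcpServers": {
    "nutanix": {
      "command": "/absolute/path/to/bin/mcp-nutanix",
      "env": {
        "NUTANIX_ENDPOINT": "prism-central.example.com:9440",
        "NUTANIX_USERNAME": "admin",
        "NUTANIX_PASSWORD": "********"
      }
    }
  }
}
EOF
```

Once the client picks up the new entry, the listing queries and resource URIs shown above become available directly in conversation.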
Built on Nutanix's official Prism Go client and the MCP Go library, so you're working with supported, maintained code rather than scraping APIs.
This is read-only by design—no create, update, or delete operations. Perfect for giving your AI infrastructure visibility without security concerns about accidental changes.
The server handles authentication to Prism Central, manages API rate limits, and formats responses for LLM consumption. Your AI gets clean, structured data; you get infrastructure insights without manual lookups.
Ready to stop explaining your infrastructure to your AI and start having it discover the details itself? Clone the repo and connect your first cluster.