Collection of Model Context Protocol (MCP) servers built with Quarkus/Java and runnable via JBang (e.g. JDBC, JVM Insight, Filesystem, JavaFX Canvas, Kubernetes, Containers, Wolfram).
https://github.com/quarkiverse/quarkus-mcp-servers

Stop building MCP servers from scratch. This collection gives you 7+ battle-tested servers that instantly extend your LLM's capabilities across databases, containers, file systems, and more, all runnable with a single JBang command.
While other developers are writing MCP servers from the ground up, you can tap into production-ready integrations that just work. Each server in this collection solves specific workflow bottlenecks that Java developers face daily.
The problem: Your LLM can write code but can't inspect your running JVM, query your database, or manage your Kubernetes deployments. The solution: Pick the servers you need and run them in seconds with JBang.
jbang jdbc@quarkiverse/quarkus-mcp-servers
Your LLM can now query any JDBC-compatible database directly. PostgreSQL, MySQL, Oracle, SQLite—if it has a JDBC driver, your AI can work with it. No more copying schema definitions or sample data into prompts.
Real workflow impact: Ask your LLM to "analyze sales trends from the last quarter" and watch it generate the SQL, execute it against your actual database, and provide insights based on real data.
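A minimal sketch of pointing the server at a real database, assuming it takes the JDBC connection URL as its first argument (check the repository README for the exact invocation; the connection URL below is a hypothetical placeholder):

jbang jdbc@quarkiverse/quarkus-mcp-servers jdbc:postgresql://localhost:5432/salesdb

Swap in any JDBC URL your driver understands, such as jdbc:sqlite: for a quick local experiment.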
jbang jvminsight@quarkiverse/quarkus-mcp-servers
Give your LLM direct access to inspect running JVM processes. Memory usage, thread dumps, GC metrics, system properties—everything you'd get from JConsole, but available to your AI assistant.
Perfect for: Performance troubleshooting sessions where you can ask "Why is this process using so much memory?" and get both the data and AI-powered analysis.
jbang kubernetes@quarkiverse/quarkus-mcp-servers
Your LLM becomes your kubectl interface. Check pod status, inspect deployments, analyze resource usage—all through natural language queries.
Game changer for: DevOps workflows where you can ask "Which pods are consuming the most CPU in the production namespace?" and get both the data and actionable recommendations.
jbang containers@quarkiverse/quarkus-mcp-servers
Manage containers through your LLM. List running containers, inspect configurations, analyze resource usage—compatible with Docker, Podman, and other OCI runtimes.
jbang filesystem@quarkiverse/quarkus-mcp-servers [path1] [path2]
Let your LLM read, analyze, and work with files from specified directories. Perfect for code reviews, configuration analysis, or batch file processing.
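For example, to expose a project's source and configuration directories (the paths here are hypothetical placeholders):

jbang filesystem@quarkiverse/quarkus-mcp-servers ~/projects/my-app/src ~/projects/my-app/config

Since only the directories you list are exposed, scope the paths as narrowly as your task allows.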
jbang jfx@quarkiverse/quarkus-mcp-servers
Your LLM can create visual diagrams, charts, and drawings using JavaFX. Turn data into visualizations or create UI mockups through conversation.
jbang wolfram@quarkiverse/quarkus-mcp-servers
Access Wolfram Alpha's computational engine optimized for LLM integration. Mathematical calculations, data analysis, scientific computing—all available to your AI.
Built on Quarkus, these servers start in milliseconds and use minimal memory. You're not sacrificing performance for convenience—you're getting both.
Startup time: Under 100ms for most servers
Memory footprint: As low as 20MB
Distribution: Single JBang command—no complex installations
Whether you're using Claude Desktop, Continue, or building your own MCP client, these servers integrate immediately (a sample client configuration follows the examples below):
Database-driven development: "Analyze the performance of our user registration queries over the past month and suggest optimizations"
Production debugging: "Check which microservices in our staging cluster are experiencing high memory pressure and correlate with recent deployments"
Infrastructure analysis: "Show me container resource usage patterns and identify which services need resource limit adjustments"
Code review assistance: "Review the configuration files in the /config directory and flag any security concerns or best practice violations"
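To wire one of these servers into Claude Desktop, an entry along these lines in claude_desktop_config.json should work. This is a minimal sketch following Claude Desktop's documented mcpServers format; the "jdbc" key is just a label you choose:

{
  "mcpServers": {
    "jdbc": {
      "command": "jbang",
      "args": ["jdbc@quarkiverse/quarkus-mcp-servers"]
    }
  }
}

Other clients keep their configuration in different places, but they generally need the same two pieces of information: the command (jbang) and its arguments.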
The difference isn't just convenience—it's about having an AI assistant that understands your actual systems, not just theoretical examples.
This isn't just a collection of tools—it's a foundation. Each server is open source and extensible. Need custom functionality? Fork the repo, modify the server you need, and rebuild with Maven.
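A rough outline of that loop, assuming the repository follows a standard multi-module Maven layout (the module directory name below is illustrative; check the repo for the actual structure):

git clone https://github.com/quarkiverse/quarkus-mcp-servers.git
cd quarkus-mcp-servers/jdbc   # illustrative module directory
mvn clean package             # build your modified server
mvn quarkus:dev               # or iterate with Quarkus live reload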
The Quarkiverse team actively maintains these servers, adding new capabilities and keeping dependencies current. You get the benefits of community development without the maintenance overhead.
Start with the servers that solve your immediate pain points, then expand as your AI-assisted workflows evolve. Your future self will thank you for choosing battle-tested solutions over weekend projects.
Ready to give your LLM real superpowers? Pick a server and run it now.