Wanaku MCP Router – a router for AI-enabled applications that speak the open Model Context Protocol (MCP).
https://github.com/wanaku-ai/wanaku

If you're building AI-enabled applications, you know the pain: every service has its own API, authentication scheme, and data format. Your codebase becomes a tangle of SDK imports, error-handling patterns, and connection-management logic.
Wanaku MCP Router solves this by giving you a single HTTP/WebSocket gateway that speaks the standardized Model Context Protocol. Instead of integrating directly with OpenAI, Anthropic, vector databases, and custom tools separately, you route everything through one clean interface.
Unified AI tool execution: Call any registered tool through the same POST /v1/invoke endpoint. Whether it's an OpenAI completion, database query, or custom webhook, the interface stays consistent.
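To make that concrete, here is a minimal client sketch for the POST /v1/invoke endpoint. The payload shape ({"tool": ..., "args": ...}) follows the curl example later in this post; the base URL default and the JSON response handling are assumptions, not documented behavior.

```python
import json
import urllib.request

def build_invoke_request(tool, args, base_url="http://localhost:8080"):
    """Build an urllib Request for POST /v1/invoke with a JSON body."""
    body = json.dumps({"tool": tool, "args": args}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/v1/invoke",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def invoke(tool, args, base_url="http://localhost:8080"):
    """Send the request to a running router and decode the JSON response."""
    req = build_invoke_request(tool, args, base_url)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because every tool goes through the same endpoint, this one helper covers completions, database queries, and webhooks alike.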
Real-time streaming: WebSocket support at /v1/stream for bidirectional communication. Perfect for chat applications or workflows that need immediate responses.
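The post doesn't spell out the wire format on /v1/stream, so the frame below is a hypothetical JSON shape that mirrors the /v1/invoke payload, with an added id for correlating streamed responses; adjust it to the router's actual schema. Python's standard library has no WebSocket client, so a third-party library (e.g. websockets) would carry these frames to ws://localhost:8080/v1/stream.

```python
import json

def build_stream_frame(tool, args, request_id):
    """Serialize one outbound frame. request_id is an assumed field that
    lets the client match streamed chunks to the request that caused them."""
    return json.dumps({"id": request_id, "tool": tool, "args": args})

def parse_stream_frame(raw):
    """Decode an inbound frame back into a dict."""
    return json.loads(raw)
```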
Dynamic tool registration: Drop YAML schemas into the ./tools/ directory and call /v1/tools/reload. No deployment required to add new capabilities.
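As a sketch of what such a schema might look like, here is a hypothetical file for the ./tools/ directory. The post doesn't define the exact format Wanaku expects, so treat every field name below as a placeholder:

```yaml
# tools/weather_lookup.yaml -- hypothetical schema; field names are
# placeholders, not the documented Wanaku format.
name: weather_lookup
description: Fetch current weather for a city
type: webhook
endpoint: https://internal.example.com/weather
args:
  city:
    type: string
    required: true
```

After dropping a file like this in place, a POST to /v1/tools/reload makes it callable through /v1/invoke.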
Built-in LLM connectors: OpenAI and Anthropic integrations ready to go. Just set your API keys and start making calls.
PostgreSQL + vector search: Full pgvector integration for embeddings and semantic search without managing separate vector databases.
Multi-agent orchestration: When you need multiple AI services working together, Wanaku becomes your central nervous system. One agent queries documents, another generates responses, a third handles tool execution—all coordinated through the same protocol.
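The orchestration pattern above can be sketched as a pipeline of calls through the single invoke interface. The tool names here ("doc_search", "openai_completion") are illustrative; only the payload shape comes from this post. Passing the invoke function in as a parameter keeps the pipeline testable without a running router.

```python
def answer_with_docs(question, invoke):
    """Retrieve-then-generate pipeline: one agent queries documents,
    another generates the response. `invoke(tool, args)` is any callable
    that POSTs to /v1/invoke and returns the decoded JSON result."""
    docs = invoke("doc_search", {"query": question})
    prompt = f"Using these documents: {docs}\nAnswer: {question}"
    return invoke("openai_completion", {"prompt": prompt})
```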
Enterprise AI gateways: Perfect for organizations that need to standardize AI access across teams. Developers use one API regardless of which underlying services power their requests.
Rapid prototyping: Testing different AI providers becomes trivial. Swap OpenAI for Anthropic by changing an environment variable, not rewriting integration code.
Local development: Run everything on localhost:8080 during development, then point to your production instance when ready to deploy. No authentication complexity during early development.
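The same swap-by-configuration idea works on the client side, too: parameterize the tool name instead of hard-coding a provider. WANAKU_LLM_TOOL is a hypothetical client-side variable, not something the router defines.

```python
import os

# Pick the LLM tool from the environment; fall back to OpenAI.
# WANAKU_LLM_TOOL is a made-up name for illustration.
tool = os.environ.get("WANAKU_LLM_TOOL", "openai_completion")
payload = {"tool": tool, "args": {"prompt": "Explain MCP protocol"}}
```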
# Start the router
java -jar wanaku-server.jar
# Reload tool definitions (picks up new YAML schemas dropped into ./tools/)
curl -X POST http://localhost:8080/v1/tools/reload
# Execute any tool through the same interface
curl -X POST http://localhost:8080/v1/invoke \
  -H 'Content-Type: application/json' \
  -d '{"tool":"openai_completion","args":{"prompt":"Explain MCP protocol"}}'
Your application code stays clean because you're always talking to the same endpoint, regardless of what's happening behind the scenes.
The router includes Prometheus metrics, request correlation IDs, rate limiting, and proper health checks. It's Spring Boot under the hood, so you get enterprise-grade reliability and observability.
Authentication is optional for development but enforces bearer tokens in production. Configure with environment variables and deploy anywhere Java runs.
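A client that targets a production instance would therefore attach a bearer token. This sketch reads it from the environment; the WANAKU_TOKEN variable name is an assumption, so use whatever your deployment configures.

```python
import json
import os
import urllib.request

def authed_request(url, payload):
    """Build a POST request, adding Authorization only when a token is set
    (optional in development, enforced in production)."""
    headers = {"Content-Type": "application/json"}
    token = os.environ.get("WANAKU_TOKEN")  # assumed variable name
    if token:
        headers["Authorization"] = f"Bearer {token}"
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers=headers,
        method="POST",
    )
```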
If you're tired of maintaining separate integrations for every AI service you use, check out Wanaku. It's Apache 2.0 licensed and designed to simplify exactly the kind of AI integration complexity that's eating up your development time.