MCP server that queries several local Ollama models and merges their answers to give Claude a multi-perspective "council of advisors".
https://github.com/YuChenSSR/multi-ai-advisor-mcp

Stop asking the same question to multiple AI models manually. This MCP server automatically queries your local Ollama models and feeds their diverse perspectives to Claude, who synthesizes them into comprehensive answers.
When you're making complex decisions or exploring nuanced topics, a single AI perspective—even Claude's—has inherent blind spots. Different models excel at different types of reasoning: one might be more creative, another more analytical, a third more empathetic.
The traditional workflow is tedious: ask Model A, copy the response, ask Model B, compare answers, try to synthesize insights yourself. This MCP server automates that entire process while keeping everything local and private.
Instead of working alone, Claude becomes the moderator of an AI council:
Example query: "What are the most important skills for success in today's job market?"
Saves significant time: No more manual copy-pasting between different AI interfaces or trying to remember what each model said.
Better decision quality: Multiple reasoning approaches reduce blind spots and surface considerations you might miss with a single perspective.
Privacy-first: Uses your local Ollama models instead of multiple cloud API calls—your sensitive questions stay on your machine.
Customizable expertise: Configure each model with specific personas (creative director, data analyst, project manager) to match your decision-making needs.
Seamless integration: Works directly in Claude Desktop with no context switching.
Technical Architecture Decisions: Get perspectives on scalability (analytical model), user experience (empathetic model), and innovative approaches (creative model) before Claude synthesizes the trade-offs.
Content Strategy: Creative model suggests engaging angles, analytical model provides data-driven insights, empathetic model considers audience needs.
Career Planning: Different models weigh market trends, personal fulfillment, financial considerations, and growth opportunities before Claude creates a comprehensive plan.
Product Development: Multiple perspectives on user needs, technical feasibility, market positioning, and resource allocation.
The server works with your existing Ollama installation and integrates directly with Claude Desktop:
# Install via Smithery (recommended)
npx -y @smithery/cli install @YuChenSSR/multi-ai-advisor-mcp --client claude
# Or clone and configure manually
git clone https://github.com/YuChenSSR/multi-ai-advisor-mcp.git
cd multi-ai-advisor-mcp
npm install && npm run build
Configure your model personas in .env:
DEFAULT_MODELS=gemma3:1b,llama3.2:1b,deepseek-r1:1.5b
GEMMA_SYSTEM_PROMPT=You are a creative and innovative AI assistant...
LLAMA_SYSTEM_PROMPT=You are empathetic and focused on human well-being...
DEEPSEEK_SYSTEM_PROMPT=You are logical and analytical...
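Under the hood, the flow is simple: fan the question out to each configured model with its persona as the system prompt, then hand the labeled answers back to Claude to synthesize. Here is a minimal sketch against Ollama's local HTTP API; the function names and prompt layout are illustrative, not the server's actual code:

```typescript
type Advisor = { model: string; persona: string };

// Combine each advisor's labeled answer into one synthesis prompt for Claude.
function buildCouncilPrompt(question: string, answers: Map<string, string>): string {
  const sections = [...answers.entries()]
    .map(([model, answer]) => `### ${model}\n${answer}`)
    .join("\n\n");
  return `Question: ${question}\n\nAdvisor responses:\n\n${sections}\n\nSynthesize these perspectives into one comprehensive answer.`;
}

// Query every advisor in parallel; each persona rides along as the system prompt.
async function askCouncil(question: string, advisors: Advisor[]): Promise<string> {
  const answers = new Map<string, string>();
  await Promise.all(
    advisors.map(async ({ model, persona }) => {
      const res = await fetch("http://localhost:11434/api/generate", {
        method: "POST",
        body: JSON.stringify({ model, system: persona, prompt: question, stream: false }),
      });
      const { response } = await res.json();
      answers.set(model, response);
    })
  );
  return buildCouncilPrompt(question, answers);
}
```

Because the models run in parallel, the council takes roughly as long as its slowest member, not the sum of all three.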
Add to your Claude Desktop config and restart. That's it.
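A typical entry in claude_desktop_config.json looks roughly like this; the server name and the path to the built entry point are assumptions, so adjust them to wherever you cloned the repo:

```json
{
  "mcpServers": {
    "multi-model-advisor": {
      "command": "node",
      "args": ["/path/to/multi-ai-advisor-mcp/build/index.js"]
    }
  }
}
```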
Basic council query: "What should I consider when choosing between job offers? Use the multi-model advisor."
Specific model selection: "Help me design this API architecture using gemma3:1b, llama3.2:1b, and deepseek-r1:1.5b for different perspectives."
Model discovery: "Show me which Ollama models are available" to see all your local options.
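Model discovery maps onto Ollama's local /api/tags endpoint, which lists every model you have pulled. A small sketch of how a server might read it (the helper names are illustrative):

```typescript
// /api/tags responds with { models: [{ name, ... }] } for all locally
// pulled models; extract just the names for display.
type TagsResponse = { models: { name: string }[] };

function modelNames(tags: TagsResponse): string[] {
  return tags.models.map((m) => m.name);
}

async function listLocalModels(): Promise<string[]> {
  const res = await fetch("http://localhost:11434/api/tags");
  return modelNames((await res.json()) as TagsResponse);
}
```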
The real power emerges when you start configuring model personas for your specific domain—whether you're doing technical consulting, creative work, or strategic planning.
Your local AI council is ready when you are. No API costs, no data leaving your machine, just better decisions through diverse AI perspectives.