Unichat MCP Server (Python). Implements an MCP-compatible "unichat" tool and several pre-defined prompts so that clients (e.g. Claude Desktop) can call OpenAI, MistralAI, Anthropic, xAI, Google AI, DeepSeek, Alibaba and Inception models through a single interface.
https://github.com/amidabuddha/unichat-mcp-server
Stop juggling multiple AI provider APIs. Unichat MCP Server gives you unified access to OpenAI, Anthropic, Mistral, Google AI, xAI, DeepSeek, Alibaba, and Inception models through a single interface.
You're probably familiar with this workflow: OpenAI for general tasks, Claude for reasoning, Mistral for multilingual work, Google AI for specific use cases. Each provider has its own SDK, authentication flow, and request format. Your codebase becomes a mess of different API clients, and switching between providers means rewriting integration code.
Unichat MCP Server eliminates this friction entirely.
Set your model and API key once, then call any supported model through the same interface:
{
  "env": {
    "UNICHAT_MODEL": "gpt-4o-mini",
    "UNICHAT_API_KEY": "your-openai-key"
  }
}
Want to switch to Claude? Change two lines:
{
  "env": {
    "UNICHAT_MODEL": "claude-3-sonnet",
    "UNICHAT_API_KEY": "your-anthropic-key"
  }
}
The interface stays identical. Your prompts work exactly the same way.
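For clients outside Claude Desktop, the same switch works from a short script. The sketch below uses the MCP Python SDK and assumes the tool is named "unichat" (as the repo description states) and that it accepts a chat-style "messages" list; treat that argument shape as a placeholder and confirm it against the server's list_tools() output.

import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch the server the same way Claude Desktop would; swapping the two
# UNICHAT_* values is all it takes to change providers.
params = StdioServerParameters(
    command="uvx",
    args=["unichat-mcp-server"],
    env={**os.environ, "UNICHAT_MODEL": "gpt-4o-mini", "UNICHAT_API_KEY": "your-openai-key"},
)

async def main() -> None:
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Assumed argument shape -- inspect `await session.list_tools()` for the real schema.
            result = await session.call_tool("unichat", arguments={
                "messages": [
                    {"role": "system", "content": "You are a concise code reviewer."},
                    {"role": "user", "content": "Review: def add(a, b): return a + b"},
                ]
            })
            print(result.content)

asyncio.run(main())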
Unichat comes with four pre-built prompts that handle common development tasks.
These aren't generic AI interactions—they're structured specifically for code analysis and modification tasks.
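As a rough sketch of how those prompts surface over MCP, a client can list them and fetch one by name. The prompt name "code_review" and its "code" argument below are hypothetical placeholders; go by what list_prompts() actually returns.

import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

params = StdioServerParameters(
    command="uvx",
    args=["unichat-mcp-server"],
    env={**os.environ, "UNICHAT_MODEL": "gpt-4o-mini", "UNICHAT_API_KEY": "your-api-key"},
)

async def main() -> None:
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover the pre-built prompts the server actually exposes.
            listed = await session.list_prompts()
            print([p.name for p in listed.prompts])
            # Hypothetical name and argument -- substitute real values from the list above.
            prompt = await session.get_prompt(
                "code_review", arguments={"code": "def add(a, b): return a + b"}
            )
            print(prompt.messages)

asyncio.run(main())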
Multi-Model Code Reviews: Run the same code through different models to catch different types of issues. GPT-4 might catch performance problems while Claude spots security vulnerabilities.
Provider Fallbacks: Start with a cost-effective model like GPT-4o-mini for initial analysis, then escalate to more powerful models for complex problems—all without changing your integration code.
Model Comparison: Test how different models handle your specific use cases by simply changing the environment variable, not your entire implementation (a comparison sketch follows below).
Team Standardization: Your team can use different AI providers based on preference or budget, but everyone uses the same interface and prompts.
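For instance, a model-comparison loop only has to vary the environment it launches the server with. Here's a hedged sketch, under the same assumptions about the "unichat" tool and message shape as the earlier example; the candidate model names and keys are placeholders.

import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# (model, API key) pairs to compare -- extend with any provider Unichat supports.
CANDIDATES = [
    ("gpt-4o-mini", "your-openai-key"),
    ("claude-3-sonnet", "your-anthropic-key"),
]

QUESTION = "Review this function for bugs: def div(a, b): return a / b"

async def ask(model: str, api_key: str) -> str:
    # Relaunching with a different environment is the only thing that changes.
    params = StdioServerParameters(
        command="uvx",
        args=["unichat-mcp-server"],
        env={**os.environ, "UNICHAT_MODEL": model, "UNICHAT_API_KEY": api_key},
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool("unichat", arguments={
                "messages": [{"role": "user", "content": QUESTION}]
            })
            return str(result.content)

async def main() -> None:
    for model, key in CANDIDATES:
        print(f"--- {model} ---")
        print(await ask(model, key))

asyncio.run(main())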
Add to your Claude Desktop config:
{
  "mcpServers": {
    "unichat-mcp-server": {
      "command": "uvx",
      "args": ["unichat-mcp-server"],
      "env": {
        "UNICHAT_MODEL": "gpt-4o-mini",
        "UNICHAT_API_KEY": "your-api-key"
      }
    }
  }
}
Alternatively, install it automatically via the Smithery CLI:
npx -y @smithery/cli install unichat-mcp-server --client claude
Or clone and run from source:
git clone https://github.com/amidabuddha/unichat-mcp-server.git
cd unichat-mcp-server
uv sync
uv run unichat-mcp-server
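To have Claude Desktop use that local checkout instead of the published package, a config along these lines should work; "/path/to/unichat-mcp-server" is a placeholder for wherever you cloned the repo:

{
  "mcpServers": {
    "unichat-mcp-server": {
      "command": "uv",
      "args": ["--directory", "/path/to/unichat-mcp-server", "run", "unichat-mcp-server"],
      "env": {
        "UNICHAT_MODEL": "gpt-4o-mini",
        "UNICHAT_API_KEY": "your-api-key"
      }
    }
  }
}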
You're not just getting another AI wrapper. You're getting provider independence. Test different models for different tasks without rewriting integration code. Switch providers based on cost, performance, or availability. Use the best model for each specific task while maintaining a consistent development experience.
The built-in code-focused prompts mean you can start improving your development workflow immediately, without crafting your own prompts or figuring out optimal parameters for each provider.
Available in Python and TypeScript—choose your runtime, keep the same powerful multi-provider access.
MIT Licensed • GitHub: https://github.com/amidabuddha/unichat-mcp-server • Ready for production use