Model Context Protocol (MCP) server for integrating Yuque API content with MCP-compatible clients.
https://github.com/HenryHaoson/Yuque-MCP-Server

Stop context-switching between your Yuque knowledge base and AI conversations. This MCP server transforms your Yuque documents into structured, AI-accessible content that flows directly into your development workflow.
Your team's best practices, API documentation, and architectural decisions live in Yuque. But when you're coding with AI assistance, that knowledge stays trapped in browser tabs. You end up copy-pasting documentation, explaining context repeatedly, or worse, working with incomplete information.
This MCP server creates a live bridge between your Yuque workspace and MCP-compatible AI clients. Your AI assistant gets instant access to your team's knowledge base without you manually feeding it context every single conversation.
Key Benefits:

- **API Development:** AI assistants can reference your latest API specs stored in Yuque when helping you write integration code, generate tests, or debug endpoint issues.
- **Code Reviews:** Pull your coding standards and architectural guidelines directly into review discussions. No more "check the wiki" comments.
- **Onboarding Automation:** New team members get AI assistance that actually knows your codebase conventions, deployment procedures, and project history.
- **Documentation-Driven Development:** Write specs in Yuque, then have AI generate boilerplate code that actually follows your documented patterns.
The server runs as a standard HTTP service alongside your existing tools:
```sh
# Clone and install
git clone https://github.com/HenryHaoson/Yuque-MCP-Server.git
cd Yuque-MCP-Server && npm install

# Configure with your Yuque token (needs docs:read scope)
echo "YUQUE_TOKEN=your_token_here" > .env

# Build and run
npm run build && npm start
```
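With the server running, point your MCP client at it. The exact configuration shape depends on your client, and the port and endpoint path below are illustrative assumptions rather than the server's documented defaults:

```json
{
  "mcpServers": {
    "yuque": {
      "url": "http://localhost:3000/sse"
    }
  }
}
```

Check your client's MCP documentation for where this configuration file lives and whether it expects a URL-based or command-based server entry.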
The server exposes your Yuque content through standard MCP endpoints that any compatible AI client can consume. Documents flow through as structured JSON with proper metadata, making them much more useful than raw text dumps.
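To make "structured JSON with proper metadata" concrete, here is a hedged sketch of the kind of transformation involved. The `YuqueDoc` fields mirror Yuque's public docs API; the output shape and the `toMcpContent` helper are illustrative, not the server's actual code:

```typescript
// Illustrative shape of a document from Yuque's API.
interface YuqueDoc {
  id: number;
  title: string;
  slug: string;
  body: string;
  updated_at: string;
}

// Hypothetical helper: wrap the raw body in a structured payload so an
// MCP client receives metadata alongside the text instead of a raw dump.
function toMcpContent(doc: YuqueDoc) {
  return {
    type: "text" as const,
    text: doc.body,
    metadata: {
      id: doc.id,
      title: doc.title,
      slug: doc.slug,
      updatedAt: doc.updated_at,
    },
  };
}

const sample: YuqueDoc = {
  id: 42,
  title: "API Guidelines",
  slug: "api-guidelines",
  body: "# API Guidelines\nUse snake_case for query params.",
  updated_at: "2024-01-01T00:00:00Z",
};
console.log(toMcpContent(sample).metadata.title); // prints "API Guidelines"
```

The metadata lets an AI client cite which document (and which revision, via `updatedAt`) its answer came from.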
Built-in memory caching (configurable TTL) means your AI assistant gets fast responses without overwhelming Yuque's API. The webhook endpoint lets Yuque push real-time updates, so your AI always works with current information.
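A TTL cache like the one described can be sketched in a few lines. This is a minimal illustration of the idea, not the server's actual implementation; all names here are hypothetical:

```typescript
// Minimal in-memory cache with a per-entry time-to-live.
type Entry<T> = { value: T; expiresAt: number };

class TtlCache<T> {
  private store = new Map<string, Entry<T>>();

  constructor(private ttlMs: number) {}

  get(key: string): T | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // lazily evict stale entries on read
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}

const cache = new TtlCache<string>(60_000); // 60-second TTL
cache.set("doc:123", "cached document body");
console.log(cache.get("doc:123")); // prints "cached document body"
```

A webhook-driven invalidation (Yuque pushing an update) would simply call `set` with the fresh document, or delete the stale key, so reads never wait out the TTL for changed content.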
No complex authentication flows or permission management are needed: a single Yuque Personal Access Token and you're connected.
Knowledge bases are only valuable when they're accessible at the moment you need them. This server eliminates the friction between where your knowledge lives and where your development work happens. Your AI assistant becomes genuinely helpful because it knows what you know.
Get it running in under 5 minutes and start building with AI that actually understands your project context.