MCP server that performs vector-based documentation ingestion and semantic search to augment LLM responses (RAG for docs).
https://github.com/hannesrudolph/mcp-ragdocs

Ever found yourself jumping between documentation sites, Stack Overflow, and your IDE while working on a project? Your AI assistant can now do that heavy lifting for you.
MCP-RAGDocs turns any documentation into a searchable knowledge base that your AI assistant can query directly. Instead of asking you to "check the React docs for useEffect examples," it just finds them and gives you the answer with proper context.
You're deep in a coding session when you hit a wall. Maybe it's a specific API parameter, an edge case in a library, or the exact syntax for something you can't quite remember. So you switch tabs, search the docs, skim a few Stack Overflow answers, and eventually paste something back into your editor.
This context switching kills productivity. Your AI assistant knows what you're working on, but it doesn't know what your project's documentation says about the specific libraries and frameworks you're using.
Point this MCP server at any documentation site — React docs, your company's internal wiki, API references, whatever. It crawls, chunks, and indexes everything using vector embeddings. When you ask your AI assistant a question, it automatically searches through all that documentation and includes the most relevant sections in its response.
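Under the hood, that pipeline boils down to three steps: split each page into chunks, embed each chunk as a vector, and rank chunks by similarity to the query. Here's a minimal TypeScript sketch of the idea, with a toy bag-of-words embedding standing in for the real model-generated embeddings and the real vector database:

```typescript
// Toy sketch of the crawl -> chunk -> embed -> search pipeline.
// A real server uses learned embeddings (e.g. OpenAI) and a vector DB
// like Qdrant; a word-count vector stands in here so the example runs
// on its own.

// Split text into fixed-size word chunks.
function chunk(text: string, size = 40): string[] {
  const words = text.split(/\s+/);
  const chunks: string[] = [];
  for (let i = 0; i < words.length; i += size) {
    chunks.push(words.slice(i, i + size).join(" "));
  }
  return chunks;
}

// "Embed" a chunk as a sparse word-frequency vector.
function embed(text: string): Map<string, number> {
  const vec = new Map<string, number>();
  for (const w of text.toLowerCase().match(/[a-z]+/g) ?? []) {
    vec.set(w, (vec.get(w) ?? 0) + 1);
  }
  return vec;
}

// Cosine similarity between two sparse vectors.
function cosine(a: Map<string, number>, b: Map<string, number>): number {
  let dot = 0, na = 0, nb = 0;
  for (const [w, x] of a) { dot += x * (b.get(w) ?? 0); na += x * x; }
  for (const [, y] of b) nb += y * y;
  return na && nb ? dot / Math.sqrt(na * nb) : 0;
}

// Return the chunk most similar to the query.
function search(query: string, docs: string[]): string {
  const q = embed(query);
  return docs
    .map(d => ({ d, score: cosine(q, embed(d)) }))
    .sort((x, y) => y.score - x.score)[0].d;
}
```

Swap the toy `embed` for a real embedding model and store the vectors in Qdrant, and you have the shape of what this server does for you automatically.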
Example conversation:
You: "How do I handle errors in this Next.js API route?"
AI: "Based on your Next.js documentation, here are the recommended error handling patterns for API routes:
[Includes specific examples from Next.js docs about error handling, with exact syntax and best practices]
Here's how to implement it in your current code..."
The AI didn't just give you generic advice — it pulled the exact, up-to-date information from the Next.js documentation you configured.
Internal Documentation: Your company's API docs, deployment guides, and coding standards become instantly accessible to your AI assistant. No more digging through Confluence or internal wikis.
Framework-Specific Help: Working with FastAPI, Django, or some niche library? Feed the docs into the system and get contextual help that's actually relevant to your framework version.
Legacy System Understanding: Got a codebase with custom documentation or README files scattered everywhere? Index them all and let your AI assistant become the expert on your specific system.
Learning New Technologies: Pick up a new framework faster by having an AI assistant that can reference the complete documentation set while answering your questions.
```json
{
  "mcpServers": {
    "rag-docs": {
      "command": "npx",
      "args": ["-y", "@hannesrudolph/mcp-ragdocs"],
      "env": {
        "OPENAI_API_KEY": "your-key-here",
        "QDRANT_URL": "your-qdrant-url",
        "QDRANT_API_KEY": "your-qdrant-key"
      }
    }
  }
}
```
You'll need a Qdrant instance (their free tier works fine for most use cases) and an OpenAI API key for embeddings. Then just tell it what documentation to index:
1. Extract URLs from https://nextjs.org/docs and add them to the processing queue
2. Run the queue to index everything
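Conceptually, those two prompts drive a simple job queue: URLs go in, then the queue is drained and each page gets fetched, chunked, and indexed. A hypothetical sketch of that flow (the class and method names here are illustrative, not the server's actual API):

```typescript
// Illustrative two-step indexing flow: enqueue URLs, then drain the queue.
// In the real server, run() would fetch each page, chunk it, embed the
// chunks, and upsert them into Qdrant.

type Job = { url: string };

class IndexQueue {
  private jobs: Job[] = [];
  indexed: string[] = [];

  // Step 1: add extracted URLs to the processing queue.
  add(urls: string[]): void {
    for (const url of urls) this.jobs.push({ url });
  }

  // Step 2: run the queue; returns how many pages were processed.
  run(): number {
    let count = 0;
    while (this.jobs.length > 0) {
      const job = this.jobs.shift()!;
      this.indexed.push(job.url); // stand-in for fetch/chunk/embed/upsert
      count++;
    }
    return count;
  }
}
```

The queue design matters in practice: a documentation site can have hundreds of pages, so batching them and processing asynchronously keeps the indexing step from blocking your session.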
This isn't just "upload PDFs and search them." The tool understands the structure of web documentation, not just raw text.
The semantic search means you can ask "How do I deploy this to production?" and get relevant results even if the docs use terms like "deployment," "publishing," or "going live."
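To see why that works, picture the embeddings as points in space: phrases with similar meaning land near each other, so cosine similarity ranks them together even when the wording differs. Here's a purely illustrative demo with hand-made three-dimensional vectors (real embeddings come from a model and have hundreds or thousands of dimensions):

```typescript
// Hand-picked "embeddings" chosen so that synonymous phrases sit close
// together in vector space. Illustrative only.
const vecs: Record<string, number[]> = {
  "deployment":  [0.90, 0.10, 0.00],
  "publishing":  [0.80, 0.20, 0.10],
  "going live":  [0.85, 0.15, 0.05],
  "css styling": [0.05, 0.10, 0.90],
};

// Cosine similarity between two dense vectors.
function cos(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v: number[]) => Math.hypot(...v);
  return dot / (norm(a) * norm(b));
}
```

With vectors like these, a query embedded near "deployment" scores high against "publishing" and "going live" but low against "css styling", which is exactly how semantic search beats keyword matching.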
Works with Claude Desktop, custom MCP clients, or any tool that supports the Model Context Protocol. The assistant gets access to seven tools for managing and searching your documentation, but you don't need to think about the complexity — just ask questions and get answers grounded in your actual docs.
Ready to stop tab-switching your way through documentation? Your AI assistant is about to become a lot more useful.