Official Qdrant Model Context Protocol (MCP) server that adds a semantic-memory layer on top of a Qdrant vector database.
https://github.com/qdrant/mcp-server-qdrant

Your AI assistant forgets everything the moment your conversation ends. Every interaction starts from scratch, losing valuable context about your projects, preferences, and accumulated knowledge. The official Qdrant MCP server changes that by adding a semantic memory layer that persists across all your AI conversations.
You've built up context over dozens of conversations - explaining your codebase architecture, documenting decision rationales, sharing implementation patterns. Then you start a new chat and have to explain everything again. Or worse, you're juggling multiple AI tools (Claude Desktop, Cursor, VS Code Copilot) and none of them remember what you told the others.
The Qdrant MCP server creates a shared semantic memory that works across all your MCP-compatible AI tools. Store information once, retrieve it everywhere.
qdrant-store: Save any information with natural language descriptions
qdrant-find: Retrieve relevant information using semantic search
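Conceptually, the two tools are a small embed-and-search loop: store a piece of text as a vector, then rank stored entries against an embedded query. Here is a minimal Python sketch of that idea; the toy hashed bag-of-words embedding and the function names are stand-ins for illustration only, since the real server uses a proper embedding model and a Qdrant collection behind the scenes:

```python
import hashlib
import math

DIM = 64

def embed(text: str) -> list[float]:
    """Toy embedding: hash each word into a fixed-size, normalized vector.
    The actual server uses a real embedding model instead."""
    vec = [0.0] * DIM
    for word in text.lower().split():
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        vec[h % DIM] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

memory: list[tuple[str, list[float]]] = []  # (text, vector) pairs

def qdrant_store(text: str) -> None:
    """What qdrant-store does conceptually: embed the text and persist it."""
    memory.append((text, embed(text)))

def qdrant_find(query: str, top_k: int = 1) -> list[str]:
    """What qdrant-find does conceptually: embed the query and
    rank stored entries by cosine similarity."""
    qv = embed(query)
    scored = sorted(
        memory,
        key=lambda item: sum(a * b for a, b in zip(qv, item[1])),
        reverse=True,
    )
    return [text for text, _ in scored[:top_k]]

qdrant_store("React hook for API pagination with loading states")
qdrant_store("Dockerfile for the Python backend service")
print(qdrant_find("pagination hook that handles loading states"))
```

Because matching happens in vector space rather than on exact keywords, the query doesn't need to repeat the stored wording verbatim, which is what makes the natural-language workflows below possible.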
Semantic Code Search: Transform your development workflow by storing and searching code semantically. Instead of remembering exact function names or file locations, describe what you're looking for:
# Store with description
"Store this React hook for API pagination - handles loading states, error handling, and infinite scroll"
# Find with intent
"Find the pagination hook that handles loading states"
Cross-Project Knowledge Sharing: Build a knowledge base that spans multiple projects and teams.
Persistent AI Conversations: Your AI assistant remembers previous conversations and builds on past context.
Claude Desktop: One-click installation via Smithery
npx @smithery/cli install mcp-server-qdrant --client claude
Cursor/Windsurf: Semantic code search integration
uvx mcp-server-qdrant --transport sse
# Point Cursor to http://localhost:8000/sse
VS Code: Direct integration with one-click install buttons
Claude Code: Enhanced semantic search over your codebase
claude mcp add code-search -e QDRANT_URL="http://localhost:6333" -- uvx mcp-server-qdrant
Local Development: Perfect for personal projects
# In-memory for quick testing
QDRANT_URL=":memory:" uvx mcp-server-qdrant
# Local persistent storage
QDRANT_LOCAL_PATH="/path/to/db" uvx mcp-server-qdrant
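If you prefer to wire up the local setup by hand instead of using Smithery, the same configuration can go into Claude Desktop's `claude_desktop_config.json`. A sketch, assuming `uvx` is on your PATH (the server name and collection name here are illustrative placeholders):

```json
{
  "mcpServers": {
    "qdrant-memory": {
      "command": "uvx",
      "args": ["mcp-server-qdrant"],
      "env": {
        "QDRANT_LOCAL_PATH": "/path/to/db",
        "COLLECTION_NAME": "my-memories"
      }
    }
  }
}
```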
Team Sharing: Centralized knowledge base
# Point everyone to the same Qdrant instance
QDRANT_URL="https://your-team-qdrant.com:6333" uvx mcp-server-qdrant --transport sse
Cloud Native: Scales with your organization
docker run -p 8000:8000 \
-e QDRANT_URL="https://xyz.aws.cloud.qdrant.io:6333" \
-e QDRANT_API_KEY="your-key" \
mcp-server-qdrant
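For self-hosted deployments, the two containers can also be run together. A Docker Compose sketch, assuming the same `mcp-server-qdrant` image as the command above alongside the official `qdrant/qdrant` image (the collection name is a placeholder):

```yaml
services:
  qdrant:
    image: qdrant/qdrant            # official Qdrant vector database image
    ports:
      - "6333:6333"
    volumes:
      - qdrant-data:/qdrant/storage # persist vectors across restarts

  mcp-server:
    image: mcp-server-qdrant        # same image as the docker run example
    ports:
      - "8000:8000"
    environment:
      QDRANT_URL: "http://qdrant:6333"
      COLLECTION_NAME: "team-memories"
    depends_on:
      - qdrant

volumes:
  qdrant-data:
```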
You could build your own vector search integration, but you'd spend weeks handling embedding generation, collection management, and MCP protocol plumbing.
The Qdrant MCP server handles all of this out of the box, with ongoing maintenance and updates from the Qdrant team.
The fastest way to add semantic memory to your AI workflow is literally one command:
# For Claude Desktop
npx @smithery/cli install mcp-server-qdrant --client claude
# For everything else
QDRANT_URL="http://localhost:6333" uvx mcp-server-qdrant
Your AI assistant will finally remember what you've taught it. Your code searches will find what you actually meant, not just what you typed. Your team's knowledge will persist beyond individual conversations.
Stop starting from scratch every time you open a new chat. Give your AI tools the memory they should have had from the beginning.