MCP (Model Context Protocol) server for Weaviate
https://github.com/weaviate/mcp-server-weaviate

Stop rebuilding context from scratch in every conversation. This official MCP server from Weaviate gives your language models the persistent memory they need to maintain context across sessions, conversations, and deployments.
You've built an AI application, but every conversation starts from zero. Your LLM can't remember previous interactions, learned preferences, or accumulated knowledge. You're either re-stuffing context into every prompt by hand or building a custom retrieval pipeline from scratch.
The Weaviate MCP server solves this by giving your LLMs direct access to hybrid search over persistent vector storage.
- **Persistent Memory**: Your LLM can store and retrieve context across conversations, maintaining continuity that feels natural to users.
- **Hybrid Search**: Combine semantic similarity with keyword matching, so relevant context is found whether users reference exact terms or conceptually related ideas (see the sketch after this list).
- **Zero RAG Complexity**: Skip building custom vector pipelines. The MCP protocol handles the bridge between your LLM runtime and Weaviate automatically.
- **Production Scale**: Built on Weaviate's proven vector database architecture, which scales from prototypes to production workloads.
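To make the hybrid point concrete, here is a minimal sketch using the v4 Weaviate Python client (`pip install weaviate-client`). The `Memory` collection name is a stand-in, and the collection is assumed to have a vectorizer configured:

```python
import weaviate

# Assumes a Weaviate instance is running on localhost:8080.
client = weaviate.connect_to_local()

# "Memory" is a hypothetical collection name used for illustration.
memories = client.collections.get("Memory")

# Hybrid search blends BM25 keyword matching with vector similarity.
# alpha=0.0 is pure keyword search, alpha=1.0 is pure vector search;
# 0.5 weights both, so exact terms and related concepts both match.
response = memories.query.hybrid(
    query="morning meetings",
    alpha=0.5,
    limit=3,
)

for obj in response.objects:
    print(obj.properties)

client.close()
```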
Here's how it works in practice:
**Context Storage**: As users interact with your AI, relevant information gets stored in Weaviate as vectors with metadata.
```
# LLM interaction automatically triggers storage
User: "I prefer morning meetings and work in PST"
# Server stores preference vector with metadata
```
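Under the hood, that storage step could look like the following sketch with the Weaviate Python client; the collection and property names are illustrative, and in the real flow the MCP server performs the insert on the model's behalf:

```python
from datetime import datetime, timezone

import weaviate

client = weaviate.connect_to_local()
memories = client.collections.get("Memory")  # hypothetical collection

# Store the stated preference as an object; Weaviate vectorizes the text
# (assuming the collection has a vectorizer module configured).
memories.data.insert({
    "content": "Prefers morning meetings and works in PST",
    "kind": "user_preference",
    "stored_at": datetime.now(timezone.utc).isoformat(),
})

client.close()
```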
**Intelligent Retrieval**: When context is needed, hybrid search finds the most relevant information.
```
# Later conversation
User: "Schedule our next check-in"
# Server retrieves: preference for morning meetings, PST timezone
# LLM gets relevant context automatically
```
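The retrieval side, continuing the same illustrative collection: the hybrid query matches "schedule" by keyword and the check-in phrasing by vector similarity, and whatever comes back becomes context for the LLM:

```python
import weaviate

client = weaviate.connect_to_local()
memories = client.collections.get("Memory")  # same hypothetical collection

# Hybrid search surfaces the stored preference even though the request
# never mentions "morning" or "PST" explicitly.
response = memories.query.hybrid(query="schedule our next check-in", limit=3)

# The retrieved objects are handed to the LLM as context before it answers.
context = "\n".join(str(obj.properties["content"]) for obj in response.objects)
print(context)  # e.g. "Prefers morning meetings and works in PST"

client.close()
```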
**Seamless Integration**: Your LLM runtime (Claude, ChatGPT, etc.) communicates with Weaviate through standard MCP calls.
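A runtime such as Claude Desktop performs this handshake for you, but here is roughly what it looks like with the official MCP Python SDK (`pip install mcp`); the binary path and environment variable anticipate the setup section below:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch the server binary as a subprocess speaking MCP over stdio.
    params = StdioServerParameters(
        command="./mcp-server",
        env={"WEAVIATE_HOST": "http://localhost:8080"},
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()          # standard MCP handshake
            tools = await session.list_tools()  # discover exposed tools
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```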
Setup takes minutes, not days:
```sh
git clone https://github.com/weaviate/mcp-server-weaviate.git
cd mcp-server-weaviate
make build
./mcp-server
```
Configure with environment variables:
```sh
WEAVIATE_HOST=http://localhost:8080
WEAVIATE_API_KEY=your-api-key
```
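As a quick sanity check before wiring the server into a runtime, you can verify that those settings reach a live Weaviate instance; a small sketch, assuming the same environment variables:

```python
import os

import weaviate
from weaviate.auth import AuthApiKey

api_key = os.environ.get("WEAVIATE_API_KEY")

# connect_to_local targets http://localhost:8080 by default, matching the
# WEAVIATE_HOST value above; credentials are passed only if one is set.
client = weaviate.connect_to_local(
    auth_credentials=AuthApiKey(api_key) if api_key else None,
)
print("Weaviate ready:", client.is_ready())
client.close()
```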
The server exposes two core tools: one that inserts a memory object into Weaviate and one that queries stored objects with hybrid search.
Your MCP-compatible LLM runtime handles the rest automatically.
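Exercising the pair yourself from an MCP session could look like the sketch below. The tool names and argument shapes are placeholders; check `session.list_tools()` or the repository README for the real ones:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    params = StdioServerParameters(command="./mcp-server")
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Placeholder tool names: substitute the ones the server reports.
            await session.call_tool(
                "weaviate-insert-one",
                arguments={"content": "Prefers morning meetings, works in PST"},
            )
            result = await session.call_tool(
                "weaviate-query",
                arguments={"query": "schedule our next check-in"},
            )
            print(result.content)  # retrieved context for the LLM runtime

asyncio.run(main())
```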
- **Customer Support AI**: Remember customer preferences, previous issues, and resolution history across all interactions.
- **Personal AI Assistants**: Maintain user preferences, learned behaviors, and conversation context across sessions.
- **Knowledge Management**: Store and retrieve company documentation, meeting notes, and institutional knowledge.
- **Multi-Session Applications**: Any AI app where context matters beyond a single conversation thread.
This is the official server from the Weaviate team – the people who built the vector database. That means the integration is maintained by the same developers who build the database itself.
The server fits into your existing MCP workflow without requiring architecture changes. If you're already using MCP for other integrations, adding persistent memory is just another server configuration.
For new MCP implementations, this gives you a compelling reason to adopt the protocol – your LLMs immediately gain sophisticated memory capabilities that would take weeks to build from scratch.
Ready to give your LLMs the memory they deserve? The official Weaviate MCP server makes persistent context as simple as any other MCP integration.