Model Context Protocol (MCP) Server for Langfuse Prompt Management. Allows discovery and retrieval of Langfuse prompts via MCP endpoints and exported tools.
https://github.com/langfuse/mcp-server-langfuse

If you're managing prompts in Langfuse but still manually copying them into Claude Desktop or Cursor, you're doing extra work for no reason. This MCP server creates a direct bridge between your Langfuse prompt library and any MCP-compatible AI assistant.
You craft and version prompts in Langfuse's interface, test different variations, get them production-ready—then when you want to use one in Claude Desktop, you're back to copy-paste hell. Switch tabs, find the right prompt version, copy the text, paste it over, fill in variables manually. Every. Single. Time.
That's solved now.
With this MCP server running, your entire Langfuse prompt library becomes instantly accessible inside Claude Desktop, Cursor, or any MCP client. No context switching, no manual copying—just type the prompt name and your variables get compiled automatically.
Before: "Let me go grab that code review prompt from Langfuse..."
After: Your prompts appear in Claude's prompt library, ready to use with proper variable substitution.
- **Instant Prompt Discovery:** `prompts/list` surfaces all your production Langfuse prompts directly in your AI assistant's interface, with pagination included if you've got a serious prompt library.
- **Smart Variable Handling:** Pass variables as JSON and get back fully compiled prompts. No more manual find-and-replace on template strings.
- **Universal Compatibility:** Implements the MCP prompts specification and also exports the same capabilities as tools, so it works whether or not your client fully supports prompts.
- **Production-Ready Filter:** Only surfaces prompts labeled "production" in Langfuse, so you're not accidentally using draft prompts in important work.
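Langfuse text prompts use double-curly-brace placeholders like `{{language}}`. Conceptually, the variable compilation step works like the sketch below; the function name and regex are illustrative, not the server's actual code:

```typescript
// Illustrative sketch of {{variable}} substitution -- not the server's real implementation.
function compilePrompt(
  template: string,
  variables: Record<string, string>
): string {
  // Replace each {{name}} with its value; leave unknown variables untouched.
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (match, name) =>
    name in variables ? variables[name] : match
  );
}

const template = "Review this {{language}} code, focusing on {{focus_areas}}.";
const compiled = compilePrompt(template, {
  language: "TypeScript",
  focus_areas: "error handling",
});
// compiled === "Review this TypeScript code, focusing on error handling."
```

The useful part is that you pass plain JSON from the client and the substitution happens server-side, so the assistant only ever sees the finished prompt.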
- **Code Review Workflows:** Store your code review prompts in Langfuse with variables for `language`, `complexity_level`, and `focus_areas`. In Claude Desktop, select the prompt, pass your specific values, and get back a perfectly customized review prompt.
- **Documentation Generation:** Maintain versioned documentation prompts in Langfuse. When writing docs in Cursor, access them instantly with project-specific variables already compiled.
- **API Development:** Keep API design and testing prompts centralized in Langfuse. Pull them into Claude with endpoint-specific details filled in automatically.
The setup is straightforward—add your Langfuse API keys as environment variables and point the MCP server at your instance:
```json
{
  "mcpServers": {
    "langfuse": {
      "command": "node",
      "args": ["path/to/build/index.js"],
      "env": {
        "LANGFUSE_PUBLIC_KEY": "your-key",
        "LANGFUSE_SECRET_KEY": "your-secret",
        "LANGFUSE_BASEURL": "https://cloud.langfuse.com"
      }
    }
  }
}
```
Your prompts are now accessible wherever you're already working with AI assistants.
If you're already using Langfuse for prompt management, this eliminates the friction between your organized prompt library and actually using those prompts. You've invested time in proper prompt versioning and management—now you can access that investment seamlessly while coding.
This isn't about adding another tool to your stack. It's about connecting the tools you're already using so they work together instead of forcing you to context-switch between them.
The server handles both text and chat prompts, includes cursor-based pagination for large prompt libraries, and compiles variables on-demand. Built by the Langfuse team, so it's designed to work exactly how you'd expect with their platform.
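Cursor-based pagination means each page of results carries an opaque cursor that the client hands back to fetch the next page, until no cursor is returned. A minimal sketch of how a client might drain a paginated prompt list; the types here are simplified stand-ins, not the actual MCP SDK shapes:

```typescript
// Simplified stand-ins for a paginated prompt listing -- not the MCP SDK types.
interface PromptInfo {
  name: string;
}
interface ListResult {
  prompts: PromptInfo[];
  nextCursor?: string; // absent on the final page
}
type ListFn = (cursor?: string) => ListResult;

// Keep requesting pages, passing the previous page's cursor, until none remains.
function listAllPrompts(list: ListFn): PromptInfo[] {
  const all: PromptInfo[] = [];
  let cursor: string | undefined;
  do {
    const page = list(cursor);
    all.push(...page.prompts);
    cursor = page.nextCursor;
  } while (cursor !== undefined);
  return all;
}

// Usage with a fake two-page source:
const pages: ListResult[] = [
  { prompts: [{ name: "code-review" }], nextCursor: "page-2" },
  { prompts: [{ name: "docs-gen" }] },
];
let page = 0;
const names = listAllPrompts(() => pages[page++]).map((p) => p.name);
// names === ["code-review", "docs-gen"]
```

An opaque cursor keeps the server free to change how pages are computed without breaking clients, which is why the client should never parse or construct cursors itself.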
Stop treating your prompt library like a static reference and start using it as a live development resource.