OpenMCP – a standard and open-source registry for turning existing web APIs (OpenAPI, gRPC, GraphQL, etc.) into MCP servers so LLM clients can call them token-efficiently.
https://github.com/wegotdocs/open-mcp

Stop writing custom integrations for every API your LLM needs to access. OpenMCP automatically converts your existing OpenAPI, gRPC, GraphQL, and other web APIs into MCP servers that work seamlessly with Claude, Cursor, and other MCP clients.
You've got a dozen APIs your LLM agents need to call - Stripe for payments, GitHub for repos, Slack for messaging, and your own internal services. Each one requires custom integration code, token management, and manual prompt engineering to work efficiently with your AI workflows.
The traditional approach burns through tokens describing API schemas in prompts, requires maintaining separate integration code for each service, and breaks whenever APIs change. OpenMCP flips this around entirely.
OpenMCP provides a standardized way to expose any web API as an MCP server, giving your LLM clients structured, token-efficient access to every endpoint.
Instead of describing API schemas in prompts, your LLM gets direct tool access with proper typing and validation.
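To make that concrete, here is the shape of the MCP tools/call request a client sends under the hood - no schema text in the prompt, just a named tool and typed arguments (the tool name and argument values are illustrative, not OpenMCP's actual naming):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "stripe_get_payment_intent",
    "arguments": { "payment_intent_id": "pi_123" }
  }
}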
Add any API to Claude Desktop:
npx @open-mcp/config add stripe-api \
~/Library/Application\ Support/Claude/claude_desktop_config.json \
--STRIPE_API_KEY=sk_test_...
For Cursor projects:
npx @open-mcp/config add github-api \
.cursor/mcp.json \
--GITHUB_TOKEN=ghp_...
Restart your client and the API tools are immediately available. No manual config editing, no path hunting, no syntax errors.
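Under the hood, the command writes a standard MCP server entry into the config file. The result looks roughly like the sketch below, assuming a per-API server package (the exact package name is an assumption - check the registry for the real one):

{
  "mcpServers": {
    "stripe-api": {
      "command": "npx",
      "args": ["-y", "@open-mcp/stripe-api"],
      "env": {
        "STRIPE_API_KEY": "sk_test_..."
      }
    }
  }
}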
Before OpenMCP: You're building a customer support bot that needs to check payment status, create tickets, and update user profiles. That's three separate integrations, each requiring custom code, error handling, and prompt engineering to explain the API structure.
With OpenMCP: Point to your OpenAPI specs and you're done. Your LLM gets typed tool access to all endpoints with automatic request/response validation. When APIs change, the MCP server updates automatically from the schema.
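To see the mapping, take a single OpenAPI operation like GET /v1/users/{id}. The generated MCP server exposes it as a tool your LLM discovers via tools/list - something like this sketch (tool name and descriptions are illustrative):

{
  "name": "get_user",
  "description": "Retrieve a user by ID",
  "inputSchema": {
    "type": "object",
    "properties": {
      "id": { "type": "string", "description": "The user's ID" }
    },
    "required": ["id"]
  }
}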
Token Savings: Instead of burning 500+ tokens describing API schemas in every conversation, the schema lives in the MCP server and your LLM makes compact, structured tool calls. At, say, 10,000 conversations a day, that's 5M tokens of schema boilerplate eliminated - for high-volume AI workflows, this cuts API costs significantly.
The OpenMCP registry includes pre-built servers for popular APIs - GitHub, Stripe, Slack, and dozens more. Clone, configure your API keys, and you're running.
Building internal tools? The standard makes it trivial to convert your existing API documentation into MCP servers that your team's AI agents can use immediately.
This isn't just about making HTTP requests. OpenMCP servers understand your API semantics - they know which endpoints require authentication, handle rate limiting, validate request schemas, and provide meaningful error messages to your LLM.
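As a sketch of what that buys you: a tool call with a malformed argument can come back as a structured MCP error result instead of an opaque HTTP 400, so the LLM can correct itself and retry (the message text is illustrative):

{
  "content": [
    {
      "type": "text",
      "text": "Validation failed: 'amount' must be an integer in minor currency units, got \"19.99\""
    }
  ],
  "isError": true
}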
Your AI agents get reliable, structured access to external services without the fragility of raw API calls or the token overhead of schema descriptions.
OpenMCP servers run as standalone processes that your MCP clients connect to, so API credentials and request handling stay isolated from the client itself.
The entire process takes minutes, not hours. Your existing API documentation becomes immediately usable by your AI agents.
OpenMCP eliminates the tedious work of API integration so you can focus on building AI workflows that actually matter. Stop reinventing the wheel - turn your APIs into MCP servers and get back to solving real problems.