MCP (Model Context Protocol) server that scans a directory of OpenAPI 3.x specs, builds a catalog of operations/schemas, and exposes them through the MCP protocol for LLM-powered IDEs such as Cursor.
https://github.com/ReAPI-com/mcp-openapi

Stop switching between API documentation and your IDE. This MCP server makes your OpenAPI specifications available directly to LLMs in Cursor and other AI-powered editors, so they can write API integration code that actually works with your real endpoints.
You've got OpenAPI specs. You've got an AI-powered IDE. But your LLM doesn't know anything about your APIs, so it generates generic axios calls that you have to fix manually. You're constantly copy-pasting from Swagger UI, explaining request/response schemas in chat, and debugging integration code that doesn't match your actual API contracts.
mcp-openapi scans your OpenAPI specification directories and exposes them through the Model Context Protocol. Your LLM now has direct access to your API operations, schemas, and documentation - no more explaining what a CreatePetRequest looks like or which endpoints exist.
Before: "Generate a function to create a pet" LLM generates generic code with wrong field names, missing required parameters
After: "Generate a function to create a pet using the petstore API" LLM generates code with correct endpoint, proper TypeScript types, all required fields, and proper error handling - because it knows your exact API specification
Add this to your project's .cursor/mcp.json:
```json
{
  "mcpServers": {
    "@reapi/mcp-openapi": {
      "command": "npx",
      "args": ["-y", "@reapi/mcp-openapi@latest", "--dir", "./specs"],
      "env": {}
    }
  }
}
```
Enable it in Cursor Settings > MCP, and you're done. No installation, no configuration files, no setup scripts.
API Client Generation
"Create a TypeScript client for the user management API with proper error handling"
Generates a complete client with your exact endpoint paths, request/response types, and error schemas
Integration Code
"Write a React hook that fetches user profiles with loading states and error handling"
Uses your actual API specification to generate the correct endpoint URL, request format, and response handling
Mock Data & Testing
"Generate valid test data for the CreateOrderRequest schema"
Creates mock objects that match your exact schema requirements, including nested objects and validation rules (see the sketch after this list)
API Documentation
"Document how to integrate with the payment processing endpoints"
Generates documentation using your actual endpoint descriptions, parameters, and examples
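Picking up the mock-data case above: the field names in this `CreateOrderRequest` mock are invented for illustration, since the real values come from whatever your schema actually declares, nested objects and all.

```typescript
// Hypothetical mock for a CreateOrderRequest schema. In practice the LLM
// derives every field, nesting level, and constraint from your actual spec.
const createOrderRequestMock = {
  customerId: "cus_123",
  currency: "USD",
  items: [
    { sku: "SKU-001", quantity: 2, unitPrice: 19.99 },
  ],
  shippingAddress: {
    line1: "123 Main St",
    city: "Springfield",
    postalCode: "12345",
    country: "US",
  },
};
```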
Project-Specific Catalogs: Keep API specs scoped to each project, preventing context window overflow and maintaining clean separation between different services.
Custom Specification IDs: Use x-spec-id to distinguish between similar endpoints across microservices:
```yaml
info:
  x-spec-id: user-service
paths:
  /users:
    get: # Now referenced as user-service/users
```
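If a second service exposes a clashing path, its own `x-spec-id` keeps the two apart. The `order-service` spec below is hypothetical, mirroring the snippet above:

```yaml
# Hypothetical second spec in the same directory.
info:
  x-spec-id: order-service
paths:
  /users:
    get: # Referenced as order-service/users, never confused with user-service/users
```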
Automatic Discovery: Drop new OpenAPI files into your specs directory, refresh the catalog, and they're immediately available to your LLM.
The server provides fuzzy search across operations and schemas, so your LLM can explore and understand your API surface area without you having to explain every endpoint.
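Under the hood these lookups are ordinary MCP tool calls, which Cursor issues for you. The sketch below drives the server directly with the official TypeScript SDK; the tool name `search-api-operations` and its `query` argument are assumptions about this server's interface, so check the repository's tool list before relying on them.

```typescript
// A minimal sketch of querying the server's fuzzy search by hand, using the
// official MCP TypeScript SDK (@modelcontextprotocol/sdk).
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the server over stdio, exactly as the .cursor/mcp.json entry does.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@reapi/mcp-openapi@latest", "--dir", "./specs"],
});

const client = new Client({ name: "demo-client", version: "1.0.0" }, { capabilities: {} });
await client.connect(transport);

// Fuzzy-search the catalog the same way an LLM would.
const result = await client.callTool({
  name: "search-api-operations", // assumed tool name -- verify against the repo
  arguments: { query: "create pet" },
});
console.log(result.content);
```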
This isn't a replacement for your API tooling - it's an enhancement. Keep using Postman, Insomnia, or curl for testing. Keep your OpenAPI specs in version control. The MCP server just makes that existing documentation available to your AI assistant.
Perfect for teams that keep OpenAPI specs in version control, run multiple microservices, and build with AI-powered editors like Cursor.
LLMs are getting better at code generation, but they're still blind to your specific APIs. This server bridges that gap, turning your existing OpenAPI documentation into actionable context for AI-assisted development.
The result? Less time explaining your APIs to AI, more time building features that matter.
Get started: `npx @reapi/mcp-openapi@latest --dir ./specs`