Unofficial, enhanced OpenAPI 3.0 specifications (full + AI-optimized) for the Fastly CDN API, plus an NPM-published Model Context Protocol (MCP) server that lets AI agents manage Fastly services through a standard interface.
https://github.com/jedisct1/fastly-openapi-schema

Stop wrestling with Fastly's API documentation. This MCP server turns your AI assistant into a CDN operations expert that can manage your Fastly infrastructure through natural conversation.
You know the drill: you need to purge cache for a specific URL pattern, update backend configurations, or check service health metrics. But instead of a quick fix, you're diving into Fastly's web console or piecing together API calls from scattered documentation that's missing half the context you need.
The official Fastly docs tell you what each endpoint does, but they don't tell you when to use them or how they fit into real operational workflows. You end up spending more time reading docs than actually managing your CDN.
The Fastly MCP server bridges that gap by giving your AI assistant deep knowledge of Fastly operations. Instead of memorizing API endpoints, you can just describe what you want to accomplish: "Purge everything under /images/ on the production service," or "Show me which backends are failing health checks."
Your AI assistant handles the API orchestration, parameter validation, and error handling while you focus on the actual infrastructure decisions.
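For the curious, here is a minimal sketch of what happens under the hood when an MCP host executes such a request, using the official @modelcontextprotocol/sdk TypeScript client. The tool name purge_url and its argument shape are illustrative assumptions; discover the real names with listTools.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the Fastly MCP server as a child process, the same way an
// MCP host (e.g. Claude Desktop) does from the JSON config shown later.
const transport = new StdioClientTransport({
  command: "bunx",
  args: ["fastly-mcp-server@latest", "run"],
  env: { API_KEY_APIKEYAUTH: "your-fastly-api-key" },
});

const client = new Client({ name: "demo-client", version: "1.0.0" });
await client.connect(transport);

// Discover which Fastly operations the server exposes as tools.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

// Call one tool. "purge_url" and its arguments are hypothetical;
// use the discovered tool list for the real names and schemas.
const result = await client.callTool({
  name: "purge_url",
  arguments: { url: "https://example.com/images/logo.png" },
});
console.log(result.content);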
Operational Context: The enhanced OpenAPI specs include workflow context that the official docs miss. Instead of guessing which sequence of API calls accomplishes your goal, the AI understands common CDN management patterns.
Error Prevention: The AI validates configurations before making changes, catching issues like conflicting cache rules or invalid backend health check settings that would otherwise fail silently or cause performance problems.
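As a rough illustration of that kind of pre-flight check, here is a hypothetical validation sketch; the real rules come from the enhanced OpenAPI specs, and the field names and thresholds below are invented purely for illustration.

// Hypothetical pre-flight check for a backend health check config.
// The real validation rules live in the enhanced OpenAPI specs;
// these field names and thresholds are illustrative only.
interface HealthCheckConfig {
  checkInterval: number; // ms between probes
  timeout: number;       // ms before a probe counts as failed
}

function validateHealthCheck(hc: HealthCheckConfig): string[] {
  const problems: string[] = [];
  if (hc.timeout >= hc.checkInterval) {
    problems.push("timeout should be shorter than checkInterval");
  }
  if (hc.checkInterval < 1000) {
    problems.push("sub-second probe intervals can overload the origin");
  }
  return problems;
}

console.log(validateHealthCheck({ checkInterval: 500, timeout: 800 }));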
Faster Troubleshooting: When something's wrong with your CDN performance, you can ask "What's causing high cache miss rates on our image service?" instead of manually correlating data across multiple API endpoints.
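Behind a question like that, the assistant correlates metrics you would otherwise pull by hand from Fastly's historical stats API. A sketch of the raw call it saves you from writing, assuming placeholder FASTLY_API_KEY and SERVICE_ID environment variables and the hits/miss field names from that endpoint:

// Fetch a week of daily stats and compute the cache hit ratio per day.
// FASTLY_API_KEY and SERVICE_ID are placeholders; "hits" and "miss"
// follow the field names of Fastly's historical stats API.
const res = await fetch(
  `https://api.fastly.com/stats/service/${process.env.SERVICE_ID}` +
    `?from=${encodeURIComponent("7 days ago")}&by=day`,
  { headers: { "Fastly-Key": process.env.FASTLY_API_KEY! } },
);
const { data } = await res.json();

for (const day of data) {
  const ratio = day.hits / (day.hits + day.miss);
  const date = new Date(day.start_time * 1000).toISOString().slice(0, 10);
  console.log(`${date}: hit ratio ${(ratio * 100).toFixed(1)}%`);
}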
Incident Response: During an outage, instead of navigating through multiple Fastly screens, tell your AI: "Emergency purge all cache for example.com and show me current error rates by region." Get immediate execution with real-time status updates.
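Under the hood, "purge all cache" maps to a single documented Fastly endpoint. A minimal sketch, again with a placeholder key and service ID:

// Purge the entire cache for one service via Fastly's
// POST /service/{service_id}/purge_all endpoint.
const serviceId = "YOUR_SERVICE_ID"; // placeholder
const res = await fetch(
  `https://api.fastly.com/service/${serviceId}/purge_all`,
  { method: "POST", headers: { "Fastly-Key": process.env.FASTLY_API_KEY! } },
);
console.log(res.status, await res.json()); // expect { "status": "ok" }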
Configuration Management: When deploying new services, describe your requirements: "Set up a new service for api-v2.example.com with these backend servers, enable gzip compression, and configure logging to our S3 bucket." The AI handles the multi-step configuration process.
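That one sentence expands into several dependent API calls that must happen in order. A condensed sketch of the core sequence (create the service, attach a backend to the draft version, activate), assuming Fastly's standard form-encoded endpoints and leaving the gzip and logging steps out:

const API = "https://api.fastly.com";
const headers = {
  "Fastly-Key": process.env.FASTLY_API_KEY!,
  "Content-Type": "application/x-www-form-urlencoded",
};

// 1. Create the service; version 1 starts as an editable draft.
const svc = await (await fetch(`${API}/service`, {
  method: "POST",
  headers,
  body: new URLSearchParams({ name: "api-v2.example.com" }),
})).json();

// 2. Attach a backend to the draft version.
await fetch(`${API}/service/${svc.id}/version/1/backend`, {
  method: "POST",
  headers,
  body: new URLSearchParams({
    name: "origin",
    address: "origin.example.com",
    port: "443",
  }),
});

// 3. Activate the version to deploy the configuration.
await fetch(`${API}/service/${svc.id}/version/1/activate`, {
  method: "PUT",
  headers,
});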
Performance Optimization: Regular optimization becomes conversational: "Analyze our cache performance this week and suggest configuration improvements for better hit rates." Get actionable insights without manual data analysis.
Install the MCP server and add it to your AI assistant configuration:
npm install -g fastly-mcp-server
Configure it in your MCP client (like Claude Desktop). The example below launches the server with bunx, which fetches the latest published version on each run; npx works the same way if you don't use Bun:
{
  "mcpServers": {
    "fastly": {
      "command": "bunx",
      "args": ["fastly-mcp-server@latest", "run"],
      "env": {
        "API_KEY_APIKEYAUTH": "your-fastly-api-key"
      }
    }
  }
}
That's it. Your AI assistant now understands Fastly operations and can execute CDN management tasks through natural conversation.
This isn't just an API wrapper; it's operational intelligence. The server is built on the enhanced, AI-optimized OpenAPI specs, so it carries the workflow context, validation rules, and operational patterns that the official documentation leaves out.
The result is an AI assistant that doesn't just make API calls, but actually understands CDN operations well enough to suggest optimizations, catch configuration errors, and handle complex multi-step procedures.
Your Fastly infrastructure management just became as simple as describing what you want to accomplish. No more API documentation rabbit holes, no more guessing at parameter combinations, no more manual correlation of monitoring data across endpoints.