MCP Server for running Bruno collections (execute Bruno API-test collections via the Model Context Protocol).
https://github.com/hungthai1401/bruno-mcp

Stop copying and pasting API test results between Bruno and your LLM conversations. This MCP server lets Claude execute your Bruno collections directly and analyze the results in real time.
If you're using Bruno for API testing, you know the drill: run tests, check results, switch to Claude to discuss failures, explain context, then switch back to Bruno to fix issues. This constant context switching kills productivity.
Bruno MCP Server eliminates that friction. Now you can:
"Run my authentication tests and tell me what's failing"
"Execute the user registration flow and validate the response schemas"
"Check if our payment endpoints are working after the latest deploy"
Claude executes the tests, analyzes failures, and provides actionable feedback without you leaving the conversation.
Debugging API Issues: When a test fails, Claude can immediately run related collections, compare results, and suggest fixes based on the actual error responses.
Code Review Integration: During PR reviews, ask Claude to run relevant API tests and explain any regressions or new failure patterns.
Documentation Generation: Claude can execute your API tests and generate up-to-date documentation based on actual request/response patterns.
Regression Analysis: Compare test results across different environments or after deployments to identify what changed.
Install via Smithery (easiest):
npx -y @smithery/cli install @hungthai1401/bruno-mcp --client claude
Or configure manually in your Claude desktop config:
{
  "mcpServers": {
    "bruno-runner": {
      "command": "npx",
      "args": ["-y", "bruno-mcp"]
    }
  }
}
The server exposes a run-collection tool that Claude can use to execute any Bruno collection and read back the parsed results. Typical workflows:
Pre-deployment Testing: "Run our critical path tests against staging and confirm everything's working"
API Monitoring: "Execute health checks for all services and alert me to any issues"
Development Iteration: "Run the authentication tests after each change until they all pass"
Cross-environment Validation: "Compare test results between dev and staging environments"
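Under the Model Context Protocol, a client like Claude invokes tools through a standard JSON-RPC `tools/call` request. As a rough sketch of what a call to the run-collection tool might look like (the argument names and collection path below are illustrative assumptions, not the server's documented schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "run-collection",
    "arguments": {
      "collection": "./collections/auth",
      "environment": "staging"
    }
  }
}
```

The server would then run the named collection via the Bruno CLI and return the parsed results in the tool response for Claude to analyze.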
The server handles the Bruno CLI execution, parses results, and presents them in a format that Claude can analyze and discuss intelligently. No more manual test running or result interpretation.
Your Bruno collections become directly accessible to your LLM workflows, creating a seamless bridge between API testing and AI-assisted development.