MCP server that scrapes Google to provide a `search` tool (no API keys required).
https://github.com/pskill9/web-search

You know the drill - your LLM needs current web data, but Google's Search API costs add up fast and its rate limits are restrictive. This MCP server cuts through that entirely by scraping Google search results directly. No API keys, no billing, no quotas.
Google's official Search API requires setup, authentication, and costs $5 per 1,000 queries after your free tier. For most development and personal use cases, that's overkill. You just want to grab some search results and move on.
This server gives your Claude Desktop or VSCode setup direct access to Google search results through a simple `search` tool. It returns clean, structured data with titles, URLs, and descriptions - exactly what you need for context without the API overhead.
- **Structured Results:** Clean JSON with title, URL, and description for each result:

  ```json
  [
    {
      "title": "Next.js 15 Release Notes",
      "url": "https://nextjs.org/blog/next-15",
      "description": "Introducing React 19 support, improved caching..."
    }
  ]
  ```

- **Zero Configuration:** No environment variables, no API keys, no OAuth flows. Install, build, and it works.

- **MCP Integration:** Works seamlessly with Claude Desktop and VSCode extensions. Just add it to your MCP config and start searching.
Instead of switching to your browser, you keep the research flow inside your AI conversation.
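To make the MCP integration point concrete, here's a rough sketch of what exposing a `search` tool looks like with the MCP TypeScript SDK. This is illustrative, not the repo's actual source; `scrapeGoogle` is a hypothetical stub (a fuller scraping sketch appears near the end of this post):

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical scraper stub; a real implementation parses Google's HTML.
async function scrapeGoogle(query: string, limit: number) {
  return [
    { title: `Results for ${query}`, url: "https://example.com", description: "…" },
  ].slice(0, limit);
}

const server = new McpServer({ name: "web-search", version: "1.0.0" });

// Register a `search` tool that takes a query and an optional result limit.
server.tool(
  "search",
  { query: z.string(), limit: z.number().min(1).max(10).optional() },
  async ({ query, limit }) => {
    const results = await scrapeGoogle(query, limit ?? 5);
    // MCP tools return content blocks; the results go back as JSON text.
    return { content: [{ type: "text", text: JSON.stringify(results, null, 2) }] };
  }
);

// Claude Desktop launches the server as a child process and talks to it over stdio.
await server.connect(new StdioServerTransport());
```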
Getting it running takes three commands:

```bash
git clone https://github.com/pskill9/web-search.git
cd web-search
npm install && npm run build
```
Then add it to your Claude Desktop config (`claude_desktop_config.json`):
```json
{
  "mcpServers": {
    "web-search": {
      "command": "node",
      "args": ["/path/to/web-search/build/index.js"]
    }
  }
}
```
That's it. No tokens, no registration, no billing setup.
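Using VS Code instead of Claude Desktop? Assuming your setup reads the `.vscode/mcp.json` format (a `servers` key rather than `mcpServers`), the equivalent entry looks like this - double-check your extension's docs, since MCP client config schemas vary:

```json
{
  "servers": {
    "web-search": {
      "command": "node",
      "args": ["/path/to/web-search/build/index.js"]
    }
  }
}
```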
The tool takes a `query` and an optional `limit` (max 10 results):
```json
{
  "query": "best practices for React Server Components",
  "limit": 5
}
```
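If you want to poke at the tool outside of Claude Desktop - say, from a quick test script - a minimal client call might look like the sketch below. It assumes the MCP TypeScript SDK's stdio client; the path is the same placeholder used in the config above:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the built server over stdio, the same way Claude Desktop would.
const transport = new StdioClientTransport({
  command: "node",
  args: ["/path/to/web-search/build/index.js"],
});

const client = new Client({ name: "search-test", version: "1.0.0" });
await client.connect(transport);

// Call the `search` tool with the same arguments shown above.
const result = await client.callTool({
  name: "search",
  arguments: { query: "best practices for React Server Components", limit: 5 },
});

console.log(result.content);
await client.close();
```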
Perfect for when you need fresh information that isn't in your model's training data. Your Claude conversations can now pull current documentation, recent blog posts, or community discussions without breaking flow.
Since this scrapes Google directly, you'll get rate limited (or served a CAPTCHA) if you go overboard. Keep searches at a roughly human-like frequency and you'll be fine. The repo's 199+ GitHub stars suggest plenty of developers are getting real use out of it.
The server handles the complexity of parsing Google's HTML structure, so you get reliable results without dealing with web scraping yourself.
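If you're curious what that parsing involves, the pattern generally boils down to fetching the results page and walking its DOM. Here's a rough sketch assuming axios and cheerio; the selectors are illustrative only, since Google's markup changes regularly and the real server's parsing logic may differ:

```typescript
import axios from "axios";
import * as cheerio from "cheerio";

// Hypothetical helper: fetch a Google results page and pull out organic results.
async function scrapeGoogle(query: string, limit = 5) {
  const { data: html } = await axios.get("https://www.google.com/search", {
    params: { q: query, num: limit },
    // A browser-like User-Agent makes Google return the full HTML results page.
    headers: { "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)" },
  });

  const $ = cheerio.load(html);
  const results: { title: string; url: string; description: string }[] = [];

  // Selectors are illustrative; Google's class names shift over time.
  $("div.g").each((_, el) => {
    const title = $(el).find("h3").first().text();
    const url = $(el).find("a").first().attr("href") ?? "";
    const description = $(el).find("div.VwiC3b").first().text();
    if (title && url) results.push({ title, url, description });
  });

  return results.slice(0, limit);
}
```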
This belongs in your MCP toolkit if you're tired of API management overhead and just want web search that works.