Model Context Protocol (MCP) server that scrapes Hacker News and exposes a `get_stories` tool (top/new/ask/show/jobs) returning structured story data.
https://github.com/pskill9/hn-server
Stop manually browsing HN or writing fragile scrapers. This MCP server gives you clean, structured Hacker News data that works seamlessly with Claude and other AI tools.
You want to analyze trends, research topics, or pull interesting stories from Hacker News, but your realistic options are manual browsing or a hand-rolled scraper that breaks whenever HN's markup changes.
A single `get_stories` tool that returns rich, structured data for any HN category:
[
{
"title": "Why databases use ordered indexes but programming uses hash tables",
"url": "https://example.com/database-indexes",
"points": 342,
"author": "database_expert",
"time": "2024-12-28T00:03:05",
"commentCount": 89,
"rank": 1
}
]
No HTML parsing. No rate limiting headaches. Just clean data ready for analysis.
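You can also exercise the tool outside of Claude. Below is a minimal sketch using the official MCP TypeScript SDK to spawn the built server over stdio and call get_stories; the argument names (type, count) and the client name/version are placeholders rather than the server's documented schema, so check what the server advertises via listTools() for the real parameters.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the locally built server over stdio, the same way the Claude config does.
const transport = new StdioClientTransport({
  command: "node",
  args: ["/path/to/hn-server/build/index.js"],
});

const client = new Client({ name: "hn-example-client", version: "0.1.0" }, { capabilities: {} });
await client.connect(transport);

// Argument names here are assumptions; call client.listTools() to see the real input schema.
const result = await client.callTool({
  name: "get_stories",
  arguments: { type: "show", count: 10 },
});

console.log(JSON.stringify(result, null, 2));
await client.close();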
Research workflows: Ask Claude "What are the trending AI topics on HN today?" and get instant analysis of the top stories with context about engagement and discussion volume.
Content creation: Pull Show HN posts to see what developers are building, then analyze patterns or create weekly roundups without manual browsing.
Competitive intelligence: Monitor Ask HN posts for questions about your product space or track what problems developers are discussing.
Trend analysis: Compare story engagement across different categories to understand what resonates with the developer community.
Add to your Claude config and start using natural language queries immediately:
{
"mcpServers": {
"hacker-news": {
"command": "node",
"args": ["/path/to/hn-server/build/index.js"]
}
}
}
Then query with natural language, for example "What are the trending AI topics on HN today?" or "Summarize this week's Show HN posts."
Claude handles the tool calls automatically; you focus on the insights.
Robust parsing: Uses Cheerio to handle HN's HTML structure properly, with error handling for edge cases you haven't thought of (a sketch of the approach follows this list).
Flexible querying: Get exactly what you need—top/new/ask/show/jobs stories with configurable limits (1-30 stories).
Production ready: Includes proper error handling, validation, and TypeScript types. No debugging flaky scrapers at 2 AM.
Low maintenance: The scraping logic lives in one place, so when HN's markup changes you update the server instead of every downstream workflow.
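For a sense of what "handles HN's HTML structure" means in practice, here is an illustrative sketch of the scraping approach, not the server's actual code: fetch a listing page, load it with Cheerio, and walk the story rows. The axios dependency and the selectors (tr.athing, span.titleline, td.subtext) are assumptions based on HN's current markup and may not match hn-server's internals.

import axios from "axios";
import * as cheerio from "cheerio";

interface Story {
  title: string;
  url: string;
  points: number;
  author: string;
  commentCount: number;
  rank: number;
}

// Illustrative scraper for a Hacker News listing page; selectors follow HN's
// current markup and are not necessarily the ones hn-server uses.
async function scrapeStories(page = "https://news.ycombinator.com/"): Promise<Story[]> {
  const { data: html } = await axios.get(page);
  const $ = cheerio.load(html);
  const stories: Story[] = [];

  $("tr.athing").each((i, row) => {
    const titleLink = $(row).find("span.titleline > a").first();
    const subtext = $(row).next("tr").find("td.subtext"); // metadata lives in the next row

    stories.push({
      rank: i + 1,
      title: titleLink.text().trim(),
      url: titleLink.attr("href") ?? "",
      points: parseInt(subtext.find("span.score").text(), 10) || 0,
      author: subtext.find("a.hnuser").text(),
      commentCount: parseInt(subtext.find("a").last().text(), 10) || 0, // "discuss" parses to 0
    });
  });

  return stories;
}

In normal use you would call the MCP tool rather than scraping directly; this only illustrates the Cheerio layer the server wraps for you.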
Clone, install, build, and add to your MCP config:
git clone https://github.com/pskill9/hn-server
cd hn-server && npm install && npm run build
The server runs locally and needs no API keys or hosted services; requests go straight to the public Hacker News site, and the data stays in your environment.
Perfect for developers who want HN integration that works reliably without the overhead of maintaining custom scraping code.