A Model Context Protocol (MCP) server that exposes a Home Assistant instance to large language model (LLM) applications via a secure, real-time REST/SSE/WebSocket API.
https://github.com/tevonsb/homeassistant-mcp

You've been there: trying to integrate Home Assistant with LLMs means diving into REST endpoints, WebSocket connections, entity state management, and authentication tokens. Then you realize you need real-time updates, error handling, and context management. What started as a simple "turn on the lights" integration becomes a full-scale API client project.
This MCP server sits between your LLM applications and Home Assistant, handling all the API complexity so you don't have to. Instead of building yet another HA integration from scratch, you get a standardized Model Context Protocol interface that any MCP-compatible LLM can use immediately.
Before: Your LLM needs custom code to understand HA entities, manage authentication, handle state changes, and parse responses.
After: Your LLM sends simple MCP tool calls like turn_on_lights and gets back structured responses. The server handles everything else.
// Instead of managing WebSocket connections manually
const eventSource = new EventSource('/subscribe_events?domain=light');
eventSource.onmessage = (event) => {
  // Your LLM instantly knows when lights change state
  const data = JSON.parse(event.data);
};
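What each event contains depends on the server; assuming it forwards Home Assistant's state_changed events, a consumer built on the eventSource above might look like this (a sketch; the payload field names are assumptions, not taken from the project's docs):

// Hypothetical payload shape, assuming the server forwards Home Assistant's
// state_changed events; the real field names may differ.
interface StateChangedEvent {
  entity_id: string;
  new_state: { state: string; attributes: Record<string, unknown> };
}

eventSource.onmessage = (event) => {
  const change = JSON.parse(event.data) as StateChangedEvent;
  // e.g. "light.living_room is now on"
  console.log(`${change.entity_id} is now ${change.new_state.state}`);
};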
Your LLM can send conversational requests like "turn on the kitchen lights to 70% brightness," and the server translates them into proper Home Assistant service calls with the right parameters. For example, the tool call below turns on the living room light at roughly half brightness:
{
  "tool": "control",
  "command": "turn_on",
  "entity_id": "light.living_room",
  "brightness": 128,
  "color_temp": 4000
}
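Under the hood, that tool call has to become a Home Assistant service call. Roughly, the server does something like the following (a simplified sketch using Home Assistant's standard REST API, not the project's actual code; HASS_HOST and HASS_TOKEN match the configuration shown later):

// A simplified sketch of the kind of Home Assistant REST call the server issues;
// error handling, retries, and WebSocket use are omitted.
const HASS_HOST = process.env.HASS_HOST ?? "http://homeassistant.local:8123";
const HASS_TOKEN = process.env.HASS_TOKEN ?? "";

async function callService(domain: string, service: string, data: Record<string, unknown>) {
  const res = await fetch(`${HASS_HOST}/api/services/${domain}/${service}`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${HASS_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(data),
  });
  if (!res.ok) throw new Error(`Service call failed: ${res.status}`);
  return res.json();
}

// The tool call above maps onto light.turn_on with the same parameters
await callService("light", "turn_on", {
  entity_id: "light.living_room",
  brightness: 128,
  color_temp: 4000,
});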
The server can also create and manage automations through the same tool interface. A request like "turn on the living room light when the motion sensor trips" could become:
{
  "tool": "automation_config",
  "action": "create",
  "config": {
    "alias": "Motion Light",
    "trigger": [{"platform": "state", "entity_id": "binary_sensor.motion", "to": "on"}],
    "action": [{"service": "light.turn_on", "target": {"entity_id": "light.living_room"}}]
  }
}
Your LLM can install Home Assistant add-ons, manage HACS packages, and configure system settings without you writing admin interfaces.
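What these operations look like at the protocol level is defined by the server's tool schema; purely as an illustration (the tool and field names below are hypothetical, not taken from the project's docs), an add-on install request could take a shape similar to the device-control calls above:

// Hypothetical example: tool and field names are illustrative only
{
  "tool": "addon",
  "action": "install",
  "slug": "core_mosquitto"
}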
Add this to your Claude configuration (it goes under the mcpServers key):
{
  "homeassistant": {
    "command": "node",
    "args": ["./dist/index.js"],
    "env": {
      "HASS_HOST": "http://homeassistant.local:8123",
      "HASS_TOKEN": "your_token_here"
    }
  }
}
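The config above points Claude at a built ./dist/index.js. If you run the server from source rather than via Docker, you will need to install dependencies and build after cloning; the exact script names depend on the project's package.json, but a typical sequence looks like:

# Install dependencies and compile to dist/ (script names assumed; check package.json)
npm install
npm run build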
# Clone and configure
git clone https://github.com/tevonsb/homeassistant-mcp.git
cd homeassistant-mcp
cp .env.example .env
# Edit .env with your HA details

# Deploy
docker compose up -d
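The .env file carries the same connection details as the Claude config above. A minimal sketch (HASS_HOST and HASS_TOKEN match the config example; PORT is an assumption based on the default port noted below):

# Home Assistant connection (same values as in the Claude config)
HASS_HOST=http://homeassistant.local:8123
HASS_TOKEN=your_token_here
# Assumed variable name; the server listens on port 3000 by default
PORT=3000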
The server runs on port 3000 by default and provides both REST endpoints and WebSocket connections for real-time updates.
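Once the container is up, you can sanity-check the event stream with curl. The path comes from the subscription example earlier; whether an auth header is required depends on your configuration:

# Stream light state changes from the MCP server (path taken from the example above);
# add an Authorization header if your setup enforces token authentication
curl -N "http://localhost:3000/subscribe_events?domain=light"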
Device Control: Lights, climate, covers, switches, locks, vacuums, media players – everything your Home Assistant instance manages.
System Management: Install add-ons, manage HACS packages, create automations, monitor system health.
Real-Time Updates: Server-Sent Events keep your LLM informed of state changes as they happen.
Security: Rate limiting, helmet middleware, token authentication, and input validation.
Developer Experience: TypeScript definitions, comprehensive error handling, and extensive documentation.
You're building LLM applications that need to interact with smart home devices, but you don't want to become a Home Assistant API expert. You want to focus on the AI logic, not the integration plumbing.
Perfect for voice assistants, automated home management systems, or any application where natural language needs to control physical devices.
The 333 GitHub stars and active development show this isn't just another weekend project – it's a solid foundation other developers are already building on.
Ready to skip the API integration work and get straight to building intelligent home automation? The server handles Home Assistant so your LLM can focus on being intelligent.