Turn natural-language commands from an LLM into ROS/ROS2 topic messages over rosbridge WebSockets. Provides an MCP (Model Context Protocol) server that exposes robot-control functions such as geometry_msgs/Twist publishing, image streaming, joint state reporting, etc.
https://github.com/lpigeon/ros-mcp-server

Stop writing geometry_msgs/Twist publishers and action clients every time you need basic robot control. This MCP server lets you tell your robot "move forward 2 meters" or "rotate left 90 degrees" through any LLM interface, and it handles the ROS message translation automatically.
You're building a robotics application. You need the robot to move, maybe grab something, check joint states. So you write the same boilerplate: topic publishers, message constructors, coordinate frame conversions. Then you realize you need it to work with ROS2 too. More boilerplate.
What if you could just describe what you want the robot to do and have it happen?
The ROS MCP Server sits between your LLM (Claude, GPT, whatever) and your robot's ROS stack. You type natural language commands, it publishes the right messages to the right topics. Works with both ROS1 and ROS2 through rosbridge WebSockets.
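Under the hood, rosbridge speaks a small JSON protocol over the WebSocket: a topic is advertised once with its message type, then published to. A minimal sketch of those envelopes in Python (the actual WebSocket send, e.g. via roslibpy, is omitted; helper names here are illustrative, not the server's API):

```python
import json

def make_advertise_op(topic: str, msg_type: str) -> str:
    """Build a rosbridge 'advertise' envelope declaring a topic's type."""
    return json.dumps({"op": "advertise", "topic": topic, "type": msg_type})

def make_publish_op(topic: str, msg: dict) -> str:
    """Build a rosbridge 'publish' envelope carrying the message payload."""
    return json.dumps({"op": "publish", "topic": topic, "msg": msg})

# A Twist that drives forward at 0.5 m/s while yawing left at 0.5 rad/s
twist = {
    "linear": {"x": 0.5, "y": 0.0, "z": 0.0},
    "angular": {"x": 0.0, "y": 0.0, "z": 0.5},
}

print(make_advertise_op("/cmd_vel", "geometry_msgs/Twist"))
print(make_publish_op("/cmd_vel", twist))
```

Because the same envelopes work against rosbridge on both ROS1 and ROS2, the server never needs a local ROS installation.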
# Instead of this
rostopic pub /cmd_vel geometry_msgs/Twist '{linear: {x: 0.5, y: 0.0, z: 0.0}, angular: {x: 0.0, y: 0.0, z: 0.5}}'
# You do this
"Make the robot move forward while turning left"
The MCP server translates your intent into proper ROS messages and publishes them. No topic names to remember, no message structure to look up.
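To make the translation step concrete, here is a hypothetical sketch of the kind of mapping involved; `command_to_twist` is an illustrative name, not the server's actual API:

```python
def command_to_twist(linear_x: float = 0.0, angular_z: float = 0.0) -> dict:
    """Build a geometry_msgs/Twist payload from scalar movement intents.

    In ROS convention, positive angular.z is a counter-clockwise (left)
    turn, so "forward while turning left" maps to positive values on
    both axes.
    """
    return {
        "linear": {"x": linear_x, "y": 0.0, "z": 0.0},
        "angular": {"x": 0.0, "y": 0.0, "z": angular_z},
    }

# "Make the robot move forward while turning left"
msg = command_to_twist(linear_x=0.5, angular_z=0.5)
```

The LLM's job reduces to picking the function and its scalar arguments; the server owns the message structure and topic name.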
Cross-platform development: Runs on Linux, Windows, and macOS. Your robot might be on Ubuntu, but you're developing on a Mac? No problem.
Zero ROS node modification: Drop this into any existing ROS setup without touching your navigation stack, sensor drivers, or control loops.
Version agnostic: Same commands work whether you're on ROS Noetic or ROS2 Iron. The server handles the rosbridge protocol differences.
Rapid prototyping: Want to test a movement sequence? Just describe it instead of writing a launch file and node.
Robotics education: Students focus on high-level robot behavior instead of getting stuck on message formats and topic publishing syntax.
Quick testing: "Check the joint angles" gives you sensor_msgs/JointState data without remembering topic names or message structure.
Demo preparation: Control your robot through natural language during presentations. No need to switch between terminals or remember rostopic commands.
Research iteration: Rapidly test different movement patterns or behaviors by describing them instead of hardcoding parameters.
Multi-robot coordination: Scale natural language commands across different robot types without rewriting control code.
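For the "check the joint angles" case above, the data comes back as a sensor_msgs/JointState message. A sketch of the payload shape and how a server might summarize it for the LLM (the field names match the real message definition; the helper and sample values are illustrative):

```python
# sensor_msgs/JointState carries parallel arrays keyed by joint name.
joint_state = {
    "name": ["shoulder_pan", "elbow", "wrist"],
    "position": [0.12, -1.05, 0.78],  # radians
    "velocity": [],                   # optional, may be empty
    "effort": [],
}

def summarize_joint_state(msg: dict) -> dict:
    """Pair each joint name with its current position."""
    return dict(zip(msg["name"], msg["position"]))

angles = summarize_joint_state(joint_state)
print(angles)  # {'shoulder_pan': 0.12, 'elbow': -1.05, 'wrist': 0.78}
```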
You're probably thinking about the setup complexity. Here's what you actually need to do:
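In outline, setup amounts to running rosbridge on the robot side and pointing the MCP server at it. A hedged sketch (check the repo README for the exact, current steps):

```shell
# 1. Run rosbridge so the server can reach ROS over a WebSocket.
# ROS1:
roslaunch rosbridge_server rosbridge_websocket.launch
# ROS2:
ros2 launch rosbridge_server rosbridge_websocket_launch.xml

# 2. Point the MCP server at the rosbridge endpoint (default port 9090)
#    and register the server in your LLM client's MCP configuration.
```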
The server handles message types you use constantly:
geometry_msgs/Twist for movement
sensor_msgs/Image for camera feeds
sensor_msgs/JointState for arm/joint monitoring

Extension is straightforward: add new message types by implementing the translation functions.
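One natural shape for that extension point is a registry mapping message types to translation functions. This is a hypothetical sketch of the pattern, not the server's actual hook names; check the repo for the real mechanism:

```python
# Registry of message-type -> builder function.
TRANSLATORS = {}

def translator(msg_type: str):
    """Decorator registering a builder for a given ROS message type."""
    def decorate(fn):
        TRANSLATORS[msg_type] = fn
        return fn
    return decorate

@translator("std_msgs/String")
def build_string(text: str) -> dict:
    """Translate plain text into a std_msgs/String payload."""
    return {"data": text}

msg = TRANSLATORS["std_msgs/String"]("battery low")
```

Adding support for a new message type then means writing one builder and decorating it, without touching the WebSocket plumbing.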
This isn't magic. It's a well-designed abstraction that understands common robotics patterns and translates natural language into the ROS messages you'd write anyway. The WebSocket approach means it works across network boundaries and doesn't require ROS environment setup on your development machine.
The 112 GitHub stars suggest other robotics developers found this useful enough to bookmark, and a YouTube demo shows it controlling robots in simulation.
If you're tired of looking up message definitions every time you need basic robot control, or you want to make robotics more accessible to team members who don't live in ROS documentation, this belongs in your toolkit.