Pi Robot Meets ROS

Every once in a while someone, or some group, comes up with a Really Good Idea. In the world of robotics, a good example is the Robot Operating System, or ROS, from the California startup Willow Garage.

The primary goal of ROS (pronounced "Ross") is to provide a unified and open source programming framework for controlling robots in a variety of real-world and simulated environments. ROS is certainly not the first such effort; in fact, a Wikipedia search for "robot software" turns up over 25 such projects. But Willow Garage is no ordinary group of programmers banging out free software. Propelled by some serious funding, strong technical expertise, and a well-planned series of developmental milestones, Willow Garage has ignited a kind of programming fervor among roboticists, with hundreds of user-contributed ROS packages already created in just a few short years. ROS now includes software for tasks ranging from localization and navigation (SLAM) to stereo object recognition, action planning, motion control for multi-jointed arms, machine learning, and even playing billiards.

In the meantime, Willow Garage has also designed and manufactured a $400,000 robot called the PR2 to help showcase its operating system. The PR2 uses the latest in robot hardware, including two stereo cameras, a pair of laser scanners, arms with 7 degrees of freedom, and an omnidirectional drive system. Only a lucky few will be able to run ROS directly on the PR2, including 11 research institutions that were awarded free PR2s as part of a beta-test contest. However, you do not need a PR2 to leverage the power of ROS: packages have already been created to support lower-cost platforms and components, including the iRobot Create, WowWee Rovio, Lego NXT, Phidgets, the ArbotiX, and Robotis Dynamixels.

The guiding principle underlying ROS is "don't reinvent the wheel". Many thousands of very smart people have been programming robots for nearly five decades, so why not bring all that brain power together in one place? Fortunately, the Web is the perfect medium for sharing code. All it needed was a little boost from a well-organized company, an emphasis on open source, and some good media coverage. Many universities now openly share their ROS code repositories, and with free cloud space available through services such as Google Code, anyone can share their own ROS creations easily and at no cost.

Is ROS for Me?

ROS has made its biggest impact in university robotics labs. For this reason, the project might appear beyond the reach of the typical hobby roboticist. To be sure, the ROS learning curve is a little steep, and a complete beginner might find it somewhat intimidating. For one thing, the full ROS framework only runs under Linux at the moment, though parts of it can be made to work under Windows and other OSes. So you'll need a Linux machine (preferably Ubuntu) or a Linux installation alongside your existing OS. (Ubuntu can even be installed under Windows as just another application, without the need for repartitioning.) Once you have Linux up and running, you can turn to the ROS Wiki for installation instructions and a set of excellent beginner tutorials for both Python and C++.

In the end, the time put into learning ROS amounts to a tiny fraction of what it would take to develop all the code from scratch. For example, suppose you want to program your robot to take an object from your location in the dining room to somebody else in the bedroom, all while avoiding obstacles. You could certainly solve this problem yourself using visual landmarks or a laser scanner. But whole books have been written on the subject (called SLAM) by some of the best roboticists in the world, so why not capitalize on their efforts? ROS allows you to do precisely this by plugging your robot directly into the pre-existing navigation stack, a set of routines designed to turn laser scan data and odometry information from your robot into motion commands and automatic localization. All you need to provide are the dimensions of your robot plus the sensor and encoder data, and away you go. The hundreds or thousands of hours you just saved by not reinventing the wheel can now be spent on something else, such as having your robot tidy your room or fold the laundry.
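To make this a little more concrete, here is a minimal Python sketch of asking a standard move_base navigation stack to drive the robot one meter forward. It is only a sketch under typical assumptions: the action name "move_base" and the frame "base_link" are common defaults that may differ on your robot.

    #!/usr/bin/env python
    # Minimal sketch: send one goal to the ROS navigation stack (move_base).
    # Assumes a standard move_base setup; names may differ on your robot.
    import rospy
    import actionlib
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    rospy.init_node('send_nav_goal')

    # Connect to the move_base action server provided by the navigation stack.
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()

    # Ask the robot to drive 1 meter straight ahead in its own base frame.
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'base_link'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 1.0
    goal.target_pose.pose.orientation.w = 1.0   # keep the current heading

    client.send_goal(goal)
    client.wait_for_result()

The navigation stack plans the path, dodges obstacles reported by the laser (or its substitute, described below), and steers the base, all without any path-planning code of your own.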

ROS Highlights

Over the next series of articles, we will have a lot more to say about the things Pi (or your own robot) can do with ROS.  For now, let us take a brief look at some of the highlights.

3D Robot Visualizer and Simulator

As the image above illustrates, ROS makes it possible to display a 3D model of your robot using a visualization tool called RViz. Creating the model involves editing an XML file (written in URDF, the Unified Robot Description Format) specifying the dimensions of your robot and how the joints are offset from one another. You can also specify physical parameters of the various parts, such as mass and inertia, in case you want to do some accurate physical simulations. (The actual physics is simulated in another set of tools called Player/Stage/Gazebo, which pre-date ROS but can be used with it.) Once you have your URDF file, you can bring it up in RViz and move your virtual robot around with the mouse, as shown in the following video:

You can also create poses for your virtual robot using a tool called the joint state publisher, which includes a graphical slider control, one slider for each joint. The pose illustrated in the first image above was created this way. However, the real power (and fun) of RViz comes from being able to view a live representation of your robot as it moves about in the world. This way, your robot might be around a corner and out of sight, but you can still view its virtual doppelganger in RViz (examples below).
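Behind the scenes, the joint state publisher simply publishes the slider values as sensor_msgs/JointState messages on the joint_states topic, where the robot state publisher and RViz pick them up. Here is a rough Python sketch of posing the robot from your own script instead of the sliders; the joint names are made up and must match those defined in your URDF.

    #!/usr/bin/env python
    # Rough sketch: pose the virtual robot by publishing joint angles directly.
    # The joint names below are hypothetical; use the names from your URDF.
    import rospy
    from sensor_msgs.msg import JointState

    rospy.init_node('pose_pi_robot')
    pub = rospy.Publisher('joint_states', JointState)

    rate = rospy.Rate(10)   # publish steadily so the pose stays "fresh"
    while not rospy.is_shutdown():
        msg = JointState()
        msg.header.stamp = rospy.Time.now()
        msg.name = ['head_pan_joint', 'head_tilt_joint', 'right_shoulder_joint']
        msg.position = [0.5, -0.3, 1.2]   # angles in radians
        pub.publish(msg)
        rate.sleep()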

Equidistant laser scan readings

RViz is typically used to display sensor readings and the layout of obstacles such as walls and other objects. In particular, it is good at displaying the data returned from laser range finders (also called LIDAR) and stereo vision (point clouds). Both of these data types consist of points located at various distances from the robot. A laser scanner returns an array of distance measurements lying in a single plane and typically sweeps through an arc in front of and to the sides of the robot. A stereo camera returns a set of distances (disparity measures) across the planar field of view of the two cameras. In either case, these distance readings can be visualized in RViz as a set of points, as illustrated by the orange spheres in the images on the right. In the first image, the readings are all equidistant from the scanner, as though there were a long sheet of cardboard bent in an arc in front of the robot. In the second image, Pi is standing in front of a hallway so that the distance readings recede as the scanner probes through the opening. Both of these images were created using an inexpensive substitute for a laser scanner that anyone can make; it is described in greater detail below.

PML at a doorway
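Converting those range readings into the points you see in RViz is just trigonometry: each reading is taken at a known bearing, and the bearing plus the distance give an (x, y) position in the scanner's frame. The little Python sketch below shows the idea, using the same field names as the sensor_msgs/LaserScan message; the example values are made up.

    # Sketch: turn a planar scan into (x, y) points in the scanner's frame.
    from math import cos, sin, pi

    def scan_to_points(angle_min, angle_increment, ranges):
        """Each range reading at its own bearing becomes one (x, y) point."""
        points = []
        for i, r in enumerate(ranges):
            angle = angle_min + i * angle_increment
            points.append((r * cos(angle), r * sin(angle)))
        return points

    # Five readings, all 1.5 meters away, spread over a 180-degree sweep:
    # the "sheet of cardboard bent in an arc" case described above.
    print(scan_to_points(-pi / 2, pi / 4, [1.5] * 5))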

Navigation and Obstacle Avoidance

As mentioned in the introductory paragraphs, the ROS navigation system enables a robot to move from point A to point B without running into things. To do true SLAM (simultaneous localization and mapping), your robot will generally need a laser range finder or good stereo vision (visual SLAM or VSLAM). However, basic obstacle avoidance and navigation by dead reckoning can be accomplished with an inexpensive alternative dubbed the "Poor Man's Lidar" or PML. (Thanks to Bob Mottram and Michael Ferguson; see the references below.)

A PML consists of a low-cost IR sensor mounted on a panning servo that continually sweeps the sensor through an arc in front of the robot. The servo-plus-sensor combination can record 30 readings per 180-degree sweep, and each sweep takes 1 second in each direction. As a result, there is a bit of a lag between the motion of the robot and the updated range readings indicated by the orange balls in the images above and the videos below. By comparison, the lowest-cost laser range finder (about $1300) takes over 600 distance readings per 240-degree sweep and covers the entire arc 10 times per second (1/10th of a second per sweep).

The photos below show our PML setup:

PML Photo 1 PML Photo 2 PML Photo 3


Toward the bottom of the first photo you can see the IR sensor (Sharp model GP2Y0A02YK) attached to a Robotis Dynamixel AX-12+ servo. Notice how the IR sensor is mounted "vertically", which is a better orientation for taking sweeping horizontal measurements. The second photo better illustrates how the IR sensor sweeps to one side, and the third photo shows the ArbotiX microcontroller attached to the back of Pi's torso. The IR sensor plugs into one of the ArbotiX analog sensor ports while the AX-12 servo plugs into the Dynamixel bus. The ArbotiX firmware includes direct support for a PML sensor of this type, and Vanadium Labs has developed an open source ROS node that allows the PML data to appear as a "laser scan" within the ROS framework and RViz. The image below shows the PML data from a single scan while Pi stands in front of a box, which itself is in front of a wall with a desk on the right and a door on the left.

PML detects an obstacle
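The essence of such a node (what follows is only a sketch, not Vanadium Labs' actual code) is to collect one sweep of IR readings and publish it as a standard sensor_msgs/LaserScan message, so the rest of ROS treats the PML just like a real laser.

    #!/usr/bin/env python
    # Sketch only: package one PML sweep as a LaserScan message.
    # Not the actual ArbotiX/Vanadium Labs node; names are illustrative.
    import rospy
    from math import pi
    from sensor_msgs.msg import LaserScan

    def publish_sweep(pub, readings, sweep_time=1.0):
        """readings: ~30 distances in meters from one 180-degree sweep."""
        scan = LaserScan()
        scan.header.stamp = rospy.Time.now()
        scan.header.frame_id = 'pml_link'      # frame of the panning servo
        scan.angle_min = -pi / 2               # sweep runs from -90 to +90 degrees
        scan.angle_max = pi / 2
        scan.angle_increment = pi / (len(readings) - 1)
        scan.scan_time = sweep_time
        scan.time_increment = sweep_time / len(readings)
        scan.range_min = 0.2                   # Sharp GP2Y0A02YK limits
        scan.range_max = 1.5
        scan.ranges = readings
        pub.publish(scan)

    # Call publish_sweep() once per completed sweep with the IR readings.
    rospy.init_node('pml_scan_sketch')
    scan_pub = rospy.Publisher('scan', LaserScan)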

Our PML has a rather limited range compared to a laser scanner. The Sharp GP2Y0A02YK can measure distances between 20 cm (0.2 meters) and 1.5 meters. A typical laser scanner has a range of about 2 cm (0.02 meters) to 5.5 meters. Longer-range IR sensors are available, such as the GP2Y0A700K0F, which measures between 1.0 and 5.5 meters, but this means the robot would be blind to objects within 1 meter. We could also use a pair of short- and long-range sensors, but for this article we'll use just a single sensor.

Despite its limitations, we can still use the PML with ROS to move Pi Robot around a room while avoiding obstacles. In the video below, the grid squares in RViz are 0.25 meters on a side (about 10 inches), and you can see that the user clicks on a location with the mouse to tell Pi where to go next. (The green arrow indicates the orientation we want the robot to have once it gets to the target location.) ROS then figures out the best path to follow to get there (indicated by the faint green line) and incorporates the data points from the PML scanner to avoid obstacles along the way. When an obstacle is detected, a red square is placed on the map to indicate that the cell is occupied. The grey squares add a little insurance based on the dimensions of Pi's base, just to make sure we don't get caught on an edge. Be sure to view the video in full-screen mode by clicking on the little box with four arrows at the bottom right corner of the video.

This video demonstrates the use of the ROS navigation stack with a "Poor Man's Lidar" consisting of a low cost Sharp IR sensor (model GP2Y0A02YK) with a panning servo (Dynamixel AX-12+) and the ArbotiX microcontroller by Vanadium Labs.  Odometry data is obtained from a Serializer microcontroller made by the RoboticsConnection and connected to a pair of 7.2V gearhead drive motors equipped with integrated encoders (624 counts per revolution).  Communication to the controlling PC is by way of XBee for the ArbotiX and Bluetooth for the Serializer.
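To give a feel for what the red and grey squares represent, here is a toy Python sketch (not the actual ROS costmap code) of marking grid cells as occupied from scan points and then inflating them by roughly the robot's radius; the resolution and radius values are just examples.

    # Toy illustration of the "red" (occupied) and "grey" (inflated) cells.
    # Not the real ROS costmap implementation.
    RESOLUTION = 0.25      # meters per grid cell, as in the RViz display above
    ROBOT_RADIUS = 0.25    # rough half-width of the robot base (assumed value)

    def mark_obstacles(points):
        """Grid cells containing a scan point (positive coordinates assumed)."""
        return set((int(x / RESOLUTION), int(y / RESOLUTION)) for (x, y) in points)

    def inflate(occupied, radius_cells=int(ROBOT_RADIUS / RESOLUTION) + 1):
        """Add a safety margin of cells around every occupied cell."""
        inflated = set()
        for (cx, cy) in occupied:
            for dx in range(-radius_cells, radius_cells + 1):
                for dy in range(-radius_cells, radius_cells + 1):
                    inflated.add((cx + dx, cy + dy))
        return inflated

    obstacles = mark_obstacles([(1.0, 0.1), (1.0, 0.4)])    # the red squares
    safety = inflate(obstacles) - obstacles                 # the grey squares
    print(len(obstacles), "occupied cells,", len(safety), "inflation cells")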


Coordinate Frames and Transformations

If you managed to slog through the earlier article on Robot Coordinate Frames, you know how complicated it can be to coordinate even a handful of joints on a robot. Now imagine doing a similar analysis with a pair of arms each with 7 degrees of freedom, a torso joint, and two head joints (pan and tilt). Fortunately, ROS can take care of all the math for you using the coordinate transform library called tf (i.e. "transform frame"). The tf library works nicely with the URDF file you create to describe the dimensions of your robot. ROS takes care of working out the chain of transformations (rotations and translations) as you move from one link to the next, such as going from the base to the torso, to the right shoulder, to the right elbow, and so on. To see how the various joints are related, ROS provides a URDF-to-PDF tool that allows you to visualize the transform tree for your robot. The following image is the result for Pi Robot:

(Click the image for a larger PDF version.)

Pi Robot Transform Tree


Of course, as your robot moves its head and reaches for objects, the various coordinate frames centered at the joints move as well, and the relations between them have to be updated. The tf library works together with another process known as the robot state publisher to take care of this automatically. For example, if you know the coordinates of a visual object in a head-centered coordinate frame, then you can simply ask tf to tell you the coordinates of the same object relative to the left hand, the right elbow, or any other point you care to use as a reference, including a point on another object if you know its relation to the robot. In a future article, we will make heavy use of the tf library and the ROS arm navigation routines to enable Pi to reach for objects and move or carry them to a different location. While such tasks will still take some effort to program, ROS allows anyone enthusiastic about building robots to dream a little bigger than before.
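As a small taste of what this looks like in code, here is a minimal Python sketch using the tf listener; the frame names are hypothetical and must match the links defined in your URDF.

    #!/usr/bin/env python
    # Minimal sketch: ask tf to express a point seen by the head camera
    # in the left hand's coordinate frame. Frame names are hypothetical.
    import rospy
    import tf
    from geometry_msgs.msg import PointStamped

    rospy.init_node('tf_example')
    listener = tf.TransformListener()
    rospy.sleep(1.0)   # give the listener a moment to fill its buffer

    # Suppose the head camera sees an object 0.5 meters straight ahead of it.
    obj = PointStamped()
    obj.header.frame_id = 'head_camera_link'
    obj.header.stamp = rospy.Time(0)   # i.e. "use the latest transform"
    obj.point.x = 0.5

    # The same point, now expressed relative to the left hand.
    obj_in_hand = listener.transformPoint('left_hand_link', obj)
    print(obj_in_hand.point)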

What's Next?

In the next article, we will replace our PML sensor with an actual laser scanner and dive into the world of SLAM, or "simultaneous localization and mapping". This will allow Pi Robot to create a detailed map of its environment and move from one location in the house to another on command. Fortunately, ROS will do all the heavy lifting, so we can spend more time looking ahead to arm navigation and visual object recognition.

Related Links

Further Reading