Robot Cartography: ROS + SLAM
In a much earlier article we looked at how Pi Robot might use omnidirectional video images and an artificial neural network to figure out which room he was in. The idea was that different places have different visual appearances, and we could use these differences to determine where we were at any given moment. We may come back to that approach at a later time, but there is another method, called SLAM, that has a long history in the field of robotics and is now within reach of even hobby roboticists thanks to ROS.
SLAM stands for Simultaneous Localization and Mapping and one way to understand it is to imagine yourself entering an unfamiliar building for the first time. When you walk in the front door, your eyes immediately begin to gaze about and you quickly assess the layout of the room or rooms nearest to your current location. At this point, you know that you are located at the front entrance and you have an initial sense of the layout--or map--of a small part of the building. As you cross the floor ahead, your eyes and head continue to scan from side to side and you notice doorways and other entrances leading to additional rooms and perhaps even stairways or elevators leading up or down to additional floors.
As you move about the building, you don't completely forget where you have already been. Indeed, at any moment you have a pretty good idea where you are within the current map that you have so far constructed in your head, and unless you have a really bad sense of direction, you could probably turn around and get back out of the building without too much trouble. Finding your way around the building is a good example of simultaneously constructing a map and localizing yourself within that map.
Roboticists have developed a similar process for mobile robots, but instead of using visual landmarks, most algorithms use an occupancy map. An occupancy map consists of a grid laid over some region around the robot, with each cell in the grid marked as "occupied", "free", or "unknown". A robot can determine the occupancy of a cell in a number of ways, but the most common is to employ a scanning laser range finder. If the sweeping beam of the laser detects an object at a certain distance and direction, then we mark the cell at that location as occupied. Otherwise, the cell is considered free, at least for now. If the laser scanner has not yet swept past a cell within its range, that location is marked as unknown.
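To make the idea concrete, here is a minimal sketch in Python of an occupancy grid updated from a single laser beam. This is an illustration only, not Pi Robot's actual code or the ROS implementation: the cell states, the grid-centered-on-origin convention, and the simple ray trace are all assumptions made for clarity.

```python
import math

# Simplified cell states for illustration
UNKNOWN, FREE, OCCUPIED = -1, 0, 1

class SimpleOccupancyGrid:
    """A square grid of cells, each marked unknown, free, or occupied."""

    def __init__(self, size_cells, resolution_m):
        self.size = size_cells
        self.res = resolution_m          # meters per cell
        self.cells = [[UNKNOWN] * size_cells for _ in range(size_cells)]

    def update_from_beam(self, x, y, bearing, range_m):
        """Trace one laser beam from robot position (x, y): cells along
        the beam are free, the cell where the beam ended is occupied."""
        for i in range(int(range_m / self.res)):
            d = i * self.res
            self.mark(x + d * math.cos(bearing),
                      y + d * math.sin(bearing), FREE)
        self.mark(x + range_m * math.cos(bearing),
                  y + range_m * math.sin(bearing), OCCUPIED)

    def mark(self, wx, wy, state):
        """Convert world coordinates (meters) to a grid cell and mark it."""
        col = int(math.floor(wx / self.res)) + self.size // 2
        row = int(math.floor(wy / self.res)) + self.size // 2
        if 0 <= row < self.size and 0 <= col < self.size:
            self.cells[row][col] = state

# Example: a 10 m x 10 m grid at 5 cm resolution, one beam straight ahead
grid = SimpleOccupancyGrid(200, 0.05)
grid.update_from_beam(0.0, 0.0, 0.0, 2.5)
```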
In the previous article we saw how Pi Robot can use the ROS navigation stack and a simple scanning IR sensor to avoid obstacles while moving about a cluttered room. However, the limited range of the IR sensor and the small number of measurements per sweep is generally insufficient for building a stable occupancy map. So to do SLAM, we will need a laser range finder. Thanks to a generous contribution from an anonymous donor, Pi is now equipped with a Hokuyo laser scanner (model URG-04LX-UG01) as shown in the picture on the right. Note that the laser scanner has taken the place of our earlier panning IR sensor toward the front of Pi's chassis. How does it work and how is it different from our earlier setup?
A scanning laser range finder sends out pulses of low-power infrared laser light (Class 1, or "eye safe") and measures the time it takes for the pulses to reflect off objects and return to the scanner. (Each scan typically covers an arc of between 180 and 240 degrees.) The Hokuyo URG model used here emits 600 pulses per scan and performs 10 scans per second. That's 6000 data points per second, compared to the 30 we obtained using our IR scanner. This means that not only do we obtain much finer angular resolution, but we can detect changes in the object layout much faster, thereby allowing our robot to move more quickly without running into things. The laser scanner is also remarkably precise, returning distance data with an angular resolution of 0.36 degrees and a range from about 2 cm to 5.6 meters with 3% accuracy.
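In ROS, the scanner's output arrives as sensor_msgs/LaserScan messages. Here is a minimal sketch of a node that listens to those messages and reports the scan geometry; the topic name /scan is an assumption based on a common driver setup and may differ on your robot.

```python
#!/usr/bin/env python
import math
import rospy
from sensor_msgs.msg import LaserScan

def scan_callback(msg):
    # angle_increment is the angular step between beams, in radians
    res_deg = math.degrees(msg.angle_increment)
    # Ignore readings outside the scanner's valid range
    valid = [r for r in msg.ranges if msg.range_min < r < msg.range_max]
    rospy.loginfo("%d beams, %.2f deg resolution, nearest object %.2f m",
                  len(msg.ranges), res_deg,
                  min(valid) if valid else float('inf'))

if __name__ == '__main__':
    rospy.init_node('scan_listener')
    rospy.Subscriber('/scan', LaserScan, scan_callback)
    rospy.spin()
```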
The two images on the right and below compare an earlier scan using our IR sensor with a new scan of the same scene using the laser scanner. As you can see, the laser scanner provides a much denser set of distance measurements and reveals a sharper image of the waste basket in front of the wall, the wall itself, the corner of the desk on the right, and the door on the left opening inward. This high-quality distance data can be used in ROS to do SLAM. Intuitively, it is easy to imagine that if you know your precise distance from a number of fixed points in a given room, then you can essentially triangulate your position in the room using basic trigonometry. Now imagine that you have 6000 such measurements per second across a 180 degree arc in front of you. Such an abundance of data enables us to use some powerful statistical tools to build a map of the space surrounding the robot. As the robot moves, its wheel encoders report back position data while the laser scanner continues to return distance measurements. Combining the two data streams, we can not only extend and refine the map but also localize the robot within it. This is what we mean by simultaneous localization and mapping.
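The geometric core of combining the two data streams is a change of coordinates: given the robot's pose from wheel odometry, each (bearing, range) pair from the scanner can be projected into map coordinates. The sketch below shows only that basic transform; the SLAM algorithms in ROS do far more, statistically correcting for odometry drift, so treat this as an illustration of the geometry rather than the method.

```python
import math

def scan_to_map_points(pose, angle_min, angle_increment, ranges):
    """Project laser beams into the map frame.

    pose: (x, y, theta) of the robot from wheel odometry, in map coordinates.
    angle_min, angle_increment: scan geometry in radians (as in LaserScan).
    ranges: one measured distance per beam, in meters.
    """
    x, y, theta = pose
    points = []
    for i, r in enumerate(ranges):
        beam_angle = theta + angle_min + i * angle_increment
        points.append((x + r * math.cos(beam_angle),
                       y + r * math.sin(beam_angle)))
    return points

# Example: robot at the origin facing +x, three beams across a 180 degree arc
print(scan_to_map_points((0.0, 0.0, 0.0),
                         -math.pi / 2, math.pi / 2, [1.0, 2.0, 1.0]))
```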
In this short article we won't go into the mathematics behind SLAM (see the references below), and one of the strengths of using ROS with your robot is that you don't have to. Instead, you can set your robot to the task of mapping out your house or apartment while you get started on Thanksgiving dinner.
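In practice this means running a ROS SLAM node, such as slam_gmapping from the gmapping package, alongside your robot's drivers; it consumes the laser scans and odometry and publishes the growing map on the /map topic as a nav_msgs/OccupancyGrid. As a small illustration of consuming that output (a sketch under the assumption that a SLAM node is already publishing /map), here is a node that reports how much of the map has been explored, using the OccupancyGrid convention of -1 for unknown and 0 to 100 for occupancy probability:

```python
#!/usr/bin/env python
import rospy
from nav_msgs.msg import OccupancyGrid

def map_callback(grid):
    # Each cell is -1 (unknown) or an occupancy probability from 0 to 100
    unknown = sum(1 for c in grid.data if c == -1)
    occupied = sum(1 for c in grid.data if c >= 50)
    total = len(grid.data)
    rospy.loginfo("map %dx%d at %.2f m/cell: %.0f%% explored, %d occupied cells",
                  grid.info.width, grid.info.height, grid.info.resolution,
                  100.0 * (total - unknown) / total, occupied)

if __name__ == '__main__':
    rospy.init_node('map_watcher')
    rospy.Subscriber('/map', OccupancyGrid, map_callback)
    rospy.spin()
```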
The videos below were captured from the ROS RViz visualization utility while Pi Robot mapped out several rooms in a typical apartment. The first video is run at 6x speed and takes only 50 seconds so you can see the process more easily. The second video is run in real time and includes a number of captions that explain the process along the way.
References