Perhaps the most difficult aspect of the NASA Sample Return Robot Challenge is that it takes place outdoors in the real world, with variable terrain, lighting conditions and background objects. All we will have ahead of time is a satellite map of the competition field and static images of a few landmarks; visiting the site to test the robot ahead of the competition is strictly forbidden. This gives returning teams a significant advantage, assuming the competition takes place at the same location as last year.
Consequently, it is essential to start testing various aspects of the Challenge outdoors as soon as possible. Initial experiments involved using a DSLR camera to photograph the various samples on a grass field at different distances and under different lighting conditions. We then ran a number of computer vision algorithms on the resulting images to determine which methods could most reliably detect the samples. The images below show one of the samples being automatically detected in different locations.
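To give a flavor of the kind of detector being tested, here is a minimal color-segmentation sketch using OpenCV: threshold the image in HSV space, clean up the mask, and take the largest matching blob as the sample. The HSV range and filename below are placeholders, and this is just one easy baseline rather than the final detection pipeline:

```python
#!/usr/bin/env python
# Minimal color-based sample detector (illustrative sketch only).
# The HSV range below is a placeholder -- tune it to your sample's color.

import cv2
import numpy as np

def detect_sample(image_path, lower_hsv=(5, 120, 120), upper_hsv=(25, 255, 255)):
    """Return the bounding box (x, y, w, h) of the largest blob in the color range."""
    bgr = cv2.imread(image_path)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)

    # Keep only the pixels that fall inside the target color range
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))

    # Morphological opening removes speckle noise from the grass
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # findContours returns different tuples across OpenCV versions;
    # the contour list is always second from the end
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None

    # Assume the largest matching blob is the sample
    return cv2.boundingRect(max(contours, key=cv2.contourArea))

if __name__ == '__main__':
    print(detect_sample('field_test.jpg'))  # hypothetical test image
```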
With the basics of visual detection figured out, it was time to mount the camera on a mobile robot to test the effects of motion blur as the robot bounces along the grass. While the team works on the full-size competition robot, a lot of testing can be done using smaller off-road rovers, as shown in the video below:
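As an aside, a quick way to quantify that motion blur frame by frame is the variance of the Laplacian: sharp images have strong edges and score high, while blurred ones score low. This is just one easy baseline metric (and the filenames below are hypothetical), but it makes speed-versus-sharpness comparisons painless:

```python
# Variance-of-Laplacian sharpness score: blurrier frames score lower.
import cv2

def sharpness(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

# Hypothetical frames grabbed at different driving speeds
for path in ['standing_still.jpg', 'slow_roll.jpg', 'full_speed.jpg']:
    print('%s: %.1f' % (path, sharpness(path)))
```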
This little robot was named “Scout” and was built by yours truly using Vex aluminum framing, 1/4″ ABS plastic, Pololu motors and an Arduino Mega. The robot is controlled by a dual-core netbook on the upper deck running Ubuntu 12.04, ROS Hydro and the ros_arduino_bridge package. The Nikon D5100 DSLR camera was kindly loaned to me by a colleague (thanks RS!) and is mounted on a panning platform driven by an AX-12 Dynamixel servo.
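One way to drive an AX-12 from ROS is through a dynamixel_controllers-style joint position controller, which subscribes to a command topic of type std_msgs/Float64 (in radians). The /pan_controller/command topic name below is an assumption, so adjust it to match your own launch files; the node simply sweeps the camera from side to side while hunting for samples:

```python
#!/usr/bin/env python
# Sweep the camera panning servo back and forth.
# Assumes a dynamixel_controllers JointPositionController exposing
# /pan_controller/command (std_msgs/Float64, radians) -- the controller
# name is an assumption, so match it to your own launch files.

import math
import rospy
from std_msgs.msg import Float64

def sweep():
    rospy.init_node('camera_sweep')
    pub = rospy.Publisher('/pan_controller/command', Float64, queue_size=1)
    rate = rospy.Rate(0.5)  # switch pan target every 2 seconds
    angle = math.radians(45)
    while not rospy.is_shutdown():
        pub.publish(Float64(angle))
        angle = -angle  # alternate sides each cycle
        rate.sleep()

if __name__ == '__main__':
    try:
        sweep()
    except rospy.ROSInterruptException:
        pass
```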
That’s it for now. Stay tuned for more updates in the near future.