ROS by Example: Head Tracking in 3D (Part 1)
NOTE: This tutorial is several years out of date and the specific commands for installing software and using ROS have changed since then. An up-to-date version of this tutorial can be found in the book ROS By Example Volume 2: Packages and Programs for Advanced Robot Behaviors, available as a downloadable PDF and in paperback on Lulu.com.
In the previous tutorial we learned how to use ROS and OpenCV to do basic head tracking of a visual target as seen in a typical webcam. In this tutorial, we will learn how to perform a similar task using the ROS tf package for transforming target locations from one frame of reference to another. At the same time, we will trade in our webcam for a Microsoft Kinect which provides an RGB-D point cloud that we can process using the ROS Point Cloud Library (PCL) or OpenNI. Part 1 of the tutorial will cover the use of tf to specify target locations and does not require that you have a Kinect. Part 2 will use the results of Part 1 together with the object recognition abilities of PCL and OpenNI to track 3D visual targets.
Part 1: Head Pointing Using tf
The ROS tf package allows us to specify the target in nearly any frame of reference. For example, we might know the location of an object relative to our robot's base (e.g. "1.2 meters directly ahead and on the floor") and we want the robot to pan and tilt its camera to look straight at that location. Conversely, we can use tf to translate a location relative to the camera frame into coordinates relative to any other frame, such as the robot's base or hand. In this way we can use vision to determine the direction and distance the robot would have to travel to a given object, or where to position its hand to grasp it.
When working with reference frames, keep in mind that ROS uses a right-hand convention for orienting the coordinate axes as shown on the left. Similarly, the direction of rotations about an axis is defined by the right-hand rule shown on the right: if you point your thumb in the positive direction of any axis, your fingers curl in the direction of a positive rotation. For a mobile robot using ROS, the z-axis points upward, the x-axis points forward, and the y-axis points to the left. Under the right-hand rule, a positive rotation of the robot about the z-axis is counterclockwise while a negative rotation is clockwise.
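To make these conventions concrete, here is a quick sanity check you can run in a Python shell. It assumes only that the tf package is on your Python path (the euler_matrix helper lives in tf.transformations) and that numpy is installed:

import math
import numpy
from tf.transformations import euler_matrix

# A point 1 meter straight ahead of the robot (along the x-axis), in homogeneous coordinates.
forward = numpy.array([1.0, 0.0, 0.0, 1.0])

# A positive 90 degree rotation about the z-axis (a left turn for the robot).
yaw_left = euler_matrix(0, 0, math.pi / 2)

# The rotated point lands at roughly (0, 1, 0): on the +y (left) side of the robot,
# confirming that a positive rotation about z is counterclockwise seen from above.
print(numpy.round(numpy.dot(yaw_left, forward), 3))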
In the earlier head tracking tutorial, we used two different packages for controlling Pi Robot's Dynamixel AX-12 servos: the ArbotiX package from Vanadium Labs, and the Robotis package from the Healthcare Robotics Lab (HRL) at Georgia Tech. In this tutorial, we will use a third package for managing a Dynamixel bus; namely, the dynamixel_controllers package from the Arizona Robotics Research Group (ARRG). (A special thanks to Anton Rebguns for helping me understand how to set up the controller launch files.) This package is part of the dynamixel_motor stack which in turn is part of the larger ua-ros-pkg repository that includes packages for controlling a multi-jointed arm as well as the use of ROS actions and trajectories. We will be looking at actions, arm trajectories and inverse kinematics for Pi Robot's arms in the near future, so the UA stack will come in quite handy.
As in the previous tutorial, we expect the reader to be familiar with the basics of ROS. You should be able to use either the Diamondback or Electric release for your main ROS installation. Here again is a quick checklist of prerequisites before you can run the code we will create later on:
- Install Ubuntu Linux (I am using version 10.04 on a machine that dual boots with Windows). If your robot does not have its own onboard computer, you can still run the code on your main computer and wire the Kinect and servo controller to a pair of USB ports.
- Install either the Diamondback or Electric release of ROS on your Ubuntu machine.
- If you are not already familiar with the ROS basics, work through the Beginner Tutorials. It is important to actually try the sample code on your own machine rather than just reading through the text. In fact, I ran through all the tutorials twice since a few concepts were a little shaky after a single pass.
- In addition to the beginner tutorials, it is essential to also work through the ROS tf Tutorials.
To prepare our robot for the real 3-dimensional world, we will need a few key ingredients, all of which are well tested parts of the ROS framework. These include:
- A geometrically accurate model of our robot using the Unified Robot Description Format or URDF.
- A way to visualize and test the joints of the robot before hooking up a real servo controller. This will be handled by the joint_state_publisher package by David Lu!!.
- A method for combining the geometry of the robot and the current joint angles to yield the 3D positions and orientations of the joints. This is taken care of by the all-powerful robot_state_publisher package which outputs the tf frames attached to each link on our robot and the transformations between them.
- Drivers for our Kinect RGB-D camera. We will use the openni packages. (Not done until Part 2.)
- A controller package for our AX-12 servos and USB2Dynamixel controller. For this, we will use the dynamixel_controllers package as already stated.
The steps we will follow in this tutorial are:
- Download the tutorial files from the Pi Robot repository.
- Create a URDF model of our robot and test it in RViz.
- Install and set up the dynamixel_controllers package to control our servos.
- Learn how to point the head to an arbitrary location using tf.
Downloading the Tutorial Files
All the files needed for the tutorial can be downloaded via SVN. Move into your personal ROS path (e.g. ~/ros) and execute the command:
$ svn co http://pi-robot-ros-pkg.googlecode.com/svn/trunk/pi_tutorials/pi_head_tracking_3d_part1
Now take a look at the manifest file:
$ more pi_head_tracking_3d_part1/manifest.xml
<package>
  <description brief="Head Tracking in 3D Part 1">
    Head Tracking in 3D Part 1
  </description>
  <author>Patrick Goebel</author>
  <license>BSD</license>
  <url>https://pirobot.org/blog/0018</url>
  <depend package="joint_state_publisher"/>
  <depend package="dynamixel_driver"/>
  <depend package="dynamixel_msgs"/>
  <depend package="dynamixel_controllers"/>
</package>
Note the dependency on the third-party packages joint_state_publisher as well as dynamixel_driver, dynamixel_msgs and dynamixel_controllers. Let's download and install these packages now so we can build the main head tracking project.
Installing the joint_state_publisher Package
The joint_state_publisher package was created by David Lu!! and we can install it as part of his urdf_tools stack. First move into your personal ROS directory (e.g. ~/ros), then issue the commands:
$ svn co https://wu-ros-pkg.svn.sourceforge.net/svnroot/wu-ros-pkg/stacks/urdf_tools/trunk urdf_tools
$ cd urdf_tools
$ rosmake --rosdep-install
Installing the dynamixel_motor Packages
Move into a directory in your personal ROS path (e.g. ~/ros) and get the entire dynamixel_motor stack using one of the following two commands:
For Diamondback:
$ sudo apt-get install ros-diamondback-dynamixel-motor
For Electric:
$ sudo apt-get install ros-electric-dynamixel-motor
The dynamixel_controllers package included in the dynamixel_motor stack works in a manner similar to the controller used on the Willow Garage PR2: first a controller manager node is launched that connects to the Dynamixel bus (in our case a USB2Dynamixel controller). The controller node can then start, stop or restart one or more individual servo controllers. All of this is taken care of by a launch file that we will create later on.
Building the Head Tracking 3D Package
Now that we have our dependencies installed, we can build the main head tracking package using the command:
$ rosmake --rosdep-install pi_head_tracking_3d_part1
Setting Up the URDF/Xacro Robot Model
Before we can use tf to translate between frames of reference, we need a model of how our robot is put together. Even if you simply mount your Kinect onto a pair of pan and tilt servos, tf needs to know the geometry describing how these servos are connected to each other and to the camera, as well as how the whole setup is mounted relative to the robot base or table top.
To describe the configuration of links and joints making up your robot, ROS uses an XML file written in the Unified Robot Description Format (URDF). You can also use the Xacro macro language to define simple macros that simplify the URDF file.
Pi Robot's URDF model is fairly complex because of the number of joints (13 altogether) and some funky offsets of the joints due to the way the brackets are mounted. So we will use a simpler robot model that we will call the KinectBot for lack of a better name. You can also use your own robot model if it includes pan and tilt joints for the head. (It is assumed in this tutorial that the pan and tilt links are called head_pan_link and head_tilt_link.) An excellent set of URDF and Xacro tutorials for creating your own URDF model can be found at:
http://www.ros.org/wiki/urdf/Tutorials
Otherwise, you can use the KinectBot model found in the tutorial package under the urdf directory.
To see what the contents of this file look like, issue the commands:
$ roscd pi_head_tracking_3d_part1/urdf
$ more kinectbot.urdf.xacro
To check the validity of this file, run the following commands:
$ rosrun xacro xacro.py kinectbot.urdf.xacro > tmp.urdf
Then, for C-Turtle or Diamondback:
$ rosrun urdf check_urdf tmp.urdf
Or if you are using Electric:
$ rosrun urdf_parser check_urdf tmp.urdf
You should see an output that looks like this:
robot name is: kinectbot
---------- Successfully Parsed XML ---------------
root Link: base_link has 1 child(ren)
    child(1):  torso_link
        child(1):  head_pan_link
            child(1):  pan_tilt_bracket
                child(1):  head_tilt_link
                    child(1):  neck_link
                        child(1):  head_base_link
                            child(1):  head_post_link
                                child(1):  head_link
                                    child(1):  hair_link
                                    child(2):  left_eye_link
                                    child(3):  right_eye_link
Feel free to modify the kinectbot.urdf.xacro file as you like, but be sure to verify any changes using the procedure described above. (BTW, the left and right eye links do not correspond to the Kinect's lenses; they are simply for decoration at this point. We will deal with the true optical frames of the Kinect in Part 2).
Testing Your Model using joint_state_publisher & robot_state_publisher
To test our URDF model in RViz, we need a way to publish the joint positions even though we are not yet connected to a real robot. This is where David Lu!!'s joint_state_publisher comes in. We'll need a launch file that brings up our URDF model together with the joint_state_publisher node and one other key ROS node called the robot_state_publisher. The robot state publisher knows how to take the current joint angles of your robot and turn them into 3D poses of the links by propagating the joint angles through the kinematic tree defined by your URDF file. For example, if your robot's head is connected to the tilt servo joint by a 10cm long bracket, and the tilt joint angle is currently 90 degrees, then the robot_state_publisher can compute that the head is now 10cm forward of the servo joint and parallel to the ground. The end result is that both tf and RViz have access to the current configuration of your robot at any moment in time.
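To get a feel for the kind of arithmetic the robot_state_publisher performs on our behalf, here is a toy Python version of the bracket example just described. The 10 cm offset and the choice of rotation axis are assumptions made purely for illustration; the real node reads the link geometry from the URDF and the current joint angles from the joint state publisher.

import math

# Rotate a fixed link offset by the current joint angle, which is essentially
# what robot_state_publisher does for every joint in the kinematic tree.
bracket = (0.0, 0.0, 0.10)   # head offset from the tilt joint (x, y, z) in meters, pointing straight up
tilt = math.pi / 2           # current tilt angle: 90 degrees forward about the y-axis

# Rotate the offset about the y-axis by the tilt angle.
x = bracket[0] * math.cos(tilt) + bracket[2] * math.sin(tilt)
z = -bracket[0] * math.sin(tilt) + bracket[2] * math.cos(tilt)

head_position = (round(x, 3), bracket[1], round(z, 3))
print(head_position)         # (0.1, 0.0, 0.0): 10 cm forward of the joint and level with it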
The tutorial package includes the launch file test_urdf.launch. Let's step through what it does.
The first line in the launch file loads our URDF model of the KinectBot onto the parameter server as the parameter /robot_description.
The next two lines launch the joint_state_publisher node. Setting the use_gui parameter to True brings up a slider control that allows us to set the simulated joint angles manually.
Next we launch the robot_state_publisher node. The robot state publisher knows how to take the current joint angles of the robot and map them into 3D poses of the links as defined by the URDF model.
The final line provides a static transform between the robot /base_link frame and the /world frame. In the case of the KinectBot, the middle of the base is 3.25 cm above the ground plane.
Let's fire up this set of parameters, nodes, and transforms, by launching the test_urdf.launch file as follows:
$ roslaunch pi_head_tracking_3d_part1 test_urdf.launch
If all goes well, you should see the following output on your terminal window:
SUMMARY
========
PARAMETERS
* /rosversion
* /use_gui
* /rosdistro
* /robot_description
* /robot_state_publisher/publish_frequency
NODES
/
joint_state_publisher (joint_state_publisher/joint_state_publisher)
robot_state_publisher (robot_state_publisher/state_publisher)
world_base_broadcaster (tf/static_transform_publisher)
ROS_MASTER_URI=http://localhost:11311
core service [/rosout] found
process[joint_state_publisher-1]: started with pid [11991]
process[robot_state_publisher-2]: started with pid [11992]
process[world_base_broadcaster-3]: started with pid [11993]
You should also see a little slider control pop up on your screen, with one slider each for the pan and tilt joints.
To visualize the KinectBot and test the joints, bring up RViz using the configuration file included in the tutorial package:
$ roscd pi_head_tracking_3d_part1
$ rosrun rviz rviz -d tutorial.vcg
You should see the RViz window come up with the KinectBot model displayed.
Now bring the joint slider control back to the foreground and try out the controls for the pan and tilt servos. You should see the KinectBot's head move in RViz as you change the joint angles. Assuming everything works OK, we are ready to try things out on the real robot.
Servo Control Using the dynamixel_motor Packages
PLEASE NOTE: (Sept 7, 2011) The following section has been changed since the original tutorial was written. The old ax12_controller_core package has been replaced by the dynamixel_motor stack. The updated text below therefore uses the new dynamixel_motor stack.
The dynamixel_controllers package works in a manner similar to the one used on the Willow Garage PR2: first a controller manager node is launched that connects to the Dynamixel bus (in our case a USB2Dynamixel controller). The controller node then launches a number of individual controllers, one for each servo on the bus. Here are the key parts of our launch file (the controller manager node and the controller spawner that starts the individual pan and tilt controllers; the complete dynamixels.launch file in the tutorial package also brings up the robot_state_publisher and a joint states publisher):
type="controller_manager.py" required="true" output="screen">
namespace: pi_dynamixel_manager
serial_ports:
dynamixel_ax12:
port_name: "/dev/ttyUSB0"
baud_rate: 1000000
min_motor_id: 1
max_motor_id: 2
update_rate: 20
type="controller_spawner.py"
args="--manager=pi_dynamixel_manager
--port=dynamixel_ax12
--type=simple
head_pan_controller
head_tilt_controller"
output="screen" />
Looking at the launch file, we see that the USB2Dynamixel controller is assumed to be on port /dev/ttyUSB0 and the servo IDs are 1 and 2. Change these values if necessary for your setup. The launch file depends on the dynamixel_params.yaml file found in the params subdirectory. That file looks like this:
dynamixels: ['head_pan', 'head_tilt']

head_pan_controller:
    controller:
        package: dynamixel_controllers
        module: joint_position_controller
        type: JointPositionController
    joint_name: head_pan_joint
    joint_speed: 2.0
    motor:
        id: 1
        init: 512
        min: 0
        max: 1024

head_tilt_controller:
    controller:
        package: dynamixel_controllers
        module: joint_position_controller
        type: JointPositionController
    joint_name: head_tilt_joint
    joint_speed: 2.0
    motor:
        id: 2
        init: 512
        min: 300
        max: 800

First we define a parameter called dynamixels that simply lists the names of our servos. Then we specify the type of controller that will control each servo as well as its hardware ID, initial position value and its min and max position values. The head tilt controller is given less than full range since it cannot go all the way forward or back without hitting the top of the torso. At this level, the init/min/max numbers are given in servo ticks, which range from 0 to 1023 for the AX-12s. (We also specify limits on the head tilt joint in the robot's URDF file, giving a min/max of 1.57 radians, which is 90 degrees either way.)
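As a rough guide to how these two sets of units relate, the following sketch converts between AX-12 position ticks and radians. It assumes the servo's 300 degree range of travel maps onto ticks 0 through 1023 with tick 512 at the center; check your servo documentation before relying on the exact numbers.

import math

# Approximate conversion between AX-12 position ticks and radians, assuming a
# 300 degree range spread over ticks 0-1023 with tick 512 at the center position.
TICKS_PER_RADIAN = 1023 / math.radians(300)
CENTER_TICKS = 512

def ticks_to_radians(ticks):
    return (ticks - CENTER_TICKS) / TICKS_PER_RADIAN

def radians_to_ticks(angle):
    return int(round(angle * TICKS_PER_RADIAN)) + CENTER_TICKS

print(ticks_to_radians(300))   # about -1.08 radians (the head tilt controller's min)
print(ticks_to_radians(800))   # about +1.47 radians (the head tilt controller's max)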
Testing the Servos
To test the pan and tilt servos, first connect your servos and USB2Dynamixel to a power source, then make sure your USB2Dynamixel is connected to a USB port on your computer. Once connected, issue the following command to see what USB ports you have connected:
$ ls /dev/ttyUSB*
Hopefully you will see something like the following output:
/dev/ttyUSB0
If instead you get back the message:
ls: cannot access /dev/ttyUSB*: No such file or directory
then your USB2Dynamixel has not been recognized. Try plugging it in to a different USB port or use a different cable.
If you have no other USB devices attached, your USB2Dynamixel should be on /dev/ttyUSB0 and the default launch file will work without modification. If it is on a different numbered USB port, edit the dynamixels.launch file in the launch directory and change the port accordingly. Similarly, if your servos have IDs other than 1 and 2, edit the dynamixel_params.yaml and dynamixels.launch files as needed.
If you have RViz up during the following tests, you should see the virtual head move in sync with the real head.
When this is all done, fire up the dynamixels.launch file:
$ roslaunch pi_head_tracking_3d_part1 dynamixels.launch
You should see a number of startup messages that look something like this:
process[robot_state_publisher-1]: started with pid [19394]
process[dynamixel_controller/dynamixel_manager-2]: started with pid [19395]
process[dynamixel_controller/dynamixel_controller_spawner_ax12-3]: started with pid [19396]
process[dynamixel_controller/dynamixel_joint_states_publisher-4]: started with pid [19397]
process[relax_all_servos-5]: started with pid [19399]
process[world_base_broadcaster-6]: started with pid [19404]
[INFO] [WallTime: 1313701889.865474] Pinging motor IDs 1 through 2...
[INFO] [WallTime: 1313701889.865756] dynamixel_ax12 controller_spawner: waiting for controller_manager pi_dynamixel_manager to startup in /dynamixel_controller/ namespace...
[INFO] [WallTime: 1313701889.869122] Found motors with IDs: [1, 2].
[INFO] [WallTime: 1313701889.871879] Starting Dynamixel Joint State Publisher at 10Hz
[INFO] [WallTime: 1313701889.968588] There are 2 AX-12+ servos connected
[INFO] [WallTime: 1313701889.968957] Dynamixel Manager on port /dev/ttyUSB0 initialized
[INFO] [WallTime: 1313701890.179728] dynamixel_ax12 controller_spawner: All services are up, spawning controllers...
[INFO] [WallTime: 1313701890.267504] Controller head_pan_controller successfully started.
[INFO] [WallTime: 1313701890.373233] Controller head_tilt_controller successfully started.
Once the dynamixel controllers are up and running, bring up a new terminal and send a couple of simple pan and tilt commands. The first command should pan the head to the left through 1 radian or about 57 degrees:
$ rostopic pub -1 /dynamixel_controller/head_pan_controller/command std_msgs/Float64 -- 1.0
Re-center the servo with the command:
$ rostopic pub -1 /dynamixel_controller/head_pan_controller/command std_msgs/Float64 -- 0.0
Now try tilting the head downward half a radian (about 28 degrees):
$ rostopic pub -1 /dynamixel_controller/head_tilt_controller/command std_msgs/Float64 -- 0.5
And bring it back up:
$ rostopic pub -1 /dynamixel_controller/head_tilt_controller/command std_msgs/Float64 -- 0.0
To change the speed of the head pan servo in radians per second, use the set_speed service:
$ rosservice call /dynamixel_controller/head_pan_controller/set_speed 1.0
$ rostopic pub -1 /dynamixel_controller/head_pan_controller/command std_msgs/Float64 -- 1.0
To relax a servo so that you can move it by hand, use the torque_enable service:
$ rosservice call /dynamixel_controller/head_pan_controller/torque_enable False
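If you would rather call these services from a node than from the command line, a minimal sketch looks like the following. It assumes the SetSpeed and TorqueEnable service types provided by the dynamixel_controllers package and the /dynamixel_controller namespace used by our launch file.

import roslib; roslib.load_manifest('pi_head_tracking_3d_part1')
import rospy
from dynamixel_controllers.srv import SetSpeed, TorqueEnable

rospy.init_node('servo_service_demo')

# Slow the pan servo down to 1.0 radians per second.
rospy.wait_for_service('/dynamixel_controller/head_pan_controller/set_speed')
set_pan_speed = rospy.ServiceProxy('/dynamixel_controller/head_pan_controller/set_speed', SetSpeed)
set_pan_speed(1.0)

# Relax the pan servo so it can be positioned by hand.
rospy.wait_for_service('/dynamixel_controller/head_pan_controller/torque_enable')
pan_torque_enable = rospy.ServiceProxy('/dynamixel_controller/head_pan_controller/torque_enable', TorqueEnable)
pan_torque_enable(False)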
The Point Head Node
We are finally ready to introduce tf into the picture. Recall that at the start of this tutorial, we promised that we would be able to point the head to any location defined in any frame of reference. For example, if we say to the robot "look 1.2 meters forward, 0.5 meters up, and 2.1 meters to the right", where all measurements are relative to the base reference frame, how do we map these coordinates into a pair of pan and tilt angles at the head? To do this on your own, you would have to figure out all the 3-dimensional transformations (rotations and translations) that connect all the links and joints of your robot. Fortunately, that's exactly what tf does for us.
The ua-ros-pkg repository includes a node that uses tf to point the head toward a desired 3D location. However, that node (found in the wubble_actions package) uses ROS actions, which are a topic for another tutorial. So the following code is a less elegant way to do the same thing but without the fancier control that actions provide.
The Python script point_head.py found in the bin directory of the tutorial package does the work. First we'll show the whole listing, then we'll take a closer look at the more interesting parts:
import roslib; roslib.load_manifest('pi_head_tracking_3d_part1')
import rospy
import tf
from geometry_msgs.msg import PointStamped
from std_msgs.msg import Float64
import math

class PointHeadNode():
    def __init__(self):
        rospy.init_node('point_head_node', anonymous=True)
        dynamixel_namespace = rospy.get_namespace()
        rate = rospy.get_param('~rate', 1)
        r = rospy.Rate(rate)
        self.target_point = PointStamped()
        self.last_target_point = PointStamped()

        rospy.Subscriber('/target_point', PointStamped, self.update_target_point)

        self.head_pan_frame = 'head_pan_link'
        self.head_pan_pub = rospy.Publisher(dynamixel_namespace + 'head_pan_controller/command', Float64)
        self.head_tilt_frame = 'head_tilt_link'
        self.head_tilt_pub = rospy.Publisher(dynamixel_namespace + 'head_tilt_controller/command', Float64)

        self.tf = tf.TransformListener()
        self.tf.waitForTransform(self.head_pan_frame, self.head_tilt_frame, rospy.Time(), \
                                 rospy.Duration(5.0))
        rospy.sleep(1)

        self.reset_head_position()
        rospy.loginfo("Ready to accept target point")

        while not rospy.is_shutdown():
            rospy.wait_for_message('/target_point', PointStamped)
            if self.target_point == self.last_target_point:
                continue
            try:
                target_angles = self.transform_target_point(self.target_point)
            except (tf.Exception, tf.ConnectivityException, tf.LookupException):
                rospy.loginfo("tf Failure")
                continue
            self.head_pan_pub.publish(target_angles[0])
            self.head_tilt_pub.publish(target_angles[1])
            self.last_target_point = self.target_point
            rospy.loginfo("Setting Target Point:\n" + str(self.target_point))
            r.sleep()

    def update_target_point(self, msg):
        self.target_point = msg

    def reset_head_position(self):
        self.head_pan_pub.publish(0.0)
        self.head_tilt_pub.publish(0.0)
        rospy.sleep(3)

    def transform_target_point(self, target):
        pan_ref_frame = self.head_pan_frame
        tilt_ref_frame = self.head_tilt_frame

        self.tf.waitForTransform(pan_ref_frame, target.header.frame_id, rospy.Time(), \
                                 rospy.Duration(5.0))
        self.tf.waitForTransform(tilt_ref_frame, target.header.frame_id, rospy.Time(), \
                                 rospy.Duration(5.0))

        pan_target = self.tf.transformPoint(pan_ref_frame, target)
        pan_angle = math.atan2(pan_target.point.y, pan_target.point.x)

        tilt_target = self.tf.transformPoint(tilt_ref_frame, target)
        tilt_angle = math.atan2(tilt_target.point.z,
                                math.sqrt(math.pow(tilt_target.point.x, 2) + math.pow(tilt_target.point.y, 2)))

        return [pan_angle, tilt_angle]

if __name__ == '__main__':
    try:
        point_head = PointHeadNode()
        rospy.spin()
    except rospy.ROSInterruptException:
        pass
Now let's focus on some of the more important parts of the script beginning near the top:
import tf
from geometry_msgs.msg import PointStamped

Since the name of the game is frame transformations, we import the tf package. The PointStamped message type glues together a Point message type (x, y, z coordinates) with a Header message type (seq, stamp, frame_id) which therefore attaches the point to a particular frame of reference. Remember that you can always display the fields of a given message type using the rosmsg command:
$ rosmsg show geometry_msgs/PointStamped
Header header
  uint32 seq
  time stamp
  string frame_id
geometry_msgs/Point point
  float64 x
  float64 y
  float64 z
A PointStamped message is therefore exactly what we need if we want to specify a target location in a given frame, such as the base of the robot (/base_link), a current map (/map) or some other part of the robot (e.g. /left_wrist_joint). So we initialize the target point accordingly. We also keep track of the last target point so we can tell if the target has changed:
self.target_point = PointStamped()
self.last_target_point = PointStamped()

Next we subscribe to the /target_point topic on which we will publish PointStamped target locations:
rospy.Subscriber('/target_point', PointStamped, self.update_target_point)

The callback function update_target_point simply sets the local target_point variable to the published value:
def update_target_point(self, msg):
    self.target_point = msg

We use a pair of publishers to update the positions of the pan and tilt servos. We also select the two reference frames, head_pan_link and head_tilt_link, that will be used in transforming the target location into appropriate motions:
self.head_pan_frame = 'head_pan_link'
self.head_pan_pub = rospy.Publisher(dynamixel_namespace + 'head_pan_controller/command', Float64)
self.head_tilt_frame = 'head_tilt_link'
self.head_tilt_pub = rospy.Publisher(dynamixel_namespace + 'head_tilt_controller/command', Float64)

Next we initialize the tf listener and make sure that at least the pan and tilt frames are up and visible:
self.tf = tf.TransformListener()
self.tf.waitForTransform(self.head_pan_frame, self.head_tilt_frame, rospy.Time(), \
                         rospy.Duration(5.0))

Now we enter the main processing loop:
while not rospy.is_shutdown():
    rospy.wait_for_message('/target_point', PointStamped)
    if self.target_point == self.last_target_point:
        continue

First we wait on the /target_point topic to make sure we get a target message. If the current target is the same as the last, we skip the rest of the loop since there is nothing to do. Otherwise, we transform the target location into the pan and tilt link frames and get back the angles needed to rotate the head to point at the target:
try:
    target_angles = self.transform_target_point(self.target_point)
except (tf.Exception, tf.ConnectivityException, tf.LookupException):
    rospy.loginfo("tf Failure")
    continue

The function transform_target_point will be described shortly. Once we have our pan and tilt angles, we publish them to the dynamixel controllers to move the head:
self.head_pan_pub.publish(target_angles[0])
self.head_tilt_pub.publish(target_angles[1])
The real work is done by the transform_target_point function which, as mentioned earlier, is taken from the UA wubble head action script. Let's see how it works:
def transform_target_point(self, target):
    pan_ref_frame = self.head_pan_frame
    tilt_ref_frame = self.head_tilt_frame

The input to the function is the PointStamped target location. First we set our reference frames to the head_pan_frame and head_tilt_frame defined in the main script, namely /head_pan_link and /head_tilt_link. Since these are the frames in which head motion takes place, we have to transform the target's coordinates into these frames. It's a good idea to wait for tf to see both the reference frames (pan and tilt) and the target frame, so we do that next:
self.tf.waitForTransform(pan_ref_frame, target.header.frame_id, rospy.Time(), \
                         rospy.Duration(5.0))
self.tf.waitForTransform(tilt_ref_frame, target.header.frame_id, rospy.Time(), \
                         rospy.Duration(5.0))
Finally, we put tf to work by using the transformPoint method to map the target point from its own frame into the pan and tilt frames. First the pan frame:
pan_target = self.tf.transformPoint(pan_ref_frame, target)
pan_angle = math.atan2(pan_target.point.y, pan_target.point.x)

In the first line above, pan_target is assigned the (x, y, z) coordinates of the target in the reference frame attached to the head_pan_link. The corresponding pan angle is then computed in the second line from the projection of these coordinates in the horizontal x-y plane. In a similar fashion, the next two lines compute the tilt angle as follows:
tilt_target = self.tf.transformPoint(tilt_ref_frame, target)
tilt_angle = math.atan2(tilt_target.point.z,
                        math.sqrt(math.pow(tilt_target.point.x, 2) + math.pow(tilt_target.point.y, 2)))

With both angles computed, we return them to our main loop:
return [pan_angle, tilt_angle]
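To get a feel for these two formulas, here is a quick check with made-up numbers: a hypothetical target 1 meter ahead of the head, 1 meter to its left, and half a meter below it, with the coordinates assumed to be already expressed in the pan and tilt frames for simplicity.

import math

# Hypothetical target coordinates, assumed already expressed in the pan/tilt frames.
x, y, z = 1.0, 1.0, -0.5

pan_angle = math.atan2(y, x)                           # pan toward the target in the x-y plane
tilt_angle = math.atan2(z, math.sqrt(x**2 + y**2))     # tilt relative to the horizontal distance

print(round(pan_angle, 2))    # 0.79 rad: pan 45 degrees to the left
print(round(tilt_angle, 2))   # -0.34 rad: tilt down toward the floor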
This completes the point_head node. Once launched, the node will listen on the /target_point topic and when a target message is received, it will move the pan and tilt servos to point the head in the appropriate direction. To test the node, first make sure you have launched the dynamixel controllers if they are not already running:
$ roslaunch pi_head_tracking_3d_part1 dynamixels.launch
Then move into another terminal and launch the point_head node:
$ roslaunch pi_head_tracking_3d_part1 point_head.launch
Shortly after launching the point_head node, the pan and tilt servos should move to their neutral positions and you should see the following messages on screen:
process[ax12_controller/point_head_node-1]: started with pid [870]
[INFO] [WallTime: 1298908182.741524] Ready to accept target point
At this point, bring up another terminal, and try publishing a target on the /target_point topic. The following command sets the target location 1 meter forward, 1 meter to the left, and 0 meters upward relative to the base_link frame:
$ rostopic pub -1 /target_point geometry_msgs/PointStamped '{ header: {frame_id: base_link }, point: {x: 1.0, y: 1.0, z: 0.0} }'
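You can also publish the same target from a Python script instead of rostopic. A rough equivalent is sketched below; the node name look_at_point is just a placeholder, and leaving the header stamp at zero tells tf to use the most recent available transform, just as the rostopic command does.

import roslib; roslib.load_manifest('pi_head_tracking_3d_part1')
import rospy
from geometry_msgs.msg import PointStamped

rospy.init_node('look_at_point')
pub = rospy.Publisher('/target_point', PointStamped)
rospy.sleep(1)                    # give the point_head node a moment to connect

target = PointStamped()
target.header.frame_id = 'base_link'
target.point.x = 1.0              # 1 meter forward of the base
target.point.y = 1.0              # 1 meter to the left
target.point.z = 0.0              # level with the base

# With x = y = 1, the pan angle works out to atan2(1, 1), roughly 0.79 radians to
# the left (ignoring the small offset between the base and the pan servo).
pub.publish(target)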
If all goes well, your robot's head should pan to the left and tilt downward. Now try the almost identical command, but change the frame_id from base_link to torso_link:
$ rostopic pub -1 /target_point geometry_msgs/PointStamped '{ header: {frame_id: torso_link }, point: {x: 1.0, y: 1.0, z: 0.0} }'
You should notice the head tilt up ever so slightly. Why? Since the torso reference frame is slightly above the base reference frame, giving the same coordinates relative to this upward-shifted frame refers to a higher point in the world, and the robot's head tilts up accordingly.
You can move the head back to the neutral position using the command:
$ rostopic pub -1 /target_point geometry_msgs/PointStamped '{ header: {frame_id: head_pan_link }, point: {x: 100.0, y: 0.0, z: 0.0} }'
Note how in the command above, we have set the target relative to the head_pan_link frame. Since this frame is fixed relative to the rest of the robot below the head, setting a target far away and straight ahead (y = 0) centers the head regardless of the head's current position. On the other hand, note what happens if we repeatedly publish a fixed set of target coordinates relative to the head_link frame which is attached to the head itself and therefore moves with the head. (Note that 'rostopic pub -r 1' repeats the message once per second.)
$ rostopic pub -r 1 /target_point geometry_msgs/PointStamped '{ header: {frame_id: head_link }, point: {x: 5.0, y: 0.0, z: 0.01} }'
(Type Ctrl-C to stop the motion before it goes too far.) After issuing the command above, you should see the robot head tilt further and further up/back each time the message is published. Why? The target coordinates {x: 5.0, y: 0.0, z: 0.01} specify a location 5 meters forward and 0.01 meters (1 cm) upward from the frame attached to the head. When we first publish this target, the head therefore tilts a little bit upward (by roughly atan2(0.01, 5.0), or about 0.002 radians), but so does the head_link frame that is attached to the head. So when we publish the same coordinates again, they are relative to this new frame and again the head tilts "upward" relative to this new frame.
Head Pointer GUI
Pointing the head using the command line is a little tedious, so the tutorial package includes a simple GUI to make testing a little easier. The code for this GUI is based on Mike Ferguson's most excellent ArbotiX controller GUI. Small modifications were made to allow the selection of a reference frame and the specification of a target location relative to this frame. Once you have launched the dynamixels.launch file and the point_head.launch file, you can fire up the GUI using the command:
$ rosrun pi_head_tracking_3d_part1 head_pointer_gui.py
A small GUI window should then appear.
Use the Select Frame pull down menu to select the reference frame in which you want to specify the target location. Then enter the x, y and z coordinates of the target and click the Point Head button. To re-center the head, click the Reset Position button.
Looking Ahead to Part 2
In Part 2 of the tutorial, we will combine the head pointing nodes we have already developed with 3D visual targets using the Kinect RGB-D camera together with OpenNI skeleton tracking.