ROS by Example: Head Tracking in 3D (Part 2)
NOTE: This tutorial is several years out of date and the specific commands for installing software and using ROS have changed since then. An up-to-date version of this tutorial can be found in the book ROS By Example Volume 2: Packages and Programs for Advanced Robot Behaviors, available as a downloadable PDF and in paperback on Lulu.com.
In the previous tutorial we learned how to use the ROS tf package to point a pan-and-tilt head toward a location in space relative to any frame of reference. We will now develop a ROS node that can track a moving 3D point by smoothly panning and tilting the camera.
Target locations can be points viewed through a depth camera like the Kinect, or they can be any other set of (x, y, z) coordinates relative to some reference frame. For example, it might be useful to have the robot follow its gripper while reaching for an object. Since the gripper's coordinates are easily obtained using tf and the URDF model of the robot, there is no need to actually visually identify the gripper using the camera--we can point the camera directly toward the gripper's 3D coordinates, which are simply (0, 0, 0) in the gripper reference frame. tf then takes care of the rest.
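As a concrete illustration, the head tracking node we develop later in this tutorial listens for a PointStamped message on the /target_point topic, so "follow the gripper" boils down to publishing the origin of the gripper frame on that topic. The sketch below is only an illustration and assumes a frame called gripper_link; substitute whatever your own URDF calls the gripper frame:

    #!/usr/bin/env python
    """ Illustrative sketch only: publish the gripper frame's origin as a tracking target. """
    import rospy
    from geometry_msgs.msg import PointStamped

    rospy.init_node('gripper_target_publisher')
    pub = rospy.Publisher('/target_point', PointStamped)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        target = PointStamped()
        target.header.frame_id = 'gripper_link'   # placeholder: use your URDF's gripper frame
        target.header.stamp = rospy.Time.now()
        # The point defaults to (0, 0, 0), i.e. the origin of the gripper frame; tf handles the rest.
        pub.publish(target)
        rate.sleep()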
As another example, the ROS openni_tracker package uses the Kinect to track and publish the coordinate frames attached to the skeletal joints of an operator standing in front of the camera. This means that our robot's head can track a given joint without actually identifying the joint in the regular RGB image (which would be a fairly difficult task). In fact, since skeleton tracking relies only on the infrared depth information returned by the Kinect, tracking can continue even in the dark!
In this tutorial, we will cover the following steps:
System Requirements
This tutorial has only been tested using ROS Electric and Ubuntu 10.04.
A Modified TurtleBot
In Part 1 we used a URDF model for a fictitious KinectBot. In this tutorial, we will use a TurtleBot that has been modified to place the Kinect on top of a pair of pan and tilt Dynamixel servos as shown in the pictures above and to the right. However, all you need to follow the tutorial is the pan and tilt head as well as a URDF model describing your setup.
If you have a TurtleBot and would like to add a pan and tilt head using Dynamixel servos, you can either make your own setup as shown on the right or you can purchase a ready-made turret. There are two Dynamixel servo controllers that work well with ROS: the ArbotiX controller used on the Mini Max robot, and the USB2Dynamixel controller used with the dynamixel_motor stack and the Crustcrawler Smart Arm. We will use the USB2Dynamixel controller in this tutorial to be consistent with Part 1. You can find existing tutorials for using the ArbotiX controller on the Mini Max Wiki page.
Downloading the Tutorial Files
All the files needed for the tutorial can be downloaded via SVN. Move into your personal ROS path (e.g. ~/ros) and execute the following command:
$ svn co http://pi-robot-ros-pkg.googlecode.com/svn/trunk/pi_tutorials/pi_head_tracking_3d_part2
Now take a look at the manifest file:
$ cd pi_head_tracking_3d_part2
$ cat manifest.xml
<description>Head Tracking in 3D Part 2</description>
<author>Patrick Goebel</author>
<license>BSD</license>
<url>https://pirobot.org/blog/0023</url>
The openni_camera and openni_tracker packages come from the openni_kinect stack, while the dynamixel packages live in the newly released dynamixel_motor stack. The dynamixel_motor stack replaces the earlier ax12_controller_core package, which is now deprecated in favor of dynamixel_motor. We will also need the turtlebot_description package and the modifications made for the pan and tilt head found in the pi_tb_description package. Let's get these packages now.
$ sudo apt-get install ros-electric-openni-kinect
$ sudo apt-get install ros-electric-dynamixel-motor
$ sudo apt-get install ros-electric-turtlebot
Next get the pan and tilt head modifications from the pi_tb_description package:
$ cd ~/ros (or wherever your personal ROS directory lives)
$ svn co http://pi-robot-ros-pkg.googlecode.com/svn/trunk/pi_tb_description
Building the Head Tracking Package
To check that you have everything in place, try building the head tracking package as follows:
$ rosmake --rosdep-install pi_head_tracking_3d_part2
If you get any complaints about missing dependencies, install them now and try the above command again.
Testing the TurtleBot's URDF Model
Before firing up our robot for real, let's test the URDF model in RViz to make sure everything looks OK. Launch the test_urdf.launch file and RViz using the following commands:
$ roslaunch pi_head_tracking_3d_part2 test_urdf.launch
$ roscd pi_head_tracking_3d_part2
$ rosrun rviz rviz -d tutorial.vcg
If everything goes well, you should see the robot appear in RViz along with a smaller popup window containing slider controls for the joints. Sliding one of the servo joint controls should cause the TurtleBot's head to move accordingly in RViz.
When you are satisfied that the model is OK, type Ctrl-C in the terminal where you launched the test_urdf.launch file.
Configuring the Servos
The dynamixel_controllers package works in a manner similar to the one used on the Willow Garage PR2: first a controller manager node is launched that connects to the Dynamixel bus (in our case a USB2Dynamixel controller). The manager then spawns a number of individual controllers, one for each servo on the bus. The launch file dynamixels.launch in the launch subdirectory shows how to set this up when we have two Dynamixel servos on the bus with hardware IDs 1 and 2:
type="controller_manager.py" required="true" output="screen">
namespace: pi_dynamixel_manager
serial_ports:
dynamixel_ax12:
port_name: "/dev/ttyUSB0"
baud_rate: 1000000
min_motor_id: 1
max_motor_id: 2
update_rate: 20
type="controller_spawner.py"
args="--manager=pi_dynamixel_manager
--port=dynamixel_ax12
--type=simple
head_pan_controller
head_tilt_controller"
output="screen" />
Looking at the launch file, we see that the USB2Dynamixel controller is assumed to be on port /dev/ttyUSB0 and the servo IDs range from 1 to 2. Change these values if necessary for your setup. (For example, on my own TurtleBot I also have a Dynamixel arm that uses servo IDs 1-5, while the pan and tilt servos actually have IDs 6 and 7.) The launch file uses the configuration file dynamixel_params.yaml found in the config subdirectory. That file looks like this:
dynamixels: ['head_pan_joint', 'head_tilt_joint']

head_pan_controller:
    controller:
        package: dynamixel_controllers
        module: joint_position_controller
        type: JointPositionController
    joint_name: head_pan_joint
    joint_speed: 2.0
    motor:
        id: 1
        init: 512
        min: 0
        max: 1024

head_tilt_controller:
    controller:
        package: dynamixel_controllers
        module: joint_position_controller
        type: JointPositionController
    joint_name: head_tilt_joint
    joint_speed: 2.0
    motor:
        id: 2
        init: 512
        min: 300
        max: 800

First we define a parameter called dynamixels that simply lists the names of our servos. Then, for each servo, we specify the type of controller that will control it as well as its hardware ID, initial position value, and its min and max position values. The head tilt controller is given less than the full range since the head cannot tilt all the way forward or back without hitting the top of the torso. In this file, the init/min/max numbers are given in servo ticks, which range from 0 to 1023 for the AX-12s. (We also specify limits on the head tilt joint in the robot's URDF file, giving a min/max of ±1.57 radians, which is 90 degrees either way.)
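To relate the tick values above to the radian commands we will send in the next section, here is a rough conversion helper. It assumes the AX-12's 0-1023 tick range spans roughly 300 degrees centered on tick 512; the dynamixel_controllers package performs this conversion for you under the hood, so this sketch is only to make the relationship explicit:

    import math

    AX12_TICKS = 1024                  # AX-12 position values run from 0 to 1023
    AX12_RANGE = math.radians(300.0)   # the full tick range covers roughly 300 degrees
    AX12_CENTER = 512                  # tick value at the center of travel

    def ticks_to_radians(ticks):
        """ Angle (in radians) relative to the center position. """
        return (ticks - AX12_CENTER) * AX12_RANGE / AX12_TICKS

    def radians_to_ticks(angle):
        """ AX-12 position value for an angle (in radians) relative to center. """
        return int(round(AX12_CENTER + angle * AX12_TICKS / AX12_RANGE))

    # For example, the tilt limits of 300 and 800 ticks used above work out to
    # roughly -1.08 and +1.47 radians either side of center.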
Testing the Servos
To test the pan and tilt servos, first connect your servos and USB2Dynamixel to a power source, then make sure your USB2Dynamixel is connected to a USB port on your computer. Once connected, issue the following command to see which USB serial devices are present:
$ ls /dev/ttyUSB*
Hopefully you will see something like the following output:
/dev/ttyUSB0
If instead you get back the message:
ls: cannot access /dev/ttyUSB*: No such file or directory
then your USB2Dynamixel has not been recognized. Try plugging it into a different USB port or using a different cable.
If you have no other USB serial devices attached, your USB2Dynamixel should be on /dev/ttyUSB0 and the default launch file will work without modification. If it is on a different numbered USB port, edit the dynamixels.launch file in the launch directory and change the port accordingly. Similarly, if your servos have IDs other than 1 and 2, edit the dynamixel_params.yaml and dynamixels.launch files as needed.
When this is all done, fire up the dynamixels.launch file:
$ roslaunch pi_head_tracking_3d_part2 dynamixels.launch
You should see a number of startup messages that look something like this:
process[robot_state_publisher-1]: started with pid [19394]
process[dynamixel_controller/dynamixel_manager-2]: started with pid [19395]
process[dynamixel_controller/dynamixel_controller_spawner_ax12-3]: started with pid [19396]
process[dynamixel_controller/dynamixel_joint_states_publisher-4]: started with pid [19397]
process[relax_all_servos-5]: started with pid [19399]
process[world_base_broadcaster-6]: started with pid [19404]
[INFO] [WallTime: 1313701889.865474] Pinging motor IDs 1 through 2...
[INFO] [WallTime: 1313701889.865756] dynamixel_ax12 controller_spawner: waiting for controller_manager pi_dynamixel_manager to startup in /dynamixel_controller/ namespace...
[INFO] [WallTime: 1313701889.869122] Found motors with IDs: [1, 2].
[INFO] [WallTime: 1313701889.871879] Starting Dynamixel Joint State Publisher at 10Hz
[INFO] [WallTime: 1313701889.968588] There are 2 AX-12+ servos connected
[INFO] [WallTime: 1313701889.968957] Dynamixel Manager on port /dev/ttyUSB0 initialized
[INFO] [WallTime: 1313701890.179728] dynamixel_ax12 controller_spawner: All services are up, spawning controllers...
[INFO] [WallTime: 1313701890.267504] Controller head_pan_controller successfully started.
[INFO] [WallTime: 1313701890.373233] Controller head_tilt_controller successfully started.
Once the dynamixel controllers are up and running, bring up a new terminal and send a couple of simple pan and tilt commands. The first command should pan the head to the left through 1 radian, or about 57 degrees:
$ rostopic pub -1 /head_pan_controller/command std_msgs/Float64 -- 1.0
Re-center the servo with the command:
$ rostopic pub -1 /head_pan_controller/command std_msgs/Float64 -- 0.0
Now try tilting the head downward half a radian (about 29 degrees):
$ rostopic pub -1 /head_tilt_controller/command std_msgs/Float64 -- -0.5
And bring it back up:
$ rostopic pub -1 /head_tilt_controller/command std_msgs/Float64 -- 0.0
To change the speed of the head pan servo in radians per second, use the set_speed service:
$ rosservice call /head_pan_controller/set_speed 1.0
$ rostopic pub -1 /head_pan_controller/command std_msgs/Float64 -- 1.0
To relax a servo so that you can move it by hand, use the torque_enable service:
$ rosservice call /head_pan_controller/torque_enable False
To relax all the servos at once, use the relax_all_servos.py utility script:
$ rosrun pi_head_tracking_3d_part2 relax_all_servos.py
If you look at the code inside relax_all_servos.py (it lives in the nodes subdirectory), you will see that relaxing is done with a torque_enable(False) service call to each servo. You will also notice a set_speed service call that sets each servo to a relatively slow speed so that you are not surprised by a fast motion the next time you send a position command.
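In outline, the relaxing logic looks something like the following sketch (a simplified version, not the script itself; relax_all_servos.py in the nodes directory is the reference):

    import rospy
    from dynamixel_controllers.srv import SetSpeed, TorqueEnable

    rospy.init_node('relax_servos_sketch')
    for controller in ['head_pan_controller', 'head_tilt_controller']:
        rospy.wait_for_service('/' + controller + '/set_speed')
        rospy.wait_for_service('/' + controller + '/torque_enable')
        set_speed = rospy.ServiceProxy('/' + controller + '/set_speed', SetSpeed)
        torque_enable = rospy.ServiceProxy('/' + controller + '/torque_enable', TorqueEnable)
        set_speed(0.3)          # leave the servo at a gentle speed for its next position command
        torque_enable(False)    # disable torque so the servo can be moved by hand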
The tf Head Tracking Node
The Python script called tf_head_tracker.py in the nodes directory of the tutorial package defines a ROS node that will cause the head to follow any 3D point relative to a given reference frame. The node assumes that the target is being published as a PointStamped message on the /target_point topic, but this can be remapped in your launch file or on the command line.
In Part 1 of the tutorial, we simply pointed the head to a fixed target and then stopped. To follow a moving target, it turns out that simply repeating the process quickly is not the best approach. The problem is that the tracking movement will tend to be jerky as the servos start and stop. The head tracking method described here results in smoother head motion.
The basic flow of the tf_head_tracker node is as follows:
- Initialize the node with a set of tracking parameters that determine how sensitive tracking is to movement of the target.
- Connect to the set_speed and torque_enable services for each servo.
- Define a pair of position publishers, one for each servo.
- Wait for the first target message to appear on the /target_point topic.
- Subscribe to the /target_point topic and point the callback to the update_head_position() function.
- Enter the main tracking loop:
  - project the target onto the camera reference frame
  - set the speed of each servo proportional to the distance of the target from the center of view
  - publish a target position for each servo ahead of or behind its current position
The tracking speed and target positions are set in the update_head_position() function. We use a form of speed-based tracking: the idea is to set the servo goal positions ahead of or behind the actual target while making the servo speeds proportional to the displacement of the target from the center of the camera frame. As a result, when the target is centered in the camera frame, the servo speeds are set to zero and the head stops moving.
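Boiled down to a few lines, the rule for a single axis looks something like this (a simplified sketch to show the idea, not the actual code; sign conventions depend on how your servos are mounted):

    def track_axis(displacement, current_angle, k=1.5, lead_angle=0.5,
                   threshold=0.01, max_speed=0.5):
        """ Return (servo_speed, servo_goal) for one axis of a speed-based tracker. """
        if abs(displacement) <= threshold:
            return 0.0, current_angle                  # target centered: stop this axis
        speed = min(max_speed, k * abs(displacement))  # speed grows with the tracking error
        if displacement > 0:
            goal = current_angle - lead_angle          # aim a fixed lead past the target
        else:
            goal = current_angle + lead_angle
        return speed, goal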
Here now are the key parts of the tf_head_tracker.py script and what they do:
self.rate = rospy.get_param('~rate', 10)
r = rospy.Rate(self.rate)
This is the rate at which we update the servo speeds and positions. Something around 10-20 Hz is generally a good value.
set_speed_service = '/' + controller + '/set_speed'
rospy.wait_for_service(set_speed_service)
self.servo_speed[name] = rospy.ServiceProxy(set_speed_service, SetSpeed, persistent=True)
Note that when we set up the set_speed service for each servo, we use the flag persistent=True. Without this flag, we would have to reconnect to the set_speed service every time we pass through our update loop (10-20 times per second). Services are generally not designed to handle such high rates of connects and disconnects, so we create persistent connections to avoid trouble.
self.default_joint_speed = rospy.get_param('~default_joint_speed', 0.3)
self.max_joint_speed = rospy.get_param('~max_joint_speed', 0.5)
self.lead_target_angle = rospy.get_param('~lead_target_angle', 0.5)
- default_joint_speed determines how fast the servos will move when re-centering the head. The default value of 0.3 radians per second is a reasonable speed without being too fast.
- max_joint_speed is set to protect your servos from moving too fast. This is especially important when testing for the first time.
- lead_target_angle determines how far ahead of or behind the actual target we set the servo goal positions. Using values much less than 0.5 tends to result in jerky motion.
self.pan_threshold = rospy.get_param('~pan_threshold', 0.01)
self.tilt_threshold = rospy.get_param('~tilt_threshold', 0.01)
self.k_pan = rospy.get_param('~k_pan', 1.5)
self.k_tilt = rospy.get_param('~k_tilt', 1.5)
- The pan_threshold and tilt_threshold parameters specify how far the target needs to be from the center of the camera frame (in radians, after the meters-to-radians conversion described below) before we adjust the head position.
- The k_pan and k_tilt parameters determine how fast the servos attempt to center the target in the field of view. Values between 0.5 and 2.5 tend to work well.
while not rospy.is_shutdown():
    try:
        """ Only update the pan speed if it differs from the last value """
        if self.last_pan_speed != self.pan_speed:
            self.servo_speed[self.head_pan_joint](self.pan_speed)
            self.last_pan_speed = self.pan_speed
        self.servo_position[self.head_pan_joint].publish(self.pan_position)
    except:
        """ If we run into any exceptions, momentarily stop the head movement by setting
            the target pan position to the current position. """
        try:
            current_pan_position = self.joint_state.position[self.joint_state.name.index(self.head_pan_joint)]
            self.servo_position[self.head_pan_joint].publish(current_pan_position)
            rospy.loginfo("Servo SetSpeed Exception!")
            rospy.loginfo(sys.exc_info())
        except:
            pass

    try:
        """ Only update the tilt speed if it differs from the last value """
        if self.last_tilt_speed != self.tilt_speed:
            self.servo_speed[self.head_tilt_joint](self.tilt_speed)
            self.last_tilt_speed = self.tilt_speed
        self.servo_position[self.head_tilt_joint].publish(self.tilt_position)
    except:
        """ If we run into any exceptions, momentarily stop the head movement by setting
            the target tilt position to the current position. """
        try:
            current_tilt_position = self.joint_state.position[self.joint_state.name.index(self.head_tilt_joint)]
            self.servo_position[self.head_tilt_joint].publish(current_tilt_position)
            rospy.loginfo("Servo SetSpeed Exception!")
            rospy.loginfo(sys.exc_info())
        except:
            pass

    r.sleep()
This is our main update loop. Note that we only call the set_speed service for a servo when the newly computed speed differs from the value we sent on the previous cycle (stored in last_pan_speed and last_tilt_speed). If we run into any set_speed exceptions, we momentarily stop the tracking by setting the target pan and tilt positions to the current positions. The target pan and tilt positions, self.pan_position and self.tilt_position, are provided by the update_head_position() callback, which we turn to next.
The first block in the update_head_position() callback looks like this:
try:
    self.tf.waitForTransform(self.camera_link, target.header.frame_id, rospy.Time.now(), rospy.Duration(1.0))
    camera_target = self.tf.transformPoint(self.camera_link, target)
except (tf.Exception, tf.ConnectivityException, tf.LookupException):
    return
First we wait for the current transform between the camera frame and the target frame. Then we use one of tf's more powerful functions, transformPoint(), to project the target onto the camera frame. This gives us the target's coordinates (in meters) in the camera frame of reference.
pan = -camera_target.point.y
tilt = -camera_target.point.z
The plane of the camera image is in the y-z plane, so the pan angle is obtained from the y coordinate and the tilt angle from the z coordinate.
distance = float(abs(camera_target.point.x))
The distance of the target from the camera is measured along the x direction.
try:
    pan /= distance
    tilt /= distance
except:
    pan /= 0.5
    tilt /= 0.5
To convert meters to radians, we divide the pan and tilt offsets by the distance to the target; this is just the small-angle approximation to the true bearing angle. If we run into an exception, which could be caused by distance being 0 or NaN, we fall back to assuming the target is 0.5 meters away.
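To get a feel for how good this approximation is, consider a target 20 cm off-axis at a range of 1 meter:

    import math

    offset, distance = 0.2, 1.0            # target 20 cm off-axis at 1 m range
    approx = offset / distance             # small-angle shortcut: 0.2 rad (about 11.5 degrees)
    exact = math.atan2(offset, distance)   # true bearing: about 0.197 rad
    # The difference is under 0.003 rad here, which is negligible for head tracking.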
if abs(pan) > self.pan_threshold:
    """ Set the pan speed proportional to the horizontal displacement of the target """
    self.pan_speed = trunc(min(self.max_joint_speed, max(ZERO_SPEED, self.k_pan * abs(pan))), 2)
    """ Set the target position ahead or behind the current position """
    try:
        current_pan = self.joint_state.position[self.joint_state.name.index(self.head_pan_joint)]
    except:
        return
    if pan > 0:
        self.pan_position = max(self.min_pan, current_pan - self.lead_target_angle)
    else:
        self.pan_position = min(self.max_pan, current_pan + self.lead_target_angle)
else:
    self.pan_speed = ZERO_SPEED
Here is where we implement the actual speed-based tracking algorithm. The code block above sets the pan speed (self.pan_speed) proportional to the pan displacement (pan) that we just computed. Note that we also put min/max bounds on the speed and truncate it to two decimal places. The reason we truncate the speed is that we do not want to chase very small speed changes when calling the set_speed service or we will tend to run into exceptions. We then set the target pan angle to the current pan angle plus or minus the lead angle, self.lead_target_angle, depending on which way the target is displaced. The same procedure is repeated for the tilt speed and angle.
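The trunc() call above is a small helper defined in the script rather than a Python built-in; it chops a float to a fixed number of decimal places without rounding. A minimal version might look like the following (the script's own definition is the reference):

    def trunc(value, decimals):
        """ Truncate a float to the given number of decimal places without rounding. """
        factor = 10.0 ** decimals
        return int(value * factor) / factor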
Following a Fictitious Target
In the nodes subdirectory of the tutorial package you will find a file called pub_3d_target.py. This node moves a fictitious point relative to the top plate (plate_3_link in the URDF model) and publishes a corresponding PointStamped message on the /target_point topic. You can therefore use this node to test tracking before trying it with a real target.
The pub_3d_target.py script moves the target point back and forth along the x, y and z dimensions. The head should therefore follow a left-right, up-down motion to follow the target. To try it out, execute the following commands:
First bring our servos online:
$ roslaunch pi_head_tracking_3d_part2 dynamixels.launch
Now set the fictitious target in motion:
$ rosrun pi_head_tracking_3d_part2 pub_3d_target.py
Bring up RViz so that you can see the moving target, which should appear as a green sphere:
$ roscd pi_head_tracking_3d_part2
$ rosrun rviz rviz -d fake_target.vcg
Finally, start the head tracking node:
$ roslaunch pi_head_tracking_3d_part2 tf_head_tracker.launch
When you are ready to stop tracking, simply type Ctrl-C in the tf_head_tracker window and the pan and tilt servos should re-center the head.
If you look at the tf_head_tracker.launch file, you'll see a few of the critical parameters that determine head tracking performance:
rate: 10
pan_threshold: 0.01
tilt_threshold: 0.01
k_pan: 1.0
k_tilt: 0.9
Of these parameters, k_pan and k_tilt are perhaps the most important since they determine how quickly the tracking follows the target's actual motion. If these parameters are too small, the head will always lag the target noticeably. If they are too large, the tracking will continually overshoot and then try to correct, leading to oscillations. Try setting different values in the launch file while tracking the fictitious target.
When you are finished with the fictitious target, close down RViz and type Ctrl-C in the terminal in which you ran pub_3d_target.py, or type the following command in another window:
$ rosnode kill pub_3d_target
You can leave the dynamixels.launch window running, as we will want the servos online for the next section.
Tracking a Visual Target using the pi_face_tracker Package
In an earlier article we introduced the pi_face_tracker for following faces and other visual targets. If you have already installed the pi_face_tracker package, be sure to get the latest update since it has been revised to publish a 3D target for the purposes of this tutorial.
Since the pi_face_tracker node publishes the tracked target as a PointStamped message on the /target_point topic, we can immediately run it alongside our tf_head_tracker node and begin tracking visual targets with our pan and tilt head.
If you don't already have the pi_face_tracker package, you can get it now as follows. Note that you will first need to get Eric Perko's uvc_cam package:
$ cd ~/ros (or wherever your personal ROS directory lives)
$ git clone https://github.com/ericperko/uvc_cam.git
$ svn co http://pi-robot-ros-pkg.googlecode.com/svn/trunk/pi_vision
$ rosmake --rosdep-install pi_vision
If you haven't done so already, make sure you have installed the ROS openni_kinect stack using the command:
$ sudo apt-get install ros-electric-openni-kinect
Once your Kinect is powered up and connected to your computer's USB port, fire up the openni_camera node as follows:
$ roslaunch pi_head_tracking_3d_part2 openni_node.launch
If successful, you should see some output messages like this:
process[openni_manager-1]: started with pid [19084]
process[openni_camera-2]: started with pid [19085]
[ INFO] [1320933284.325941549]: [/openni_camera] Number devices connected: 1
[ INFO] [1320933284.326273067]: [/openni_camera] 1. device on bus 001:03 is a Xbox NUI Camera (2ae) from Microsoft (45e) with serial id 'B00362222624036B'
[ WARN] [1320933284.328264288]: [/openni_camera] device_id is not set! Using first device.
[ INFO] [1320933284.386437662]: [/openni_camera] Opened 'Xbox NUI Camera' on bus 1:3 with serial number 'B00362222624036B'
[ INFO] [1320933284.396661898]: rgb_frame_id = 'kinect_rgb_optical_frame'
[ INFO] [1320933284.398186460]: depth_frame_id = 'kinect_depth_optical_frame'
The openni_camera node publishes data on a number of topics. You can list them using the ROS rostopic command:
$ rostopic list /camera
In particular, the RGB image is published on the /camera/rgb/image_color topic which you can view using the ROS image_view node:
$ rosrun image_view image_view image:=/camera/rgb/image_color
After a short delay, you should see the color video from the camera. If you do not see the video, type Ctrl-C in the openni_node.launch window, unplug your Kinect from your computer's USB port, power cycle the Kinect, plug it back in to your USB port, then fire up the openni_node.launch file again.
Once you see the video from the Kinect, you can type Ctrl-C in the image_view terminal and launch the pi_face_tracker node:
$ roslaunch pi_face_tracker face_tracker_kinect.launch auto_face_tracking:=False
After a brief delay, you should see the video window appear. Note that we have turned off automatic face tracking on the command line since it is easier to do our initial testing with manually selected targets.
Using your mouse, draw a rectangle around a target in the video image. You should see a number of green feature points appear over the target. At this point the pi_face_tracker node is publishing a PointStamped message on the /target_point topic corresponding to the 3D centroid of the feature cluster. You can verify this in another window using rostopic echo:
$ rostopic echo /target_point
Now that our target is selected, launch the tf_head_tracker node in another terminal:
$ roslaunch pi_head_tracking_3d_part2 tf_head_tracker.launch
After a brief delay, the head should start tracking the feature cluster seen in the video image. At this point you can use your mouse to select other targets and the head should move to center on the new features. If you hold something in your hand, such as a picture, you can select the picture and then move it in front of the camera and the head should track the movement. Note that tracking speed is limited mostly by the speed of the Lucas-Kanade optical flow algorithm used in the pi_face_tracker node. Apparently the new LK algorithm in OpenCV 2.3 is significantly faster and will be added to the pi_face_tracker package once it has been tested.
Finally, if your Kinect has a good view of your face, you can try the auto_face_tracking mode by hitting the "a" key on your keyboard when the video window is brought to the foreground (click on the window's title bar). Hitting the "a" key again will take you out of auto face tracking mode. Hitting the "c" key will clear the current feature cluster. See the pi_face_tracker wiki documentation for more information.
Tracking Skeleton Joints using openni_tracker
The openni_tracker package publishes the 3D transformations between the Kinect and the skeleton joints of a user standing in front of the camera. The pi_head_tracking_3d_part2 tutorial package includes a node called skeleton_joint_publisher.py that uses tf to listen for a given skeleton joint frame and then re-publishes it as a PointStamped message on the /target_point topic. We can then use the tf_head_tracker node to have our robot track any one of the user's joints.
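In outline, all such a node has to do is look up the transform from some fixed frame to the chosen joint frame and republish the translation as a PointStamped message. The sketch below conveys the idea only; the frame names are illustrative (openni_tracker typically appends a user number to its joint frames, e.g. head_1), and the real skeleton_joint_publisher.py also handles the joint name and mirror_skeleton parameters:

    import rospy
    import tf
    from geometry_msgs.msg import PointStamped

    rospy.init_node('skeleton_joint_publisher_sketch')
    listener = tf.TransformListener()
    pub = rospy.Publisher('/target_point', PointStamped)
    fixed_frame = rospy.get_param('~fixed_frame', 'openni_depth_frame')   # illustrative frame name
    joint_frame = rospy.get_param('~joint_frame', 'head_1')               # illustrative frame name
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        try:
            (trans, rot) = listener.lookupTransform(fixed_frame, joint_frame, rospy.Time(0))
        except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
            rate.sleep()
            continue
        target = PointStamped()
        target.header.frame_id = fixed_frame
        target.header.stamp = rospy.Time.now()
        target.point.x, target.point.y, target.point.z = trans
        pub.publish(target)
        rate.sleep()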
To try it out, execute the following commands. You can skip any step that you have already launched:
Fire up the Dynamixels:
$ roslaunch pi_head_tracking_3d_part2 dynamixels.launch
Now the Kinect:
$ roslaunch pi_head_tracking_3d_part2 openni_node.launch
Next bring up the skeleton_joint_publisher. You can edit the launch file to change the joint that is being tracked--the head joint is tracked by default:
$ roslaunch pi_head_tracking_3d_part2 skeleton_joint_publisher.launch
Finally, launch the head tracker node:
$ roslaunch pi_head_tracking_3d_part2 tf_head_tracker.launch
At this point, stand in front of the Kinect a good 10 feet or so back (you want the Kinect to be able to see your entire body) and assume the "Psi pose". Once the openni_tracker node gets a lock on you, you should see the Kinect tilt up to center on your head (unless you changed the tracked joint in the launch file). You can then move around slowly and the Kinect should pan and tilt to follow you.
Once you have it working, you can try tracking other joints such as your left or right hand by editing the skeleton_joint_publisher.launch file. Valid joint names are: head, neck, torso, left_shoulder, left_elbow, left_hand, right_shoulder, right_elbow, right_hand, left_hip, left_knee, left_foot, right_hip, right_knee, right_foot. If you find that left and right seem to be reversed, change the mirror_skeleton parameter in the launch file from True to False.