First, install the base of ROS by following Willow Garage's instructions. The robots we will be using this semester have the following ROS stacks (as Ubuntu/Debian packages) installed:
ros-diamondback-brown-perception ros-diamondback-brown-remotelab ros-diamondback-common ros-diamondback-common-msgs ros-diamondback-control ros-diamondback-diagnostics ros-diamondback-driver-common ros-diamondback-executive-smach ros-diamondback-geometry ros-diamondback-image-common ros-diamondback-image-pipeline ros-diamondback-image-transport-plugins ros-diamondback-joystick-drivers ros-diamondback-laser-pipeline ros-diamondback-multimaster-experimental ros-diamondback-navigation ros-diamondback-openni-kinect ros-diamondback-perception-pcl ros-diamondback-pr2-common ros-diamondback-pr2-controllers ros-diamondback-pr2-mechanism ros-diamondback-robot-model ros-diamondback-ros ros-diamondback-ros-comm ros-diamondback-slam-gmapping ros-diamondback-turtlebot ros-diamondback-turtlebot-apps ros-diamondback-turtlebot-robot ros-diamondback-turtlebot-viz ros-diamondback-vision-opencv
Any of these stacks can be installed using the command below (substituting "[stackname]" with the actual stack name):
> sudo apt-get install ros-diamondback-[stackname]
Next, set up your ROS package path to include a folder in your home directory. This will allow you to create and edit packages within this folder (and any subfolders thereof) and have them visible to ROS. (Make sure to substitute your username for [username].)
> cd ~ && mkdir ros
> echo "source /opt/ros/diamondback/setup.bash; export ROS_PACKAGE_PATH=/home/[username]/ros:$ROS_PACKAGE_PATH" >> .bashrc
Based on the CMU CMVision library, the cmvision package performs segmentation of solid colored regions (or "blobs") in an image, reported as bounding boxes. The cmvision node thresholds and groups pixels in an image based on given YUV color ranges to estimate blobs. To calibrate color ranges, the colorgui node is included within cmvision to build color ranges from selected pixels in published image topics.
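The thresholding step can be illustrated with a short sketch. This is not cmvision's actual implementation (which is C++ and uses efficient run-length encoding); the function names and the plain nested-list image representation are hypothetical, chosen only to show the idea of labeling pixels that fall inside a calibrated YUV range.

```python
# Illustrative sketch of cmvision-style thresholding (NOT the real
# cmvision code): mark every pixel whose (Y, U, V) values fall inside
# a calibrated color range.

def in_yuv_range(pixel, lo, hi):
    """pixel, lo, hi are (Y, U, V) triples; inclusive range check."""
    return all(l <= p <= h for p, l, h in zip(pixel, lo, hi))

def threshold_image(yuv_image, lo, hi):
    """Return a boolean mask over a 2D list of YUV pixels.

    True marks pixels belonging to the color; cmvision then groups
    adjacent marked pixels into blobs and reports their bounding boxes.
    """
    return [[in_yuv_range(px, lo, hi) for px in row] for row in yuv_image]
```

A blob is then just a connected region of `True` entries in the mask; its bounding box is the min/max row and column of that region.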
Using Subversion, check out the ROS cmvision package from ros.org into your ros directory (assuming ~/ros is your ROS working directory):
> cd ~/ros
> svn co https://code.ros.org/svn/wg-ros-pkg/branches/trunk_cturtle/vision/cmvision
> rosmake cmvision
For color blobfinding, ROS uses the CMVision library to perform color segmentation of an image and find relatively solid colored regions (or "blobs"), as illustrated below. The cmvision package in ROS consists of two nodes: cmvision, the blobfinder itself, and colorgui, a tool for calibrating color ranges.
Both of these nodes receive input from the camera by subscribing to an image topic.
The blobfinder provides a bounding box around each image region containing pixels within a specified color range. These color ranges are specified in a color calibration file, or colorfile, such as in the "colors.txt" example below. cmvision colorfiles contain two sections, with the headers [Colors] and [Thresholds].
The following example “colors.txt” illustrates the format of the colorfile for colors “Red”, “Green”, and “Blue”:
[Colors]
(255,  0,  0) 0.000000 10 Red
(  0,255,  0) 0.000000 10 Green
(  0,  0,255) 0.000000 10 Blue

[Thresholds]
( 25:164, 80:120,150:240)
( 20:220, 50:120, 40:115)
( 15:190,145:255, 40:120)
In this colorfile, the color "Red" has the integer identifier "(255,0,0)" (or, in hexadecimal, "0x00FF0000") and YUV thresholds "(25:164,80:120,150:240)". These thresholds specify a range in the YUV color space: any pixel whose YUV values fall within this range will be labeled with the given blob color. Note that YUV and RGB are very different color representations; refer to the Wikipedia YUV entry and the Appendix for details.
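To get a feel for the relationship between the two color spaces, here is a sketch of an 8-bit RGB-to-YUV conversion using the common BT.601 full-range coefficients. This is an assumption for illustration: colorgui's exact conversion may use slightly different coefficients, so always calibrate with colorgui rather than computing thresholds by hand.

```python
# Approximate 8-bit RGB -> YUV conversion (BT.601 full-range YCbCr).
# Illustrative only: colorgui's internal conversion may differ slightly.

def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    v = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(y), clamp(u), clamp(v)
```

Note how a "pure" RGB color like (255,0,0) maps to a YUV triple that is not intuitive at a glance, which is why calibration from sampled pixels is the practical approach.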
To calibrate the blobfinder, you will use colorgui to estimate YUV color ranges for objects viewed in the camera's image stream. These color ranges will then be entered into your own colorfile for use by the cmvision node. Assuming the turtlebot driver is running, run colorgui using the following command (substituting [imagetopic] with the actual topic on which the camera publishes images):
> rosrun cmvision colorgui image:=[imagetopic]
If you are using the Kinect, [imagetopic] will be /camera/rgb/image_color. If you are using gscam, [imagetopic] will be /gscam/image_raw. (This raises the question of why there is no standard topic name for publishing images.)
A window should pop up displaying the current camera image stream, similar to running image_view. The colorgui image window can now be used to find the YUV range for a single color of interest.
Using the colorgui image window, you can calibrate for the color of specific objects by sampling their pixel colors. Put objects of interest in the robot's view, then click on a pixel belonging to the object in the image window. This action should put the RGB value of the pixel into the left textbox and the YUV value into the right textbox. Clicking on another pixel will update the terminal output to show that pixel's RGB value and the YUV range encompassing both clicked pixels. Clicking on additional pixels will expand the YUV range to span the color region of interest. Assuming your clicks represent a consistent color, you should see bounding boxes in the colorgui window representing color blobs found with the current YUV range.
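The range-expansion behavior described above amounts to keeping a running per-channel minimum and maximum over the sampled pixels. A minimal sketch (the function name and range representation are hypothetical, not colorgui's API):

```python
# Sketch of colorgui-style range expansion: each clicked pixel grows
# the (lo, hi) YUV range just enough to contain it.

def update_range(yuv_range, pixel):
    """Expand a ((loY,loU,loV), (hiY,hiU,hiV)) range to include pixel.

    Pass yuv_range=None for the first click; the range then starts as
    exactly that pixel's YUV value.
    """
    if yuv_range is None:
        return (pixel, pixel)
    lo, hi = yuv_range
    new_lo = tuple(min(l, p) for l, p in zip(lo, pixel))
    new_hi = tuple(max(h, p) for h, p in zip(hi, pixel))
    return (new_lo, new_hi)
```

This also explains the note below: one click on a shadowed or specular pixel can blow the range wide open, since the min/max never shrinks.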
Note: you may not want to click on all pixels of an object due to shadowing and specular (“shiny”) artifacts.
Once you have a sufficient calibration for a color, copy the YUV range shown in the colorgui textbox (or output to the terminal) to a separate text buffer temporarily or directly enter this information into your colorfile. Save this file as colors.txt on the robot. You can restart this process to calibrate for another color by selecting “File→Reset” in the colorgui menu bar.
Once you have an appropriately calibrated colorfile, the cmvision blobfinder will be able to detect color blobs. This process can be used to color calibrate a variety of cameras both in real and simulated environments. However, your color file will likely work only for cameras and lighting conditions similar to those used at the time of calibration.
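If you want to sanity-check a calibrated colorfile before launching cmvision, it can be parsed with a few lines of Python. This is a sketch based on the field layout inferred from the "colors.txt" example above (identifier, merge parameter, expected blob count, name; thresholds paired with colors by position); cmvision's real parser may be more permissive.

```python
# Parse a cmvision-style colorfile (format inferred from the example
# colors.txt above; cmvision's actual parser may differ in details).

def parse_colorfile(text):
    colors, thresholds = [], []
    section = None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("["):
            section = line.strip("[]").lower()
        elif section == "colors":
            # e.g. "(255,  0,  0) 0.000000 10 Red"
            rgb, rest = line.split(")", 1)
            merge, expected, name = rest.split()
            colors.append({
                "rgb": tuple(int(v) for v in rgb.strip("( ").split(",")),
                "merge": float(merge),       # merge parameter
                "expected": int(expected),   # expected blob count
                "name": name,
            })
        elif section == "thresholds":
            # e.g. "( 25:164, 80:120,150:240)" -> ((25,164),(80,120),(150,240))
            ranges = line.strip("() ").split(",")
            thresholds.append(tuple(tuple(int(v) for v in r.split(":"))
                                    for r in ranges))
    # thresholds are matched to colors by their order in the file
    for color, yuv in zip(colors, thresholds):
        color["yuv_range"] = yuv
    return colors
```

A quick check worth making: the number of threshold lines should equal the number of color lines, and each Y, U, and V range should have lo <= hi.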
> roscd cmvision
> roslaunch cmvision.launch
cmvision.launch sets the related ROS parameters and launches cmvision using images from your camera and your color file. The contents of cmvision.launch are listed below:
<launch>
  <!-- Location of the cmvision color file -->
  <param name="cmvision/color_file" type="string" value="PATH_TO_YOUR_COLOR_FILE" />

  <!-- Turn debug output on or off -->
  <param name="cmvision/debug_on" type="bool" value="true"/>

  <!-- Turn color calibration on or off -->
  <param name="cmvision/color_cal_on" type="bool" value="false"/>

  <!-- Enable mean shift filtering -->
  <param name="cmvision/mean_shift_on" type="bool" value="false"/>

  <!-- Spatial bandwidth: bigger = smoother image -->
  <param name="cmvision/spatial_radius_pix" type="double" value="2.0"/>

  <!-- Color bandwidth: bigger = smoother image -->
  <param name="cmvision/color_radius_pix" type="double" value="40.0"/>

  <node name="cmvision" pkg="cmvision" type="cmvision" args="image:=/camera/rgb/image_color" output="screen" />
</launch>
Note that the default cmvision.launch contains incorrect parameter settings. Please copy the code above and modify it for your use.
> roslaunch YOUR_ROBOT.launch
> rosrun teleop_twist_keyboard teleop_twist_keyboard.py cmd_vel:=/turtlebot_node/cmd_vel
Disclaimer: The calibration process is not always easy and may take several iterations to get a working calibration. Remember, the real world can be particular and unforgiving. Small variations make a huge difference. So, be consistent and thorough.
Based on the ARToolKit augmented reality library, ar_recog recognizes augmented reality tags in an image. ar_recog publishes information about each recognized tag, such as its corners in image space and its relative 6-DOF pose in camera space.
> cd ~/ros
> svn co http://brown-ros-pkg.googlecode.com/svn/trunk/experimental/ar_recog ar_recog
> roscd ar_recog
> cmake .
> rosmake ar_recog
> roscd ar_recog/bin
> rosrun ar_recog ar_recog image:=[imagetopic]
> rosrun image_view image_view image:=/ar/image
If successful, you should see a window with green boxes drawn over the AR tags in the camera image stream:
> cd $ROS_HOME/ar_recog/src/ARToolKit/bin
> ./mk_patt
camera parameter: camera_para.dat
# show the camera the tag of interest; the tag is highlighted; click the window to choose,
# then save the pattern as "patt.patternname" (or patt.X)
> cp patt.patternname $ROS_HOME/ar_recog/bin
> vi $ROS_HOME/ar_recog/bin/object_data
# add a pattern entry (patternname, pattern filename, width of tag in mm,
# center of tag, usually "0.0 0.0")
mk_patt will likely use the laptop's onboard camera instead of the PS3 cam. Using a non-default camera usually requires changing mk_patt's configuration string and remaking mk_patt, which is why we do not recommend that cs148 students train new tags.