path_planning – Introduction to Autonomous Robotics
This two-part assignment is intended to help you gain familiarity with using path planning for robot navigation, especially the challenges of executing planned paths under partial observability and imperfect actuation. Path planning is the problem of determining what route a robot should take to get from its current position to a destination position, given a map of the surrounding environment. In this assignment, you will design your own wavefront path planner for a static map by replacing the move_base node in ROS's nav stack with a simplified version of your own creation. The following video describes the overall ROS navigation stack, which includes localization and mapping in addition to planning:
For the milestone, you will be creating a global planner using the wavefront planning algorithm.
For the final, you will create a local planner which uses your global planner and localization information. The global planner, local planner, and a static map will be used in conjunction to navigate your robot through a planned path. In the end, you'll be able to issue commands in rviz to do this movement.
The following figure illustrates the modifications to the design of the move_base node that we will use for this assignment:
You will be implementing the wavefront algorithm (as described in class) to create your global planner. As you are replacing the move_base algorithm and connecting to rviz, the inputs to and outputs from your system are already well-defined.
In your wavefront planner, don't forget that your robot is not a single point. Rather, it occupies a region of space. One way to encode this information is using the Minkowski sum – that is, to treat the robot as a point but to pad all barriers with the radius of your robot. You will be expected to take this into account when writing your planner.
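The padding and wavefront steps described above can be sketched in plain Python. This is a minimal illustration of the technique, not a ROS node: the function names are made up for this sketch, the grid is a plain list of lists (0 = free, 1 = blocked), and obstacle inflation by a radius in cells stands in for the Minkowski sum with a circular robot.

```python
# Sketch of a wavefront planner on a 2D grid, with obstacle inflation
# approximating the Minkowski sum. All names here are illustrative.
from collections import deque

def inflate(grid, radius_cells):
    """Mark every cell within radius_cells of an obstacle as blocked."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                for dr in range(-radius_cells, radius_cells + 1):
                    for dc in range(-radius_cells, radius_cells + 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols:
                            out[rr][cc] = 1
    return out

def wavefront(grid, goal):
    """BFS outward from the goal; returns distance-to-goal per cell."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    dist[goal[0]][goal[1]] = 0
    frontier = deque([goal])
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < rows and 0 <= cc < cols
                    and grid[rr][cc] == 0 and dist[rr][cc] is None):
                dist[rr][cc] = dist[r][c] + 1
                frontier.append((rr, cc))
    return dist

def extract_path(dist, start):
    """Follow strictly decreasing wavefront values from start to goal."""
    if dist[start[0]][start[1]] is None:
        return None  # start is unreachable
    path = [start]
    r, c = start
    while dist[r][c] != 0:
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < len(dist) and 0 <= cc < len(dist[0])
                    and dist[rr][cc] is not None
                    and dist[rr][cc] < dist[r][c]):
                r, c = rr, cc
                break
        path.append((r, c))
    return path
```

In your actual node, you would run inflate once when the map arrives, run wavefront once per goal, and then convert the cell path into world-coordinate poses using the map's resolution and origin.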
move_base_simple/goal (geometry_msgs/PoseStamped): This will set the goal for your robot to reach. (This is called from rviz's “2D Nav Goal” button.)
/initialpose (geometry_msgs/PoseWithCovarianceStamped): This will provide the robot's initial world coordinates. (This is called from rviz's “2D Pose Estimate” button.)
/map (nav_msgs/OccupancyGrid): This will provide you with the map upon which you are to operate.
The way message passing works for /map is kind of interesting: it seems that upon a new subscriber, the message is passed once and only once. Because of this, make sure you catch the map and store it somewhere when the first message arrives. You may assume that this map will be static throughout your code's execution lifetime.
The map's data has values of -1, 0, and 100. These correspond to the map colors gray, white, and black, respectively. You should only navigate on white space. (Treat black as equivalent to gray – i.e., not navigable.)
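One small gotcha worth a sketch: the OccupancyGrid's data field is a flat, row-major array, so you'll want to unpack it using the width and height from the message's info before planning over it. The helper names below are made up for illustration; the -1/0/100 value conventions are as described above.

```python
# Illustrative helpers (not ROS API) for working with a cached /map message.
def unpack_map(data, width, height):
    """Turn a flat, row-major OccupancyGrid.data list into a list of rows."""
    return [list(data[r * width:(r + 1) * width]) for r in range(height)]

def navigable(cell):
    """Only white (0) cells are navigable; gray (-1) and black (100) are not."""
    return cell == 0
```

In your map callback, you would cache msg.data, msg.info.width, msg.info.height, msg.info.resolution, and msg.info.origin, since the message only arrives once.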
You can use the following map, or create your own (which should be included in your repository):
/move_base/TrajectoryPlannerROS/global_plan (nav_msgs/Path): This will be the global planned path your robot will follow.
You may ask: why the really long message name? That's the name rviz looks for from move_base. So, fingers crossed, you shouldn't have to configure anything to see this. (If you do, it's in the left-hand column under “Path”.) For more details, see rviz display types and rviz paths.
Second, to make this path viewable in rviz, you have to specify the coordinate frame in which the path should be visualized. We want to tell rviz that the path's frame is the same as the map's frame. This can be accomplished by setting the path's frame_id:
Note that rotation here is specified using quaternions. These are used to prevent gimbal lock in 3D rotation space. Using quaternions for 2D navigation is a bit of overkill (since 2D nav doesn't suffer from gimbal lock), but we'll use them anyway: it keeps us on standard ROS interfaces, and they still do the job fine.
ROS's tf library kindly provides functions to convert between quaternions (x, y, z, w) and the more familiar Euler (x, y, z) coordinates. In python (similar functions exist for C++):
from tf.transformations import euler_from_quaternion, quaternion_from_euler
quat = quaternion_from_euler(x, y, z) # quat is [x, y, z, w]
eul = euler_from_quaternion([x, y, z, w]) # eul is [x, y, z]
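Since only yaw matters in 2D navigation, the conversion collapses to a single angle. As a self-contained sketch (plain math, no tf import; the function names are ours, not tf's), a pure-yaw rotation about z gives:

```python
import math

def quaternion_from_yaw(yaw):
    """Yaw about the z axis -> (x, y, z, w), matching tf's ordering."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

def yaw_from_quaternion(x, y, z, w):
    """Recover yaw from a quaternion; exact for pure-yaw rotations."""
    return math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
```

This is handy for, e.g., filling in the orientation of each pose along your published path, or reading the heading out of /amcl_pose.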
Warning: If you haven't learned the art of launch files yet, you're going to find this part hard. Launch files will be more and more important as the semester progresses, because you'll have more and more stuff on your robot running at once. Save yourselves hours of time later by spending a few minutes getting friendly with launch files now.
Look at turtlebot_navigation/amcl_demo.launch. You basically want to do this, with a few changes:
Change the map file to that of Brown's fourth floor.
Use your own move_base package instead of the nav stack's.
Comment out the robot packages (eg, base.launch) for now. We'll add them back in the final.
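The three changes above might look something like the following launch-file sketch. This is illustrative only: the package names, file paths, and node types here are placeholders, and the actual contents of your copy of amcl_demo.launch may differ, so adapt rather than copy.

```xml
<launch>
  <!-- 1. Serve the static map (swap in your 4th-floor map file). -->
  <node name="map_server" pkg="map_server" type="map_server"
        args="$(find your_package)/maps/fourth_floor.yaml" />

  <!-- 2. Your planner node in place of the stock move_base. -->
  <node name="move_base" pkg="your_move_base" type="planner.py" output="screen" />

  <!-- 3. Robot bringup (e.g. base.launch) stays commented out for the milestone. -->
  <!-- <include file="$(find turtlebot_bringup)/launch/base.launch" /> -->
</launch>
```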
Pop open an rviz, select a start point using “2D Pose Estimate” (top row of buttons), select a goal using “2D Nav Goal” (again, top row of buttons), and watch your path magically appear! Whee!
…Alright, so that probably won't happen the first time. If you're trying to figure out why not, here are a few things to try:
Don't make this harder than it has to be at first. Instead of starting by implementing a wavefront planner, for instance, try just drawing a line on rviz. Once you have that done, then move up to actually planning.
Make sure your inputs and outputs are correct. Check rviz's configuration. Also, check that all of your nodes are connecting with each other as you imagine they are using rxgraph.
Make sure you put the right frame_id on your path:
path = Path()
path.header.frame_id = "/map"
And don't forget, your TAs are friendly people too! We promise!
Make sure your code works by launching this command. (We will start rviz to the side by ourselves.)
Make sure to test your submissions before you send them away. “Test” means: check out a completely fresh copy of your code from git and run that copy for your robot. This is what the TAs do, and if you can't get it to work this way, the TAs can't, either!
Once you can successfully plan and visualize paths, move forward to the rest of the assignment!
Now that you have a working global planner, your next step is to make your robot follow the plan you created! In order to do this, you'll need to create a local planner. The local planner is responsible for steering the robot through the global plan's waypoints in order for it to reach its end goal.
/move_base/TrajectoryPlannerROS/global_plan (nav_msgs/Path): This was the output from your global planner. (Note that you don't necessarily have to pass this message using a publisher/subscriber. If you do as the original move_base node did, the passing of this information is internal to the node, so you don't actually define a topic or anything.)
/amcl_pose (geometry_msgs/PoseWithCovarianceStamped): Provides an estimate of the pose of the Turtlebot in world coordinates as it moves about the world. This is the output of the AMCL localizer.
odom (nav_msgs/Odometry): This provides odometry data from the Turtlebot relative to where it started. However, it does not put it in world coordinates.
The benefit of odom over amcl_pose is speed. odom runs at about 30 Hz, while amcl_pose runs at about 1 Hz. However, the benefit of amcl_pose over odom is that amcl_pose does map localization (using the map plus odometry), whereas odom simply does dead reckoning (using odometry only), which will become more and more inaccurate over time. One strategy to get the best of both worlds (tight feedback loops plus map-based pose estimation) is to use odom to navigate small distances (for instance, between waypoints) and then to use amcl_pose to better estimate the robot's actual location.
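The steering math at the heart of such a local planner can be sketched without any ROS machinery: given the current pose estimate (x, y, yaw) and the next waypoint from the global plan, turn toward the waypoint and only drive forward once roughly facing it. The function names, gains, and thresholds below are illustrative, not tuned values; in your node, the returned pair would fill a Twist message's linear.x and angular.z.

```python
import math

def steer(x, y, yaw, wx, wy, k_lin=0.3, k_ang=1.0, face_tol=0.3):
    """Return (linear, angular) velocities toward waypoint (wx, wy)."""
    heading = math.atan2(wy - y, wx - x)
    # Wrap the heading error into [-pi, pi].
    err = math.atan2(math.sin(heading - yaw), math.cos(heading - yaw))
    angular = k_ang * err
    # Only drive forward when roughly facing the waypoint.
    linear = k_lin if abs(err) < face_tol else 0.0
    return linear, angular

def reached(x, y, wx, wy, tol=0.15):
    """Advance to the next waypoint once within tol meters of it."""
    return math.hypot(wx - x, wy - y) < tol
```

A reasonable control loop drives between consecutive waypoints on odom's fast updates, and re-anchors its pose whenever a fresh amcl_pose arrives, as described above.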
(This section copied from the Run n' SLAM assignment.)
In order to access and control a robot remotely, two Linux environment variables need to be set. If setup is done correctly, “rostopic list” should show the list of available topics from the robot. If you see “Could not contact ROS master”, either something is wrong with your environment settings or the robot has not yet been started. Note that setting an environment variable in a particular terminal window affects only commands given in that terminal window.
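Concretely, the two variables are ROS_MASTER_URI and ROS_IP (ROS_HOSTNAME also works in place of ROS_IP). The hostname and address below are placeholders; substitute your robot's name and your workstation's address on the lab network.

```shell
# Point this terminal at the roscore running on the robot (placeholder hostname).
export ROS_MASTER_URI=http://robot-hostname:11311
# Tell ROS how other nodes can reach this machine (placeholder address).
export ROS_IP=192.168.1.42
# Sanity check (run it yourself): "rostopic list" should now show the robot's topics.
```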
As in the milestone, you should be able to use rviz to click on a start point and end point. Make sure to select a start point and properly localize your robot with amcl before attempting to do navigation. This may require you to drive the Turtlebot around for a bit. When the Kinect depths align well with the walls as you drive, you're in business.
Once your robot is properly localized, select an end point. As in the milestone, you should see a path. Now, however, you should also see a robot traversing this path in real life!
Path planning seems a lot easier in the theory world, doesn't it? It turns out that tackling localization and path navigation in the real world is actually a very frustrating problem, especially in the face of low-cost sensors. Whereas Turtlebot navigation uses a $150 Kinect (error of +/- 20 cm at 5 m distance, about 5 m max range, 30 Hz refresh rate, 57 degrees of view), the PR2 uses a $5500 Hokuyo rangefinder (60 Hz, 30 m range with smaller error, roughly 270 degrees of view). Throwing gobs of money at this problem makes it easier! But you got through it! Good work!
To submit your code, create the branch “path_planning_final_[your_team_name]”. This code will be checked out and run by the course staff, without modification. If you have anything out of the ordinary, include a README file to explain it. This code will be tested against three different start/goal poses in the 4th-floor hallways, with four trials each. To receive a check for the assignment, your controller must have at least a 50% success rate for these trials.
To grade this, your TAs will execute this (and only this) command:
Make sure your code works by launching this command. (We will start rviz to the side by ourselves.)
Again, make sure to test your submissions before you send them away. “Test” means: check out a completely fresh copy of your code from git and run that copy for your robot. This is what the TAs do, and if you can't get it to work this way, the TAs can't, either!
See you in the next round!
path_planning.txt · Last modified: 2011/11/01 16:27 by brian