Navigation
This page gives you information about the navigation system used in this project.
The robot is navigated around the obstacles in the simulated world using ROS2 Foxy packages, while the robot itself runs ROS1 Noetic. Information about the robot's odometry and position is therefore obtained through the ros2/ros1 bridge. A bridge also needs to be started between Gazebo and ROS2. The following image shows how the various topics of the system are bridged between the ROS1, ROS2 and Gazebo sides.
The topics bridged with the ros2/ros1 bridge are bridged bidirectionally; the direction of the arrow only indicates the intended data flow. The topics bridged with the ignition bridge, on the other hand, are bridged only in the direction of the arrows.
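With the ignition bridge, the direction is encoded in the bridge arguments themselves. Below is a hedged sketch of such a parameter_bridge node; the topic and message names are illustrative, not necessarily the project's exact ones. In ros_ign_bridge, `@` denotes a bidirectional bridge, `[` an Ignition-to-ROS-only bridge, and `]` a ROS-to-Ignition-only bridge:

```python
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        Node(
            package='ros_ign_bridge',
            executable='parameter_bridge',
            arguments=[
                # '[' bridges Ignition -> ROS2 only (e.g. the simulated clock)
                '/clock@rosgraph_msgs/msg/Clock[ignition.msgs.Clock',
                # ']' bridges ROS2 -> Ignition only (e.g. velocity commands)
                '/cmd_vel@geometry_msgs/msg/Twist]ignition.msgs.Twist',
            ],
        ),
    ])
```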
In this project, the slam_toolbox is used for mapping the environment and localizing the robot in it, while the nav2 stack is used for controlling the robot's behavior given a specified goal.
The following image shows the transform frames. You can see that some of the connections are static, while some of them are being dynamically updated.
The SLAM algorithm creates the map and publishes the map->odom transformation, while the odom->base_link transformation comes from the Husarion ROSbot's system.
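This chain means the robot's pose in the map frame is the composition of the two transforms: map->base_link = (map->odom) ∘ (odom->base_link). A minimal pure-Python sketch of this composition in 2D, with illustrative values only:

```python
import math


def compose(t1, t2):
    """Compose two 2D transforms (x, y, theta): t2 expressed in t1's frame."""
    x1, y1, th1 = t1
    x2, y2, th2 = t2
    return (x1 + math.cos(th1) * x2 - math.sin(th1) * y2,
            y1 + math.sin(th1) * x2 + math.cos(th1) * y2,
            th1 + th2)


# map->odom, as published by the SLAM algorithm (illustrative values)
map_to_odom = (0.5, -0.2, math.pi / 2)
# odom->base_link, as published by the ROSbot's odometry (illustrative values)
odom_to_base = (1.0, 0.0, 0.0)

# The robot's pose in the map frame is the composition of the two.
map_to_base = compose(map_to_odom, odom_to_base)
print(map_to_base)  # approximately (0.5, 0.8, pi/2)
```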
The navigation system of this project is launched through two packages. One is the navigation_stack package put together for this project: it starts a robot_state_publisher node, which publishes the transform information about the robot's frames, starts the ignition bridge between Gazebo and ROS2, and launches the SLAM algorithm. The other is the nav2_bringup package.
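In rough outline, the navigation_stack launch file brings up nodes along these lines. This is a hedged sketch only: the node options and parameters here are assumptions, not the project's exact launch file.

```python
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        # Publishes the transforms of the robot's frames (from its description)
        Node(
            package='robot_state_publisher',
            executable='robot_state_publisher',
            parameters=[{'use_sim_time': True}],
        ),
        # Ignition bridge between Gazebo and ROS2 (bridge arguments omitted here)
        Node(
            package='ros_ign_bridge',
            executable='parameter_bridge',
        ),
        # SLAM algorithm (slam_toolbox)
        Node(
            package='slam_toolbox',
            executable='async_slam_toolbox_node',
            parameters=[{'use_sim_time': True}],
        ),
    ])
```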
The following graph shows how the different ROS2 nodes subscribe and publish to the various ROS2 topics. Most of the bridged topics are highlighted in green; note that the clock topic is not shown in this image:
In the launch_navigation_stack.launch.py file, a node called robot_localization_node is also included, but it is commented out at the moment. This node uses the ekf_node from the robot_localization package, a nonlinear state estimator whose goal is to estimate the state of the robot from information coming from multiple topics. If you decide to use this node, keep in mind that setting its parameters correctly is extremely important; otherwise the localization of the robot can fail. The parameter file that this node is set to use is available here.
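To illustrate the idea behind such a state estimator, here is a toy 1D Kalman filter that fuses a motion prediction (e.g. from odometry) with noisy position measurements. This is only an illustration of the concept, not the robot_localization package's actual algorithm or API; all values are made up.

```python
def kalman_step(x, p, u, z, q=0.01, r=0.25):
    """One predict/update cycle of a 1D Kalman filter.
    x, p: current state estimate and its variance
    u: commanded displacement (e.g. from odometry)
    z: measured position (e.g. from another sensor)
    q, r: process and measurement noise variances (assumed values)
    """
    # Predict: apply the motion model and grow the uncertainty
    x_pred = x + u
    p_pred = p + q
    # Update: blend in the measurement, weighted by the Kalman gain
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new


x, p = 0.0, 1.0  # start uncertain about the position
for u, z in [(0.5, 0.6), (0.5, 1.1), (0.5, 1.4)]:
    x, p = kalman_step(x, p, u, z)
# After a few steps the variance p has shrunk well below its initial value:
# fusing multiple sources yields a more confident estimate than either alone.
```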
If you encounter performance problems, the tips below might help:
If you do not have a GPU set up, turning off the Gazebo GUI might improve performance. You can do so by adding the -s --headless-rendering options to the launch command of the Gazebo world.
Without starting the whole project, first create a full map using careful, slow movements, always comparing the lidar points against the detected objects to make sure there is not too much slip and that the algorithm has time to recover.
Afterwards, edit the config file of the SLAM algorithm and set the mode to localization.
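In slam_toolbox this switch is typically made in the parameter file. A minimal hedged sketch, where the file path is a placeholder you must replace with your own saved map:

```yaml
slam_toolbox:
  ros__parameters:
    mode: localization              # switch from mapping to localization
    map_file_name: /path/to/saved_map  # previously created map (placeholder)
```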
To help the SLAM algorithm map the area and localize the robot in it, it is beneficial to
have multiple objects
arrange the objects asymmetrically
use careful, slow movements