Hardware-in-the-loop

Navigation

This page describes the navigation system used in this project.


Last updated 1 year ago

Navigating the robot around the obstacles in the simulated world is done with ROS2 Foxy packages, while the robot itself runs ROS1 Noetic. Therefore, the information about the robot's odometry and position is obtained through the ros2/ros1 bridge. A bridge also needs to be started between Gazebo and ROS2. The following image shows how the various topics of the system are bridged between the ROS1, ROS2 and Gazebo sides.

The topics bridged with the ros2/ros1 bridge are bridged in both directions; the direction of the arrow indicates the intended data flow. The topics bridged with the ignition bridge, on the other hand, are bridged only in the direction of the arrows.
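As a rough sketch, the two bridges can be started as follows. The topic names and message types here are assumptions based on a typical ROSbot setup, not taken verbatim from this project:

```shell
# ROS1 <-> ROS2 bridge (run in a shell with both ROS1 Noetic and
# ROS2 Foxy sourced); dynamic_bridge bridges matching topics both ways
ros2 run ros1_bridge dynamic_bridge --bridge-all-topics

# Gazebo (Ignition) <-> ROS2 bridge; the bracket that replaces the
# second '@' controls the data flow:
#   '[' = Ignition -> ROS2 only, ']' = ROS2 -> Ignition only
ros2 run ros_ign_bridge parameter_bridge \
  /scan@sensor_msgs/msg/LaserScan[ignition.msgs.LaserScan \
  /cmd_vel@geometry_msgs/msg/Twist]ignition.msgs.Twist
```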

If the mapping, localization and/or navigation runs into problems, it could be beneficial to reduce both the linear and the angular speed of the robot. You can also adjust the parameters of the SLAM algorithm.

As you reduce the movement speed, make sure to increase the movement_time_allowance parameter as well in the nav2_params.yaml file. Otherwise, the algorithm could think that the movement has failed, while it is still being carried out.

The parameters set in these configuration files might not be the most optimal. Feel free to experiment with different settings.
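For orientation, `movement_time_allowance` belongs to nav2's progress checker inside the controller server. A minimal fragment of a `nav2_params.yaml` might look like the following; the numeric values are placeholders, not this project's tuning:

```yaml
controller_server:
  ros__parameters:
    progress_checker_plugin: "progress_checker"
    progress_checker:
      plugin: "nav2_controller::SimpleProgressChecker"
      # how far the robot must move within the time allowance
      required_movement_radius: 0.5
      # increase this when you lower the robot's speed, so slow
      # motions are not classified as failed movements
      movement_time_allowance: 20.0
```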

The following image shows the transform frames. You can see that some of the connections are static, while some of them are being dynamically updated.

The SLAM algorithm creates the map and publishes the map->odom transformation, while the odom->base_link transformation comes from the Husarion ROSbot's system.
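You can inspect these two transforms at runtime with the standard tf2 tools (not project-specific commands):

```shell
# print the map -> odom transform published by the SLAM algorithm
ros2 run tf2_ros tf2_echo map odom

# print the odom -> base_link transform coming from the ROSbot's
# own system (arriving through the ros2/ros1 bridge)
ros2 run tf2_ros tf2_echo odom base_link
```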

The following graph shows the different ROS2 nodes and how they subscribe and publish to the various ROS2 topics. Most of the bridged topics are highlighted in green; however, the clock topic is not shown in this image:

If you encounter performance problems, you can find some tips below that might help:

  • If you do not have a GPU set up, it might help performance to turn off the Gazebo GUI. You can do so by adding the -s --headless-rendering options to the launch command of the Gazebo world

  • Before starting the whole project, first create a full map with careful, slow movements, always comparing the lidar points with the detected objects and making sure there is not too much slip and that the algorithm has time to recover.

  • To help the SLAM algorithm map the area and localize the robot in it, it is beneficial to

    • have multiple objects

    • arrange the objects asymmetrically

    • use careful, slow movements
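As an example of the headless option from the first tip, assuming the world is started with the `ign gazebo` command directly (the exact launch invocation and world file name in this project may differ, and the flag depends on the Ignition version):

```shell
# run only the Gazebo server, without the GUI client or GPU rendering
ign gazebo -s --headless-rendering my_world.sdf
```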

The frequency of the published odometry data relative to the other topics, most importantly the lidar data, can also be important. During the project, a test was carried out in which only the Gazebo simulation and the ROS2 slam_toolbox setup (RViz included) were run. The odometry data was published at different frequencies while the lidar data was published at 30 Hz, and the reaction of the SLAM toolbox and RViz was examined. The frequency was adjusted in the diff_drive plugin of the rosbot.urdf file.

At a frequency of 1 Hz, the slam_toolbox did not work at all, and the message "[slam_toolbox]: Message Filter dropping message: frame 'husarion_rosbot/base_link/laser' at time 3.160 for reason 'Unknown'" was published repeatedly.

At 10 Hz and 20 Hz the package did work, but the messages still appeared, although less frequently. At 50 Hz everything worked smoothly.
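In the Ignition DiffDrive plugin this corresponds to the odometry publishing rate. A sketch of the relevant part of rosbot.urdf is shown below; the element names follow the ign-gazebo DiffDrive system, while the joint names and wheel dimensions are placeholders rather than this project's actual values:

```xml
<gazebo>
  <plugin filename="libignition-gazebo-diff-drive-system.so"
          name="ignition::gazebo::systems::DiffDrive">
    <left_joint>left_wheel_joint</left_joint>
    <right_joint>right_wheel_joint</right_joint>
    <wheel_separation>0.192</wheel_separation>
    <wheel_radius>0.0425</wheel_radius>
    <!-- publish odometry at 50 Hz so the SLAM message filter
         always has a recent odom transform for each lidar scan -->
    <odom_publish_frequency>50</odom_publish_frequency>
  </plugin>
</gazebo>
```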

The correct publishing of tf data through the joint state publisher and robot state publisher is also extremely important. If these transforms are incorrect, SLAM does not work and the message filter drops the lidar data.

In this project, the slam_toolbox is used for mapping the environment and localizing the robot in it, while the nav2 stack is used for controlling the robot's behavior given a specified goal.

With regards to this project, you can adjust their parameters in the following files: mapper_params_online_async.yaml and nav2_params.yaml.

Launching the navigation system of this project is done through two packages. One is the navigation_stack package put together for this project: it starts a robot_state_publisher node, which publishes the transform information about the robot's frames, starts the ignition bridge between Gazebo and ROS2, and launches the SLAM algorithm. The other one is the nav2_bringup package.
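A sketch of the two launch commands follows. The package and file names come from this page, while the nav2_bringup launch file and argument names reflect typical Foxy usage and may differ in this project:

```shell
# project package: robot_state_publisher + ignition bridge + SLAM
ros2 launch navigation_stack launch_navigation_stack.launch.py

# nav2 bringup with the project's parameter file
ros2 launch nav2_bringup navigation_launch.py \
  params_file:=nav2_params.yaml use_sim_time:=true
```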

In the launch_navigation_stack.launch.py file, a node called robot_localization_node is also included, but it is currently commented out. This node uses the robot_localization package with the ekf_node, a nonlinear state estimator whose goal is to estimate the state of the robot based on information coming from multiple topics. If you decide to use this node, keep in mind that the correct setup of the parameters is extremely important; otherwise, the localization of the robot can fail. The parameter file that this node is set to use is available here.
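For reference, a minimal robot_localization EKF parameter fragment might look like the following; the topic names and the choice of fused fields are assumptions, and the actual file ships with the project:

```yaml
ekf_filter_node:
  ros__parameters:
    frequency: 30.0
    two_d_mode: true            # planar robot: ignore z, roll, pitch
    odom_frame: odom
    base_link_frame: base_link
    world_frame: odom
    odom0: /odom                # wheel odometry input
    # each 15-element boolean vector selects, in order:
    # x y z, roll pitch yaw, vx vy vz, vroll vpitch vyaw, ax ay az
    odom0_config: [true,  true,  false,
                   false, false, true,
                   true,  true,  false,
                   false, false, true,
                   false, false, false]
    imu0: /imu                  # IMU input, fused only for yaw rate here
    imu0_config: [false, false, false,
                  false, false, false,
                  false, false, false,
                  false, false, true,
                  false, false, false]
```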

Afterwards, set the config file of the SLAM algorithm and set the method to localization.
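In the slam_toolbox parameter file this is the `mode` parameter. A fragment is sketched below, assuming the async node and a previously saved map; the map path is a placeholder:

```yaml
slam_toolbox:
  ros__parameters:
    # switch from "mapping" to "localization" once a map exists
    mode: localization
    # load the previously created map and localize the robot in it
    map_file_name: /path/to/saved_map
```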
