Figure 2. In contrast to Figure 1, the architecture map proposed above shows that LiDAR SLAM does not need to undergo feature extraction from a scene, unlike the camera [10]. Instead it goes through a process called scan matching: the LiDAR works by emitting light pulses across the scene and receiving the reflections of that light back at the sensor in order to calculate depth [1]. This allows the robot to know its position relative to everything else within the scene. It also makes LiDAR effective in different lighting conditions, such as night time, as it does not rely on ambient lighting to extract depth [24].

ROS

To develop robot prototypes and reduce production costs, the Robot Operating System (ROS) is normally used [25]. ROS proves itself useful as a large number of pre-existing SLAM solutions are implemented as separate modules within the Navigation Stack. This investigation implements the ORB-SLAM and Hector SLAM ROS modules due to the platform's low cost and limited computational power [7]. ROS also includes teb_local_planner, a module designed specifically for path planning for car-like robots.

Sensor-Fusion SLAM

Combining information from different sensors is known as sensor fusion. To solve this problem, the Kalman filter (KF) [26], the Extended Kalman filter (EKF) [27], or the Particle Filter (PF) [28] can be implemented; the filter adopted in this investigation was chosen for its ease of use with ROS and its compatibility with the data from the IMU, Hector SLAM, and monocular SLAM, as sketched below.
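As a rough illustration of how such a filter fuses the two sources, the sketch below shows a minimal one-dimensional Kalman-style predict/update cycle in Python: the IMU drives the prediction step and a SLAM pose estimate drives the correction step. The class name, variable names, and noise values are illustrative assumptions and are not taken from the essay or from any particular ROS package.

    # Minimal 1-D Kalman filter sketch for sensor fusion.
    # Prediction uses IMU data; the update uses a pose estimate
    # from SLAM (e.g. Hector or monocular SLAM).
    class PoseFuser:
        def __init__(self, x0=0.0, p0=1.0, q=0.05, r=0.5):
            self.x = x0   # fused position estimate
            self.p = p0   # estimate variance
            self.q = q    # process noise (IMU integration drift), assumed
            self.r = r    # measurement noise (SLAM pose), assumed

        def predict(self, imu_accel, velocity, dt):
            # Integrate the IMU acceleration to predict how far the robot moved.
            self.x += velocity * dt + 0.5 * imu_accel * dt ** 2
            self.p += self.q

        def update(self, slam_pose):
            # Blend the SLAM pose with the prediction, weighted by the Kalman gain.
            k = self.p / (self.p + self.r)
            self.x += k * (slam_pose - self.x)
            self.p *= (1.0 - k)
            return self.x

    # Example: fuse one IMU prediction with one SLAM measurement.
    fuser = PoseFuser()
    fuser.predict(imu_accel=0.2, velocity=1.0, dt=0.1)
    print(fuser.update(slam_pose=0.12))

In a real ROS setup this role is typically filled by a packaged EKF node rather than hand-written code, but the predict/update structure is the same.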
