2022 IB Diploma Extended Essays
The following ROS data-flow architecture implements an Extended Kalman Filter (EKF) and a Particle Filter (PF). The visual SLAM correction combines Hector mapping, ORB-SLAM2, ZEDfu, IMU, and GPS data, using pose, yaw, and depth. LiDAR provides the robot's Cartesian position, while ORB-SLAM2 and ZEDfu measure polar velocity and distance; ORB-SLAM2 relies on sparse camera features, whereas ZEDfu uses an active depth sensor. A camera wrapper class streams frames in real time and builds the ORB-SLAM2 and ZEDfu maps.

The correction class receives pose orientation, position, covariance, and depth, together with Hector mapping and inertial navigation data. When all of the SLAM pose values are combined, the scene's depth can be reconstructed accurately, which corrects the visual SLAM's photometry. The Inertial Navigation System measures nonlinear polar velocity, acceleration, and yaw; before these values can be fused with the LiDAR and visual SLAM data, they must be linearized, and the Jacobian matrix is used for this purpose.

The correction class feeds the VSLAM-Correction class, which holds the depth and pose measurements used for 3D reconstruction. NAVDATA correction then improves the pose prediction by adding raw accelerometer, magnetometer, gyroscope, and GPS data from the buffer, which reduces the pose estimate's sensitivity to changes in acceleration and velocity. Finally, the raw LiDAR pose values from the buffer are added to the LiDAR correction; because these come from an active depth sensor, this step accurately predicts the robot's 2D position and orientation.

The global planner receives the pose estimate. If the user-specified goal lies on an obstacle, the planner moves the robot as close to it as possible. However, the global planner does not consider robot constraints, so it cannot account for non-holonomic platforms such as an Ackermann drive. Path prediction improves with the TEB local planner, which takes non-holonomic kinodynamics into account.
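As a concrete illustration of the Jacobian-based linearization described above, the following is a minimal sketch of one EKF correction step for a polar (range, bearing) measurement. The three-element state [x, y, yaw], the known landmark position, and the noise matrices are hypothetical placeholders rather than values from the essay's actual implementation.

```python
import numpy as np

def ekf_polar_update(x, P, z, landmark, R):
    """One EKF correction step for a polar (range, bearing) measurement.

    x        : state estimate [x, y, yaw]   (hypothetical 3-DoF state)
    P        : 3x3 state covariance
    z        : measured [range, bearing] to a known landmark
    landmark : landmark position [lx, ly]
    R        : 2x2 measurement noise covariance
    """
    dx = landmark[0] - x[0]
    dy = landmark[1] - x[1]
    q = dx**2 + dy**2
    r = np.sqrt(q)

    # Nonlinear measurement model h(x): expected range and bearing
    z_hat = np.array([r, np.arctan2(dy, dx) - x[2]])

    # Jacobian of h(x) -- this is the linearization step the text refers to
    H = np.array([
        [-dx / r,  -dy / r,  0.0],
        [ dy / q,  -dx / q, -1.0],
    ])

    # Standard EKF correction
    y = z - z_hat                                  # innovation
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi    # wrap bearing to [-pi, pi]
    S = H @ P @ H.T + R                            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new
```

The same pattern applies to the inertial measurements: each nonlinear quantity gets its own measurement model and Jacobian before it is folded into the shared state estimate.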
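To show how the correction stage might hand its fused estimate to the planner in ROS, here is a minimal rospy node sketch. The topic names (/vslam/pose, /navdata/imu, /lidar/pose, /fused/pose) and the simple "prefer LiDAR, fall back to visual SLAM" placeholder fusion are illustrative assumptions, not the essay's actual node.

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped
from sensor_msgs.msg import Imu

class CorrectionNode(object):
    """Collects visual-SLAM, NAVDATA (IMU), and LiDAR poses and republishes
    a corrected pose for the global planner. Topic names are assumptions."""

    def __init__(self):
        self.latest_vslam = None
        self.latest_lidar = None
        self.latest_imu = None

        rospy.Subscriber("/vslam/pose", PoseWithCovarianceStamped, self.vslam_cb)
        rospy.Subscriber("/lidar/pose", PoseWithCovarianceStamped, self.lidar_cb)
        rospy.Subscriber("/navdata/imu", Imu, self.imu_cb)
        self.pub = rospy.Publisher("/fused/pose", PoseWithCovarianceStamped,
                                   queue_size=10)

    def vslam_cb(self, msg):
        self.latest_vslam = msg
        self.publish_correction()

    def lidar_cb(self, msg):
        self.latest_lidar = msg
        self.publish_correction()

    def imu_cb(self, msg):
        self.latest_imu = msg

    def publish_correction(self):
        # Placeholder fusion: prefer the LiDAR pose when available, otherwise
        # use visual SLAM; a real node would run the EKF update shown above.
        source = self.latest_lidar or self.latest_vslam
        if source is None:
            return
        fused = PoseWithCovarianceStamped()
        fused.header.stamp = rospy.Time.now()
        fused.header.frame_id = "map"
        fused.pose = source.pose
        self.pub.publish(fused)

if __name__ == "__main__":
    rospy.init_node("vslam_correction_sketch")
    CorrectionNode()
    rospy.spin()
```

In this arrangement the global planner (and the TEB local planner behind it) would simply subscribe to the fused pose topic rather than to any individual sensor.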