2022 IB Diploma Extended Essays

Figure 1 illustrates the general architecture of a Visual SLAM algorithm. First, features are detected within the scene captured by the stereo camera; these include object corners, distinctive geometry, lighting changes, and so on. These features are then tracked as the robot moves through the scene, and the tracked landmarks are passed to a triangulator in order to create a reconstruction of the scene. At the same time, the features from the feature detector are used by a loop closer [4]: when the robot observes a landmark for the second time, it corrects its own position relative to the observed landmark position [3]. This updated landmark position is then used by the SLAM system to update its reconstruction, and the new camera position of the robot relative to the scene is computed in this way.

LiDAR SLAM

The sensor most commonly used within AGVs is LiDAR, and LiDARs are also used to solve SLAM. LiDAR presents itself as an easy-to-use, well-established, accurate and precise sensor [8]. However, there are drawbacks: occupancy-based maps such as those built by Hector SLAM [24] can be dedicated to 2D environments only; the amount of memory required for large-scale environments imposes a very high computational threshold; and the loop-closing process (scan matching) is difficult [24].
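The triangulation step of the Visual SLAM pipeline described above can be sketched for the simplest case of a rectified stereo pair: depth follows from the disparity between the left and right image coordinates of a tracked feature. This is a minimal illustration, not code from any SLAM system; the function name `triangulate` and its parameters (focal length in pixels, baseline in metres, pixel coordinates) are assumptions for the sketch.

```python
def triangulate(f, baseline, xl, xr, yl):
    """Recover a 3D landmark from one matched stereo feature.

    Assumes a rectified stereo rig: focal length f (pixels),
    baseline (metres), xl/xr the feature's x-coordinate in the
    left/right image, yl its y-coordinate in the left image.
    """
    d = xl - xr              # disparity between the two views
    Z = f * baseline / d     # depth: larger disparity = closer landmark
    X = xl * Z / f           # back-project left-image x to world X
    Y = yl * Z / f           # back-project left-image y to world Y
    return (X, Y, Z)
```

As the robot moves, re-triangulating the same feature from new poses is what lets the loop closer compare the observed landmark position with the stored one and correct the robot's pose estimate.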
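The occupancy-based maps mentioned above can be illustrated with a toy 2D grid update: each LiDAR range reading is projected from the robot's pose into world coordinates, and the corresponding grid cell is marked occupied. This is a minimal sketch under assumed names (`update_grid`, a list-of-lists grid); real systems such as Hector SLAM additionally use log-odds cell updates and free-space ray tracing, which are omitted here.

```python
import math

def update_grid(grid, pose, ranges, angle_step, resolution):
    """Mark grid cells hit by a LiDAR scan as occupied.

    grid       -- 2D list of 0/1 cells (toy occupancy map)
    pose       -- (x, y, theta): robot position and heading, metres/radians
    ranges     -- range readings, metres, one per beam
    angle_step -- angular spacing between consecutive beams, radians
    resolution -- side length of one grid cell, metres
    """
    x, y, theta = pose
    for i, r in enumerate(ranges):
        a = theta + i * angle_step
        # project the beam endpoint into world coordinates,
        # then snap to the nearest grid cell
        gx = round((x + r * math.cos(a)) / resolution)
        gy = round((y + r * math.sin(a)) / resolution)
        if 0 <= gx < len(grid) and 0 <= gy < len(grid[0]):
            grid[gx][gy] = 1  # mark the cell as occupied
    return grid
```

The memory drawback cited above is visible even in this sketch: covering an L x L metre area at cell size r requires (L/r)^2 cells, so halving the resolution quadruples the storage, which is why large-scale environments are costly for occupancy-grid approaches.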
