Abstract: Traditional V-SLAM algorithms build maps under the assumption of a static, rigid scene, so they cannot map dynamic environments, and they cannot recover when tracking is lost because of weak environmental features or because the robot is "kidnapped". To address these problems, an algorithm for simultaneous localization and multi-mapping in dynamic environments is proposed. First, the algorithm introduces a multi-map strategy: when tracking fails, a new local map is adaptively created, and this map is merged with the previous map at loop closure, solving the problem that mapping cannot continue after tracking is lost. Second, deep learning is combined with multi-view geometry to detect dynamic objects in the environment in real time, and multi-frame fusion is used to restore the background occluded by dynamic objects, effectively addressing tracking and mapping in dynamic environments. Finally, the algorithm is tested in real scenes. The results show that, compared with classic V-SLAM algorithms (ORB-SLAM2, ORBSLAMM, and DynaSLAM), the proposed algorithm can quickly rebuild a map after tracking loss and then achieve continuous tracking and merging of the new map into the existing one. ORB-SLAM2 and DynaSLAM enter relocalization mode after tracking loss and cannot continue mapping; ORBSLAMM can continue to build maps after loss, but the resulting maps cannot be merged into an overall map. Dynamic-environment experiments further show that only the proposed algorithm achieves real-time detection of all dynamic objects (both a priori dynamic and actually moving objects) together with background restoration; DynaSLAM detects only a priori dynamic objects, while the other two algorithms cannot detect dynamic objects or build maps in dynamic environments.
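As a rough illustration of the multi-map strategy summarized above, the following minimal Python sketch shows how a new local map could be created when tracking fails and merged with an earlier map at loop closure. It is not the authors' implementation; all names (LocalMap, MultiMapSLAM, track, detect_loop, merge) are hypothetical placeholders for the corresponding SLAM components.

```python
# Minimal sketch (assumed, not the paper's code) of the multi-map idea:
# when tracking fails, a new local map is started adaptively; at loop
# closure the current map is merged back into a previously built map.

class LocalMap:
    """Hypothetical local map holding keyframes (and, in a real system, map points)."""
    def __init__(self, map_id):
        self.map_id = map_id
        self.keyframes = []

    def add_keyframe(self, keyframe):
        self.keyframes.append(keyframe)


class MultiMapSLAM:
    """Hypothetical multi-map manager illustrating the workflow in the abstract."""
    def __init__(self):
        self.maps = [LocalMap(0)]
        self.active = self.maps[0]

    def process_frame(self, frame):
        pose = self.track(frame)              # track against the active map
        if pose is None:                      # tracking lost (weak features, kidnapping, ...)
            self.active = LocalMap(len(self.maps))
            self.maps.append(self.active)     # adaptively start a new local map
            pose = self.initialize(frame)
        self.active.add_keyframe((frame, pose))
        loop_map = self.detect_loop(frame)    # loop closure against an earlier map
        if loop_map is not None and loop_map is not self.active:
            self.merge(loop_map, self.active) # fuse the two maps into one overall map
            self.active = loop_map

    # The methods below stand in for the real tracking, initialization,
    # loop-detection and map-fusion components and are left abstract here.
    def track(self, frame): ...
    def initialize(self, frame): ...
    def detect_loop(self, frame): ...
    def merge(self, target, source): ...
```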