Search Results (133)

Search Parameters:
Keywords = ORB-SLAM

20 pages, 6262 KiB  
Article
YPR-SLAM: A SLAM System Combining Object Detection and Geometric Constraints for Dynamic Scenes
by Xukang Kan, Gefei Shi, Xuerong Yang and Xinwei Hu
Sensors 2024, 24(20), 6576; https://doi.org/10.3390/s24206576 (registering DOI) - 12 Oct 2024
Viewed by 192
Abstract
Traditional SLAM systems assume a static environment, but moving objects break this assumption. In the real world, moving objects can greatly degrade the precision of image matching and camera pose estimation. To address these problems, the YPR-SLAM system is proposed. First, the system includes a lightweight YOLOv5 detection network that detects both dynamic and static objects, providing prior dynamic-object information to the SLAM system. Second, using this prior information together with the depth image, a geometric-constraint method for removing motion feature points is proposed: the Depth-PROSAC algorithm differentiates dynamic from static feature points so that the dynamic ones can be removed. Finally, the dense point cloud map is constructed from the static feature points. YPR-SLAM tightly couples object detection and geometric constraints, eliminating motion feature points and minimizing their adverse effects on the SLAM system. The performance of YPR-SLAM was assessed on the public TUM RGB-D dataset, and the results show that it is well suited to dynamic scenes. Full article
(This article belongs to the Section Sensing and Imaging)
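The Depth-PROSAC step described in the abstract is not spelled out in this listing. As a rough, hedged illustration of the general idea only — rejecting ORB keypoints that fall inside a detected dynamic bounding box and share the object's foreground depth — the sketch below uses OpenCV ORB and assumed YOLO-style detections; the function name, the median-depth test, and the tolerance are assumptions, not the authors' exact algorithm.

```python
import cv2
import numpy as np

def filter_dynamic_keypoints(gray, depth, boxes, depth_tol=0.25):
    """Drop ORB keypoints that lie inside a detected (possibly dynamic) box
    and share that box's foreground depth. `boxes` = [(x1, y1, x2, y2), ...]
    from an external detector (e.g., YOLOv5); `depth` is in meters.
    Illustrative stand-in for Depth-PROSAC, not the paper's code."""
    orb = cv2.ORB_create(nfeatures=1500)
    kps, desc = orb.detectAndCompute(gray, None)
    keep_kps, keep_desc = [], []
    for kp, d in zip(kps, desc):
        u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
        z = float(depth[v, u])
        dynamic = False
        for (x1, y1, x2, y2) in boxes:
            if x1 <= u <= x2 and y1 <= v <= y2:
                roi = depth[y1:y2, x1:x2]
                fg = np.median(roi[roi > 0])  # foreground depth of the detected object
                # Points at the object's depth are treated as dynamic;
                # background points seen through the box are kept.
                if z > 0 and abs(z - fg) < depth_tol:
                    dynamic = True
                break
        if not dynamic:
            keep_kps.append(kp)
            keep_desc.append(d)
    return keep_kps, np.array(keep_desc)
```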
Show Figures

Figure 1: Framework of the YPR-SLAM system. The blue section is ORB-SLAM2, and the orange section is the addition of this paper.
Figure 2: The YOLOv5 network architecture.
Figure 3: Dynamic target detection and filtering thread. First, ORB feature points are extracted from the RGB image by the tracking thread. Next, the dynamic target detection thread identifies potential dynamic target areas, and the Depth-PROSAC algorithm is applied to filter out dynamic feature points. Finally, the static feature points are retained for subsequent pose estimation.
Figure 4: Comparison between the target detection algorithm and the Depth-PROSAC algorithm in filtering out dynamic feature points. (a) The object detection method directly filters out dynamic feature points; (b) the Depth-PROSAC algorithm filters out dynamic feature points.
Figure 5: Dense point cloud construction workflow.
Figure 6: Camera 3D motion estimated by YPR-SLAM and ORB-SLAM2 on the fr3_walking_halfsphere sequence. (a) Camera trajectory estimated by ORB-SLAM2; (b) camera trajectory estimated by YPR-SLAM.
Figure 7: ATE and RPE of the ORB-SLAM2 and YPR-SLAM systems on different sequences. (a1,a2,c1,c2,e1,e2,g1,g2) show the ATE and RPE obtained by ORB-SLAM2 on fre3_sitting_static, fre3_walking_static, fre3_walking_halfsphere, and fre3_walking_xyz, respectively; (b1,b2,d1,d2,f1,f2,h1,h2) show the ATE and RPE of YPR-SLAM on the same sequences. (a1,b1,c1,d1,e1,f1,g1,h1) are ATE plots; (a2,b2,c2,d2,e2,f2,g2,h2) are RPE plots.
Figure 8: Dense 3D point cloud maps built on the dynamic scene sequence fre3_walking_xyz. (a) Map constructed by the ORB-SLAM2 system; (b) map constructed by the YPR-SLAM system.
24 pages, 4712 KiB  
Article
Balancing Efficiency and Accuracy: Enhanced Visual Simultaneous Localization and Mapping Incorporating Principal Direction Features
by Yuelin Yuan, Fei Li, Xiaohui Liu and Jialiang Chen
Appl. Sci. 2024, 14(19), 9124; https://doi.org/10.3390/app14199124 - 9 Oct 2024
Viewed by 458
Abstract
In visual Simultaneous Localization and Mapping (SLAM), operational efficiency and localization accuracy are equally crucial evaluation metrics. We propose an enhanced visual SLAM method to ensure stable localization accuracy while improving system efficiency. It can maintain localization accuracy even after reducing the number of feature pyramid levels by 50%. Firstly, we innovatively incorporate the principal direction error, which represents the global geometric features of feature points, into the error function for pose estimation, utilizing Pareto optimal solutions to improve the localization accuracy. Secondly, for loop-closure detection, we construct a feature matrix by integrating the grayscale and gradient direction of an image. This matrix is then dimensionally reduced through aggregation, and a multi-layer detection approach is employed to ensure both efficiency and accuracy. Finally, we optimize the feature extraction levels and integrate our method into the visual system to speed up the extraction process and mitigate the impact of the reduced levels. We comprehensively evaluate the proposed method on local and public datasets. Experiments show that the SLAM method maintained high localization accuracy after reducing the tracking time by 24% compared with ORB SLAM3. Additionally, the proposed loop-closure-detection method demonstrated superior computational efficiency and detection accuracy compared to the existing methods. Full article
(This article belongs to the Special Issue Mobile Robotics and Autonomous Intelligent Systems)
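The loop-closure descriptor described above (a grayscale/gradient-direction feature matrix reduced by aggregation along rows and columns) can be illustrated only roughly here; the grid size, per-cell statistics, and cosine similarity below are assumptions for a minimal sketch, not the paper's exact construction or its multi-layer matching.

```python
import numpy as np

def aggregation_descriptor(gray):
    """Build a compact image descriptor from grayscale and gradient direction:
    form a per-cell feature matrix on a coarse 16x16 grid, then aggregate it
    along rows and columns into two vectors (rough sketch of the idea)."""
    gray = gray.astype(np.float32)
    gy, gx = np.gradient(gray)
    direction = np.arctan2(gy, gx)                 # gradient direction per pixel
    h, w = gray.shape
    a, b = h // 16, w // 16
    cells_i = gray[:16 * a, :16 * b].reshape(16, a, 16, b).mean(axis=(1, 3))
    cells_d = direction[:16 * a, :16 * b].reshape(16, a, 16, b).mean(axis=(1, 3))
    feat = np.stack([cells_i, cells_d], axis=0)    # the "feature matrix"
    row_vec = feat.mean(axis=2).ravel()            # aggregate along columns
    col_vec = feat.mean(axis=1).ravel()            # aggregate along rows
    return np.concatenate([row_vec, col_vec])

def similarity(desc_a, desc_b):
    """Cosine similarity between two aggregation descriptors."""
    return float(desc_a @ desc_b /
                 (np.linalg.norm(desc_a) * np.linalg.norm(desc_b) + 1e-9))
```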
Show Figures

Figure 1: The visual SLAM framework. The main contributions of our work are highlighted within the yellow dashed box. (a) Pose estimation incorporating principal direction information; (b) descriptor extraction; (c) similarity calculation and loop-closure detection.
Figure 2: Pose estimation incorporating principal direction information. The small rectangles represent feature points, and the straight lines with arrows represent the principal directions of the feature points.
Figure 3: Loop-closure detection based on aggregation descriptors. The left side shows the aggregation feature descriptor extraction process, while the right side shows the loop-closure-detection process.
Figure 4: Illustration of principal direction feature projection. The purple rectangles represent feature points, while the pink lines indicate the principal directions of the feature points.
Figure 5: Aggregation feature descriptor generation process. Each image generates two feature vectors, aggregated from the feature matrix along the row and column directions.
Figure 6: Examples of scenes in the dataset. (a) Around-view image; (b) fast motion and uneven feature distribution; (c) low luminance; (d) high luminance; (e) image rotation.
Figure 7: Experimental trajectory and loop-closure ground truth. From left to right: sequences 00, 05, 06, 07 from KITTI.
Figure 8: Experimental trajectory and loop-closure ground truth in local datasets.
Figure 9: Precision–recall curves based on the 00 sequence (left) and the local dataset (right).
Figure 10: Maximum recall at 100% precision.
Figure 11: Schematic of image processing. The left image is the original, and the right image has luminance reduced to 15%.
Figure 12: Maximum recall at 100% precision in low-luminance scenarios.
Figure 13: Comparison of trajectory results. The red box indicates a locally magnified trajectory. The left and right images correspond to sequences 05 and 06, both from the KITTI dataset.
18 pages, 7421 KiB  
Article
Enhanced Visual SLAM for Collision-Free Driving with Lightweight Autonomous Cars
by Zhihao Lin, Zhen Tian, Qi Zhang, Hanyang Zhuang and Jianglin Lan
Sensors 2024, 24(19), 6258; https://doi.org/10.3390/s24196258 - 27 Sep 2024
Viewed by 468
Abstract
The paper presents a vision-based obstacle avoidance strategy for lightweight self-driving cars that can be run on a CPU-only device using a single RGB-D camera. The method consists of two steps: visual perception and path planning. The visual perception part uses ORBSLAM3 enhanced with optical flow to estimate the car’s poses and extract rich texture information from the scene. In the path planning phase, the proposed method employs a method combining a control Lyapunov function and control barrier function in the form of a quadratic program (CLF-CBF-QP) together with an obstacle shape reconstruction process (SRP) to plan safe and stable trajectories. To validate the performance and robustness of the proposed method, simulation experiments were conducted with a car in various complex indoor environments using the Gazebo simulation environment. The proposed method can effectively avoid obstacles in the scenes. The proposed algorithm outperforms benchmark algorithms in achieving more stable and shorter trajectories across multiple simulated scenes. Full article
(This article belongs to the Special Issue Intelligent Control Systems for Autonomous Vehicles)
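As a rough illustration of the CLF-CBF-QP idea used in the planning stage (not the authors' formulation, and ignoring their car model and the obstacle shape reconstruction process), the sketch below solves a single-integrator quadratic program with cvxpy: a relaxed Lyapunov constraint pulls the robot toward the goal while a barrier constraint keeps it outside a circular obstacle. Gains, weights, and the obstacle model are assumptions.

```python
import cvxpy as cp
import numpy as np

def clf_cbf_qp_step(x, goal, obs_center, obs_radius, alpha=1.0, gamma=1.0):
    """One control step for a 2D single integrator x_dot = u.
    CLF: V = ||x - goal||^2, relaxed with a slack to keep the QP feasible.
    CBF: h = ||x - obs_center||^2 - obs_radius^2 must satisfy h_dot + gamma*h >= 0."""
    u = cp.Variable(2)
    delta = cp.Variable(nonneg=True)                      # CLF relaxation slack
    e, p = x - goal, x - obs_center
    clf = 2 * e @ u + alpha * float(e @ e) <= delta       # V_dot + alpha*V <= delta
    cbf = 2 * p @ u + gamma * float(p @ p - obs_radius**2) >= 0
    prob = cp.Problem(cp.Minimize(cp.sum_squares(u) + 10 * delta), [clf, cbf])
    prob.solve()
    return np.asarray(u.value)

# Example: step toward the origin while avoiding an obstacle centered at (1, 0).
u = clf_cbf_qp_step(np.array([2.0, 0.2]), np.zeros(2), np.array([1.0, 0.0]), 0.5)
```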
Show Figures

Figure 1: The proposed system workflow. The system comprises two main components: environment perception and path planning. Initially, a PGM map and cost map are constructed. The vehicle, equipped with a visual sensor, extracts point features from the environment and uses relocation to ascertain its position and identify obstacles. A static map is inflated for navigational safety. The vehicle's pose is dynamically updated by tracking map points, and a global path is mapped using CBF. For local path planning, the TEB algorithm is employed. The system updates the vehicle's pose in real time, calculates safe passage areas with CBF, and facilitates optimal, obstacle-free path selection to the destination.
Figure 2: The Intel D435i RGB-D camera utilizes the structured light triangulation method for depth sensing.
Figure 3: Illustration of the point feature matching process. Point features and their associated descriptors (compact representations of local appearance) are matched between consecutive frames using grid IDs, Euclidean distance, and cosine similarity to ensure alignment and temporal consistency. Solid lines connect feature points with map points projected onto the 2D plane in the previous frame, while dashed lines connect feature points with map points projected onto the 2D plane in the current frame.
Figure 4: Illustration of SRP for an obstacle.
Figure 5: Illustrative experiment showing the robot car's navigation from the start to the destination. (a) Feature points detected by the car's vision sensor in a simulated environment; (b) ideal trajectory planned using the proposed method in MATLAB; (c) actual path followed by the car in Rviz; (d) 3D point cloud map of the environment generated by the proposed method; (e) starting position of the car in Gazebo; (f) final position of the car in Gazebo.
Figure 6: The three destinations chosen in the experiment, and a comparison of the proposed method and the PU-PRT algorithm for target point 2. (a) Experimental settings; (b) PU-PRT for target point 2; (c) comparison of the proposed method and PU-PRT.
Figure 7: Comparison of the estimated and actual trajectories from the start point to three target points. (a) Experimental environment in Gazebo; (b–d) ideal trajectories from the start to targets 1–3 planned in MATLAB; (e) 2D grid map constructed in Rviz; (f–h) actual trajectories from the start to targets 1–3 in Rviz.
Figure 8: Variation of variables for target point 3. (a) Variation of CBF; (b) variation of ω; (c) variation of longitudinal distance, lateral distance, and θ.
Figure 9: Comparison of CLF-CBF-QP-SRP, APF, and the Voronoi diagram for three target points. (a,d,g) CLF-CBF-QP-SRP for target points 1–3; (b,e,h) APF for target points 1–3; (c,f,i) Voronoi diagram for target points 1–3.
23 pages, 9746 KiB  
Article
Research on SLAM Localization Algorithm for Orchard Dynamic Vision Based on YOLOD-SLAM2
by Zhen Ma, Siyuan Yang, Jingbin Li and Jiangtao Qi
Agriculture 2024, 14(9), 1622; https://doi.org/10.3390/agriculture14091622 - 16 Sep 2024
Viewed by 526
Abstract
With the development of agriculture, the complexity and dynamism of orchard environments pose challenges to the perception and positioning of inter-row environments for agricultural vehicles. This paper proposes a method for extracting navigation lines and measuring pedestrian obstacles. The improved YOLOv5 algorithm is used to detect tree trunks between left and right rows in orchards. The experimental results show that the average angle deviation of the extracted navigation lines was less than 5 degrees, verifying its accuracy. Due to the variable posture of pedestrians and ineffective camera depth, a distance measurement algorithm based on a four-zone depth comparison is proposed for pedestrian obstacle distance measurement. Experimental results showed that within a range of 6 m, the average relative error of distance measurement did not exceed 1%, and within a range of 9 m, the maximum relative error was 2.03%. The average distance measurement time was 30 ms, which could accurately and quickly achieve pedestrian distance measurement in orchard environments. On the publicly available TUM RGB-D dynamic dataset, YOLOD-SLAM2 significantly reduced the RMSE index of absolute trajectory error compared to the ORB-SLAM2 algorithm, which was less than 0.05 m/s. In actual orchard environments, YOLOD-SLAM2 had a higher degree of agreement between the estimated trajectory and the true trajectory when the vehicle was traveling in straight and circular directions. The RMSE index of the absolute trajectory error was less than 0.03 m/s, and the average tracking time was 47 ms, indicating that the YOLOD-SLAM2 algorithm proposed in this paper could meet the accuracy and real-time requirements of agricultural vehicle positioning in orchard environments. Full article
(This article belongs to the Section Agricultural Technology)
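The navigation-line extraction step (detect trunks on both sides of the row, then derive a centerline) might look roughly like the following minimal sketch; the ordered left/right pairing, the least-squares line fit, and the angle-deviation computation are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def navigation_line(left_pts, right_pts):
    """Fit a row centerline from trunk reference points detected on the left
    and right sides of the lane (image coordinates, one (u, v) per trunk).
    Midpoints of left/right pairs are fitted by least squares as u = a*v + b.
    Pairing assumption: sort both sides by image row and pair in order."""
    left = sorted(left_pts, key=lambda p: p[1])
    right = sorted(right_pts, key=lambda p: p[1])
    mids = np.array([((lu + ru) / 2.0, (lv + rv) / 2.0)
                     for (lu, lv), (ru, rv) in zip(left, right)])
    a, b = np.polyfit(mids[:, 1], mids[:, 0], 1)   # u as a linear function of v
    return a, b                                    # centerline: u = a*v + b

# Angle deviation of the fitted line from the image's vertical axis:
# e.g., a slope of 0.05 corresponds to roughly 2.9 degrees.
angle_deg = np.degrees(np.arctan(0.05))
```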
Show Figures

Figure 1: Pinhole camera model.
Figure 2: Pixel coordinate system.
Figure 3: YOLOv5s algorithm network structure.
Figure 4: Overall principle block diagram of ORB-SLAM2.
Figure 5: Algorithm flowchart.
Figure 6: Reference point of the tree trunk.
Figure 7: Problem diagram of extracting navigation lines in Scenario 1.
Figure 8: Problem diagram of extracting navigation lines in Scenario 2.
Figure 9: Navigation line extraction diagram.
Figure 10: Navigation line extraction in different orchard scenes.
Figure 11: Schematic diagram of evaluation indicators.
Figure 12: Angle deviation.
Figure 13: Schematic diagram of the algorithm flow.
Figure 14: Pedestrian distance measurement experiment.
Figure 15: YOLOD-SLAM2 algorithm framework.
Figure 16: Left image and depth map of the binocular camera.
Figure 17: Geometric constraints of epipolar lines.
Figure 18: Comparison of algorithm trajectories.
Figure 19: Actual orchard environment trajectory map.
Figure 20: Trajectory diagram of straight-line driving in the x, y, and z directions.
Figure 21: Trajectory diagram of circular driving in the x, y, and z directions.
18 pages, 5473 KiB  
Article
Visual-Inertial RGB-D SLAM with Encoder Integration of ORB Triangulation and Depth Measurement Uncertainties
by Zhan-Wu Ma and Wan-Sheng Cheng
Sensors 2024, 24(18), 5964; https://doi.org/10.3390/s24185964 - 14 Sep 2024
Viewed by 627
Abstract
In recent years, the accuracy of visual SLAM (Simultaneous Localization and Mapping) technology has seen significant improvements, making it a prominent area of research. However, within the current RGB-D SLAM systems, the estimation of 3D positions of feature points primarily relies on direct measurements from RGB-D depth cameras, which inherently contain measurement errors. Moreover, the potential of triangulation-based estimation for ORB (Oriented FAST and Rotated BRIEF) feature points remains underutilized. To address the singularity of measurement data, this paper proposes the integration of the ORB features, triangulation uncertainty estimation and depth measurements uncertainty estimation, for 3D positions of feature points. This integration is achieved using a CI (Covariance Intersection) filter, referred to as the CI-TEDM (Triangulation Estimates and Depth Measurements) method. Vision-based SLAM systems face significant challenges, particularly in environments, such as long straight corridors, weakly textured scenes, or during rapid motion, where tracking failures are common. To enhance the stability of visual SLAM, this paper introduces an improved CI-TEDM method by incorporating wheel encoder data. The mathematical model of the encoder is proposed, and detailed derivations of the encoder pre-integration model and error model are provided. Building on these improvements, we propose a novel tightly coupled visual-inertial RGB-D SLAM with encoder integration of ORB triangulation and depth measurement uncertainties. Validation on open-source datasets and real-world environments demonstrates that the proposed improvements significantly enhance the robustness of real-time state estimation and localization accuracy for intelligent vehicles in challenging environments. Full article
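The CI (Covariance Intersection) fusion of a triangulated 3D point estimate with a depth-measured estimate follows the standard CI equations; the sketch below implements that textbook form. Choosing the weight ω by minimizing the trace of the fused covariance is a common convention and an assumption here, not necessarily the paper's criterion.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(x_a, P_a, x_b, P_b):
    """Fuse two estimates of the same 3D point with unknown cross-correlation.
    CI: P^-1 = w*P_a^-1 + (1-w)*P_b^-1,  x = P*(w*P_a^-1*x_a + (1-w)*P_b^-1*x_b).
    w is chosen to minimize trace(P)."""
    Pa_inv, Pb_inv = np.linalg.inv(P_a), np.linalg.inv(P_b)

    def fused(w):
        P = np.linalg.inv(w * Pa_inv + (1 - w) * Pb_inv)
        x = P @ (w * Pa_inv @ x_a + (1 - w) * Pb_inv @ x_b)
        return x, P

    w_opt = minimize_scalar(lambda w: np.trace(fused(w)[1]),
                            bounds=(1e-3, 1 - 1e-3), method="bounded").x
    return fused(w_opt)

# Example: triangulated point vs. depth-camera point, each with its own covariance.
x_tri, P_tri = np.array([1.0, 0.2, 3.1]), np.diag([0.02, 0.02, 0.30])
x_dep, P_dep = np.array([1.0, 0.2, 2.9]), np.diag([0.05, 0.05, 0.04])
x_ci, P_ci = covariance_intersection(x_tri, P_tri, x_dep, P_dep)
```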
Show Figures

Figure 1: The system framework diagram, consisting of three main modules: input, function, and output.
Figure 2: An example diagram of reprojection error. Feature matching indicates that points p1 and p2 are projections of the same spatial point P, but the camera pose is initially unknown. Initially, there is a certain distance between the projected point p̂2 of P and the actual point p2; the camera pose is then adjusted to minimize this distance.
Figure 3: The motion model of the wheeled robot using wheel encoders. The figure illustrates the motion model of a mobile robot using wheel encoders in a 2D plane; the model describes the robot's trajectory between its position (x_k, y_k) at time t_k and its position (x_{k+1}, y_{k+1}) at time t_{k+1}.
Figure 4: The process of running datasets in the VEOS3-TEDM algorithm: (a) corridor scene and (b) laboratory scene. Blue frames represent keyframes, red frames represent initial keyframes, and green frames represent current frames.
Figure 5: The process of tracking datasets in the VEOS3-TEDM algorithm: (a) corridor scene and (b) laboratory scene. The green boxes represent key feature points detected by the VEOS3-TEDM algorithm.
Figure 6: The comparison between estimated and true trajectories in the VEOS3-TEDM algorithm: (a) corridor scene and (b) laboratory scene.
Figure 7: The comparison between true and estimated trajectories in the x, y, and z directions using the VEOS3-TEDM algorithm: (a) corridor scene and (b) laboratory scene.
Figure 8: 3D point cloud maps: (a) corridor scene and (b) laboratory scene.
Figure 9: Images of the experimental platform: (a) front view and (b) left view.
Figure 10: The location of various components on the mobile robot: (a) bottom level and (b) upper level.
Figure 11: The process of tracking real-world environments in the VEOS3-TEDM algorithm: (a1,a2) laboratory, (b1,b2) hall, (c1,c2) weak-texture scene, (d1,d2) long straight corridor. The green boxes represent key feature points detected by the VEOS3-TEDM algorithm.
Figure 12: A comparison of estimated and true trajectories in real-world environments using the VEOS3-TEDM algorithm.
19 pages, 20386 KiB  
Article
YOD-SLAM: An Indoor Dynamic VSLAM Algorithm Based on the YOLOv8 Model and Depth Information
by Yiming Li, Yize Wang, Liuwei Lu and Qi An
Electronics 2024, 13(18), 3633; https://doi.org/10.3390/electronics13183633 - 12 Sep 2024
Viewed by 493
Abstract
Aiming at the low positioning accuracy and poor mapping quality of visual SLAM systems caused by poor dynamic-object masks in indoor dynamic environments, an indoor dynamic VSLAM algorithm based on the YOLOv8 model and depth information (YOD-SLAM) is proposed on top of the ORB-SLAM3 system. First, the YOLOv8 model produces the original masks of a priori dynamic objects, and depth information is used to refine these masks. Second, the mask's depth information and center point are used to determine a priori whether a dynamic object has been missed and whether its mask needs to be redrawn. Then, the mask edge distance and depth information are used to judge the motion state of non-prior dynamic objects. Finally, all dynamic object information is removed, and the remaining static regions are used for pose estimation and dense point cloud mapping. Camera positioning accuracy and the quality of the dense point cloud maps are verified using the TUM RGB-D dataset and real-environment data. The results show that YOD-SLAM achieves higher positioning accuracy and better dense point cloud mapping in dynamic scenes than other advanced SLAM systems such as DS-SLAM and DynaSLAM. Full article
(This article belongs to the Section Computer Science & Engineering)
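The mask-correction idea (trim a segmentation mask to the pixels whose depth matches the object, removing over-covered background) can be sketched as below; the median-depth test and relative tolerance are illustrative assumptions, not the paper's exact rule, and the missed-detection check described in the abstract is omitted.

```python
import numpy as np

def refine_mask_with_depth(mask, depth, rel_tol=0.15):
    """Keep only mask pixels whose depth is close to the object's median depth.
    `mask` is a boolean array from the detector (e.g., YOLOv8 segmentation);
    `depth` is the aligned depth image in meters."""
    d = depth[mask]
    d = d[d > 0]                      # ignore invalid depth readings
    if d.size == 0:
        return mask
    med = np.median(d)
    close = np.abs(depth - med) < rel_tol * med
    return mask & close & (depth > 0)
```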
Show Figures

Figure 1: Overview of YOD-SLAM.
Figure 2: The process of modifying prior dynamic object masks using depth information. (a) The depth image corresponding to the current frame. Through the algorithm presented in this article, the background area that is excessively covered in (b) is removed in (c). The expanded edges of the human body achieve better coverage in (d).
Figure 3: The process of redrawing the mask of previously missed dynamic objects. (a) The depth image corresponding to the current frame. In (b), people in the distance are not covered by the original mask, resulting in missed detections. The mask in (c) is obtained by filling in the mask with the depth information at that location in (a).
Figure 4: The process of excluding prior static objects in motion.
Figure 5: Comparison of estimated trajectories and real trajectories of the four systems.
Figure 6: The results of mask modification on dynamic objects. The three images in each column come from the same moment in their respective datasets. The first row is the depth image corresponding to the current frame; the second is the original mask obtained by YOLOv8; the third is the final mask after our modification.
Figure 7: Comparison of point cloud maps between ORB-SLAM3 and YOD-SLAM in two sets of highly dynamic sequences.
Figure 8: Comparison of point cloud maps between ORB-SLAM3 and YOD-SLAM in low-dynamic and static sequences, where fr2/desk/p is a low-dynamic scene and fr2/rpy is a static scene.
Figure 9: Intel RealSense Depth Camera D455.
Figure 10: Mask processing and ORB feature point extraction in real laboratory environments. Several non-English exhibition boards are leaning against the wall to simulate typical indoor environments. The facial features of the people have been anonymized.
Figure 11: Comparison of dense point cloud mapping between ORB-SLAM3 and YOD-SLAM in real laboratory environments. Map areas affected by dynamic objects are marked with red circles.
25 pages, 4182 KiB  
Article
W-VSLAM: A Visual Mapping Algorithm for Indoor Inspection Robots
by Dingji Luo, Yucan Huang, Xuchao Huang, Mingda Miao and Xueshan Gao
Sensors 2024, 24(17), 5662; https://doi.org/10.3390/s24175662 - 30 Aug 2024
Viewed by 556
Abstract
In recent years, with the widespread application of indoor inspection robots, high-precision, robust environmental perception has become essential for robotic mapping. Addressing the issues of visual–inertial estimation inaccuracies due to redundant pose degrees of freedom and accelerometer drift during the planar motion of mobile robots in indoor environments, we propose a visual SLAM perception method that integrates wheel odometry information. First, the robot’s body pose is parameterized in SE(2) and the corresponding camera pose is parameterized in SE(3). On this basis, we derive the visual constraint residuals and their Jacobian matrices for reprojection observations using the camera projection model. We employ the concept of pre-integration to derive pose-constraint residuals and their Jacobian matrices and utilize marginalization theory to derive the relative pose residuals and their Jacobians for loop closure constraints. This approach solves the nonlinear optimization problem to obtain the optimal pose and landmark points of the ground-moving robot. A comparison with the ORBSLAM3 algorithm reveals that, in the recorded indoor environment datasets, the proposed algorithm demonstrates significantly higher perception accuracy, with root mean square error (RMSE) improvements of 89.2% in translation and 98.5% in rotation for absolute trajectory error (ATE). The overall trajectory localization accuracy ranges between 5 and 17 cm, validating the effectiveness of the proposed algorithm. These findings can be applied to preliminary mapping for the autonomous navigation of indoor mobile robots and serve as a basis for path planning based on the mapping results. Full article
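A minimal sketch of the SE(2)-parameterized body pose being lifted to the SE(3) camera pose through a fixed body-to-camera extrinsic, and of the pinhole reprojection residual used as the visual constraint; the extrinsic, intrinsics, and function names are placeholders, and the Jacobians, pre-integration, and marginalization derived in the paper are not reproduced.

```python
import numpy as np

def se2_to_se3(x, y, theta):
    """Lift a planar body pose (x, y, theta) in SE(2) to a 4x4 SE(3) matrix."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    T[:3, 3] = [x, y, 0.0]
    return T

def reprojection_residual(T_wb, T_bc, K, p_w, uv_obs):
    """Residual between an observed pixel and the projection of landmark p_w.
    T_wb: body pose in the world (lifted from SE(2));
    T_bc: fixed body-to-camera extrinsic; K: camera intrinsics."""
    T_wc = T_wb @ T_bc
    T_cw = np.linalg.inv(T_wc)
    p_c = T_cw[:3, :3] @ p_w + T_cw[:3, 3]   # landmark in the camera frame
    uv = (K @ (p_c / p_c[2]))[:2]            # pinhole projection
    return uv_obs - uv

K = np.array([[520.0, 0.0, 320.0], [0.0, 520.0, 240.0], [0.0, 0.0, 1.0]])  # placeholder intrinsics
```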
Show Figures

Figure 1: Block diagram of mobile robot visual SLAM with integrated wheel speed.
Figure 2: Schematic diagram of the mobile robot coordinate system.
Figure 3: Camera projection model.
Figure 4: Schematic diagram of wheel speed information pre-integration.
Figure 5: Schematic diagram of the square movement in the comparative experiment.
Figure 6: Reference keyframe and current frame in previous tracking: (a) reference keyframe of previous tracking and (b) current frame of previous tracking.
Figure 7: Comparative Experiment One: robot poses and environmental map points obtained by W-VSLAM.
Figure 8: Comparative Experiment Two: robot poses and environmental map points obtained by W-VSLAM.
Figure 9: Comparative Experiment One: trajectory comparison chart of different algorithms.
Figure 10: Comparative Experiment Two: trajectory comparison chart of different algorithms.
Figure 11: Comparative Experiment One: translational component comparison chart of trajectories from different algorithms.
Figure 12: (a) Perception results of robot poses and map points in Experiment One; (b) comparison between Experiment One and the reference trajectory.
Figure 13: (a) Perception results of robot poses and map points in Experiment Two; (b) comparison between Experiment Two and the reference trajectory.
Figure 14: (a) Perception results of robot poses and map points in Experiment Three; (b) comparison between Experiment Three and the reference trajectory.
Figure 15: Indoor long corridor environment trajectory; Rviz result diagram.
Figure 16: Comparison chart of trajectories in the indoor long corridor environment.
Figure 17: Comparison chart of translational components of trajectories in the indoor long corridor environment.
Figure 18: Absolute accuracy of rotational estimation in the indoor long corridor environment.
Figure 19: Relative accuracy of rotational estimation in the indoor long corridor environment (with a 1° increment).
24 pages, 1413 KiB  
Article
Loop Detection Method Based on Neural Radiance Field BoW Model for Visual Inertial Navigation of UAVs
by Xiaoyue Zhang, Yue Cui, Yanchao Ren, Guodong Duan and Huanrui Zhang
Remote Sens. 2024, 16(16), 3038; https://doi.org/10.3390/rs16163038 - 19 Aug 2024
Viewed by 493
Abstract
The loop closure detection (LCD) methods in Unmanned Aerial Vehicle (UAV) Visual Inertial Navigation System (VINS) are often affected by issues such as insufficient image texture information and limited observational perspectives, resulting in constrained UAV positioning accuracy and reduced capability to perform complex tasks. This study proposes a Bag-of-Words (BoW) LCD method based on Neural Radiance Field (NeRF), which estimates camera poses from existing images and achieves rapid scene reconstruction through NeRF. A method is designed to select virtual viewpoints and render images along the flight trajectory using a specific sampling approach to expand the limited observational angles, mitigating the impact of image blur and insufficient texture information at specific viewpoints while enlarging the loop closure candidate frames to improve the accuracy and success rate of LCD. Additionally, a BoW vector construction method that incorporates the importance of similar visual words and an adapted virtual image filtering and comprehensive scoring calculation method are designed to determine loop closures. Applied to VINS-Mono and ORB-SLAM3, and compared with the advanced BoW model LCDs of the two systems, results indicate that the NeRF-based BoW LCD method can detect more than 48% additional accurate loop closures, while the system’s navigation positioning error mean is reduced by over 46%, validating the effectiveness and superiority of the proposed method and demonstrating its significant importance for improving the navigation accuracy of VINS. Full article
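For context on the BoW scoring being extended here, the standard tf-idf bag-of-words vector and L1 similarity score (the form used by DBoW-style loop detectors) look like the sketch below; the paper's additional weighting by the importance of similar visual words and its virtual-image filtering are not reproduced, and the dictionary/IDF inputs are assumed to come from an existing vocabulary.

```python
def bow_vector(word_ids, idf):
    """Build an L1-normalized tf-idf BoW vector from the visual-word ids of one
    image. `idf` maps word id -> inverse document frequency from the vocabulary."""
    v = {}
    for w in word_ids:
        v[w] = v.get(w, 0.0) + idf[w]
    norm = sum(abs(x) for x in v.values()) or 1.0
    return {w: x / norm for w, x in v.items()}

def l1_score(v1, v2):
    """Similarity in [0, 1]: s = 1 - 0.5 * sum_w |v1_w - v2_w| over the union of words."""
    words = set(v1) | set(v2)
    return 1.0 - 0.5 * sum(abs(v1.get(w, 0.0) - v2.get(w, 0.0)) for w in words)
```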
Show Figures

Figure 1: Framework of the BoW LCD method based on NeRF and its position in VINS.
Figure 2: Positions of the central pixel and surrounding pixels.
Figure 3: Example of an Instant-NGP virtual-view camera pose.
Figure 4: Quadtree uniform feature point extraction.
Figure 5: Comparison of reconstructed data of three pose estimation schemes.
Figure 6: Feature matching between a real image (left) and a synthetic image (right).
Figure 7: The loop closure frame detection results for the two approaches used in VINS-Mono.
Figure 8: The loop closure frame detection results for the two approaches used in ORB-SLAM3.
Figure 9: Example of additional loop-closure matching results.
Figure 10: The ground truth and trajectories of the two methods in VINS-Mono.
Figure 11: Statistical data of the APE versus image frame index in VINS-Mono.
Figure 12: The APE statistics of the BoW LCD method.
Figure 13: The APE statistics of the NeRF-based BoW model LCD method.
Figure 14: The distribution of APE in the VINS-Mono system, shown by the color of the trajectory.
Figure 15: The ground truth and trajectories of the two methods in ORB-SLAM3.
Figure 16: Statistical data of the APE versus image frame index in ORB-SLAM3.
Figure 17: The APE statistics of the BoW LCD method in ORB-SLAM3.
Figure 18: The APE statistics of the NeRF-based BoW model LCD method in ORB-SLAM3.
Figure 19: The distribution of APE in the ORB-SLAM3 system, shown by the color of the trajectory.
Figure 20: Distribution of detected loop closures as a function of the threshold r.
25 pages, 23704 KiB  
Article
PE-SLAM: A Modified Simultaneous Localization and Mapping System Based on Particle Swarm Optimization and Epipolar Constraints
by Cuiming Li, Zhengyu Shang, Jinxin Wang, Wancai Niu and Ke Yang
Appl. Sci. 2024, 14(16), 7097; https://doi.org/10.3390/app14167097 - 13 Aug 2024
Viewed by 603
Abstract
Due to various typical unstructured factors in the environment of photovoltaic power stations, such as high feature similarity, weak textures, and simple structures, the motion model of the ORB-SLAM2 algorithm performs poorly, leading to a decline in tracking accuracy. To address this issue, we propose PE-SLAM, which improves the ORB-SLAM2 algorithm’s motion model by incorporating the particle swarm optimization algorithm combined with epipolar constraint to eliminate mismatches. First, a new mutation strategy is proposed to introduce perturbations to the pbest (personal best value) during the late convergence stage of the PSO algorithm, thereby preventing the PSO algorithm from falling into local optima. Then, the improved PSO algorithm is used to solve the fundamental matrix between two images based on the feature matching relationships obtained from the motion model. Finally, the epipolar constraint is applied using the computed fundamental matrix to eliminate incorrect matches produced by the motion model, thereby enhancing the tracking accuracy and robustness of the ORB-SLAM2 algorithm in unstructured photovoltaic power station scenarios. In feature matching experiments, compared to the ORB algorithm and the ORB+HAMMING algorithm, the ORB+PE-match algorithm achieved an average accuracy improvement of 19.5%, 14.0%, and 6.0% in unstructured environments, respectively, with better recall rates. In the trajectory experiments of the TUM dataset, PE-SLAM reduced the average absolute trajectory error compared to ORB-SLAM2 by 29.1% and the average relative pose error by 27.0%. In the photovoltaic power station scene mapping experiment, the dense point cloud map constructed has less overlap and is complete, reflecting that PE-SLAM has basically overcome the unstructured factors of the photovoltaic power station scene and is suitable for applications in this scene. Full article
(This article belongs to the Special Issue Autonomous Vehicles and Robotics)
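Once a fundamental matrix has been estimated (by the improved PSO in the paper, or by any other solver), the epipolar-constraint check used to cull mismatches reduces to a point-to-epipolar-line distance test. The sketch below shows that standard test; the pixel threshold is an assumption.

```python
import numpy as np

def reject_mismatches(pts1, pts2, F, thresh_px=1.5):
    """Keep matches whose symmetric point-to-epipolar-line distance is small.
    pts1, pts2: Nx2 arrays of matched pixel coordinates; F: 3x3 fundamental matrix.
    (F can come from any estimator, e.g. cv2.findFundamentalMat with FM_RANSAC.)"""
    ones = np.ones((len(pts1), 1))
    x1 = np.hstack([pts1, ones])            # homogeneous points in image 1
    x2 = np.hstack([pts2, ones])            # homogeneous points in image 2
    l2 = x1 @ F.T                           # epipolar lines in image 2 (F x1)
    l1 = x2 @ F                             # epipolar lines in image 1 (F^T x2)
    d2 = np.abs(np.sum(x2 * l2, axis=1)) / np.hypot(l2[:, 0], l2[:, 1])
    d1 = np.abs(np.sum(x1 * l1, axis=1)) / np.hypot(l1[:, 0], l1[:, 1])
    keep = (d1 < thresh_px) & (d2 < thresh_px)
    return pts1[keep], pts2[keep]
```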
Show Figures

Figure 1: Photovoltaic power plant scene images.
Figure 2: PE-SLAM system framework diagram.
Figure 3: Schematic diagram of the motion model.
Figure 4: Schematic diagram of the epipolar constraint.
Figure 5: Schematic diagram of matching point pairs for epipolar constraint verification.
Figure 6: Improved mutation strategy for particle perturbation.
Figure 7: Particle update schematic.
Figure 8: Optical images of the OxfordVGG dataset.
Figure 9: Influence of important parameters on accuracy and time consumption.
Figure 10: Line chart of fitness change.
Figure 11: Matching accuracy based on the dataset.
Figure 12: The recall rate of matching results based on the dataset.
Figure 13: Unstructured scenarios of the datasets.
Figure 14: Trajectory error plot of the structure_notexture sequence.
Figure 15: Trajectory error plot of the nostructure_texture sequence.
Figure 16: Trajectory error plot of the structure_texture sequence.
Figure 17: The 3D map generated from the structure_notexture sequence.
Figure 18: The 3D map generated from the nostructure_texture sequence.
Figure 19: The 3D map generated from the structure_texture sequence.
Figure 20: Robot model based on Gazebo.
Figure 21: Photovoltaic power station simulation scene based on Gazebo.
Figure 22: Trajectory error plot of the Si sequence.
Figure 23: The 3D map generated from the Si sequence.
Figure 24: Partial images of indoor scenes.
Figure 25: Partial images of outdoor scenes.
Figure 26: The 3D map generated from the In sequence.
Figure 27: The 3D map generated from the Real sequence.
21 pages, 12275 KiB  
Article
Segmentation Point Simultaneous Localization and Mapping: A Stereo Vision Simultaneous Localization and Mapping Method for Unmanned Surface Vehicles in Nearshore Environments
by Xiujing Gao, Xinzhi Lin, Fanchao Lin and Hongwu Huang
Electronics 2024, 13(16), 3106; https://doi.org/10.3390/electronics13163106 - 6 Aug 2024
Viewed by 864
Abstract
Unmanned surface vehicles (USVs) in nearshore areas are prone to environmental occlusion and electromagnetic interference, which can lead to the failure of traditional satellite-positioning methods. This paper utilizes a visual simultaneous localization and mapping (vSLAM) method to achieve USV positioning in nearshore environments. To address the issues of uneven feature distribution, erroneous depth information, and frequent viewpoint jitter in the visual data of USVs operating in nearshore environments, we propose a stereo vision SLAM system tailored for nearshore conditions: SP-SLAM (Segmentation Point-SLAM). This method is based on ORB-SLAM2 and incorporates a distance segmentation module, which filters feature points from different regions and adaptively adjusts the impact of outliers on iterative optimization, reducing the influence of erroneous depth information on motion scale estimation in open environments. Additionally, our method uses the Sum of Absolute Differences (SAD) for matching image blocks and quadric interpolation to obtain more accurate depth information, constructing a complete map. The experimental results on the USVInland dataset show that SP-SLAM solves the scaling constraint failure problem in nearshore environments and significantly improves the robustness of the stereo SLAM system in such environments. Full article
(This article belongs to the Section Electrical and Autonomous Vehicles)
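The SAD block matching with quadratic (parabolic) sub-pixel interpolation used to refine depth can be sketched as follows; the window size, disparity range, and the parabola fit over the three costs around the best integer disparity are standard choices assumed here rather than the paper's exact settings.

```python
import numpy as np

def sad_disparity(left, right, u, v, max_disp=64, win=5):
    """Estimate the disparity of pixel (u, v) in the rectified left image by
    minimizing the Sum of Absolute Differences over a window, then refine it
    with quadratic interpolation of the cost around the best integer disparity.
    Assumes u >= max_disp + win//2 so all right-image windows are in bounds."""
    h = win // 2
    ref = left[v - h:v + h + 1, u - h:u + h + 1].astype(np.float32)
    costs = np.array([
        np.abs(ref - right[v - h:v + h + 1,
                           u - d - h:u - d + h + 1].astype(np.float32)).sum()
        for d in range(max_disp)
    ])
    d = int(np.argmin(costs))
    if 0 < d < max_disp - 1:                       # parabolic sub-pixel refinement
        c_m, c_0, c_p = costs[d - 1], costs[d], costs[d + 1]
        d += 0.5 * (c_m - c_p) / (c_m - 2 * c_0 + c_p + 1e-9)
    return d                                        # depth = fx * baseline / d
```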
Show Figures

Figure 1: Nearshore environment images: (a) river channel, (b) coastal area, and (c) lake.
Figure 2: Structure of the SP-SLAM system.
Figure 3: Rotation invariance and scale invariance of ORB features.
Figure 4: Morphological dilation: (a) the original image, (b) the structural element, and (c) the dilated image; the gray areas represent the pixel distribution before dilation, while the black areas denote the pixels filled during dilation.
Figure 5: Distance segmentation: (a) the original image, (b) the image after distance segmentation.
Figure 6: Quadric interpolation diagram.
Figure 7: Three-dimensional point updating.
Figure 8: USVInland acquisition platform: (a) physical picture of the unmanned surface vehicle, (b) front view, and (c) top view.
Figure 9: Distance segmentation results with different structuring elements: the first row is the original image, the second row is the binary image segmented by the iterative threshold, and the third to fifth rows are segmented with different structural elements.
Figure 10: Nearshore feature extraction results. (a–d) show the extraction results in different scenarios, where green points represent foreground feature points and red points represent background feature points.
Figure 11: USVInland real environment presentation.
Figure 12: Comparison of motion trajectories.
Figure 13: Trajectory comparison on KITTI sequences 00 and 01.
Figure 14: N03-5 mapping results: (a) the real scene image and mapping results of the riverbanks; (b) the satellite top view of the river and the overall mapping results. The red line indicates the approximate path of the unmanned surface vehicle.
Figure 15: N02-4 mapping results: (a) real scene images of both ends of the river, both of which are bridge scenes; (b) the satellite top view of the river and the overall mapping results. The red line indicates the approximate path of the USV.
15 pages, 4119 KiB  
Article
Visual Navigation Algorithms for Aircraft Fusing Neural Networks in Denial Environments
by Yang Gao, Yue Wang, Lingyun Tian, Dongguang Li and Fenming Wang
Sensors 2024, 24(15), 4797; https://doi.org/10.3390/s24154797 - 24 Jul 2024
Viewed by 623
Abstract
A lightweight aircraft visual navigation algorithm that fuses neural networks is proposed to address the limited computing power available during offline operation of aircraft edge computing platforms in satellite-denied environments with complex working scenarios. The algorithm uses object detection to label dynamic objects within complex scenes and eliminates dynamic feature points to improve feature extraction quality and thereby navigation accuracy. The algorithm was validated on an aircraft edge computing platform and compared with existing methods through experiments on the TUM public dataset and physical flight experiments. The results show that, while meeting the system's real-time requirements, the proposed algorithm improves navigation accuracy and is more robust than the monocular ORB-SLAM2 method. Full article
(This article belongs to the Section Sensors and Robotics)
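The accuracy comparisons in this entry (and in several others in this listing) are reported as absolute trajectory error. A minimal ATE RMSE computation for two time-associated trajectories is shown below; the Umeyama/SE(3) alignment step that tools such as evo perform beforehand is omitted for brevity.

```python
import numpy as np

def ate_rmse(est_xyz, gt_xyz):
    """Root-mean-square absolute trajectory error between an estimated and a
    ground-truth trajectory that are already time-associated and aligned.
    Both inputs are Nx3 arrays of positions."""
    err = est_xyz - gt_xyz
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))
```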
Show Figures

Figure 1: Network of YOLOv5 [29].
Figure 2: The network of CBAM-YOLOv5.
Figure 3: Flow chart of the YOLO-SVO system.
Figure 4: Processing flow of dynamic pixel marking.
Figure 5: Static feature point extraction.
Figure 6: Comparison of the original feature extraction and the static feature extraction: (a) the result of the original feature algorithm; (b) the result of the static feature algorithm.
Figure 7: Pose estimation error. (a) Trajectories parsed by monocular ORB-SLAM2; (b) the RPE of the monocular ORB-SLAM2 trajectories; (c) trajectories parsed by our algorithm; (d) the RPE of our algorithm's trajectories.
Figure 8: The experimental UAV platform.
Figure 9: The flight trajectory in the real world.
Figure 10: The flight scene.
Figure 11: The result of the static feature algorithm extraction.
Figure 12: Comparison of the absolute trajectory error. (a) Trajectories parsed by our algorithm; (b) the RPE of our algorithm's trajectories; (c) trajectories parsed by monocular ORB-SLAM2; (d) the RPE of the monocular ORB-SLAM2 trajectories.
22 pages, 16538 KiB  
Article
BY-SLAM: Dynamic Visual SLAM System Based on BEBLID and Semantic Information Extraction
by Daixian Zhu, Peixuan Liu, Qiang Qiu, Jiaxin Wei and Ruolin Gong
Sensors 2024, 24(14), 4693; https://doi.org/10.3390/s24144693 - 19 Jul 2024
Viewed by 890
Abstract
SLAM is a critical technology for enabling autonomous navigation and positioning in unmanned vehicles. Traditional visual simultaneous localization and mapping algorithms are built upon the assumption of a static scene, overlooking the impact of dynamic targets within real-world environments. Interference from dynamic targets can significantly degrade the system’s localization accuracy or even lead to tracking failure. To address these issues, we propose a dynamic visual SLAM system named BY-SLAM, which is based on BEBLID and semantic information extraction. Initially, the BEBLID descriptor is introduced to describe Oriented FAST feature points, enhancing both feature point matching accuracy and speed. Subsequently, FasterNet replaces the backbone network of YOLOv8s to expedite semantic information extraction. By using the results of DBSCAN clustering object detection, a more refined semantic mask is obtained. Finally, by leveraging the semantic mask and epipolar constraints, dynamic feature points are discerned and eliminated, allowing for the utilization of only static feature points for pose estimation and the construction of a dense 3D map that excludes dynamic targets. Experimental evaluations are conducted on both the TUM RGB-D dataset and real-world scenarios and demonstrate the effectiveness of the proposed algorithm at filtering out dynamic targets within the scenes. On average, the localization accuracy for the TUM RGB-D dataset improves by 95.53% compared to ORB-SLAM3. Comparative analyses against classical dynamic SLAM systems further corroborate the improvement in localization accuracy, map readability, and robustness achieved by BY-SLAM. Full article
(This article belongs to the Section Navigation and Positioning)
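The DBSCAN-based refinement of a detection into a tighter semantic mask (cluster the depth values inside the box and keep the dominant cluster as the object) can be sketched with scikit-learn as below; eps, min_samples, and the "largest cluster = object" rule are assumptions rather than the paper's exact parameters.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def box_to_mask(depth, box, eps=0.05, min_samples=50):
    """Turn a detector bounding box into a pixel mask by clustering the depth
    values inside the box with DBSCAN and keeping the largest cluster."""
    x1, y1, x2, y2 = box
    roi = depth[y1:y2, x1:x2]
    ys, xs = np.nonzero(roi > 0)                   # valid-depth pixels in the box
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(
        roi[ys, xs].reshape(-1, 1))
    mask = np.zeros_like(depth, dtype=bool)
    if (labels >= 0).any():
        best = np.bincount(labels[labels >= 0]).argmax()   # dominant depth cluster
        keep = labels == best
        mask[y1 + ys[keep], x1 + xs[keep]] = True
    return mask
```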
Show Figures

Figure 1: Framework of BY-SLAM. Two new threads have been added to the ORB-SLAM3 framework: dynamic object detection and dense 3D mapping. The ORB-SLAM3 framework is shown in gray; the new features added in BY-SLAM are shown in blue.
Figure 2: BEBLID descriptor extraction flow.
Figure 3: Framework of the proposed YOLOv8s-FasterNet.
Figure 4: Principle of PConv.
Figure 5: DBSCAN clustering experiment. (a) RGB image of the 90th frame of the fr3_walking_xyz sequence; (b) object detection results; (c) depth image of the 90th frame of the fr3_walking_xyz sequence; (d) DBSCAN clustering results.
Figure 6: Epipolar constraint.
Figure 7: Feature matching results. The green lines represent the feature correspondences between matching points. The scenes from top to bottom are consecutive frames of the fr3_walking_xyz dynamic sequence, v_home, and i_parking (1–3). (a) ORB algorithm results; (b) results of our algorithm.
Figure 8: Dynamic feature point elimination experiment on the TUM RGB-D dataset. (a) RGB image of the 87th frame of the fr3_walking_static sequence; (b) object detection results; (c) DBSCAN clustering results; (d) feature extraction results of ORB-SLAM3; (e) depth image of the 87th frame of the fr3_walking_static sequence; (f) direct rejection of all feature points in the target frame; (g) direct elimination of dynamic feature points within the semantic mask; (h) the result of eliminating dynamic feature points using our algorithm.
Figure 9: Real-world scene feature point elimination experiment. (a) RGB image of frame 163 of the real-world dataset; (b) object detection results; (c) DBSCAN clustering results; (d) feature extraction results of ORB-SLAM3; (e) depth image of frame 163 of the real-world dataset; (f) direct rejection of all feature points in the target frame; (g) direct rejection of dynamic feature points within the semantic mask; (h) the result of our algorithm for eliminating dynamic feature points.
Figure 10: ATE of ORB-SLAM3 and BY-SLAM on selected sequences from the TUM RGB-D dataset. In the ATE evaluation, the real camera trajectory is shown as the black line, the estimated camera trajectory as the blue line, and the discrepancy between the two as the red line. The scenes from top to bottom are (1) fr3_sitting_static, (2) fr3_walking_half, (3) fr3_walking_rpy, (4) fr3_walking_static, and (5) fr3_walking_xyz. (a) ORB-SLAM3 results; (b) BY-SLAM results.
Figure 11: RPE of ORB-SLAM3 and BY-SLAM on selected sequences from the TUM RGB-D dataset. In the RPE evaluation, the blue lines denote the RPE at each moment. The scenes from top to bottom are (1) fr3_sitting_static, (2) fr3_walking_half, (3) fr3_walking_rpy, (4) fr3_walking_static, and (5) fr3_walking_xyz. (a) ORB-SLAM3 results; (b) BY-SLAM results.
Figure 12: Dense 3D maps for the TUM RGB-D dataset. The scenes from top to bottom are (1) fr3_sitting_static and (2) fr3_walking_static. (a) ORB-SLAM3; (b) BY-SLAM.
Figure 13: Dense 3D maps of the real-world scene. (a) ORB-SLAM3; (b) BY-SLAM.
17 pages, 6246 KiB  
Article
YPL-SLAM: A Simultaneous Localization and Mapping Algorithm for Point–line Fusion in Dynamic Environments
by Xinwu Du, Chenglin Zhang, Kaihang Gao, Jin Liu, Xiufang Yu and Shusong Wang
Sensors 2024, 24(14), 4517; https://doi.org/10.3390/s24144517 - 12 Jul 2024
Viewed by 779
Abstract
Simultaneous Localization and Mapping (SLAM) is one of the key technologies with which to address the autonomous navigation of mobile robots, utilizing environmental features to determine a robot’s position and create a map of its surroundings. Currently, visual SLAM algorithms typically yield precise and dependable outcomes in static environments, and many algorithms opt to filter out the feature points in dynamic regions. However, when there is an increase in the number of dynamic objects within the camera’s view, this approach might result in decreased accuracy or tracking failures. Therefore, this study proposes a solution called YPL-SLAM based on ORB-SLAM2. The solution adds a target recognition and region segmentation module to determine the dynamic region, potential dynamic region, and static region; determines the state of the potential dynamic region using the RANSAC method with polar geometric constraints; and removes the dynamic feature points. It then extracts the line features of the non-dynamic region and finally performs the point–line fusion optimization process using a weighted fusion strategy, considering the image dynamic score and the number of successful feature point–line matches, thus ensuring the system’s robustness and accuracy. A large number of experiments have been conducted using the publicly available TUM dataset to compare YPL-SLAM with globally leading SLAM algorithms. The results demonstrate that the new algorithm surpasses ORB-SLAM2 in terms of accuracy (with a maximum improvement of 96.1%) while also exhibiting a significantly enhanced operating speed compared to Dyna-SLAM. Full article
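YPL-SLAM decides whether feature points inside a potentially dynamic region are actually moving by testing them against epipolar geometry estimated with RANSAC. A minimal sketch of such a test is shown below: a fundamental matrix is fitted to the matches, and a match is flagged as dynamic when its distance to the corresponding epipolar line exceeds a pixel threshold. The OpenCV calls are standard, but the threshold and the overall flow are illustrative assumptions, not the YPL-SLAM implementation.

```python
import cv2
import numpy as np

def flag_dynamic_points(pts_prev, pts_curr, thresh_px=1.0):
    """Flag matches that violate the epipolar constraint between two frames.

    pts_prev, pts_curr : (N, 2) float32 arrays of matched pixel coordinates.
    Returns a boolean array, True where a match is likely on a moving object.
    """
    if len(pts_prev) < 8:                        # not enough matches for RANSAC
        return np.zeros(len(pts_prev), dtype=bool)

    F, _inliers = cv2.findFundamentalMat(
        pts_prev, pts_curr, cv2.FM_RANSAC, 1.0, 0.99)
    if F is None:
        return np.zeros(len(pts_prev), dtype=bool)

    # Epipolar lines in the current image induced by points of the previous image.
    lines = cv2.computeCorrespondEpilines(
        pts_prev.reshape(-1, 1, 2), 1, F).reshape(-1, 3)     # (a, b, c) per point

    x = np.hstack([pts_curr.astype(np.float64),
                   np.ones((len(pts_curr), 1))])             # homogeneous points
    num = np.abs(np.sum(lines * x, axis=1))                  # |a*u + b*v + c|
    den = np.sqrt(lines[:, 0] ** 2 + lines[:, 1] ** 2) + 1e-12
    return (num / den) > thresh_px                           # point-to-line distance
```

Points flagged this way are removed before the point–line fusion step, so only static features contribute to pose optimization.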
Show Figures

Figure 1: The SLAM framework of the YPL-SLAM algorithm, with the additional threads for target recognition and region delineation depicted in the blue box. Meanwhile, the yellow box illustrates the line feature extraction and processing module.
Figure 2: The YOLOv5s algorithm recognizes the targets, obtains the targets’ semantic information, and classifies these targets into static, dynamic, and potentially dynamic regions. (a,b) Recognizing the targets; (c,d) categorizing the area in which the target is located. Dynamic indicates a dynamic region, uncertain indicates a potential dynamic region, and static indicates a static region.
Figure 3: Geometric constraints on the poles.
Figure 4: The extraction of feature points, the filtering of dynamic feature points, and the extraction of the non-dynamic region’s line features are compared. (a,d,g) are extracted feature points, where the green dots represent the extracted feature points; (b,e,h) are the effects after removing the dynamic feature points; and (c,f,i) are the extracted non-dynamic region’s line features, where the red lines represent the extracted line features.
Figure 5: In the fr3_s_hs sequence, we compare the absolute trajectory error (ATE) and relative pose error (RPE) between ORB-SLAM2 and YPL-SLAM, particularly focusing on the translation drift. The first and second columns of the table display the ATE results, while the third and fourth columns show the RPE results.
Figure 6: In the fr3_s_hs sequence, we compare the ground truth and estimated trajectories. The first column displays the comparison of the 3D trajectories, containing the ground truth and trajectories estimated by both ORB-SLAM2 and YPL-SLAM. The second column presents the fitting results for the X, Y, and Z axes, while the third column shows the fitting results for the roll, pitch, and yaw axes.
Figure 7: In the fr3_w_xyz sequence, we compare the absolute trajectory error (ATE) and relative pose error (RPE) between ORB-SLAM2 and YPL-SLAM, focusing on the translation drift performance. The first two columns of the table display the ATE results, while the third and fourth columns show the RPE results.
Figure 8: In the fr3_w_xyz sequence, we compare the ground truth and estimated trajectories. The first column displays the comparison of the 3D trajectories, containing the ground truth and trajectories estimated by both ORB-SLAM2 and YPL-SLAM. The second column presents the fitting results for the X, Y, and Z axes, while the third column shows the fitting results for the roll, pitch, and yaw axes.
36 pages, 30845 KiB  
Article
Semantic Visual SLAM Algorithm Based on Improved DeepLabV3+ Model and LK Optical Flow
by Yiming Li, Yize Wang, Liuwei Lu, Yiran Guo and Qi An
Appl. Sci. 2024, 14(13), 5792; https://doi.org/10.3390/app14135792 - 2 Jul 2024
Viewed by 843
Abstract
Aiming at the problem that dynamic targets in indoor environments lead to low accuracy and large errors in the localization and position estimation of visual SLAM systems and the inability to build maps containing semantic information, a semantic visual SLAM algorithm based on the semantic segmentation network DeepLabV3+ and LK optical flow is proposed based on the ORB-SLAM2 system. First, the dynamic target feature points are detected and rejected based on the lightweight semantic segmentation network DeepLabV3+ and LK optical flow method. Second, the static environment occluded by the dynamic target is repaired using the time-weighted multi-frame fusion background repair technique. Lastly, the filtered static feature points are used for feature matching and position calculation. Meanwhile, the semantic labeling information of static objects obtained based on the lightweight semantic segmentation network DeepLabV3+ is fused with the static environment information after background repair to generate dense point cloud maps containing semantic information, and the semantic dense point cloud maps are transformed into semantic octree maps using the octree spatial segmentation data structure. The localization accuracy of the visual SLAM system and the construction of the semantic maps are verified using the widely used TUM RGB-D dataset and real scene data, respectively. The experimental results show that the proposed semantic visual SLAM algorithm can effectively reduce the influence of dynamic targets on the system, and compared with other advanced algorithms, such as DynaSLAM, it has the highest performance in indoor dynamic environments in terms of localization accuracy and time consumption. In addition, semantic maps can be constructed so that the robot can better understand and adapt to the indoor dynamic environment. Full article
(This article belongs to the Section Robotics and Automation)
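The algorithm summarized above pairs DeepLabV3+ semantic masks with LK optical flow to identify and reject dynamic feature points before pose estimation. The fragment below sketches only the optical-flow side of that idea: points are tracked with cv2.calcOpticalFlowPyrLK, and those whose displacement deviates strongly from the median scene motion are marked dynamic. The median-deviation test and its threshold are simplifying assumptions, not the paper's exact criterion.

```python
import cv2
import numpy as np

def dynamic_by_optical_flow(prev_gray, curr_gray, pts, thresh_px=3.0):
    """Track points with pyramidal LK flow and flag motion outliers as dynamic.

    prev_gray, curr_gray : consecutive grayscale frames (uint8).
    pts : (N, 2) float32 pixel coordinates detected in prev_gray.
    Returns (tracked_pts, dynamic_mask).
    """
    p0 = pts.reshape(-1, 1, 2).astype(np.float32)
    p1, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, p0, None, winSize=(21, 21), maxLevel=3)
    p1 = p1.reshape(-1, 2)
    ok = status.reshape(-1).astype(bool)
    if not np.any(ok):                               # tracking failed everywhere
        return p1, np.zeros(len(pts), dtype=bool)

    flow = p1 - pts                                  # per-point displacement
    median_flow = np.median(flow[ok], axis=0)        # dominant, camera-induced motion
    residual = np.linalg.norm(flow - median_flow, axis=1)

    return p1, ok & (residual > thresh_px)           # moves unlike the rest of the scene
```

In the full system, this flow-based vote is combined with the semantic segmentation result, so a point is rejected only when both cues agree it belongs to a moving object.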
Show Figures

Figure 1: ORB-SLAM2 system framework.
Figure 2: The overall architecture of DeepLabv3+ models.
Figure 3: Sketch of an octree structure.
Figure 4: The overall framework of the semantic visual SLAM system.
Figure 5: Lightweight semantic segmentation network model.
Figure 6: Mobile robotics experiment platform based on the Turtlebot2 robot and the RealSense D435i depth camera.
Figure 7: RealSense D435i depth camera.
Figure 8: Real experimental environment data. (a) Static desktop environment; (b) static office environment; (c) dynamic office environment.
Figure 9: Comparison of feature extraction between ORB-SLAM2 and the proposed algorithm. (a) Original picture; (b) feature extraction effect of the ORB-SLAM2 system; (c) feature extraction effect of the ORB-SLAM2 system adding semantic segmentation; (d) feature extraction effect of the proposed algorithm.
Figure 10: Comparison of non-prior dynamic target feature extraction results. (a) The ORB-SLAM2 system; (b) the proposed improved algorithm.
Figure 11: Comparison of dynamic feature point rejection results. (a) The ORB-SLAM2 system; (b) the proposed improved algorithm.
Figure 12: Comparison of background restoration results for TUM RGB-D data. (a) Original picture; (b) covering masks after semantic segmentation; (c) background restoration results.
Figure 13: Comparison of background restoration results for real scenes. (a) Original frame; (b) background restoration.
Figure 14: Comparison of absolute trajectory error. (a) The “walking_xyz” sequence; (b) the “walking_rpy” sequence; (c) the “walking_halfsphere” sequence.
Figure 15: Comparison of relative position error. (a) The “walking_xyz” sequence; (b) the “walking_rpy” sequence; (c) the “walking_halfsphere” sequence.
Figure 16: Dense point cloud map construction using the ORB-SLAM2 algorithm. (a) The “walking_xyz” sequence; (b) the “walking_rpy” sequence; (c) the “walking_halfsphere” sequence; (d) the “walking_static” sequence.
Figure 17: Dense point cloud map construction using the proposed algorithm. (a) The “walking_xyz” sequence; (b) the “walking_rpy” sequence; (c) the “walking_halfsphere” sequence; (d) the “walking_static” sequence.
Figure 18: Dense point cloud maps of static environments using the proposed algorithm. (a) Static desktop environments; (b) static office environments.
Figure 19: Dynamic office environment dense point cloud map. (a) The ORB-SLAM2 algorithm; (b) the proposed algorithm.
Figure 20: Octree map construction using the ORB-SLAM2 algorithm. (a) The “walking_xyz” sequence; (b) the “walking_rpy” sequence; (c) the “walking_halfsphere” sequence; (d) the “walking_static” sequence.
Figure 21: Octree map construction using the proposed algorithm. (a) The “walking_xyz” sequence; (b) the “walking_rpy” sequence; (c) the “walking_halfsphere” sequence; (d) the “walking_static” sequence.
Figure 22: Dynamic office environment octree map. (a) The ORB-SLAM2 algorithm; (b) the proposed algorithm.
Figure 23: Semantic dense point cloud map construction using the proposed algorithm. (a) The “walking_xyz” sequence; (b) the “walking_static” sequence.
Figure 24: Semantic octree map construction using the proposed algorithm. (a) The “walking_xyz” sequence; (b) the “walking_static” sequence.
Figure 25: Semantic dense point cloud map of the office environment. (a) Static desktop environment; (b) dynamic office environment.
Figure 26: Semantic octree map of the office environment. (a) Static desktop environment; (b) dynamic office environment.
Figure 27: The memory occupation of the map using the proposed algorithm (unit: KB).
Figure 28: The memory occupation of the map using the ORB-SLAM2 algorithm (unit: KB).
21 pages, 36012 KiB  
Article
DFD-SLAM: Visual SLAM with Deep Features in Dynamic Environment
by Wei Qian, Jiansheng Peng and Hongyu Zhang
Appl. Sci. 2024, 14(11), 4949; https://doi.org/10.3390/app14114949 - 6 Jun 2024
Viewed by 1331
Abstract
Visual SLAM technology is one of the important technologies for mobile robots. Existing feature-based visual SLAM techniques suffer from tracking and loop closure performance degradation in complex environments. We propose the DFD-SLAM system to ensure outstanding accuracy and robustness across diverse environments. Initially, building on the ORB-SLAM3 system, we replace the original feature extraction component with the HFNet network and introduce a frame rotation estimation method. This method determines the rotation angles between consecutive frames to select superior local descriptors. Furthermore, we utilize CNN-extracted global descriptors to replace the bag-of-words approach. Subsequently, we develop a precise removal strategy, combining semantic information from YOLOv8 to accurately eliminate dynamic feature points. In the TUM-VI dataset, DFD-SLAM shows an improvement over ORB-SLAM3 of 29.24% in the corridor sequences, 40.07% in the magistrale sequences, 28.75% in the room sequences, and 35.26% in the slides sequences. In the TUM-RGBD dataset, DFD-SLAM demonstrates a 91.57% improvement over ORB-SLAM3 in highly dynamic scenarios. This demonstrates the effectiveness of our approach. Full article
(This article belongs to the Special Issue Intelligent Control and Robotics II)
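DFD-SLAM replaces ORB-SLAM3's bag-of-words place recognition with CNN global descriptors extracted by HFNet. The snippet below shows the retrieval step in its simplest form: cosine similarity between an L2-normalized query descriptor and stored keyframe descriptors, returning the best-scoring candidates above a threshold. The descriptor dimensionality, the threshold, and the database layout are assumptions made for illustration; they are not taken from the DFD-SLAM code.

```python
import numpy as np

class GlobalDescriptorDB:
    """Minimal keyframe database queried by CNN global descriptors."""

    def __init__(self, dim=4096):           # dim is an assumed descriptor size
        self.dim = dim
        self.ids = []                        # keyframe identifiers
        self.descs = np.empty((0, dim), dtype=np.float32)

    @staticmethod
    def _normalize(d):
        return d / (np.linalg.norm(d) + 1e-12)

    def add(self, kf_id, desc):
        """Store an L2-normalized global descriptor for a keyframe."""
        self.ids.append(kf_id)
        self.descs = np.vstack([self.descs, self._normalize(desc)[None, :]])

    def query(self, desc, top_k=3, min_score=0.7):
        """Return (keyframe id, cosine similarity) loop-closure candidates."""
        if not self.ids:
            return []
        scores = self.descs @ self._normalize(desc)       # cosine similarity
        order = np.argsort(scores)[::-1][:top_k]
        return [(self.ids[i], float(scores[i]))
                for i in order if scores[i] >= min_score]
```

Candidates returned by such a query would still be verified geometrically (local descriptor matching plus pose estimation) before a loop closure is accepted.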
Show Figures

Figure 1: System architecture.
Figure 2: The complete process of precise elimination. (a) The segmentation results of YOLOv8. (b) The results of optical flow tracking on the extracted feature points. (c) The results of epipolar constraints. (d,e) The system dividing the detected potential dynamic objects into sub-frames and identifying the dynamic regions within them. In (e), the red boxes indicate dynamic regions, while the green areas indicate static regions. (f) The final retained segmentation results after dilation processing.
Figure 3: Filtering of optical flow vectors that do not meet the requirements.
Figure 4: In a rotating scene, matching is examined before and after frame rotation. The first row uses HFNet descriptors. The second row is the frame identified as rotating, with red points indicating the optimized rotation center. The third row uses ORB descriptors instead.
Figure 5: Matching performance of DFD-SLAM and ORB-SLAM3 under varying lighting and scene conditions. The first row shows the matching performance of ORB-SLAM3 using its strategies. The second row illustrates the matching performance of DFD-SLAM using HFNet for feature point extraction and descriptor matching. In most cases, the deep-features-based extraction method still holds advantages.
Figure 6: Comparison of loop closure detection in monocular mode. The final trajectory maps are shown in (h,i). The numbers annotated above indicate the positions where loop closure detection occurred in each system. Scenes (a–g) correspond to the occurrences of loop closure detection, where the second row indicates the frames where the systems correctly detected loop closures relative to the first row.
Figure 7: Comparison of trajectories between outstanding dynamic SLAM systems and our method in highly dynamic environments. The first row shows the trajectory map for the W/static sequence, the second row for the W/xyz sequence, the third row for the W/rpy sequence, and the fourth row for the W/half sequence. The blue lines represent the system’s result trajectory, the black lines indicate the ground truth, and the red lines show the difference between the two. More prominent and numerous red lines indicate a higher absolute trajectory error, signifying lower tracking accuracy of the system.
Figure 8: Dynamic point culling flowchart in the W/rpy sequence. Each row represents a complete culling process, and each column represents a culling step.