
Communication

Three-Dimensional Reconstruction and Deformation Identification of Slope Models Based on Structured Light Method

1
Faculty of Civil Engineering and Mechanics, Kunming University of Science and Technology, Kunming 650500, China
2
Intelligent Infrastructure Operation and Maintenance Technology Innovation Team, Yunnan Provincial Department of Education, Kunming University of Science and Technology, Kunming 650500, China
3
Faculty of Engineering, China University of Geosciences, Wuhan 430074, China
*
Authors to whom correspondence should be addressed.
Sensors 2024, 24(3), 794; https://doi.org/10.3390/s24030794
Submission received: 18 December 2023 / Revised: 15 January 2024 / Accepted: 22 January 2024 / Published: 25 January 2024
(This article belongs to the Special Issue Sensor-Based Structural Health Monitoring of Civil Infrastructure)
Figure 1. The schematic diagram of the structured light three-dimensional imaging system.
Figure 2. Structured light 3D reconstruction process based on Gray code.
Figure 3. Calibration process of the structured light system.
Figure 4. Pinhole camera model. (O−X^W Y^W Z^W denotes the world coordinate system; O−X^C Y^C Z^C the camera coordinate system; O−xy the image coordinate system; O−uv the pixel coordinate system. The red line illustrates the correspondence between point P and point P′.)
Figure 5. Recognition results of checkerboard grid corner points. (Red circles mark the positions of the identified corner points.)
Figure 6. The projection stage in the calibration process of the structured light system.
Figure 7. Corresponding positions of the chessboard grid corners on the camera imaging plane and the projector's projection plane. (Solid arrows indicate the corresponding points.)
Figure 8. Structured light measurement system for slope modeling.
Figure 9. Workflow of the slope model test.
Figure 10. (a) Photograph of the gypsum sphere; (b) 3D model of the gypsum sphere's surface.
Figure 11. Histogram of the distance errors, computed with the M3C2 algorithm, between the slope model reconstructed by the structured light system and the model reconstructed by laser scanning.
Figure 12. Distribution of distance errors in the slope model reconstructed by the structured light system.
Figure 13. Three-dimensional models of slope failure at different slope angles.
Figure 14. Point cloud variations of adjacent slope models: (a) slope change (α = 31°–37°); (b) slope change (α = 37°–43°); (c) slope change (α = 43°–49°).
Figure 15. Cross-sections of slope models at different slope angles.
Figure 16. Comparative analysis of slope surface changes using the structured light and laser landslide prediction methods: (a) the structured light method; (b) the laser method (including gravel marker data); (c) the laser method (excluding gravel marker data).
Figure A1. Summarized protocol for the physical model test.

Abstract

In this study, we propose a meticulous method for the three-dimensional modeling of slope models using structured light, a swift and cost-effective technique. Our approach aims to enhance the understanding of slope behavior during landslides by capturing and analyzing surface deformations. The methodology involves the initial capture of images at various stages of landslides, followed by the application of the structured light method for precise three-dimensional reconstructions at each stage. The system’s low-cost nature and operational convenience make it accessible for widespread use. Subsequently, a comparative analysis is conducted to identify regions susceptible to severe landslide disasters, providing valuable insights for risk assessment. Our findings underscore the efficacy of this system in facilitating a qualitative analysis of landslide-prone areas, offering a swift and cost-efficient solution for the three-dimensional reconstruction of slope models.

1. Introduction

Landslides are among the most common natural disasters and have long had a profound impact on people's lives. They are also among the most destructive, posing a significant threat to both human life and infrastructure. Therefore, measuring and monitoring landslide deformation behavior is crucial to preventing and mitigating the losses caused by landslide disasters [1,2,3]. The physical slope model, as an effective approach, has been widely adopted to investigate landslide mechanisms.
Physical model testing is a prevalent method in the investigation of landslide disasters [4,5,6]. This approach effectively elucidates various mechanisms underlying landslide phenomena [7,8,9]. By constructing slope physical models in the laboratory, the types of slopes and disasters can be pre-designed for experimentation, aiming to achieve optimal simulation conditions [10,11]. Real-time monitoring of the slope model can be accomplished by installing various sensors and integrating them with software processing systems. The monitoring information obtained in the laboratory provides reliable experimental data for numerical simulation analysis. Therefore, based on the availability and effectiveness of physical model testing [12], extensive research has been conducted on slope system reinforcement and deformation behavior through customized slope physical models [13,14,15]. Pipatpongsa and Fang et al. [16] investigated the loading path and failure range of the slope model positioned on the cushion plane through slope model tests. Yin and Deng et al. [17] simulated the complete reverse slope failure process using a large inclined platform. Wang [18] investigated the behaviors of landslides reinforced with pile-anchors through slope model tests. From the information gathered during the monitoring of various landslides, the surface morphology and overall deformation of slopes constitute critical factors in slope monitoring.
The data obtained from existing slope model measurement systems can be categorized as one-dimensional, two-dimensional, or three-dimensional. Common methods of obtaining one-dimensional data include fiber optic grating sensors, inclinometers, and array displacement transducers [19]. While such sensors can provide relatively precise one-dimensional deformation data for slope models, an excess of sensors is not always advantageous, as it can impair the structural integrity of the model and alter the deformation behavior of the slope [20]. Two-dimensional data are typically obtained through techniques such as Particle Image Velocimetry (PIV) [21,22] and Digital Image Correlation (DIC) [23,24]. Compared with two-dimensional data, three-dimensional data allow a more intuitive depiction of the slope's failure process. Common non-contact methods for acquiring 3D data include 3D laser scanners [25,26], binocular vision [27,28], photometric stereo [29,30], and structured light [31,32,33,34,35,36,37,38,39,40,41,42,43]. Three-dimensional laser scanners are expensive and time-consuming, limiting their application in small-scale physical slope model tests. Binocular vision requires the reconstruction area to lie within the common field of view of the two cameras, complicating the slope model reconstruction process. Photometric stereo can be applied to objects of various sizes at high resolution by adjusting low-cost hardware; however, shiny or reflective surfaces may produce specular highlights that distort the captured images and introduce inaccuracies into the reconstructed surface models. Although machine-learning-based methods can alleviate this issue [44,45,46], they add considerable complexity. Dealing with specular reflections often requires more elaborate algorithms and can limit the method's applicability across materials.
In contrast, the structured light method offers a range of advantages in slope model reconstruction, including fast modeling speed, convenient system setup, and low cost.
Therefore, in this study, a structured-light-based method is utilized to capture three-dimensional point cloud data of the slope model. Using a gypsum sphere as a reference, the accuracy of the structured light system is validated. By adjusting the slope model to simulate slope landslide disasters, three-dimensional point cloud models of the slope at each stage of the landslide are generated. The deformation of the slope model is analyzed, and the critical areas where landslide disasters occur are identified.

2. Structured Light System

The structured light method is a three-dimensional imaging technique based on optical triangulation. As depicted in Figure 1, the structured light three-dimensional imaging system projects encoded structured light onto the surface of the object under examination using a projector. The camera captures the distorted pattern arising from the modulation of the object’s height, and the distorted pattern is subsequently demodulated using a computer. Ultimately, the three-dimensional shape of the object can be determined.
According to different structured light encoding strategies, the structured light field can be categorized into an intensity-modulated light field and a phase-modulated light field. Common intensity-modulated light fields include speckle-structured light fields [31,32,33,34], multi-line-structured light fields [35,36], and binary-structured light fields [37,38,39], while common phase-modulated light fields include single- and multi-frequency sinusoidal gratings [40,41,42,43]. The Gray code method belongs to the category of intensity-modulated light fields. It is a three-dimensional imaging method with good robustness and noise resistance, achieving high precision in object reconstruction. Therefore, in this study, the structured light three-dimensional imaging method based on the Gray code method is employed to reconstruct the physical model of the slope. The reconstruction steps are illustrated in Figure 2.
The Gray code is a binary encoding in which any two adjacent code words differ by exactly one bit, and the code words for the minimum and maximum values likewise differ by only one bit. The Gray code is therefore an encoding with minimal susceptibility to bit errors. During the projection phase, the Gray code patterns required for the experiment must be designed.
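The single-bit-change property follows directly from the standard binary-to-Gray conversion n XOR (n >> 1) and can be verified in a few lines (a minimal sketch; the function names are illustrative, not from the paper):

```python
def to_gray(n: int) -> int:
    """Convert a binary integer to its Gray code."""
    return n ^ (n >> 1)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two integers."""
    return bin(a ^ b).count("1")

# Adjacent code words differ in exactly one bit -- the property that
# makes Gray code robust to stripe-boundary misclassification.
codes = [to_gray(n) for n in range(16)]
assert all(hamming(codes[i], codes[i + 1]) == 1 for i in range(15))
# Wrap-around: the minimum and maximum code words also differ by one bit.
assert hamming(codes[0], codes[-1]) == 1
```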

2.1. Design of Gray Code Coding Pattern

To uniquely encode each pixel in the image with Gray code values, a pixel coordinate system must be established on the two-dimensional plane. Encoding is performed independently in the horizontal and vertical directions of the image, with the stipulation that the resolution of the Gray code projection pattern is not inferior to that of the projector. Thus, assuming the projector's resolution is L × W pixels, the horizontal stripe level l and the vertical stripe level w of the designed Gray code pattern must satisfy Equation (1):
$$\begin{cases} l = \lceil \log_2 L \rceil \\ w = \lceil \log_2 W \rceil \end{cases} \tag{1}$$
After devising the Gray code stripe levels, a complementary pattern is designed for each stripe-encoding pattern at the same stripe level, with the positions of the black and white stripes inverted. This facilitates a more precise computation of grayscale values within the regions covered by the Gray code patterns on the measured object. Because variations in illumination angle can leave shadowed areas in the captured images, identifying these shadow regions early in the decoding process markedly improves decoding efficiency. Consequently, two supplementary patterns, one all-black and one all-white, are projected onto the object to mitigate the impact of shadow regions [47]. Given a projector resolution of L × W pixels, a total of (l + w) × 2 + 2 Gray code patterns must therefore be designed.
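As a concrete check of Equation (1) and the pattern count, the sketch below computes the stripe levels for a 1920 × 1080 projector and generates one bit plane of the stripe sequence (an illustrative sketch, not the authors' code; `gray_pattern` is a hypothetical helper):

```python
import numpy as np

def stripe_levels(L: int, W: int) -> tuple:
    """Equation (1): smallest stripe levels whose resolution is not
    inferior to the projector's L x W pixels."""
    return int(np.ceil(np.log2(L))), int(np.ceil(np.log2(W)))

def gray_pattern(level: int, n_levels: int, length: int) -> np.ndarray:
    """One bit plane of the Gray-code stripe sequence as a 0/255 row;
    tiling it vertically gives the full projected pattern."""
    idx = np.arange(length)
    gray = idx ^ (idx >> 1)                      # Gray code of each column index
    bit = (gray >> (n_levels - 1 - level)) & 1   # extract this level's bit
    return (bit * 255).astype(np.uint8)

l, w = stripe_levels(1920, 1080)
# (l + w) normal patterns, the same number inverted, plus all-black/all-white.
n_patterns = (l + w) * 2 + 2
print(l, w, n_patterns)   # 11 11 46
```

The result matches the 46 patterns and 11 stripe levels reported in Section 3.2 for the 1920 × 1080 projector used in this study.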

2.2. Gray Code Decoding

Decoding the Gray Code is the process of obtaining the decimal code value corresponding to each pixel’s Gray Code value. The decoding calculations are shown in Equation (2) [48]:
$$V(x, y) = \sum_{i=1}^{m} GC_i(x, y) \times 2^{\,m-i} \tag{2}$$
where $V$ represents the decimal code value of the Gray code; $m$ is the total number of Gray code patterns projected; and $GC_i$ is the binarized value of the $i$-th Gray code image captured by the camera.
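In practice, the captured bit stack is converted from Gray code to plain binary with a prefix XOR before the weighted sum of Equation (2) is applied; a minimal NumPy sketch (illustrative, not the authors' implementation):

```python
import numpy as np

def decode_gray_stack(gc: np.ndarray) -> np.ndarray:
    """Decode a stack of m binarized Gray-code images of shape (m, H, W),
    with values in {0, 1}, into the per-pixel decimal stripe index.

    The Gray bits are first converted to plain binary by a prefix XOR
    (b_1 = g_1, b_i = b_{i-1} XOR g_i), then combined with the weights
    2**(m - i) as in Equation (2).
    """
    m = gc.shape[0]
    binary = np.empty_like(gc)
    binary[0] = gc[0]
    for i in range(1, m):
        binary[i] = binary[i - 1] ^ gc[i]
    weights = 2 ** np.arange(m - 1, -1, -1)   # 2**(m-1), ..., 2**0
    return np.tensordot(weights, binary, axes=1)
```

For example, a pixel whose three captured bit planes are all white (Gray code 111) decodes to stripe index 5.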

2.3. Calibration of Structured Light System

To achieve precise 3D object reconstruction, the geometric relationships and internal parameters of the structured light system play a crucial role. This necessitates calibrating the structured light system before undertaking 3D object reconstruction to establish these transformation relationships. Figure 3 illustrates the structured light system’s calibration process, comprising two components: camera calibration and projector calibration.
In structured light systems, the spatial relationship between the camera and projector is crucial for determining the precision and quality of 3D measurements. Increasing the distance between these components typically results in a reduction in depth resolution, impacting the sharpness and accuracy of the 3D data. However, this expanded distance also allows for a wider field of view, enabling the capture of larger areas in a single scan. This benefit, though, is often counterbalanced by a noticeable decrease in the intricacy and detail of the captured data, as the spread of the light pattern over a larger area reduces its distinctiveness and the system’s ability to discern fine details [49,50]. During the photography process in this study, the camera was positioned approximately 1.5 m from the slope surface, while the projector was placed approximately 1.2 m away from it. The distance between the projector and the camera was approximately 0.3 m.

2.3.1. Calibration of the Camera

In an ideal scenario, a pinhole camera adheres to the principles of linear perspective, as illustrated in Figure 4 [51]. The transformation of a point $P$ on the object from coordinates $(X_W, Y_W, Z_W)$ in the world coordinate system to the projected coordinates $(u, v)$ in the pixel coordinate system is defined by Equation (3) [52]:
$$Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 & 0 \\ 0 & f_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} = KW \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} \tag{3}$$
where $Z_C$ represents the scale factor in the linear mapping from three-dimensional to two-dimensional space; $f_x$ and $f_y$ are the normalized focal lengths along the $u$ and $v$ axes, respectively; $u_0$ and $v_0$ are the pixel coordinates of the camera's optical center; $R$ is the rotation matrix and $T$ the translation vector; and $K$ and $W$ denote the intrinsic and extrinsic parameter matrices, respectively.
In this study, the calibration object is a chessboard calibration board of known dimensions. The world coordinate system is established by taking the plane of the calibration board as the O−X^W Y^W plane. All corner points of the chessboard lie on this plane, so their Z coordinates are zero [53]. Since the dimensions of each grid square are known, the world coordinates of every corner point can be computed. Images of the calibration board are captured, and corner detection algorithms are applied to determine the pixel coordinates of each corner point in the image. Figure 5 displays the detection results of these corner points. A system of calibration equations can then be established, as shown in Equation (4):
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \frac{1}{Z_C} \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_1 & r_2 & T \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ 1 \end{bmatrix} = H \begin{bmatrix} X_W \\ Y_W \\ 1 \end{bmatrix} \tag{4}$$
where $r_1$ and $r_2$ are the first and second column vectors of the rotation matrix $R$, respectively, and $H$ is a $3 \times 3$ homography matrix.
The pixel coordinates and world coordinates of the multiple sets of acquired corner points are substituted into the calibration equation set (4). The homography matrix H is then computed using the Singular Value Decomposition (SVD) method. Utilizing the acquired homography matrix H, an overdetermined system of equations is established. To solve the system of equations, images of the chessboard calibration pattern need to be captured from at least three distinct angles. The Levenberg–Marquardt (LM) algorithm is employed to solve the overdetermined system of equations, thereby obtaining the intrinsic and extrinsic parameter matrices of the camera.
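The homography step can be sketched with the standard Direct Linear Transform, solving Ah = 0 by SVD as described above (the LM refinement of the intrinsic parameters is omitted; this is an illustrative sketch, not the authors' code):

```python
import numpy as np

def estimate_homography(world_xy: np.ndarray, pixel_uv: np.ndarray) -> np.ndarray:
    """Direct Linear Transform: solve A h = 0 by SVD for the 3x3
    homography H mapping planar world points (X, Y) to pixels (u, v)."""
    rows = []
    for (X, Y), (u, v) in zip(world_xy, pixel_uv):
        rows.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        rows.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    A = np.asarray(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)          # h is the right singular vector
    H = Vt[-1].reshape(3, 3)             # of the smallest singular value
    return H / H[2, 2]                   # fix the scale ambiguity

# Synthetic check: project corners through a known H, then recover it.
H_true = np.array([[800., 5., 320.], [2., 780., 240.], [0.001, 0.002, 1.]])
world = np.array([[x, y] for x in range(4) for y in range(4)], dtype=float)
proj = np.c_[world, np.ones(16)] @ H_true.T
pix = proj[:, :2] / proj[:, 2:]
H_est = estimate_homography(world, pix)
assert np.allclose(H_est, H_true, atol=1e-6)
```

With noise-free correspondences the recovery is exact up to numerical precision; with real corner detections the overdetermined system is solved in the least-squares sense, which is why images from at least three distinct angles are required.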

2.3.2. Calibration of the Projector

The camera serves as an image capture device, while the projector functions as an image output device. Presently, the prevailing calibration method for projectors treats the projector as a camera model with an opposing working principle [54,55]. In this model, the projector adheres to the same imaging principles as a camera. In this study, a decoding method based on this model is employed to calibrate the projector through the projection of Gray codes. The projection process is depicted in Figure 6, where the Gray code patterns are sequentially projected onto a standard black-and-white chessboard grid.
As illustrated in Figure 7, assume a corner point $P$ on the chessboard calibration plate has coordinates $(X_W, Y_W)$. Its projection on the camera imaging plane is point $P_c$, with pixel coordinates $(u_c, v_c)$; its projection on the projector's projection plane is point $P_p$, with pixel coordinates $(u_p, v_p)$. As described in Section 2.1, Gray code values are uniquely assigned to each pixel of the projected image. By analyzing the position of point $P_c$ within the black and white stripes of the series of Gray code patterns, its Gray code values $(C_u, C_v)$ can be determined, where $C_u$ is the Gray code value along the $u$-axis (horizontal direction) and $C_v$ the value along the $v$-axis (vertical direction). Decoding the Gray code values of point $P_c$ yields the pixel coordinates $(u_p, v_p)$ of point $P_p$. With this method, the coordinates of the chessboard corners in the projector's pixel coordinate system can be determined [56], which in turn enables the projector to be calibrated using standard camera calibration techniques.
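The lookup from camera corner to projector pixel can be sketched as follows, assuming the decoded Gray code images have already been converted to projector column/row indices (`camera_to_projector` is a hypothetical helper; subpixel corners are rounded to the nearest camera pixel for simplicity):

```python
import numpy as np

def camera_to_projector(corner_uv, code_u, code_v):
    """Map a detected chessboard corner from camera pixels to projector
    pixels using the decoded Gray-code images.

    code_u[y, x] holds the projector column index decoded at camera pixel
    (x, y); code_v holds the row index. Subpixel corners are sampled at
    the nearest integer camera pixel (bilinear sampling would refine this).
    """
    x, y = int(round(corner_uv[0])), int(round(corner_uv[1]))
    return code_u[y, x], code_v[y, x]

# Synthetic check: a pure horizontal shift between camera and projector.
H_cam, W_cam, shift = 480, 640, 100
xs, ys = np.meshgrid(np.arange(W_cam), np.arange(H_cam))
code_u = xs + shift          # projector column = camera column + 100
code_v = ys                  # projector row    = camera row
up, vp = camera_to_projector((250.3, 120.8), code_u, code_v)
print(up, vp)                # 350 121
```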

3. Three-Dimensional Modeling and Analysis for Slope Models

In this experiment, a structured light measurement system for the slope modeling consisted of a physical slope model, a camera, a projector, and post-processing software. As illustrated in Figure 8, the collected soil was applied to a rigid platform to create the physical slope model. The platform, constructed from stainless steel, was divided into two sections: the slope section (85 cm × 100 cm × 10 cm) and the base section (40 cm × 100 cm × 10 cm). The white trapezoidal area was the actual modeling area for this experiment. Inclination angles of the slope section were controlled by adjusting the height of the bolts on both sides. The camera and projector were connected to a tripod. The tripod facilitated convenient adjustment of the camera and projector’s height and direction to fulfill the measurement requirements of the structured light system. Table 1 and Table 2 present detailed parameters of the equipment used in this experiment. Figure 9 illustrates the workflow of the slope model test.

3.1. Accuracy Verification Experiment

This section reconstructs the 80 mm radius gypsum sphere depicted in Figure 10a in three dimensions to verify the reconstruction accuracy of the structured light system in this paper. The gypsum sphere serves as an ideal object for three-dimensional reconstruction. Throughout the experimental procedure, the sphere was placed approximately 1.5 m in front of the camera, and the obtained point cloud of the sphere’s surface is presented in Figure 10b. This study employed CloudCompare (v2.12.4) software to perform spherical fitting on the acquired point cloud of the sphere, and the root-mean-square error (RMSE) of the point cloud to the fitted sphere distance was subsequently determined. Concurrently, the radius of the fitted sphere was computed and subsequently compared with the actual size of the gypsum sphere, as shown in Table 3.
The RMSE between the point cloud and the fitted spherical surface in the experimental results is 0.22 mm. This outcome demonstrates that, under relatively ideal reconstruction conditions, the structured light system described in this study is capable of achieving sub-millimeter accuracy.
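The sphere-fitting check can be reproduced with a linear least-squares fit (one common formulation; CloudCompare's internal fitter may differ), sketched here on a synthetic spherical cap of the same 80 mm radius:

```python
import numpy as np

def fit_sphere(pts: np.ndarray):
    """Linear least-squares sphere fit: returns (center, radius).
    Uses |p|^2 = 2 c . p + (r^2 - |c|^2), which is linear in the unknowns."""
    A = np.c_[2 * pts, np.ones(len(pts))]
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

def sphere_rmse(pts, center, radius):
    """RMSE of point-to-fitted-sphere distances."""
    d = np.linalg.norm(pts - center, axis=1) - radius
    return np.sqrt(np.mean(d ** 2))

# Synthetic partial sphere (a cap, as seen from a single camera view).
rng = np.random.default_rng(0)
theta = rng.uniform(0, np.pi / 3, 2000)       # cap opening angle
phi = rng.uniform(0, 2 * np.pi, 2000)
pts = 80.0 * np.c_[np.sin(theta) * np.cos(phi),
                   np.sin(theta) * np.sin(phi),
                   np.cos(theta)] + np.array([10., 20., 1500.])
center, radius = fit_sphere(pts)
print(round(radius, 3))      # 80.0
```

On noise-free data the fit is exact; on measured point clouds the residual RMSE plays the role of the 0.22 mm figure reported above.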
To ascertain the reconstruction accuracy of the structured light system for the slope materials discussed in this article, a three-dimensional scanning of the slope model was carried out using the Free Scan Combo laser scanner produced by Xianlin (Hangzhou, China) Company. This scanner is capable of capturing three-dimensional data with a precision of 0.02 mm. Table 4 details the comprehensive specifications of this equipment. Given the scanner’s ability to scan the slope model from various angles and positions, it effectively mitigates the impact of light shadowing in the 3D model reconstruction process. The reconstruction model obtained from this laser scanner was used as a benchmark to compare with the models reconstructed by the structured light system described in this study. A comparison between two models was conducted by calculating the distance from cloud to cloud between the models.
In this study, CloudCompare software was used to compute cloud-to-cloud distances. Two different algorithms are used in this software: Closest-to-Closest (C2C) distance and Multiscale Model-to-Model Comparison (M3C2). The C2C algorithm identifies the closest points in the reference cloud and calculates the Euclidean distance, allowing for a quicker completion of point cloud comparison. Nevertheless, this algorithm is sensitive to the roughness and presence of outliers in the point cloud, thereby limiting its effectiveness. Conversely, the M3C2 algorithm utilizes the local surface normals of each point’s neighborhood to calculate point cloud variations, effectively mitigating the impact of point cloud roughness and outliers on the comparison results. This algorithm enables the direct detection of changes in complex terrain on the point cloud model.
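For reference, the C2C quantity is simply the nearest-neighbor distance from each compared point to the reference cloud; a brute-force NumPy sketch (CloudCompare accelerates the same computation with an octree):

```python
import numpy as np

def c2c_distances(cloud: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Closest-to-Closest: for each point in `cloud`, the Euclidean
    distance to its nearest neighbor in `reference` (brute force)."""
    # Pairwise differences, shape (len(cloud), len(reference), 3).
    diff = cloud[:, None, :] - reference[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)

ref = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
cld = np.array([[0.1, 0., 0.], [1., 0., 0.5]])
d = c2c_distances(cld, ref)
print(np.round(d, 3))        # [0.1 0.5]
# An RMSE over these distances summarizes the cloud-to-cloud error.
rmse = np.sqrt(np.mean(d ** 2))
```

The sketch also shows why C2C is sensitive to outliers: a single stray point in either cloud directly contributes its full nearest-neighbor distance, whereas M3C2 averages over a local surface neighborhood.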
The slope model in this study was constructed using soil samples collected from the field. The surface of the slope model is relatively rough. Consequently, the M3C2 algorithm was employed to calculate the distance between the two reconstructed point cloud models. The distribution of distance errors for the reconstruction model of the structured light system is presented in Table 5 and Figure 11. The histogram of M3C2 distances exhibits a normal distribution, with 99% of the points having an M3C2 distance of less than 3.61 mm. The reconstruction accuracy of the structured light system discussed in this paper was assessed using the RMSE of the M3C2 distance errors. The RMSE calculated from the M3C2 distance was found to be 1.08 mm.
Figure 12 illustrates the distance errors between the slope model obtained using a laser scanner and the slope model acquired through the structured light system discussed in this paper. Since these two point cloud models are not in the same coordinate system, registration is necessary before any comparison. Coarse registration is required prior to precise alignment. As depicted in Figure 12, two pieces of gravel of approximately 25 mm in diameter were placed on the surface of the slope model, forming two notable protrusions. This gravel was used as feature objects for the coarse registration of the two point cloud models. The obstruction of light by the gravel prevented the structured light from projecting onto their backside and the adjacent areas. This led to missing point cloud data in these regions in the model reconstructed by the structured light system, while the laser scanner model included these data. Consequently, the lack of point cloud data in the gravel regions of the model reconstructed by the structured light system led to the most significant errors. Therefore, the largest positive and negative distance errors occur in the two gravel regions, which are indicated by the deep red areas in Figure 12.
The results of two distinct experiments indicate that different materials impact the reconstruction accuracy of the structured light system discussed in this paper. However, the structured light system described herein is still capable of achieving an approximate accuracy of 1 mm in the three-dimensional reconstruction of static slope models.

3.2. Acquisition of Parameters for Structured Light System

Before initiating image acquisition of the slope model, calibration of the structured light system is necessary. In this study, a projector resolution of 1920 × 1080 pixels is employed. Following the design method outlined in Section 2.1, the Gray code patterns are configured with 11 levels for both horizontal and vertical stripes. In total, 46 Gray code patterns are designed to fulfill the calibration requirements. Calibration results for the structured light system obtained following the calibration steps outlined in Section 2.3 are presented in Table 6.

3.3. Simulation and Deformation Measurement of Slope Model Landslide Hazards

After the calibration of the structured light system, the collection of slope model images was initiated. The basic test conditions for the physical model test in this paper adhere to the standard protocol established by Fang et al. [57], as illustrated in Appendix A Figure A1. In this study, the slope angle was systematically adjusted to 31°, 37°, 43°, and 49° to simulate landslide phenomena in the slope model. After stabilization of the landslide phenomenon on slopes with various inclinations, the slope angle was readjusted to the initial 31° to maintain consistency in the captured image area on the slope surface. Figure 13 illustrates the 3D models of the slope following landslide occurrences at various inclinations.
A comparative analysis of 3D slope models created at various slope angles offers a visual comprehension of the extent of slope variations, aiding in the monitoring of areas prone to disasters.
The M3C2 algorithm was utilized in this paper to compute the point cloud variations of neighboring slope models, as illustrated in Figure 14.
In Figure 14a, as the slope gradient increases, the elevation of the slope surface exhibits undulations, which are highlighted using a gradient color scheme representing positive and negative values. Between 31° and 37°, the slope surface undergoes a minor descent, resulting in an overall height variation of approximately 2 mm. At the summit of the slope, a small red region is observed, indicating a protrusion in the slope surface. This protrusion is caused by the sliding and accumulation of large soil particles from the area above the photographed region.
From 37° to 43°, the slope surface experiences a substantial landslide. At the slope base, significant soil accumulation results from the sliding in the upper blue area, as depicted in the red zone in Figure 14b. This portion of the slope surface bulges, with a maximum height difference of 30 mm.
From 43° to 49°, with the increasing slope gradient, a large-scale destructive landslide manifests on the slope surface, as illustrated in Figure 14c. The soil in the upper blue region of the slope has experienced considerable slippage, resulting in a maximum height difference of 52 mm. At the base of the slope, a substantial accumulation of soil has transpired due to preceding landslides. Consequently, the soil sliding in the upper section of the slope surface is impeded by the uplifted part of the slope toe, leading to its retention in the middle section and the formation of a red bulging area.
The acquired 3D point cloud data allow us to obtain cross-sections of any part of the slope model. Figure 15 depicts the cross-sectional plane of four slope models with different slopes at the red line in Figure 8. The figure allows for the observation of the general trend of landslides. Slopes ranging from 31° to 37° display minimal deformation, with overlapping slope curves. The 43° slope demonstrates an overall downward shift in comparison to the 37° slope, with the maximum height difference occurring at the slope toe. The 49° slope displays the most extensive landslide, characterized by the most pronounced trend. These findings align with the observations made at the slope model test site.
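Extracting such a cross-section from the point cloud amounts to selecting points near a cutting plane and ordering them along the profile; a minimal sketch on a synthetic planar slope (tolerance and units are illustrative, with coordinates in mm):

```python
import numpy as np

def cross_section(points: np.ndarray, y0: float, tol: float = 2.0) -> np.ndarray:
    """Profile of a point cloud along the plane y = y0: keep points
    within `tol` of the plane and sort them by x to form a surface curve."""
    sel = points[np.abs(points[:, 1] - y0) < tol]
    return sel[np.argsort(sel[:, 0])]

# Synthetic slope surface: height z falls off linearly with x.
xs, ys = np.meshgrid(np.linspace(0, 850, 86), np.linspace(0, 1000, 101))
zs = 500 - 0.5 * xs
cloud = np.c_[xs.ravel(), ys.ravel(), zs.ravel()]
profile = cross_section(cloud, y0=500.0, tol=5.0)
print(len(profile))          # 86
```

Overlaying such profiles from the models at each slope angle reproduces the comparison of Figure 15.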

3.4. Comparison of the Landslide Prediction Method of Structured Light with the Landslide Prediction Method of the Laser

Both the structured light landslide prediction method and the laser landslide prediction method are based on analyzing surface changes of slopes to determine whether a specific area on the slope has a propensity for landslides. In this section, three-dimensional reconstructions of slope models with 37° and 43° inclination angles were performed using the structured light system discussed in this paper and the Free Scan Combo model laser scanner. The M3C2 algorithm was applied to calculate the variations between the 37° and 43° point cloud models reconstructed by the structured light system. The same algorithm was also used to calculate the variations between the 37° and 43° point cloud models reconstructed by the laser scanner. The result of point cloud changes is depicted in Figure 16. Due to the impact of gravel markers on the analysis, the point cloud changes in the laser-reconstructed models were calculated both with and without the gravel marker data, as shown in Figure 16b,c. In Figure 16a, the point cloud model reconstructed by the structured light system is missing some data due to the obstruction of light by the gravel markers. For Figure 16a,c, which both exclude gravel marker data, the slope surface changes in the models are essentially consistent, and the maximum subsidence and maximum uplift distance errors are less than 1 mm. In Figure 16a,b, although the slope surface changes in the models are generally consistent, there is a deviation in the maximum settlement distance attributable to the influence of the gravel data.
The experimental results indicate that the slope changes obtained using the structured light landslide prediction method described in this paper are largely consistent with those obtained using the laser method. Because laser scanning captures point cloud data from a wider range of perspectives, the laser method reveals more detailed variations in the slope surface. However, acquiring the slope surface changes of the models in this paper takes about 15 min with the laser method, while the structured light method requires only 8 min, giving the structured light method a clear advantage in prediction time.

4. Discussion and Conclusions

In the experiments conducted for this study, three-dimensional modeling of the slope model was performed after natural landsliding at each gradient had ceased and the shape of the slope surface had stabilized. This paper therefore focuses on static measurement of the slope model before and after deformation. The Gray code method employed in this research is simple and robust, making it effective for three-dimensional reconstruction of static slope surfaces. However, because each image pixel must be uniquely encoded by projecting numerous patterns, the Gray code method is less suitable for real-time, high-speed 3D modeling. Several methods now achieve dynamic 3D reconstruction with structured light by altering the encoding of the projected patterns [58,59]. In future slope model tests, the structured light system discussed in this paper could be integrated with these dynamic modeling technologies to study dynamic landslides.
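The Gray code encoding mentioned above assigns each projector column a bit sequence in which adjacent columns differ in exactly one bit, which makes decoding robust to single-stripe errors. A minimal sketch of pattern generation and decoding follows; the projector width of 1024 columns is illustrative:

```python
import numpy as np

def to_gray(n):
    """Binary-reflected Gray code of integer n (also works elementwise
    on NumPy integer arrays)."""
    return n ^ (n >> 1)

def from_gray(g):
    """Invert the Gray code by cascading XORs."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def gray_code_patterns(width):
    """Stack of binary stripe patterns (one row per pattern) that
    uniquely encodes every projector column; bit k of the Gray code of
    the column index gives the stripe value in pattern k (MSB first)."""
    n_bits = max(1, int(np.ceil(np.log2(width))))
    gray = to_gray(np.arange(width))
    return np.array([(gray >> k) & 1 for k in range(n_bits - 1, -1, -1)])

patterns = gray_code_patterns(1024)   # 10 patterns encode 1024 columns
# Decode column 300 from its captured bit sequence (MSB first).
bits = patterns[:, 300]
g = int("".join(map(str, bits)), 2)
print(from_gray(g))  # 300
```

The number of projected patterns grows as log2 of the projector width, which is why many frames are needed per reconstruction and why the method suits static rather than high-speed scenes.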
The structured light 3D reconstruction method presented in this paper is primarily applied to indoor slope model monitoring, where environmental factors can be controlled to avoid degrading the reconstruction. When employing structured light outdoors, the influence of ambient light on the projected patterns must be considered. Existing outdoor structured light approaches include using invisible (infrared) light as the projection source to diminish the effect of daylight, and adopting laser projection to enhance the contrast of the structured light patterns [60,61]. In subsequent tests, the light source of the structured light system in this study could be changed accordingly, enabling applications under stronger illumination.
In this study, a method based on a structured light 3D reconstruction system is presented to measure the surfaces of slope models at different inclinations and to obtain their deformation from the three-dimensional point cloud models. The method achieves sub-millimeter accuracy in 3D reconstruction. A color-mapped deformation map is derived from the slope point cloud model; from the regions corresponding to different colors, the deformation of the slope models can be read off, helping researchers identify the positions and severity of hazards such as collapse and uplift in model tests. Another notable advantage of this method is its low cost and convenient operation. The cost of the structured light system consists of two parts: (a) hardware, including the projector and camera, and (b) processing software. With the spread of digital technology, low-cost digital products have become widely available; the projector and camera need only meet basic projection and imaging requirements, so inexpensive second-hand smartphones and simple projection devices can be used to assemble the structured light system described here. The point cloud processing software used in this experiment, CloudCompare, is free. Overall, a relatively low cost is sufficient to build the structured light system used in this experiment.
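A color-mapped deformation map of the kind described above can be produced by mapping each point's signed change to a color. The helper below is a hypothetical illustration, not the paper's actual colormap (CloudCompare supplies its own scalar-field colormaps); the blue-white-red convention and the saturation limit are assumptions:

```python
import numpy as np

def deformation_colors(values, vmax=None):
    """Map signed deformation values to RGB: blue for subsidence
    (negative), red for uplift (positive), white near zero.
    `vmax` sets the value at which the color saturates."""
    v = np.asarray(values, dtype=float)
    if vmax is None:
        vmax = float(np.nanmax(np.abs(v))) or 1.0
    t = np.clip(v / vmax, -1.0, 1.0)
    r = np.where(t >= 0, 1.0, 1.0 + t)   # fades toward blue for t < 0
    b = np.where(t <= 0, 1.0, 1.0 - t)   # fades toward red for t > 0
    g = 1.0 - np.abs(t)
    return np.stack([r, g, b], axis=-1)

# -3 mm subsidence -> pure blue; 0 -> white; +1.5 mm uplift -> light red.
colors = deformation_colors(np.array([-3.0, 0.0, 1.5]), vmax=3.0)
```

Assigning these RGB triples back to the point cloud yields a map in which color regions directly indicate where the slope has collapsed or uplifted and by how much.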

Author Contributions

Conceptualization, Z.T. and C.Z.; methodology, C.Z. and K.F.; software, Z.C. and K.F.; resources, C.Z. and W.X.; data curation, Z.C.; writing—original draft preparation, Z.C.; writing—review and editing, C.Z., K.F. and W.X.; supervision, C.Z.; project administration, W.X.; funding acquisition, C.Z. and Z.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (Grant No. 12162017) and the Yunnan Fundamental Research Projects (Grant Nos. 202101AU070032, 202301AT070394, 202301BE070001-037).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Figure A1. Summarized protocol for the physical model test.

References

1. Osasan, K.S.; Afeni, T.B. Review of surface mine slope monitoring techniques. J. Min. Sci. 2010, 46, 177–186.
2. Ohnishi, Y.; Nishiyama, S.; Yano, T.; Matsuyama, H.; Amano, K. A study of the application of digital photogrammetry to slope monitoring systems. Int. J. Rock Mech. Min. Sci. 2006, 43, 756–766.
3. Guo, Z.; Zhou, M.; Huang, Y.; Pu, J.; Zhou, S.; Fu, B.; Aydin, A. Monitoring performance of slopes via ambient seismic noise recordings: Case study in a colluvium deposit. Eng. Geol. 2023, 324, 107268.
4. Take, W.A.; Bolton, M.D.; Wong, P.C.P.; Yeung, F.J. Evaluation of landslide triggering mechanisms in model fill slopes. Landslides 2004, 1, 173–184.
5. Wang, G.; Sassa, K. Pore-pressure generation and movement of rainfall-induced landslides: Effects of grain size and fine-particle content. Eng. Geol. 2003, 69, 109–125.
6. Zhu, C.; He, M.; Karakus, M.; Cui, X.; Tao, Z. Investigating toppling failure mechanism of anti-dip layered slope due to excavation by physical modelling. Rock Mech. Rock Eng. 2020, 53, 5029–5050.
7. Rouainia, M.; Davies, O.; O’Brien, T.; Glendinning, S. Numerical modelling of climate effects on slope stability. Proc. Inst. Civ. Eng.-Eng. Sustain. 2009, 162, 81–89.
8. Lee, L.M.; Kassim, A.; Gofar, N. Performances of two instrumented laboratory models for the study of rainfall infiltration into unsaturated soils. Eng. Geol. 2011, 117, 78–89.
9. Xu, J.; Ueda, K.; Uzuoka, R. Evaluation of failure of slopes with shaking-induced cracks in response to rainfall. Landslides 2022, 19, 119–136.
10. Zarrabi, M.; Eslami, A. Behavior of piles under different installation effects by physical modeling. Int. J. Geomech. 2016, 16, 04016014.
11. Ahmadi, M.; Moosavi, M.; Jafari, M.K. Experimental investigation of reverse fault rupture propagation through wet granular soil. Eng. Geol. 2018, 239, 229–240.
12. Zhang, B.; Fang, K.; Tang, H.; Sumi, S.; Ding, B. Block-flexure toppling in an anaclinal rock slope based on multi-field monitoring. Eng. Geol. 2023, 327, 107340.
13. Huang, C.C.; Lo, C.L.; Jang, J.S.; Hwu, L.K. Internal soil moisture response to rainfall-induced slope failures and debris discharge. Eng. Geol. 2008, 101, 134–145.
14. Hu, X.; Zhou, C.; Xu, C.; Liu, D.; Wu, S.; Li, L. Model tests of the response of landslide-stabilizing piles to piles with different stiffness. Landslides 2019, 16, 2187–2200.
15. Huang, Y.; Xu, X.; Liu, J.; Mao, W. Centrifuge modeling of seismic response and failure mode of a slope reinforced by a pile-anchor structure. Soil Dyn. Earthq. Eng. 2020, 131, 106037.
16. Pipatpongsa, T.; Fang, K.; Leelasukseree, C.; Chaiwan, A. Stability analysis of laterally confined slope lying on inclined bedding plane. Landslides 2022, 19, 1861–1879.
17. Yin, Y.; Deng, Q.; Li, W.; He, K.; Wang, Z.; Li, H.; An, P.; Fang, K. Insight into the crack characteristics and mechanisms of retrogressive slope failures: A large-scale model test. Eng. Geol. 2023, 327, 107360.
18. Wang, C.; Wang, H.; Qin, W.; Wei, S.; Tian, H.; Fang, K. Behaviour of pile-anchor reinforced landslides under varying water level, rainfall, and thrust load: Insight from physical modelling. Eng. Geol. 2023, 325, 107293.
19. Chang, Z.; Huang, F.; Huang, J.; Jiang, S.-H.; Zhou, C.; Zhu, L. Experimental study of the failure mode and mechanism of loess fill slopes induced by rainfall. Eng. Geol. 2021, 280, 105941.
20. Fang, K.; Miao, M.; Tang, H.; Jia, S.; Dong, A.; An, P.; Zhang, B. Insights into the deformation and failure characteristic of a slope due to excavation through multi-field monitoring: A model test. Acta Geotech. 2023, 18, 1001–1024.
21. Baba, H.O.; Peth, S. Large scale soil box test to investigate soil deformation and creep movement on slopes by Particle Image Velocimetry (PIV). Soil Tillage Res. 2012, 125, 38–43.
22. Sarkhani Benemaran, R.; Esmaeili-Falak, M.; Katebi, H. Physical and numerical modelling of pile-stabilised saturated layered slopes. Proc. Inst. Civ. Eng.-Geotech. Eng. 2022, 175, 523–538.
23. Zhang, D.; Zhang, Y.; Cheng, T.; Meng, Y.; Fang, K.; Garg, A.; Garg, A. Measurement of displacement for open pit to underground mining transition using digital photogrammetry. Measurement 2017, 109, 187–199.
24. Liu, D.; Hu, X.; Zhou, C.; Xu, C.; He, C.; Zhang, H.; Wang, Q. Deformation mechanisms and evolution of a pile-reinforced landslide under long-term reservoir operation. Eng. Geol. 2020, 275, 105747.
25. Abellán, A.; Oppikofer, T.; Jaboyedoff, M.; Rosser, N.J.; Lim, M.; Lato, M.J. Terrestrial laser scanning of rock slope instabilities. Earth Surf. Process. Landf. 2014, 39, 80–97.
26. Francioni, M.; Salvini, R.; Stead, D.; Coggan, J. Improvements in the integration of remote sensing and rock slope modelling. Nat. Hazards 2018, 90, 975–1004.
27. Huo, G.; Wu, Z.; Li, J.; Li, S. Underwater target detection and 3D reconstruction system based on binocular vision. Sensors 2018, 18, 3570.
28. Tian, X.; Liu, R.; Wang, Z.; Ma, J. High quality 3D reconstruction based on fusion of polarization imaging and binocular stereo vision. Inf. Fusion 2022, 77, 19–28.
29. Woodham, R.J. Photometric method for determining surface orientation from multiple images. Opt. Eng. 1980, 19, 139–144.
30. Barsky, S.; Petrou, M. The 4-source photometric stereo technique for three-dimensional surfaces in the presence of highlights and shadows. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 1239–1252.
31. Zhou, P.; Zhu, J.; Jing, H. Optical 3-D surface reconstruction with color binary speckle pattern encoding. Opt. Express 2018, 26, 3452–3465.
32. Zhang, Q.; Xu, R.; Liu, Y.; Chen, Z. 3D shape measurement based on digital speckle projection and spatio-temporal correlation. Proceedings 2018, 2, 552.
33. Yin, X.; Wang, G.; Shi, C.; Liao, Q. Efficient active depth sensing by laser speckle projection system. Opt. Eng. 2014, 53, 013105.
34. Jiang, J.; Cheng, J.; Zhao, H. Stereo matching based on random speckle projection for dynamic 3D sensing. In Proceedings of the 2012 11th International Conference on Machine Learning and Applications, Boca Raton, FL, USA, 12–15 December 2012; Volume 1, pp. 191–196.
35. Gühring, J. Dense 3D surface acquisition by structured light using off-the-shelf components. In Proceedings of the Videometrics and Optical Methods for 3D Shape Measurement, San Jose, CA, USA, 22 December 2000; Volume 4309, pp. 220–231.
36. Ettl, S.; Arold, O.; Yang, Z.; Häusler, G. Flying triangulation—An optical 3D sensor for the motion-robust acquisition of complex objects. Appl. Opt. 2012, 51, 281–289.
37. Sato, K. Range-image system utilizing nematic liquid crystal mask. In Proceedings of the 1st International Conference on Computer Vision (ICCV), London, UK, 8–11 June 1987; pp. 657–661.
38. Ishii, I.; Yamamoto, K.; Doi, K.; Tsuji, T. High-speed 3D image acquisition using coded structured light projection. In Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007; pp. 925–930.
39. Valkenburg, R.J.; McIvor, A.M. Accurate 3D measurement using a structured light system. Image Vis. Comput. 1998, 16, 99–110.
40. Su, X.; Chen, W. Fourier transform profilometry: A review. Opt. Lasers Eng. 2001, 35, 263–284.
41. Kemao, Q. Windowed Fourier transform for fringe pattern analysis. Appl. Opt. 2004, 43, 2695–2702.
42. Zuo, C.; Feng, S.; Huang, L.; Tao, T.; Yin, W.; Chen, Q. Phase shifting algorithms for fringe projection profilometry: A review. Opt. Lasers Eng. 2018, 109, 23–59.
43. Zhang, Y.; Xiong, Z.; Wu, F. Unambiguous 3D measurement from speckle-embedded fringe. Appl. Opt. 2013, 52, 7797–7805.
44. Ren, M.; Wang, X.; Xiao, G.; Chen, M.; Fu, L. Fast defect inspection based on data-driven photometric stereo. IEEE Trans. Instrum. Meas. 2018, 68, 1148–1156.
45. Ju, Y.; Jian, M.; Wang, C.; Zhang, C.; Dong, J.; Lam, K.-M. Estimating high-resolution surface normals via low-resolution photometric stereo images. IEEE Trans. Circuits Syst. Video Technol. 2023, early access.
46. Chen, G.; Han, K.; Wong, K.Y.K. PS-FCN: A flexible learning framework for photometric stereo. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–18.
47. Trobina, M. Error Model of a Coded-Light Range Sensor; Technical Report; Communication Technology Laboratory, ETH Zentrum: Zurich, Switzerland, 1995.
48. Inokuchi, S. Range imaging system for 3-D object recognition. In Proceedings of the International Conference on Pattern Recognition (ICPR), Montreal, QC, Canada, 30 July–2 August 1984.
49. Nie, L.; Ye, Y.; Song, Z. Method for calibration accuracy improvement of projector-camera-based structured light system. Opt. Eng. 2017, 56, 074101.
50. Chen, C.; Gao, N.; Zhang, Z. Simple calibration method for dual-camera structured light system. J. Eur. Opt. Soc.-Rapid Publ. 2018, 14, 23.
51. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2003.
52. Yilmaz, Ö.; Karakuş, F. Stereo and KinectFusion for continuous 3D reconstruction and visual odometry. Turk. J. Electr. Eng. Comput. Sci. 2016, 24, 2756–2770.
53. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
54. Falcao, G.; Hurtos, N.; Massich, J. Plane-based calibration of a projector-camera system. VIBOT Master 2008, 9, 1–12.
55. Zhang, S.; Huang, P.S. Novel method for structured light system calibration. Opt. Eng. 2006, 45, 083601.
56. Moreno, D.; Taubin, G. Simple, accurate, and robust projector-camera calibration. In Proceedings of the 2012 Second International Conference on 3D Imaging, Modeling, Processing, Visualization & Transmission, Zurich, Switzerland, 13–15 October 2012; pp. 464–471.
57. Fang, K.; Tang, H.; Li, C.; Su, X.; An, P.; Sun, S. Centrifuge modelling of landslides and landslide hazard mitigation: A review. Geosci. Front. 2023, 14, 101493.
58. Bell, T.; Li, B.; Zhang, S. Structured light techniques and applications. In Wiley Encyclopedia of Electrical and Electronics Engineering; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 1999; pp. 1–24.
59. Zhang, S. High-speed 3D shape measurement with structured light methods: A review. Opt. Lasers Eng. 2018, 106, 119–131.
60. Fofi, D.; Sliwa, T.; Voisin, Y. A comparative survey on invisible structured light. In Proceedings of the Machine Vision Applications in Industrial Inspection XII, San Jose, CA, USA, 3 May 2004; Volume 5303, pp. 90–98.
61. Schaffer, M.; Große, M.; Harendt, B.; Kowarschik, R. Outdoor three-dimensional shape measurements using laser-based structured illumination. Opt. Eng. 2012, 51, 090503.
Figure 1. The schematic diagram of the structured light three-dimensional imaging system.
Figure 2. Structured light 3D reconstruction process based on Gray code.
Figure 3. Calibration process of structured light systems.
Figure 4. Pinhole camera model. (O−X^W Y^W Z^W represents the world coordinate system; O−X^C Y^C Z^C denotes the camera coordinate system; O−xy is defined as the image coordinate system; O−uv signifies the pixel coordinate system. The red line illustrates the correspondence between point P and point P′.)
Figure 5. Recognition results of checkerboard grid corner points. (The red circles denote the positions of the identified corner points.)
Figure 6. The projection stage in the calibration process of the structured light system.
Figure 7. The corresponding positions of the chessboard grid corners on the camera imaging plane and the projector's projection plane. (The solid line arrows indicate the positions of the corresponding points.)
Figure 8. Structured light measurement system for slope modeling.
Figure 9. Workflow of the slope model test.
Figure 10. (a) Photograph of the gypsum sphere; (b) 3D model of the gypsum sphere surface.
Figure 11. Histogram of the distribution of distance errors, calculated with the M3C2 algorithm, between the slope model reconstructed by the structured light system and the slope model reconstructed by laser scanning.
Figure 12. The distribution of distance errors in the slope model reconstructed by the structured light system.
Figure 13. Three-dimensional models of slope failure at different slope angles.
Figure 14. The point cloud variations of adjacent slope models: (a) slope change (α = 31°~37°); (b) slope change (α = 37°~43°); (c) slope change (α = 43°~49°).
Figure 15. Cross-sections of the slope models at different slope angles.
Figure 16. Comparative analysis of slope surface changes in models using the structured light and laser landslide prediction methods: (a) the structured light landslide prediction method; (b) the laser landslide prediction method (including gravel marker data); (c) the laser landslide prediction method (gravel marker data removed).
Table 1. Specifications of the camera in this study.
Manufacturer: DAHENG (Beijing, China); Model: HF2514V-2; Sensor Type: CCD; Sensor Size: 1.1 in.; Image Format: 4096 × 3000; Focal Length: 25 mm; Aperture: f/1.4.
Table 2. Specifications of the projector in this study.
Manufacturer: XGIMI (Beijing, China); Model: H3S; Standard Resolution: 1920 × 1080; Luminance: 2200 ANSI lm; Contrast: 1000:1.
Table 3. The error analysis of the gypsum sphere's 3D reconstruction result (unit: mm).
Type: Sphere (r = 80); Fit Radius: 80.13; Absolute Error: 0.13; RMSE: 0.22.
Table 4. Specifications of the laser scanner in this study.
Manufacturer: XIANLIN (Hangzhou, China); Model: FreeScan Combo; Dimensions: 193 × 63 × 53 mm; Scanning Area: 1000 × 800 mm; Accuracy: 0.02 mm; Scan Speed: 3,500,000 scans/s; Working Distance: 300 mm.
Table 5. Distance error of the structured light system reconstruction model in the test.
Type: M3C2; 99%: 3.61 mm; RMSE: 1.08.
Table 6. Calibration parameters of the structured light system in this study.

Camera, internal reference matrix:
$A_c = \begin{bmatrix} f_x^c & 0 & u_0^c \\ 0 & f_y^c & v_0^c \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 7623.55 & 0 & 2064.47 \\ 0 & 7592.81 & 1390.26 \\ 0 & 0 & 1 \end{bmatrix}$

Camera, aberration coefficients:
$[k_1^c, k_2^c, p_1^c, p_2^c, k_3^c] = [0.249, 23.022, 0.002, 0.003, 0]$

Projector, internal reference matrix:
$A_p = \begin{bmatrix} f_x^p & 0 & u_0^p \\ 0 & f_y^p & v_0^p \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 586.27 & 0 & 495.58 \\ 0 & 587.26 & 400.81 \\ 0 & 0 & 1 \end{bmatrix}$

Projector, aberration coefficients:
$[k_1^p, k_2^p, p_1^p, p_2^p, k_3^p] = [0.073, 0.148, 0.006, 0.001, 0]$

Structured light system, rotation matrix:
$R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} = \begin{bmatrix} 0.957 & 0.014 & 0.288 \\ 0.110 & 0.906 & 0.410 \\ 0.267 & 0.423 & 0.866 \end{bmatrix}$

Structured light system, translation vector:
$T = [t_1, t_2, t_3]^T = [576.746, 428.688, 635.508]^T$
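The calibrated intrinsics in Table 6 plug directly into the pinhole model of Figure 4. The sketch below projects a world point into camera pixel coordinates using the camera's internal reference matrix; for brevity it ignores lens distortion, and the sample point (1 m in front of the camera on its optical axis) is illustrative:

```python
import numpy as np

# Calibrated camera intrinsics from Table 6.
A_c = np.array([[7623.55, 0.0, 2064.47],
                [0.0, 7592.81, 1390.26],
                [0.0, 0.0, 1.0]])

def project(point_w, A, R=np.eye(3), t=np.zeros(3)):
    """Pinhole projection: world point -> pixel coordinates (u, v).
    R and t take the point from the world frame to the camera frame;
    lens distortion is not applied here."""
    p_c = R @ np.asarray(point_w, dtype=float) + t   # world -> camera
    uvw = A @ p_c                                    # camera -> image
    return uvw[:2] / uvw[2]                          # perspective divide

# A point on the optical axis maps to the principal point (u0, v0).
u, v = project([0.0, 0.0, 1000.0], A_c)
print(round(float(u), 2), round(float(v), 2))  # 2064.47 1390.26
```

The same model, with the projector's $A_p$ and the system's $R$ and $T$, describes how projector pixels and camera pixels correspond during triangulation.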
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Chen, Z.; Zhang, C.; Tang, Z.; Fang, K.; Xu, W. Three-Dimensional Reconstruction and Deformation Identification of Slope Models Based on Structured Light Method. Sensors 2024, 24, 794. https://doi.org/10.3390/s24030794


