
CN118941737A - Topographic Survey Method for River Engineering Model Test Based on SFM - Google Patents

Topographic Survey Method for River Engineering Model Test Based on SFM

Info

Publication number
CN118941737A
CN118941737A CN202410936877.5A CN202410936877A CN118941737A CN 118941737 A CN118941737 A CN 118941737A CN 202410936877 A CN202410936877 A CN 202410936877A CN 118941737 A CN118941737 A CN 118941737A
Authority
CN
China
Prior art keywords
sfm
model test
dimensional
terrain
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410936877.5A
Other languages
Chinese (zh)
Inventor
彭国平
甘学超
兰容龙
邓映香
杨志刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangxi College of Application Science and Technology
Original Assignee
Jiangxi College of Application Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangxi College of Application Science and Technology filed Critical Jiangxi College of Application Science and Technology
Priority to CN202410936877.5A priority Critical patent/CN118941737A/en
Publication of CN118941737A publication Critical patent/CN118941737A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C 11/02 Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B 11/2433 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures for measuring outlines by shadow casting
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B 11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B 11/254 Projection of a pattern, viewing through a pattern, e.g. moiré
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T 17/205 Re-meshing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a river engineering model test topography measurement method based on SFM, which comprises the following steps: acquiring an image sequence of the river model test terrain to be measured; extracting and matching feature points of the acquired image sequence with the SFM (Structure from Motion) method and reconstructing a sparse point cloud of the image sequence, then performing dense point cloud reconstruction of the sparse result with the CMVS and PMVS methods to realize three-dimensional reconstruction of the model test terrain; removing non-research-area point cloud data from the reconstructed three-dimensional point cloud data and finely gridding the research area; inserting control points to obtain the actual three-dimensional terrain coordinates of the model test terrain, and interpolating the three-dimensional coordinates onto the bed-surface grid to obtain a three-dimensional river model satisfying the boundary constraint conditions. The remarkable effects are as follows: by combining photogrammetry with computer vision technology, the three-dimensional topography can be reconstructed quickly, efficiently and accurately; compared with single-point measurement with a laser range finder, the method is simple and convenient.

Description

River model test terrain measurement method based on SFM
Technical Field
The invention relates to the technical field of three-dimensional riverbed terrain measurement, in particular to a river model test terrain measurement method based on SFM.
Background
Riverbed scour and deposition evolution caused by water and sediment movement is a phenomenon common in nature. It often causes practical engineering problems such as river channel siltation, river bank deformation, coastline retreat, and reservoir siltation with loss of storage capacity, and it is closely related to the design, construction and operation of hydraulic engineering works. Studying how to measure the three-dimensional topography of the riverbed accurately and efficiently, and analysing the resulting scour and deposition changes, is therefore of great significance for river model tests and practical engineering applications.
The traditional methods for measuring three-dimensional riverbed topography mainly rely on tools such as measuring needles, steel rulers, levels, theodolites and total stations. However, these methods are single-point measurements, cannot reflect the topography of the entire research area, and suffer from limitations such as heavy workload and low efficiency.
In addition, novel topography measuring methods such as photoelectric reflection topography instruments, resistance-type topography instruments, tracking topography instruments, ultrasonic topography instruments, laser scanners and close-range photogrammetry are increasingly available. With the rapid development of photogrammetry and computer vision, three-dimensional terrain reconstruction from camera images has become a feasible approach for studying the terrain of river model experiments. In prior work on gully erosion, comprehensive comparisons of SFM with other measurement methods such as total stations and laser profilers showed that SFM has advantages in cost and time. The effectiveness of SFM has been demonstrated in land mapping, and many studies have confirmed its practicality in the laboratory, with measurement accuracy similar to other terrain measurement technologies such as lidar. However, how to combine image-based three-dimensional reconstruction with riverbed topography measurement in river model tests remains an unresolved scientific problem.
Disclosure of Invention
Aiming at the defects of the prior art, the invention aims to provide a river model test terrain measurement method based on SFM, which combines SFM (Structure from Motion) three-dimensional terrain reconstruction with data analysis, applies it to the study of scour terrain evolution, performs grid interpolation on the three-dimensional terrain point cloud, accurately and efficiently acquires relevant parameters such as riverbed elevation, and thus completes the terrain measurement.
In order to achieve the above purpose, the invention adopts the following technical scheme:
The river model test terrain measurement method based on SFM is characterized by comprising the following steps:
step 1, acquiring an image sequence of a river model test terrain to be tested;
Step 2, extracting and matching characteristic points of the obtained image sequence by adopting an SFM method, reconstructing sparse point cloud of the image sequence, and reconstructing dense point cloud of the sparse reconstruction result by adopting CMVS and PMVS methods to realize three-dimensional reconstruction of the river model test terrain to be tested;
Step 3, carrying out non-research area point cloud data rejection on the three-dimensional point cloud data obtained through reconstruction, and carrying out fine grid division on the research area;
And 4, inserting control points to obtain actual terrain three-dimensional coordinates of the experimental terrain of the river model to be tested, and interpolating the three-dimensional coordinates onto the bed surface grid to obtain the three-dimensional river model meeting the boundary constraint condition.
Further, when the image sequence is acquired in the step 1, a plurality of calibration points are respectively arranged on two sides of the test terrain of the river model to be tested.
Further, the calibration points are black-and-white checkerboard blocks.
Further, in the image sequence obtained in the step 1, the overlapping degree of the images is not less than 80%, and the total number of the images is not less than 200.
Further, in step 1, an image sequence is acquired by using a smart phone or a digital camera.
Further, in step 2, feature point extraction and matching are performed on the acquired image sequence using the SFM method, and the specific steps for realizing the three-dimensional reconstruction of the river model test terrain to be measured are as follows:
Step 2.1, inputting an acquired image sequence, and detecting and extracting feature points of each image in the image sequence by adopting a SIFT operator and a SURF operator;
Step 2.2, matching the extracted feature points on different images using a KD-tree nearest-neighbor search algorithm to obtain homonymous feature points;
Step 2.3, calculating external camera coordinates, direction angles and space three-dimensional coordinates according to the two-dimensional coordinates of the camera calibration points and the related geometric constraint relation equation;
Step 2.4, reconstructing the sparse point cloud of the images, and expanding the sparse point cloud through bundle adjustment;
Step 2.5, clustering the sparsely reconstructed point cloud data with the CMVS method, then generating a dense point cloud with the PMVS method through matching, diffusion and filtering under the constraints of local photometric consistency and global visibility, thereby completing the dense point cloud reconstruction and realizing the three-dimensional reconstruction of the river model test terrain to be measured.
Further, before feature point matching in step 2.2, the number of feature points of each image is compared against a preset threshold; images whose feature point count is greater than the preset threshold undergo feature point matching, and images whose feature point count is less than the preset threshold are deleted.
Further, the calculation formula for calculating the external camera coordinates in step 2.3 is as follows:

$$ m\,q = A\,[R \ \ T]\,P, \qquad A = \begin{bmatrix} \alpha & c & u_0 \\ 0 & \beta & v_0 \\ 0 & 0 & 1 \end{bmatrix} $$

where A is the intrinsic parameter matrix of the camera, m is a scale correction factor, q = (u, v, 1)^T are the coordinates in the image coordinate system, (u_0, v_0) are the coordinates of the projection center on the image plane, (α, β) are the scale parameters in the u and v directions respectively, c is the pixel distortion parameter, P are the terrain coordinates in the world coordinate system, and R and T are the extrinsic parameters of the camera.
Further, in step 4, the interpolation result expression for interpolating the three-dimensional coordinates onto the bed-surface grid is as follows:

$$ z_0 = \sum_{i=1}^{n} w_i z_i, \qquad w_i = \frac{d_i^{-\alpha}}{\sum_{j=1}^{n} d_j^{-\alpha}}, \qquad d_i = \sqrt{(x_0 - x_i)^2 + (y_0 - y_i)^2} $$

where (x_0, y_0, z_0) denotes the point to be interpolated, (x_i, y_i, z_i) is the i-th of the points nearest to the point to be interpolated (x_0, y_0, z_0), w_i is the weight, d_i is the distance between the two points, α is an adjustment parameter, i = 1, 2, …, n, and n is the number of points nearest to the point to be interpolated (x_0, y_0, z_0).
Further, the boundary constraint condition includes:
Continuity constraint: the interpolation results have no discontinuous jumps at the boundaries and throughout the area;
smoothness constraint: the curve or surface of the interpolation result is smooth and free of sharp corners or sharp fluctuations.
The invention has the remarkable effects that:
The invention can reconstruct three-dimensional terrain rapidly, efficiently and accurately by combining photogrammetry with computer vision technology;
Point cloud data of non-research areas are screened out and removed from the large amount of dense point cloud data derived from the photographs, which ensures the accuracy of the three-dimensional point cloud reconstruction;
The research area is finely gridded with a physical grid size of 1 mm × 1 mm. On this basis, the actual three-dimensional terrain coordinates are obtained by inserting control points and are interpolated onto the bed-surface grid to obtain the three-dimensional riverbed terrain, so that the change of the whole bed surface can be obtained accurately by subtracting the surfaces before and after scouring; compared with single-point measurement with a laser range finder, the method is simple and convenient.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic flow chart of a three-dimensional reconstruction of SFM;
FIG. 3 is a schematic flow chart of three-dimensional reconstruction by VisualSFM platform;
FIG. 4 is a schematic diagram of the VisualSFM platform inputting an image;
FIG. 5 is a diagram of VisualSFM platform feature point extraction and matching processes;
FIG. 6 is a sparse reconstruction of the VisualSFM platform;
FIG. 7 is a dense reconstruction view of the VisualSFM platform;
fig. 8 is a dense point cloud data diagram in the present embodiment;
fig. 9 is a point cloud reconstruction diagram of the present embodiment;
Fig. 10 is a grid interpolation diagram of the present embodiment;
FIG. 11 is a graph of calculated versus laser measurements for the method of the present invention.
Detailed Description
The following describes the embodiments and working principles of the present invention in further detail with reference to the drawings.
As shown in fig. 1, the embodiment provides a river model test terrain measurement method based on SFM, which specifically comprises the following steps:
step 1, acquiring an image sequence of a river model test terrain to be tested by adopting a smart phone or a digital camera;
When the image sequence is acquired, a plurality of calibration points are arranged on both sides of the river model test terrain to be measured; the calibration points are black-and-white checkerboard blocks. Using black-and-white checkerboard blocks as control targets enhances their recognition by the image acquisition device and improves the accuracy of point cloud reconstruction and its evaluation.
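As a concrete illustration of how such checkerboard calibration targets can be located in the images, the following sketch uses OpenCV's corner detector; the tooling, the 7×7 inner-corner pattern size and the file name are assumptions for illustration, not details specified by the patent.

```python
import cv2

PATTERN_SIZE = (7, 7)  # assumed number of inner corners of the checkerboard block

def find_calibration_corners(image_path: str):
    """Return sub-pixel corner coordinates of the checkerboard block, or None."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN_SIZE)
    if not found:
        return None
    # Refine corner positions to sub-pixel accuracy so the control points are sharp.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    return corners.reshape(-1, 2)

corners = find_calibration_corners("flume_view_001.jpg")  # hypothetical file name
```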
In the acquired image sequence, the overlapping degree of the images is not less than 80%, and the total number of the images is not less than 200.
In the image acquisition process, the higher the image resolution, the more image information can be captured, and maximizing the depth of field improves the accuracy of the photographic measurement. During reconstruction of the three-dimensional model, the direction angle and coordinates of each photo comprise the spatial attitude parameters of the camera corresponding to that photo.
Step 2, extracting and matching characteristic points of the obtained image sequence by adopting an SFM method, reconstructing sparse point cloud of the image sequence, and reconstructing dense point cloud of the sparse reconstruction result by adopting CMVS and PMVS methods to realize three-dimensional reconstruction of the river model test terrain to be tested;
The SFM method can fully automatically reconstruct a three-dimensional scene from two-dimensional images and simultaneously recover the corresponding camera geometry in an arbitrary coordinate system.
A typical SFM workflow includes the following steps: first, identify and match homologous image points in the overlapping photos; second, reconstruct the geometric image acquisition configuration and the three-dimensional coordinates of the matched image points (sparse point cloud) using iterative bundle adjustment (BA); third, perform dense matching of the sparse point cloud based on the reconstructed image geometry network.
SFM methods rely on automated processing tools available in a range of non-commercial and commercial software packages, and have the advantage of being usable at almost any scale. Feature points of each photo are identified and extracted, and feature points on different photos are matched to obtain homonymous points. The external camera coordinates, direction angles and spatial three-dimensional coordinates are then calculated from the two-dimensional coordinates of the camera calibration points and the related geometric constraint equations, for three-dimensional reconstruction of the terrain.
Referring to fig. 2, the specific steps of the SFM method for extracting and matching feature points of the acquired image sequence to realize three-dimensional reconstruction of the experimental terrain of the river model to be tested are as follows:
Step 2.1, inputting the acquired image sequence, and detecting and extracting feature points of each image in the image sequence using the SIFT and SURF operators;
The SIFT (Scale-Invariant Feature Transform) key point detection method is used to identify and extract feature points; the algorithm has the advantage of remaining stable under different shooting conditions.
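A minimal sketch of step 2.1 with OpenCV is given below; only SIFT is shown because SURF is available only in the opencv-contrib build, and the choice of library is an assumption, since the patent does not name a specific tool.

```python
import cv2

def detect_sift_features(image_path: str):
    """Detect SIFT keypoints and 128-dimensional descriptors in one image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return keypoints, descriptors  # descriptors: N x 128 float32 array
```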
Step 2.2, adopting a KD-tree nearest-neighbor search algorithm so that, for a known point, the nearest point in d-dimensional space can be found quickly and efficiently, and matching the extracted feature points on different images to obtain homonymous feature points;
During matching, when the number of matching points between images exceeds a preset threshold, the SFM main program is run; if the number of matching points is below the preset threshold, the images are removed.
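The sketch below illustrates step 2.2 and the threshold rule above, using FLANN's KD-tree index as one common implementation of the nearest-neighbor search; the ratio-test constant 0.7 and the minimum-match threshold of 30 are assumed example values, not figures from the patent.

```python
import cv2

MIN_MATCHES = 30  # assumed preset threshold for keeping an image pair

def match_features(desc_a, desc_b):
    """Match two SIFT descriptor sets with a KD-tree index and Lowe's ratio test."""
    index_params = dict(algorithm=1, trees=5)   # algorithm=1 selects the KD-tree index
    search_params = dict(checks=64)
    flann = cv2.FlannBasedMatcher(index_params, search_params)
    knn = flann.knnMatch(desc_a, desc_b, k=2)
    good = [p[0] for p in knn
            if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
    return good if len(good) >= MIN_MATCHES else []  # drop weak image pairs
```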
Step 2.3, calculating external camera coordinates, direction angles and space three-dimensional coordinates according to the two-dimensional coordinates of the camera calibration points and the related geometric constraint relation equation;
the world coordinate system is a three-dimensional coordinate representing a stereoscopic space of the real world, and a topographic coordinate under the world coordinate system is assumed to be P W=(XW,YW,ZW,1)T.
The image coordinate system is a two-dimensional coordinate transformed by projecting the camera-rendered three-dimensional coordinate onto the screen. Let the coordinates under the image coordinate system be q= (u, v, 1) T.
The basic camera coordinate transformation formula is:

$$ m\,q = A\,[R \ \ T]\,P, \qquad A = \begin{bmatrix} \alpha & c & u_0 \\ 0 & \beta & v_0 \\ 0 & 0 & 1 \end{bmatrix} $$

where A is the intrinsic parameter matrix of the camera, m is a scale correction factor, q = (u, v, 1)^T are the coordinates in the image coordinate system, (u_0, v_0) are the coordinates of the projection center on the image plane, (α, β) are the scale parameters in the u and v directions, c is the pixel distortion (skew) parameter, P are the terrain coordinates in the world coordinate system, and R and T are the extrinsic parameters of the camera. R represents the rotation between the two coordinate systems and contains 3 independent parameters; T represents the translation between the two coordinate systems and contains 3 independent parameters.
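A small numerical sketch of this projection is shown below; the intrinsic values, pose and test point are placeholders chosen only to demonstrate the formula.

```python
import numpy as np

def project_point(P_w, A, R, T):
    """Apply m*q = A [R T] P: project a homogeneous world point to pixel (u, v)."""
    RT = np.hstack([R, T.reshape(3, 1)])  # 3x4 extrinsic matrix [R T]
    mq = A @ RT @ P_w                     # equals m * (u, v, 1)
    return mq[:2] / mq[2]                 # divide out the scale correction factor m

A = np.array([[1200.0,    0.0, 960.0],   # alpha, c, u0  (placeholder intrinsics)
              [   0.0, 1200.0, 540.0],   # beta, v0
              [   0.0,    0.0,   1.0]])
R, T = np.eye(3), np.array([0.0, 0.0, 0.5])
uv = project_point(np.array([0.1, 0.2, 1.0, 1.0]), A, R, T)
```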
Errors arising from the manufacture and assembly of the camera lens produce distortion. When both radial and tangential distortion of the camera are considered, four additional image distortion parameters K_1, K_2, K_3, K_4 are required.
From the above, a total of 16 parameters are considered, namely, 5 internal parameters, 6 external parameters, 4 distortion parameters, and 1 scale parameter.
For convenience of description, 16 parameter quantities are combined into a vector parameter, and a projection operator F is defined:
q=F(θ,P,J,G)
Thus, for a world coordinate point P photographed by camera θ, projecting it into each photo gives the image coordinates q_1, …, q_n, where J is the parameter vector composed of the intrinsic, distortion and scale parameters, and G is the parameter vector composed of the extrinsic parameters. These parameters can be solved as follows:
Assuming that the calibration point P_1 in the world coordinate system appears in n pictures, n independent equations can be established from the equality of coordinates; when there are enough matching points across multiple pictures, the camera parameters can be solved, the optimal projection relation determined, and the world coordinates back-calculated.
Suppose that in the photographs taken by camera θ, b matching points C_i (1 ≤ i ≤ b) appear in picture a; their projections in the camera image are Q_i (1 ≤ i ≤ b), and the back-calculated projection points of the matching points are Y_i (1 ≤ i ≤ b). When the error between Q_i and Y_i is small, C_i can be taken as the world coordinates of the matching point.
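The back-calculation check described above is, in effect, a reprojection-error criterion, which is what bundle adjustment minimizes in step 2.4. The following is a very small sketch of that idea; the flat parameter packing and the use of SciPy are illustrative assumptions, since the patent only names bundle adjustment.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, n_cams, n_pts, cam_idx, pt_idx, obs_uv, A):
    """Residuals between observed pixels and re-projected 3-D points."""
    poses = params[:6 * n_cams].reshape(n_cams, 6)   # rotation vector (3) + translation (3)
    pts = params[6 * n_cams:].reshape(n_pts, 3)
    res = []
    for c, p, uv in zip(cam_idx, pt_idx, obs_uv):
        R = Rotation.from_rotvec(poses[c, :3]).as_matrix()
        cam = R @ pts[p] + poses[c, 3:]              # point in the camera frame
        proj = A @ cam
        res.append(proj[:2] / proj[2] - uv)          # reprojection error (2 values)
    return np.concatenate(res)

# Jointly refines all poses and points (x0 packs the initial poses and points):
# sol = least_squares(reprojection_residuals, x0,
#                     args=(n_cams, n_pts, cam_idx, pt_idx, obs_uv, A))
```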
Step 2.4, reconstructing the sparse point cloud of the images, and expanding the sparse point cloud through bundle adjustment;
Step 2.5, clustering the sparsely reconstructed point cloud data with the CMVS method, then generating a dense point cloud with the PMVS method through matching, diffusion and filtering under the constraints of local photometric consistency and global visibility, thereby completing the dense point cloud reconstruction and realizing the three-dimensional reconstruction of the river model test terrain to be measured.
In this embodiment, the VisualSFM platform is selected to implement the three-dimensional reconstruction process. VisualSFM is a GUI application for three-dimensional reconstruction using Structure from Motion (SFM); the whole SFM workflow, from importing pictures and calibrating coordinates to obtaining point cloud data, can be completed on it. Feature detection, feature matching and bundle adjustment are run with multi-core parallelism, which increases the computation speed, and the platform integrates the PMVS and CMVS algorithms.
The operation flow of the VisualSFM platform is shown in fig. 3, and is specifically as follows:
(1) Image input: click [1] to import the image sequence and select multiple images, as shown in fig. 4.
(2) Feature point extraction and matching: click [2] to extract feature points from each image; feature point and feature point matching files are generated in the image folder, so feature point extraction only needs to be performed once for the same image, which greatly saves computation time, as shown in fig. 5.
(3) Sparse reconstruction: click [3] to start sparse reconstruction, which recovers the three-dimensional coordinates of the matched feature points and the pose information of the cameras, as shown in fig. 6.
(4) Dense reconstruction: click [4] to complete the dense point cloud reconstruction via the PMVS and CMVS algorithms and realize the three-dimensional reconstruction of the river model test terrain; a file saving window then pops up, where a save path and file name are selected to save the reconstructed three-dimensional point cloud data, as shown in fig. 7.
Step 3, carrying out non-research area point cloud data rejection on the three-dimensional point cloud data obtained through reconstruction, and carrying out fine grid division on the research area;
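One way step 3 can be realized in practice is sketched below: points outside the study area are discarded and a regular 1 mm bed-surface grid is defined. The study-area bounds follow the example given later in this embodiment; the workflow itself is an illustrative assumption, not code from the patent.

```python
import numpy as np

X_MIN, X_MAX = 1.2, 2.4   # flow-direction bounds of the study area (m)
Y_MIN, Y_MAX = 0.0, 0.4   # transverse bounds of the study area (m)
CELL = 0.001              # 1 mm x 1 mm grid cell

def crop_to_study_area(points):
    """points: (N, 3) array of reconstructed x, y, z; non-study-area points removed."""
    mask = ((points[:, 0] >= X_MIN) & (points[:, 0] <= X_MAX) &
            (points[:, 1] >= Y_MIN) & (points[:, 1] <= Y_MAX))
    return points[mask]

# Regular bed-surface grid nodes onto which the terrain will be interpolated.
grid_x, grid_y = np.meshgrid(np.arange(X_MIN, X_MAX + CELL, CELL),
                             np.arange(Y_MIN, Y_MAX + CELL, CELL))
```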
And 4, inserting control points to obtain actual terrain three-dimensional coordinates of the experimental terrain of the river model to be tested, and interpolating the three-dimensional coordinates onto the bed surface grid to obtain the three-dimensional river model meeting the boundary constraint condition.
The boundary constraint condition includes:
Continuity constraint: the interpolation results have no discontinuous jumps at the boundaries and throughout the area;
smoothness constraint: the curve or surface of the interpolation result is smooth and free of sharp corners or sharp fluctuations.
The interpolation principle is to find, among the known data points, the points closest to the point to be interpolated and assign their values to it, so that irregular data are interpolated onto a regular grid.
Several neighboring points within a certain range are given different weights for the interpolation; this also smooths the original data, removing noise or outliers and reducing abrupt changes in the data.
The interpolation result expression is as follows:

$$ z_0 = \sum_{i=1}^{n} w_i z_i, \qquad w_i = \frac{d_i^{-\alpha}}{\sum_{j=1}^{n} d_j^{-\alpha}}, \qquad d_i = \sqrt{(x_0 - x_i)^2 + (y_0 - y_i)^2} $$

where (x_0, y_0, z_0) denotes the point to be interpolated, (x_i, y_i, z_i) is the i-th of the points nearest to the point to be interpolated (x_0, y_0, z_0), w_i is the weight, d_i is the distance between the two points, α is an adjustment parameter, i = 1, 2, …, n, and n is the number of points nearest to the point to be interpolated (x_0, y_0, z_0).
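A direct sketch of this inverse-distance weighting on the regular grid is given below, using a KD-tree to find the n nearest known points for every grid node; n = 8 and α = 2 are assumed example values, not parameters fixed by the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def idw_to_grid(points, grid_x, grid_y, n=8, alpha=2.0):
    """Interpolate scattered (x, y, z) points onto the bed-surface grid by IDW."""
    tree = cKDTree(points[:, :2])                     # index the known (x, y) locations
    nodes = np.column_stack([grid_x.ravel(), grid_y.ravel()])
    dist, idx = tree.query(nodes, k=n)                # n nearest points per grid node
    dist = np.maximum(dist, 1e-9)                     # avoid division by zero
    w = dist ** (-alpha)
    w /= w.sum(axis=1, keepdims=True)                 # normalized weights w_i
    z0 = (w * points[idx, 2]).sum(axis=1)             # z_0 = sum_i w_i * z_i
    return z0.reshape(grid_x.shape)
```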
In this embodiment, a flume test section is taken as an example to reconstruct the three-dimensional riverbed terrain; the specific process is as follows:
First, 10 black-and-white checkerboard blocks are placed on the two side walls of the flume as calibration points, i.e. terrain reconstruction control points. A smartphone is used to collect images around the scour pit, with a photo overlap of more than 80%. Overexposure or underexposure during shooting affects the accuracy of the point cloud reconstruction, so a shooting environment that is neither too dark nor too bright ensures a higher image recognition and matching rate, and a static light source reduces or eliminates the interference caused by moving shadows. The total number of photos under each test condition is not less than 200. After each test, 280 photos containing the black-and-white checkerboard calibration points were taken, so the 10 calibration points were each captured 280 times.
Then, three-dimensional reconstruction is carried out on an image sequence acquired by the smart phone based on the SFM method, an acquired sparse point cloud reconstruction diagram is shown in fig. 6, a dense point cloud reconstruction diagram is shown in fig. 7, a dense point cloud data diagram is shown in fig. 8, and a point cloud reconstruction diagram is shown in fig. 9.
Next, the research area of this example is set to the range [X_1min, X_1max] = [1.2, 2.4] along the flow direction and [Y_1min, Y_1max] = [0, 0.4] transverse to the flow; the unit is m. The study area is finely gridded, with the physical grid size set to 1 mm × 1 mm (the grid size may be further subdivided according to the test conditions).
Finally, control points are inserted to obtain the three-dimensional coordinates of the actual terrain, and these coordinates are interpolated onto the bed-surface grid, as shown in fig. 10, to obtain the three-dimensional riverbed terrain. Since a true measurement value cannot be obtained, the elevation of a cross-section is measured with a laser range finder after each test as a check value, while the longitudinal elevation values are calculated with the method of the invention. The comparison is shown in fig. 11, where the dark blue line and the black line represent elevation values obtained by the method of the invention from two calculations of the same image sequence, and the green, red and sky-blue circle points represent three sets of laser range finder measurements. The figure shows that the cross-section elevations fit well, the terrain data carried by the reconstructed point cloud are highly accurate, and the change of the whole bed surface before and after scouring can be obtained accurately.
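The before/after comparison described above reduces to a difference of two gridded bed elevations; a minimal sketch is shown below (the summary statistics are illustrative, not quantities reported in the patent).

```python
import numpy as np

def bed_change(z_before, z_after):
    """Grid-by-grid elevation change: positive = deposition, negative = scour."""
    dz = z_after - z_before
    return dz, float(np.nanmean(dz)), float(np.nanmax(np.abs(dz)))
```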
The technical scheme provided by the invention is described in detail. The principles and embodiments of the present invention have been described herein with reference to specific examples, the description of which is intended only to facilitate an understanding of the method of the present invention and its core ideas. It should be noted that it will be apparent to those skilled in the art that various modifications and adaptations of the invention can be made without departing from the principles of the invention and these modifications and adaptations are intended to be within the scope of the invention as defined in the following claims.

Claims (10)

1. A river engineering model test terrain measurement method based on SFM, characterized by comprising the following steps:
Step 1, acquiring an image sequence of the river engineering model test terrain to be measured;
Step 2, extracting and matching feature points of the acquired image sequence using the SFM method and reconstructing a sparse point cloud of the image sequence, then reconstructing a dense point cloud from the sparse reconstruction result using the CMVS and PMVS methods, to realize three-dimensional reconstruction of the river engineering model test terrain to be measured;
Step 3, removing non-research-area point cloud data from the reconstructed three-dimensional point cloud data, and performing fine grid division of the research area;
Step 4, inserting control points to obtain the actual three-dimensional terrain coordinates of the river engineering model test terrain to be measured, and interpolating the three-dimensional coordinates onto the bed-surface grid to obtain a three-dimensional river engineering model satisfying the boundary constraint conditions.

2. The SFM-based river engineering model test terrain measurement method according to claim 1, characterized in that, when the image sequence is acquired in step 1, a plurality of calibration points are arranged on both sides of the river engineering model test terrain to be measured.

3. The SFM-based river engineering model test terrain measurement method according to claim 2, characterized in that the calibration points are black-and-white checkerboard blocks.

4. The SFM-based river engineering model test terrain measurement method according to claim 3, characterized in that, in the image sequence acquired in step 1, the overlap of the images is not less than 80% and the total number of images is not less than 200.

5. The SFM-based river engineering model test terrain measurement method according to any one of claims 1 to 4, characterized in that, in step 1, a smartphone or a digital camera is used to acquire the image sequence.

6. The SFM-based river engineering model test terrain measurement method according to claim 1, characterized in that, in step 2, the SFM method is used to extract and match feature points of the acquired image sequence, and the specific steps for realizing the three-dimensional reconstruction of the river engineering model test terrain to be measured are as follows:
Step 2.1, inputting the acquired image sequence, and detecting and extracting feature points of each image in the image sequence using the SIFT and SURF operators;
Step 2.2, matching the extracted feature points on different images using a KD-tree nearest-neighbor search algorithm to obtain homonymous feature points;
Step 2.3, calculating the external camera coordinates, direction angles and spatial three-dimensional coordinates from the two-dimensional coordinates of the camera calibration points and the related geometric constraint equations;
Step 2.4, reconstructing the sparse point cloud of the images, and expanding the sparse point cloud through bundle adjustment;
Step 2.5, clustering the sparsely reconstructed point cloud data with the CMVS method, then generating a dense point cloud with the PMVS method through matching, diffusion and filtering under the constraints of local photometric consistency and global visibility, completing the dense point cloud reconstruction and realizing the three-dimensional reconstruction of the river engineering model test terrain to be measured.

7. The SFM-based river engineering model test terrain measurement method according to claim 6, characterized in that, in step 2.2, before feature point matching, the number of feature points of each image is compared against a preset threshold; images whose feature point count is greater than the preset threshold undergo feature point matching, and images whose feature point count is less than the preset threshold are deleted.

8. The SFM-based river engineering model test terrain measurement method according to claim 6, characterized in that the calculation formula for calculating the external camera coordinates in step 2.3 is:

$$ m\,q = A\,[R \ \ T]\,P, \qquad A = \begin{bmatrix} \alpha & c & u_0 \\ 0 & \beta & v_0 \\ 0 & 0 & 1 \end{bmatrix} $$

where A is the intrinsic parameter matrix of the camera, m is the scale correction factor, q = (u, v, 1)^T are the coordinates in the image coordinate system, (u_0, v_0) are the coordinates of the projection center on the image plane, (α, β) are the scale parameters in the u and v directions respectively, c is the pixel distortion parameter, P are the terrain coordinates in the world coordinate system, and R and T are the extrinsic parameters of the camera.

9. The SFM-based river engineering model test terrain measurement method according to claim 1, characterized in that the interpolation result expression for interpolating the three-dimensional coordinates onto the bed-surface grid in step 4 is as follows:

$$ z_0 = \sum_{i=1}^{n} w_i z_i, \qquad w_i = \frac{d_i^{-\alpha}}{\sum_{j=1}^{n} d_j^{-\alpha}}, \qquad d_i = \sqrt{(x_0 - x_i)^2 + (y_0 - y_i)^2} $$

where (x_0, y_0, z_0) denotes the point to be interpolated, (x_i, y_i, z_i) is the i-th of the points nearest to the point to be interpolated (x_0, y_0, z_0), w_i is the weight, d_i is the distance between the two points, α is an adjustment parameter, i = 1, 2, …, n, and n is the number of points nearest to the point to be interpolated (x_0, y_0, z_0).

10. The SFM-based river engineering model test terrain measurement method according to claim 1, characterized in that the boundary constraint conditions include:
Continuity constraint: the interpolation result has no discontinuous jumps at the boundaries or within the entire area;
Smoothness constraint: the curve or surface of the interpolation result is smooth, without sharp corners or drastic fluctuations.
CN202410936877.5A 2024-07-12 2024-07-12 Topographic Survey Method for River Engineering Model Test Based on SFM Pending CN118941737A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410936877.5A CN118941737A (en) 2024-07-12 2024-07-12 Topographic Survey Method for River Engineering Model Test Based on SFM

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410936877.5A CN118941737A (en) 2024-07-12 2024-07-12 Topographic Survey Method for River Engineering Model Test Based on SFM

Publications (1)

Publication Number Publication Date
CN118941737A true CN118941737A (en) 2024-11-12

Family

ID=93359258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410936877.5A Pending CN118941737A (en) 2024-07-12 2024-07-12 Topographic Survey Method for River Engineering Model Test Based on SFM

Country Status (1)

Country Link
CN (1) CN118941737A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119197469A (en) * 2024-11-29 2024-12-27 交通运输部天津水运工程科学研究所 A large-scale wave flume automatic scanning device and method for scouring and silting terrain
CN119197469B (en) * 2024-11-29 2025-03-11 交通运输部天津水运工程科学研究所 Automatic sweeping device and method for large-scale wave water tank siltation terrain

Similar Documents

Publication Publication Date Title
CN109949399B (en) Scene three-dimensional reconstruction method based on unmanned aerial vehicle aerial image
Kersten et al. Image-based low-cost systems for automatic 3D recording and modelling of archaeological finds and objects
Sime et al. Information on grain sizes in gravel-bed rivers by automated image analysis
CN108198230A (en) A 3D Point Cloud Extraction System of Crop Fruit Based on Scattered Images
CN110533774B (en) Three-dimensional model reconstruction method based on smart phone
CN103218787B (en) Multi-source heterogeneous remote sensing image reference mark automatic acquiring method
CN105678757B (en) A kind of ohject displacement measuring method
Paixão et al. Close-range photogrammetry for 3D rock joint roughness evaluation
JP2019120591A (en) Parallax value calculation device, parallax value calculation method and program
Fang et al. Application of a multi-smartphone measurement system in slope model tests
CN112070870B (en) Point cloud map evaluation method and device, computer equipment and storage medium
Mali et al. Assessing the accuracy of high-resolution topographic data generated using freely available packages based on SfM-MVS approach
CN118941737A (en) Topographic Survey Method for River Engineering Model Test Based on SFM
Sevara Top secret topographies: recovering two and three-dimensional archaeological information from historic reconnaissance datasets using image-based modelling techniques
CN110021041B (en) Unmanned scene incremental gridding structure reconstruction method based on binocular camera
Li et al. Combining Structure from Motion and close-range stereo photogrammetry to obtain scaled gravel bar DEMs
Ahmadabadian et al. Image selection in photogrammetric multi-view stereo methods for metric and complete 3D reconstruction
CN118982623A (en) A three-dimensional reconstruction method, device, equipment and medium
CN117788531A (en) Photon detector acquisition projection image module seam filling method based on image registration
CN110335209A (en) A phase-type 3D laser point cloud noise filtering method
CN113624133A (en) Fault positioning method and device and electronic equipment
Wang et al. Identification of rocky ledge on steep, high slopes based on UAV photogrammetry
Rowley et al. Comparison of terrestrial lidar, SfM, and MBES resolution and accuracy for geomorphic analyses in physical systems that experience subaerial and subaqueous conditions
CN116363302A (en) Pipeline three-dimensional reconstruction and pit quantification method based on multi-view geometry
Letortu et al. Three-dimensional (3D) reconstructions of the coastal cliff face in Normandy (France) based on oblique Pléiades imagery: assessment of Ames Stereo Pipeline®(ASP®) and MicMac® processing chains

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination