CN118941737A - Topographic Survey Method for River Engineering Model Test Based on SFM - Google Patents
- Publication number
- CN118941737A (application CN202410936877.5A)
- Authority
- CN
- China
- Prior art keywords
- sfm
- model test
- dimensional
- terrain
- point cloud
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/02—Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
- G01B11/2433—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures for measuring outlines by shadow casting
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
- G01B11/25—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
- G01B11/254—Projection of a pattern, viewing through a pattern, e.g. moiré
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
- G06T17/205—Re-meshing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
Abstract
The invention discloses an SFM-based topographic survey method for river model tests, comprising the following steps: acquiring an image sequence of the river model test terrain to be measured; extracting and matching feature points of the image sequence with the SFM (Structure from Motion) method and reconstructing a sparse point cloud of the image sequence, then densifying the sparse reconstruction with the CMVS and PMVS methods to realize three-dimensional reconstruction of the test terrain; rejecting point cloud data outside the research area from the reconstructed three-dimensional point cloud and finely gridding the research area; and inserting control points to obtain the actual three-dimensional terrain coordinates of the test terrain, then interpolating these coordinates onto the bed-surface grid to obtain a three-dimensional river model satisfying the boundary constraint conditions. The remarkable effects are as follows: by combining photogrammetry with computer vision technology, the three-dimensional topography can be reconstructed quickly, efficiently and accurately, and the method is simpler and more convenient than single-point measurement with a laser range finder.
Description
Technical Field
The invention relates to the technical field of three-dimensional riverbed terrain measurement, and in particular to an SFM-based river model test terrain measurement method.
Background
Riverbed scour-and-deposition evolution caused by water and sediment movement is a phenomenon common in nature, and it often causes practical engineering problems such as river channel siltation, riverbank deformation, coastline retreat, and reservoir siltation with loss of storage capacity; it is closely related to the design, construction and operation of hydraulic engineering. Studying accurate and efficient methods of measuring the three-dimensional topography of the riverbed and analyzing its scour-and-deposition changes is therefore of great significance for river model tests and practical engineering applications.
Traditional methods of measuring the three-dimensional topography of the riverbed mainly rely on tools such as point gauges, steel rulers, levels, theodolites and total stations. However, these are single-point measurements that cannot reflect the topography of the entire research area, and they suffer from heavy workload, low efficiency and other limitations.
In addition, novel topography measuring instruments such as photoelectric reflection topography meters, resistance topography meters, tracking topography meters, ultrasonic topography meters, laser scanners and close-range photogrammetry are becoming increasingly available. With the rapid development of photogrammetry and computer vision, three-dimensional terrain reconstruction from camera images has become feasible for terrain research in river model experiments. In prior studies of gully erosion, comprehensive comparison of different measuring methods such as SFM, total stations and laser profilers showed that SFM has advantages in cost and time. The effectiveness of SFM has been proven in land surveying, and many studies have also confirmed its practicality in the laboratory, with measurement accuracy similar to other terrain measurement technologies such as lidar. However, how to combine image-based three-dimensional reconstruction with riverbed topography measurement in river model tests remains an unresolved scientific problem.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an SFM-based river model test terrain measurement method that combines SFM (Structure from Motion) three-dimensional terrain reconstruction with data analysis and applies it to scour terrain evolution research: the three-dimensional terrain point cloud is interpolated onto a grid, so that parameters such as riverbed elevation can be obtained accurately and efficiently to complete the terrain measurement.
In order to achieve the above purpose, the invention adopts the following technical scheme:
The river model test terrain measurement method based on SFM is characterized by comprising the following steps:
step 1, acquiring an image sequence of a river model test terrain to be tested;
Step 2, extracting and matching characteristic points of the obtained image sequence by adopting an SFM method, reconstructing sparse point cloud of the image sequence, and reconstructing dense point cloud of the sparse reconstruction result by adopting CMVS and PMVS methods to realize three-dimensional reconstruction of the river model test terrain to be tested;
Step 3, carrying out non-research area point cloud data rejection on the three-dimensional point cloud data obtained through reconstruction, and carrying out fine grid division on the research area;
And 4, inserting control points to obtain actual terrain three-dimensional coordinates of the experimental terrain of the river model to be tested, and interpolating the three-dimensional coordinates onto the bed surface grid to obtain the three-dimensional river model meeting the boundary constraint condition.
Further, when the image sequence is acquired in the step 1, a plurality of calibration points are respectively arranged on two sides of the test terrain of the river model to be tested.
Further, the calibration points adopt black-and-white checkerboard blocks.
Further, in the image sequence obtained in the step 1, the overlapping degree of the images is not less than 80%, and the total number of the images is not less than 200.
Further, in step 1, an image sequence is acquired by using a smart phone or a digital camera.
Further, in step 2, feature point extraction and matching are performed on the acquired image sequence with the SFM method; the specific steps for realizing three-dimensional reconstruction of the river model test terrain are as follows:
Step 2.1, inputting an acquired image sequence, and detecting and extracting feature points of each image in the image sequence by adopting a SIFT operator and a SURF operator;
Step 2.2, matching the extracted feature points across different images with a KD-tree nearest-neighbor search algorithm to obtain homonymous (corresponding) feature points;
Step 2.3, calculating external camera coordinates, direction angles and space three-dimensional coordinates according to the two-dimensional coordinates of the camera calibration points and the related geometric constraint relation equation;
Step 2.4, reconstructing the sparse point cloud of the images and expanding the sparse point cloud through bundle adjustment;
And 2.5, clustering the sparsely reconstructed point cloud data with the CMVS method, then generating a dense point cloud with the PMVS method by matching, expanding and filtering under the constraints of local photometric consistency and global visibility, completing the dense point cloud reconstruction and realizing the three-dimensional reconstruction of the river model test terrain.
Further, before feature point matching is performed in step 2.2, feature points of each image are compared according to a preset threshold, feature point matching is performed on images with feature points larger than the preset threshold, and images with feature points smaller than the preset threshold are deleted.
Further, the basic formula for calculating the exterior camera parameters in step 2.3 is:

$$ m\,q = A\,[R\ \ T]\,P, \qquad A = \begin{pmatrix} \alpha & c & u_0 \\ 0 & \beta & v_0 \\ 0 & 0 & 1 \end{pmatrix}, $$

where A is the intrinsic parameter matrix of the camera, m is a scale correction factor, q = (u, v, 1)^T is the coordinate in the image coordinate system, (u_0, v_0) is the coordinate of the projection center on the image plane, α and β are the scale parameters in the u and v directions respectively, c is the pixel distortion (skew) parameter, P is the terrain coordinate in the world coordinate system, and R and T are the exterior parameters of the camera.
Further, in step 4, the result of interpolating the three-dimensional coordinates onto the bed-surface mesh is expressed as:

$$ z_0 = \sum_{i=1}^{n} \lambda_i z_i, \qquad \lambda_i = \frac{d_i^{-\alpha}}{\sum_{j=1}^{n} d_j^{-\alpha}}, \qquad d_i = \sqrt{(x_0 - x_i)^2 + (y_0 - y_i)^2}, $$

where (x_0, y_0, z_0) denotes the point to be interpolated, (x_i, y_i, z_i) is the i-th of the n points nearest to the point to be interpolated, λ_i is the weight, d_i is the distance between the two points, α is the adjustment parameter, and i = 1, 2, …, n.
Further, the boundary constraint condition includes:
Continuity constraint: the interpolation results have no discontinuous jumps at the boundaries and throughout the area;
smoothness constraint: the curve or surface of the interpolation result is smooth and free of sharp corners or sharp fluctuations.
The invention has the remarkable effects that:
The invention can reconstruct three-dimensional terrain rapidly, efficiently and accurately by combining photogrammetry with computer vision technology;
The point cloud data of the non-research area are screened out of the large amount of dense point cloud data carried by the photos, guaranteeing the accuracy of the three-dimensional point cloud reconstruction;
The research area is finely gridded with a physical mesh size of 1 mm × 1 mm; on this basis, the three-dimensional coordinates of the actual terrain are obtained by inserting control points and interpolated onto the bed-surface grid to obtain the three-dimensional riverbed terrain, so that the change of the whole bed surface can be obtained accurately by subtracting the grids before and after scouring; compared with single-point measurement with a laser range finder, the method is simple and convenient.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic flow chart of a three-dimensional reconstruction of SFM;
FIG. 3 is a schematic flow chart of three-dimensional reconstruction by VisualSFM platform;
FIG. 4 is a schematic diagram of the VisualSFM platform inputting an image;
FIG. 5 is a diagram of VisualSFM platform feature point extraction and matching processes;
FIG. 6 is a sparse reconstruction of the VisualSFM platform;
FIG. 7 is a dense reconstruction view of the VisualSFM platform;
fig. 8 is a dense point cloud data diagram in the present embodiment;
fig. 9 is a point cloud reconstruction diagram of the present embodiment;
Fig. 10 is a grid interpolation diagram of the present embodiment;
FIG. 11 is a graph of calculated versus laser measurements for the method of the present invention.
Detailed Description
The following describes the embodiments and working principles of the present invention in further detail with reference to the drawings.
As shown in fig. 1, the embodiment provides a river model test terrain measurement method based on SFM, which specifically comprises the following steps:
step 1, acquiring an image sequence of a river model test terrain to be tested by adopting a smart phone or a digital camera;
When the image sequence is acquired, several calibration points are arranged on both sides of the river model test terrain; the calibration points are black-and-white checkerboard blocks. Using black-and-white checkerboards as control enhances recognizability for the image acquisition device and improves the accuracy of point cloud reconstruction and its evaluation.
In the acquired image sequence, the overlapping degree of the images is not less than 80%, and the total number of the images is not less than 200.
In the image acquisition process, the higher the image resolution, the more image information is captured, and maximizing the depth of field improves the accuracy of the photographic measurement. During three-dimensional model reconstruction, the direction angle and coordinates of each photo comprise the spatial attitude parameters of the corresponding camera.
Step 2, extracting and matching characteristic points of the obtained image sequence by adopting an SFM method, reconstructing sparse point cloud of the image sequence, and reconstructing dense point cloud of the sparse reconstruction result by adopting CMVS and PMVS methods to realize three-dimensional reconstruction of the river model test terrain to be tested;
the SFM method can fully automatically reconstruct a three-dimensional scene from a two-dimensional image and simultaneously retrieve the corresponding camera geometry in an arbitrary coordinate system.
A typical SFM workflow includes the following steps: first, identifying and matching homologous image points in the overlapping photos; second, reconstructing the geometric image acquisition configuration and the three-dimensional coordinates of the matched image points (sparse point cloud) using iterative bundle adjustment (BA); third, dense matching based on the reconstructed image geometry to densify the sparse point cloud.
SFM methods rely on automated processing tools available in various non-commercial and commercial software packages, which gives them the advantage of being usable at almost any scale. Feature points of each photo are identified and extracted, and the feature points on different photos are matched to obtain homonymous points. The exterior camera coordinates, direction angles and spatial three-dimensional coordinates are then calculated from the two-dimensional coordinates of the camera calibration points and the relevant geometric constraint equations for three-dimensional terrain reconstruction.
Referring to fig. 2, the specific steps of the SFM method for extracting and matching feature points of the acquired image sequence to realize three-dimensional reconstruction of the experimental terrain of the river model to be tested are as follows:
Step 2.1, inputting the acquired image sequence, and detecting and extracting feature points of each image in the sequence with the SIFT and SURF operators;
The SIFT (Scale-Invariant Feature Transform) keypoint detection method is used to identify and extract feature points; the algorithm has the advantage of remaining stable under different shooting conditions.
Step 2.2, using a KD-tree nearest-neighbor search algorithm, so that for a known point the nearest point in d-dimensional space can be found quickly and efficiently, the extracted feature points on different images are matched to obtain homonymous feature points;
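As an illustrative sketch (not the patented implementation), the KD-tree nearest-neighbor matching of this step can be written with SciPy's `cKDTree`. The 128-dimensional descriptors below are random stand-ins for real SIFT/SURF descriptors, and the Lowe-style ratio test is an assumed rejection criterion:

```python
import numpy as np
from scipy.spatial import cKDTree

def match_features(desc_a, desc_b, ratio=0.8):
    """Match descriptors of image A to image B via KD-tree nearest-neighbor
    search, keeping only matches that pass a Lowe-style ratio test."""
    tree = cKDTree(desc_b)
    # query the two nearest neighbors in B for every descriptor in A
    dist, idx = tree.query(desc_a, k=2)
    matches = []
    for i, (d, j) in enumerate(zip(dist, idx)):
        if d[0] < ratio * d[1]:          # nearest clearly closer than second nearest
            matches.append((i, j[0]))    # (index in A, index in B)
    return matches

rng = np.random.default_rng(0)
desc_b = rng.random((50, 128))           # 50 stand-in 128-dim descriptors
desc_a = desc_b[:10] + 0.001             # 10 slightly perturbed copies: true matches
matches = match_features(desc_a, desc_b)
print(len(matches))
```

In practice the descriptor arrays would come from the SIFT/SURF detection of step 2.1, run on each pair of overlapping images.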
During matching, the SFM main program proceeds when the number of matching points between images reaches the preset threshold; images whose number of matching points falls below the threshold are removed.
Step 2.3, calculating external camera coordinates, direction angles and space three-dimensional coordinates according to the two-dimensional coordinates of the camera calibration points and the related geometric constraint relation equation;
The world coordinate system is a three-dimensional coordinate system representing real-world space; let a terrain coordinate in the world coordinate system be P_W = (X_W, Y_W, Z_W, 1)^T.

The image coordinate system is the two-dimensional coordinate system obtained by projecting the camera's three-dimensional coordinates onto the image plane; let the coordinate in the image coordinate system be q = (u, v, 1)^T.
The basic camera coordinate transformation formula is:

$$ m\,q = A\,[R\ \ T]\,P, \qquad A = \begin{pmatrix} \alpha & c & u_0 \\ 0 & \beta & v_0 \\ 0 & 0 & 1 \end{pmatrix}, $$

where A is the intrinsic parameter matrix of the camera, m is a scale correction factor, q = (u, v, 1)^T is the coordinate in the image coordinate system, (u_0, v_0) is the coordinate of the projection center on the image plane, α and β are the scale parameters in the u and v directions, c is the pixel distortion (skew) parameter, P is the terrain coordinate in the world coordinate system, and R and T are the exterior parameters of the camera: R represents the rotation between the two coordinate systems and contains 3 independent parameters; T represents the translation between the two coordinate systems and contains 3 independent parameters.

Errors arising in the manufacture and machining of the camera lens produce distortion. When both radial and tangential distortion are considered, four more image distortion parameters K_1, K_2, K_3, K_4 are required.

In total, therefore, 16 parameters are considered: 5 intrinsic parameters, 6 exterior parameters, 4 distortion parameters and 1 scale parameter.
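A minimal numerical sketch of this pinhole model (the distortion parameters K_1 to K_4 are omitted and all values are illustrative) projects a world point with m·q = A[R T]P:

```python
import numpy as np

# intrinsic matrix A: scale parameters (alpha, beta), skew c, projection center (u0, v0)
alpha, beta, c, u0, v0 = 800.0, 800.0, 0.0, 320.0, 240.0
A = np.array([[alpha, c,    u0],
              [0.0,   beta, v0],
              [0.0,   0.0,  1.0]])

R = np.eye(3)                        # exterior rotation (3 independent parameters)
T = np.array([[0.0], [0.0], [0.0]])  # exterior translation (3 independent parameters)

P = np.array([0.1, -0.2, 2.0, 1.0])  # homogeneous world point
mq = A @ np.hstack([R, T]) @ P       # m*q = A [R T] P
m = mq[2]                            # scale correction factor
u, v = mq[0] / m, mq[1] / m          # pixel coordinates after dividing out m
print(u, v)
```

Dividing by the third component recovers the scale correction factor m, which is how the homogeneous product is brought back to pixel coordinates.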
For convenience of description, the 16 parameters are combined into vector parameters and a projection operator F is defined:

q = F(θ, P, J, G),

where θ denotes the camera, which images a world coordinate point P in each photo to obtain image coordinates q_1, …, q_n; J is the parameter vector composed of the intrinsic, distortion and scale parameters; and G is the parameter vector composed of the exterior parameters. The parameters are solved as follows.
Suppose a calibration point P_1 in the world coordinate system appears in n pictures. From the equality of its coordinates, n independent equations can be established; with enough matching points across multiple pictures, the camera parameters can be solved, the optimal projection relation determined, and the world coordinates back-calculated.
Suppose that in a picture a taken by camera θ, b matching points C_i (1 ≤ i ≤ b) appear; their projections in the camera are Q_i (1 ≤ i ≤ b), and their back-calculated projection points are Y_i (1 ≤ i ≤ b). When the error between Q_i and Y_i is small, C_i can be taken as the world coordinates of the matching point.
Step 2.4, reconstructing the sparse point cloud of the images and expanding the sparse point cloud through bundle adjustment;
And 2.5, clustering the sparsely reconstructed point cloud data with the CMVS method, then generating a dense point cloud with the PMVS method by matching, expanding and filtering under the constraints of local photometric consistency and global visibility, completing the dense point cloud reconstruction and realizing the three-dimensional reconstruction of the river model test terrain.
In this embodiment, the VisualSFM platform is selected to implement the three-dimensional reconstruction process. VisualSFM is a GUI application for three-dimensional reconstruction using Structure from Motion (SFM); the whole SFM pipeline, from importing pictures and coordinate calibration to obtaining point cloud data, runs quickly because feature detection, feature matching and bundle adjustment exploit multi-core parallelism, and the platform integrates the PMVS and CMVS algorithms.
The operation flow of the VisualSFM platform is shown in fig. 3, and is specifically as follows:
(1) Image input: click [1] to import the image sequence and select multiple images, as shown in fig. 4.

(2) Feature point extraction and matching: click [2] to extract feature points from each image; feature point and feature point matching files are generated under the image folder, so feature extraction is performed only once for the same image, greatly saving run time, as shown in fig. 5.

(3) Sparse reconstruction: click [3] to start sparse reconstruction, which recovers the three-dimensional coordinates of the matched feature points and the pose information of the camera, as shown in fig. 6.

(4) Dense reconstruction: click [4] to complete dense point cloud reconstruction through the PMVS and CMVS algorithms, realizing three-dimensional reconstruction of the river model test terrain; a file-saving window then pops up, and after selecting a save path and file name the reconstructed three-dimensional point cloud data is saved, as shown in fig. 7.
Step 3, carrying out non-research area point cloud data rejection on the three-dimensional point cloud data obtained through reconstruction, and carrying out fine grid division on the research area;
And 4, inserting control points to obtain actual terrain three-dimensional coordinates of the experimental terrain of the river model to be tested, and interpolating the three-dimensional coordinates onto the bed surface grid to obtain the three-dimensional river model meeting the boundary constraint condition.
The boundary constraint condition includes:
Continuity constraint: the interpolation results have no discontinuous jumps at the boundaries and throughout the area;
smoothness constraint: the curve or surface of the interpolation result is smooth and free of sharp corners or sharp fluctuations.
The interpolation principle is to find, among the known data points, those closest to the point to be interpolated and assign their values to it, interpolating scattered (irregular) data onto a regular grid.

Several neighboring points within a certain range are given different weights for the interpolation, which smooths the original data, removes noise and outliers, and reduces abrupt changes in the data.
The interpolation result is expressed as:

$$ z_0 = \sum_{i=1}^{n} \lambda_i z_i, \qquad \lambda_i = \frac{d_i^{-\alpha}}{\sum_{j=1}^{n} d_j^{-\alpha}}, \qquad d_i = \sqrt{(x_0 - x_i)^2 + (y_0 - y_i)^2}, $$

where (x_0, y_0, z_0) denotes the point to be interpolated, (x_i, y_i, z_i) is the i-th of the n points nearest to the point to be interpolated, λ_i is the weight, d_i is the distance between the two points, α is the adjustment parameter, and i = 1, 2, …, n.
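A minimal NumPy sketch of this inverse-distance weighting (the neighbor count n and power α are illustrative choices): on a flat test surface every weighted estimate must return the common elevation, which makes the weights easy to verify.

```python
import numpy as np

def idw(x0, y0, xs, ys, zs, alpha=2.0, n=4):
    """Inverse-distance-weighted estimate of z at (x0, y0) from the n nearest
    known points; alpha is the adjustment (power) parameter."""
    d = np.hypot(xs - x0, ys - y0)
    near = np.argsort(d)[:n]            # n nearest known points
    d, z = d[near], zs[near]
    if d[0] == 0.0:                     # coincides with a known point
        return float(z[0])
    w = d ** -alpha
    lam = w / w.sum()                   # weights lambda_i sum to 1
    return float(np.dot(lam, z))

xs = np.array([0.0, 1.0, 0.0, 1.0])
ys = np.array([0.0, 0.0, 1.0, 1.0])
zs = np.array([10.0, 10.0, 10.0, 10.0])  # flat bed: every estimate should be 10
print(idw(0.5, 0.5, xs, ys, zs))
```

In the method of step 4 this estimate would be evaluated at every node of the 1 mm bed-surface grid.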
In this embodiment, a flume test section is taken as an example to reconstruct the three-dimensional riverbed terrain; the specific process is as follows:
First, 10 black-and-white checkerboard blocks are placed on the side walls on both sides of the flume as calibration points, i.e., terrain reconstruction control points. A smartphone is used to collect images around the scour pit with photo overlap above 80%. Overexposure or underexposure during shooting affects the accuracy of point cloud reconstruction, so a shooting environment with suitable, even lighting ensures a higher image recognition and matching rate, and a static light source reduces or eliminates interference from moving shadows. The total number of photos under each test condition is not less than 200. After each test, 280 photographs containing the black-and-white checkerboard calibration points were taken, so each of the 10 points was calibrated 280 times.
Then, three-dimensional reconstruction is carried out on an image sequence acquired by the smart phone based on the SFM method, an acquired sparse point cloud reconstruction diagram is shown in fig. 6, a dense point cloud reconstruction diagram is shown in fig. 7, a dense point cloud data diagram is shown in fig. 8, and a point cloud reconstruction diagram is shown in fig. 9.
Next, the research area of this example is set to the flow-direction range [X_1min, X_1max] = [1.2, 2.4] and the transverse range [Y_1min, Y_1max] = [0, 0.4], in meters. The research area is finely gridded with a physical mesh size of 1 mm × 1 mm (the mesh size may be further subdivided according to test conditions).
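The rejection of non-research-area points (step 3) and the 1 mm gridding over these ranges can be sketched in NumPy as follows; the point cloud here is a random stand-in for the reconstructed dense cloud:

```python
import numpy as np

X1MIN, X1MAX = 1.2, 2.4   # flow-direction range of the research area (m)
Y1MIN, Y1MAX = 0.0, 0.4   # transverse range of the research area (m)
DX = 0.001                # 1 mm x 1 mm grid cells

rng = np.random.default_rng(1)
cloud = rng.random((1000, 3)) * [3.0, 0.6, 0.2]   # stand-in dense cloud (x, y, z)

# reject point cloud data outside the research area
inside = ((cloud[:, 0] >= X1MIN) & (cloud[:, 0] <= X1MAX) &
          (cloud[:, 1] >= Y1MIN) & (cloud[:, 1] <= Y1MAX))
study = cloud[inside]

# fine grid of bed-surface nodes over the research area
gx = np.arange(X1MIN, X1MAX + DX / 2, DX)
gy = np.arange(Y1MIN, Y1MAX + DX / 2, DX)
print(study.shape[0], len(gx), len(gy))
```

Adding half a step to the `arange` endpoint guards against floating-point round-off dropping the last grid node.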
Finally, control points are inserted to obtain the three-dimensional coordinates of the actual terrain, which are interpolated onto the bed-surface grid, as shown in fig. 10, to obtain the three-dimensional riverbed terrain. Since a true measurement value cannot be obtained, after each test the section elevation is measured with a laser range finder as a check value, while the longitudinal elevation is calculated with the method of the invention. The comparison is shown in fig. 11, where the dark blue and black lines are elevation values obtained by applying the method of the invention twice to the same image sequence, and the green, red and sky-blue circles are three sets of laser range finder measurements. The figure shows that the section elevations fit well, the terrain data carried by the reconstructed point cloud are highly accurate, and the change of the whole bed surface before and after scouring can be obtained accurately.
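The before/after subtraction of the bed-surface grids can be illustrated with toy elevation arrays (values in meters; the grid shape is illustrative, and on the real 1 mm grid the cell-area term would be 1 mm²):

```python
import numpy as np

# elevations interpolated onto the same bed-surface grid (toy 3 x 4 example, in m)
z_before = np.full((3, 4), 0.10)
z_after = np.array([[0.10, 0.08, 0.08, 0.10],
                    [0.08, 0.05, 0.05, 0.08],
                    [0.10, 0.08, 0.08, 0.10]])

dz = z_after - z_before                 # negative = scour, positive = deposition
cell_area = 0.001 * 0.001               # 1 mm x 1 mm cell, in m^2
scour_volume = -dz[dz < 0].sum() * cell_area
max_scour_depth = -dz.min()
print(max_scour_depth)
```

Because both surveys are interpolated onto the same grid, the scour depth and volume follow directly from elementwise subtraction, with no point-to-point matching step.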
The technical scheme provided by the invention has been described in detail above. The principles and embodiments of the invention are explained with reference to specific examples, and this description is intended only to facilitate an understanding of the method of the invention and its core ideas. It should be noted that those skilled in the art can make various modifications and adaptations of the invention without departing from its principles, and these modifications and adaptations are intended to fall within the scope of the invention as defined by the following claims.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410936877.5A CN118941737A (en) | 2024-07-12 | 2024-07-12 | Topographic Survey Method for River Engineering Model Test Based on SFM |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118941737A true CN118941737A (en) | 2024-11-12 |
Family
ID=93359258
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410936877.5A Pending CN118941737A (en) | 2024-07-12 | 2024-07-12 | Topographic Survey Method for River Engineering Model Test Based on SFM |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118941737A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---|
CN119197469A * | 2024-11-29 | 2024-12-27 | 交通运输部天津水运工程科学研究所 (Tianjin Research Institute for Water Transport Engineering, Ministry of Transport) | A large-scale wave flume automatic scanning device and method for scouring and silting terrain
CN119197469B * | 2024-11-29 | 2025-03-11 | 交通运输部天津水运工程科学研究所 (Tianjin Research Institute for Water Transport Engineering, Ministry of Transport) | A large-scale wave flume automatic scanning device and method for scouring and silting terrain
Similar Documents
Publication | Title
---|---
CN109949399B | Scene three-dimensional reconstruction method based on unmanned aerial vehicle aerial image
Kersten et al. | Image-based low-cost systems for automatic 3D recording and modelling of archaeological finds and objects
Sime et al. | Information on grain sizes in gravel-bed rivers by automated image analysis
CN108198230A | A 3D point cloud extraction system of crop fruit based on scattered images
CN110533774B | Three-dimensional model reconstruction method based on smart phone
CN103218787B | Multi-source heterogeneous remote sensing image reference mark automatic acquiring method
CN105678757B | An object displacement measuring method
Paixão et al. | Close-range photogrammetry for 3D rock joint roughness evaluation
JP2019120591A | Parallax value calculation device, parallax value calculation method and program
Fang et al. | Application of a multi-smartphone measurement system in slope model tests
CN112070870B | Point cloud map evaluation method and device, computer equipment and storage medium
Mali et al. | Assessing the accuracy of high-resolution topographic data generated using freely available packages based on SfM-MVS approach
CN118941737A | Topographic survey method for river engineering model test based on SFM
Sevara | Top secret topographies: recovering two and three-dimensional archaeological information from historic reconnaissance datasets using image-based modelling techniques
CN110021041B | Unmanned scene incremental gridding structure reconstruction method based on binocular camera
Li et al. | Combining Structure from Motion and close-range stereo photogrammetry to obtain scaled gravel bar DEMs
Ahmadabadian et al. | Image selection in photogrammetric multi-view stereo methods for metric and complete 3D reconstruction
CN118982623A | A three-dimensional reconstruction method, device, equipment and medium
CN117788531A | Photon detector acquisition projection image module seam filling method based on image registration
CN110335209A | A phase-type 3D laser point cloud noise filtering method
CN113624133A | Fault positioning method and device and electronic equipment
Wang et al. | Identification of rocky ledge on steep, high slopes based on UAV photogrammetry
Rowley et al. | Comparison of terrestrial lidar, SfM, and MBES resolution and accuracy for geomorphic analyses in physical systems that experience subaerial and subaqueous conditions
CN116363302A | Pipeline three-dimensional reconstruction and pit quantification method based on multi-view geometry
Letortu et al. | Three-dimensional (3D) reconstructions of the coastal cliff face in Normandy (France) based on oblique Pléiades imagery: assessment of Ames Stereo Pipeline® (ASP®) and MicMac® processing chains
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |