
CN118781178B - A volume measurement method based on surface reconstruction and triple integral - Google Patents


Info

Publication number
CN118781178B
CN118781178B (application CN202411266793.1A)
Authority
CN
China
Prior art keywords
point cloud
image
camera
point
reconstruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202411266793.1A
Other languages
Chinese (zh)
Other versions
CN118781178A (en)
Inventor
王华锋
蔡航
张瑜
侯裕霖
李怀川
李丹
刘宏森
郭克信
Current Assignee
North China University of Technology
Beihang University
Original Assignee
North China University of Technology
Beihang University
Priority date
Filing date
Publication date
Application filed by North China University of Technology and Beihang University
Priority to CN202411266793.1A
Publication of CN118781178A
Application granted
Publication of CN118781178B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis → G06T7/60 Analysis of geometric attributes → G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects → G06T17/30 Polynomial surface description
    • G06T2207/00 Indexing scheme for image analysis or image enhancement → G06T2207/10 Image acquisition modality → G06T2207/10004 Still image; photographic image; G06T2207/10012 Stereo images

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Mathematical Analysis (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computer Graphics (AREA)
  • Mathematical Optimization (AREA)
  • Software Systems (AREA)
  • Algebra (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract


The present invention discloses a volume measurement method based on surface reconstruction and triple integration, comprising the following steps: 1. using a 3D scanning device and an image-capturing device to obtain a point cloud and images; 2. calibration and registration of the image and the point cloud; 3. fusion of the point cloud and the image; 4. training of a neural network and segmentation of the point cloud; 5. reconstruction of the point cloud and calculation of the volume. The method effectively and accurately measures the volume of various irregular objects. The use of deep learning in point cloud segmentation allows it to adapt to different scenes; surface reconstruction of the point cloud restores the original pose of the object in three-dimensional space to a large extent; and triple integration of the parameterized surface equation computes the volume quickly and accurately. The method thus offers high efficiency, high precision, and a high degree of automation.

Description

Volume measurement method based on surface reconstruction and triple integration
Technical Field
The invention relates to a measurement method, in particular to a volume measurement method based on surface reconstruction and triple integration.
Background
In the field of three-dimensional data processing and analysis, the processing of point cloud data is particularly important. A point cloud is a dataset made up of a series of points in three-dimensional space, typically acquired by scanning devices such as a LiDAR or a stereo photography system. These points contain positional information of the object surface, and sometimes color and intensity information, making point cloud data useful in many applications, such as environmental perception for autonomous vehicles, digital preservation of cultural heritage, topographic mapping, virtual reality, and the like.
Although point cloud data has many advantages in the digitized representation of the three-dimensional world, its unstructured nature also presents several processing challenges, particularly when making accurate volume measurements and complex data segmentations. Conventional point cloud processing methods, such as cluster segmentation, grid-based volume estimation, and the like, while providing a preliminary solution, often lack accuracy and efficiency, particularly when processing large-scale or high-density point cloud data.
Furthermore, with the development of computer vision and machine learning techniques, particularly the successful application of deep learning in image and video analysis, researchers have begun exploring the potential of applying these advanced techniques to point cloud data processing. Deep-learning-based point cloud segmentation can effectively identify and classify different objects and structures in the point cloud, but how to combine it with traditional geometric measurement methods to achieve more accurate and automated volume measurement remains a problem worth researching.
To overcome these challenges, we propose a new approach: a triple-integral-based color point cloud segmentation and volume measurement method. It combines advanced color point cloud processing with the mathematical theory of triple integration, and by accurately calculating the volume enclosed by a closed surface it provides a new perspective and tool for the analysis and application of point cloud data. With this method, complex point cloud data can be processed efficiently and accurately, supporting application scenarios ranging from cultural heritage protection to precision engineering modeling.
Disclosure of Invention
The invention aims to provide a volume measurement method based on surface reconstruction and triple integration, which solves the problems of low precision and low efficiency of the traditional method.
The aim of the invention is achieved by the following technical solution:
A volumetric measurement method based on surface reconstruction and triple integration, the method comprising the steps of:
step 1, acquiring point cloud information and image information of a target object under different angles through a plurality of groups of laser radars and cameras;
step 2, calibrating and registering the camera and the laser radar by using the data obtained in the step 1;
step 3, fusing the image and the point cloud by a transformation matrix obtained through calibration and registration;
step 4, training the point cloud segmentation neural network using the fused color point cloud as a data set;
step 5, performing surface reconstruction on the segmented point cloud data, obtaining the surface equation, and calculating the volume by triple integration to obtain the precise volume of the object.
In step 1, at least 3 groups of lidars and cameras are used for acquisition; the lidars use non-repetitive scanning to obtain point clouds and images from different angles, and the resolutions of the point clouds and images are adapted to different scenes and objects by setting the lidar scanning time and the camera lens resolution.
In step 2, the camera intrinsics are calibrated first: a grid calibration board is photographed in multiple groups from different distances and angles, and the camera intrinsic matrix is obtained by calculation:

$K = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}$, (1)

together with the radial distortion parameters $k_1$, $k_2$ and the tangential distortion parameters $p_1$, $p_2$;
After the intrinsic matrix (1) is obtained, the camera and radar are fixed, point cloud data and image data of multiple groups of objects are captured, and the extrinsic matrix of the camera and radar is calculated using key points of the object data:

$T = \begin{pmatrix} R & t \\ 0^{\mathsf T} & 1 \end{pmatrix}$, (2)

In the extrinsic matrix (2), $R$ is a 3×3 rotation matrix describing the camera-to-radar rotation, $t$ is a 3×1 translation vector describing the camera-to-radar translation, $0^{\mathsf T}$ is a 1×3 zero vector, and $1$ is a scalar that maintains the homogeneous coordinate form of the matrix;
After the intrinsic matrix (1) and the extrinsic matrix (2) are obtained, registration between the radars is performed: the coordinate system of one lidar is taken as the reference, the other radars are treated as radars to be registered, a calibration object is scanned simultaneously, and at least 3 calibration points are collected to complete coarse registration; fine registration is then performed with the ICP algorithm.
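As an illustration of the coarse-registration step, the optimal rigid transform between three or more corresponding calibration points can be computed in closed form with the SVD-based Kabsch method. The sketch below is not the patent's code; the function name and array layout are illustrative assumptions.

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t mapping src -> dst
    (least-squares Kabsch/SVD method) from >= 3 correspondences."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    c_src = src.mean(axis=0)                  # centroids
    c_dst = dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)       # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # correct an improper (reflected) solution if necessary
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t
```

The recovered `(R, t)` serves as the coarse initial alignment that an ICP refinement would then improve.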
In step 3, the transformation matrices obtained in step 2 are used to transfer the image from the image coordinate system to the camera coordinate system, and then from the camera coordinate system to the radar coordinate system; the point cloud mask within the FOV is then obtained, giving the points that can be projected onto the image, and the mask-filtered point cloud is projected onto the image to produce a depth map, realizing the fusion of image and point cloud.
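The projection chain described in step 3 can be sketched as follows. This is a minimal illustration rather than the patent's implementation; it assumes `R` and `t` map radar coordinates into the camera frame (the inverse of the camera-to-radar extrinsics) and that `K` is the intrinsic matrix of equation (1).

```python
import numpy as np

def project_to_depth(points_radar, K, R, t, width, height):
    """Project radar-frame points into the image plane and build a depth
    map; points behind the camera or outside the image are masked out."""
    pts_cam = (R @ np.asarray(points_radar, float).T).T + t  # radar -> camera
    depth = np.zeros((height, width))
    in_front = pts_cam[:, 2] > 0                 # FOV mask: positive depth only
    uvw = (K @ pts_cam[in_front].T).T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)  # perspective division
    z = uvw[:, 2]
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < width) & \
             (uv[:, 1] >= 0) & (uv[:, 1] < height)
    for (u, v), d in zip(uv[inside], z[inside]):
        if depth[v, u] == 0 or d < depth[v, u]:  # keep the nearest point
            depth[v, u] = d
    return depth
```

Reading the image color at each valid pixel back onto the projected points is the complementary step that yields the colored point cloud.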
In step 4, the point cloud and image are segmented with a deep learning method: either the image and the point cloud are fed into separately trained models whose segmentations are combined in a comprehensive judgment, or the color point cloud is fed into a single trained model and segmented directly, yielding the complete point cloud of the object to be measured.
In step 5, after the complete point cloud is obtained, the closure of the point cloud is first ensured, and the holes of incompletely closed models are repaired by a geometric method. Secondly, once the closure of the point cloud model is ensured, surface reconstruction is performed with the Poisson reconstruction method. An indicator function $\chi$ is defined that takes the value 1 inside the surface and 0 outside, and the point cloud and the corresponding normal information are used to estimate the gradient $\nabla\chi$: for each point $p_i$ and its normal $n_i$, the direction of $\nabla\chi$ at $p_i$ is made consistent with $n_i$, and a vector field $V$ is constructed as an approximation of $\nabla\chi$, with $V(p_i) = n_i$. Using this vector field $V$, a scalar field $\chi$ is sought whose gradient is as close as possible to $V$, i.e. solving:

$\Delta\chi = \nabla\cdot V$, (3)

where $\Delta$ is the Laplace operator and $\nabla\cdot V$ is the divergence of the vector field $V$. The solution $\chi$ is a scalar field defined everywhere; extracting an iso-surface from it yields a smooth and accurate three-dimensional model after reconstruction;
After surface reconstruction is completed, a mathematically described surface equation is extracted from the closed surface model by surface parameterization. The parameterized surface is expressed as a function $S(u,v)$, where $u$ and $v$ are parameters, $(u,v)\in D$, and $S$ is a vector function that maps points on the parameter plane onto the surface in three-dimensional space: $S(u,v) = (x(u,v),\, y(u,v),\, z(u,v))$, where $x$, $y$ and $z$ are functions of $u$ and $v$;
A plane is parameterized as $S(u,v) = (u, v, 0)$; a cylinder as $S(u,v) = (r\cos u,\, r\sin u,\, v)$, where $r$ is the radius; a sphere as $S(u,v) = (r\sin u\cos v,\, r\sin u\sin v,\, r\cos u)$, where $r$ is the radius. For more complex surfaces, a B-spline surface representation is used:
$S(u,v) = \sum_{i=0}^{m}\sum_{j=0}^{n} N_{i,p}(u)\, N_{j,q}(v)\, P_{i,j}$, (4)

where $P_{i,j}$ are control points, $N_{i,p}(u)$ and $N_{j,q}(v)$ are B-spline basis functions, and $p$ and $q$ are the degrees of the basis functions;
finally, the volume is calculated by applying triple integration over the region $\Omega$ enclosed by the parameterized three-dimensional surface:

$V = \iiint_{\Omega} \mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z$, (5).
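As a numerical sanity check of the triple-integral step, the volume integral can be evaluated for the sphere parameterization given above by switching to spherical coordinates and summing the Jacobian $\rho^2 \sin\theta$ over a grid. This stdlib-only sketch is illustrative; the function name and grid resolution are assumptions, not part of the patent.

```python
import math

def sphere_volume_triple_integral(r, n=120):
    """Evaluate V = \iiint dx dy dz for a ball of radius r via
    V = \int_0^{2pi} \int_0^{pi} \int_0^r rho^2 sin(theta) d rho d theta d phi,
    using a midpoint Riemann sum over rho and theta."""
    dr = r / n
    dth = math.pi / n
    total = 0.0
    for i in range(n):
        rho = (i + 0.5) * dr              # midpoint in rho
        for j in range(n):
            th = (j + 0.5) * dth          # midpoint in theta
            total += rho * rho * math.sin(th)   # Jacobian rho^2 sin(theta)
    # the integrand does not depend on phi, so multiply by the phi range 2*pi
    return total * dr * dth * 2.0 * math.pi
```

For r = 1 the sum converges to 4π/3 ≈ 4.18879, matching the closed-form ball volume.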
In step 5, repairing the hole by using a geometric method specifically includes the following steps:
In the first step, the boundaries of holes in the point cloud are identified. The mesh of the model consists of a vertex set V and a face set F, where each face in F is defined by vertex indices; boundary identification is realized by the following algorithm:
① Creating an edge dictionary E, wherein keys are vertex pairs and values are the number of times the edge appears;
② Traversing each surface, and updating the occurrence times of each edge in E;
③ All edges with the occurrence number of 1 in E are boundary edges;
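The edge-dictionary algorithm ①–③ can be sketched directly. This minimal illustration (not the patent's code) takes a triangle mesh as vertex-index triples and returns the boundary edges:

```python
from collections import defaultdict

def boundary_edges(faces):
    """Find boundary edges of a triangle mesh: count each undirected
    edge across all faces; edges that appear exactly once are boundary."""
    count = defaultdict(int)                    # edge dictionary E
    for a, b, c in faces:
        for e in ((a, b), (b, c), (c, a)):
            count[tuple(sorted(e))] += 1        # update occurrence count
    return sorted(e for e, n in count.items() if n == 1)
```

Chaining the returned edges by shared vertices then yields the closed boundary loops of each hole.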
in the second step, boundary smoothing reduces noise and irregularity; a Laplacian smoothing algorithm is used so that the patches blend more naturally into the original model. Its basic formula is:

$v_i' = v_i + \lambda\left(\frac{1}{|N(i)|}\sum_{j\in N(i)} v_j - v_i\right)$, (6)

where $v_i$ is the position of the $i$-th vertex, $N(i)$ is the set of vertices adjacent to $v_i$, $|N(i)|$ is the number of neighbors, and $\lambda$ is a smoothing factor;
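Formula (6) can be sketched as an iterative smoother. This minimal stdlib illustration is an assumption about usage, not the patent's code; vertices with an empty neighbor list are treated as fixed anchors.

```python
def laplacian_smooth(vertices, neighbors, lam=0.5, iterations=10):
    """Laplacian smoothing: v_i' = v_i + lam * (mean(neighbors) - v_i).
    `vertices` is a list of (x, y, z); `neighbors[i]` lists adjacent indices;
    a vertex with no listed neighbors stays fixed."""
    vs = [tuple(v) for v in vertices]
    for _ in range(iterations):
        new = []
        for i, v in enumerate(vs):
            if not neighbors[i]:
                new.append(v)                       # fixed vertex
                continue
            m = [sum(vs[j][k] for j in neighbors[i]) / len(neighbors[i])
                 for k in range(3)]                 # neighbor centroid
            new.append(tuple(v[k] + lam * (m[k] - v[k]) for k in range(3)))
        vs = new
    return vs
```

Repeated application pulls a noisy boundary vertex toward the centroid of its neighbors, which is exactly the damping effect wanted before patching.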
in the third step, holes with regular boundaries are patched by triangle filling, while holes with complex boundaries use a minimal-surface strategy with radial basis functions (RBF). RBF-based hole filling is expressed as:

$s(x) = \sum_{i=1}^{n} \lambda_i\, \varphi(\lVert x - p_i \rVert)$, (7)

where the $p_i$ are points on the boundary; the goal is to find a surface $s$ satisfying the interpolation conditions at these points, where $\lambda_i$ are the coefficients to be solved for and $\lVert x - p_i \rVert$ is the Euclidean distance between $x$ and $p_i$; for each hole boundary point, $\lambda_i$ is determined by minimizing the distance between the surface and the surrounding mesh.
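Solving for the coefficients λ_i in formula (7) amounts to a dense linear system over the boundary samples. The sketch below interpolates given values with a Gaussian kernel; the kernel choice and function names are illustrative assumptions, since the patent does not fix a specific φ.

```python
import numpy as np

def fit_rbf(points, values, phi=lambda r: np.exp(-r ** 2)):
    """Fit s(x) = sum_i lam_i * phi(||x - p_i||) through the given
    (point, value) pairs by solving the interpolation system A @ lam = f."""
    P = np.asarray(points, float)
    r = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)  # pairwise dist
    lam = np.linalg.solve(phi(r), np.asarray(values, float))
    def s(x):
        d = np.linalg.norm(np.asarray(x, float) - P, axis=-1)
        return float(phi(d) @ lam)
    return s
```

Evaluating the fitted `s` over a grid spanning the hole gives the height field of the patch that blends into the surrounding mesh.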
Compared with existing measurement methods, the technical solution provided by the invention reduces contact between staff and goods. Combined with deep learning, the method identifies and classifies different objects and features in the point cloud more accurately, and the direct application of triple integration in three-dimensional space provides a more accurate volume measurement.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, and it is apparent that the embodiments described are only some embodiments of the present invention, not all embodiments of the present invention, and are not limiting of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to fall within the scope of the invention.
Fig. 1 is a schematic diagram of a volumetric measurement method based on surface reconstruction and triple integration according to an embodiment of the present invention, where the method includes:
1. multiple sets of cameras and lidars acquire point clouds and images:
The object to be measured is photographed from different angles with multiple groups of lidars and cameras to obtain point clouds and images at different angles. In the present invention, at least three groups of equipment should surround the object to be measured; each group covers a scanning range of at most 120 degrees and is placed at least 3 meters and at most 20 meters from the object; the object is scanned obliquely from above in a top-down view; and each lidar and camera does not move once fixed.
After installation is completed, all devices are started simultaneously by a multithreaded startup program to acquire one picture and one frame of point cloud information from each angle.
In a specific implementation, three groups of equipment can be placed at the vertices of an equilateral triangle with the object to be measured at its center; the lidar and camera scan downward at an angle of 45 degrees from a distance of 5 meters, at which good scanning precision can be obtained.
2. Calibrating and registering images and point clouds:
Calibration and registration of the image and the point cloud includes calibration of the camera and the lidar and registration between the plurality of lidars.
The calibration of the camera and lidar measures and calibrates the spatial relationship and time synchronization between the two devices. The principle is to use specific points in the overlapping fields of view and their correspondences to compute the optimal rotation matrix R and translation vector t by least squares; the rotation matrix converts points from the camera coordinate system into the radar coordinate system.
In a specific implementation, the camera and lidar are first mounted and fixed, ensuring that their relative positions do not change. A rectangular foam board of low reflectivity, one meter long and one meter wide, is used as the calibration board. The board is placed at different positions in the field of view, and a photo and a point cloud are captured at each position; after 10 to 15 groups of data have been captured in total, the corner points of each group are marked and recorded. The image coordinates of the corner points are denoted $u_i$ and the three-dimensional coordinates $X_i$; solving for the optimal rotation is achieved by optimizing the following problem:

$\min_{R,\,t} \sum_i \lVert u_i - \pi\!\left(K (R X_i + t)\right) \rVert^2$,

where $K$ is the intrinsic matrix and $\pi$ denotes perspective projection.
Registration between the lidars can be carried out separately with a similar method. After each group of equipment has been calibrated, the groups are installed according to step 1 and an object is scanned simultaneously, recording at least 3 calibration points. One group's coordinate system is taken as the original coordinate system and the other devices are treated as devices to be registered; the above optimization yields a coarse-registration transformation matrix, and the ICP algorithm completes fine registration, giving the transformation matrix from each group of equipment to the original coordinate system.
3. Fusion of image and point cloud:
The camera-to-lidar transformation matrix obtained in the second step can be used to color the point clouds, and the transformation matrices obtained by radar registration can place all point clouds in the same coordinate system, yielding the complete colored point cloud of the object to be measured.
Concretely, the image data and point cloud data are first read separately; the transformation matrices convert the image from the image coordinate system to the camera coordinate system and then from the camera coordinate system to the radar coordinate system; the point cloud mask within the FOV is then obtained, giving the points that can be projected onto the image, and the mask-filtered point cloud is projected onto the image to obtain a depth map, realizing the fusion of image and point cloud.
4. Training of neural networks and point cloud segmentation:
The point cloud and image are segmented with a deep learning method. The segmentation model can either be trained on the image and the point cloud separately, with the final segmented point cloud produced by a comprehensive judgment, or be trained directly on multi-dimensional colored point cloud input and segment the colored point cloud.
In this process, multi-modal input allows the deep learning model to be trained more accurately; compared with traditional algorithms and single-modality input, a multi-modal deep learning model performs better in regions with difficult textures.
After the first three steps are completed, a data set can be customized for a specific scene when the application scene is known in advance, and a model trained on that data set achieves a better point cloud segmentation effect in the specific scene.
5. Reconstruction and volume calculation of point cloud:
The point cloud obtained by segmentation cannot be used directly for mathematical calculation. For parts of the segmented point cloud that the laser could not scan or that were occluded by other objects, hole detection and filling must be performed; the filled, closed point cloud is then processed with a surface reconstruction algorithm to facilitate subsequent parameterization and calculation.
In a specific implementation, the process of reconstructing the point cloud comprises hole identification and filling, and surface reconstruction. During data acquisition of most irregular real-world objects, limited three-dimensional scanning equipment often produces missing data for various reasons (such as occlusion and reflectivity problems); the missing regions appear as holes in the point cloud. Unrepaired holes can cause surface reconstruction to fail or be misleading, and filling them with hole-repair techniques improves the integrity and accuracy of the data.
Before the hole boundaries are identified, the point cloud must be preprocessed: first, voxel downsampling reduces the point count and statistical outlier removal eliminates useless and erroneous points; second, the point cloud can be converted into a triangular mesh by Delaunay triangulation so that hole boundaries can be identified more intuitively and effectively. Once the triangular mesh is obtained, the hole boundaries can be identified by analyzing the mesh topology:
Finding boundary edges: in the mesh, boundary edges are edges that belong to only one triangle. They can be identified by traversing all triangles and counting the occurrences of each edge; a boundary edge occurs exactly once;
Connecting boundary edges: the boundary edges are connected in sequence to form closed ring structures, which delimit the boundaries of the holes.
After the hole boundaries are determined, a Laplacian smoothing algorithm is used to reduce noise and irregularities so that subsequent patches blend more naturally into the original model:

$v_i' = v_i + \lambda\left(\frac{1}{|N(i)|}\sum_{j\in N(i)} v_j - v_i\right)$,

where $v_i$ is the position of the $i$-th vertex, $N(i)$ is the set of vertices adjacent to $v_i$, $|N(i)|$ is the number of neighbors, and $\lambda$ is a smoothing factor;
After hole identification and smoothing are completed, different types of patches are adopted according to the size and shape of the holes: planar patches for smaller or relatively flat holes, and curved patches, such as a minimal-surface strategy with radial basis functions (RBF), for complex or large holes. RBF-based hole filling can be expressed as:

$s(x) = \sum_{i=1}^{n} \lambda_i\, \varphi(\lVert x - p_i \rVert)$,

where the $p_i$ are points on the boundary; the goal is to find a surface $s$ satisfying the interpolation conditions at these points, where $\lambda_i$ are the coefficients to be solved for and $\lVert x - p_i \rVert$ is the Euclidean distance between $x$ and $p_i$; for each hole boundary point, $\lambda_i$ is determined by minimizing the distance between the surface and the surrounding mesh.
After hole filling, Poisson reconstruction is applied to the closed point cloud model. Its core idea is that the ideal surface can be reconstructed by solving an elliptic partial differential equation that describes how the gradient field of a scalar field aligns with the input normal vectors.
An indicator function $\chi$ is defined that takes the value 1 inside the surface and 0 outside; the normal information of the point cloud is then used to infer the direction of the gradient $\nabla\chi$:

$V(p_i) = n_i$,

where $p_i$ denotes a point in the point cloud and $n_i$ is the estimated normal at $p_i$.
Establishing the Poisson equation: $p_i$ and $n_i$ are used to construct the vector field $V$, and this vector field is taken as an approximation of $\nabla\chi$:

$\nabla\chi \approx V$,

Next, a scalar field $\chi$ is sought whose gradient is closest to $V$, i.e. approximately satisfying the following equation:

$\Delta\chi = \nabla\cdot V$,

where $\Delta$ is the Laplace operator and $\nabla\cdot V$ is the divergence of the vector field $V$.
Solving the Poisson equation: the equation is solved by grid-based, finite element, or discrete difference methods, and the iso-surface of $\chi$ is extracted to reconstruct the surface.
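The discrete-difference route mentioned here can be illustrated on a one-dimensional analogue of $\Delta\chi = \nabla\cdot V$ solved by Jacobi iteration. This is a didactic sketch only (grid size, iteration count, and boundary conditions are assumptions), not the patent's 3-D solver.

```python
import math

def solve_poisson_1d(f, n=32, iters=5000):
    """Solve chi'' = f on [0, 1] with chi(0) = chi(1) = 0 by Jacobi
    iteration on a uniform grid (a 1-D stand-in for Delta chi = div V)."""
    h = 1.0 / n
    x = [i * h for i in range(n + 1)]
    chi = [0.0] * (n + 1)
    for _ in range(iters):
        # Jacobi update of the interior points; boundaries stay pinned at 0
        chi = [0.0] + [(chi[i - 1] + chi[i + 1] - h * h * f(x[i])) / 2.0
                       for i in range(1, n)] + [0.0]
    return x, chi
```

With $f(x) = -\pi^2 \sin(\pi x)$ the exact solution is $\sin(\pi x)$, so the discrete solution peaks near 1 at the midpoint, up to $O(h^2)$ discretization error.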
Once the surface reconstruction is completed, the mathematically described surface equations can be extracted from the closed surface model by parameterizing the surface.
For a plane, the parameterization is $S(u,v) = (u, v, 0)$;
for a cylinder: $S(u,v) = (r\cos u,\, r\sin u,\, v)$;
for a sphere: $S(u,v) = (r\sin u\cos v,\, r\sin u\sin v,\, r\cos u)$;
for more complex surfaces, a B-spline surface representation is used:

$S(u,v) = \sum_{i=0}^{m}\sum_{j=0}^{n} N_{i,p}(u)\, N_{j,q}(v)\, P_{i,j}$,

where $P_{i,j}$ are control points, $N_{i,p}(u)$ and $N_{j,q}(v)$ are B-spline basis functions, and $p$ and $q$ are the degrees of the basis functions.
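The basis functions $N_{i,p}$ can be evaluated with the Cox–de Boor recursion, and the surface sum assembled directly. A minimal sketch (clamped knot vectors, illustrative names; the half-open convention means $u = 1$ exactly is excluded):

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the B-spline basis N_{i,p}(u)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    def term(num, den):
        return num / den if den != 0 else 0.0   # 0/0 convention -> 0
    return (term(u - knots[i], knots[i + p] - knots[i])
            * bspline_basis(i, p - 1, u, knots)
            + term(knots[i + p + 1] - u, knots[i + p + 1] - knots[i + 1])
            * bspline_basis(i + 1, p - 1, u, knots))

def surface_point(u, v, ctrl, ku, kv, p, q):
    """Evaluate S(u,v) = sum_i sum_j N_{i,p}(u) N_{j,q}(v) P_{i,j}."""
    pt = [0.0, 0.0, 0.0]
    for i in range(len(ctrl)):
        Nu = bspline_basis(i, p, u, ku)
        if Nu == 0.0:
            continue                             # skip zero-weight rows
        for j in range(len(ctrl[0])):
            w = Nu * bspline_basis(j, q, v, kv)
            for k in range(3):
                pt[k] += w * ctrl[i][j][k]
    return tuple(pt)
```

With degree-1 bases and clamped knots `[0, 0, 1, 1]` this reduces to bilinear interpolation of the four corner control points, which makes it easy to check by hand.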
Finally, the volume is calculated by applying a triple integral over the region $\Omega$ enclosed by the parameterized three-dimensional surface: $V = \iiint_{\Omega} \mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z$.
In summary, the method provided by the embodiment of the invention can recover a smooth surface from noisy data, is applicable to incomplete or sparse point clouds, and offers high precision and robustness. It suits objects with complex topological structures, such as those containing holes and tunnels; the geometric process is fully automatic, places low technical demands on operators, and avoids the risk of damaging objects through contact, giving it wider applicability than the prior art.
It is noted that what is not described in detail in the embodiments of the present invention belongs to the prior art known to those skilled in the art.
In addition, it will be understood by those skilled in the art that all or part of the steps in implementing the methods of the above embodiments may be implemented by a program to instruct related hardware, and the corresponding program may be stored in a computer readable storage medium, where the storage medium may be a read only memory, a magnetic disk or an optical disk, etc.
While the invention has been described with respect to the preferred embodiments, the scope of the invention is not limited thereto, and any changes or substitutions that would be apparent to those skilled in the art are deemed to be within the scope of the invention. Therefore, the protection scope of the present invention should be subject to the protection scope of the claims. The information disclosed in the background section herein is only for enhancement of understanding of the general background of the invention and is not to be taken as an admission or any form of suggestion that this information forms the prior art already known to those of ordinary skill in the art.
It will be readily appreciated by those skilled in the art that what has not been described in detail in the present description is a preferred embodiment of the invention and is not intended to limit the invention, but is to cover all modifications, equivalents, improvements and modifications which are within the spirit and principles of the invention.

Claims (5)

1.一种基于表面重建和三重积分的体积测量方法,其特征在于,所述方法包括如下步骤:1. A volume measurement method based on surface reconstruction and triple integral, characterized in that the method comprises the following steps: 步骤1、通过多组激光雷达与相机采集不同角度下目标物体的点云信息和图像信息;Step 1: Collect point cloud information and image information of the target object at different angles through multiple sets of laser radars and cameras; 步骤2、利用步骤1得到的数据对相机和激光雷达进行标定和配准;Step 2: Use the data obtained in step 1 to calibrate and align the camera and lidar; 步骤3、标定和配准得到的变换矩阵将图像和点云融合;Step 3: The transformation matrix obtained by calibration and registration fuses the image and point cloud; 步骤4、利用融合得到的彩色点云作为数据集训练点云分割神经网络;Step 4: Use the fused color point cloud as a data set to train a point cloud segmentation neural network; 步骤5、对分割得到的点云数据进行表面重建,并获取曲面方程,利用三重积分的方法计算方程体积,得到物体的精确体积;Step 5: Reconstruct the surface of the segmented point cloud data, obtain the surface equation, calculate the volume of the equation using the triple integral method, and obtain the precise volume of the object; 在步骤5中,得到完整点云后,首先确保点云的封闭性,对不完全封闭的模型,采用几何的方法对孔洞进行修补;其次,确保点云模型的封闭性后,使用Poisson重建法进行表面重建,定义一个指示函数,它在曲面内部取值为1,在外部取值为0,使用点云和相应的法线信息来估计梯度的方向;对于每个点及其法线,令处的方向与一致,构造向量场作为的近似,其中;利用这个向量场,寻求一个标量场,使得的梯度尽可能接近,即求解:In step 5, after obtaining the complete point cloud, first ensure the closure of the point cloud. For the incompletely closed model, use geometric methods to repair the holes. 
Secondly, after ensuring the closure of the point cloud model, use the Poisson reconstruction method to reconstruct the surface and define an indicator function , which takes the value 1 inside the surface and 0 outside, using the point cloud and the corresponding normal information to estimate the gradient direction; for each point and its normal ,make exist The direction and Consistent, construct vector field As is an approximation of ; Using this vector field , seeking a scalar field , so that The gradient is as close as possible to , that is, solve: ,(3) , (3) 其中,是拉普拉斯算子,是向量场的散度,解得是一个随处定义的标量场,从这个标量场中提取等值面,即的曲面,得到重建后的平滑和准确的三维模型;in, is the Laplace operator, is a vector field The divergence of is a scalar field defined everywhere, from which the isosurfaces are extracted, namely The surface is reconstructed to obtain a smooth and accurate three-dimensional model; 完成表面重建后,通过参数化曲面的方法从封闭表面模型中提取数学描述的曲面方程,将参数化曲面表示为函数,其中是参数,是一个向量函数,将参数平面上的点映射到三维空间的曲面上,,其中的函数;After the surface reconstruction is completed, the surface equations described mathematically are extracted from the closed surface model by the parametric surface method, and the parametric surface is expressed as a function ,in and is a parameter, , is a vector function that maps points on the parameter plane to the surface of three-dimensional space. 
,in , , yes and Function of 对平面参数化为,对圆柱面参数化为,其中,对球面参数化为,其中;对于更复杂的曲面,用B样条曲面表示:,(4)The plane is parameterized as , the cylindrical surface is parameterized as ,in , the sphere is parameterized as ,in ; For more complex surfaces, use B-spline surfaces: , (4) 其中是控制点,是B样条基函数,p和q是基函数的度数;in is the control point, and is the B-spline basis function, p and q are the degrees of the basis function; 最后,对参数化的三维曲面方程使用三重积分计算体积:Finally, the volume is calculated using a triple integral on the parameterized 3D surface equation: ,(5) , (5) 在步骤5中,采用几何的方法对孔洞进行修补具体包括以下步骤:In step 5, the hole is repaired by a geometric method, which specifically includes the following steps: 第一步,对点云中孔洞的边界进行识别,设模型的网格由顶点集V和面集F组成,其中F中的每个面由顶点索引定义;边界的识别由以下算法实现:The first step is to identify the boundaries of holes in the point cloud. Assume that the mesh of the model consists of a vertex set V and a face set F, where each face in F is defined by a vertex index; the boundary identification is implemented by the following algorithm: ①创建一个边字典E,其中键是顶点对,值是该边出现的次数;① Create an edge dictionary E, where the key is the vertex pair and the value is the number of times the edge appears; ②遍历每个面,更新E中每条边的出现次数;②Traverse each face and update the number of occurrences of each edge in E; ③所有在E中出现次数为1的边都是边界边;③All edges that appear 1 times in E are boundary edges; 第二步,边界平滑减少噪声和不规则性,使用拉普拉斯平滑算法使得不定更自然地融入原始模型,其基本公式为:The second step is boundary smoothing to reduce noise and irregularities. The Laplace smoothing algorithm is used to make the irregularities more naturally integrated into the original model. The basic formula is: ,(6) , (6) 其中,是第个顶点的位置,的相邻顶点集,是邻居的数量,是平滑因子;in, It is The position of the vertices, yes The set of adjacent vertices of is the number of neighbors, is the smoothing factor; 第三步,对于规则边界的孔洞使用三角形填充生成补丁,对于复杂的边界的孔洞,使用最小曲面策略,径向基函数RBF,基于RBF的孔洞填充表示为:In the third step, for holes with regular boundaries, triangles are used to fill patches. 
For holes with complex boundaries, a minimum-surface strategy with radial basis functions (RBF) is used; RBF-based hole filling is expressed as:

f(x) = Σ_{i=1}^{n} w_i · φ(‖x − x_i‖) , (7)

where the x_i are points on the hole boundary and the goal is to find a surface f satisfying the boundary constraints at the x_i, the w_i are the coefficients to be determined, φ is the radial basis function, and ‖x − x_i‖ is the Euclidean distance between x and x_i. For each hole boundary point, the w_i are determined by minimizing the distance between the surface and the surrounding mesh.

2. The volume measurement method based on surface reconstruction and triple integral according to claim 1, characterized in that, in step 1, at least 3 groups of lidars and cameras are used for acquisition; the lidars use non-repetitive scanning to obtain point clouds and images from different angles, and the resolution of the point clouds and images is adapted to different scenes and objects by setting the lidar scanning time and the camera lens resolution.

3. The volume measurement method based on surface reconstruction and triple integral according to claim 1, characterized in that, in step 2, the camera intrinsic parameters are first calibrated: a grid calibration board is photographed from different distances and angles to obtain multiple groups of pictures, from which the camera intrinsic matrix is computed:
K = [[f_x, 0, c_x], [0, f_y, c_y], [0, 0, 1]] , (1)

together with the radial distortion parameters k₁, k₂ and the tangential distortion parameters p₁, p₂.

After the intrinsic matrix (1) is obtained, the camera and the lidar are fixed, point cloud data and image data of multiple groups of objects are captured, and the key points of the object data are used to compute the camera-to-lidar extrinsic matrix:

T = [[R, t], [0ᵀ, 1]] , (2)

In the extrinsic matrix (2), R is a 3×3 rotation matrix describing the camera-to-lidar rotation, t is a 3×1 translation vector describing the camera-to-lidar translation, 0ᵀ is a 1×3 zero vector, and 1 is a scalar that keeps the matrix in homogeneous-coordinate form.

After the intrinsic matrix (1) and the extrinsic matrix (2) are obtained, the lidars are registered with each other: the coordinate frame of one lidar is taken as the reference, the other lidars are treated as lidars to be registered, the calibration object is scanned simultaneously, and at least 3 calibration points are collected to complete coarse registration; the ICP algorithm is then used for fine registration. 4.
The volume measurement method based on surface reconstruction and triple integral according to claim 1, characterized in that, in step 3, the transformation matrices obtained in step 2 are used to relate the image coordinate system to the camera coordinate system and the camera coordinate system to the lidar coordinate system; the point cloud mask within the camera field of view (FOV) is then obtained, giving the subset of points that can be projected onto the image, and the mask-filtered point cloud is projected onto the image to obtain a depth map, thereby realizing the fusion of the image and the point cloud.

5. The volume measurement method based on surface reconstruction and triple integral according to claim 1, characterized in that, in step 4, a deep learning method is used to segment the point cloud and the image: either the image and the point cloud are fed to separately trained models and the point cloud is segmented by combining their judgments, or a model trained on colored point clouds segments the point cloud directly, yielding the complete point cloud of the object to be measured.
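The projection-and-mask step of claim 3 of the method can be sketched as follows. K, the image size, and the synthetic camera-frame point cloud are illustrative stand-ins for the calibrated values; a real pipeline first maps lidar points into the camera frame with the extrinsic matrix (2) before projecting:

```python
import numpy as np

# Sketch: build an FOV mask for a camera-frame point cloud, then splat
# the surviving points into a depth map (nearest point wins per pixel).
H, W = 480, 640
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

rng = np.random.default_rng(1)
pts = rng.uniform([-2.0, -2.0, 1.0], [2.0, 2.0, 10.0], size=(5000, 3))

uvw = pts @ K.T                      # pinhole projection to homogeneous pixels
u, v, z = uvw[:, 0] / uvw[:, 2], uvw[:, 1] / uvw[:, 2], pts[:, 2]

# FOV mask: in front of the camera and inside the image bounds
mask = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)

depth = np.full((H, W), np.inf)
ui, vi = u[mask].astype(int), v[mask].astype(int)
np.minimum.at(depth, (vi, ui), z[mask])   # keep the nearest depth per pixel

print(bool(mask.any()), bool(np.isfinite(depth).any()))  # → True True
```

`np.minimum.at` performs an unbuffered scatter, so when several points project into the same pixel the smallest depth survives, which is the occlusion-correct choice for a depth map.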
CN202411266793.1A 2024-09-11 2024-09-11 A volume measurement method based on surface reconstruction and triple integral Active CN118781178B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411266793.1A CN118781178B (en) 2024-09-11 2024-09-11 A volume measurement method based on surface reconstruction and triple integral

Publications (2)

Publication Number Publication Date
CN118781178A CN118781178A (en) 2024-10-15
CN118781178B true CN118781178B (en) 2025-01-21

Family

ID=92983126

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411266793.1A Active CN118781178B (en) 2024-09-11 2024-09-11 A volume measurement method based on surface reconstruction and triple integral

Country Status (1)

Country Link
CN (1) CN118781178B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119107354A (en) * 2024-11-08 2024-12-10 青岛不愁网信息科技有限公司 A method for calculating volume by reconstructing 3D point cloud
CN119354076B (en) * 2024-12-24 2025-04-04 中大智能科技股份有限公司 Method and system for measuring road core sample size based on line laser scanning

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106767562A (en) * 2016-12-30 2017-05-31 苏州西博三维科技有限公司 A kind of measuring method and human body measurement method based on machine vision and speckle
CN113093216A (en) * 2021-06-07 2021-07-09 山东捷瑞数字科技股份有限公司 Irregular object measurement method based on laser radar and camera fusion

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112669393B (en) * 2020-12-31 2021-10-22 中国矿业大学 Laser radar and camera combined calibration method
CN115439603A (en) * 2022-08-11 2022-12-06 大连理工大学 Vehicle-mounted material pile volume calculation method based on multi-mode information fusion and semantic segmentation
CN116051737A (en) * 2022-12-30 2023-05-02 东风汽车有限公司东风日产乘用车公司 Image generation method, device, equipment and storage medium



Similar Documents

Publication Publication Date Title
CN110009727B (en) Automatic reconstruction method and system for indoor three-dimensional model with structural semantics
CN118314300B (en) Engineering measurement accurate positioning and three-dimensional modeling method and system
CN118781178B (en) A volume measurement method based on surface reconstruction and triple integral
CN106709947B (en) Three-dimensional human body rapid modeling system based on RGBD camera
CN113066162B (en) A Rapid Modeling Method of Urban Environment for Electromagnetic Computation
CN113538373B (en) A method for automatic detection of construction progress based on three-dimensional point cloud
CN108090960A (en) A kind of Object reconstruction method based on geometrical constraint
CN115345822A (en) Automatic three-dimensional detection method for surface structure light of aviation complex part
CN112197773B (en) Visual and laser positioning mapping method based on plane information
CN107767456A (en) A kind of object dimensional method for reconstructing based on RGB D cameras
CN108648194A (en) Based on the segmentation of CAD model Three-dimensional target recognition and pose measuring method and device
Xu et al. A 3D reconstruction method for buildings based on monocular vision
CN107945217B (en) A method and system for fast screening of image feature point pairs suitable for automatic assembly
Hu et al. Efficient and automatic plane detection approach for 3-D rock mass point clouds
CN116543117B (en) A high-precision three-dimensional modeling method for large scenes from drone images
Lhuillier et al. Manifold surface reconstruction of an environment from sparse structure-from-motion data
CN116309880A (en) Object pose determining method, device, equipment and medium based on three-dimensional reconstruction
CN116518864A (en) A full-field deformation detection method for engineering structures based on 3D point cloud comparative analysis
CN114332348A (en) Three-dimensional reconstruction method for track integrating laser radar and image data
CN118691776B (en) A 3D real scene modeling and dynamic updating method based on multi-source data fusion
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence
CN117541537B (en) Space-time difference detection method and system based on all-scenic-spot cloud fusion technology
Palma et al. Detection of geometric temporal changes in point clouds
CN119180908A (en) Gaussian splatter-based laser enhanced visual three-dimensional reconstruction method and system
Feng et al. Semi-automatic 3d reconstruction of piecewise planar building models from single image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant