Disclosure of Invention
The invention aims to provide a volume measurement method based on surface reconstruction and triple integration, which solves the problems of low precision and low efficiency of the traditional method.
The aim of the invention is realized by the following technical scheme:
A volumetric measurement method based on surface reconstruction and triple integration, the method comprising the steps of:
step 1, acquiring point cloud information and image information of a target object under different angles through a plurality of groups of laser radars and cameras;
step 2, calibrating and registering the camera and the laser radar by using the data obtained in the step 1;
step 3, fusing the image and the point cloud by a transformation matrix obtained through calibration and registration;
step 4, training a point cloud segmentation neural network by using the fused color point cloud as a data set;
step 5, carrying out surface reconstruction on the point cloud data obtained by segmentation to obtain a curved surface equation, and calculating the volume of the enclosed region by a triple integration method to obtain the accurate volume of the object.
In step 1, at least 3 groups of laser radars and cameras are used for acquisition; the laser radars adopt non-repetitive scanning to obtain point clouds and images from different angles, and the resolutions of the point clouds and images are adapted to different scenes and objects by setting the scanning time of the laser radars and the resolution of the camera lenses.
In step 2, the internal parameters of the camera are calibrated first: a plurality of groups of pictures are shot from different distances and angles using a grid calibration plate, and the internal parameter matrix of the camera is obtained through calculation:
K = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix} , (1)
together with the radial distortion parameters k_1, k_2, k_3 and the tangential distortion parameters p_1, p_2, wherein f_x and f_y are the focal lengths in pixels and (c_x, c_y) is the principal point;
After the internal reference matrix (1) is obtained, the camera and the radar are fixed, point cloud data and image data of a plurality of groups of objects are shot, and the external reference matrix of the camera and the radar is calculated by using key points of the object data:
T = \begin{pmatrix} R & t \\ \mathbf{0}^T & 1 \end{pmatrix} , (2)
in the external reference matrix (2), R is a 3x3 rotation matrix describing the rotation from the camera to the radar, t is a 3x1 translation vector describing the translation from the camera to the radar, \mathbf{0}^T is a 1x3 zero vector, and 1 is a scalar that maintains the homogeneous coordinate form of the matrix;
after the internal reference matrix (1) and the external reference matrix (2) are obtained, registration is carried out among the radars: the coordinate system of one laser radar is taken as the reference coordinate system and the other radars are treated as radars to be registered; a calibration object is scanned simultaneously, at least 3 calibration points are collected to complete coarse registration, and fine registration is then carried out by the ICP algorithm.
In step 3, the image is transformed from the image coordinate system to the camera coordinate system through the transformation matrix obtained in step 2, and then from the camera coordinate system to the radar coordinate system; the point cloud mask within the field of view (FOV) is then computed, i.e. the set of points that can be projected onto the image, and the points selected by the mask are projected onto the image to obtain a depth map, realizing fusion of the image and the point cloud.
In step 4, the point cloud and the image are segmented by a deep learning method: either the image and the point cloud are fed to separately trained models and the segmentation is decided by a comprehensive judgment of their outputs, or a single model is trained directly on the colored point cloud and segments it; in either case, the complete point cloud of the object to be measured is obtained.
In step 5, after the complete point cloud is obtained, the closedness (water-tightness) of the point cloud is first ensured, and the holes of an incompletely closed model are repaired by a geometric method; once the point cloud model is closed, surface reconstruction is carried out by the Poisson reconstruction method. An indicator function \chi is defined which takes the value 1 inside the surface and 0 outside it, and the point cloud with its corresponding normal information is used to estimate the gradient \nabla\chi: for each point p and its normal n(p), a vector field \vec{V} is constructed with \vec{V}(p) = n(p), so that \vec{V} is consistent with \nabla\chi in direction and position. Using this vector field \vec{V}, a scalar field \chi is sought whose gradient is as close as possible to \vec{V}, namely solving:
\Delta\chi = \nabla\cdot\vec{V} , (3)
wherein \Delta is the Laplace operator and \nabla\cdot\vec{V} is the divergence of the vector field \vec{V}; from the solved scalar field \chi, the iso-surface is extracted, i.e. \{x : \chi(x) = \sigma\} for a chosen iso-value \sigma, and a smooth and accurate three-dimensional model is obtained after reconstruction;
After the surface reconstruction is completed, a mathematically described surface equation is extracted from the closed surface model by the parameterized-surface method. A parameterized surface is expressed as a function r(u, v), wherein u and v are parameters and r(u, v) = (x(u, v), y(u, v), z(u, v)) is a vector function mapping points of the parameter plane onto the curved surface in three-dimensional space, wherein x, y and z are functions of u and v;
A plane is parameterized as r(u, v) = (u, v, 0); a cylindrical surface is parameterized as r(u, v) = (r\cos u, r\sin u, v), wherein r is the cylinder radius; a sphere is parameterized as r(\theta, \varphi) = (R\sin\varphi\cos\theta, R\sin\varphi\sin\theta, R\cos\varphi), wherein R is the sphere radius; for more complex surfaces, a B-spline surface representation is used:
S(u, v) = \sum_{i=0}^{m}\sum_{j=0}^{n} N_{i,p}(u)\, N_{j,q}(v)\, P_{i,j} , (4)
wherein P_{i,j} are the control points, N_{i,p}(u) and N_{j,q}(v) are the B-spline basis functions, and p and q are the degrees of the basis functions;
finally, the volume is calculated by applying triple integration over the region \Omega enclosed by the parameterized three-dimensional surface:
V = \iiint_{\Omega} dx\, dy\, dz . (5)
In step 5, repairing the hole by using a geometric method specifically includes the following steps:
The method comprises the steps of firstly identifying the boundary of a hole in the point cloud: the mesh used for modeling consists of a vertex set V and a face set F, each face in F is defined by vertex indices, and the identification of the boundary is realized by the following algorithm:
① Creating an edge dictionary E, wherein keys are vertex pairs and values are the number of times the edge appears;
② Traversing each surface, and updating the occurrence times of each edge in E;
③ All edges with the occurrence number of 1 in E are boundary edges;
secondly, the boundary is smoothed to reduce noise and irregularity; the Laplacian smoothing algorithm is used so that the subsequent patch blends more naturally into the original model, its basic formula being:
v_i' = v_i + \lambda\left(\frac{1}{|N(i)|}\sum_{j\in N(i)} v_j - v_i\right) , (6)
wherein v_i is the position of the i-th vertex, N(i) is the set of vertices adjacent to v_i, |N(i)| is the number of neighbors, and \lambda is a smoothing factor;
thirdly, for holes with regular boundaries, patches are generated by triangle filling; for holes with complex boundaries, a minimal-surface strategy is used together with radial basis functions (RBF), and RBF-based hole filling is expressed as:
f(x) = \sum_{i=1}^{n} \alpha_i\, \varphi(\lVert x - p_i\rVert) , (7)
wherein the p_i are the points on the hole boundary; a surface f satisfying the boundary constraints is sought, wherein the \alpha_i are the coefficients to be solved for and \varphi(\lVert x - p_i\rVert) is a radial basis function of the Euclidean distance between x and p_i; the coefficients are determined by minimizing the deviation between the fitted surface and the mesh surrounding each hole boundary point.
Compared with existing measuring methods, the technical scheme provided by the invention reduces contact between staff and goods; by combining with deep learning, the method can more accurately identify and classify different objects and features in the point cloud, and by directly applying triple integration in three-dimensional space it provides more accurate volume measurement.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, and it is apparent that the embodiments described are only some embodiments of the present invention, not all embodiments of the present invention, and are not limiting of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to fall within the scope of the invention.
Fig. 1 is a schematic diagram of a volumetric measurement method based on surface reconstruction and triple integration according to an embodiment of the present invention, where the method includes:
1. multiple sets of cameras and lidars acquire point clouds and images:
The method comprises the steps of shooting the object to be measured from different angles with a plurality of groups of laser radars and cameras so as to obtain point clouds and images from different angles. In the present invention, at least three sets of equipment should surround the object to be measured; each set covers a scanning range of at most 120 degrees, is placed at least 3 meters and at most 20 meters from the object, and scans the object from obliquely above in a top-down view, and each set of laser radar and camera does not move after being fixed.
After the installation is completed, a multi-threaded start-up program starts all the devices simultaneously to acquire one picture and one frame of point cloud information from each angle.
In a specific implementation, three groups of equipment can be placed at the vertices of an equilateral triangle with the object to be measured in the middle; the laser radar and camera scan downward at an angle of 45 degrees from a distance of 5 meters, at which good scanning precision can be obtained.
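As an illustration of the layout described above, the three mounting points can be computed from the look-down angle and the sensor-to-object distance. The helper below is a sketch only; the function name and the assumption that the 5 m figure is the slant range (not the horizontal distance) are ours, not the patent's.

```python
import math

def sensor_positions(distance=5.0, elevation_deg=45.0, n=3):
    """Return (x, y, z) mounting points evenly spaced on a circle around an
    object at the origin, each at the given slant distance and look-down angle."""
    h = distance * math.sin(math.radians(elevation_deg))   # mounting height
    r = distance * math.cos(math.radians(elevation_deg))   # horizontal radius
    return [(r * math.cos(2 * math.pi * k / n),
             r * math.sin(2 * math.pi * k / n), h) for k in range(n)]

pos = sensor_positions()
# Each mounting point is exactly 5 m from the object at the origin.
assert all(abs(math.dist(p, (0, 0, 0)) - 5.0) < 1e-9 for p in pos)
```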
2. Calibrating and registering images and point clouds:
Calibration and registration of the image and the point cloud includes calibration of the camera and the lidar and registration between the plurality of lidars.
The calibration of the camera and the laser radar measures the spatial relationship and time synchronization between the two devices. The principle is to use specific points in the overlapping fields of view and their correspondences to compute the optimal rotation matrix R and translation vector t by the least squares method; the resulting transform converts points from the camera coordinate system into the radar coordinate system.
In the specific implementation, the camera and the laser radar are first fixedly installed, ensuring that their relative positions no longer change. A rectangular foam board of low reflectivity, one meter long and one meter wide, is used as a calibration plate; the plate is placed at different positions in the field of view, and a photo and a point cloud are captured at each position. After 10 to 15 groups of data are captured in total, the corner points of each group of data are marked and recorded, the image coordinates of the corner points being expressed as q_i and the three-dimensional coordinates as P_i. Solving for the optimal rotation matrix is achieved by optimizing the following problem:
\min_{R,t} \sum_i \lVert q_i - \pi\left(K\,(R\,P_i + t)\right) \rVert^2 ,
where \pi(\cdot) denotes perspective projection with the internal reference matrix K.
The registration between the laser radars can be carried out separately by a similar method: after the calibration of each group of equipment is completed, each group is installed according to step 1 and an object is scanned simultaneously, with at least 3 calibration points recorded. One group's coordinate system is taken as the original coordinate system and the other equipment is treated as equipment to be registered; the optimization equation yields a coarse-registration transformation matrix, and the ICP algorithm completes the fine registration, giving the transformation matrix from each group of equipment to the original coordinate system.
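The coarse-registration solve just described, recovering a rigid transform from at least 3 corresponding calibration points by least squares, can be sketched with the SVD-based (Kabsch) method. This is a generic sketch under our own naming, not the patent's exact procedure:

```python
import numpy as np

def rigid_transform(P: np.ndarray, Q: np.ndarray):
    """Solve min_{R,t} sum_i ||P_i - (R Q_i + t)||^2 for Nx3 point sets:
    P in the reference lidar frame, Q in the lidar-to-be-registered frame."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)          # centroids
    H = (Q - cQ).T @ (P - cP)                        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct a possible reflection so R is a proper rotation (det = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cP - R @ cQ
    return R, t

# Synthetic check: rotate and translate 4 points, then recover the transform.
rng = np.random.default_rng(0)
Q = rng.random((4, 3))
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
P = Q @ R_true.T + t_true
R, t = rigid_transform(P, Q)
assert np.allclose(R, R_true, atol=1e-8) and np.allclose(t, t_true, atol=1e-8)
```

The resulting (R, t) is the coarse transform; ICP would then refine it with dense point correspondences.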
3. Fusion of image and point cloud:
The camera-to-laser-radar transformation matrix obtained in the second step can be used to color the point clouds, and the transformation matrices obtained from radar registration can place all the point clouds in the same coordinate system, thereby obtaining the complete colored point cloud of the object to be measured.
The method comprises the steps of first reading the image data and the point cloud data respectively, transforming the image from the image coordinate system to the camera coordinate system through the transformation matrix and then from the camera coordinate system to the radar coordinate system, computing the point cloud mask within the field of view (FOV), i.e. the set of points that can be projected onto the image, and projecting the points selected by the mask onto the image to obtain a depth map, thereby realizing fusion of the image and the point cloud.
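A minimal sketch of the masking-and-projection step, with assumed names and conventions (points in the lidar frame, lidar-to-camera extrinsics (R, t), intrinsic matrix K); pixel collisions are simply overwritten here, whereas a real pipeline would keep the nearest depth:

```python
import numpy as np

def project_to_depth_map(points, K, R, t, width, height):
    cam = points @ R.T + t                    # lidar frame -> camera frame
    cam = cam[cam[:, 2] > 0]                  # keep points in front of the camera
    uvw = cam @ K.T                           # pinhole projection
    u = uvw[:, 0] / uvw[:, 2]
    v = uvw[:, 1] / uvw[:, 2]
    mask = (u >= 0) & (u < width) & (v >= 0) & (v < height)  # FOV mask
    depth = np.zeros((height, width))
    depth[v[mask].astype(int), u[mask].astype(int)] = cam[mask, 2]
    return depth, mask

# Toy check with identity extrinsics and a small 64x48 image.
K = np.array([[100.0, 0.0, 32.0], [0.0, 100.0, 24.0], [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 2.0],    # projects to the principal point (32, 24)
                [10.0, 0.0, 2.0]])  # projects far outside the image
depth, mask = project_to_depth_map(pts, K, np.eye(3), np.zeros(3), 64, 48)
assert depth[24, 32] == 2.0 and mask.sum() == 1
```

The same index arithmetic, run in reverse, assigns each masked point the color of the pixel it lands on.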
4. Training of neural networks and point cloud segmentation:
The point cloud and the image are segmented by a deep learning method. The segmentation model may either be trained on the image and the point cloud separately, with the two outputs fused into a comprehensive judgment of the segmented point cloud, or be trained directly on the multi-dimensional colored point cloud, segmenting the colored point cloud.
In this process, multi-modal input enables the deep learning model to be trained more accurately; compared with traditional algorithms and single-modality input, a multi-modal deep learning model performs better in regions with difficult textures.
After the first three steps are completed, the data set can be customized for a specific scene on the premise of knowing the application scene, and a model trained by the specific data set can be used to obtain a better point cloud segmentation effect in the specific scene.
5. Reconstruction and volume calculation of point cloud:
The point cloud obtained by segmentation cannot be used directly for mathematical calculation. For the parts of the segmented point cloud that the laser cannot scan or that are occluded by other objects, hole detection and filling need to be carried out on the point cloud; the filled, closed point cloud is then processed with a surface reconstruction algorithm to facilitate subsequent parameterization and calculation.
In a specific implementation, the process of reconstructing the point cloud comprises hole identification and filling, followed by surface reconstruction. During data acquisition, when facing most irregular real-world objects, limited three-dimensional scanning equipment often produces missing data for various reasons (such as object occlusion or reflectivity problems); the missing areas appear as holes in the point cloud, and unrepaired holes can cause surface reconstruction to fail or be misleading. Filling the holes with hole-repair techniques improves the integrity and accuracy of the data.
When identifying hole boundaries, the point cloud needs to be preprocessed: first, voxel downsampling and statistical outlier removal are used to reduce the point cloud and remove useless and erroneous points; second, the point cloud can be converted into a triangular mesh by Delaunay triangulation so that hole boundaries can be identified more intuitively and effectively. Once the triangular mesh is obtained, the hole boundaries can be identified by analyzing the mesh topology:
searching boundary edges: in the mesh, boundary edges are edges that belong to only one triangle. They can be identified by traversing all triangles and counting the occurrences of each edge; a boundary edge occurs exactly once;
And connecting boundary edges, namely sequentially connecting the boundary edges to form a closed annular structure, so that the boundary of the hole can be defined.
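The two steps above can be sketched directly: an edge dictionary maps each undirected edge to the number of faces that use it, and edges counted once form the hole boundary (names here are illustrative):

```python
from collections import defaultdict

def boundary_edges(faces):
    """Return the undirected edges that belong to exactly one triangle."""
    count = defaultdict(int)                      # the edge dictionary E
    for a, b, c in faces:
        for e in ((a, b), (b, c), (c, a)):
            count[tuple(sorted(e))] += 1          # key: sorted vertex pair
    return [e for e, n in count.items() if n == 1]

# Two triangles forming a quad: the shared diagonal (0, 2) is interior,
# while the four outer edges each appear once and form the boundary loop.
faces = [(0, 1, 2), (0, 2, 3)]
edges = boundary_edges(faces)
assert sorted(edges) == [(0, 1), (0, 3), (1, 2), (2, 3)]
```

Chaining these edges end-to-end then yields the closed boundary ring of each hole.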
After the hole boundaries are determined, the Laplacian smoothing algorithm is used to reduce noise and irregularities so that subsequent patches blend more naturally into the original model:
v_i' = v_i + \lambda\left(\frac{1}{|N(i)|}\sum_{j\in N(i)} v_j - v_i\right) ,
wherein v_i is the position of the i-th vertex, N(i) is the set of vertices adjacent to v_i, |N(i)| is the number of neighbors, and \lambda is a smoothing factor.
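The smoothing update can be sketched as follows; the adjacency is passed as a plain dictionary for clarity, and all names are illustrative:

```python
import numpy as np

def laplacian_smooth(vertices, neighbors, lam=0.5, iterations=1):
    """Apply v_i' = v_i + lam * (mean of neighbors of v_i - v_i).
    `neighbors` maps a vertex index to the list of its adjacent vertices;
    vertices without an entry are left fixed."""
    V = np.asarray(vertices, dtype=float).copy()
    for _ in range(iterations):
        new = V.copy()
        for i, nbrs in neighbors.items():
            new[i] = V[i] + lam * (V[list(nbrs)].mean(axis=0) - V[i])
        V = new
    return V

# A noisy middle vertex between two fixed neighbours moves toward their mean.
verts = [[0.0, 0.0, 0.0], [1.0, 1.0, 0.0], [2.0, 0.0, 0.0]]
out = laplacian_smooth(verts, {1: [0, 2]}, lam=0.5)
assert np.allclose(out[1], [1.0, 0.5, 0.0])
```

With lam = 1 the vertex jumps straight to the neighbor average; smaller values trade speed for stability over several iterations.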
After hole identification and smoothing are completed, different types of patches are adopted according to the size and shape of the holes: planar patches for smaller or relatively flat holes, and curved patches, such as the minimal-surface strategy and radial basis functions (RBF), for complex or large holes. RBF-based hole filling can be expressed as:
f(x) = \sum_{i=1}^{n} \alpha_i\, \varphi(\lVert x - p_i\rVert) ,
wherein the p_i are the points on the hole boundary, the \alpha_i are the coefficients to be solved for, and \varphi(\lVert x - p_i\rVert) is a radial basis function of the Euclidean distance between x and p_i; the coefficients are determined by minimizing the deviation between the fitted surface and the mesh surrounding each hole boundary point.
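A simplified sketch of the RBF fit, reduced to a height field z = f(x, y) over the hole: the coefficients come from solving the linear interpolation system at the boundary points. The Gaussian kernel and all names are our assumptions; the patent does not fix a particular basis function:

```python
import numpy as np

def rbf_fit(points2d, values, eps=1.0):
    """Solve for alpha in f(x) = sum_i alpha_i * phi(||x - p_i||),
    phi(r) = exp(-(eps*r)^2), such that f(p_i) = values[i]."""
    P = np.asarray(points2d, dtype=float)
    r = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    A = np.exp(-(eps * r) ** 2)               # interpolation matrix phi(||p_i - p_j||)
    alpha = np.linalg.solve(A, np.asarray(values, dtype=float))
    def f(x):
        d = np.linalg.norm(P - np.asarray(x, dtype=float), axis=-1)
        return np.exp(-(eps * d) ** 2) @ alpha
    return f

# Four hole-boundary points at height 1; the fitted surface reproduces them,
# and interior points can then be sampled from f to generate the patch.
boundary = [(0, 0), (1, 0), (0, 1), (1, 1)]
f = rbf_fit(boundary, [1.0, 1.0, 1.0, 1.0])
assert abs(f((0, 0)) - 1.0) < 1e-9
```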
After hole filling, Poisson reconstruction is applied to the closed point cloud model. Its core idea is that an ideal surface can be reconstructed by solving an elliptic partial differential equation describing how the gradient field of a scalar field aligns with the input normal vectors.
An indicator function \chi is defined which takes the value 1 inside the surface and 0 outside it; the direction of the gradient \nabla\chi is then deduced from the normal information of the point cloud:
\nabla\chi(p) \approx n(p) ,
wherein p represents a point in the point cloud and n(p) is the estimated normal at the point p.
Establishing the Poisson equation: the points p and their normals n(p) are used to construct a vector field \vec{V}, which is taken as an approximation of the gradient of \chi:
\vec{V}(p) = n(p) , \qquad \vec{V} \approx \nabla\chi ,
next, a scalar field \chi is sought whose gradient \nabla\chi is closest to \vec{V}, i.e. satisfying the following equation:
\Delta\chi = \nabla\cdot\vec{V} ,
where \Delta is the Laplace operator and \nabla\cdot\vec{V} is the divergence of the vector field \vec{V}.
Solving the Poisson equation: the equation is solved by a matrix method, the finite element method or a discrete difference method, and the iso-surface \{x : \chi(x) = \sigma\} is extracted to reconstruct the surface.
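As a toy illustration of the discrete-difference route, the sketch below solves a one-dimensional Poisson problem chi'' = g with Dirichlet boundary values by a direct matrix solve. It is a deliberately reduced analogue of \Delta\chi = \nabla\cdot\vec{V}, not the 3-D reconstruction solver itself:

```python
import numpy as np

# Solve chi'' = 2 on [0, 1] with chi(0) = 0, chi(1) = 1.
# The exact solution is chi(x) = x^2, and the 3-point stencil is exact
# for quadratics, so the discrete solution matches it to solver precision.
n = 51
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
A = np.zeros((n, n))
b = np.full(n, 2.0 * h * h)      # right-hand side g * h^2, g = 2
for i in range(1, n - 1):        # interior rows: standard 3-point Laplacian
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
A[0, 0] = A[-1, -1] = 1.0        # Dirichlet rows pin the boundary values
b[0], b[-1] = 0.0, 1.0
chi = np.linalg.solve(A, b)
assert np.allclose(chi, x ** 2, atol=1e-10)
```

In three dimensions the same pattern holds: a sparse Laplacian stencil on one side, the discretized divergence of \vec{V} on the other, and the iso-surface of the solved chi is then extracted (e.g. by marching cubes).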
Once the surface reconstruction is completed, the mathematically described surface equations can be extracted from the closed surface model by parameterizing the surface.
The parameterization for a plane is: r(u, v) = (u, v, 0);
the parameterization for a cylindrical surface is: r(u, v) = (r\cos u, r\sin u, v), wherein r is the cylinder radius;
the parameterization for a sphere is: r(\theta, \varphi) = (R\sin\varphi\cos\theta, R\sin\varphi\sin\theta, R\cos\varphi), wherein R is the sphere radius;
for more complex surfaces, represented by B-spline surfaces:
S(u, v) = \sum_{i=0}^{m}\sum_{j=0}^{n} N_{i,p}(u)\, N_{j,q}(v)\, P_{i,j} ,
wherein P_{i,j} are the control points, N_{i,p}(u) and N_{j,q}(v) are the B-spline basis functions, and p and q are the degrees of the basis functions.
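The basis functions in the B-spline surface are the standard Cox-de Boor recursion; a minimal sketch of one basis function N_{i,p}(u) is below (the surface itself combines two such bases with the control points). Names and the example knot vector are illustrative:

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the B-spline basis function N_{i,p}(u)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + p] - knots[i]
    if d1 > 0:
        left = (u - knots[i]) / d1 * bspline_basis(i, p - 1, u, knots)
    d2 = knots[i + p + 1] - knots[i + 1]
    if d2 > 0:
        right = (knots[i + p + 1] - u) / d2 * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

# Degree-2 basis on a clamped knot vector: the five basis functions form a
# partition of unity at any interior parameter value.
knots = [0, 0, 0, 1, 2, 3, 3, 3]
total = sum(bspline_basis(i, 2, 1.5, knots) for i in range(5))
assert abs(total - 1.0) < 1e-12
```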
Finally, the volume is calculated by applying triple integration over the region \Omega enclosed by the parameterized three-dimensional surface:
V = \iiint_{\Omega} dx\, dy\, dz .
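For a triangulated closed surface, this triple integral reduces, via the divergence theorem, to a signed sum of tetrahedron volumes over the triangles. A minimal sketch (assuming consistently outward-oriented faces; names are illustrative):

```python
import numpy as np

def mesh_volume(vertices, faces):
    """V = (1/6) * sum over faces of (v_a x v_b) . v_c, i.e. the signed
    volumes of tetrahedra spanned by the origin and each outward-oriented
    triangle; this equals the triple integral of dV over the enclosed region."""
    V = np.asarray(vertices, dtype=float)
    vol = 0.0
    for a, b, c in faces:
        vol += np.dot(np.cross(V[a], V[b]), V[c])
    return vol / 6.0

# Unit cube triangulated with outward-facing triangles: volume is exactly 1.
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
         (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
faces = [(0, 2, 1), (0, 3, 2),        # bottom (z=0), normal -z
         (4, 5, 6), (4, 6, 7),        # top (z=1), normal +z
         (0, 1, 5), (0, 5, 4),        # front (y=0), normal -y
         (2, 3, 7), (2, 7, 6),        # back (y=1), normal +y
         (1, 2, 6), (1, 6, 5),        # right (x=1), normal +x
         (0, 4, 7), (0, 7, 3)]        # left (x=0), normal -x
assert abs(mesh_volume(verts, faces) - 1.0) < 1e-12
```

For the analytically parameterized surfaces above, the same volume can instead be obtained by evaluating the triple integral in the surface's parameter domain.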
In summary, the method provided by the embodiment of the invention can recover a smooth surface from noisy data, is suitable for incomplete or sparse point clouds, and has high precision and robustness. It is applicable to objects with complex topological structures, such as objects containing holes and tunnels; the geometric processing is fully automatic, the technical requirements on operators are low, damage to objects caused by contact is avoided, and the method is more widely applicable than the prior art.
It is noted that what is not described in detail in the embodiments of the present invention belongs to the prior art known to those skilled in the art.
In addition, it will be understood by those skilled in the art that all or part of the steps in implementing the methods of the above embodiments may be implemented by a program to instruct related hardware, and the corresponding program may be stored in a computer readable storage medium, where the storage medium may be a read only memory, a magnetic disk or an optical disk, etc.
While the invention has been described with respect to the preferred embodiments, the scope of the invention is not limited thereto, and any changes or substitutions that would be apparent to those skilled in the art are deemed to be within the scope of the invention. Therefore, the protection scope of the present invention should be subject to the protection scope of the claims. The information disclosed in the background section herein is only for enhancement of understanding of the general background of the invention and is not to be taken as an admission or any form of suggestion that this information forms the prior art already known to those of ordinary skill in the art.
It will be readily appreciated by those skilled in the art that what has not been described in detail in the present description is a preferred embodiment of the invention and is not intended to limit the invention, but is to cover all modifications, equivalents, improvements and modifications which are within the spirit and principles of the invention.