
CN115937279A - Point cloud density up-sampling method, system and medium based on cross-modal data registration - Google Patents

Point cloud density up-sampling method, system and medium based on cross-modal data registration

Info

Publication number
CN115937279A
CN115937279A (application CN202211696868.0A)
Authority
CN
China
Prior art keywords
point cloud
skeleton model
image
dimensional
cross
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211696868.0A
Other languages
Chinese (zh)
Other versions
CN115937279B (en)
Inventor
肖罡
徐阳
刘小兰
杨钦文
万可谦
魏志宇
赵斯杰
张蔚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangxi Kejun Industrial Co ltd
Original Assignee
Jiangxi Kejun Industrial Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangxi Kejun Industrial Co ltd filed Critical Jiangxi Kejun Industrial Co ltd
Priority to CN202211696868.0A priority Critical patent/CN115937279B/en
Publication of CN115937279A publication Critical patent/CN115937279A/en
Application granted granted Critical
Publication of CN115937279B publication Critical patent/CN115937279B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a point cloud density up-sampling method, system and medium based on cross-modal data registration. The method comprises: collecting three-dimensional point cloud data of a target object together with images from multiple viewing angles; extracting features from each image, registering those features, and computing the spatial correlation between the image and its features at each viewing angle in order to construct a three-dimensional point cloud skeleton model; extracting an original three-dimensional skeleton model from the point cloud data and superimposing it on the constructed skeleton model to obtain an overlapping skeleton model; and, for each point in the overlapping skeleton model, reversely acquiring the corresponding pixel in the image of the matching viewing angle, then copying and expanding the acquired pixels by a specified multiple to finally obtain the complete three-dimensional point cloud density up-sampling result. The invention addresses the heavy computation and the low output-model accuracy of the two existing approaches to point cloud densification; it is simple to implement, low in cost, and achieves rapid image-based densification of low-density three-dimensional point clouds.

Description

Point cloud density up-sampling method, system and medium based on cross-modal data registration

Technical Field

The invention belongs to the technical field of computer vision, and in particular relates to a point cloud density up-sampling method, system and medium based on cross-modal data registration.

Background

A point cloud is a data set in which each point carries a set of X, Y, Z geometric coordinates and an intensity value recording the strength of the returned signal as a function of the object's surface reflectance. Taken together, these points form a point cloud: a collection of data points in space that represents a 3D shape or object. Point clouds can also be colored automatically for more realistic visualization. At present, point cloud densification is performed in one of two ways. (1) Semantics-based densification: up-sampling is performed on the point cloud itself, i.e., a point cloud is taken as input and a denser point cloud is produced as output, with the new points lying on the geometry (for example, the surface) implied by the input cloud. The core idea of PU-Net, for instance, is to learn features for each point at multiple granularities (from local to global), expand the point set in feature space, and finally map the expanded set back to three dimensions. (2) Geometry-based densification. Whether semantics-based or geometry-based, these methods model objects with high complexity and require strong geometric priors about the model itself, and they lack accuracy and robustness when densifying a point cloud. Under real working conditions, an initial point cloud obtained by scanning in a short time and in a constrained environment cannot yield complete and accurate semantic priors, and sufficient computing resources for geometric up-sampling are rarely available.

Summary of the Invention

In view of the above technical problems in the prior art, the technical problem to be solved by the present invention is to provide a point cloud density up-sampling method, system and medium based on cross-modal data registration. The invention aims to overcome the heavy computation and the low output-model accuracy encountered in the two existing approaches to point cloud densification, and provides an up-sampling method that is simple to implement and low in cost and that achieves rapid, image-based densification of low-density three-dimensional point clouds.

In order to solve the above technical problems, the present invention adopts the following technical solution:

A point cloud density up-sampling method based on cross-modal data registration, comprising:

S101, collecting three-dimensional point cloud data of a target object and images from multiple viewing angles;

S102, extracting features from the images of the multiple viewing angles, registering the image features across viewing angles, and computing the spatial correlation between the image and the image features at each viewing angle;

S103, constructing a three-dimensional point cloud skeleton model from the spatial correlation between the image features of any two viewing angles;

S104, extracting an original three-dimensional skeleton model from the three-dimensional point cloud data, and superimposing the extracted original three-dimensional skeleton model on the constructed three-dimensional point cloud skeleton model to obtain an overlapping skeleton model;

S105, for each point in the overlapping skeleton model, reversely acquiring the corresponding pixel in the image of the matching viewing angle, then copying and expanding the acquired pixels by a specified multiple to finally obtain the complete three-dimensional point cloud density up-sampling result.

Optionally, the three-dimensional point cloud data of the target object collected in step S101 is a sparse point cloud model P_L.

Optionally, when the images of multiple viewing angles are collected in step S101, the angle between adjacent viewing angles is fixed and all viewing angles together cover the full 360-degree range of the target object, finally yielding images I_1~I_n of n viewing angles.
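A fixed angular spacing with full 360-degree coverage implies that adjacent views are separated by 360/n degrees. A minimal sketch of this capture geometry (the function name is hypothetical, not from the patent):

```python
def view_angles(n):
    """Yaw angles in degrees for n views with a fixed spacing of 360/n,
    so adjacent views are equally separated and together cover 360 degrees."""
    if n <= 0:
        raise ValueError("need at least one view")
    step = 360.0 / n  # fixed angle between adjacent viewing angles
    return [i * step for i in range(n)]
```

For example, `view_angles(8)` places the 8 cameras 45 degrees apart around the target object.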

Optionally, computing the spatial correlation between the image and the image features at any viewing angle in step S102 means computing the distance d_i = d(I_i, F_i) between the image I_i and the image feature F_i at each viewing angle i and taking it as the spatial correlation.

Optionally, the distance d_i = d(I_i, F_i) between the image I_i and the image feature F_i at any viewing angle i is the Euclidean distance.
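As a hedged illustration of this optional choice, the Euclidean distance between an image and a same-sized feature map can be computed by flattening both to vectors (NumPy assumed; the function name is an illustration, not from the patent):

```python
import numpy as np

def euclidean_distance(image, feature):
    """d_i = d(I_i, F_i): Euclidean (L2) distance between an image and a
    feature map of the same total size, both flattened to vectors."""
    a = np.asarray(image, dtype=float).ravel()
    b = np.asarray(feature, dtype=float).ravel()
    if a.size != b.size:
        raise ValueError("image and feature must have the same number of elements")
    return float(np.linalg.norm(a - b))
```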

Optionally, step S103 comprises:

S201, taking the distance d_i = d(I_i, F_i) between the image I_i and the image feature F_i at viewing angle i as the association weight between image and feature at that viewing angle, aggregating the image I_i with the image feature F_i, and constructing the image-feature tuples (I_1, F_1), (I_2, F_2), ..., (I_n, F_n) for all viewing angles;

S202, reconstructing the three-dimensional skeleton model from the image-feature tuples (I_1, F_1), (I_2, F_2), ..., (I_n, F_n), where each point of the image I_i and the image feature F_i at viewing angle i becomes one point (x, y, z) of the three-dimensional skeleton, thereby obtaining the finally reconstructed skeleton model P_I.
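The lifting of feature pixels into skeleton points in S202 can be sketched as follows. The patent does not state how a depth z is assigned to the feature pixels of a view, so this toy sketch takes it as a given per-view value (an assumption of this illustration):

```python
def build_skeleton(view_tuples):
    """Reconstruct a 3-D skeleton from per-view tuples.

    view_tuples: list of (pixels, z) pairs, one per viewing angle, where
    `pixels` is a list of (x, y) feature-pixel coordinates and `z` is the
    depth assigned to that view (how z is derived is not specified in the
    patent, so it is passed in here as an assumption).
    Every feature pixel becomes one skeleton point (x, y, z).
    """
    skeleton = []
    for pixels, z in view_tuples:
        for x, y in pixels:
            skeleton.append((x, y, z))
    return skeleton
```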

Optionally, superimposing the extracted original three-dimensional skeleton model on the constructed three-dimensional point cloud skeleton model in step S104 means superimposing the two models in three-dimensional space to obtain the overlapping skeleton model P_1.

Optionally, step S105 comprises:

S301, for each point (x, y, z) in the overlapping skeleton model, reversely acquiring the pixel (x, y) in the image of the corresponding viewing angle together with its 8 adjacent pixels (x-1, y-1), (x-1, y), (x-1, y+1), (x, y-1), (x, y+1), (x+1, y-1), (x+1, y) and (x+1, y+1);

S302, assigning the z coordinate of the point (x, y, z) to all nine pixels, namely the pixel (x, y) in the image of the corresponding viewing angle and its 8 adjacent pixels (x-1, y-1), (x-1, y), (x-1, y+1), (x, y-1), (x, y+1), (x+1, y-1), (x+1, y) and (x+1, y+1), so that each skeleton point is expanded into the coordinate points of 9 point clouds; the overlapping skeleton model is thus expanded 9-fold to obtain the densified point cloud model P_2 as the final complete three-dimensional point cloud density up-sampling result.
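Steps S301 and S302 together amount to a fixed 9-fold expansion of the skeleton, with every expanded point reusing the z of its source skeleton point. A minimal sketch (function names hypothetical):

```python
def densify(skeleton):
    """Expand an overlapping skeleton model 9-fold: each point (x, y, z)
    yields the pixel (x, y) plus its 8 adjacent pixels, all reusing the
    original z value, as described in steps S301 and S302."""
    cloud = []
    for x, y, z in skeleton:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                cloud.append((x + dx, y + dy, z))  # 3x3 neighbourhood, same z
    return cloud
```

A skeleton of m points yields a densified cloud of exactly 9m points.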

In addition, the present invention further provides a point cloud density up-sampling system based on cross-modal data registration, comprising a microprocessor and a memory connected to each other, the microprocessor being programmed or configured to execute the above point cloud density up-sampling method based on cross-modal data registration.

In addition, the present invention further provides a computer-readable storage medium storing a computer program, the computer program being used to program or configure a microprocessor to execute the above point cloud density up-sampling method based on cross-modal data registration.

Compared with the prior art, the present invention mainly has the following advantages. The invention collects three-dimensional point cloud data of a target object and images from multiple viewing angles; extracts and registers image features and computes the spatial correlation between the image and the image features at each viewing angle to construct a three-dimensional point cloud skeleton model; extracts an original three-dimensional skeleton model from the point cloud data and superimposes it on the constructed skeleton model to obtain an overlapping skeleton model; and, for the points in the overlapping skeleton model, reversely acquires the corresponding pixels in the images of the matching viewing angles and copies and expands them by a specified multiple to finally obtain the complete three-dimensional point cloud density up-sampling result. The invention overcomes the heavy computation and the low output-model accuracy of the two existing approaches to point cloud densification, is simple to implement and low in cost, and achieves rapid image-based densification of low-density three-dimensional point clouds.

Brief Description of the Drawings

Fig. 1 is a schematic flow chart of the method of the embodiment of the present invention.

Fig. 2 shows the three-dimensional point cloud data of the target object collected in the embodiment of the present invention.

Fig. 3 shows the complete three-dimensional point cloud density up-sampling result finally obtained in the embodiment of the present invention.

Detailed Description of the Embodiments

As shown in Fig. 1, the point cloud density up-sampling method based on cross-modal data registration of this embodiment comprises:

S101, collecting three-dimensional point cloud data of a target object and images from multiple viewing angles;

S102, extracting features from the images of the multiple viewing angles, registering the image features across viewing angles, and computing the spatial correlation between the image and the image features at each viewing angle;

S103, constructing a three-dimensional point cloud skeleton model from the spatial correlation between the image features of any two viewing angles;

S104, extracting an original three-dimensional skeleton model from the three-dimensional point cloud data, and superimposing the extracted original three-dimensional skeleton model on the constructed three-dimensional point cloud skeleton model to obtain an overlapping skeleton model;

S105, for each point in the overlapping skeleton model, reversely acquiring the corresponding pixel in the image of the matching viewing angle, then copying and expanding the acquired pixels by a specified multiple to finally obtain the complete three-dimensional point cloud density up-sampling result.

The method of this embodiment performs registration through the computation of spatial correlation and the generation of the point cloud skeleton model in steps S102 and S103: step S103 constructs the three-dimensional point cloud skeleton model from the spatial correlation between the image features of any two viewing angles, while step S104 extracts the original three-dimensional skeleton model from the three-dimensional point cloud data, yielding skeleton models of two different modalities. Superimposing the extracted original three-dimensional skeleton model on the constructed three-dimensional point cloud skeleton model produces the overlapping skeleton model and achieves cross-modal vector data registration and fusion. This solves the heavy computation and the low output-model accuracy encountered in the two existing approaches to point cloud densification, is simple to implement and low in cost, and achieves rapid image-based densification of low-density three-dimensional point clouds.

In this embodiment, the three-dimensional point cloud data of the target object collected in step S101 is a sparse point cloud model P_L.

In this embodiment, when the images of multiple viewing angles are collected in step S101, the angle between adjacent viewing angles is fixed and all viewing angles together cover the full 360-degree range of the target object, finally yielding images I_1~I_n of n viewing angles.

In this embodiment, computing the spatial correlation between the image and the image features at any viewing angle in step S102 means computing the distance d_i = d(I_i, F_i) between the image I_i and the image feature F_i at each viewing angle i and taking it as the spatial correlation.

In this embodiment, the distance d_i = d(I_i, F_i) between the image I_i and the image feature F_i at any viewing angle i is the Euclidean distance. Features are extracted from each of the multi-view images, and the multi-view image features are registered to obtain pairwise spatial correlations. Edge pixel features are extracted from the collected images I_1, I_2, ..., I_n of the different viewing angles, the features extracted from each image are denoted F_1, F_2, ..., F_n, and the Euclidean distance between feature and image data is computed for the extracted image-feature tuples (I_1, F_1), (I_2, F_2), ..., (I_n, F_n), giving d_n = d(I_n, F_n).
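A hedged sketch of the edge-pixel feature extraction described here. The patent does not specify the edge detector, so a simple finite-difference gradient threshold (NumPy assumed, threshold value arbitrary) stands in:

```python
import numpy as np

def edge_features(image, thresh=0.25):
    """Return (x, y) coordinates of edge pixels, found with a simple
    finite-difference gradient magnitude; a stand-in for the edge
    extractor, which the patent does not specify."""
    img = np.asarray(image, dtype=float)
    gy, gx = np.gradient(img)   # per-pixel gradients along rows and columns
    mag = np.hypot(gx, gy)      # gradient magnitude
    ys, xs = np.nonzero(mag > thresh)
    return np.stack([xs, ys], axis=1)
```

On a toy 4x4 image with a vertical intensity step, the detected edge pixels cluster around the step columns.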

In this embodiment, step S103 comprises:

S201, taking the distance d_i = d(I_i, F_i) between the image I_i and the image feature F_i at viewing angle i as the association weight between image and feature at that viewing angle, aggregating the image I_i with the image feature F_i (combining the extracted features), and constructing the image-feature tuples (I_1, F_1), (I_2, F_2), ..., (I_n, F_n) for all viewing angles;

S202, reconstructing the three-dimensional skeleton model from the image-feature tuples (I_1, F_1), (I_2, F_2), ..., (I_n, F_n), where each point of the image I_i and the image feature F_i at viewing angle i becomes one point (x, y, z) of the three-dimensional skeleton, and all points are combined to obtain the finally reconstructed skeleton model P_I.

In this embodiment, superimposing the extracted original three-dimensional skeleton model on the constructed three-dimensional point cloud skeleton model in step S104 means superimposing the two models in three-dimensional space to obtain the overlapping skeleton model P_1.
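Assuming both skeleton models are already expressed in a common coordinate frame (an assumption of this sketch; the patent does not detail the alignment), the superposition reduces to a union of the two point sets (NumPy assumed, names hypothetical):

```python
import numpy as np

def overlap_skeletons(p_original, p_image):
    """Superimpose the original skeleton model (extracted from the point
    cloud) and the reconstructed skeleton P_I in 3-D space; with both
    models assumed to share one coordinate system, the overlapping
    skeleton model P_1 is the union of the two point sets."""
    a = np.asarray(p_original, dtype=float).reshape(-1, 3)
    b = np.asarray(p_image, dtype=float).reshape(-1, 3)
    return np.vstack([a, b])
```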

In this embodiment, step S105 comprises:

S301, for each point (x, y, z) in the overlapping skeleton model, reversely acquiring the pixel (x, y) in the image of the corresponding viewing angle together with its 8 adjacent pixels (x-1, y-1), (x-1, y), (x-1, y+1), (x, y-1), (x, y+1), (x+1, y-1), (x+1, y) and (x+1, y+1);

S302, assigning the z coordinate of the point (x, y, z) to all nine pixels, namely the pixel (x, y) in the image of the corresponding viewing angle and its 8 adjacent pixels (x-1, y-1), (x-1, y), (x-1, y+1), (x, y-1), (x, y+1), (x+1, y-1), (x+1, y) and (x+1, y+1), so that each skeleton point is expanded into the coordinate points of 9 point clouds; the overlapping skeleton model is thus expanded 9-fold to obtain the densified point cloud model P_2 as the final complete three-dimensional point cloud density up-sampling result.

In this embodiment, the three-dimensional point cloud data of the target object collected in step S101 is shown in Fig. 2, and the complete three-dimensional point cloud density up-sampling result finally obtained in step S105 is shown in Fig. 3. It can be seen that this embodiment achieves rapid image-based densification of a low-density three-dimensional point cloud.

In summary, the point cloud density up-sampling method based on cross-modal data registration of this embodiment comprises collecting three-dimensional point cloud data of a target object and images from multiple viewing angles; extracting and registering image features and computing the spatial correlation between the image and the image features at each viewing angle to construct a three-dimensional point cloud skeleton model; extracting an original three-dimensional skeleton model from the point cloud data and superimposing it on the three-dimensional point cloud skeleton model to obtain an overlapping skeleton model; and, for the points in the overlapping skeleton model, reversely acquiring the corresponding pixels in the images of the matching viewing angles and copying and expanding them by a specified multiple to finally obtain the complete three-dimensional point cloud density up-sampling result. The method effectively solves the heavy computation and the low output-model accuracy encountered in the two existing densification approaches, is simple to implement and low in cost, and achieves rapid image-based densification of low-density three-dimensional point clouds.

In addition, this embodiment further provides a point cloud density up-sampling system based on cross-modal data registration, comprising a microprocessor and a memory connected to each other, the microprocessor being programmed or configured to execute the above point cloud density up-sampling method based on cross-modal data registration.

In addition, this embodiment further provides a computer-readable storage medium storing a computer program, the computer program being used to program or configure a microprocessor to execute the above point cloud density up-sampling method based on cross-modal data registration.

Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-readable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code. The present application is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a general-purpose computer, a special-purpose computer, an embedded processor, or the processor of another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram. These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a specific manner, such that the instructions stored in that memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram. These computer program instructions may also be loaded onto a computer or other programmable data processing device, causing a series of operational steps to be performed on it to produce a computer-implemented process, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

The above is only a preferred implementation of the present invention, and the protection scope of the present invention is not limited to the above embodiment; all technical solutions under the idea of the present invention belong to the protection scope of the present invention. It should be pointed out that, for those of ordinary skill in the art, several improvements and refinements made without departing from the principles of the present invention should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A point cloud density up-sampling method based on cross-modal data registration, characterized by comprising the following steps:
S101, collecting three-dimensional point cloud data of a target object and images from multiple viewing angles;
S102, extracting features from the images of the multiple viewing angles, registering the image features across viewing angles, and computing the spatial correlation between the image and the image features at each viewing angle;
S103, constructing a three-dimensional point cloud skeleton model from the spatial correlation between the image features of any two viewing angles;
S104, extracting an original three-dimensional skeleton model from the three-dimensional point cloud data, and superimposing the extracted original three-dimensional skeleton model on the constructed three-dimensional point cloud skeleton model to obtain an overlapping skeleton model;
and S105, for each point in the overlapping skeleton model, reversely acquiring the corresponding pixel in the image of the matching viewing angle, then copying and expanding the acquired pixels by a specified multiple to finally obtain the complete three-dimensional point cloud density up-sampling result.
2. The point cloud density up-sampling method based on cross-modal data registration according to claim 1, characterized in that the three-dimensional point cloud data of the target object collected in step S101 is a sparse point cloud model P_L.
3. The point cloud density up-sampling method based on cross-modal data registration according to claim 1, characterized in that, when the images of multiple viewing angles are collected in step S101, the angle between adjacent viewing angles is fixed and all viewing angles cover the full 360-degree range of the target object, finally yielding images I_1~I_n of n viewing angles.
4. The point cloud density up-sampling method based on cross-modal data registration according to claim 1, characterized in that computing the spatial correlation between the image and the image features at any viewing angle in step S102 means computing the distance d_i = d(I_i, F_i) between the image I_i and the image feature F_i at each viewing angle i and taking it as the spatial correlation.
5. The point cloud density up-sampling method based on cross-modal data registration according to claim 4, characterized in that the distance d_i = d(I_i, F_i) between the image I_i and the image feature F_i at any viewing angle i is the Euclidean distance.
6. The point cloud density up-sampling method based on cross-modal data registration according to claim 5, characterized in that step S103 comprises:
S201, taking the distance d_i = d(I_i, F_i) between the image I_i and the image feature F_i at viewing angle i as the association weight between image and feature at that viewing angle, aggregating the image I_i with the image feature F_i, and constructing the image-feature tuples (I_1, F_1), (I_2, F_2), ..., (I_n, F_n) for all viewing angles;
S202, reconstructing the three-dimensional skeleton model from the image-feature tuples (I_1, F_1), (I_2, F_2), ..., (I_n, F_n), where each point of the image I_i and the image feature F_i at viewing angle i becomes one point (x, y, z) of the three-dimensional skeleton, thereby obtaining the finally reconstructed skeleton model P_I.
7. The point cloud density up-sampling method based on cross-modal data registration according to claim 1, wherein overlapping the extracted original three-dimensional skeleton model with the constructed three-dimensional point cloud skeleton model in step S104 means overlapping the original three-dimensional skeleton model and the three-dimensional point cloud skeleton model in three-dimensional space to obtain an overlapped skeleton model P_1.
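The overlap of claim 7, read as merging the two skeletons' point sets in the same three-dimensional space, reduces to a concatenation; any rigid alignment that may precede the merge is omitted in this sketch:

```python
import numpy as np

# Minimal sketch of claim 7: the original skeleton extracted from the
# sparse point cloud and the skeleton reconstructed from the images
# are overlapped in one coordinate frame, i.e. their point sets are
# merged into the overlapped skeleton model P_1.
def overlap_skeletons(p_orig: np.ndarray, p_img: np.ndarray) -> np.ndarray:
    """p_orig: (N, 3) points, p_img: (M, 3) points -> (N + M, 3) points."""
    return np.vstack([p_orig, p_img])
```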
8. The point cloud density up-sampling method based on cross-modal data registration according to claim 1, wherein step S105 comprises:
S301, for each point (x, y, z) in the overlapped skeleton model, reversely acquiring the pixel point (x, y) in the image of the corresponding viewing angle together with its 8 adjacent pixel points (x-1, y-1), (x-1, y), (x-1, y+1), (x, y-1), (x, y+1), (x+1, y-1), (x+1, y) and (x+1, y+1);
S302, expanding the nine pixel points, namely the pixel point (x, y) in the image of the corresponding viewing angle and its 8 adjacent pixel points (x-1, y-1), (x-1, y), (x-1, y+1), (x, y-1), (x, y+1), (x+1, y-1), (x+1, y) and (x+1, y+1), into the coordinate points of 9 point clouds, so that the overlapped skeleton model is finally expanded by a factor of 9 to obtain a dense point cloud model P_2 as the final complete three-dimensional point cloud density up-sampling result.
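The 9-fold densification of claim 8 can be sketched as a vectorised neighbourhood expansion. Reusing the depth z for all nine points is an assumption on our part; the claim only states that the pixel and its 8 neighbours become 9 point-cloud coordinates:

```python
import numpy as np

# Sketch of steps S301-S302: every point (x, y, z) of the overlapped
# skeleton maps back to the pixel (x, y) of its view image; that
# pixel plus its 8 neighbours are lifted to 9 point-cloud coordinates,
# so the model density grows by a factor of 9.
def densify_9x(skeleton: np.ndarray) -> np.ndarray:
    """skeleton: (N, 3) integer points -> (9 * N, 3) dense points."""
    offsets = np.array([(dx, dy, 0)
                        for dx in (-1, 0, 1)
                        for dy in (-1, 0, 1)])   # pixel and its 8 neighbours
    return (skeleton[:, None, :] + offsets[None, :, :]).reshape(-1, 3)
```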
9. A point cloud density upsampling system based on cross-modal data registration, comprising a microprocessor and a memory which are connected with each other, wherein the microprocessor is programmed or configured to execute the point cloud density upsampling method based on cross-modal data registration according to any one of claims 1 to 8.
10. A computer-readable storage medium in which a computer program is stored, wherein the computer program, when executed by a microprocessor, performs the point cloud density up-sampling method based on cross-modal data registration according to any one of claims 1 to 8.
CN202211696868.0A 2022-12-28 2022-12-28 Point cloud density upsampling method, system and medium based on cross-modal data registration Active CN115937279B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211696868.0A CN115937279B (en) 2022-12-28 2022-12-28 Point cloud density upsampling method, system and medium based on cross-modal data registration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211696868.0A CN115937279B (en) 2022-12-28 2022-12-28 Point cloud density upsampling method, system and medium based on cross-modal data registration

Publications (2)

Publication Number Publication Date
CN115937279A true CN115937279A (en) 2023-04-07
CN115937279B CN115937279B (en) 2025-07-01

Family

ID=86650747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211696868.0A Active CN115937279B (en) 2022-12-28 2022-12-28 Point cloud density upsampling method, system and medium based on cross-modal data registration

Country Status (1)

Country Link
CN (1) CN115937279B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118279497A (en) * 2024-05-07 2024-07-02 武汉元一宇宙控股集团股份有限公司 Three-dimensional model generation system and method
CN119850886A (en) * 2025-03-21 2025-04-18 湖南工商大学 Point cloud completion method and related equipment based on cross-mode and depth repair

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8824779B1 (en) * 2011-12-20 2014-09-02 Christopher Charles Smyth Apparatus and method for determining eye gaze from stereo-optic views
US20180330504A1 (en) * 2017-05-14 2018-11-15 International Business Machines Corporation Systems and methods for determining a camera pose of an image
CN110276758A (en) * 2019-06-28 2019-09-24 电子科技大学 Occlusal analysis system based on point cloud spatial features
CN110288517A (en) * 2019-06-28 2019-09-27 电子科技大学 Skeleton Line Extraction Method Based on Projection Matching Group
CN111476802A (en) * 2020-04-09 2020-07-31 山东财经大学 A method, device and readable storage medium for medical image segmentation and tumor detection based on dense convolution model
CN112200854A (en) * 2020-09-25 2021-01-08 华南农业大学 A three-dimensional phenotype measurement method of leafy vegetables based on video images
US20210316463A1 (en) * 2020-04-14 2021-10-14 Hong Kong Applied Science And Technology Research Institute Co., Ltd. Candidate six dimensional pose hypothesis selection
CN114267041A (en) * 2022-03-01 2022-04-01 北京鉴智科技有限公司 Method and device for identifying object in scene
CN114860978A (en) * 2022-05-07 2022-08-05 苏州大学 Text-based pedestrian search task semantic alignment method and system
CN115294294A (en) * 2022-10-10 2022-11-04 中国电建集团山东电力建设第一工程有限公司 Pipeline BIM (building information modeling) model reconstruction method and system based on depth image and point cloud
CN115409931A (en) * 2022-10-31 2022-11-29 苏州立创致恒电子科技有限公司 Three-dimensional reconstruction method based on image and point cloud data fusion


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YUAN, B. Z.; TANG, X. F.: "Curvature and Density based Feature Point Detection for Point Cloud Data", IET 3rd International Conference on Wireless, Mobile and Multimedia Networks, 1 January 2010 (2010-01-01) *
LIU KAI; ZHANG LIMIN; FAN XIAOLEI: "Deep Image Feature Extraction with an Improved Convolutional Boltzmann Machine", Journal of Harbin Institute of Technology, no. 05, 30 May 2016 (2016-05-30) *
LU LIBIN: "Virtual Reconstruction of Ship Trajectories Based on Multimedia Technology", Ship Science and Technology, no. 12, 23 June 2017 (2017-06-23) *


Also Published As

Publication number Publication date
CN115937279B (en) 2025-07-01

Similar Documents

Publication Publication Date Title
US20240257462A1 (en) Method, apparatus, and storage medium for three-dimensional reconstruction of buildings based on missing point cloud data
CN110458939B (en) Indoor scene modeling method based on visual angle generation
CN103021017B (en) Three-dimensional scene rebuilding method based on GPU acceleration
Fuhrmann et al. MVE-a multi-view reconstruction environment.
CN111127633A (en) Three-dimensional reconstruction method, apparatus, and computer-readable medium
CN107657659A (en) The Manhattan construction method for automatic modeling of scanning three-dimensional point cloud is fitted based on cuboid
CN103606151A (en) A wide-range virtual geographical scene automatic construction method based on image point clouds
Kersten et al. Potential of automatic 3D object reconstruction from multiple images for applications in architecture, cultural heritage and archaeology
Sui et al. A novel 3D building damage detection method using multiple overlapping UAV images
CN100483425C (en) Method and program for identifying multimedia data
Guo et al. Line-based 3d building abstraction and polygonal surface reconstruction from images
CN115937279B (en) Point cloud density upsampling method, system and medium based on cross-modal data registration
CN117475105A (en) Open world three-dimensional scene reconstruction and perception method based on monocular image
CN115953563A (en) Method and system for 3D model completion and repair based on point cloud vectorization skeleton matching
CN116051980B (en) Building identification method, system, electronic equipment and medium based on oblique photography
CN117078470A (en) A three-dimensional expropriation and demolition management system based on BIM+GIS
CN115100354A (en) Three-dimensional static model reconstruction method and device
CN120374885A (en) Method for identifying and generating pile material volume by fusing single-view 3D reconstruction and BIM calibration
Zhu et al. Textured mesh surface reconstruction of large buildings with multi-view stereo
CN119693221A (en) A method and device for generating orthophoto based on scene reconstruction
Rüther et al. Challenges in heritage documentation with terrestrial laser scanning
Esteban et al. Fit3d toolbox: multiple view geometry and 3d reconstruction for matlab
Wang et al. Real‐time fusion of multiple videos and 3D real scenes based on optimal viewpoint selection
CN114963991A (en) Hull stone volume measurement system based on three-dimensional reconstruction
Hsieh A new Kinect-based scanning system and its application

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant