
CN114494589A - Three-dimensional reconstruction method, three-dimensional reconstruction device, electronic equipment and computer-readable storage medium - Google Patents


Info

Publication number
CN114494589A
CN114494589A
Authority
CN
China
Prior art keywords
weak texture
point cloud
area
target
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210044808.4A
Other languages
Chinese (zh)
Inventor
孟进军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Xaircraft Technology Co Ltd
Original Assignee
Guangzhou Xaircraft Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Xaircraft Technology Co Ltd filed Critical Guangzhou Xaircraft Technology Co Ltd
Priority to CN202210044808.4A priority Critical patent/CN114494589A/en
Publication of CN114494589A publication Critical patent/CN114494589A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the invention provide a three-dimensional reconstruction method and apparatus, an electronic device, and a computer-readable storage medium. Weak texture regions in the to-be-processed images corresponding to a target scene are extracted and masked, so that the three-dimensional reconstruction process avoids reconstructing them. After a dense point cloud of the regions of the target scene other than the weak texture regions is obtained, the weak texture regions are filled with point clouds according to their boundary lines and the dense point cloud. In other words, the weak texture regions are not reconstructed directly; instead, point cloud data associated with their boundary lines is used to fill them, completing the three-dimensional reconstruction of the entire target scene. This effectively solves the problem of unsatisfactory reconstruction of weak texture regions in the prior art, avoids holes and large amounts of noise in the reconstruction result, and achieves good hole removal.

Description

Three-dimensional reconstruction method and apparatus, electronic device, and computer-readable storage medium

Technical Field

The present invention relates to the field of image processing, and in particular to a three-dimensional reconstruction method and apparatus, an electronic device, and a computer-readable storage medium.

Background Art

Vision-based three-dimensional reconstruction acquires two-dimensional image data of an object with appropriate instruments, analyzes and processes the acquired data, and finally reconstructs the surface contours of objects in the real environment using three-dimensional reconstruction theory. Vision-based three-dimensional reconstruction computes camera pose information and extracts a sparse point cloud through image feature matching; however, it is difficult to extract feature information from special areas such as weak-texture and texture-free regions, which causes holes in the dense reconstruction.

At present, most solutions improve feature extraction in weak texture areas by optimizing the vision-based three-dimensional reconstruction algorithm. The reconstruction of such special areas remains unsatisfactory, and holes still appear in the three-dimensional reconstruction results.

Summary of the Invention

In view of this, the purpose of the present invention is to provide a three-dimensional reconstruction method and apparatus, an electronic device, and a computer-readable storage medium, so as to solve the problem that existing three-dimensional reconstruction techniques reconstruct weak texture areas poorly and are prone to holes.

To achieve the above purpose, the technical solutions adopted in the embodiments of the present invention are as follows:

In a first aspect, the present invention provides a three-dimensional reconstruction method, the method comprising:

extracting weak texture areas in each to-be-processed image corresponding to a target scene;

performing mask processing on the weak texture areas in each to-be-processed image to obtain target images;

performing three-dimensional reconstruction on a target area according to the target images to obtain a dense point cloud of the target area, the target area being the area of the target scene other than the weak texture areas; and

performing point cloud filling on the weak texture areas according to the boundary lines of the weak texture areas and the dense point cloud.

In an optional implementation, the extracting of the weak texture areas in each to-be-processed image corresponding to the target scene includes:

extracting the weak texture areas in each to-be-processed image using the pixel values in the image or a pre-trained detection model, or having a user select the weak texture areas in each to-be-processed image corresponding to the target scene.

In an optional implementation, the extracting of the weak texture areas using the pixel values in each to-be-processed image or a pre-trained detection model includes:

determining a first weak texture area in each to-be-processed image according to the pixel values in the image; and

inputting each to-be-processed image into a pre-trained detection model to obtain a second weak texture area in the image, where the weak texture areas include the first weak texture area and the second weak texture area, and the reflectivity of the first weak texture area is higher than that of the second weak texture area.

In an optional implementation, the performing of point cloud filling on the weak texture area according to the boundary line of the weak texture area and the dense point cloud includes:

superimposing the boundary line of the weak texture area on the dense point cloud to obtain point cloud data at the boundary line of the weak texture area;

computing distribution statistics of the point cloud data at the boundary line of the weak texture area, and selecting the point cloud data within a preset range according to the statistical results; and

performing point cloud filling on the weak texture area according to the point cloud data within the preset range.
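The statistics-based selection above can be sketched as follows, assuming the boundary point cloud is an (N, 3) array of (x, y, elevation) points and interpreting the "preset range" as mean ± k·σ of the elevations — the text does not fix the exact statistic, so that choice is an assumption:

```python
import numpy as np

def select_boundary_points(boundary_points, k_sigma=2.0):
    """Keep boundary points whose elevation lies within a preset range,
    taken here (as an assumption) to be mean +/- k_sigma * std of the
    boundary elevations; outliers such as noise points are dropped."""
    z = boundary_points[:, 2]
    mean, std = z.mean(), z.std()
    lo, hi = mean - k_sigma * std, mean + k_sigma * std
    keep = (z >= lo) & (z <= hi)
    return boundary_points[keep]
```

With a tight `k_sigma`, isolated noise points along the boundary are discarded before the filling step uses the remaining data.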

In an optional implementation, the performing of point cloud filling on the weak texture area according to the point cloud data within the preset range includes:

when the weak texture area is a water area, taking the mean elevation of the point cloud data within the preset range as the elevation of the water area, and filling the water area with a point cloud accordingly.
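A minimal sketch of the water-area rule, assuming the area is sampled at a set of horizontal positions `grid_xy`; only the elevation rule (mean elevation of the selected boundary points) comes from the text, the sampling scheme is an assumption:

```python
import numpy as np

def fill_water_area(grid_xy, boundary_points):
    """Fill a water area with points: every fill point takes the mean
    elevation of the selected boundary point cloud.

    grid_xy         -- (M, 2) horizontal sample positions inside the area
    boundary_points -- (N, 3) selected boundary points (x, y, elevation)
    """
    z_water = boundary_points[:, 2].mean()          # elevation of the water area
    z = np.full((grid_xy.shape[0], 1), z_water)     # same elevation everywhere
    return np.hstack([grid_xy, z])                  # (M, 3) fill points
```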

In an optional implementation, the performing of point cloud filling on the weak texture area according to the point cloud data within the preset range includes:

when the weak texture area is a non-water area, computing the normal vector of each point in the point cloud data within the preset range, taking the mean of these normal vectors as the vertical direction of the non-water area, and filling the non-water area with a point cloud using the point cloud data within the preset range and the vertical direction of the non-water area.
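The normal-averaging step for non-water areas can be sketched as follows; computing the per-point normals themselves (e.g. from local point neighborhoods) is outside the text, so the function assumes they are already given:

```python
import numpy as np

def estimate_vertical(normals):
    """Average the per-point normal vectors of the selected boundary
    point cloud and renormalize; the resulting direction serves as the
    vertical direction of the non-water area during filling."""
    n = np.asarray(normals, dtype=float).mean(axis=0)
    return n / np.linalg.norm(n)
```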

In an optional implementation, the performing of three-dimensional reconstruction on the target area according to the target images to obtain the dense point cloud of the target area includes:

performing feature point extraction and feature point matching on the target images to obtain matching point pairs between the target images;

computing the camera parameters corresponding to each target image and a sparse point cloud according to the matching point pairs; and

generating a dense point cloud according to the camera parameters and the sparse point cloud.
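The sparse-point step above can be illustrated with a minimal linear (DLT) triangulation of one matching point pair, assuming the 3×4 camera projection matrices are already known; this is a generic sketch of how a sparse 3D point follows from a match plus camera parameters, not the patent's specific implementation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the 3D point observed as x1
    by a camera with 3x4 projection matrix P1 and as x2 by a camera
    with projection matrix P2."""
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: x * (P[2] @ X) = P[0] @ X, etc.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)   # null-space vector = last row of V^T
    X = vt[-1]
    return X[:3] / X[3]           # dehomogenize
```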

In a second aspect, the present invention provides a three-dimensional reconstruction apparatus, the apparatus comprising:

a weak texture area extraction module, configured to extract the weak texture areas in each to-be-processed image corresponding to the target scene;

a mask processing module, configured to perform mask processing on the weak texture areas in each to-be-processed image to obtain target images;

a three-dimensional reconstruction module, configured to perform three-dimensional reconstruction on the target area according to the target images to obtain a dense point cloud of the target area, the target area being the area of the target scene other than the weak texture areas; and

a point cloud filling module, configured to perform point cloud filling on the weak texture areas according to the boundary lines of the weak texture areas and the dense point cloud.

In a third aspect, the present invention provides an electronic device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the three-dimensional reconstruction method described in any one of the foregoing implementations.

In a fourth aspect, the present invention provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the three-dimensional reconstruction method described in any one of the foregoing implementations.

In the three-dimensional reconstruction method, apparatus, electronic device, and computer-readable storage medium provided by the embodiments of the present invention, the weak texture areas in each to-be-processed image corresponding to the target scene are extracted and masked, so that the three-dimensional reconstruction process avoids reconstructing the weak texture areas. After the dense point cloud of the area of the target scene other than the weak texture areas is obtained, the weak texture areas are filled with point clouds according to their boundary lines and the dense point cloud. That is, the weak texture areas are not reconstructed directly; instead, point cloud data associated with their boundary lines is used to fill them, completing the three-dimensional reconstruction of the entire target scene. This effectively solves the problem in the prior art that weak texture areas are reconstructed poorly, avoids holes and large amounts of noise in the three-dimensional reconstruction result, and achieves good hole removal.

To make the above objects, features, and advantages of the present invention more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.

Description of Drawings

To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required in the embodiments are briefly introduced below. It should be understood that the following drawings show only some embodiments of the present invention and should therefore not be regarded as limiting the scope; those of ordinary skill in the art can derive other related drawings from them without creative effort.

FIG. 1 is a schematic diagram of a point cloud hole;

FIG. 2 is a schematic flowchart of a three-dimensional reconstruction method provided by an embodiment of the present invention;

FIG. 3 is a schematic diagram of a weak texture area prone to causing holes;

FIG. 4 is another schematic flowchart of the three-dimensional reconstruction method provided by an embodiment of the present invention;

FIG. 5 is a schematic diagram of original images for feature point matching;

FIG. 6 is a schematic diagram of a feature point matching result;

FIG. 7 is a schematic diagram of the epipolar geometry constraint;

FIG. 8 is yet another schematic flowchart of the three-dimensional reconstruction method provided by an embodiment of the present invention;

FIG. 9 is a schematic diagram of a three-dimensional reconstruction result with holes removed;

FIG. 10 is a functional block diagram of a three-dimensional reconstruction apparatus provided by an embodiment of the present invention;

FIG. 11 is a schematic block diagram of an electronic device provided by an embodiment of the present invention.

Reference numerals: 500 - three-dimensional reconstruction apparatus; 700 - electronic device; 510 - weak texture area extraction module; 520 - mask processing module; 530 - three-dimensional reconstruction module; 540 - point cloud filling module; 710 - memory; 720 - processor; 730 - communication module.

Detailed Description

The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. The components of the embodiments of the present invention, as generally described and illustrated in the drawings herein, can be arranged and designed in a variety of different configurations.

Thus, the following detailed description of the embodiments of the present invention provided in the accompanying drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of it. All other embodiments obtained by those skilled in the art on the basis of the embodiments of the present invention without creative work fall within the protection scope of the present invention.

It should be noted that relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.

To address the problem that holes readily appear in three-dimensional reconstruction, most current solutions improve feature extraction in weak texture areas by optimizing the vision-based three-dimensional reconstruction algorithm so as to eliminate the holes. However, the improved algorithms have not achieved the desired effect: holes and large amounts of noise still appear in the three-dimensional reconstruction of weak texture areas (as shown in FIG. 1), which degrades the accuracy of the reconstructed point cloud data. In addition to vision-based three-dimensional reconstruction, there is also lidar-based three-dimensional reconstruction. Lidar is an optical sensing technology that scans a laser beam over a target object and measures the distance and position of the object from the time it takes the reflected light to return.

Methods that improve weak texture reconstruction by optimizing the vision-based three-dimensional reconstruction algorithm have not yet achieved satisfactory results, while lidar equipment is much more expensive than vision-based reconstruction equipment and, for large scenes, its point accuracy decreases with distance. Therefore, the embodiments of the present invention propose a three-dimensional reconstruction method, apparatus, electronic device, and computer-readable storage medium. The weak texture areas in each to-be-processed image corresponding to the target scene are extracted and, during three-dimensional reconstruction, masked so that reconstruction of the weak texture areas is avoided. After the dense point cloud of the area of the target scene other than the weak texture areas is obtained, the weak texture areas are filled with point clouds according to their boundary lines and the dense point cloud; that is, the weak texture areas are not reconstructed directly, but are filled using the point cloud data associated with their boundary lines, completing the three-dimensional reconstruction of the entire target scene. This effectively solves the problem in the prior art that weak texture areas are reconstructed poorly, avoids holes and large amounts of noise in the reconstruction results, and achieves good hole removal. Since the weak texture areas are masked during reconstruction, the obtained result does not contain large amounts of noise, which effectively improves the accuracy of the reconstructed point cloud data.

The three-dimensional reconstruction method provided by the embodiments of the present invention is described in detail below.

Please refer to FIG. 2, a schematic flowchart of a three-dimensional reconstruction method provided by an embodiment of the present invention. It should be noted that the method is not limited to the specific order shown in FIG. 2 and described below; in other embodiments, the order of some of its steps may be exchanged as actually needed, and some steps may be omitted or deleted. The method can be applied in an electronic device. The specific flow shown in FIG. 2 is described in detail below.

Step S201: extract the weak texture areas in each to-be-processed image corresponding to the target scene.

In this embodiment, the target scene is the scene to be reconstructed; it may be an outdoor or an indoor scene, which is not limited here. For example, the flight path of an unmanned aerial vehicle (UAV) can be planned; during flight, the camera carried by the UAV captures the target scene and sends the captured images to the electronic device. In this way, the electronic device obtains the to-be-processed images corresponding to the target scene.

In this embodiment, the to-be-processed images corresponding to the target scene may include images of the target scene captured from different viewing angles by one or more cameras.

After obtaining the to-be-processed images, the electronic device can extract the weak texture areas using a deep learning framework combined with the spectral information of ground objects, thereby obtaining the weak texture areas in each to-be-processed image. A weak texture area can be understood as an area with little or no texture.

Step S202: perform mask processing on the weak texture areas in each to-be-processed image to obtain target images.

In this embodiment, considering that it is difficult to extract feature information from weak texture areas during three-dimensional reconstruction, the weak texture areas in each to-be-processed image are masked before reconstruction; that is, the weak texture areas are screened out so that they do not participate in the subsequent reconstruction process. A target image is thus the image obtained after the weak texture areas of a to-be-processed image have been masked out.
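Mask processing of a detected weak texture area can be sketched as follows; the boolean mask and the zero fill value are illustrative assumptions — the text only requires that the area be screened out so it contributes no features to later reconstruction:

```python
import numpy as np

def apply_mask(image, weak_texture_mask, fill_value=0):
    """Screen out weak texture areas: pixels flagged in the boolean mask
    are set to a constant fill value so that no feature points are
    extracted from them later (fill value is an assumption)."""
    target = image.copy()                # leave the original image intact
    target[weak_texture_mask] = fill_value
    return target                        # the "target image"
```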

Step S203: perform three-dimensional reconstruction on the target area according to the target images to obtain a dense point cloud of the target area; the target area is the area of the target scene other than the weak texture areas.

In this embodiment, the target area can be understood as the non-weak-texture area. Since a target image is obtained by masking out the weak texture areas of a to-be-processed image, three-dimensional reconstruction of the non-weak-texture area of the target scene can be performed from the target images, yielding a dense point cloud of the non-weak-texture area.

Step S204: perform point cloud filling on the weak texture areas according to the boundary lines of the weak texture areas and the dense point cloud.

In this embodiment, after the three-dimensional reconstruction of the non-weak-texture area of the target scene is completed, the boundary lines of the weak texture areas and the dense point cloud can be used to obtain the point cloud data associated with those boundary lines, and the point cloud filling of the weak texture areas is then completed according to that data. In this way, the three-dimensional reconstruction result of the entire target scene is obtained.

It can be seen that the three-dimensional reconstruction method provided by the embodiments of the present invention extracts the weak texture areas in each to-be-processed image corresponding to the target scene and masks them, thereby avoiding reconstruction of the weak texture areas during three-dimensional reconstruction. After the dense point cloud of the area of the target scene other than the weak texture areas is obtained, the weak texture areas are filled with point clouds according to their boundary lines and the dense point cloud: instead of reconstructing the weak texture areas directly, the point cloud data associated with their boundary lines is used to fill them, completing the reconstruction of the entire target scene. This effectively solves the problem in the prior art that weak texture areas are reconstructed poorly, avoids holes and large amounts of noise in the reconstruction result, and achieves good hole removal. Since the weak texture areas are masked during reconstruction, the result does not contain large amounts of noise, which effectively improves the accuracy of the reconstructed point cloud data.

在实际应用中,可以基于不同的方法来提取弱纹理区域。即上述的步骤S201可以包括:利用各待处理图像中的像素值或者预先训练的检测模型提取各待处理图像中的弱纹理区域,或者通过用户选取目标场景对应的各待处理图像中的弱纹理区域。In practical applications, weak texture regions can be extracted based on different methods. That is, the above-mentioned step S201 may include: extracting the weak texture area in each to-be-processed image by using the pixel value in each to-be-processed image or a pre-trained detection model, or selecting the weak texture in each to-be-processed image corresponding to the target scene by the user area.

也即是说,在电子设备获取目标场景对应的各待处理图像后,可以由用户在各待处理图像中选取出弱纹理区域,也可以由电子设备利用各待处理图像中的像素值或者预先训练的检测模型提取各待处理图像中的弱纹理区域。That is to say, after the electronic device obtains each image to be processed corresponding to the target scene, the user can select the weak texture area in each image to be processed, or the electronic device can use the pixel value in each The trained detection model extracts weakly textured regions in each image to be processed.

When the pixel values of a to-be-processed image are used to extract weak texture regions, the pixels of the image are traversed; whenever the current pixel value is greater than a preset value, a region-growing method examines the pixel values of the neighboring pixels, and if a neighbor's value is also greater than the preset value, the region continues to spread outward until pixel values fall below the preset value. In this way the weak texture regions in the image are obtained. The preset value can be set as needed, for example to 240.
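As a rough sketch of this step, the threshold-plus-region-growing extraction might look like the following. This is a minimal illustration assuming a grayscale image as a NumPy array; the function name and the 4-neighborhood choice are ours, not from the patent.

```python
from collections import deque

import numpy as np

def extract_bright_regions(img, thresh=240):
    """Region-grow from every pixel above `thresh` and return a boolean
    mask of connected bright (strongly reflective, weak-texture) areas."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    for sy in range(h):
        for sx in range(w):
            if img[sy, sx] > thresh and not mask[sy, sx]:
                # breadth-first region growing in the 4-neighborhood
                queue = deque([(sy, sx)])
                mask[sy, sx] = True
                while queue:
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and img[ny, nx] > thresh and not mask[ny, nx]:
                            mask[ny, nx] = True
                            queue.append((ny, nx))
    return mask
```

A real pipeline would run this on full-resolution aerial imagery and possibly post-process the mask (hole filling, minimum-area filtering).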

The pre-trained detection model may be a deep-learning convolutional neural network. When it is used to extract weak texture regions, each to-be-processed image is input into the model, which outputs the weak texture regions in the image. A specific procedure may be as follows:

(1) Label weak texture regions on historical UAV images; the unlabeled regions serve as background.

(2) Apply data augmentation to the raw data and the label data to enlarge the training set; augmentation methods include cropping, rotation, flipping, scaling, and adding noise.

(3) Divide the prepared dataset into three parts at a certain ratio, for example 6:2:2: a training set, a validation set, and a test set. The training data is first fed into a convolutional neural network, which generally comprises an input layer, convolutional layers, pooling layers, and fully connected layers, and is used to assign an initial class label to every pixel. Convolutional layers effectively capture local features of an image, and many such modules are nested together hierarchically. After training, the detection model is evaluated on the validation set, and the best-performing set of hyperparameters is then tested on the test set. The hyperparameters are tuned iteratively until the model performs best on the test set.
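The 6:2:2 split described above can be sketched as a small helper. This is a hypothetical illustration; a real pipeline would split image/label file pairs rather than bare samples.

```python
import random

def split_dataset(samples, ratios=(0.6, 0.2, 0.2), seed=0):
    """Shuffle and split a dataset into train/validation/test parts
    according to the given ratios (e.g. 6:2:2)."""
    items = list(samples)
    random.Random(seed).shuffle(items)
    n_train = int(len(items) * ratios[0])
    n_val = int(len(items) * ratios[1])
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])
```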

(4) Use the optimal detection model to classify the weak texture regions in each to-be-processed image, obtaining the weak texture regions and their categories. The boundary lines of the weak texture regions can be obtained by vectorizing the classification result.

In practical applications, the weak texture regions that tend to cause holes may be of several types (as shown in FIG. 3), and the different extraction methods above can extract the different types present in a to-be-processed image. That is, the step of extracting the weak texture regions using the pixel values of the images or a pre-trained detection model may include:

determining a first weak texture region in each to-be-processed image according to its pixel values; and inputting each to-be-processed image into the pre-trained detection model to obtain a second weak texture region. The weak texture regions thus include the first weak texture region and the second weak texture region, the first having a higher reflectivity than the second.

In this embodiment, the first weak texture region is a strongly reflective region with high reflectivity, such as a highly reflective road surface. The second weak texture region is a region outside the strongly reflective regions that tends to cause holes and whose reflectivity is lower than that of the first, such as water, roofs, and solar panels.

Strongly reflective weak texture regions can be determined from pixel intensity values, whereas weak texture regions such as water, roofs, and solar panels are extracted with the deep-learning convolutional neural network (the detection model), so that different types of weak texture regions are extracted by different methods.

It can be seen that the embodiment of the present invention uses different methods for the different types of weak texture regions that tend to cause holes: strongly reflective regions are determined from pixel intensity values combined with region growing, while weak texture regions such as water, roofs, and solar panels are extracted with a pre-trained detection model based on a deep-learning framework. This achieves accurate extraction of the weak texture regions prone to holes and ensures that neither holes nor heavy noise appear in the reconstruction result.

In this embodiment, the process of three-dimensionally reconstructing the target area from the target images may include feature point extraction and matching, sparse reconstruction, and dense reconstruction, finally yielding a dense point cloud of the target area. On this basis, referring to FIG. 4, the above step S203 may include the following sub-steps:

Sub-step S2031: perform feature point extraction and feature point matching on the target images to obtain matching point pairs between them.

Commonly used feature point detectors include SIFT (Scale Invariant Feature Transform), SURF (Speeded Up Robust Features), ORB (Oriented FAST and Rotated BRIEF), and Harris. In this embodiment, for example, the SIFT algorithm can be used: SIFT feature points are invariant to rotation, scaling, and brightness changes, making them very stable local features. After the SIFT feature points are extracted, feature point matching yields the matching point pairs between the target images.
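Descriptor matching with Lowe's ratio test, commonly paired with SIFT, can be sketched as below. The toy 2-D descriptors stand in for real 128-D SIFT descriptors, and the 0.8 ratio is a typical but arbitrary choice; the function name is ours.

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.8):
    """Brute-force nearest-neighbor matching with Lowe's ratio test.
    desc1: (N1, D) and desc2: (N2, D) descriptor arrays.
    Returns a list of (i, j) index pairs."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # accept only if clearly better than the second-best candidate
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

Production systems typically replace the brute-force loop with an approximate nearest-neighbor index, as the patent later notes for the initial image pair.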

Sub-step S2032: calculate the camera parameters corresponding to each target image and a sparse point cloud from the matching point pairs.

In this embodiment, once the matching point pairs between the target images are obtained, sparse reconstruction can be performed with SfM (Structure from Motion), which estimates the camera parameters of each target image and obtains a sparse point cloud by triangulation. SfM is a three-dimensional reconstruction algorithm that recovers camera poses from two or more views (images) and reconstructs sparse three-dimensional points. The camera parameters include the camera intrinsic parameters and the camera extrinsic parameters (i.e., the camera pose, comprising rotation and translation).

Sub-step S2033: generate a dense point cloud from the camera parameters and the sparse point cloud.

In this embodiment, dense reconstruction can be completed with multi-view stereo (MVS): given the camera parameters of each target image and the sparse point cloud, a photo-consistency function such as the sum of squared differences (SSD), the sum of absolute differences (SAD), or normalized cross-correlation (NCC) is used to densely match the target images and thereby reconstruct the dense point cloud of the target area in the target scene.
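The three photo-consistency measures named above can be written directly. This is a minimal sketch over flattened patch arrays; real MVS implementations evaluate them over windows sampled along epipolar lines.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two patches."""
    return float(np.sum((a - b) ** 2))

def sad(a, b):
    """Sum of absolute differences between two patches."""
    return float(np.sum(np.abs(a - b)))

def ncc(a, b):
    """Normalized cross-correlation: 1.0 for patches that differ only
    by an affine brightness change, which makes it robust to lighting."""
    za = a - a.mean()
    zb = b - b.mean()
    return float(np.sum(za * zb) / (np.linalg.norm(za) * np.linalg.norm(zb)))
```

Note the design trade-off this illustrates: SSD and SAD are cheap but sensitive to brightness changes, while NCC is invariant to them at extra cost.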

For example, dense matching based on depth-map fusion may comprise the following steps: (1) For each target image, select neighboring images to form a stereo image group (candidate set); in choosing the optimal neighborhood group, a sufficiently large baseline between images should be ensured. (2) Compute the depth of every pixel of the reference image. The classic pipeline is: compute the matching cost, aggregate costs, compute depth values, and finally refine the depths, generating a depth map and a normal map; the matching cost between two pixels may be computed with absolute differences (AD), the sum of absolute differences, the normalized correlation coefficient, mutual information (MI), or the Census transform (CT). (3) Optimize the cost volume according to certain criteria, e.g., that neighboring pixels should have continuous depth values, and compute the depth map of each target image using information-propagation strategies including spatial propagation, view propagation, and temporal propagation. (4) Remove erroneous depths caused by occlusion and noise with a left-right consistency check, remove isolated outliers by discarding small connected components, and smooth the depth map with median filtering, bilateral filtering, or other smoothing algorithms to improve its quality. (5) Integrate the multiple depth maps into a unified, enhanced scene representation while reducing left-right inconsistencies, completing the depth-map fusion and finally obtaining the dense point cloud.

In one implementation, sub-step S2032 may specifically include: determining two initial target images among the target images; calculating the camera parameters of the two initial target images from the matching point pairs between them, and calculating their corresponding sparse point cloud from those camera parameters; continually adding new target images, calculating the camera parameters and sparse points of each new target image from the matching relationship between the already generated sparse point cloud and the feature points of the new image, and optimizing them, until the camera parameters and sparse points of all target images have been calculated; and adjusting the camera parameters and sparse point cloud of all target images so that the sum of the reprojection errors of all corresponding points (points of the same name) projected onto their target images is minimized.

In other words, the sparse reconstruction in the three-dimensional reconstruction process can include three parts: initial reconstruction, incremental reconstruction, and global optimization.

During initial reconstruction, two views with a large mutually visible area (the two initial target images, as shown in FIG. 5) are selected for initialization; from the matching point pairs between them (as shown in FIG. 6), a fast approximate nearest-neighbor algorithm yields more accurate point pairs. The epipolar geometry constraint (as shown in FIG. 7) is then used to complete the initial camera pose estimation. Let P be any point in space, p1 and p2 the matching point pair in the left and right images, O1 and O2 the camera optical centers of the left and right images, and R, t the rotation matrix and translation of the camera motion (i.e., the camera extrinsic parameters). From the pinhole camera model:

s1·p1 = K·P, s2·p2 = K(R·P + t) (1)

where s1 and s2 are the depths corresponding to the two matched points and K is the camera intrinsic matrix (i.e., the camera intrinsic parameters). Using homogeneous coordinates, and since s1 and s2 are nonzero, this can be written up to scale as:

p1 = K·P, p2 = K(R·P + t) (2)

Denote the coordinates of p1 and p2 on the normalized image plane by x1 and x2; equation (2) can then be rearranged as:

x2 = R·x1 + t (3)

Multiplying both sides of equation (3) by t^ and then left-multiplying by x2^T gives:

x2^T·t^·R·x1 = 0 (4)

where t^ denotes the antisymmetric matrix of the translation vector t; introducing the antisymmetric matrix turns the cross product into a matrix product, i.e., a linear transformation, which facilitates computation. x2^T denotes the transpose of the coordinate vector x2.

Substituting p1 and p2 back into equation (4) gives:

p2^T·K^-T·t^·R·K^-1·p1 = 0 (5)

From equation (5) the essential matrix E = t^·R and the fundamental matrix F = K^-T·E·K^-1 are obtained.
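The relations E = t^·R and F = K^-T·E·K^-1 can be checked numerically. In the sketch below, `skew` builds the antisymmetric matrix t^, and the test verifies the epipolar constraint of equation (4) on a synthetic point; the function names are ours.

```python
import numpy as np

def skew(v):
    """Antisymmetric matrix v^ such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def essential_matrix(R, t):
    """E = t^ . R, as obtained from equation (5)."""
    return skew(t) @ R

def fundamental_matrix(K, R, t):
    """F = K^-T . E . K^-1."""
    K_inv = np.linalg.inv(K)
    return K_inv.T @ essential_matrix(R, t) @ K_inv
```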

The homography matrix describes the mapping between two planes and is usually applied to the transformation of coplanar points. Write the plane equation as:

n^T·P + d = 0 (6)

where n is the normal vector of the plane, d is the distance from the origin to the plane, and P is a point in space.

From the coplanarity of the matched feature points it follows that:

p2 = K(R − t·n^T/d)·K^-1·p1 (7)

From equation (7) the homography matrix is obtained:

H = K(R − t·n^T/d)·K^-1

With the obtained matching point pairs, the fundamental matrix F, the essential matrix E, and the homography matrix H can be estimated by the classic eight-point method; the camera motion R and t can then be recovered by singular value decomposition (SVD).

From FIG. 7, the motion equation between the point pair can be obtained:

s1·x1 = s2·R·x2 + t (8)

where s1 and s2 are the depth values of the feature point. Left-multiplying both sides of equation (8) by x1^ (the antisymmetric matrix of x1) gives:

s1·x1^·x1 = 0 = s2·x1^·R·x2 + x1^·t (9)

The left side of equation (9) is zero, and the right side can be regarded as an equation in s2. Since R and t are already known, s2 can be solved by least squares, and from the solved s2, s1 can be solved as well. In this way the depth values of the feature points on the two initial target images are obtained, and hence the spatial coordinates of every feature point, i.e., the sparse point cloud.
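The least-squares depth recovery from equations (8) and (9) can be sketched as follows. This is a toy illustration with our own function names for a single point pair; production pipelines typically triangulate via SVD over many views.

```python
import numpy as np

def skew(v):
    """Antisymmetric matrix of v."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def solve_depths(x1, x2, R, t):
    """Solve equation (9) for s2 by least squares, then back-substitute
    into equation (8) to recover s1, for one pair of normalized points."""
    a = skew(x1) @ R @ x2              # coefficient of s2 in equation (9)
    b = skew(x1) @ t                   # constant term in equation (9)
    s2 = -float(a @ b) / float(a @ a)  # least-squares solution of s2*a + b = 0
    p = s2 * (R @ x2) + t              # right-hand side of equation (8)
    s1 = float(p @ x1) / float(x1 @ x1)
    return s1, s2
```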

Incremental reconstruction builds on the initial reconstruction by reconstructing the remaining images one by one in an iterative loop. Each iteration comprises three sub-processes: camera localization based on Perspective-n-Point (PnP), scene expansion based on triangulation, and optimization of the camera parameters and scene point cloud based on bundle adjustment (BA). The loop runs until all cameras are successfully localized or no camera can be localized further. (1) PnP solves the 3D-to-2D point-pair motion problem: it describes how to estimate the camera pose when n 3D space points and their projected positions are known. The initial reconstruction provides the 3D coordinates of the feature points in the seed images; during incremental reconstruction, adding an image forms a 3D-to-2D point-pair motion problem. The PnP problem has several solution methods, such as P3P (which estimates the pose from three point pairs), the direct linear transform (DLT), EPnP (Efficient PnP), and UPnP; it can also be solved by nonlinear optimization, constructing a least-squares problem and solving it iteratively. (2) Triangulation-based scene expansion: taking the third target image as an example, a PnP problem is built from the 3D points generated by the initial reconstruction and the 2D matching points of the third image, the camera pose of the third image is estimated with methods such as P3P/EPnP, and further triangulation yields the 3D coordinates of more points; this step is repeated until the camera poses and 3D points of all target images are computed. (3) BA-based optimization of camera parameters and scene points: when solving camera parameters, the camera pose is generally first estimated with methods such as P3P/EPnP, and a least-squares optimization problem is then constructed to refine the estimates (i.e., BA).

Global optimization is performed after the camera parameters and sparse points of all target images have been solved: bundle adjustment further refines the camera poses and the sparse point cloud so that the sum of the reprojection errors of all corresponding points projected onto their target images is minimized, finally yielding the optimized camera parameters and sparse point cloud of all target images. A corresponding point (point of the same name) is an image point of the same object point in different images.

p2 = H21·p1, p1 = H12·p2 (10)

The reprojection error is expressed as:

E = Σi ( ‖p1i − p̂1i‖² + ‖p2i − p̂2i‖² ), subject to p̂2i = Ĥ21·p̂1i (11)

where (p̂1i, p̂2i) are the estimated, exactly matching point pairs in the two target images, and Ĥ21 is the estimated value in the reprojection-error process.
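As an illustration, the symmetric transfer form of this error under the homographies of equation (10) can be computed as below. This is a sketch with our own function name; equation (11) proper additionally estimates the perfectly matched point pairs, which is omitted here.

```python
import numpy as np

def transfer_error(H21, H12, pts1, pts2):
    """Sum of squared transfer errors under the homographies of
    equation (10); pts1 and pts2 are (N, 2) pixel coordinates."""
    def apply(H, p):
        # map a 2-D point through a homography in homogeneous coordinates
        q = H @ np.append(p, 1.0)
        return q[:2] / q[2]
    err = 0.0
    for p1, p2 in zip(pts1, pts2):
        err += float(np.sum((apply(H21, p1) - p2) ** 2))  # image 1 -> image 2
        err += float(np.sum((apply(H12, p2) - p1) ** 2))  # image 2 -> image 1
    return err
```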

In one implementation, the point cloud of a weak texture region can be filled from the point cloud data at its boundary line. Referring to FIG. 8, the above step S204 may include the following sub-steps:

Sub-step S2041: superimpose the boundary line of the weak texture region on the dense point cloud to obtain the point cloud data at the boundary line of the weak texture region.

Sub-step S2042: compute distribution statistics on the point cloud data at the boundary line of the weak texture region, and select the point cloud data within a preset range according to the statistical result.

For example, normal-distribution statistics can be computed on the point cloud data at the boundary line, and the point cloud data within two standard deviations (or some other number of standard deviations) can be extracted according to the result.
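The two-standard-deviation selection can be sketched as below, as a minimal one-dimensional illustration applied, for example, to the elevations of the boundary points; the function name is ours.

```python
import numpy as np

def filter_within_sigma(values, n_sigma=2.0):
    """Keep only the samples lying within n_sigma standard deviations
    of the mean, assuming an approximately normal distribution."""
    mu, sigma = values.mean(), values.std()
    return values[np.abs(values - mu) <= n_sigma * sigma]
```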

Sub-step S2043: fill the weak texture region with points according to the point cloud data within the preset range.

In this embodiment, since the point cloud data within the preset range belongs to the boundary line of the weak texture region, filling the region from this data effectively guarantees the quality of the three-dimensional reconstruction.

In one implementation, sub-step S2043 may include: when the weak texture region is water, taking the mean elevation of the point cloud data within the preset range as the elevation of the water and filling the water region with points accordingly.

In another implementation, sub-step S2043 may include: when the weak texture region is a non-water region, computing the normal vector of every point in the point cloud data within the preset range, taking the mean of these normal vectors as the vertical direction of the non-water region, and filling the non-water region with points using the point cloud data within the preset range and this vertical direction. The color of the points filled into the non-water region is the color of the region's location in the to-be-processed image.

In other words, when filling the point cloud of a weak texture region, the region can be handled according to its type (water or non-water). For water, the mean elevation of the point cloud data within the preset range is taken as the elevation of the water to complete the filling. For a non-water region, the normal vector of every point within the preset range is computed, the mean normal is taken as the region's vertical direction, and a plane is fitted from the point cloud data within the preset range together with this mean normal to complete the filling; the color of the filled points uses the color of the same location in the image. FIG. 9 shows the three-dimensional reconstruction result with holes removed. In this way, holes and heavy noise in the reconstruction result are effectively avoided, achieving a good hole-removal effect and a good reconstruction of the weak texture regions.
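The two filling rules can be sketched as follows. These are hypothetical helpers; the non-water case is reduced to computing the vertical direction, with the actual plane fitting and coloring omitted.

```python
import numpy as np

def fill_water(boundary_points, grid_xy):
    """Water: assign every fill position the mean elevation (z) of the
    boundary point cloud (boundary_points is an (N, 3) array)."""
    z = boundary_points[:, 2].mean()
    return np.column_stack([grid_xy, np.full(len(grid_xy), z)])

def vertical_direction(normals):
    """Non-water: average the per-point normal vectors of the boundary
    point cloud and renormalize to get the region's vertical direction."""
    v = normals.mean(axis=0)
    return v / np.linalg.norm(v)
```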

To carry out the corresponding steps of the above embodiments and their possible implementations, an implementation of a three-dimensional reconstruction apparatus is given below. Please refer to FIG. 10, a functional block diagram of a three-dimensional reconstruction apparatus 500 provided by an embodiment of the present invention. It should be noted that the basic principles and technical effects of the apparatus 500 provided by this embodiment are the same as those of the above embodiments; for brevity, matters not mentioned in this embodiment can be found in the corresponding parts of the above embodiments. The three-dimensional reconstruction apparatus 500 includes a weak texture region extraction module 510, a mask processing module 520, a three-dimensional reconstruction module 530, and a point cloud filling module 540.

The weak texture region extraction module 510 is configured to extract the weak texture regions in each to-be-processed image corresponding to the target scene.

It can be understood that the weak texture region extraction module 510 can perform the above step S201.

The mask processing module 520 is configured to mask the weak texture regions in each to-be-processed image to obtain the target images.

It can be understood that the mask processing module 520 can perform the above step S202.

The three-dimensional reconstruction module 530 is configured to three-dimensionally reconstruct the target area from the target images to obtain a dense point cloud of the target area, the target area being the area of the target scene other than the weak texture regions.

It can be understood that the three-dimensional reconstruction module 530 can perform the above step S203.

The point cloud filling module 540 is configured to fill the weak texture regions with points according to their boundary lines and the dense point cloud.

It can be understood that the point cloud filling module 540 can perform the above step S204.

Optionally, the weak texture region extraction module 510 is specifically configured to extract the weak texture regions in each to-be-processed image using the pixel values of the images or a pre-trained detection model, or to receive the user's selection of the weak texture regions in each to-be-processed image corresponding to the target scene.

The weak texture region extraction module 510 is configured to determine the first weak texture region in each to-be-processed image according to its pixel values, and to input each to-be-processed image into the pre-trained detection model to obtain the second weak texture region; the weak texture regions include the first weak texture region and the second weak texture region, the first having a higher reflectivity than the second.

Optionally, the three-dimensional reconstruction module 530 is specifically configured to perform feature point extraction and matching on the target images to obtain matching point pairs between them; to calculate the camera parameters of each target image and a sparse point cloud from the matching point pairs; and to generate a dense point cloud from the camera parameters and the sparse point cloud.

The three-dimensional reconstruction module 530 is configured to determine two initial target images among the target images; calculate the camera parameters of the two initial target images from the matching point pairs between them, and calculate their corresponding sparse point cloud from those camera parameters; continually add new target images, calculating and optimizing the camera parameters and sparse points of each new target image from the matching relationship between the already generated sparse point cloud and the feature points of the new image, until the camera parameters and sparse points of all target images have been calculated; and adjust the camera parameters and sparse point cloud of all target images so that the sum of the reprojection errors of all corresponding points projected onto their target images is minimized.

It can be understood that the three-dimensional reconstruction module 530 can also perform the above sub-steps S2031 to S2033.

Optionally, the point cloud filling module 540 is specifically configured to: superimpose the boundary line of the weak texture area onto the dense point cloud to obtain the point cloud data at the boundary line of the weak texture area; perform distribution statistics on the point cloud data at the boundary line of the weak texture area, and select the point cloud data within a preset range according to the statistical results; and perform point cloud filling on the weak texture area according to the point cloud data within the preset range.
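The embodiment does not fix a particular statistic for the "preset range"; as one hedged illustration (all names hypothetical), the selection could keep the boundary points whose elevation lies within a chosen number of standard deviations of the mean, discarding outliers before the filling step:

```python
import numpy as np

def select_boundary_points(boundary_points, k_sigma=2.0):
    """Distribution statistics on boundary point cloud data.

    boundary_points: (N, 3) points obtained where the weak texture area's
    boundary line overlaps the dense point cloud. Returns only the points
    whose elevation (z) lies within k_sigma standard deviations of the mean
    elevation, i.e. the point cloud data "within the preset range".
    """
    z = boundary_points[:, 2]
    mu, sigma = z.mean(), z.std()
    keep = np.abs(z - mu) <= k_sigma * sigma
    return boundary_points[keep]
```

Other choices of "preset range" (percentile bands, a fixed elevation window) would fit the same step; the threshold here is only an assumption for illustration.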

Specifically, when the weak texture area is a water area, the point cloud filling module 540 takes the average elevation of the point cloud data within the preset range as the elevation value of the water area and performs point cloud filling on the water area. When the weak texture area is a non-water area, it computes the normal vector of each point in the point cloud data within the preset range, takes the average of the normal vectors as the vertical direction of the non-water area, and performs point cloud filling on the non-water area using the point cloud data within the preset range and the vertical direction of the non-water area. The color of the points filled into the non-water area is the color at the position of the non-water area in the image to be processed.

It can be understood that the point cloud filling module 540 may also perform the above sub-steps S2041 to S2043.
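The two filling strategies of module 540 can be sketched as follows, assuming the fill positions are sampled as (x, y) coordinates inside the weak texture area; the grid sampling and all function names are illustrative assumptions, not the claimed implementation:

```python
import numpy as np

def fill_water_area(grid_xy, boundary_points):
    """Water area: every fill point takes the mean boundary elevation."""
    z = boundary_points[:, 2].mean()
    return np.column_stack([grid_xy, np.full(len(grid_xy), z)])

def fill_nonwater_area(grid_xy, boundary_points, boundary_normals):
    """Non-water area: the averaged boundary normal gives the surface
    orientation (the 'vertical direction'); elevations are solved from
    the plane through the boundary centroid with that normal."""
    n = boundary_normals.mean(axis=0)
    n = n / np.linalg.norm(n)          # unit average normal
    c = boundary_points.mean(axis=0)   # plane passes through the centroid
    # Plane equation n . (p - c) = 0 solved for z; assumes the averaged
    # normal is not horizontal (n_z != 0).
    dx = grid_xy[:, 0] - c[0]
    dy = grid_xy[:, 1] - c[1]
    z = c[2] - (n[0] * dx + n[1] * dy) / n[2]
    return np.column_stack([grid_xy, z])
```

Coloring the filled non-water points from the corresponding pixels of the image to be processed, as the paragraph above describes, would be a separate lookup step omitted here.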

It can be seen that, in the three-dimensional reconstruction apparatus provided by the embodiment of the present invention, the weak texture area extraction module extracts the weak texture areas in the images to be processed corresponding to the target scene; the mask processing module performs mask processing on the weak texture areas in the images to be processed to obtain the target images; the three-dimensional reconstruction module performs three-dimensional reconstruction of the target area according to the target images to obtain the dense point cloud of the target area; and the point cloud filling module performs point cloud filling on the weak texture areas according to their boundary lines and the dense point cloud. In this way, by extracting the weak texture areas in the images to be processed corresponding to the target scene and masking them, the three-dimensional reconstruction process avoids reconstructing the weak texture areas. After the dense point cloud of the area of the target scene other than the weak texture areas is obtained, the weak texture areas are filled according to their boundary lines and the dense point cloud. That is, the weak texture areas are not reconstructed directly; instead, they are filled using the point cloud data associated with their boundary lines, which completes the three-dimensional reconstruction of the entire target scene, effectively solves the prior-art problem of unsatisfactory reconstruction of weak texture areas, avoids holes and heavy noise in the reconstruction result, and achieves a good hole removal effect. Because the weak texture areas are masked during reconstruction, the result does not contain a large amount of noise, which effectively improves the accuracy of the point cloud data after three-dimensional reconstruction.

Please refer to FIG. 11, which is a schematic block diagram of an electronic device 700 according to an embodiment of the present invention. The electronic device 700 includes a memory 710, a processor 720, and a communication module 730. The memory 710, the processor 720, and the communication module 730 are directly or indirectly electrically connected to one another to enable data transmission or interaction. For example, these elements may be electrically connected to one another through one or more communication buses or signal lines.

The memory 710 is used for storing programs or data. The memory 710 may be, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or the like.

The processor 720 is used to read/write the data or programs stored in the memory 710 and to perform the corresponding functions. For example, when the computer program stored in the memory 710 is executed by the processor 720, the three-dimensional reconstruction method disclosed in the above embodiments can be implemented.

The communication module 730 is used to establish a communication connection between the electronic device 700 and other communication terminals through a network, and to send and receive data through the network.

It should be understood that the structure shown in FIG. 11 is only a schematic illustration of the electronic device 700; the electronic device 700 may include more or fewer components than those shown in FIG. 11, or have a configuration different from that shown in FIG. 11. The components shown in FIG. 11 may be implemented in hardware, software, or a combination thereof.

An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by the processor 720, the three-dimensional reconstruction method disclosed in the above embodiments is implemented.

In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may also be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the accompanying drawings illustrate the architecture, functionality, and operation of possible implementations of apparatuses, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two successive blocks may in fact be executed substantially concurrently, or sometimes in the reverse order, depending on the functionality involved. It is also noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.

In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.

If the functions are implemented in the form of software function modules and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the essence of the technical solution of the present invention, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

The above descriptions are merely preferred embodiments of the present invention and are not intended to limit the present invention. For those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. A three-dimensional reconstruction method, characterized in that the method comprises:
extracting weak texture areas from the images to be processed corresponding to a target scene;
performing mask processing on the weak texture areas in the images to be processed to obtain target images;
performing three-dimensional reconstruction of a target area according to the target images to obtain a dense point cloud of the target area, the target area being the area of the target scene other than the weak texture areas; and
performing point cloud filling on the weak texture areas according to the boundary lines of the weak texture areas and the dense point cloud.

2. The method according to claim 1, wherein extracting the weak texture areas from the images to be processed corresponding to the target scene comprises:
extracting the weak texture areas from each image to be processed by using the pixel values in the image or a pre-trained detection model, or having a user select the weak texture areas in the images to be processed corresponding to the target scene.

3. The method according to claim 2, wherein extracting the weak texture areas from each image to be processed by using the pixel values in the image or a pre-trained detection model comprises:
determining a first weak texture area in each image to be processed according to the pixel values in the image; and
inputting each image to be processed into a pre-trained detection model to obtain a second weak texture area in the image, wherein the weak texture areas comprise the first weak texture area and the second weak texture area, and the reflectivity of the first weak texture area is higher than that of the second weak texture area.

4. The method according to claim 1, wherein performing point cloud filling on the weak texture area according to the boundary line of the weak texture area and the dense point cloud comprises:
superimposing the boundary line of the weak texture area onto the dense point cloud to obtain point cloud data at the boundary line of the weak texture area;
performing distribution statistics on the point cloud data at the boundary line of the weak texture area, and selecting point cloud data within a preset range according to the statistical results; and
performing point cloud filling on the weak texture area according to the point cloud data within the preset range.

5. The method according to claim 4, wherein performing point cloud filling on the weak texture area according to the point cloud data within the preset range comprises:
when the weak texture area is a water area, taking the average elevation of the point cloud data within the preset range as the elevation value of the water area, and performing point cloud filling on the water area.

6. The method according to claim 4, wherein performing point cloud filling on the weak texture area according to the point cloud data within the preset range comprises:
when the weak texture area is a non-water area, computing the normal vector of each point in the point cloud data within the preset range, taking the average of the normal vectors as the vertical direction of the non-water area, and performing point cloud filling on the non-water area by using the point cloud data within the preset range and the vertical direction of the non-water area.

7. The method according to claim 1, wherein performing three-dimensional reconstruction of the target area according to the target images to obtain the dense point cloud of the target area comprises:
performing feature point extraction and feature point matching on the target images to obtain matching point pairs between the target images;
computing camera parameters and a sparse point cloud corresponding to each target image according to the matching point pairs; and
generating the dense point cloud according to the camera parameters and the sparse point cloud.

8. A three-dimensional reconstruction apparatus, characterized in that the apparatus comprises:
a weak texture area extraction module, configured to extract weak texture areas from the images to be processed corresponding to a target scene;
a mask processing module, configured to perform mask processing on the weak texture areas in the images to be processed to obtain target images;
a three-dimensional reconstruction module, configured to perform three-dimensional reconstruction of a target area according to the target images to obtain a dense point cloud of the target area, the target area being the area of the target scene other than the weak texture areas; and
a point cloud filling module, configured to perform point cloud filling on the weak texture areas according to the boundary lines of the weak texture areas and the dense point cloud.

9. An electronic device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the three-dimensional reconstruction method according to any one of claims 1 to 7.

10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the three-dimensional reconstruction method according to any one of claims 1 to 7.
CN202210044808.4A 2022-01-14 2022-01-14 Three-dimensional reconstruction method, three-dimensional reconstruction device, electronic equipment and computer-readable storage medium Pending CN114494589A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210044808.4A CN114494589A (en) 2022-01-14 2022-01-14 Three-dimensional reconstruction method, three-dimensional reconstruction device, electronic equipment and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN114494589A true CN114494589A (en) 2022-05-13

Family

ID=81512837

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210044808.4A Pending CN114494589A (en) 2022-01-14 2022-01-14 Three-dimensional reconstruction method, three-dimensional reconstruction device, electronic equipment and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN114494589A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017067390A1 (en) * 2015-10-20 2017-04-27 努比亚技术有限公司 Method and terminal for obtaining depth information of low-texture regions in image
CN109712223A (en) * 2017-10-26 2019-05-03 北京大学 A kind of threedimensional model automatic colouring method based on textures synthesis
US20200255143A1 (en) * 2017-11-07 2020-08-13 SZ DJI Technology Co., Ltd. Three-dimensional reconstruction method, system and apparatus based on aerial photography by unmanned aerial vehicle
CN111967484A (en) * 2019-05-20 2020-11-20 长沙智能驾驶研究院有限公司 Point cloud clustering method and device, computer equipment and storage medium
CN111197976A (en) * 2019-12-25 2020-05-26 山东唐口煤业有限公司 A 3D Reconstruction Method Considering Multi-stage Matching Propagation in Weak Texture Regions
CN113837943A (en) * 2021-09-28 2021-12-24 广州极飞科技股份有限公司 Image processing method, apparatus, electronic device and readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JULIEN FAYER等: "Texturing and inpainting a complete tubular 3D object reconstructed from partial views", 《COMPUTERS & GRAPHICS》, 22 May 2018 (2018-05-22) *
ZHANG SIYUAN: "Research on Robust Three-Dimensional Reconstruction of Weakly Textured Object Surfaces", China Masters' Theses Full-text Database, 15 February 2021 (2021-02-15) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115330940A (en) * 2022-08-09 2022-11-11 北京百度网讯科技有限公司 Three-dimensional reconstruction method, device, equipment and medium
CN116704152A (en) * 2022-12-09 2023-09-05 荣耀终端有限公司 Image processing method and electronic device
CN116704152B (en) * 2022-12-09 2024-04-19 荣耀终端有限公司 Image processing method and electronic device
CN116109781A (en) * 2023-04-12 2023-05-12 深圳市其域创新科技有限公司 Three-dimensional reconstruction method and system
CN117593472A (en) * 2024-01-18 2024-02-23 成都市灵奇空间软件有限公司 A method and system for real-time modeling and reconstruction of local three-dimensional scenes using video streams

Similar Documents

Publication Publication Date Title
Penner et al. Soft 3d reconstruction for view synthesis
Liu et al. Depth-map completion for large indoor scene reconstruction
Kim et al. Scene reconstruction from high spatio-angular resolution light fields.
Jancosek et al. Exploiting visibility information in surface reconstruction to preserve weakly supported surfaces
Hirschmuller Stereo processing by semiglobal matching and mutual information
Nair et al. A survey on time-of-flight stereo fusion
US8929645B2 (en) Method and system for fast dense stereoscopic ranging
CN114494589A (en) Three-dimensional reconstruction method, three-dimensional reconstruction device, electronic equipment and computer-readable storage medium
Gehrig et al. Improving sub-pixel accuracy for long range stereo
Banno et al. Disparity map refinement and 3D surface smoothing via directed anisotropic diffusion
CN108090960A (en) A kind of Object reconstruction method based on geometrical constraint
EP3293700B1 (en) 3d reconstruction for vehicle
WO2009023044A2 (en) Method and system for fast dense stereoscopic ranging
Ma et al. A modified census transform based on the neighborhood information for stereo matching algorithm
CN106530333B (en) Hierarchical Optimal Stereo Matching Method Based on Binding Constraints
Shivakumar et al. Real time dense depth estimation by fusing stereo with sparse depth measurements
Gadasin et al. Reconstruction of a Three-Dimensional Scene from its Projections in Computer Vision Systems
Vu et al. Efficient hybrid tree-based stereo matching with applications to postcapture image refocusing
Zhu et al. Local readjustment for high-resolution 3d reconstruction
CN112637582A (en) Three-dimensional fuzzy surface synthesis method for monocular video virtual view driven by fuzzy edge
Chen et al. Bidirectional optical flow NeRF: High accuracy and high quality under fewer views
Li et al. Dynamic view synthesis with spatio-temporal feature warping from sparse views
Chen et al. MoCo‐Flow: Neural Motion Consensus Flow for Dynamic Humans in Stationary Monocular Cameras
Choi et al. Implementation of Real‐Time Post‐Processing for High‐Quality Stereo Vision
CN115409949A (en) Model training method, perspective image generation method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination