
CN108734728A - Three-dimensional reconstruction method for space targets based on high-resolution sequence images - Google Patents

Three-dimensional reconstruction method for space targets based on high-resolution sequence images

Info

Publication number
CN108734728A
CN108734728A (application CN201810377855.4A)
Authority
CN
China
Prior art keywords
image
model
space target
point cloud
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810377855.4A
Other languages
Chinese (zh)
Inventor
杨宁
张子腾
郭雷
李晖晖
郭世平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201810377855.4A
Publication of CN108734728A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a method for three-dimensional reconstruction of space targets based on high-resolution sequence images. SIFT feature descriptors are used to extract accurate and robust point features from the sequence images in turn, and adjacent images are matched quickly and accurately through cascade hashing. Combined with the corresponding camera intrinsic parameter matrix, structure from motion is used to obtain the relative poses between the images in the sequence and a sparse point cloud model of the space target. By matching the sequence images, applying diffusion interpolation, and iteratively filtering, a dense three-dimensional point cloud model of the space target is obtained. Surface reconstruction is performed on the dense point cloud model to obtain a dense mesh model of the space target, and texture mapping is applied to the accurate dense mesh model to finally obtain a visualized three-dimensional model of the space target. The invention achieves fast and accurate image matching, repairs specular-highlight holes on the reconstructed surface, and obtains a fine, visualized three-dimensional model of the space target.

Description

A Method for Three-Dimensional Reconstruction of Space Targets Based on High-Resolution Sequence Images

Technical Field

The invention belongs to the technical field of computer-vision three-dimensional reconstruction, and relates to a method for three-dimensional reconstruction of space targets based on high-resolution sequence images.

Background Art

With the maturation and development of aerospace science and technology and the growing military value of outer space, countries around the world attach particular importance to the surveillance and situational awareness of space targets from both space-based and ground-based platforms. While responding to the threat of foreign space military forces, China needs to strive to improve its own space military capabilities and gradually meet its needs in future space military confrontations. New space-target surveillance systems and technologies, especially the acquisition and estimation of three-dimensional models, attitudes, and behavior information of both cooperative and non-cooperative targets, are becoming increasingly important. Although China has made great progress in aerospace and computer science and leads the world in some technologies, certain technical problems have not yet been overcome, and China has not yet mastered the ability to acquire three-dimensional models of key space targets. How to make full use of China's advantages in aerospace technology, combined with disciplines such as computer vision and computer graphics, to quickly master the acquisition of three-dimensional models of key space targets, especially non-cooperative targets, is therefore particularly urgent. Three-dimensional reconstruction of space targets based on high-resolution image sequences applies three-dimensional reconstruction technology with a focus on overcoming the difficulties of target reconstruction and three-dimensional model acquisition from high-resolution image sequences.

Image-based three-dimensional reconstruction is a classic computer vision problem whose core goal can be stated as follows: given a set of images of an object or scene, under basic assumptions about the materials, viewpoints, and lighting conditions, estimate the most probable three-dimensional shape that reasonably explains those images. This definition highlights the difficulty of the task, namely that the materials, viewpoints, and lighting conditions are assumed to be known; if they are unknown, the general problem is ill-posed, since different combinations of materials, viewpoints, and lighting can produce exactly the same images. Without reasonable assumptions, therefore, no method can accurately recover three-dimensional structure from images alone. The earliest applications of three-dimensional reconstruction mainly involved structured image sets in which the image order matters, such as video sequences. Some MVS (Multi-View Stereo) applications follow the same pattern, such as Google Street View and Microsoft Street View, but MVS systems now exist that can handle unordered image sets in different settings and on different hardware, such as 3D maps built from aerial images. Fast, high-quality feature extraction and description have enabled SFM (Structure from Motion) to process unstructured datasets, and high-quality descriptors allow buildings to yield longer tracks from images taken under different poses and illuminations.

At present, in the field of three-dimensional reconstruction from high-resolution sequence images, domestic research that takes space targets as its subject in a targeted and systematic way is lacking; work at this stage remains at the level of preliminary discussion, scheme analysis, technical demonstration, or partial algorithm tests in limited simulated experimental environments, and a relatively complete theoretical framework has not yet been formed. In addition, since no measured images are yet available domestically as a basis, difficulties may arise that go beyond the existing theory of three-dimensional reconstruction. Given the characteristics of the space targets in this project, the effects of data corruption caused by non-Lambertian surfaces (mainly specular highlights and non-uniform illumination on key space targets) must be addressed first, since such corruption may pose unpredictable challenges to algorithm robustness. Furthermore, satellite targets contain thin, linear structures such as solar panels and antennas. These remain major technical problems in the field of three-dimensional reconstruction.

Summary of the Invention

Technical Problem to Be Solved

To overcome the deficiencies of the prior art, the present invention proposes a method for three-dimensional reconstruction of space targets based on high-resolution sequence images, which can realize three-dimensional reconstruction of a space target and obtain its three-dimensional model.

Technical Solution

A method for three-dimensional reconstruction of space targets based on high-resolution sequence images, characterized by the following steps:

Step 1: Use the SIFT feature descriptor to extract accurate and robust point features from a set of high-resolution sequence images of a space target, I = {I_1, I_2, ..., I_n}, in turn, and quickly and accurately match the N neighboring image pairs set for each image through cascade hashing, obtaining the matching set of the N neighboring image pairs of image I_i as M_i = {m_{i,i-N}, m_{i,i-N+1}, ..., m_{i,j}, ..., m_{i,i+N-1}, m_{i,i+N}};

The accurate and robust point features exhibit scale invariance and rotation invariance;

Step 2: Use RANSAC estimation to perform stereo matching on the feature-matched image pairs. In each RANSAC iteration, the image pair with the most feature-matching points is selected from the neighboring-pair matching set M_i as the initial matching pair; combined with the camera intrinsic parameter matrix K of the images, the fundamental matrix F is estimated by the eight-point algorithm. A normalized singular value decomposition of F yields the relative rotation matrix P and translation matrix t of the image pair. Incremental structure from motion is then used to recover the capture poses of the camera observing the space target, yielding a structured image set and a sparse point cloud model of the space target;

Step 3: Using the structured image set and the sparse point cloud model of the space target, perform fine Harris or DoG corner matching on the sequence images I = {I_1, I_2, ..., I_n}, apply bilinear diffusion interpolation to the sparse feature point cloud model, and iteratively filter out erroneous points scattered outside and inside the actual surface under the constraint of the photometric consistency principle. The diffusion interpolation and filtering process may be iterated 3 times to obtain a dense three-dimensional point cloud model of the space target;

Step 4: Perform surface reconstruction based on a spatial floating-scale implicit function on the dense three-dimensional point cloud model of the space target: the zero isosurface {x | implictF(x) = 0} of the implicit function defined over a generated regularized octree of voxels is used as the reconstructed model surface of the space target, yielding an initial dense mesh model of the space target composed of triangular facets;

Step 5: Remove redundancy from the initial dense mesh model of the space target by setting a geometric confidence threshold and removing the envelope from the edges inward, obtaining a dense mesh model of the space target;

Step 6: Apply large-scale seamless texture mapping to the dense mesh model of the space target. A pairwise Markov random field energy E(l) is used to assign a view l_i as the texture of mesh facet F_i and, correspondingly, l_j as the texture of mesh facet F_j, where F_i and F_j denote triangular facets of the space-target mesh model. This process finally yields a finely visualized three-dimensional model of the space target. The energy is:

E(l) = Σ_{F_i ∈ Faces} E_data(F_i, l_i) + Σ_{(F_i, F_j) ∈ Edges} E_smooth(F_i, F_j, l_i, l_j)

E_data represents the data-term constraint for selecting the best view of the corresponding facet F_i ∈ Faces, and E_smooth represents the smoothness-term constraint at the boundary seam (F_i, F_j) ∈ Edges of two mesh facets;

Step 7: Eliminate neighboring-view occlusions via gradient-magnitude photometric consistency checks, and further adjust the colors of neighboring surfaces using Euclidean-distance weight decay, reducing the visibility of seams in the space-target model and obtaining a finely visualized three-dimensional model of the space target.

In step 2, incremental structure from motion is used to recover the capture poses of the camera observing the space target and to obtain the structured image set and the sparse point cloud model of the space target as follows: draw 8 points uniformly at random from the feature points; use these 8 pairs of corresponding points to solve the minimal-configuration solution of the fundamental matrix F; verify F with each remaining pair of corresponding points, and when the distance satisfies the threshold requirement, the correspondence is deemed consistent; place each image into the tracks after stereo matching, obtaining a connected set of matched feature points across multiple views; add all image cameras to the tracks by adding one camera at a time; and finally optimize the result by bundle adjustment to obtain the camera capture poses, the structured image set, and the sparse point cloud model of the space target.

The method of step 7 for obtaining the finely visualized three-dimensional model of the space target is as follows: construct the pairwise Markov random field energy E(l) to assign a view as the texture of each mesh facet, where E(l) comprises a data term E_data and a smoothness term E_smooth; eliminate neighboring-view occlusions via gradient-magnitude photometric consistency checks and further adjust the surface colors; and select the view set with smaller cost for image stitching, using a cost function whose cost value w is determined by angle, distance, and color difference.

The surface color adjustment is performed as follows: segment the images with the Potts model, search for color discontinuities along adjacent seam edges, reduce color differences between facets using Euclidean-distance weight decay, and finally apply Poisson editing for local optimization, reducing the visibility of seams in the space-target model.
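The Euclidean-distance weight decay described above can be sketched as follows. This is an illustrative simplification (the function name and the decay constant are assumptions, and the Potts-model segmentation and Poisson-editing steps are omitted): each of two texture strips meeting at a seam contributes a weight that decays exponentially with distance from its own side of the overlap, suppressing visible color jumps.

```python
import numpy as np

def blend_overlap(strip_a, strip_b, decay=8.0):
    """Blend two (H, W) texture strips that overlap around a seam.

    Each strip contributes with a weight that decays exponentially with
    Euclidean distance from its own side of the overlap, so colors
    transition smoothly instead of jumping at the seam.  Illustrative
    sketch only; names and the decay constant are assumptions.
    """
    h, w = strip_a.shape
    x = np.arange(w, dtype=float)
    w_a = np.exp(-x / decay)              # strip A dominates near x = 0
    w_b = np.exp(-(w - 1 - x) / decay)    # strip B dominates near x = W-1
    return (strip_a * w_a + strip_b * w_b) / (w_a + w_b)
```

In a full pipeline the blended result would then be locally refined with Poisson editing, as the text describes.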

Beneficial Effects

The method for three-dimensional reconstruction of space targets based on high-resolution sequence images proposed by the present invention uses SIFT feature descriptors to extract accurate and robust point features from the sequence images in turn, and matches adjacent images quickly and accurately through cascade hashing; combined with the corresponding camera intrinsic parameter matrix, structure from motion is used to obtain the relative poses between the images in the sequence and a sparse point cloud model of the space target; by matching the sequence images, applying diffusion interpolation, and iteratively filtering, a dense three-dimensional point cloud model of the space target is obtained; surface reconstruction is performed on the dense point cloud model to obtain a dense mesh model of the space target; and texture mapping is applied to the accurate dense mesh model to finally obtain a visualized three-dimensional model of the space target.

In the present invention, high-dimensional features are encoded with locality-sensitive hashing; a coarse search is performed with multi-table hashing of short codes, and the returned candidates are mapped into a higher-dimensional Hamming space. The resulting Hamming-distance hash table enables fast and accurate image matching. Surface reconstruction of the dense point cloud model with the implicit function method builds a dense mesh model that fills the specular-highlight holes produced during reconstruction by the characteristics of space-target data. Large-scale texture mapping with color adjustment attenuated by edge-length weights eliminates seam visibility and enhances the visualization of the model.

Brief Description of the Drawings

Fig. 1 Overall design of the specific embodiment of the system of the present invention

Fig. 2 Obtaining the relative poses between image sequences and the sparse point cloud model of the space target in the specific embodiment of the present invention

Fig. 3 Octree generation strategy in the specific embodiment of the present invention

Fig. 4 Texture mapping result obtained in the example of the present invention

Detailed Description of the Embodiments

The present invention is further described below with reference to the embodiments and the accompanying drawings:

The technical solution adopted by the embodiments of the present invention is as follows:

A set of high-resolution sequence images of a space target, I = {I_1, I_2, ..., I_n}, is input, and the SIFT feature descriptor is used to extract accurate and robust point features from each image, characterized by scale invariance and rotation invariance. The N neighboring image pairs set for each image are matched quickly and accurately through cascade hashing, where the matching set of the N neighboring image pairs of image I_i is M_i = {m_{i,i-N}, m_{i,i-N+1}, ..., m_{i,j}, ..., m_{i,i+N-1}, m_{i,i+N}};

For a pair of matching points X_1 and X_2 between two adjacent matched images, the epipolar geometry constraint X_1^T F X_2 = 0 must hold, where F is the fundamental matrix. The image pair with the most feature-matching points is selected from the neighboring-pair matching set M_i as the initial matching pair; combined with the camera intrinsic parameter matrix K of the images, the fundamental matrix F is estimated from 8 correspondences by the eight-point algorithm. A normalized singular value decomposition of F yields the relative rotation matrix P and translation matrix t of the image pair. Incremental structure from motion, adding one image at a time, is then used until the capture poses of the camera observing the space target are recovered, yielding a structured image set and a sparse point cloud model of the space target;

Using the structured image set and sparse point cloud model already obtained, fine Harris or DoG (Difference-of-Gaussian) corner matching is performed on the sequence images I = {I_1, I_2, ..., I_n}; bilinear diffusion interpolation is applied to the sparse feature point cloud model, and erroneous points scattered outside and inside the actual surface are iteratively filtered out under the constraint of the visibility consistency principle. The diffusion interpolation and filtering process may be iterated 3 times to obtain a dense three-dimensional point cloud model of the space target;

The dense three-dimensional point cloud model of the space target is used for surface reconstruction based on a spatial floating-scale implicit function: the zero isosurface {x | implictF(x) = 0} of the implicit function defined over a generated regularized octree of voxels is used as the reconstructed model surface of the space target, where x is a point position to be reconstructed and implictF(x) is the defined implicit function. The zero isosurface can be extracted with the marching cubes algorithm. This process finally yields an initial dense mesh model of the space target composed of triangular facets;
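A minimal sketch of the zero-isosurface idea, under simplifying assumptions: the implicit function is approximated here as a Gaussian-weighted sum of signed offsets along the sample normals (the actual basis functions are not specified in the text), and a surface point is located along a segment by bisection rather than marching cubes. All names are illustrative.

```python
import numpy as np

def implicit_f(x, points, normals, scales):
    # Signed-distance-style implicit function: each oriented sample
    # (point p_i, normal n_i, scale s_i) contributes a signed offset
    # <n_i, x - p_i>, weighted by a Gaussian of its own scale.  This is
    # a hypothetical simplification of the implictF described above.
    d = x[None, :] - points                      # (N, 3) offsets
    signed = np.einsum('ij,ij->i', d, normals)   # signed distance per sample
    w = np.exp(-np.sum(d * d, axis=1) / (2.0 * scales ** 2))
    return np.sum(w * signed) / (np.sum(w) + 1e-12)

def zero_crossing(a, b, f, iters=40):
    # Bisection for a point with f(x) = 0 on segment [a, b]; the set
    # {x | f(x) = 0} is the reconstructed surface.
    fa = f(a)
    for _ in range(iters):
        m = 0.5 * (a + b)
        if f(m) * fa <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)
```

For oriented samples lying on the plane z = 0 with normals pointing along +z, the zero crossing of this function along the z axis lands on the plane, which is the behavior the zero-isosurface extraction relies on.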

The initial dense mesh model of the space target is then cleaned of redundancy; a geometric confidence threshold can be set to remove the envelope from the edges inward, yielding a dense mesh model of the space target;

The dense mesh model of the space target is used for large-scale seamless texture mapping. A pairwise Markov random field energy E(l) is used to assign a view l_i as the texture of mesh facet F_i and, correspondingly, l_j as the texture of mesh facet F_j, where F_i and F_j denote triangular facets of the space-target mesh model. This process finally yields a finely visualized three-dimensional model of the space target. The energy is:

E(l) = Σ_{F_i ∈ Faces} E_data(F_i, l_i) + Σ_{(F_i, F_j) ∈ Edges} E_smooth(F_i, F_j, l_i, l_j)

E_data represents the data-term constraint for selecting the best view of the corresponding facet F_i ∈ Faces, and E_smooth represents the smoothness-term constraint at the boundary seam (F_i, F_j) ∈ Edges of two mesh facets.
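The text does not name a solver for minimizing E(l); as an illustrative choice only, the labeling can be computed with iterated conditional modes (ICM) on the face adjacency graph. All names below are assumptions.

```python
import numpy as np

def icm_labeling(data_cost, edges, smooth_cost, iters=10):
    """Greedy minimisation of the pairwise MRF energy
        E(l) = sum_i E_data(F_i, l_i) + sum_(i,j) E_smooth(l_i, l_j)
    by iterated conditional modes (an illustrative solver choice).

    data_cost   : (n_faces, n_views) array of E_data values
    edges       : list of (i, j) adjacent-face index pairs
    smooth_cost : function (l_i, l_j) -> float
    """
    n_faces, n_views = data_cost.shape
    labels = np.argmin(data_cost, axis=1)        # start from best data term
    nbrs = [[] for _ in range(n_faces)]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    for _ in range(iters):
        for i in range(n_faces):
            costs = data_cost[i].copy()
            for v in range(n_views):
                costs[v] += sum(smooth_cost(v, labels[j]) for j in nbrs[i])
            labels[i] = int(np.argmin(costs))    # best label given neighbours
    return labels
```

With a Potts-style smoothness term, a facet whose data term weakly prefers a different view is pulled toward the view of its neighbours, which is exactly the seam-reducing effect the pairwise energy is meant to produce.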

Finally, neighboring-view occlusions are eliminated via gradient-magnitude photometric consistency checks, and the colors of neighboring surfaces are further adjusted using Euclidean-distance weight decay, reducing the visibility of seams in the space-target model and obtaining a finely visualized three-dimensional model of the space target.

The SIFT features of the sequence images are computed as follows: first, differences between images of adjacent scales are computed, yielding a series of images in which extremum points are sought in scale space; the extremum points are then filtered by traversing the pixels of each difference-of-Gaussian image in turn and comparing each pixel with its eight neighbors in the same layer and the nine neighboring pixels in each of the layers above and below; stable feature points are identified and unstable extremum points are screened out; the Gaussian-smoothed image closest to the scale of each keypoint is selected according to that scale; a neighborhood is taken around the keypoint and the gradients of its points are Gaussian-weighted, with each subregion generating a descriptor;
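The 26-neighbour extremum test described above (8 neighbours in the same DoG layer plus 9 in each adjacent layer) can be sketched as follows; orientation assignment and descriptor construction are omitted, and the function name is an assumption:

```python
import numpy as np

def dog_extrema(dog):
    """Locate scale-space extrema in a difference-of-Gaussians stack.

    dog : (S, H, W) array of adjacent-scale image differences.
    A pixel is kept when it is the strict maximum (or minimum) of the
    3x3x3 cube formed by its 8 in-layer neighbours and the 9 neighbours
    in the layers above and below.  Detection step only, as a sketch.
    """
    S, H, W = dog.shape
    keypoints = []
    for s in range(1, S - 1):
        for y in range(1, H - 1):
            for x in range(1, W - 1):
                cube = dog[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
                v = dog[s, y, x]
                if v == cube.max() and (cube == v).sum() == 1:
                    keypoints.append((s, y, x, 'max'))
                elif v == cube.min() and (cube == v).sum() == 1:
                    keypoints.append((s, y, x, 'min'))
    return keypoints
```

A production implementation would additionally reject low-contrast and edge responses, which is the screening of unstable extrema mentioned in the text.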

Fast and accurate matching with cascade hashing proceeds as follows: with a locality-sensitive hashing strategy, hash functions of the hash clusters are judged by inner-product similarity; a random hyperplane vector r is drawn from a D-dimensional normal distribution, converting the 128-dimensional feature descriptors into binary codes; a coarse search is performed with multi-table hash queries over short codes; the returned candidates are remapped into a higher-dimensional Hamming space and their Hamming distances computed, and the resulting Hamming-distance hash table serves as the key reference for exact queries; finally, the cascade hashing strategy is used for accurate and fast matching between images;
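A toy sketch of the cascade idea under stated assumptions: random hyperplanes drawn from a normal distribution give one bit per inner-product sign, a short code selects a coarse candidate bucket, and a longer code re-ranks the bucket by Hamming distance. Real implementations use multiple short-code tables and 128-D SIFT descriptors; the dimensions, names, and single-table structure here are illustrative simplifications.

```python
import numpy as np

def lsh_codes(desc, planes):
    # sign(<r, x>) per random hyperplane r turns each descriptor row
    # into one bit of a binary code (LSH for inner-product similarity).
    return (desc @ planes.T > 0).astype(np.uint8)

def cascade_match(q, db, seed=0, short_bits=8, long_bits=64):
    rng = np.random.default_rng(seed)
    dim = db.shape[1]
    short_p = rng.standard_normal((short_bits, dim))
    long_p = rng.standard_normal((long_bits, dim))
    q_s, db_s = lsh_codes(q[None], short_p)[0], lsh_codes(db, short_p)
    # coarse search: candidates sharing the query's short-code bucket
    cand = np.where((db_s == q_s).all(axis=1))[0]
    if cand.size == 0:                    # fall back to the whole set
        cand = np.arange(db.shape[0])
    # fine search: Hamming distance in the longer code space
    q_l, db_l = lsh_codes(q[None], long_p)[0], lsh_codes(db[cand], long_p)
    ham = (db_l != q_l).sum(axis=1)
    return int(cand[int(np.argmin(ham))])
```

The coarse stage keeps the candidate list short, so the Hamming-distance re-ranking touches only a small fraction of the database, which is what makes the cascade fast.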

The fundamental matrix is computed with the eight-point algorithm as follows: the data set is the set of correspondences output by feature matching, and the constraint is the epipolar geometry; with 8 correspondences, the fundamental matrix F can be estimated by the eight-point algorithm. For matching points X_1 and X_2 between two images, the epipolar constraint X_1^T F X_2 = 0 must hold. This equation is linear and homogeneous in the 9 unknown parameters of F; since F is unique only up to a constant factor, one nonzero parameter can be normalized away, leaving 8 unknowns, so if 8 pairs of matching points are available, F can be determined linearly. Because the fundamental matrix contains the intrinsic and extrinsic parameter information of the two images, the essential matrix E = K^T F K can be obtained from the fundamental matrix together with the camera intrinsic matrix K, and a singular value decomposition of the essential matrix yields the extrinsic parameters of the second image.
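The relation E = K^T F K and the SVD-based extraction of the second camera's extrinsics can be sketched as follows. The standard decomposition yields two candidate rotations and a translation direction known only up to scale and sign; disambiguating among the four (R, t) combinations by cheirality is omitted here, and the function name is an assumption.

```python
import numpy as np

def decompose_essential(F, K):
    # E = K^T F K relates the fundamental matrix to the essential matrix
    # when both views share the intrinsic matrix K.
    E = K.T @ F @ K
    U, _, Vt = np.linalg.svd(E)
    # enforce det = +1 so the extracted matrices are proper rotations
    if np.linalg.det(U @ Vt) < 0:
        Vt = -Vt
    W = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
    R1 = U @ W @ Vt          # the two candidate relative rotations
    R2 = U @ W.T @ Vt
    t = U[:, 2]              # translation direction (up to scale and sign)
    return R1, R2, t
```

For a synthetic E = [t]x R the true rotation appears among the two candidates, which is the property an SfM pipeline relies on before the cheirality test picks the physically valid solution.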

Stereo matching with RANSAC proceeds as follows: 8 points are drawn uniformly at random from the feature points; these 8 pairs of corresponding points are used to solve the minimal-configuration solution of the fundamental matrix F; F is verified with each pair of corresponding points not drawn, and if the distance is small enough the correspondence is consistent; otherwise the procedure returns;
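A minimal sketch of the RANSAC loop around the linear eight-point algorithm, assuming homogeneous normalized coordinates and using the algebraic epipolar residual |X_1^T F X_2| rather than a geometric distance (a simplification; names and the threshold are assumptions):

```python
import numpy as np

def eight_point(x1, x2):
    # Linear eight-point algorithm: each correspondence contributes one
    # row of the homogeneous system A f = 0 built from x1_i^T F x2_i = 0.
    A = np.stack([np.outer(p, q).ravel() for p, q in zip(x1, x2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, s, Vt = np.linalg.svd(F)
    s[2] = 0.0                      # enforce the rank-2 constraint
    return U @ np.diag(s) @ Vt

def ransac_fundamental(x1, x2, iters=200, thresh=1e-2, seed=0):
    # x1, x2: (N, 3) homogeneous points.  Repeatedly fit F from a random
    # minimal sample of 8 correspondences and keep the model with which
    # the largest number of correspondences is consistent.
    rng = np.random.default_rng(seed)
    best_F, best_inliers = None, 0
    for _ in range(iters):
        idx = rng.choice(len(x1), 8, replace=False)
        F = eight_point(x1[idx], x2[idx])
        resid = np.abs(np.einsum('ij,jk,ik->i', x1, F, x2))
        inliers = int((resid < thresh).sum())
        if inliers > best_inliers:
            best_F, best_inliers = F, inliers
    return best_F, best_inliers
```

A production version would use Hartley normalization of pixel coordinates and a geometric (Sampson) distance for the inlier test.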

The relative poses between image sequences and the sparse point cloud model of the space target are obtained as follows: the next camera is added incrementally, and three-dimensional points in space are obtained by triangulation. Given a camera parameter set {P_i} and a track set in which M_j denotes the three-dimensional coordinates of track j and m_ij denotes the projection of its two-dimensional image coordinates in the i-th camera, bundle adjustment requires minimizing the nonlinear error E(P, M):

E(P, M) = Σ_j Σ_i w_ij ‖P_i(M_j) − m_ij‖²

where w_ij is 1 if track j is visible in camera i and 0 otherwise;
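The bundle adjustment objective E(P, M) can be written out directly; a nonlinear least-squares solver would then minimize it over all camera and track parameters (the solver itself is omitted here, and the function name is an assumption):

```python
import numpy as np

def reprojection_error(cameras, points, observations):
    """Bundle-adjustment objective E(P, M): sum of squared distances
    between each observed 2-D feature m_ij and the projection of track
    M_j by camera P_i, over all (i, j) pairs where the track is visible.

    cameras      : list of (3, 4) projection matrices P_i
    points       : (T, 3) array of 3-D track coordinates M_j
    observations : dict {(i, j): (u, v)} of observed image positions
    """
    err = 0.0
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    for (i, j), (u, v) in observations.items():
        proj = cameras[i] @ pts_h[j]                     # project M_j by P_i
        err += (proj[0] / proj[2] - u) ** 2 + (proj[1] / proj[2] - v) ** 2
    return err
```

Restricting the sum to observed (i, j) pairs is what the visibility weight w_ij expresses in the formula above.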

The dense three-dimensional point cloud model of the space target is obtained as follows: feature points are detected with Harris and DoG (Difference-of-Gaussian) detectors and matched across multiple images to obtain a series of sparse patches. In each image, cells Cell(i, j) of β_2 × β_2 pixels are taken, each representing a rectangular grid of pixels, and the η local maximum response values of corner and point features are returned; a sparse patch set is reconstructed from the feature points detected across the images and stored in the cells Cell(i, j) covering the images. Given an image I_i with its camera optical center denoted O, for any feature point f in I_i, a candidate set of feature points f′ of the same type (Harris or DoG) is searched in the adjacent matched images within 2 pixels of the corresponding epipolar line, and the 3D point is triangulated from the point pair (f, f′);
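The triangulation of a point pair (f, f′) can be sketched with the standard linear (DLT) method, each view contributing two rows of a homogeneous system (names are assumptions):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation of the 3-D point seen at image position
    # x1 = (u1, v1) in camera P1 and x2 = (u2, v2) in camera P2: each
    # view contributes two rows of the homogeneous system A X = 0.
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                       # null vector = homogeneous 3-D point
    return X[:3] / X[3]
```

With noisy correspondences the SVD gives the algebraic least-squares point, which a refinement step would then polish by minimizing reprojection error.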

Given these initial matches, the following two steps are repeated 3 times. The initial matches are diffused to neighboring pixels to obtain denser patches: in this step, new neighbors are repeatedly added to existing patches until the observable surfaces of the scene are covered. Two patches p and p′ are considered neighbors if they are stored in adjacent cells Cell(i, j) and Cell(i′, j′) of the same image I and their normal planes are close;

Filtering uses visibility constraints to remove incorrect matches lying in front of or behind the observed surface. The first filter removes patches straggling outside the actual surface: for a patch p0 and the set U of patches it occludes, p0 is eliminated when its photometric consistency is outweighed by that of the patches in U. The second filter targets outliers inside the actual surface: using the depth maps of the corresponding images, S(p0) and T(p0) are computed for each patch p0, and p0 is filtered out if |T(p0)| < γ, where S(p0) is the set of images in which p0 should be visible, T(p0) the set in which it is actually visible, γ a threshold, and photometric consistency is measured by the mean normalized cross-correlation of the mapped textures;

The initial dense mesh model of the space target is obtained as follows: all input sample points are inserted into an octree data structure according to their scale values. The resulting octree is hierarchical and defines a sampling set, in which the vertices of the octree leaf nodes serve as the evaluation points of the implicit function implictF(x), x being a point of the surface to be reconstructed. The octree is initialized as an empty tree without any nodes, and the first sample point i is inserted into a newly created root node of side length s_i centered at the sample position P_i. Each sample point i carries a scale value s_i, and the support radius of a sample is 3s_i. Let:

S_l ≤ s_i < 2·S_l, where l is the level of the octree into which the sample point is inserted and S_l is the side length of the level-l nodes.

The implicit function is then evaluated at these sampling positions. To distinguish them from the input sample points, the octree nodes are divided into two classes: interior nodes, which have eight children, and leaf nodes, which have none. In practice the generated octree also contains mixed nodes, nodes with more than 0 but fewer than 8 children; the octree is regularized by assigning children to each mixed node until it has 8, which eliminates all mixed nodes and adds new leaf nodes whose vertices are called voxels. To evaluate the implicit function at a point x efficiently, only the samples in the octree that can influence the value at x are selected: the octree is traversed recursively, testing whether each node can contain sample points that influence the value at x. From the insertion rule above, the scale value of the sample points contained in a node N never exceeds 2S_N, where S_N is the side length of N;

If a node is far enough from x that, by this bound, no sample it contains can have a support region reaching x, the node is skipped without descending into its children; otherwise, each sample point i in the node is tested for whether its distance to x is smaller than 3s_i. Finally, the value of the implicit function is computed from all sample points found to influence x.
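The two scale rules above — insert each sample at the level whose node side brackets its scale, and only let a sample influence points within its 3s_i support radius — can be sketched directly. The insertion rule S_l ≤ s < 2·S_l is an assumption inferred from the stated bound that samples in a node of side S_N never have scale above 2S_N.

```python
import math

def insertion_level(root_side, s):
    # Level l whose node side S_l = root_side / 2**l satisfies
    # S_l <= s < 2 * S_l (assumed rule, consistent with the 2*S_N bound).
    return max(0, math.ceil(math.log2(root_side / s)))

def influences(sample_pos, s, x):
    # A sample affects implictF(x) only within its support radius 3*s,
    # so nodes entirely outside that radius can be pruned from the search.
    return math.dist(sample_pos, x) < 3 * s
```

For a root of side 1.0, a sample of scale 0.3 lands on level 2, where the node side 0.25 satisfies 0.25 ≤ 0.3 < 0.5.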

The octree fixes the voxel positions, and once the implicit function values at the voxels have been computed, the input sample points are no longer needed: the final reconstructed surface is the zero isosurface of the implicit function defined on the octree voxels. The standard algorithm for extracting an isosurface from regular voxels (all voxels of equal size) is marching cubes, which here extracts the implicit function's isosurface from the octree as the reconstructed surface:

{x | implictF(x) = 0}
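The elementary step of marching cubes is placing a surface vertex where the linearly interpolated field crosses zero along a voxel edge; the full 256-case table is omitted. A minimal sketch of that interpolation:

```python
def zero_crossing(p0, p1, f0, f1):
    # Position on edge p0 -> p1 where the linearly interpolated
    # implicit function crosses zero; marching cubes places a
    # surface vertex of {x | implictF(x) = 0} here.
    assert f0 * f1 < 0, "edge must straddle the isosurface"
    t = f0 / (f0 - f1)
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))
```

With field values −1 and +1 at the edge endpoints, the vertex falls at the midpoint; with −1 and +3 it falls a quarter of the way along.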

The accurate dense mesh model of the space target is obtained as follows: a geometric confidence threshold is set on the initial dense mesh model obtained above, and redundancy is removed from the model edges inward, stripping large spurious surface envelopes;

The dense space target mesh model is then textured by large-scale seamless texture stitching. A pairwise Markov random field energy E(l) is used to assign a view l_i as the texture of mesh face F_i and correspondingly l_j as the texture of mesh face F_j, where F_i, F_j are triangular faces of the space target mesh model; this process finally yields a finely visualized 3-D model of the space target. The energy is: E(l) = Σ_{F_i ∈ Faces} E_data(F_i, l_i) + Σ_{(F_i, F_j) ∈ Edges} E_smooth(F_i, l_i, F_j, l_j)

E_data is the data term that selects the best view for each face F_i ∈ Faces, and E_smooth is the smoothness term at the boundary seams (F_i, F_j) ∈ Edges between two mesh faces.
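On a toy instance the labeling energy and its trade-off can be made concrete. This sketch uses a Potts smoothness term and exhaustive minimization over a hypothetical three-face mesh; real systems solve the same objective approximately with graph cuts and alpha expansion, as the text notes later.

```python
from itertools import product

def mrf_energy(labels, data_cost, edges, seam_penalty=1.0):
    # E(l) = sum_i E_data(F_i, l_i) + smoothness over mesh edges;
    # here E_smooth is a Potts term charging seam_penalty whenever
    # two adjacent faces are textured from different views.
    e = sum(data_cost[i][l] for i, l in enumerate(labels))
    e += sum(seam_penalty for i, j in edges if labels[i] != labels[j])
    return e

def best_labeling(data_cost, edges, seam_penalty=1.0):
    # Exhaustive search over all labelings (toy sizes only).
    n_faces, n_views = len(data_cost), len(data_cost[0])
    return min(product(range(n_views), repeat=n_faces),
               key=lambda l: mrf_energy(l, data_cost, edges, seam_penalty))
```

With a cheap seam penalty the middle face keeps its preferred view at the cost of two seams; raising the penalty forces a single seam-free view.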

After the labeling is obtained by minimizing this energy, the patch colors are adjusted as follows. First, each mesh vertex is made to belong to exactly one texture patch, so every vertex on a seam is duplicated into two: v_left belongs to the patch on the left of the seam and v_right to the patch on the right. Before color adjustment each vertex has a unique color f_v, and a correction g_v is then computed and added for each vertex:

min_g Σ_{v on seams} (f_{v_left} + g_{v_left} − f_{v_right} − g_{v_right})² + λ Σ_{v_i, v_j adjacent in one patch} (g_{v_i} − g_{v_j})². The first term makes the colors on the two sides of each seam as similar as possible; the second keeps the corrections of adjacent vertices v_i, v_j within the same texture patch close, with λ a weighting factor. Once the optimal g_v has been obtained at every vertex, the texture is corrected by interpolating the g_v of the surrounding vertices with barycentric coordinates. Finally, the corrections are added to the input images, the texture patches are packed into texture atlases, and texture coordinates are attached to the vertices;
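The correction objective is a convex least-squares problem, so even plain Gauss-Seidel sweeps solve it. The sketch below is an assumed simplification: a single straight seam whose duplicated vertices v_left/v_right are indexed consecutively, one scalar color channel.

```python
def adjust_seam(f_left, f_right, lam=0.5, sweeps=200):
    # Minimize sum_v (f_l[v]+g_l[v] - f_r[v]-g_r[v])^2
    #        + lam * sum over neighboring vertices of (g[v]-g[v'])^2
    # by Gauss-Seidel: each update sets the derivative w.r.t. one
    # correction to zero with all other corrections held fixed.
    n = len(f_left)
    g_l, g_r = [0.0] * n, [0.0] * n
    nbrs = [[k for k in (v - 1, v + 1) if 0 <= k < n] for v in range(n)]
    for _ in range(sweeps):
        for v in range(n):
            g_l[v] = ((f_right[v] + g_r[v] - f_left[v])
                      + lam * sum(g_l[k] for k in nbrs[v])) / (1 + lam * len(nbrs[v]))
            g_r[v] = ((f_left[v] + g_l[v] - f_right[v])
                      + lam * sum(g_r[k] for k in nbrs[v])) / (1 + lam * len(nbrs[v]))
    return g_l, g_r
```

For a bright patch (color 1.0) meeting a dark one (0.0), the corrected colors f_v + g_v converge to equality across the seam.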

For view selection, the Markov random field energy is optimized with graph cuts and alpha expansion, and a photometric-consistency check replaces the original smoothness term. For the data term, the gradient magnitude image of each projected mesh face F_i is computed with a Sobel detector, and the pixels of the gradient magnitude image φ(F_i, l_i) are summed over the projection of each face F_i;

From the energy equation, the first step is the labeling process: for a mesh with surfaces F_1, F_2, …, F_K and texture fragments V_1, V_2, …, V_N, each corresponding to an input view, a label vector M = {m_1, m_2, …, m_K} ∈ {0..N}^K specifies which fragment is mapped onto each surface F_i. A cost value is assigned to each candidate (the factors involved may be viewing angle, distance, and color difference); the smaller the cost, the better the fragment suits the surface, i.e. the seams are smallest and the fragment quality is best: E(M) = E_Q(M) + λ·E_S(M). The first term E_Q(M) is the patch quality; the second term E_S(M) measures the discernibility of the gaps between adjacent patches, integrating d(P_{m_i}(x), P_{m_j}(x)) along the seams, where P_i(x) is the projection operator and d(·,·) is the Euclidean distance in RGB space;

Surface color adjustment to reduce seam visibility proceeds as follows: compute the mean color of the surface projection in every view, treating all views that can see the surface as inlier views; compute the mean and covariance of the per-view color means; evaluate each view's fit with a multivariate Gaussian; set a threshold and iterate these steps, terminating on the covariance or on the number of inlier views;

Color look-up along edges mitigates remaining color problems, with weights attenuated by edge length: a vertex v_1 adjacent to v_0 and v_2 takes its color as a weighted average over the samples along v_1v_0 and v_1v_2, the weight decaying from 1 to 0 as the distance from v_1 grows;
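The edge-weighted look-up can be sketched as follows; the linear weight profile (1 at v_1 down to 0 at the far vertex) is an assumption, and samples are a single scalar channel evenly spaced along each edge.

```python
def blended_color(edge_samples):
    # edge_samples: one list of color samples per incident edge,
    # ordered from v_1 outward. Each sample's weight decays linearly
    # from 1 at v_1 to 0 at the far end of its edge.
    num = den = 0.0
    for samples in edge_samples:
        n = len(samples)
        for k, c in enumerate(samples):
            w = 1.0 - k / (n - 1) if n > 1 else 1.0
            num += w * c
            den += w
    return num / den
```

With two edges whose samples start at color 2.0, the far samples get weight 0, so the blend returns 2.0 regardless of the far colors.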

Global optimization cannot remove all visible seams, so Poisson editing is added for local optimization, restricted to a 20-pixel-wide border strip of each texture patch per color channel. The value of each outer border pixel is set by comparing the mean color of the pixels in the image assigned to the patch with the mean of the images assigned to the neighboring patches, while each inner border pixel is fixed to its current color. If a patch is too small, the inner border is omitted.

After color adjustment, the mesh model is stitch-textured, finally yielding a fine, visualizable 3-D model of the space target.

Specific embodiment: the hardware environment is a computer with dual 2 GHz E5 CPUs, 128 GB of RAM, and 12 GB of GPU memory;

The operating systems include Windows 7 and Ubuntu;

The method is implemented in C++; the example uses an image resolution of 2048 × 2048;

The overall design of the invention is shown in Fig. 1 and is implemented as follows:

Step 1: As shown in Fig. 2, SIFT feature descriptors are used to extract accurate and robust point features from the sequence images in turn, and neighboring images are matched quickly and precisely by cascade hashing. Combining the camera intrinsic matrix of the images, the relative rotation and translation matrices of the image sequence are computed, and structure-from-motion recovers the relative poses between the image sequences and produces the sparse point cloud model of the space target;

Taking the selected image set as input, differences of adjacent scales are first computed to obtain a series of difference-of-Gaussian images, in which extrema are sought: each pixel is compared with its 8 neighbors in the same image and the 9 neighbors in each of the adjacent upper and lower scale layers. The extrema are then filtered to find stable feature points and discard unstable ones. For each keypoint, the Gaussian-smoothed image closest to its scale is selected; a neighborhood around the keypoint is taken, the gradients of its points are Gaussian-weighted, and a descriptor is generated from each sub-region;

Cascade hashing is used for fast and accurate matching as follows: with a locality-sensitive hashing strategy whose hash functions preserve inner-product similarity, random hyperplane vectors r are drawn from a D-dimensional normal distribution and used to convert the 128-dimensional feature descriptors into binary codes. A coarse search is performed with multi-table hashing on short codes; the returned candidates are remapped into a higher-dimensional Hamming space, Hamming distances are computed, and a hash table keyed by Hamming distance serves as the reference for the fine query; finally, the cascaded hashing strategy yields accurate and fast matching between images;
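The random-hyperplane hashing step can be sketched as follows: each bit of the binary code is the sign of the descriptor's inner product with one Gaussian hyperplane vector, so similar descriptors get small Hamming distances. This is a minimal illustration (4-dimensional vectors, 16 bits) rather than the full cascade with multi-table coarse search.

```python
import random

def make_hash(dim, bits, seed=0):
    # One random hyperplane r per bit; bit = [r . v >= 0].
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(bits)]
    def h(v):
        code = 0
        for r in planes:
            d = sum(a * b for a, b in zip(r, v))
            code = (code << 1) | (1 if d >= 0 else 0)
        return code
    return h

def hamming(a, b):
    # Hamming distance between two integer-encoded binary codes.
    return bin(a ^ b).count("1")
```

A descriptor hashes to zero distance from itself, while its negation flips the sign of every inner product and lands at maximal distance.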

The fundamental matrix is computed with the eight-point algorithm as follows: the data set is the correspondences output by feature matching, and the constraint is the epipolar geometry, so the fundamental matrix F can be estimated from 8 correspondences. Matching points X_1 and X_2 between two images must satisfy the epipolar constraint X_1^T F X_2 = 0, a linear homogeneous equation in the 9 unknown parameters of F. Since F is unique only up to a scale factor, one nonzero parameter can be normalized away, leaving 8 unknowns; therefore 8 pairs of matching points determine the fundamental matrix F linearly;
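The constraint itself is easy to check once an F is in hand. Below, the example F is an assumed one corresponding to two rectified cameras translated along the x-axis, for which the constraint reduces to "matching points share the same image row".

```python
def epipolar_residual(F, x1, x2):
    # Algebraic epipolar residual x1^T F x2 for homogeneous image
    # points; zero for a correct match under the estimated F.
    Fx2 = [sum(F[r][k] * x2[k] for k in range(3)) for r in range(3)]
    return sum(x1[k] * Fx2[k] for k in range(3))
```

With F = [[0,0,0],[0,0,-1],[0,1,0]] (pure horizontal translation), points on the same row satisfy the constraint exactly and points on different rows do not.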

Stereo matching with RANSAC proceeds as follows: 8 points are drawn uniformly at random from the feature matches; these 8 correspondences give a minimal-configuration solution for the fundamental matrix F; F is then verified against every correspondence not drawn, and if the distance is small enough the correspondence is consistent with the model; otherwise the process repeats;
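The sample-fit-verify loop generalizes beyond the fundamental matrix. To keep the sketch self-contained (estimating F needs an SVD), the model below is a 2-D translation, whose minimal sample is a single correspondence; for F the minimal sample would be the 8 correspondences described above.

```python
import random

def ransac_translation(src, dst, iters=200, thresh=0.1, seed=0):
    # RANSAC loop: draw a minimal sample, fit a hypothesis, count the
    # correspondences it explains, and keep the best hypothesis.
    rng = random.Random(seed)
    best_t, best_inliers = None, []
    for _ in range(iters):
        i = rng.randrange(len(src))                     # minimal sample
        t = (dst[i][0] - src[i][0], dst[i][1] - src[i][1])
        inliers = [j for j, (s, d) in enumerate(zip(src, dst))
                   if abs(d[0] - s[0] - t[0]) + abs(d[1] - s[1] - t[1]) < thresh]
        if len(inliers) > len(best_inliers):
            best_t, best_inliers = t, inliers
    return best_t, best_inliers
```

Three correspondences shifted by (1, 2) plus one gross outlier recover the translation and a 3-element inlier set.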

Through data optimization, the relative poses between the image sequences are obtained together with the sparse point cloud model of the space target, as follows: cameras are added incrementally, and 3-D points in space are recovered by triangulation. Given a camera parameter set {P_i} and a track set {M_j}, where M_j denotes the 3-D coordinates of track j and q_ij denotes its 2-D image projection observed in the i-th camera, bundle adjustment minimizes the nonlinear reprojection error E(P, M) = Σ_i Σ_j w_ij · ‖q_ij − P_i(M_j)‖², where w_ij = 1 if track j is visible in camera i and 0 otherwise:

Step 2: Taking the result of Step 1 as input, the dense 3-D point cloud model of the space target is obtained as follows: feature points are detected with the Harris and Difference-of-Gaussian operators and matched across multiple images to produce an initial set of sparse patches. Each image is partitioned into rectangular grids of β² × β² pixels, and within each grid the features with the strongest local corner and blob responses η are returned. The features detected across the images are used to reconstruct a sparse patch set, stored in the cells C(i, j) covering the images: for an image I with camera optical center O, each feature point f in I is paired with the set F of candidate features f′ of the same type (Harris or DoG) in the other images lying within 2 pixels of the corresponding epipolar line, and each pair (f, f′) is triangulated into a 3-D point;

Given these initial matches, the following two steps are repeated 3 times. Expansion spreads the initial matches to neighboring pixels to obtain denser patches: new neighbors are added iteratively to the existing patches until they cover the surface observable in the scene. Two patches p and p′ are considered neighbors if they are stored in adjacent cells C(i, j) and C(i′, j′) of the same image I and their tangent planes are close;

Filtering uses visibility constraints to remove incorrect matches lying in front of or behind the observed surface. The first filter removes patches straggling outside the actual surface: for a patch p0 and the set U of patches it occludes, p0 is eliminated when its photometric consistency is outweighed by that of the patches in U. The second filter targets outliers inside the actual surface: using the depth maps of the corresponding images, S(p0) and T(p0) are computed for each patch p0, and p0 is filtered out if |T(p0)| < γ, where S(p0) is the set of images in which p0 should be visible, T(p0) the set in which it is actually visible, γ a threshold, and photometric consistency is measured by the mean normalized cross-correlation of the mapped textures;

Step 3: Taking the result of Step 2 as input, the initial dense mesh model of the space target is obtained as follows: all input sample points are inserted into an octree data structure according to their scale values. The resulting octree is hierarchical and defines a sampling set, in which the vertices of the octree leaf nodes serve as the evaluation points of the implicit function implictF(x), x being a point of the surface to be reconstructed. The octree is initialized as an empty tree without any nodes, and the first sample point i is inserted into a newly created root node of side length s_i centered at the sample position P_i;

Each sample point i carries a scale value s_i, and the support radius of a sample is 3s_i. Let:

S_l ≤ s_i < 2·S_l, where l is the level of the octree into which the sample point is inserted and S_l is the side length of the level-l nodes.

As shown in Fig. 3, the implicit function is then evaluated at these sampling positions. To distinguish them from the input sample points, the octree nodes are divided into two classes: interior nodes, which have eight children, and leaf nodes, which have none. In practice the generated octree also contains mixed nodes, nodes with more than 0 but fewer than 8 children; the octree is regularized by assigning children to each mixed node until it has 8, which eliminates all mixed nodes and adds new leaf nodes whose vertices are called voxels. To evaluate the implicit function at a point x efficiently, only the samples in the octree that can influence the value at x are selected: the octree is traversed recursively, testing whether each node can contain sample points that influence the value at x. From the insertion rule above, the scale value of the sample points contained in a node N never exceeds 2S_N, where S_N is the side length of N;

If a node is far enough from x that, by this bound, no sample it contains can have a support region reaching x, the node is skipped without descending into its children; otherwise, each sample point i in the node is tested for whether its distance to x is smaller than 3s_i. Finally, the value of the implicit function F(x) is computed from all sample points found to influence x.

The octree fixes the voxel positions; once the implicit function values at the voxels have been computed, the input sample points are no longer needed, and the zero isosurface of the implicit function defined on the octree voxels is extracted directly as the final reconstructed surface. The standard algorithm for extracting an isosurface from regular voxels (all of equal size) is marching cubes, which extracts the implicit function's isosurface from the octree as the reconstructed surface. A geometric confidence threshold is then set on the initial dense mesh model, and redundancy is removed from the model edges inward, stripping large spurious surface envelopes;

Step 4: Taking the result of Step 3 as input, a finely visualized 3-D model of the space target is obtained as follows. The initial step determines the visibility of the surfaces in the input images; a pairwise Markov random field energy E is used to compute a labeling l that assigns a view l_i as the texture of mesh face F_i and correspondingly l_j as the texture of mesh face F_j, where F_i, F_j are triangular faces of the space target mesh model: E(l) = Σ_{F_i ∈ Faces} E_data(F_i, l_i) + Σ_{(F_i, F_j) ∈ Edges} E_smooth(F_i, l_i, F_j, l_j)

E_data is the data term that selects the best view for each face F_i ∈ Faces, and E_smooth is the smoothness term at the boundary seams (F_i, F_j) ∈ Edges between two mesh faces.

After the labeling is obtained by minimizing this energy, the patch colors are adjusted as follows. First, each mesh vertex is made to belong to exactly one texture patch, so every vertex on a seam is duplicated into two: v_left belongs to the patch on the left of the seam and v_right to the patch on the right. Before color adjustment each vertex has a unique color f_v, and a correction g_v is then computed and added for each vertex:

min_g Σ_{v on seams} (f_{v_left} + g_{v_left} − f_{v_right} − g_{v_right})² + λ Σ_{v_i, v_j adjacent in one patch} (g_{v_i} − g_{v_j})². The first term makes the colors on the two sides of each seam as similar as possible; the second keeps the corrections of adjacent vertices v_i, v_j within the same texture patch close, with λ a weighting factor. Once the optimal g_v has been obtained at every vertex, the texture is corrected by interpolating the g_v of the surrounding vertices with barycentric coordinates. Finally, the corrections are added to the input images, the texture patches are packed into texture atlases, and texture coordinates are attached to the vertices;

For view selection, the Markov random field energy is optimized with graph cuts and alpha expansion, and a photometric-consistency check replaces the original smoothness term. For the data term, the gradient magnitude image of each projected mesh face F_i is computed with a Sobel detector, and the pixels of the gradient magnitude image φ(F_i, l_i) are summed over the projection of each face F_i;

From the energy equation, the first step is the labeling process: for a mesh with surfaces F_1, F_2, …, F_K and texture fragments V_1, V_2, …, V_N, each corresponding to an input view, a label vector M = {m_1, m_2, …, m_K} ∈ {0..N}^K specifies which fragment is mapped onto each surface F_i. A cost value is assigned to each candidate (the factors involved may be viewing angle, distance, and color difference); the smaller the cost, the better the fragment suits the surface, i.e. the seams are smallest and the fragment quality is best: E(M) = E_Q(M) + λ·E_S(M). The first term E_Q(M) is the patch quality; the second term E_S(M) measures the discernibility of the gaps between adjacent patches, integrating d(P_{m_i}(x), P_{m_j}(x)) along the seams, where P_i(x) is the projection operator and d(·,·) is the Euclidean distance in RGB space;

Surface color adjustment to reduce seam visibility proceeds as follows: compute the mean color of the surface projection in every view, treating all views that can see the surface as inlier views; compute the mean and covariance of the per-view color means; evaluate each view's fit with a multivariate Gaussian; set a threshold and iterate these steps, terminating on the covariance or on the number of inlier views;

Color look-up along edges mitigates remaining color problems, with weights attenuated by edge length: a vertex v_1 adjacent to v_0 and v_2 takes its color as a weighted average over the samples along v_1v_0 and v_1v_2, the weight decaying from 1 to 0 as the distance from v_1 grows;

Global optimization cannot remove all visible seams, so Poisson editing is added for local optimization, restricted to a 20-pixel-wide border strip of each texture patch per color channel. The value of each outer border pixel is set by comparing the mean color of the pixels in the image assigned to the patch with the mean of the images assigned to the neighboring patches, while each inner border pixel is fixed to its current color. If a patch is too small, the inner border is omitted;

After color adjustment, the mesh model is stitch-textured;

After the above reconstruction steps, a fine, visualizable 3-D model of the space target is finally obtained.

Claims (4)

1. A space target three-dimensional reconstruction method based on high-resolution sequence images, characterized in that the steps are as follows:
Step 1: SIFT feature descriptors are used to successively extract accurate and robust point features from a set of high-resolution sequence images of a space target, I = {I_1, I_2, …, I_n}, and each image is matched quickly and precisely against its N neighboring images by cascade hashing, giving for image I_i the neighbor match set M_i = {m_{i,i−N}, m_{i,i−N+1}, … m_{i,j} …, m_{i,i+N−1}, m_{i,i+N}};
The accurate and robust point features exhibit scale invariance and rotation invariance;
Step 2: RANSAC estimation is applied to the feature-matched image pairs for stereo matching; in each RANSAC cycle, the image pair with the most feature matches is selected from the neighbor match set M_i as the initial pair, and, combined with the camera intrinsic matrix K of the images, the fundamental matrix F is estimated with the eight-point algorithm. A normalized singular value decomposition of F yields the relative rotation matrix P and translation matrix t of the image pair. Incremental structure-from-motion recovers the capture poses of the space-target cameras, producing the structured image set and the sparse point cloud model of the space target;
Step 3: On the structured image set and the sparse point cloud model, fine Harris or DoG corner matching is performed on the sequence images I = {I_1, I_2, …, I_n}; the sparse feature point cloud model is densified by bilinear expansion-interpolation, and photometric-consistency constraints are iterated to filter out erroneous points straggling outside and inside the actual surface. Expansion-interpolation and filtering may be iterated 3 times, yielding the dense three-dimensional point cloud model of the space target;
Step 4: Floating-scale implicit-function surface reconstruction is performed on the dense three-dimensional point cloud model of the space target: the zero isosurface {x | implictF(x) = 0} of the implicit function defined on the regularized octree voxels is taken as the surface of the space target reconstruction model, giving the initial dense mesh model of the space target, whose mesh consists of triangular faces;
Step 5: Redundancy removal is applied to the initial dense mesh model: a geometric confidence threshold is set and envelope removal proceeds from the edges inward, giving the dense space target mesh model;
Step 6: Large-scale seamless texture mapping is applied to the dense space target mesh model: a pairwise Markov random field energy E(l) assigns a view l_i as the texture of mesh face F_i and correspondingly l_j as the texture of mesh face F_j, where F_i, F_j are triangular faces of the space target mesh model; the process finally yields the finely visualized three-dimensional model of the space target. The energy is: E(l) = Σ_{F_i ∈ Faces} E_data(F_i, l_i) + Σ_{(F_i, F_j) ∈ Edges} E_smooth(F_i, l_i, F_j, l_j)
Edata denotes the data term that constrains the choice of the optimal view for each facet Fi ∈ Faces, and Esmooth characterizes the smoothness term at the seam (Fi, Fj) ∈ Edges between two adjacent mesh facets;
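Evaluating this energy for a candidate labeling is straightforward; the sketch below is illustrative only (not the patent's implementation) and uses a Potts penalty as one common choice for the smoothness term:

```python
import numpy as np

def mrf_energy(labels, data_cost, edges, smooth_penalty=1.0):
    # E(l) = sum_i Edata(Fi, li) + sum_(i,j) Esmooth(Fi, Fj, li, lj)
    # labels[i]  : view chosen to texture face i
    # data_cost  : data_cost[i, v] = cost of texturing face i from view v
    # edges      : pairs (i, j) of faces sharing an edge
    # Esmooth here is a Potts term: a fixed penalty whenever adjacent faces
    # are textured from different views (i.e. a visible seam is created).
    e_data = sum(data_cost[i, l] for i, l in enumerate(labels))
    e_smooth = sum(smooth_penalty for i, j in edges if labels[i] != labels[j])
    return e_data + e_smooth
```

Texture mapping then amounts to finding the labeling l that minimizes this energy (in practice with graph cuts or a similar MRF solver); even the toy example below shows how a labeling that accepts one seam can beat a seam-free but poorly matched one.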
Step 7: Occlusions by nearby views are eliminated through photometric consistency detection on the gradient magnitude, and the colors of neighboring surfaces are further adjusted with Euclidean-distance weight attenuation, reducing the visibility of seams on the extraterrestrial-target model and yielding the finely visualized three-dimensional model of the extraterrestrial target.
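One way to read "Euclidean-distance weight attenuation" is that per-seam color corrections are spread onto nearby vertices with a weight that decays with distance; the sketch below is an illustrative assumption (names and the Gaussian falloff are not specified by the patent):

```python
import numpy as np

def seam_color_adjust(colors, positions, seam_idx, seam_delta, sigma=0.1):
    # Spread each per-seam-vertex colour correction (seam_delta) onto the
    # surrounding vertices with a weight that decays with Euclidean distance,
    # so the correction fades out smoothly away from the seam.
    out = colors.astype(float).copy()
    for s, delta in zip(seam_idx, seam_delta):
        d = np.linalg.norm(positions - positions[s], axis=1)
        w = np.exp(-(d / sigma) ** 2)   # Gaussian falloff (one possible choice)
        out += w[:, None] * delta
    return out
```

Vertices at the seam receive the full correction while distant vertices are left essentially untouched, which is what makes the seam fade without repainting the whole surface.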
2. The extraterrestrial-target three-dimensional reconstruction method based on high-resolution sequence images according to claim 1, characterized in that: in said step 2, the method of using incremental structure from motion to obtain the acquisition poses of the extraterrestrial-target capture camera, the structured image set and the sparse point cloud model of the extraterrestrial target is: uniformly and randomly draw 8 points from the feature points; compute the minimal-configuration solution of the fundamental matrix F from these 8 groups of corresponding points; verify F against each group of corresponding points not drawn in the sample, deeming a correspondence consistent when its distance satisfies the threshold requirement; after stereo matching, each image is added to the tracks, yielding connected sets of matching feature points across the multi-view images; the cameras of all images are added to the tracks one camera at a time, and the result is finally optimized by bundle adjustment, giving the acquisition poses of the extraterrestrial-target capture camera, the structured image set and the sparse point cloud model of the extraterrestrial target.
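The sample-fit-verify loop of the claim is ordinary RANSAC; the generic sketch below is illustrative (names are hypothetical) and is demonstrated on line fitting, but for the claim one would pass the 8-point minimal solver as `fit` (with `sample_size=8`) and the point-to-epipolar-line distance as `residual`:

```python
import numpy as np

def ransac(data, fit, residual, sample_size, thresh, iters=200, seed=0):
    # Generic RANSAC loop: repeatedly fit a model on a random minimal sample
    # and keep the model consistent with the most points (residual below the
    # threshold = "correspondence is deemed consistent").
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, np.zeros(len(data), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(data), sample_size, replace=False)
        model = fit(data[idx])
        inliers = residual(model, data) < thresh
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```

Because each model is scored on the full data set, a handful of gross outliers cannot pull the estimate away from the consensus model.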
3. The extraterrestrial-target three-dimensional reconstruction method based on high-resolution sequence images according to claim 1, characterized in that: the method of said step 7 for obtaining the finely visualized three-dimensional model of the extraterrestrial target is as follows: a Markov random field energy formula is constructed to assign one view as the texture of each surface mesh facet, the energy formula E(l) comprising a data term Edata and a smoothness term Esmooth; occlusions by nearby views are eliminated through photometric consistency detection on the gradient magnitude, and the surface colors are further adjusted; a cost function for the cost value w is set from the viewing angle, the distance and the color difference, and the view set with the smaller cost value is selected for image stitching.
4. The extraterrestrial-target three-dimensional reconstruction method based on high-resolution sequence images according to claim 3, characterized in that: the method of said surface color adjustment is: the image is segmented using the Potts model, color discontinuities are queried along adjacent seam edges, the color differences between facets are reduced by Euclidean-distance weight attenuation, and local optimization is finally performed by Poisson editing, reducing the visibility of seams on the extraterrestrial-target model.
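Poisson editing blends colors in the gradient domain: values are chosen so their gradients follow the source while the boundary values are pinned to the surrounding colors. In one dimension this has a closed-form solution, shown below as an illustrative sketch (the function name is hypothetical; real Poisson editing solves the analogous 2-D linear system):

```python
import numpy as np

def poisson_blend_1d(src_grad, left, right):
    # 1-D analogue of Poisson editing: find values f whose successive
    # differences follow src_grad while the two boundary values are pinned
    # to left/right (the colours on either side of the seam). The exact
    # 1-D solution is the integrated gradient plus a linear correction.
    f = np.concatenate([[0.0], np.cumsum(src_grad)])
    ramp = np.linspace(left - f[0], right - f[-1], len(f))
    return f + ramp
```

The blended profile meets both seam colors exactly while preserving the shape (second differences) of the source, which is why Poisson editing hides seams without flattening texture detail.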
CN201810377855.4A 2018-04-25 2018-04-25 A kind of extraterrestrial target three-dimensional reconstruction method based on high-resolution sequence image Pending CN108734728A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810377855.4A CN108734728A (en) 2018-04-25 2018-04-25 A kind of extraterrestrial target three-dimensional reconstruction method based on high-resolution sequence image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810377855.4A CN108734728A (en) 2018-04-25 2018-04-25 A kind of extraterrestrial target three-dimensional reconstruction method based on high-resolution sequence image

Publications (1)

Publication Number Publication Date
CN108734728A true CN108734728A (en) 2018-11-02

Family

ID=63939263

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810377855.4A Pending CN108734728A (en) 2018-04-25 2018-04-25 A kind of extraterrestrial target three-dimensional reconstruction method based on high-resolution sequence image

Country Status (1)

Country Link
CN (1) CN108734728A (en)


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Liu Junjiang: "Automatic Geometry and Texture Reconstruction Based on Multiple Images", China Master's Theses Full-text Database *
Ge Junqiang: "Three-Dimensional Reconstruction Based on UAV Aerial Image Sequences", China Master's Theses Full-text Database *

Cited By (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109459043A (en) * 2018-12-12 2019-03-12 上海航天控制技术研究所 A kind of spacecraft Relative Navigation based on production reconstructed image
CN109685879A (en) * 2018-12-13 2019-04-26 广东启慧城市信息有限公司 Determination method, apparatus, equipment and the storage medium of multi-view images grain distribution
CN109685879B (en) * 2018-12-13 2023-09-29 广东启慧城市信息有限公司 Method, device, equipment and storage medium for determining multi-view image texture distribution
CN109978885A (en) * 2019-03-15 2019-07-05 广西师范大学 A kind of tree three-dimensional point cloud segmentation method and system
CN109978885B (en) * 2019-03-15 2022-09-13 广西师范大学 Tree three-dimensional point cloud segmentation method and system
WO2020243962A1 (en) * 2019-06-06 2020-12-10 深圳市大疆创新科技有限公司 Object detection method, electronic device and mobile platform
CN111815757A (en) * 2019-06-29 2020-10-23 浙江大学山东工业技术研究院 3D Reconstruction Method of Large Component Based on Image Sequence
CN110782521A (en) * 2019-09-06 2020-02-11 重庆东渝中能实业有限公司 Mobile terminal three-dimensional reconstruction and model restoration method and system
CN110688502A (en) * 2019-09-09 2020-01-14 重庆邮电大学 An image retrieval method and storage medium based on deep hashing and quantization
CN110688502B (en) * 2019-09-09 2022-12-27 重庆邮电大学 Image retrieval method and storage medium based on depth hash and quantization
CN110796694A (en) * 2019-10-13 2020-02-14 西北农林科技大学 A real-time acquisition method of fruit 3D point cloud based on KinectV2
CN110728671A (en) * 2019-10-15 2020-01-24 西安电子科技大学 Vision-Based Dense Reconstruction Methods for Textureless Scenes
CN110728671B (en) * 2019-10-15 2021-07-20 西安电子科技大学 Vision-Based Dense Reconstruction Methods for Textureless Scenes
CN112907759A (en) * 2019-11-19 2021-06-04 南京理工大学 Splicing redundant point cloud removing method based on point cloud projection and point cloud growth
CN111063021A (en) * 2019-11-21 2020-04-24 西北工业大学 A method and device for establishing a three-dimensional reconstruction model of a space moving target
CN111127613B (en) * 2019-12-25 2023-06-16 华南理工大学 Method and system for three-dimensional reconstruction of image sequence based on scanning electron microscope
CN111127613A (en) * 2019-12-25 2020-05-08 华南理工大学 3D reconstruction method and system of image sequence based on scanning electron microscope
CN111640187A (en) * 2020-04-20 2020-09-08 中国科学院计算技术研究所 Video splicing method and system based on interpolation transition
CN111640187B (en) * 2020-04-20 2023-05-02 中国科学院计算技术研究所 A video splicing method and system based on interpolation transition
CN111598927A (en) * 2020-05-18 2020-08-28 京东方科技集团股份有限公司 Positioning reconstruction method and device
CN111627119A (en) * 2020-05-22 2020-09-04 Oppo广东移动通信有限公司 Texture mapping method, device, equipment and storage medium
CN111627119B (en) * 2020-05-22 2023-09-15 Oppo广东移动通信有限公司 Texture mapping method and device, equipment and storage medium
CN112102475A (en) * 2020-09-04 2020-12-18 西北工业大学 Space target three-dimensional sparse reconstruction method based on image sequence trajectory tracking
CN112102475B (en) * 2020-09-04 2023-03-07 西北工业大学 Space target three-dimensional sparse reconstruction method based on image sequence trajectory tracking
CN112508999A (en) * 2020-11-20 2021-03-16 西北工业大学深圳研究院 Space target motion state identification method based on cooperative observation image sequence
CN112508999B (en) * 2020-11-20 2024-02-13 西北工业大学深圳研究院 Space target motion state identification method based on collaborative observation image sequence
CN112509109A (en) * 2020-12-10 2021-03-16 上海影创信息科技有限公司 Single-view illumination estimation method based on neural network model
CN112927211A (en) * 2021-03-09 2021-06-08 电子科技大学 Universal anti-attack method based on depth three-dimensional detector, storage medium and terminal
CN112927211B (en) * 2021-03-09 2023-08-25 电子科技大学 Universal attack countermeasure method based on depth three-dimensional detector, storage medium and terminal
CN113095380A (en) * 2021-03-26 2021-07-09 上海电力大学 Image hash processing method based on adjacent gradient and structural features
CN113407756A (en) * 2021-05-28 2021-09-17 山西云时代智慧城市技术发展有限公司 Lung nodule CT image reordering method based on self-adaptive weight
CN113592732B (en) * 2021-07-19 2023-03-24 安徽省赛达科技有限责任公司 Image processing method based on big data and intelligent security
CN113592732A (en) * 2021-07-19 2021-11-02 杨薇 Image processing method based on big data and intelligent security
CN113705582A (en) * 2021-08-04 2021-11-26 南京林业大学 Method for extracting edge feature key points of building facade
CN113808273A (en) * 2021-09-14 2021-12-17 大连海事大学 A Disordered Incremental Sparse Point Cloud Reconstruction Method for Numerical Simulation of Ship Traveling Waves
CN113808273B (en) * 2021-09-14 2023-09-12 大连海事大学 A disordered incremental sparse point cloud reconstruction method for numerical simulation of ship traveling waves
CN114972625A (en) * 2022-03-22 2022-08-30 广东工业大学 Hyperspectral point cloud generation method based on RGB spectrum super-resolution technology
CN115100383A (en) * 2022-08-24 2022-09-23 深圳星坊科技有限公司 Method, Apparatus and Equipment for 3D Reconstruction of Specular Object Based on Common Light Source

Similar Documents

Publication Publication Date Title
CN108734728A (en) A kind of extraterrestrial target three-dimensional reconstruction method based on high-resolution sequence image
Yu et al. Automatic 3D building reconstruction from multi-view aerial images with deep learning
Zhang et al. A review of deep learning-based semantic segmentation for point cloud
Su et al. Splatnet: Sparse lattice networks for point cloud processing
Berger et al. A survey of surface reconstruction from point clouds
Berger et al. State of the art in surface reconstruction from point clouds
Zhou et al. Fast and extensible building modeling from airborne LiDAR data
CN108038905A (en) A kind of Object reconstruction method based on super-pixel
CN113781667B (en) Three-dimensional structure simplified reconstruction method, device, computer equipment and storage medium
Previtali et al. A flexible methodology for outdoor/indoor building reconstruction from occluded point clouds
CN106228507A (en) A kind of depth image processing method based on light field
CN113192200B (en) Method for constructing urban real scene three-dimensional model based on space-three parallel computing algorithm
Chen et al. Research on 3D reconstruction based on multiple views
Luo et al. Supervoxel-based region growing segmentation for point cloud data
Guo et al. Line-based 3d building abstraction and polygonal surface reconstruction from images
Holzmann et al. Plane-based surface regularization for urban 3d reconstruction
Morelli et al. Deep-image-matching: a toolbox for multiview image matching of complex scenarios
Wei et al. BuilDiff: 3D building shape generation using single-image conditional point cloud diffusion models
Rothermel Development of a SGM-based multi-view reconstruction framework for aerial imagery
Hu et al. Learning structural graph layouts and 3D shapes for long span bridges 3D reconstruction
Budianti et al. Background blurring and removal for 3d modelling of cultural heritage objects
CN112002007A (en) Model obtaining method and device based on air-ground image, equipment and storage medium
Wang et al. A simple deep learning network for classification of 3D mobile LiDAR point clouds
Lyra et al. Development of an efficient 3D reconstruction solution from permissive open-source code
Gao et al. Multi-target 3d reconstruction from rgb-d data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20181102