
CN101388115B - Depth image autoegistration method combined with texture information - Google Patents


Info

Publication number
CN101388115B
CN101388115B CN200810224183A
Authority
CN
China
Prior art keywords
registration
image
images
depth
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN200810224183XA
Other languages
Chinese (zh)
Other versions
CN101388115A (en)
Inventor
齐越
赵沁平
杨棽
沈旭昆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN200810224183XA priority Critical patent/CN101388115B/en
Publication of CN101388115A publication Critical patent/CN101388115A/en
Application granted granted Critical
Publication of CN101388115B publication Critical patent/CN101388115B/en

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An automatic depth-image registration method combining texture information, used for reconstructing three-dimensional models of various real objects. The steps are: (1) extract texture images from the scan data, or generate them from the depth images; (2) extract the interest pixels of the texture images based on SIFT features and find a candidate set of matching pixel pairs among them by cross-checking; (3) find the correct matching pixel pairs in the candidate set according to geometric constraints; (4) find, in three-dimensional space, the matching vertex pairs corresponding to the matching pixel pairs and compute the rigid transformation matrix between the two depth images; (5) optimize this result with an improved ICP algorithm; (6) based on pair-wise registration, divide the input sequence of multiple depth images into several strip-shaped subsequences; (7) merge these subsequences with a forward-search strategy and construct the complete three-dimensional model. The invention can quickly and accurately register large-scale 3D scan data to generate three-dimensional models, and at the same time has strong noise resistance and good generality.

Description

Automatic depth-image registration method combining texture information
Technical field
The present invention relates to a method for automatically reconstructing three-dimensional object models from 3D scan data, and in particular to an automatic depth-image registration method that combines texture information. It belongs to the fields of computer virtual reality and computer graphics and is mainly used for reconstructing lifelike three-dimensional models of various real-world objects, in particular for the three-dimensional reconstruction of cultural relics in digital museums.
Background technology
Reconstructing three-dimensional models of real-world objects is used more and more widely in virtual simulation, computer animation, reverse engineering, computer-aided design (CAD), digital museums, and other fields. With the continuous development of 3D scanning devices, reconstructing models of real objects from data acquired by such devices has gradually become a popular 3D modeling approach. Mainstream 3D scanners can acquire not only the depth information of the surface vertices of a model but also their color information; that is, they can simultaneously capture a depth image (range image) and a texture image of equal resolution. A depth image is also referred to as point cloud data (PCD). Because the geometry of real objects is often complex and the field of view of a scanner is limited, the object must be scanned from several different viewpoints in order to capture the complete surface geometry, and the data collected from each viewpoint must then be stitched together to recover a complete three-dimensional model. This process is the registration of the scan data.
At present, scan-data registration can be divided into two processes: pair-wise registration of two views and multi-view registration. For two depth images with an overlapping region, the goal of pair-wise registration is to find the optimal motion matrix between them, so that one depth image is stitched onto the other with minimal error. The registration of two depth images is generally divided into two steps: coarse registration and accurate (fine) registration. First, in the coarse registration stage, an estimate of the rigid transformation matrix between the two depth images is computed; then, in the fine registration stage, this result is optimized by minimizing a predefined error function. Similarly, the goal of multi-view registration is to stitch together a given set of depth images; it mainly involves two aspects: finding the adjacency relations among the depth images (i.e., which depth images overlap) and minimizing the global error.
Two kinds of coarse registration methods are commonly used. The first class defines a local feature descriptor, computes the feature values of all points, finds three or more pairs of corresponding feature points between the two depth images by comparing and matching these feature values, and finally computes the rigid transformation matrix between the two data sets using the method of Horn (Horn, B. Closed-form solution of absolute orientation using unit quaternions. Journal of the Optical Society of America 4 (1987), 629-642). Common local feature descriptors include: spin images (Huber, D.F. Automatic three-dimensional modeling from reality. PhD thesis, Carnegie Mellon University, 2002. Chair: Martial Hebert), descriptors based on surface-derivative characteristics (Gal, R., Cohen-Or, D. Salient geometric features for partial shape matching and similarity. ACM Trans. Graph. 2006, 25, 130-150), descriptors based on integral volumes (Gelfand, N., Mitra, N.J., Guibas, L.J., and Pottmann, H. Robust global registration. In Symposium on Geometry Processing (2005), pp. 197-206), and descriptors based on scale space (Li, X., and Guskov, I. Multi-scale features for approximate alignment of point-based surfaces. In SGP '05: Proceedings of the Third Eurographics Symposium on Geometry Processing (Aire-la-Ville, Switzerland, 2005), Eurographics Association, p. 217). The second class of coarse registration methods is based on randomized algorithms such as RANdom SAmple Consensus (RANSAC). For example, the method proposed by Chen (Chen, C.-S., Hung, Y.-P., and Cheng, J.-B. RANSAC-based DARCES: A new approach to fast automatic registration of partially overlapping range images. IEEE Trans. Pattern Anal. Mach. Intell. 21, 11 (1999), 1229-1234) improves the RANSAC algorithm by pre-selecting some reference points on the two depth images and introducing spatial constraints, finding a balance between registration speed and robustness of the result.
Although the feature representation of existing local geometric features is invariant under rigid transformations and remains robust under a certain amount of noise, it is difficult to obtain good results for the registration of large-scale data with local geometric features alone, because of the inherent trade-off between low-dimensional and high-dimensional features with respect to registration speed and stability, and because depth images contain noise and many geometric defects. In addition, for any local feature descriptor, the feature value of each vertex is computed from the vertices in its neighborhood; therefore, different surface vertices whose neighborhoods have similar geometry will obtain similar feature values, which makes it difficult to find matching feature points between two depth images. Adding extra constraints for pruning, or aggregating the features, are both effective ways to alleviate this problem, but they greatly increase the time complexity of the feature-matching algorithm.
Considering that research on two-dimensional image registration has achieved good results, researchers have begun to combine texture images to solve the registration of two depth images. Roth (Roth, G. Registering two overlapping range images. In: 3-D Digital Imaging and Modeling, Proceedings, Second International Conference on, 1999, 191-200) searches for correct correspondences using triangles formed by matched interest vertices, but the efficiency and accuracy of this method are not high. Bendels (Bendels, G.H., Degener, P., Wahl, R., Kortgen, M., Klein, R. Image-based registration of 3D-range data using feature surface elements. In VAST, 2004, 115-124) first finds the vertices corresponding to the interest pixels, extracts features from these vertices and their neighborhoods, and then uses geometric constraints to find the correct matches between them to complete the registration; the method of Bendels obtains good registration results when the scan data are of high quality and the viewing angle between the data sets changes only slightly. Seo (Seo, J.K., Sharp, G.C., Lee, S.W. Range data registration using photometric features. In: Computer Vision and Pattern Recognition, CVPR 2005, 1140-145) builds on the Bendels method and uses photometric correction techniques to handle the three-dimensional rotation and projective distortion between texture images, but it can only correct textures in planar or nearly planar regions and therefore has difficulty with objects of complex surface geometry. Liu Xiaoli (Liu Xiaoli, Peng Xiang, Li Ameng, Gao Pengdong. Range image matching combining texture information. Journal of Computer-Aided Design & Computer Graphics, 2007, 19 (3), 340-345) uses the Sobel operator to extract feature pixels from the texture images, finds corresponding feature pixels by normalized correlation coefficients, and finally checks the validity of these feature-pixel pairs by the Hausdorff distance; this method relies solely on the texture-image information to find matching pixels and requires human interaction to select the overlapping region, so it is only applicable to the registration of texture images with small, simple changes in shooting angle.
For the fine registration of two depth images, researchers have proposed many accurate registration methods, among which the most influential is the Iterative Closest Point (ICP) algorithm (Besl, P.J., and McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 14, 2 (1992), 239-256; and Chen, Y., and Medioni, G. Object modelling by registration of multiple range images. Image Vision Comput. 10, 3 (1992), 145-155). The ICP algorithm takes the initial positions of the two depth images as its starting state and then, through an iterative process, keeps reducing the sum of distances between all corresponding points of the two depth images until a certain threshold is reached, thereby computing the motion matrix between the two depth images. The ICP algorithm and its variants generally require a good initial state, so a coarse registration process is needed to compute an estimate of the rigid motion between the two depth images as the initial state of the accurate registration.
When registering multiple depth images, a graph G(V, E) representing the topological relations among the depth images of the whole model is usually built; it is called the model graph. Each vertex v_i of the graph represents a depth image, and each edge e_{i,j} indicates that the two depth images represented by vertices v_i and v_j have an overlapping region; the motion matrix between them is assigned to this edge. When constructing the edge set of G, in the worst case every pair of depth images must be tested for overlap, and the time complexity of the algorithm is then O(n^2). Pingi (Pingi, P., Fasano, A., Cignoni, P., Montani, C., Scopigno, R. Exploiting the scanning sequence for automatic registration of large sets of range maps. Computer Graphics Forum 2005, 24 (3), 517-526) simplifies this exhaustive algorithm by predefining a scanning strategy. A spanning tree of the model graph G constitutes a minimal set of elements from which the complete model can be built out of these depth images; therefore, once G has been constructed, the complete model can be reconstructed by finding a spanning tree of G.
However, the strip-oriented scanning and registration strategy proposed by Pingi is too idealized; in practice it is difficult to guarantee that any two adjacent data sets in the depth-image sequence have an overlapping region.
Summary of the invention
The technical problem solved by the present invention is to overcome the deficiencies of the prior art and provide an automatic depth-image registration method combining texture information, which can rapidly process large amounts of scan data to generate three-dimensional models and at the same time has strong noise resistance.
The technical solution of the present invention is an automatic depth-image registration method combining texture information, divided into the registration of two depth images and the registration of multiple depth images, wherein:
The registration of two depth images proceeds as follows.
Let P and Q be the scan data to be registered, with texture image I_p and depth image R_p for P, and texture image I_q and depth image R_q for Q.
Step 1: extract the texture images I_p and I_q from P and Q, or generate the texture images I_p and I_q from the depth images R_p and R_q.
Step 2: extract the interest pixels of the texture images I_p and I_q based on SIFT features, and find the candidate set C of matching pixel pairs among them by cross-checking.
Step 3: use an adaptive RANSAC algorithm to find the correct matching pixel pairs in the candidate set C according to geometric constraints.
Step 4: find, in three-dimensional space, the matching vertex pairs corresponding to the matching pixel pairs and, from these vertex correspondences, compute the rigid transformation matrix between the two depth images.
Step 5: optimize the result of Step 4 with an improved ICP algorithm to complete the accurate registration of the two depth images.
The registration of multiple scans proceeds as follows.
Let the model graph representing the topological relations among the depth images be G(V, E); each vertex v_i ∈ V of G represents a depth image, each edge e_{i,j} of G indicates that the two depth images represented by v_i and v_j have an overlapping region, with the rigid transformation matrix from v_i to v_j as its weight; S = (s_1, s_2, ..., s_n) is the input sequence of the n depth images to be registered.
Step 6: based on pair-wise registration, divide the input sequence S of depth images into k strip-shaped subsequences S = (W_1, W_2, ..., W_k), where each W_i ∈ S, i = 1...k, is a subsequence of several depth images and any two adjacent depth images in W_i have an overlapping region.
Step 7: adopt a forward-search strategy to merge the subsequences and construct the complete three-dimensional model from the merged subsequences.
In Step 1, the method for generating the texture images I_p and I_q from the depth images R_p and R_q is:
apply Laplacian smoothing to the depth image to obtain its base model, compute the distance from each vertex of the depth image to the corresponding vertex on the base model, and use each distance value as the brightness of the pixel corresponding to that vertex in the texture image.
In Step 2, the method for extracting the interest pixels of the texture images I_p and I_q based on SIFT features and finding the candidate set C of matching pixel pairs by cross-checking is:
(1) use the SIFT algorithm to find the interest pixel sets of I_p and I_q respectively; for each interest pixel p detected by the SIFT algorithm, let sift(p) denote the SIFT feature vector of p and define the distance d(p, q) = ||sift(p) - sift(q)|| as the similarity between two interest pixels p and q: the smaller this distance, the more similar the two pixels; the larger the distance, the greater their difference;
(2) according to the Euclidean distance between the SIFT feature vectors of two pixels, search for corresponding pixels between the interest pixel sets of I_p and I_q as follows:
(2.1) delete interest pixels that have no corresponding vertex in the depth image;
(2.2) for an interest pixel i of the texture image, if the vertex corresponding to i in the depth image lies on the boundary of the depth-image mesh, delete i;
(2.3) for each key pixel i_p in I_p, find the two key pixels i_q^1 and i_q^2 in I_q with the smallest distances d(i_p, ·), taking d(i_p, i_q^1) ≤ d(i_p, i_q^2); define a dissimilarity between i_q^1 and i_q^2 from the ratio of d(i_p, i_q^1) to d(i_p, i_q^2), and predefine a dissimilarity threshold δ_diff; if the dissimilarity exceeds δ_diff, then i_q^1 and i_q^2 differ significantly and i_q^1 can serve as the corresponding pixel of i_p, otherwise i_p has multiple correspondences in I_q and is discarded; when the test is passed, the same method is applied in the opposite direction to find the corresponding pixel i_p' of i_q^1 in I_p; if i_p' and i_p are the same pixel, then i_p and i_q^1 form a pair of candidate corresponding pixels between I_p and I_q and (i_p, i_q^1) is added to the candidate set C, otherwise i_p is still considered to have no corresponding pixel in I_q.
In Step 3, the method of using the adaptive RANSAC algorithm to find the correct matching pixel pairs in the candidate set C according to geometric constraints is:
(1) sort all elements (i_p, i_q) of the candidate set C in ascending order of d(i_p, i_q) and put the result into a queue L;
(2) test whether the first l elements of L are correct matches, as follows:
(2.1) randomly select 3 elements from the first l elements of L as a sample;
(2.2) compute the rigid transformation matrix from the geometric information of the vertices corresponding to the 3 matching pixel pairs in the depth images;
(2.3) according to the geometric information of the corresponding vertices, judge whether each of the remaining l-3 elements is a correct match under this rigid transformation; the correctly matched elements are inliers, the rest are outliers, all inliers form the consensus set of this sample, and the upper bound on the number of samples is then updated according to the number of inliers;
(2.4) repeat steps (2.1)-(2.3) until the number of iterations reaches the sampling upper bound, then find the sample whose consensus set has the most elements (i.e., the most inliers) and take the consensus set of that sample as the set of correctly matched elements.
In Step 6, based on pair-wise registration, the steps for dividing the input sequence of depth images into several strip-shaped subsequences are:
(1) add s_1 to the first subsequence W_1;
(2) register all adjacent depth images s_i, s_{i+1} ∈ S, i = 1...n-1, in turn; if s_i and s_{i+1} have an overlapping region, add s_{i+1} to the subsequence containing s_i, otherwise add s_{i+1} to a new subsequence, adding the corresponding edge to the graph G whenever the two images overlap.
In Step 7, the steps for merging these subsequences with the forward-search strategy and constructing the complete three-dimensional model from the merged subsequences are:
(1) for each depth image of subsequence W_i ∈ S, i = 2...k, search the subsequences W_j ∈ S, j = i-1...1, in turn for a depth image that overlaps a depth image in W_i; if such a depth image exists, W_i and W_j can be merged, i.e., the elements of W_j are inserted before the head element of W_i, and the relevant edges are added to the graph G;
(2) repeat step (1) until only one subsequence remains, or no overlapping region exists between any two of the remaining subsequences W_i;
(3) reconstruct the whole model according to the graph G.
Compared with the prior art, the beneficial effects of the present invention are:
(1) The present invention extracts the texture images I_p and I_q from P and Q or generates them from the depth images R_p and R_q. Compared with traditional methods based on matching local geometric features, when the texture images are known, the method of the invention does not compute geometric features for every vertex, which greatly improves speed, and it handles data with geometric noise well.
(2) Extracting the texture images I_p and I_q from P and Q or generating them from the depth images R_p and R_q: compared with existing registration methods based on texture information, when the texture images are unknown, the method of the invention can generate texture images from low-dimensional geometric features of the vertices in three-dimensional space and is therefore clearly more versatile.
(3) Compared with traditional methods based on matching local geometric features, the present invention extracts the interest pixels of the texture images I_p and I_q based on SIFT features and finds the candidate set C of matching pixel pairs by cross-checking, so that image-registration techniques can be used to find the correspondences between vertices; when processing scan data of complex models this gives an obvious speed advantage.
(4) Compared with existing registration methods that combine texture images, the method of the invention extracts the interest pixels of the texture images I_p and I_q based on SIFT features and finds the candidate set C of matching pixel pairs by cross-checking, i.e., it introduces a cross-check of the interest pixels and selects the correct matching pixels in combination with the depth images, which improves the accuracy of the registration result.
(5) The present invention divides the input sequence S of depth images into several strip-shaped subsequences, merges the subsequences with a forward-search strategy, and constructs the complete three-dimensional model from the merged subsequences. For an input of multiple depth images, this divide-and-conquer strategy generates the model graph quickly and avoids the need, present in traditional methods, to register every pair of depth images, which greatly improves the speed of multi-view registration and therefore greatly reduces the time of the whole multi-view registration process.
Description of drawings
Fig. 1 is the flowchart of the pair-wise depth-image registration process of the present invention;
Fig. 2 shows the automatic registration of two depth images in an embodiment of the invention, where (a) shows the two depth images to be registered, (b) shows the two texture images extracted from the scan data, with white points indicating the corresponding pixels between the two images, (c) shows as black points the vertices in three-dimensional space corresponding to the corresponding pixels of the texture images, and (d) shows the final registration result;
Fig. 3 shows the registration of multiple depth images in an embodiment of the invention: for 13 input depth images, all pairs of adjacent data are registered first, so that the whole scanning sequence can be divided into 5 strip-shaped subsequences; these 5 subsequences are then merged to finally generate a complete model;
Figs. 4, 5, 6 and 7 show the registration results of scan data of different models obtained with the method of the invention: Fig. 4 is the registration result of the Big Foot No. 1 model, Fig. 5 of the Big Foot No. 2 model, Fig. 6 of the Buddha model, and Fig. 7 of the Terracotta Warrior model.
Embodiment
As shown in Fig. 1, the present invention comprises the registration of two scans and the registration of multiple scans, as follows:
1. Extract or generate texture images from the scan data.
Let P and Q be the scan data to be registered, with texture image I_p and depth image R_p for P, and texture image I_q and depth image R_q for Q. If P and Q contain texture images, these are extracted directly and converted to grayscale images as I_p and I_q; if P and Q contain no texture images, or the texture images they contain cannot be registered, texture images are first generated from the depth images.
Given a depth image R = {v_1, ..., v_n}, let Γ be a geometric feature descriptor and Γ(v) the feature value at vertex v. The feature values of all vertices can be transformed linearly so that their range is [0, 1], and the texture image generated by Γ is defined by using the normalized value of Γ(v) as the brightness of the pixel corresponding to v; in this way a two-dimensional texture image can be generated from a three-dimensional geometric feature Γ. In theory any geometric feature descriptor can be used to generate the texture image, but for the efficiency of the algorithm and the accuracy of the result, Γ should have good discriminative power and be easy to compute.
The basic idea of texture-image generation in the present invention is as follows. Let R^0 = {v_1^0, ..., v_n^0} be the depth image contained in the scan data and R^k = {v_1^k, ..., v_n^k} the base model obtained by smoothing R^0 k times, where v_i^k is the point corresponding to v_i^0 after smoothing; Γ(v_i^0), i = 1...n, is then defined from the distance between v_i^0 and its smoothed counterpart v_i^k, measured along the normal vector n_i^k at v_i^k. In the implementation, the invention uses mesh-based Laplacian smoothing to obtain the base model; because the depth image R^0 already contains the two-dimensional structure of the point cloud, meshing R^0 is simple and fast. During smoothing, the topological relations between the vertices remain unchanged. Let M^k = (V^k, E) be the triangle mesh corresponding to R^k, where V^k and E are the sets of vertices and edges respectively, and let ∂M^k be the boundary of M^k. Vertices on ∂M^k are kept fixed, while each interior vertex is moved to the average of its neighbors, v_i^{k+1} = (1/d_i) Σ_{j ∈ N(i)} v_j^k, where N(i) = {j | (i, j) ∈ E} and d_i = |N(i)|. In the experiments, k = 64 is generally used.
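As a concrete illustration of this step, the following sketch (hypothetical helper code, not the patent's reference implementation) applies the uniform Laplacian smoothing described above to a gridded depth image and turns the per-vertex displacement from the base model into an 8-bit brightness image; the neighbor lists, the boundary set, the choice k = 64, and the use of the signed displacement along the base-model normal are assumptions taken from the text.

```python
import numpy as np

def laplacian_smooth(vertices, neighbors, boundary, k=64):
    """k rounds of uniform Laplacian smoothing; boundary vertices stay fixed.

    vertices: (n, 3) array; neighbors: list of neighbor-index lists; boundary: set of indices.
    """
    v = vertices.copy()
    for _ in range(k):
        v_new = v.copy()
        for i, nbrs in enumerate(neighbors):
            if i in boundary or not nbrs:
                continue                              # keep boundary vertices fixed
            v_new[i] = v[list(nbrs)].mean(axis=0)     # average of the 1-ring neighbors
        v = v_new
    return v

def depth_to_texture(vertices, base, base_normals):
    """Brightness of each pixel = displacement of its vertex from the base model."""
    # signed distance along the base-model normal (assumed form of the descriptor Γ)
    gamma = np.einsum('ij,ij->i', vertices - base, base_normals)
    gmin, gmax = gamma.min(), gamma.max()
    gamma = (gamma - gmin) / (gmax - gmin + 1e-12)    # linear map to [0, 1]
    return (gamma * 255).astype(np.uint8)
```

The resulting per-vertex brightness values are then laid out on the pixel grid of the depth image to give the generated texture image I_p or I_q.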
2. Find the candidate set of corresponding pixels between the texture images.
(1) Considering that SIFT features are invariant to image scaling and rotation and also fairly robust to illumination changes, noise, occlusion, and small viewpoint changes, the invention uses the SIFT algorithm to find the interest pixel sets of I_p and I_q respectively.
(2) Search for corresponding pixels between the interest pixel sets of I_p and I_q. For each interest pixel p detected by the SIFT algorithm, let sift(p) denote the SIFT feature vector of p and define the distance d(p, q) = ||sift(p) - sift(q)|| as the similarity between two interest pixels p and q: the smaller this distance, the more similar the two pixels; the larger the distance, the greater their difference. Because of the possible viewpoint changes between the depth images and the complexity of the texture images themselves, simply using the Euclidean distance between SIFT feature vectors as the criterion for finding corresponding pixels often introduces a large number of wrong matches, and the SIFT algorithm also finds too many interest pixels. The invention therefore uses the following steps to find the candidate set C of corresponding pixels between the two texture images:
(2.1) to ensure that every pixel in the result set has a corresponding vertex in the depth image, delete interest pixels that have no corresponding vertex in the depth image;
(2.2) for an interest pixel p of the texture image, if the vertex corresponding to p lies on the boundary of the depth-image mesh, delete p;
(2.3) find the candidate set of corresponding pixels between I_p and I_q by cross-checking. For each key pixel i_p in I_p, find the two key pixels i_q^1 and i_q^2 in I_q with the smallest distances d(i_p, ·); without loss of generality assume d(i_p, i_q^1) ≤ d(i_p, i_q^2). Define a dissimilarity between i_q^1 and i_q^2 from the ratio of d(i_p, i_q^1) to d(i_p, i_q^2) and predefine a dissimilarity threshold δ_diff. If the dissimilarity exceeds δ_diff, i_q^1 and i_q^2 are considered significantly different and i_q^1 can serve as the corresponding pixel of i_p; otherwise i_p is considered to have multiple correspondences in I_q and is discarded. When the test is passed, the same method is applied in the opposite direction to find the corresponding pixel i_p' of i_q^1 in I_p; if i_p' and i_p are the same pixel, i_p and i_q^1 are taken as a pair of candidate corresponding pixels between I_p and I_q and (i_p, i_q^1) is added to the candidate set C, otherwise i_p is still considered to have no corresponding pixel in I_q. In the experiments, δ_diff = 0.2 is generally used.
3. Select the set of correctly matched pixels according to the depth images.
The candidate set C of corresponding pixels found from the SIFT feature vectors alone still contains wrong matches, so another method is needed to pick out the correctly matched pixel pairs in C. When the three-dimensional information of the vertices corresponding to the pixels is known, the correctness of a match can be judged entirely with three-dimensional constraints. The invention uses an adaptive RANSAC algorithm, combined with the three-dimensional information of the vertices, to find the correctly matched pixel pairs in the candidate set C as follows:
(1) sort all elements (i_p, i_q) of the candidate set C in ascending order of d(i_p, i_q) and put the result into a queue L;
(2) since recovering a rigid transformation matrix requires at least three pairs of non-collinear matching vertices in three-dimensional space, the invention only tests whether the first l elements of L are correct matches; in the experiments, l = 50 is generally used:
(2.1) randomly select 3 elements from the first l elements of L as a sample;
(2.2) compute the rigid transformation matrix from the geometric information of the vertices corresponding to these 3 matching pixel pairs in the depth images;
(2.3) according to the geometric information of the corresponding vertices, judge whether each of the remaining l-3 elements is a correct match under this rigid transformation; the correctly matched elements are inliers, the rest are outliers, all inliers form the consensus set of this sample, and the upper bound on the number of samples is then updated according to the number of inliers;
(2.4) repeat the above three steps until the number of iterations reaches the sampling upper bound; the consensus set of the sample with the most inliers is finally taken as the set of correct matches.
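The adaptive RANSAC loop can be sketched as follows (illustrative only, not the patent's code): the inlier tolerance `tol`, the success probability, and the standard adaptive update of the sampling upper bound are assumptions; `solve_rigid` stands for any closed-form rigid-transform solver from three point pairs, such as the one sketched under step 4 below.

```python
import numpy as np

def adaptive_ransac(src, dst, solve_rigid, tol=5.0, p_success=0.99, max_iters=2000):
    """src, dst: (l, 3) arrays of corresponding vertices of the first l candidates.

    Returns the index set of the largest consensus set found.
    """
    n = len(src)
    best_inliers = np.array([], dtype=int)
    if n < 3:
        return best_inliers
    n_iters, it = max_iters, 0
    while it < n_iters:
        sample = np.random.choice(n, 3, replace=False)
        R, t = solve_rigid(src[sample], dst[sample])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = np.flatnonzero(err < tol)
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
            w = max(len(inliers) / n, 1e-6)           # current inlier ratio
            # standard adaptive bound on the number of samples to draw
            n_iters = min(max_iters,
                          int(np.ceil(np.log(1 - p_success) /
                                      np.log(1 - w ** 3 + 1e-12))))
        it += 1
    return best_inliers
```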
4. Find the matching vertex pairs in the depth images corresponding to the correctly matched pixel pairs, and compute an estimate of the rigid transformation matrix between the two depth images with the quaternion-based method.
There exist mappings F_p: R_p → I_p and F_q: R_q → I_q between the depth images and the texture images. Because steps 2 and 3 of the algorithm guarantee that every pixel in the result set C_r has a corresponding vertex in the depth image, i.e., for every pair (i_p, i_q) ∈ C_r there exist vertices v_p ∈ R_p and v_q ∈ R_q such that F_p(v_p) = i_p and F_q(v_q) = i_q, the set of corresponding vertices between R_p and R_q can be obtained from F_p and F_q once C_r has been found, and the rigid transformation matrix is then solved with Horn's method.
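Horn's closed-form solution is the solver cited above; the SVD-based (Kabsch) solver below yields the same least-squares rigid transform and is shown as an assumption-level substitute for illustration.

```python
import numpy as np

def rigid_from_correspondences(src, dst):
    """Least-squares rigid transform (R, t) such that dst ≈ src @ R.T + t.

    SVD-based solver; equivalent in result to Horn's quaternion method.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)                           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # avoid reflections
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t
```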
5. Fine registration.
Once the estimate of the rigid transformation matrix between the two depth images has been computed, the ICP algorithm or one of its variants can be run with this result as the initial state to finish the iterative optimization. In the fine registration, the present invention improves the classical ICP algorithm in two ways: first, vertices lying on or near the boundary of a depth image are not considered during the iterations, which yields better accuracy; second, the distance threshold is adjusted dynamically according to the distances of the corresponding points at each iteration, so that in every iteration only the corresponding points within the threshold are used to compute the new transformation matrix and to minimize the sum of distances between corresponding points.
If during fine registration the sum of distances of the corresponding points does not converge, or converges to a value exceeding a predefined threshold, the two depth images are considered impossible to register, i.e., the coarse registration result is incorrect or the two depth images have no overlapping region. In the registration of multiple depth images, this criterion is used to decide whether two depth images overlap.
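A sketch of the modified ICP iteration described above is given below, assuming a nearest-neighbour search with scipy's cKDTree and reusing a rigid-transform solver such as `rigid_from_correspondences` from the previous sketch. The rule for the dynamic threshold (a multiple of the median residual) is an assumption: the text only states that the threshold is adjusted from the corresponding-point distances at each iteration. The same boundary mask would in practice also be applied to the source vertices.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_refine(src, dst, dst_boundary_mask, R, t, solve_rigid,
               iters=50, thresh_scale=3.0):
    """Refine (R, t) so that src aligns to dst, ignoring boundary target vertices."""
    interior = dst[~dst_boundary_mask]                # drop boundary / near-boundary vertices
    tree = cKDTree(interior)
    for _ in range(iters):
        moved = src @ R.T + t
        dist, idx = tree.query(moved)
        threshold = thresh_scale * np.median(dist)    # dynamic distance threshold (assumed rule)
        keep = dist < threshold
        if keep.sum() < 3:
            break
        dR, dt = solve_rigid(moved[keep], interior[idx[keep]])
        R, t = dR @ R, dR @ t + dt                    # compose the incremental update
    return R, t
```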
The registration of multiple scans proceeds as follows.
When registering n depth images, let the model graph representing the topological relations among the depth images be G(V, E), where each vertex v_i ∈ V of G represents a depth image, |V| = n, each edge e_{i,j} of G indicates that the two depth images represented by v_i and v_j have an overlapping region, and the weight of this edge is the rigid transformation matrix from v_i to v_j. Clearly, if e_{i,j} exists then e_{j,i} also exists, and its weight is the inverse of the matrix of e_{i,j}. A spanning tree of the graph G is a minimal set of elements from which the complete model can be built out of these depth images; therefore, once G has been constructed, the complete model can be reconstructed by finding a spanning tree of G.
6. Based on pair-wise registration, divide the input sequence of depth images into several strip-shaped subsequences.
Let S = (s_1, s_2, ..., s_n) be the input sequence of the n depth images to be registered. When registering these data, the invention likewise needs to build the model graph G(V, E). Although every pair of depth images in S could be registered to decide whether they overlap and, if so, to compute the corresponding rigid transformation matrix, thereby constructing all edges of G, this is a very time-consuming task for large-scale data. Under the scanning strategy of Pingi, s_i overlaps both s_{i-1} and s_{i+1}, so registering the adjacent depth images s_i, s_{i+1} ∈ S (i = 1...n-1) in turn builds the graph G and guarantees that G is connected. However, this scanning and registration strategy of Pingi is too idealized, and in practice it is hard to guarantee that every pair of adjacent data s_i, s_{i+1} (i = 1...n-1) in S overlaps. The present invention assumes that the user scans in a certain order, i.e., completes the scan in strips, but that consecutive strips are not necessarily contiguous: the last depth image of one strip and the first depth image of the next strip do not necessarily overlap. Under this assumption, the sequence S can be divided into k strip-shaped subsequences S = (W_1, W_2, ..., W_k), each of which satisfies Pingi's assumption, i.e., any two adjacent depth images in W_i overlap. The concrete division method is:
(1) add s_1 to the first subsequence W_1;
(2) register all adjacent depth images s_i, s_{i+1} ∈ S, i = 1...n-1, in turn; if s_i and s_{i+1} have an overlapping region, add s_{i+1} to the subsequence containing s_i, otherwise add s_{i+1} to a new subsequence, adding the corresponding edge to the graph G whenever the two images overlap.
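In code, this division is a single pass over adjacent pairs; in the sketch below, `pairwise_register` stands for the two-image registration described above and is assumed to return the rigid transform, or None when the pair does not overlap (the convergence criterion of the fine registration).

```python
def split_into_strips(scans, pairwise_register):
    """Divide the input sequence into strip subsequences of pairwise-overlapping scans.

    Returns (strips, edges), where edges collects (i, i+1, transform) for the model graph G.
    """
    strips, edges = [[0]], []
    for i in range(len(scans) - 1):
        transform = pairwise_register(scans[i], scans[i + 1])   # None if no overlap
        if transform is not None:
            strips[-1].append(i + 1)                 # same strip as s_i
            edges.append((i, i + 1, transform))      # edge e_{i,i+1} of G
        else:
            strips.append([i + 1])                   # start a new strip
    return strips, edges
```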
7. Merge these subsequences with a forward-search strategy and construct the complete three-dimensional model.
(1) for each depth image of subsequence W_i ∈ S, i = 2...k, search the subsequences W_j ∈ S, j = i-1...1, in turn for a depth image that overlaps a depth image in W_i; if such a depth image exists, W_i and W_j can be merged, i.e., the elements of W_j are inserted before the head element of W_i, and the relevant edges are added to the graph G;
(2) repeat step (1) until only one subsequence remains, or no overlapping region exists between any two of the remaining subsequences; in the former case the graph G is connected, so a spanning tree of it can be found and the complete model constructed, whereas in the latter case each remaining subsequence represents a part of the model, but these parts cannot be stitched together with the automatic registration algorithm of the invention;
(3) reconstruct the whole model from the spanning tree of the graph G.
Obviously, the strategy proposed by Pingi can be regarded as a special case of the present invention. Compared with traditional multi-view registration methods that need to register every pair of scans, with O(n^2) time complexity, the method of the invention does not register scan data within the same subsequence against each other again, which greatly reduces the time of the whole multi-view registration process.
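The forward-search merge can be sketched as follows; `find_overlap` is an assumed helper that tries to register the images of W_i against those of an earlier strip W_j and reports the first successful pair, and the returned edges are those that would be added to the model graph G.

```python
def merge_strips(strips, scans, pairwise_register):
    """Merge strip subsequences by searching earlier strips for an overlapping image."""
    def find_overlap(strip_a, strip_b):
        # first pair (a, b) of scan indices that registers successfully, plus the transform
        for a in strip_a:
            for b in strip_b:
                transform = pairwise_register(scans[a], scans[b])
                if transform is not None:
                    return a, b, transform
        return None

    extra_edges, merged = [], True
    while len(strips) > 1 and merged:
        merged = False
        for i in range(1, len(strips)):                  # W_i, i = 2...k
            for j in range(i - 1, -1, -1):               # search W_{i-1}, ..., W_1
                hit = find_overlap(strips[i], strips[j])
                if hit is not None:
                    a, b, transform = hit
                    extra_edges.append((a, b, transform))
                    strips[i] = strips[j] + strips[i]    # insert W_j before the head of W_i
                    del strips[j]
                    merged = True
                    break
            if merged:
                break
    return strips, extra_edges
```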
As shown in Fig. 2, (a) shows the two depth images to be registered, (b) the two texture images extracted from the scan data, with white points indicating the corresponding pixels between the two images, (c) as black points the vertices in three-dimensional space corresponding to those pixels, and (d) the final registration result; Fig. 2 thus illustrates the automatic registration of two depth images by example.
As shown in Fig. 3, for the 13 input depth images, all pairs of adjacent data are registered first, so that the whole scanning sequence can be divided into 5 strip-shaped subsequences; these 5 subsequences are then merged to finally generate a complete model.
Figs. 4, 5, 6 and 7 show the registration results of scan data of different models obtained with the method of the invention: Fig. 4 is the registration result of the Big Foot No. 1 model, Fig. 5 of the Big Foot No. 2 model, Fig. 6 of the Buddha model, and Fig. 7 of the Terracotta Warrior model. The scan data of the Big Foot No. 1 and Big Foot No. 2 models contain texture images, so registration can be performed directly on the known texture images, whereas the scan data of the Buddha and Terracotta Warrior models contain no texture images, so texture images must first be generated from the depth images and registration is then performed on the generated texture images. The registration results show that the invention registers both kinds of scan data well, whether they contain texture images or not.

Claims (3)

1. An automatic depth-image registration method combining texture information, characterized in that it is divided into the registration of two depth images and the registration of multiple depth images, wherein:
The registration of two depth images proceeds as follows:
Let P and Q be the scan data to be registered, with texture image I_p and depth image R_p for P, and texture image I_q and depth image R_q for Q;
Step 1: extract the texture images I_p and I_q from P and Q, or generate the texture images I_p and I_q from the depth images R_p and R_q, wherein the method for generating I_p and I_q from R_p and R_q is: apply Laplacian smoothing to the depth image to obtain its base model, compute the distance from each vertex of the depth image to the corresponding vertex on the base model, and use each distance value as the brightness of the pixel corresponding to that vertex in the texture image;
Step 2: extract the interest pixels of the texture images I_p and I_q based on SIFT features, and find the candidate set C of matching pixel pairs among them by cross-checking; the concrete steps are:
(2.1) use the SIFT algorithm to find the interest pixel sets of I_p and I_q respectively; for each interest pixel p detected by the SIFT algorithm, let sift(p) denote the SIFT feature vector of p and define the distance d(p, q) = ||sift(p) - sift(q)|| as the similarity between two interest pixels p and q: the smaller this distance, the more similar the two pixels; the larger the distance, the greater their difference;
(2.2) according to the Euclidean distance between the SIFT feature vectors of two pixels, search for corresponding pixels between the interest pixel sets of I_p and I_q as follows:
(2.2.1) delete interest pixels that have no corresponding vertex in the depth image;
(2.2.2) for an interest pixel p of the texture image, if the vertex corresponding to p in the depth image lies on the boundary of the depth-image mesh, delete p;
(2.2.3) for each key pixel i_p in I_p, find the two key pixels i_q^1 and i_q^2 in I_q with the smallest distances d(i_p, ·), taking d(i_p, i_q^1) ≤ d(i_p, i_q^2); define a dissimilarity between i_q^1 and i_q^2 from the ratio of d(i_p, i_q^1) to d(i_p, i_q^2), and predefine a dissimilarity threshold δ_diff; if the dissimilarity exceeds δ_diff, then i_q^1 and i_q^2 differ significantly and i_q^1 can serve as the corresponding pixel of i_p, otherwise i_p has multiple correspondences in I_q and is discarded; when the test is passed, the same method is applied in the opposite direction to find the corresponding pixel i_p' of i_q^1 in I_p; if i_p' and i_p are the same pixel, then i_p and i_q^1 form a pair of candidate corresponding pixels between I_p and I_q and (i_p, i_q^1) is added to the candidate set C, otherwise i_p is still considered to have no corresponding pixel in I_q;
Step 3: use an adaptive RANSAC algorithm to find the correct matching pixel pairs in the candidate set C according to geometric constraints; the concrete steps are:
(3.1) sort all elements (i_p, i_q) of the candidate set C in ascending order of d(i_p, i_q) and put the result into a queue L;
(3.2) test whether the first l elements of L are correct matches, as follows:
(3.2.1) randomly select 3 elements from the first l elements of L as a sample;
(3.2.2) compute the rigid transformation matrix from the geometric information of the vertices corresponding to the 3 matching pixel pairs in the depth images;
(3.2.3) according to the geometric information of the corresponding vertices, judge whether each of the remaining l-3 elements is a correct match under this rigid transformation; the correctly matched elements are inliers, the rest are outliers, all inliers form the consensus set of this sample, and the upper bound on the number of samples is then updated according to the number of inliers;
(3.2.4) repeat steps (3.2.1)-(3.2.3) until the number of iterations reaches the sampling upper bound, then find the sample whose consensus set has the most elements, i.e., the most inliers, and take the consensus set of that sample as the set of correctly matched elements;
Step 4: find, in three-dimensional space, the matching vertex pairs corresponding to the matching pixel pairs and, from these vertex correspondences, compute the rigid transformation matrix between the two depth images;
Step 5: optimize the result of Step 4 with an improved ICP algorithm to complete the accurate registration of the two depth images; the concrete steps are: first, vertices lying on or near the boundary of a depth image are not considered during the iterations, which yields better accuracy; second, the distance threshold is adjusted dynamically according to the distances of the corresponding points at each iteration, so that in every iteration only the corresponding points within the threshold are used to compute the new transformation matrix and to minimize the sum of distances between corresponding points;
The registration of multiple scans proceeds as follows:
Let the model graph representing the topological relations among the depth images be G(V, E); each vertex v_i ∈ V of G represents a depth image, each edge e_{i,j} of G indicates that the two depth images represented by v_i and v_j have an overlapping region, with the rigid transformation matrix from v_i to v_j as its weight; S = (s_1, s_2, ..., s_n) is the input sequence of the n depth images to be registered;
Step 6: based on pair-wise registration, divide the input sequence S of depth images into k strip-shaped subsequences S = (W_1, W_2, ..., W_k), where each W_i ∈ S, i = 1...k, is a subsequence of several depth images and any two adjacent depth images in W_i have an overlapping region;
Step 7: adopt a forward-search strategy to merge the subsequences and construct the complete three-dimensional model from the merged subsequences.
2. The automatic depth-image registration method combining texture information according to claim 1, characterized in that in Step 6, based on pair-wise registration, the steps for dividing the input sequence of depth images into several strip-shaped subsequences are:
(1) add s_1 to the first subsequence W_1;
(2) register all adjacent depth images s_i, s_{i+1} ∈ S, i = 1...n-1, in turn; if s_i and s_{i+1} have an overlapping region, add s_{i+1} to the subsequence containing s_i, otherwise add s_{i+1} to a new subsequence, adding the corresponding edge to the graph G whenever the two images overlap.
3. The automatic depth-image registration method combining texture information according to claim 1, characterized in that in Step 7, the steps for merging these subsequences with the forward-search strategy and constructing the complete three-dimensional model from the merged subsequences are:
(1) for each depth image of subsequence W_i ∈ S, i = 2...k, search the subsequences W_j ∈ S, j = i-1...1, in turn for a depth image that overlaps a depth image in W_i; if such a depth image exists, merge W_i and W_j, i.e., insert the elements of W_j before the head element of W_i, and add the relevant edges to the graph G;
(2) repeat step (1) until only one subsequence remains, or no overlapping region exists between any two of the remaining subsequences W_i;
(3) reconstruct the whole model according to the graph G.
CN200810224183XA 2008-10-24 2008-10-24 Depth image autoegistration method combined with texture information Expired - Fee Related CN101388115B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200810224183XA CN101388115B (en) 2008-10-24 2008-10-24 Depth image autoegistration method combined with texture information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200810224183XA CN101388115B (en) 2008-10-24 2008-10-24 Depth image autoegistration method combined with texture information

Publications (2)

Publication Number Publication Date
CN101388115A CN101388115A (en) 2009-03-18
CN101388115B true CN101388115B (en) 2011-07-27

Family

ID=40477519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200810224183XA Expired - Fee Related CN101388115B (en) 2008-10-24 2008-10-24 Depth image autoegistration method combined with texture information

Country Status (1)

Country Link
CN (1) CN101388115B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570823A (en) * 2016-10-11 2017-04-19 山东科技大学 Planar feature matching-based point cloud crude splicing method

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101697232B (en) * 2009-09-18 2012-03-07 浙江大学 SIFT characteristic reducing method facing close repeated image matching
CN102034231B (en) * 2009-09-25 2012-06-06 汉王科技股份有限公司 Image sequence registration method
CN101719282B (en) * 2009-11-11 2012-06-27 哈尔滨工业大学 Digital jointing method of three-dimensional object fractures based on fracture section
CN101916456B (en) * 2010-08-11 2012-01-04 无锡幻影科技有限公司 Method for producing personalized three-dimensional cartoon
CN101930618B (en) * 2010-08-20 2012-05-30 无锡幻影科技有限公司 Method for manufacturing personalized two-dimensional cartoon
CN102034274A (en) * 2010-12-10 2011-04-27 中国人民解放军国防科学技术大学 Method for calculating corresponding relationship between frame and frame in 3D point cloud sequence
CN102202159B (en) * 2011-03-29 2014-08-27 段连飞 Digital splicing method for unmanned aerial photographic photos
EP2697743A4 (en) * 2011-04-11 2014-09-24 Intel Corp Gesture recognition using depth images
CN102208033B (en) * 2011-07-05 2013-04-24 北京航空航天大学 Data clustering-based robust scale invariant feature transform (SIFT) feature matching method
CN102945289B (en) * 2012-11-30 2016-01-06 苏州搜客信息技术有限公司 Based on the image search method of CGCI-SIFT local feature
US9083960B2 (en) * 2013-01-30 2015-07-14 Qualcomm Incorporated Real-time 3D reconstruction with power efficient depth sensor usage
CN103236081B (en) * 2013-04-25 2016-04-27 四川九洲电器集团有限责任公司 A kind of method for registering of colour point clouds
CN103236064B (en) * 2013-05-06 2016-01-13 东南大学 A kind of some cloud autoegistration method based on normal vector
CN103247040B (en) * 2013-05-13 2015-11-25 北京工业大学 Based on the multi-robot system map joining method of hierarchical topology structure
CN103279955B (en) * 2013-05-23 2016-03-09 中国科学院深圳先进技术研究院 Image matching method and system
KR102225617B1 (en) * 2014-11-03 2021-03-12 한화테크윈 주식회사 Method of setting algorithm for image registration
CN104408701B (en) * 2014-12-03 2018-10-09 中国矿业大学 A kind of large scene video image joining method
CN105913492B (en) * 2016-04-06 2019-03-05 浙江大学 A kind of complementing method of RGBD objects in images shape
CN106204710A (en) * 2016-07-13 2016-12-07 四川大学 The method that texture block based on two-dimensional image comentropy is mapped to three-dimensional grid model
CN109215110A (en) * 2017-07-21 2019-01-15 湖南拓视觉信息技术有限公司 Whole scene scanning means and 3-D scanning modeling
CN108520055A (en) * 2018-04-04 2018-09-11 天目爱视(北京)科技有限公司 A kind of product testing identification method compared based on temmoku point cloud
CN109035384A (en) * 2018-06-06 2018-12-18 广东您好科技有限公司 Pixel synthesis technology based on three-dimensional scanning and model automatic vertex processing engine
CN109410318B (en) * 2018-09-30 2020-09-08 先临三维科技股份有限公司 Three-dimensional model generation method, device, equipment and storage medium
CN112734821B (en) * 2019-10-28 2023-12-22 阿里巴巴集团控股有限公司 Depth map generation method, computing node cluster and storage medium
CN110946654B (en) * 2019-12-23 2022-02-08 中国科学院合肥物质科学研究院 Bone surgery navigation system based on multimode image fusion
CN115953406B (en) * 2023-03-14 2023-05-23 杭州太美星程医药科技有限公司 Matching method, device, equipment and readable medium for medical image registration

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1355277A2 (en) * 2002-04-18 2003-10-22 Canon Europa N.V. Three-dimensional computer modelling
CN1588424A (en) * 2004-07-02 2005-03-02 清华大学 Finger print identifying method based on broken fingerprint detection
CN1209073C (en) * 2001-12-18 2005-07-06 中国科学院自动化研究所 Identity discriminating method based on living body iris
CN101154265A (en) * 2006-09-29 2008-04-02 中国科学院自动化研究所 Iris Recognition Method Based on Local Binary Pattern Features and Graph Matching

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1209073C (en) * 2001-12-18 2005-07-06 中国科学院自动化研究所 Identity discriminating method based on living body iris
EP1355277A2 (en) * 2002-04-18 2003-10-22 Canon Europa N.V. Three-dimensional computer modelling
CN1588424A (en) * 2004-07-02 2005-03-02 清华大学 Finger print identifying method based on broken fingerprint detection
CN101154265A (en) * 2006-09-29 2008-04-02 中国科学院自动化研究所 Iris Recognition Method Based on Local Binary Pattern Features and Graph Matching

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570823A (en) * 2016-10-11 2017-04-19 山东科技大学 Planar feature matching-based point cloud crude splicing method
CN106570823B (en) * 2016-10-11 2019-06-18 山东科技大学 Point cloud coarse stitching method based on plane feature matching

Also Published As

Publication number Publication date
CN101388115A (en) 2009-03-18

Similar Documents

Publication Publication Date Title
CN101388115B (en) Depth image autoegistration method combined with texture information
CN100559398C (en) Automatic deepness image registration method
US10109055B2 (en) Multiple hypotheses segmentation-guided 3D object detection and pose estimation
CN108921895B (en) Sensor relative pose estimation method
WO2015139574A1 (en) Static object reconstruction method and system
Li et al. Confidence-based large-scale dense multi-view stereo
CN108776989B (en) Low-texture planar scene reconstruction method based on sparse SLAM framework
CN113538569B (en) Weak texture object pose estimation method and system
EP3563346A1 (en) Method and device for joint segmentation and 3d reconstruction of a scene
CN112015275A (en) Digital twin AR interaction method and system
Ceylan et al. Factored facade acquisition using symmetric line arrangements
CN114332510A (en) A Hierarchical Image Matching Method
Zhu et al. Automatic multi-view registration of unordered range scans without feature extraction
Guo et al. Line-based 3d building abstraction and polygonal surface reconstruction from images
Wang et al. Vid2Curve: simultaneous camera motion estimation and thin structure reconstruction from an RGB video
CN115393519A (en) Three-dimensional reconstruction method based on infrared and visible light fusion image
Fan et al. Convex hull aided registration method (CHARM)
Wang et al. Image-based building regularization using structural linear features
Kordelas et al. State-of-the-art algorithms for complete 3d model reconstruction
Zhao et al. 3D object tracking via boundary constrained region-based model
Wu et al. Photogrammetric reconstruction of free-form objects with curvilinear structures
Labatut et al. Hierarchical shape-based surface reconstruction for dense multi-view stereo
Chen et al. Multiview stereo via noise suppression patchmatch
Lin et al. High-resolution multi-view stereo with dynamic depth edge flow
CN117576303A (en) Three-dimensional image generation method, device, equipment and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110727

Termination date: 20141024

EXPY Termination of patent right or utility model