
CN101626513A - Method and system for generating panoramic video - Google Patents


Info

Publication number
CN101626513A
Authority
CN
China
Prior art keywords
video
view
transformation matrix
background
channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN200910109043A
Other languages
Chinese (zh)
Inventor
裴继红
谢维信
何巧珍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN200910109043A priority Critical patent/CN101626513A/en
Publication of CN101626513A publication Critical patent/CN101626513A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

The present invention, applicable to the technical field of video image processing, provides a panoramic video generation method and system. The method includes: capturing multiple video streams from different viewpoints with multiple cameras; separating the background and the moving foreground in each stream, obtaining multiple background videos and multiple moving-foreground videos; obtaining a projective transformation matrix from the background videos; generating a background panoramic video from the projective transformation matrix and the background videos, and a foreground panoramic video from the projective transformation matrix and the moving-foreground videos; and fusing the background panoramic video and the foreground panoramic video to generate the panoramic video. The invention automatically stitches the videos of multiple cameras with partially overlapping fields of view into a dynamic panoramic video, and largely solves the ghosting and double-image problems of moving targets in the overlap regions of the panoramic field of view, as well as the poor stability of the automatic computation of the camera projective transformation matrices.

Figure 200910109043



Description

Method and system for generating panoramic video
Technical field
The invention belongs to the technical field of image processing, and relates in particular to a method and system for generating panoramic video.
Background technology
A multi-camera panoramic video is a large-field-of-view video covering the content of every viewing angle, formed by stitching and fusing the video images obtained by two or more cameras configured at different viewing angles.
Two classes of approaches are currently used to generate multi-camera panoramic video:
In the first class, the cameras for the different viewing angles are placed at a single spatial viewpoint, and the individual camera videos are stitched into one panoramic video with a wide-angle, annular, or hemispherical field of view. Under the single-viewpoint arrangement, the scene and the moving targets in the fields of view of the different cameras differ very little in their distances to each camera; that is, the same object in the overlapping field of view of two cameras has almost the same depth in each camera, so the fields of view of the different cameras essentially satisfy the condition of a common affine transformation. Because the depth differences of different moving targets have little effect on the panoramic field of view, single-viewpoint multi-view techniques present little technical difficulty, and mature products already exist. The panoramic vision system Ladybug of the Canadian company PointGrey is a single-viewpoint panoramic vision product built from six cameras, and the Shenzhen research institute of Peking University together with a security technology company has jointly developed a demonstration system similar to PointGrey's Ladybug.
In the second class, the cameras for the different viewing angles are placed at different spatial viewpoints, and the videos of the individual cameras are stitched into one large-field panoramic video. Because the cameras occupy different spatial positions, the fields of view of the different cameras do not satisfy a simple common affine transformation. Within the overlapping field of view of two cameras, different scenery and moving targets have different depths in each camera; moving targets in particular, whose motion is unconstrained, change depth continuously as they move. Under these conditions, the ghosting and double-image problems of moving targets in the overlap regions of the panoramic video are generally severe, and technically difficult to solve. Moreover, in this class of panoramic video systems the implementation technique generally differs with the way the multiple cameras are laid out in space.
According to the literature, Texas A&M University has used multiple cameras to develop a real-time panoramic vision system for autonomous navigation; NUS has developed a multi-camera real-time panoramic video-conferencing system; and Carnegie Mellon University has developed a panoramic conference video system built from four cameras. These systems and products are essentially known only through reports, feature introductions, or demonstrations, and no concrete implementation techniques have been published.
All in all, for panoramic video techniques based on multiple viewpoints and multiple viewing angles, the implementation differs with the spatial layout of the cameras. In particular, in the overlap regions of the fields of view of different cameras, the depth differences of moving targets strongly affect the panoramic field of view, and the technical difficulty is high; many technological gaps remain in current multi-viewpoint, multi-view panoramic video techniques. When the prior art generates panoramic video from multiple cameras at multiple viewpoints, moving targets in the overlap regions of the panoramic field of view suffer from ghosting and double images.
Summary of the invention
The purpose of the embodiments of the invention is to provide a panoramic video generation method that solves the problem, in existing panoramic video generation techniques, of ghosting and double images of moving targets in the field-of-view overlap regions of the panoramic video.
The embodiments of the invention achieve this with a panoramic video generation method comprising the following steps:
capturing multiple video streams from different viewpoints with multiple cameras;
separating the background and the moving foreground in each video stream, obtaining multiple background videos and multiple moving-foreground videos;
obtaining a projective transformation matrix from the background videos;
generating a background panoramic video from the projective transformation matrix and the background videos, and a foreground panoramic video from the projective transformation matrix and the moving-foreground videos;
fusing the background panoramic video and the foreground panoramic video to generate the panoramic video.
Another purpose of the embodiments of the invention is to provide a panoramic video generation system, comprising:
a multi-stream video capture unit, for capturing multiple video streams from different viewpoints with multiple cameras;
a separation unit, for separating the background and the moving foreground in each video stream captured by the capture unit, obtaining multiple background videos and multiple moving-foreground videos;
a projective-transformation-matrix computation unit, for obtaining the projective transformation matrix from the background videos produced by the separation unit;
a background-panorama generation unit, for generating the background panoramic video from the projective transformation matrix computed by the projective-transformation-matrix computation unit and the background videos produced by the separation unit;
a foreground-panorama generation unit, for generating the foreground panoramic video from the projective transformation matrix computed by the projective-transformation-matrix computation unit and the moving-foreground videos produced by the separation unit;
a panoramic video generation unit, for fusing the background panorama generated by the background-panorama generation unit with the foreground panorama generated by the foreground-panorama generation unit, generating the panoramic video.
In the embodiments of the invention, the video of each camera is decomposed into video background data and video moving-foreground data, and the background data of two streams is used to compute the field-of-view projective transformation matrix between the two cameras automatically. The background videos are projectively transformed, the transformed videos are embedded into the panoramic field of view, and seamless fusion is applied in the overlap regions; the moving-foreground data is projectively transformed, the foreground moving targets detected in the overlap regions are parallax-corrected, and the targets are fused into the foreground panorama; finally the video background panorama and the video foreground panorama are fused. This automatically turns the dynamic videos of multiple cameras with overlapping fields of view into a dynamic panoramic video, largely solving the ghosting and double-image problems of moving targets in the overlap regions of the panoramic field of view, as well as the poor stability of the automatic computation of the camera projective transformation matrices.
Description of drawings
Fig. 1 is a flow chart of the panoramic video generation method provided by an embodiment of the invention;
Fig. 2 is a flow chart of the projective-transformation-matrix computation provided by an embodiment of the invention;
Fig. 3 is the algorithm flow chart of the feature-point-set computation provided by an embodiment of the invention;
Fig. 4 is a schematic diagram of the 128-dimensional feature-vector description provided by an embodiment of the invention;
Fig. 5 is a schematic diagram of the triangle-area-weighted fusion-coefficient computation provided by an embodiment of the invention;
Fig. 6 is a schematic diagram of imaging parallax in the field-of-view overlap region provided by an embodiment of the invention;
Fig. 7 is a structure chart of the panoramic video generation system provided by an embodiment of the invention.
Embodiment
To make the purpose, technical scheme, and advantages of the invention clearer, the invention is further elaborated below in conjunction with the drawings and embodiments. It should be appreciated that the specific embodiments described here serve only to explain the invention, not to limit it.
In the embodiments of the invention, the multiple video streams captured by multiple cameras are decomposed into background videos and moving-foreground videos; the background videos are used to compute the projective transformation matrices between the fields of view automatically; the background videos are projectively transformed according to the matrices, embedded into the panoramic field of view, and seamlessly fused; the moving-foreground videos are projectively transformed according to the matrices, parallax-corrected in the detected field-of-view overlap regions, and fused into the foreground panorama within the panoramic field of view; finally the background panorama and the foreground panorama are fused, realizing automatic generation of the dynamic panoramic video.
Fig. 1 shows the flow chart of the panoramic video generation method provided by an embodiment of the invention, detailed as follows.
In step S101, multiple video streams from different viewpoints are captured with multiple cameras.
In step S102, the background and the moving foreground in each video stream are separated, obtaining multiple background videos and multiple moving-foreground videos.
Background estimation and motion detection are applied separately to each of the captured video streams, yielding the background videos and the moving-foreground videos. The prior art offers many background-estimation and moving-foreground-detection methods, of which background estimation and motion detection based on a mixture of Gaussian models is a preferred one; they are not enumerated here one by one.
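As a concrete illustration of this separation step, the following is a minimal numpy sketch. For simplicity it stands in for the mixture-of-Gaussians estimator with a single running Gaussian per pixel; the function name and the parameters `alpha` and `k` are illustrative, not from the patent.

```python
import numpy as np

def separate_background(frames, alpha=0.05, k=2.5):
    """Split a video (list of HxW frames) into background and moving-foreground
    streams. Each pixel keeps a running mean/variance (a single-Gaussian
    simplification of the mixture-of-Gaussians model in the text); pixels more
    than k standard deviations from the mean are flagged as moving foreground.
    """
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    mean = frames[0].copy()
    var = np.full_like(mean, 15.0 ** 2)          # initial pixel variance (assumed)
    backgrounds, foregrounds = [], []
    for f in frames:
        d = f - mean
        fg_mask = d ** 2 > (k ** 2) * var        # Mahalanobis-style threshold test
        # Update the model only where the pixel still looks like background.
        mean = np.where(fg_mask, mean, (1 - alpha) * mean + alpha * f)
        var = np.where(fg_mask, var, (1 - alpha) * var + alpha * d ** 2)
        backgrounds.append(mean.copy())
        foregrounds.append(np.where(fg_mask, f, 0.0))
    return np.stack(backgrounds), np.stack(foregrounds)
```

A full implementation would maintain several Gaussian modes per pixel with mixing weights, but the decomposition into a background stream and a foreground stream is the same.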
In step S103, the projective transformation matrix is obtained from the background videos.
Obtaining the projective transformation matrix from the background videos specifically comprises: computing the feature points of each background video; obtaining a candidate match-point set from the feature points of each background video with a nearest-neighbour / second-nearest-neighbour distance decision function; purifying the candidate match-point set; and obtaining the projective transformation matrix from the purified match-point set.
To explain the invention better, please refer to Fig. 2. Taking two video streams as an example, the process of obtaining the projective transformation matrix from the two background videos is detailed as follows.
In step S21, the feature point sets D1 and D2 of the two background videos are computed, taking as an example a 128-dimensional feature vector for each feature point in D1 and D2. The computation flow is shown in Fig. 3 and detailed as follows.
In step S211, for each background video, the Gaussian scale pyramid and the Gaussian scale-difference pyramid are computed and constructed, as follows:
Suppose the function corresponding to the background video is $f_B(x,y)$, and the Gaussian kernel $G(x,y,\sigma)$ is given by formula (1):

$$G(x,y,\sigma)=\frac{1}{2\pi\sigma^{2}}\exp\left(-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right)\qquad(1)$$

In formula (1), σ is the standard deviation, for which the empirical value σ = 1.5 is generally suitable, and exp(·) denotes the exponential function. The Gaussian scale pyramid $f_G(x,y,k)$ is then given by formula (2):

$$f_G(x,y,k)=f_B(x,y)*G(x,y,2^{k}\sigma),\qquad k=0,1,2,\ldots\qquad(2)$$

In formula (2), * denotes convolution. The Gaussian scale-difference pyramid $f_D(x,y,k)$ is given by formula (3):

$$f_D(x,y,k)=f_G(x,y,k)-f_G(x,y,k-1),\qquad k=1,2,\ldots\qquad(3)$$
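The pyramid construction of formulas (1)-(3) can be sketched in numpy as follows. The helper names are illustrative, the convolution is a deliberately naive "same"-size implementation (a real system would use separable or FFT-based filtering), and the kernel is normalized so blurring preserves brightness.

```python
import numpy as np

def gaussian_kernel(sigma):
    """Discrete sampling of the Gaussian kernel of formula (1), normalized."""
    radius = int(np.ceil(3 * sigma))
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    return g / g.sum()

def convolve2d_same(img, ker):
    """Naive same-size 2-D convolution with zero padding."""
    r = ker.shape[0] // 2
    padded = np.pad(img, r)
    flipped = ker[::-1, ::-1]          # convolution = correlation with flipped kernel
    out = np.zeros_like(img, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + 2 * r + 1, j:j + 2 * r + 1] * flipped)
    return out

def dog_pyramid(f_b, sigma=1.5, levels=4):
    """Gaussian scale pyramid f_G(x,y,k) = f_B * G(x,y,2^k*sigma), formula (2),
    and its difference-of-Gaussian pyramid f_D, formula (3)."""
    f_b = np.asarray(f_b, dtype=np.float64)
    f_g = [convolve2d_same(f_b, gaussian_kernel((2 ** k) * sigma)) for k in range(levels)]
    f_d = [f_g[k] - f_g[k - 1] for k in range(1, levels)]
    return f_g, f_d
```

On a constant image the difference-of-Gaussian response is zero away from the (zero-padded) borders, which is a quick sanity check of the construction.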
In step S212, the set of local extremum points of the difference-of-Gaussian pyramid is computed. Suppose the difference pyramid has s layers, s ≥ 3. The local extremum points are determined as follows:
Let (x, y) be a pixel position of the difference-of-Gaussian pyramid, and k ∈ {1, 2, ..., s} its layer. Define

$$F_{\min}(x,y,k)=\begin{cases}1,& f_D(x,y,k)<f_D(x+m,y+n,k+l)\ \ \forall\,m,n,l\in\{-1,0,1\},\ |m|+|n|+|l|\neq 0\\[2pt]0,&\text{otherwise}\end{cases}$$

$$F_{\max}(x,y,k)=\begin{cases}1,& f_D(x,y,k)>f_D(x+m,y+n,k+l)\ \ \forall\,m,n,l\in\{-1,0,1\},\ |m|+|n|+|l|\neq 0\\[2pt]0,&\text{otherwise}\end{cases}$$

The local extremum set D1 is then given by formula (4):

$$D1=\{P=(x,y,k)\mid F_{\min}(x,y,k)+F_{\max}(x,y,k)\neq 0,\ (x,y)\in\mathbb{Z}^{2},\ k=2,3,\ldots,s-1\}\qquad(4)$$
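Step S212 amounts to testing each interior sample of the DoG stack against its 26 neighbours in a 3×3×3 cube. A small numpy sketch (0-based layer indices, so the interior layers of formula (4) become k = 1 .. s−2; loop-based and unoptimized):

```python
import numpy as np

def local_extrema(f_d):
    """Return points (x, y, k) of the DoG stack f_d (shape s x H x W) that are
    strictly greater or strictly smaller than all 26 neighbours in the
    surrounding 3x3x3 cube, for interior layers only (formula (4))."""
    f_d = np.asarray(f_d, dtype=np.float64)
    s, h, w = f_d.shape
    pts = []
    for k in range(1, s - 1):
        for x in range(1, h - 1):
            for y in range(1, w - 1):
                cube = f_d[k - 1:k + 2, x - 1:x + 2, y - 1:y + 2]
                c = f_d[k, x, y]
                others = np.delete(cube.ravel(), 13)   # drop the centre sample
                if (c > others).all() or (c < others).all():
                    pts.append((x, y, k))
    return pts
```

Because the inequalities are strict, plateaus of equal values produce no extremum, which matches the definitions of F_min and F_max above.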
In step S213, a 128-dimensional feature vector is computed for each point P = (x, y, k) in the local extremum set D1, as follows. For an extremum point P = (x, y, k), take a 16 × 16 window W16 centred on (x, y) in the original background video function fB(x, y), and compute the gradient magnitude and direction of fB(x, y) at each pixel of W16. Cut W16 into subwindows of size 4 × 4, of which there are 4 × 4 = 16, as shown in Fig. 4. In each subwindow, accumulate the gradient magnitude over 8 direction bins, forming an 8-dimensional subvector; since there are 4 × 4 = 16 such subwindows, a 16 × 8 = 128-dimensional feature vector is produced at the feature point P.
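A minimal sketch of this descriptor follows. It keeps only what the text specifies (16×16 window, gradients, 4×4 subwindows, 8 orientation bins); refinements found in full SIFT, such as Gaussian weighting, rotation normalization, and trilinear binning, are omitted, and the function name is illustrative.

```python
import numpy as np

def descriptor_128(img, x, y):
    """128-dim descriptor at (x, y): 16x16 window, per-pixel gradient magnitude
    and direction, 4x4 subwindows x 8 orientation bins (step S213 sketch)."""
    w = np.asarray(img, dtype=np.float64)[x - 8:x + 8, y - 8:y + 8]
    gx = np.gradient(w, axis=0)
    gy = np.gradient(w, axis=1)
    mag = np.hypot(gx, gy)                            # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)       # direction in [0, 2*pi)
    bins = np.minimum((ang / (2 * np.pi) * 8).astype(int), 7)
    vec = np.zeros(128)
    for i in range(16):
        for j in range(16):
            cell = (i // 4) * 4 + (j // 4)            # which 4x4 subwindow
            vec[cell * 8 + bins[i, j]] += mag[i, j]   # accumulate magnitude per bin
    return vec
```

In practice the vector is usually also L2-normalized so the feature distances of step S22 are comparable across windows.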
In step S22, for each point of feature point set D1 the match point in feature point set D2 is computed, giving the candidate match-point set D of D1 and D2. Specifically: for a feature point P1i in D1, find in D2 the point P2n1 at the smallest feature distance and the point P2n2 at the second-smallest feature distance; the distances between their feature vectors are given by formula (5):

$$d_1=d(P_{1i},P_{2n1}),\qquad d_2=d(P_{1i},P_{2n2})\qquad(5)$$

If d1/d2 < δ, then (P1i, P2n1) is taken as a pair of candidate match points; values of the threshold δ between 0.5 and 0.7 are generally preferable.
Applying this test to every point of D1 yields the candidate match-point set D.
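The nearest/second-nearest ratio test of formula (5) can be sketched directly (brute-force distances; the function name and default δ = 0.6, from the stated 0.5–0.7 range, are illustrative):

```python
import numpy as np

def candidate_matches(d1, d2, delta=0.6):
    """For each descriptor in d1, find its nearest and second-nearest
    descriptors in d2 (formula (5)); keep the pair (i, n1) only when the
    distance ratio d1/d2 is below delta. d2 must contain >= 2 descriptors."""
    pairs = []
    for i, v in enumerate(d1):
        dist = np.linalg.norm(np.asarray(d2, dtype=float) - np.asarray(v, dtype=float), axis=1)
        order = np.argsort(dist)
        n1, n2 = int(order[0]), int(order[1])
        if dist[n2] > 0 and dist[n1] / dist[n2] < delta:   # ratio test
            pairs.append((i, n1))
    return pairs
```

A match with an ambiguous second-nearest neighbour (ratio near 1) is discarded, which is exactly what makes this decision function robust against repetitive texture.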
In step S23, the candidate match-point set D is purified with the RANSAC algorithm, giving the purified match-point set Dc.
The steps of the RANSAC purification algorithm are as follows:
Step 1: randomly draw 4 match-point pairs from D, no 3 of which may be collinear; otherwise, draw a new sample.
Step 2: compute the projective transformation matrix M from the 4 extracted match-point pairs.
Step 3: with the projective transformation matrix M, compute for each match-point pair in D its distance under the projective transformation; if the distance is below a given threshold, the pair is called an inlier under M. Let Di be the set of inliers of D, and Ni the number of inliers in Di.
Step 4: carry out the random sampling test of Steps 1-3 m times, and choose the sampling test with the most inliers, as in formula (6):

$$c=\arg\max_{i}\{N_i,\ i=1,2,\ldots,m\}\qquad(6)$$

Dc, the inlier set of sampling test c, is then the match-point pair set purified by the RANSAC algorithm.
The method of computing the projective transformation matrix M from 4 match-point pairs in Step 2 above is a mature technique in multi-view geometry and is not repeated here.
After the step of obtaining the projective transformation matrix from the purified match-point set, the panoramic video generation method further comprises obtaining the optimal projective transformation matrix from a preset error function, i.e. step S24.
In step S24, using the purified match-point pair set Dc, the projective transformation matrix M is computed by optimizing the symmetric mutual projection-position error of the match points. The concrete method is:
Let M be the projective transformation matrix between fields of view A and B, and let the match-point pair set Dc contain n match-point pairs in total. Take any match-point pair (P_A(k), P_B(k)) ∈ Dc, with P_A(k) ∈ A, P_B(k) ∈ B, k = 1...n. Suppose that under the action of the matrix M, the projection of P_A(k) into field of view B is Q_B(k), and the projection of P_B(k) into field of view A is Q_A(k), as in formula (7):

$$P_A(k)\xrightarrow{\ M\ }Q_B(k),\qquad P_B(k)\xrightarrow{\ M^{-1}\ }Q_A(k)\qquad(7)$$

Under the action of the projective transformation matrix M, the symmetric mutual projection-position error function of the match-point pair set Dc is defined by formula (8):

$$E(M,Dc)=\sum_{k=1}^{n}\left(\|P_A(k)-Q_A(k)\|^{2}+\|P_B(k)-Q_B(k)\|^{2}\right)\qquad(8)$$

The optimal projective transformation matrix M* can then be obtained by optimizing E(M, Dc), as in formula (9):

$$M^{*}=\arg\min_{M}E(M,Dc)\qquad(9)$$

In a specific implementation, many optimization methods based on an objective function are available; iterative least squares and genetic algorithms, among others, are all feasible and are not enumerated here one by one.
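The error function of formula (8) is easy to state in code; any of the optimizers mentioned above can then minimize it over M. A sketch (the outer optimization itself is not shown):

```python
import numpy as np

def symmetric_error(m, pa, pb):
    """Formula (8): sum over all match pairs of the squared distance between
    each point and the projection of its partner through M (A->B) and
    M^{-1} (B->A). pa, pb are (n, 2) arrays of matched points."""
    m = np.asarray(m, dtype=float)
    pa = np.asarray(pa, dtype=float)
    pb = np.asarray(pb, dtype=float)
    m_inv = np.linalg.inv(m)

    def proj(h, pts):
        p = np.hstack([pts, np.ones((len(pts), 1))]) @ h.T
        return p[:, :2] / p[:, 2:3]

    qb = proj(m, pa)       # P_A projected into view B
    qa = proj(m_inv, pb)   # P_B projected into view A
    return float(np.sum((pa - qa) ** 2) + np.sum((pb - qb) ** 2))
```

Symmetry is the point: minimizing only the A→B transfer error biases M toward one view, while formula (8) penalizes misprojection in both directions equally.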
It should be appreciated that when the captured video comprises three or more streams, the principle of realization is the same as for two streams, and the various algorithms can be invoked flexibly; since the process is relatively complex, it is not described in detail here.
In step S104, the background panoramic video is generated from the projective transformation matrix and the background videos, and the foreground panoramic video is generated from the projective transformation matrix and the moving-foreground videos.
Generating the background panoramic video from the projective transformation matrix and the background videos specifically comprises: projecting the background videos into a unified field of view according to the projective transformation matrix; obtaining the field-of-view overlap region from the background videos in the unified field of view; and seamlessly fusing the background videos in the unified field of view according to the overlap region.
In the embodiments of the invention, the field-of-view overlap region between every two streams is a convex quadrilateral, and seamlessly fusing the background videos in the unified field of view according to the overlap region specifically comprises: dividing the overlap region into four triangles according to any point in the overlap region; determining the fusion weights from the triangle areas; and fusing the background videos of the overlap region in the unified field of view according to the position of the point and the fusion weights.
In a specific implementation, again taking two streams as an example, let M be the projective transformation matrix between the two videos captured by two cameras, C the unified field of view after projection, and f_GA(x, y, t), f_GB(x, y, t) the background video functions of the two cameras. The background panorama of the two cameras is generated as follows:
Step 1: projectively transform f_GA(x, y, t) and f_GB(x, y, t) with M into the unified field of view C; let the transformed image functions be f_MA(x, y, t) and f_MB(x, y, t), and the transformed fields of view of the two cameras A and B. Compute the field-of-view overlap region abcd = A ∩ B of A and B, as shown in Fig. 5.
Step 2: compute the pixel fusion coefficients w1, w2 in the overlap region abcd, as follows.
As shown in Fig. 5, let P = (x, y) ∈ A ∩ B be a point in the overlap region abcd of A and B. P forms four triangles abP, acP, cdP, bdP with the four borders of the overlap region, with areas S1, S2, S3, S4 respectively. Let S_m12 = min(S1, S2) be the minimum of S1 and S2, and S_m34 = min(S3, S4) the minimum of S3 and S4; the fusion coefficients are then given by formula (10):

$$w_1=\frac{S_{m34}}{S_{m12}+S_{m34}},\qquad w_2=\frac{S_{m12}}{S_{m12}+S_{m34}}=1-w_1\qquad(10)$$

Step 3: let P = (x, y, t) ∈ A ∪ B be a point in the panoramic field of view; the fusion of the panoramic background f_C(x, y, t) is then given by formula (11):

$$f_C(P)=\begin{cases}f_{MA}(P),&P\in A-B\\[2pt]w_1\cdot f_{MA}(P)+w_2\cdot f_{MB}(P),&P\in A\cap B\\[2pt]f_{MB}(P),&P\in B-A\end{cases}\qquad(11)$$

In formula (11), A − B denotes the set difference of A and B, and B − A the set difference of B and A. Once the projective transformation matrix is determined, the panoramic overlap region is determined, and so are the fusion coefficients w1, w2. The fusion coefficients w1, w2 therefore need to be computed only once, after the projective transformation matrix has been computed, stored in the form of a look-up table, and retrieved by table look-up during subsequent fusion.
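The triangle-area weighting of formula (10) can be sketched as follows (corner labels follow Fig. 5; the helper names are illustrative):

```python
def tri_area(p, q, r):
    """Triangle area from the cross product of two edge vectors."""
    return 0.5 * abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1]))

def fusion_weights(p, a, b, c, d):
    """Formula (10): fusion coefficients for a point p inside the convex
    overlap quadrilateral abcd, from the areas of the four triangles that
    p forms with the borders ab, ac, cd, bd."""
    s1 = tri_area(a, b, p)
    s2 = tri_area(a, c, p)
    s3 = tri_area(c, d, p)
    s4 = tri_area(b, d, p)
    sm12, sm34 = min(s1, s2), min(s3, s4)
    w1 = sm34 / (sm12 + sm34)
    return w1, 1.0 - w1
```

Since the weights depend only on the geometry of the overlap region, they can indeed be computed once per pixel after M is fixed and cached as the look-up table mentioned above; the overlap pixel of formula (11) is then w1·f_MA(P) + w2·f_MB(P).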
Generating the foreground panoramic video from the projective transformation matrix and the moving-foreground videos specifically comprises: projecting the moving-foreground videos into the unified field of view according to the projective transformation matrix, and fusing the moving-foreground videos in the unified field of view.
As with the projective transformation of the background data, the moving-foreground data is first projectively transformed according to the projective transformation matrix M, projecting the moving-foreground data of all streams into the unified field of view.
Because the two cameras are at different viewpoints, the depth of the same target generally differs between cameras, and the same target after transformation generally shows a certain parallax in the overlapping field-of-view region of the different fields of view (obtained in step S104). As shown in Fig. 6, Object_A and Object_B are the positions, in the field-of-view overlap region of the panoramic field of view, of the transformed moving foregrounds corresponding to the same target in the two fields of view, and Δd is their displacement difference.
In the embodiments of the invention, before the foreground panorama is fused it must be detected whether a moving foreground in field of view A and one in field of view B within the overlap region are the same target; when they are, parallax correction of A and B in the overlap region is carried out first, and otherwise no parallax correction is needed.
In the embodiments of the invention, the same-target determination works as follows: judge whether the centroid of each simply connected region of the separated moving-foreground video of the two fields of view lies in the overlap region, and match and associate the moving-foreground regions that lie in the overlap region. Specifically:
Let the moving targets in the overlapping field-of-view region of field of view A be O_A(i), i = 1, 2, ..., m, and the moving targets in the overlapping region of field of view B be O_B(j), j = 1, 2, ..., n. In the embodiments of the invention a moving target is generally a simply connected region, not a single point.
Compute the area S_A(i) (total pixel count) of O_A(i), i = 1, 2, ..., m, and the area S_B(j) of O_B(j), j = 1, 2, ..., n.
Compute the length-width ratio L_A(i) of the bounding rectangle of O_A(i), i = 1, 2, ..., m, and the length-width ratio L_B(j) of the bounding rectangle of O_B(j), j = 1, 2, ..., n.
Compute the RGB colour histogram vector H_A(i) of O_A(i), i = 1, 2, ..., m, and H_B(j) of O_B(j), j = 1, 2, ..., n. H_A(i) and H_B(j) are 3 × 256 = 768-dimensional vectors.
Set the weights w_S, w_L, w_H and compute the matching distance of O_A(i) and O_B(j), as in formula (12):

$$d(O_A(i),O_B(j))=w_S\cdot\|S_A(i)-S_B(j)\|+w_L\cdot\|L_A(i)-L_B(j)\|+w_H\cdot\|H_A(i)-H_B(j)\|\qquad(12)$$

Compute the association distance between the moving targets O_A(i), i = 1, 2, ..., m, of the overlap region of A and the moving targets O_B(j), j = 1, 2, ..., n, of the overlap region of B, as in formula (13):

$$T_{ik}=d(O_A(i),O_B(k))=\min\{d(O_A(i),O_B(j)),\ j=1,2,\ldots,n\}\qquad(13)$$

where k is given by formula (14):

$$k=\arg\min_{j}\{d(O_A(i),O_B(j)),\ j=1,2,\ldots,n\}\qquad(14)$$

That is, O_B(k) is the moving target in B at minimum matching distance from O_A(i).
The matched, associated target pairs are computed by the rule: if T_ik < δ0, i = 1, 2, ..., m, then O_A(i) and O_B(k) match and are the same target in the two fields of view A and B; otherwise there is no target in field of view B matching O_A(i).
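Formulas (12)-(14) and the acceptance rule can be sketched as follows. Each target is represented by an illustrative (area, aspect-ratio, histogram) tuple; the weights default to 1 only for the sketch, since the patent leaves w_S, w_L, w_H as free parameters.

```python
import numpy as np

def match_distance(sa, la, ha, sb, lb, hb, ws=1.0, wl=1.0, wh=1.0):
    """Formula (12): weighted distance over area, bounding-rectangle aspect
    ratio, and RGB colour histogram of two foreground regions."""
    ha, hb = np.asarray(ha, dtype=float), np.asarray(hb, dtype=float)
    return ws * abs(sa - sb) + wl * abs(la - lb) + wh * np.linalg.norm(ha - hb)

def associate(feats_a, feats_b, delta0):
    """Formulas (13)-(14) plus the acceptance rule: pair each target of A with
    its minimum-distance target of B, accepting only when T_ik < delta0.
    feats_* are lists of (area, aspect_ratio, histogram) tuples."""
    pairs = []
    for i, fa in enumerate(feats_a):
        dists = [match_distance(*fa, *fb) for fb in feats_b]
        k = int(np.argmin(dists))        # formula (14)
        if dists[k] < delta0:            # T_ik < delta0
            pairs.append((i, k))
    return pairs
```

Targets of A whose best candidate in B is still farther than δ0 are left unmatched, exactly as the rule above prescribes.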
Once the matched same targets have been determined, parallax correction in the overlapping field-of-view region proceeds as follows.
Fields of view A and B are projected into a common coordinate system C; the coordinate system of one of the fields of view can be chosen to coincide with C. Suppose field of view B serves as the reference field of view of the projection: the projections of the targets of B into C are taken as the reference positions for fusing the fields of view, and determining the positions of the different targets in the panoramic field of view reduces to correcting the positions of the targets of field of view A relative to their matching targets in B. When a target enters the overlap region of A and B from the non-overlapping part of field of view B, Δd is subtracted from the position of the target of field of view A projected into the common coordinate system C, i.e. y_CA = y_A − Δd; conversely, when a target enters the overlap region from the non-overlapping part of field of view A, Δd is added to the position of the target of field of view A projected into C, i.e. y_CA = y_A + Δd. Here y_A is the horizontal coordinate of the centroid of the target in A relative to the geometric centre of field of view A, and Δd is the horizontal parallax between the centroids of the matched same target in fields of view A and B. Likewise, y_B is the horizontal coordinate of the centroid of the target in B relative to the geometric centre of field of view B, and is not detailed further; see Fig. 6. The correction of vertical parallax is analogous to that of horizontal parallax.
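The sign rule above reduces to a one-line correction (illustrative function and parameter names):

```python
def correct_parallax(y_a, delta_d, entered_from_b):
    """Horizontal parallax correction for a target of view A projected into the
    common coordinate system C: subtract the centroid parallax delta_d when the
    target entered the overlap region from the non-overlapping part of view B,
    add it when it entered from the non-overlapping part of view A."""
    return y_a - delta_d if entered_from_b else y_a + delta_d
```

The same function applies unchanged to the vertical coordinate with the vertical parallax.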
From the foregoing, the condition for the moving-foreground videos corresponding to the overlap region to be treated as the same target is that the moving-target centroids of both foreground videos lie in the overlap region. When the motion centroid of the moving foreground of field of view A appears in the overlap region of the unified field of view but the moving-foreground target centroid of field of view B does not, the same target cannot undergo the matching-association computation, and direct fusion would then produce a same-target occlusion phenomenon. In this case the moving-foreground video lying in the overlap region must be matched against the moving-foreground video of the other field of view adjoining the overlap region; if the match succeeds, the two are treated as the same target, otherwise as different targets.
Even when the same target is successfully matched and its position in the merged field of view has been determined, the contour sizes of the matched targets are often inconsistent, so contour ghosting may appear on the merged target. Similarly, when the same target is complete in one of the two fields but incomplete in the other, the shape difference degrades the fusion result when the multiple moving-foreground videos are merged. In these cases the foreground panoramic video is obtained according to a preset foreground fusion template, which can be constructed as follows:
In the non-overlapping regions of fields A and B, the foreground fusion template region is simply the foreground region itself, and the template position within each field remains unchanged;
In the overlapping region of fields A and B, the parallax-corrected moving-foreground regions of the same matched target in fields A and B are translated so that their centroids coincide, and the union of the two regions is taken as the foreground fusion template M_AB.
After the foreground fusion template is obtained, the foreground panoramic video can be fused in the following way:
In the non-overlapping regions of fields A and B, the panoramic regions obtained from fields A and B respectively are used directly as the foreground panoramic video;
In the overlapping region of fields A and B, the centroid of the obtained foreground fusion template M_AB is placed in turn at the centroid positions of the uncorrected moving targets of fields A and B, and the intersections of the template with the two fields are computed as M_A = M_AB ∩ A and M_B = M_AB ∩ B. If the area of M_A is greater than the area of M_B, the template M_AB is placed at the centroid position of the corresponding target in the original video of field A (the video not yet separated into background and foreground), and the original-video region covered by the template is taken out as the panoramic foreground target; otherwise, if the area of M_A is less than that of M_B, the same operation is carried out in the original video of field B. The foreground of the video overlapping region obtained by this operation is then placed, via parallax correction, at the appropriate position in the panoramic field of view. Through this foreground fusion, the target region that is larger and has the more complete contour is embedded into the foreground panorama, which not only solves the ghosting problem but also fuses well when the bodies of the same target differ somewhat between the views.
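The template construction and the selection of the more complete view can be sketched as follows. This is a simplified NumPy illustration: the intersections M_A = M_AB ∩ A and M_B = M_AB ∩ B are approximated here by the per-view target masks themselves, and all names are illustrative rather than from the patent:

```python
import numpy as np

def fuse_foreground_overlap(mask_a, mask_b, frame_a, frame_b):
    """Fuse one matched target in the overlap region via the union
    template M_AB = mask_a ∪ mask_b, taking pixels from the original
    frame of whichever view shows the larger, more complete target.

    mask_a, mask_b   -- boolean foreground masks of the same target,
                        already parallax-corrected and centroid-aligned
    frame_a, frame_b -- the corresponding original (unseparated) frames
    """
    m_ab = mask_a | mask_b                       # union template M_AB
    # approximate M_A and M_B by the per-view masks and compare areas
    src = frame_a if mask_a.sum() >= mask_b.sum() else frame_b
    out = np.zeros_like(src)
    out[m_ab] = src[m_ab]                        # region covered by M_AB
    return out
```

The fused patch would then be placed, after parallax correction, at its position in the panoramic field of view.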
In the prior art, because the depth of field and the viewing angle differ between the moving targets and the background, a single unified projective transformation cannot register the targets and the background of an image simultaneously, so the fused panoramic image readily exhibits ghosting and double-image problems; the embodiments of the present invention effectively overcome these problems.
In step S105, the background panoramic video and the foreground panoramic video are fused to generate the panoramic video.
The background panoramic video and the foreground panoramic video are fused to obtain the complete panoramic video. Specifically, let f_B(x, y, t) be the computed background panorama and f_F(x, y, t) the computed foreground panorama; the complete panoramic video f_T(x, y, t) is then given by formula (15).
f_T(x, y, t) = { f_F(x, y, t), if |f_F(x, y, t)| ≠ 0; f_B(x, y, t), if |f_F(x, y, t)| = 0 },  (x, y) ∈ A ∪ B    (15)
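Formula (15) is a per-pixel override: wherever the foreground panorama is non-zero it replaces the background panorama. A minimal NumPy rendering (function name illustrative):

```python
import numpy as np

def compose_panorama(f_f, f_b):
    """Formula (15): the complete panorama f_T takes the foreground
    panorama f_F wherever |f_F| != 0 and the background panorama f_B
    elsewhere, for (x, y) in A ∪ B."""
    f_f = np.asarray(f_f)
    f_b = np.asarray(f_b)
    return np.where(np.abs(f_f) != 0, f_f, f_b)
```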
The structural diagram of the panoramic video generation system provided by the embodiment of the invention is shown in Fig. 7; for convenience of explanation, only the parts relevant to the embodiment of the invention are shown. The system may be built into a mobile terminal or other terminal equipment as a software unit, a hardware unit, or a unit combining software and hardware.
In the embodiment of the present invention, the system comprises a multi-channel video acquisition unit 71, a separation unit 72, a projective transformation matrix calculation unit 73, a background panoramic video generation unit 74, a foreground panoramic video generation unit 75, and a panoramic video generation unit 76.
The multi-channel video acquisition unit 71 acquires multiple channels of video from different viewpoints through multiple cameras; the separation unit 72 separates the background and the moving foreground in each video channel acquired by unit 71, obtaining multi-channel background video and multi-channel moving-foreground video; the projective transformation matrix calculation unit 73 obtains the projective transformation matrix from the multi-channel background video obtained by the separation unit 72; the background panoramic video generation unit 74 generates the background panoramic video from the projective transformation matrix calculated by unit 73 and the multi-channel background video obtained by unit 72; the foreground panoramic video generation unit 75 generates the foreground panoramic video from the projective transformation matrix calculated by unit 73 and the multi-channel moving-foreground video obtained by unit 72; the panoramic video generation unit 76 fuses the background panoramic video generated by unit 74 with the foreground panoramic video generated by unit 75 to generate the panoramic video.
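The separation unit 72 is not tied to one particular algorithm in the text above; a common choice consistent with the description is a running-average background model, sketched here under that assumption (the smoothing factor, threshold, and names are illustrative):

```python
import numpy as np

def separate(frames, alpha=0.05, thresh=25):
    """Split a grayscale video into a background stream and a
    moving-foreground stream with a running-average background model
    (an assumed, commonly used separation method)."""
    bg = frames[0].astype(np.float64)
    backgrounds, foregrounds = [], []
    for f in frames:
        f = f.astype(np.float64)
        moving = np.abs(f - bg) > thresh      # pixels that changed
        foregrounds.append(np.where(moving, f, 0.0))
        bg = (1 - alpha) * bg + alpha * f     # slowly absorb the scene
        backgrounds.append(bg.copy())
    return backgrounds, foregrounds
```

Each camera channel would be separated this way, yielding the multi-channel background and moving-foreground videos consumed by units 73 to 75.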
The projective transformation matrix calculation unit 73 comprises:
a feature point acquisition module, for acquiring the feature points of each background video channel obtained by the separation unit 72;
a candidate matching point set acquisition module, for obtaining the set of candidate matching points according to the feature points of each background video channel acquired by the feature point acquisition module and the preset nearest-neighbor and second-nearest-neighbor distance decision functions;
a purification module, for purifying the set of candidate matching points obtained by the candidate matching point set acquisition module;
a projective transformation matrix acquisition module, for obtaining the projective transformation matrix from the set of matching points purified by the purification module.
The embodiments are as described above and are not repeated here.
The present invention extracts feature points from the multi-channel background videos, matches the feature points automatically, and then calculates the projective transformation matrices between the multiple fields of view from the matched feature points. Compared with methods that extract feature points and calculate the projective transformation matrix directly on the original video, this method achieves higher computational accuracy and better stability.
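The nearest-neighbor / second-nearest-neighbor distance decision can be sketched as a distance-ratio test. The 0.8 threshold below is an assumption; the patent only states that the decision function is preset:

```python
import numpy as np

def ratio_match(desc_a, desc_b, ratio=0.8):
    """Return candidate matches (i, j) between two descriptor sets,
    keeping a pair only when descriptor i's nearest neighbor in desc_b
    is markedly closer than its second-nearest neighbor."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        nearest, second = dists[order[0]], dists[order[1]]
        if nearest < ratio * second:          # decisive nearest neighbor
            matches.append((i, int(order[0])))
    return matches
```

The resulting candidate set would then be purified (for example by a robust estimator) before the projective transformation matrix is computed.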
To obtain an optimized projective transformation matrix, the projective transformation matrix calculation unit 73 further comprises:
an optimal projective transformation matrix acquisition module, for obtaining the optimal projective transformation matrix according to at least one projective transformation matrix obtained by the projective transformation matrix acquisition module and a preset error function.
The background panoramic video generation unit 74 comprises:
a background video projection module, which projects the multi-channel background videos obtained by the separation unit 72 into a unified field of view according to the projective transformation matrix calculated by the projective transformation matrix calculation unit 73;
a field-of-view overlapping region acquisition module, for obtaining the field-of-view overlapping region from the multi-channel background videos in the unified field of view produced by the background video projection module;
a background video fusion module, for performing seamless fusion processing on the multi-channel background videos in the unified field of view produced by the background video projection module, according to the field-of-view overlapping region obtained by the field-of-view overlapping region acquisition module.
In the fusion of the background panoramic video, the triangle-area-ratio method is adopted to determine the fusion weights, which effectively eliminates stitching traces in the transition region of the common field of view and achieves fast seamless fusion.
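One way the triangle-area-ratio weighting can be realized is sketched below. Which quadrilateral edge borders each camera's exclusive coverage, and the exact ratio taken, are assumptions here: the text only states that the overlap region is divided into four triangles by a point and that the weights are determined from the triangle areas:

```python
def tri_area(p, q, r):
    """Unsigned area of the triangle p, q, r (2-D points)."""
    return abs((q[0] - p[0]) * (r[1] - p[1])
               - (r[0] - p[0]) * (q[1] - p[1])) / 2.0

def fusion_weight_a(p, quad):
    """Blending weight of view A at point p inside the convex
    quadrilateral overlap region quad = [v0, v1, v2, v3].  The edge
    v0-v1 is assumed to be where view A's coverage ends, so A's weight
    falls to 0 there and rises to 1 at the opposite edge v2-v3."""
    v0, v1, v2, v3 = quad
    s_near = tri_area(p, v0, v1)   # triangle toward A's boundary edge
    s_far = tri_area(p, v2, v3)    # triangle toward the opposite edge
    total = s_near + s_far
    return s_near / total if total else 0.5
```

Each overlap pixel would then be blended as w·A + (1 − w)·B, giving a smooth transition across the common field of view.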
The embodiments are as described above and are not repeated here.
Meanwhile, the foreground panoramic video generation unit 75 comprises:
a moving-foreground video projection module, which projects the multi-channel moving-foreground videos obtained by the separation unit 72 into the unified field of view according to the projective transformation matrix calculated by the projective transformation matrix calculation unit 73;
a moving-foreground video fusion module, for fusing the multi-channel moving-foreground videos in the unified field of view produced by the moving-foreground video projection module.
The moving-foreground video fusion module further comprises:
a same-target judgment module, for, when the centroids of the simply connected regions of the multi-channel moving-foreground videos in the unified field of view produced by the moving-foreground video projection module all lie in the field-of-view overlapping region obtained by the field-of-view overlapping region acquisition module, performing an association operation on the field-of-view overlapping regions of the multi-channel moving-foreground videos in the unified field of view and judging from the result of the association operation whether those regions correspond to the same target;
a parallax correction module, for, when the same-target judgment module judges that the field-of-view overlapping regions of the multi-channel moving-foreground videos in the unified field of view correspond to the same target, performing parallax correction on those regions and fusing the parallax-corrected multi-channel moving-foreground videos in the unified field of view according to the preset foreground fusion template, generating the foreground panoramic video.
The embodiments are as described above and are not repeated here. The embodiment of the invention acquires multiple channels of video from different viewpoints through multiple cameras; decomposes each channel into a background video and a moving-foreground video; automatically calculates the projective transformation matrices from the multi-channel background videos and obtains the field-of-view overlapping region; applies the projective transformation to the background videos, embeds the transformed background videos into the panoramic field of view, and performs seamless fusion processing on the overlapping region; applies the projective transformation to the moving-foreground videos, and performs parallax correction and foreground panoramic fusion on the moving-foreground video data corresponding to the detected overlapping region; finally, the background panoramic video and the foreground panoramic video are fused to obtain the panoramic video. In this way the videos of multiple cameras with partially overlapping fields of view are automatically generated into a panoramic dynamic video; the ghosting and double-image problems of moving targets in the overlapping region of the panoramic video are well resolved; and, because the projective transformation matrices are obtained with the interference of the moving foreground excluded, the accuracy and stability of the automatic calculation of the inter-camera projective transformation matrices are greatly improved.

The above is only a preferred embodiment of the present invention and is not intended to limit the present invention; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. A panoramic video generation method, characterized in that the method comprises the following steps: collecting multiple channels of video from different viewpoints through multiple cameras; separating the background and the moving foreground in each video channel to obtain multi-channel background video and multi-channel moving-foreground video; obtaining a projective transformation matrix according to the multi-channel background video; generating a background panoramic video according to the projective transformation matrix and the multi-channel background video, and generating a foreground panoramic video according to the projective transformation matrix and the multi-channel moving-foreground video; fusing the background panoramic video and the foreground panoramic video to generate the panoramic video.

2. The method according to claim 1, characterized in that the step of obtaining a projective transformation matrix according to the multi-channel background video specifically comprises: acquiring the feature points of each background video channel; obtaining a set of candidate matching points according to the feature points of each background video channel and preset nearest-neighbor and second-nearest-neighbor distance decision functions; purifying the set of candidate matching points; obtaining the projective transformation matrix according to the purified set of matching points.

3. The method according to claim 2, characterized in that after the step of obtaining the projective transformation matrix according to the purified set of matching points, the method further comprises: obtaining an optimal projective transformation matrix according to a preset error function; the multiple video channels are two video channels corresponding respectively to fields of view A and B; under the action of the projective transformation matrix M, the set of candidate matching points of the multi-channel background videos includes n matching point pairs (P_A(k), P_B(k)), P_A(k) ∈ A, P_B(k) ∈ B, k = 1~n, n being an integer, where the projection of P_A(k) into field of view B is Q_B(k) and the projection of P_B(k) into field of view A is Q_A(k); the error function is Σ_{k=1}^{n} (‖P_A(k) − Q_A(k)‖² + ‖P_B(k) − Q_B(k)‖²), and the projective transformation matrix for which the value of the error function is minimal is the optimal projective transformation matrix.

4. The method according to claim 1, characterized in that the step of generating a background panoramic video according to the projective transformation matrix and the multi-channel background video specifically comprises: projecting the multi-channel background videos into a unified field of view according to the projective transformation matrix; obtaining the field-of-view overlapping region according to the multi-channel background videos in the unified field of view; performing seamless fusion processing on the multi-channel background videos in the unified field of view according to the field-of-view overlapping region.

5. The method according to claim 4, characterized in that the field-of-view overlapping region is a convex quadrilateral region, and the step of performing seamless fusion processing on the multi-channel background videos in the unified field of view according to the field-of-view overlapping region specifically comprises: dividing the field-of-view overlapping region into four triangles according to an arbitrary point in the field-of-view overlapping region; determining the fusion weights according to the areas of the triangles; fusing the multi-channel background videos in the unified field of view according to the position of the arbitrary point and the fusion weights.

6. The method according to claim 1, characterized in that the step of generating a foreground panoramic video according to the projective transformation matrix and the multi-channel moving-foreground video specifically comprises: projecting the multi-channel moving-foreground videos into the unified field of view according to the projective transformation matrix; fusing the multi-channel moving-foreground videos in the unified field of view; the step of fusing the multi-channel moving-foreground videos in the unified field of view comprises: when the centroids of the simply connected regions of the multi-channel moving-foreground videos in the unified field of view all lie in the field-of-view overlapping region, performing an association operation on the field-of-view overlapping regions of the multi-channel moving-foreground videos in the unified field of view, and judging according to the result of the association operation whether the field-of-view overlapping regions of the multi-channel moving-foreground videos in the unified field of view correspond to the same target; when the field-of-view overlapping regions of the multi-channel moving-foreground videos in the unified field of view correspond to the same target, performing parallax correction on those regions, and fusing the parallax-corrected multi-channel moving-foreground videos in the unified field of view according to a preset foreground fusion template to generate the foreground panoramic video.

7. A panoramic video generation system, characterized in that the system comprises: a multi-channel video acquisition unit, for collecting multiple channels of video from different viewpoints through multiple cameras; a separation unit, for separating the background and the moving foreground in each video channel acquired by the multi-channel video acquisition unit to obtain multi-channel background video and multi-channel moving-foreground video; a projective transformation matrix calculation unit, for obtaining a projective transformation matrix according to the multi-channel background video obtained by the separation unit; a background panoramic video generation unit, for generating a background panoramic video according to the projective transformation matrix calculated by the projective transformation matrix calculation unit and the multi-channel background video obtained by the separation unit; a foreground panoramic video generation unit, for generating a foreground panoramic video according to the projective transformation matrix calculated by the projective transformation matrix calculation unit and the multi-channel moving-foreground video obtained by the separation unit; a panoramic video generation unit, for fusing the background panoramic video generated by the background panoramic video generation unit and the foreground panoramic video generated by the foreground panoramic video generation unit to generate the panoramic video.

8. The system according to claim 7, characterized in that the projective transformation matrix calculation unit comprises: a feature point acquisition module, for acquiring the feature points of each background video channel obtained by the separation unit; a candidate matching point set acquisition module, for obtaining a set of candidate matching points according to the feature points of each background video channel acquired by the feature point acquisition module and preset nearest-neighbor and second-nearest-neighbor distance decision functions; a purification module, for purifying the set of candidate matching points obtained by the candidate matching point set acquisition module; a projective transformation matrix acquisition module, for obtaining the projective transformation matrix according to the set of matching points purified by the purification module; the projective transformation matrix calculation unit further comprises: an optimal projective transformation matrix acquisition module, for obtaining an optimal projective transformation matrix according to at least one projective transformation matrix obtained by the projective transformation matrix acquisition module and a preset error function.

9. The system according to claim 7, characterized in that the background panoramic video generation unit comprises: a background video projection module, which projects the multi-channel background videos acquired by the separation unit into a unified field of view according to the projective transformation matrix calculated by the projective transformation matrix calculation unit; a field-of-view overlapping region acquisition module, for obtaining the field-of-view overlapping region according to the multi-channel background videos in the unified field of view obtained by projection by the background video projection module; a background video fusion module, for performing seamless fusion processing on the multi-channel background videos in the unified field of view obtained by projection by the background video projection module, according to the field-of-view overlapping region acquired by the field-of-view overlapping region acquisition module.

10. The system according to claim 9, characterized in that the foreground panoramic video generation unit comprises: a moving-foreground video projection module, which projects the multi-channel moving-foreground videos acquired by the separation unit into the unified field of view according to the projective transformation matrix calculated by the projective transformation matrix calculation unit; a moving-foreground video fusion module, for fusing the multi-channel moving-foreground videos in the unified field of view obtained by projection by the moving-foreground video projection module; wherein the moving-foreground video fusion module further comprises: a same-target judgment module, for, when the centroids of the simply connected regions of the multi-channel moving-foreground videos in the unified field of view obtained by projection by the moving-foreground video projection module all lie in the field-of-view overlapping region acquired by the field-of-view overlapping region acquisition module, performing an association operation on the field-of-view overlapping regions of the multi-channel moving-foreground videos in the unified field of view, and judging according to the result of the association operation whether those regions correspond to the same target; a parallax correction module, for, when the same-target judgment module judges that the field-of-view overlapping regions of the multi-channel moving-foreground videos in the unified field of view correspond to the same target, performing parallax correction on those regions, and fusing the parallax-corrected multi-channel moving-foreground videos in the unified field of view according to a preset foreground fusion template to generate the foreground panoramic video.
CN200910109043A 2009-07-23 2009-07-23 Method and system for generating panoramic video Pending CN101626513A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200910109043A CN101626513A (en) 2009-07-23 2009-07-23 Method and system for generating panoramic video


Publications (1)

Publication Number Publication Date
CN101626513A true CN101626513A (en) 2010-01-13


Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101931772A (en) * 2010-08-19 2010-12-29 深圳大学 A panoramic video fusion method, system and video processing equipment
CN102012213A (en) * 2010-08-31 2011-04-13 吉林大学 Method for measuring foreground height through single image
CN102314686A (en) * 2011-08-03 2012-01-11 深圳大学 Reference view field determination method, system and device of splicing type panoramic video
CN102999891A (en) * 2011-09-09 2013-03-27 中国航天科工集团第三研究院第八三五八研究所 Binding parameter based panoramic image mosaic method
CN103294024A (en) * 2013-04-09 2013-09-11 宁波杜亚机电技术有限公司 Intelligent home system control method
CN103795978A (en) * 2014-01-15 2014-05-14 浙江宇视科技有限公司 Multi-image intelligent identification method and device
CN104408701A (en) * 2014-12-03 2015-03-11 中国矿业大学 Large-scale scene video image stitching method
CN104519340A (en) * 2014-12-30 2015-04-15 余俊池 Panoramic video stitching method based on multi-depth image transformation matrix
CN105376504A (en) * 2014-08-27 2016-03-02 北京顶亮科技有限公司 High-speed swing mirror-based infrared imaging system and infrared imaging method
CN105765966A (en) * 2013-12-19 2016-07-13 英特尔公司 Bowl-shaped imaging system
CN105812649A (en) * 2014-12-31 2016-07-27 联想(北京)有限公司 Photographing method and device
CN106504306A (en) * 2016-09-14 2017-03-15 厦门幻世网络科技有限公司 A kind of animation fragment joining method, method for sending information and device
WO2017080206A1 (en) * 2015-11-13 2017-05-18 深圳大学 Video panorama generation method and parallel computing system
CN106851045A (en) * 2015-12-07 2017-06-13 北京航天长峰科技工业集团有限公司 A kind of image mosaic overlapping region moving target processing method
CN108352057A (en) * 2015-11-12 2018-07-31 罗伯特·博世有限公司 Vehicle camera system with polyphaser alignment
CN109769103A (en) * 2017-11-09 2019-05-17 株式会社日立大厦系统 Video surveillance system and video surveillance device
CN112738531A (en) * 2016-11-17 2021-04-30 英特尔公司 Suggested viewport indication for panoramic video
CN113557465A (en) * 2019-03-05 2021-10-26 脸谱科技有限责任公司 Apparatus, system, and method for wearable head-mounted display
CN116996695A (en) * 2023-09-27 2023-11-03 深圳大学 Panoramic image compression method, device, equipment and medium
CN119027639A (en) * 2024-10-30 2024-11-26 中国科学院长春光学精密机械与物理研究所 A method and system for detecting target centroid of infrared sequence images

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101931772A (en) * 2010-08-19 2010-12-29 深圳大学 A panoramic video fusion method, system and video processing equipment
CN101931772B (en) * 2010-08-19 2012-02-29 深圳大学 A panoramic video fusion method, system and video processing equipment
CN102012213A (en) * 2010-08-31 2011-04-13 吉林大学 Method for measuring foreground height through single image
CN102314686A (en) * 2011-08-03 2012-01-11 深圳大学 Reference view field determination method, system and device of splicing type panoramic video
CN102314686B (en) * 2011-08-03 2013-07-17 深圳大学 Reference view field determination method, system and device of splicing type panoramic video
CN102999891A (en) * 2011-09-09 2013-03-27 中国航天科工集团第三研究院第八三五八研究所 Binding parameter based panoramic image mosaic method
CN103294024A (en) * 2013-04-09 2013-09-11 宁波杜亚机电技术有限公司 Intelligent home system control method
CN103294024B (en) * 2013-04-09 2015-07-08 宁波杜亚机电技术有限公司 Intelligent home system control method
CN105765966A (en) * 2013-12-19 2016-07-13 英特尔公司 Bowl-shaped imaging system
US10692173B2 (en) 2013-12-19 2020-06-23 Intel Corporation Bowl-shaped imaging system
US10210597B2 (en) 2013-12-19 2019-02-19 Intel Corporation Bowl-shaped imaging system
CN103795978A (en) * 2014-01-15 2014-05-14 浙江宇视科技有限公司 Multi-image intelligent identification method and device
CN105376504A (en) * 2014-08-27 2016-03-02 北京顶亮科技有限公司 High-speed swing mirror-based infrared imaging system and infrared imaging method
CN104408701A (en) * 2014-12-03 2015-03-11 中国矿业大学 Large-scale scene video image stitching method
WO2016086754A1 (en) * 2014-12-03 2016-06-09 中国矿业大学 Large-scale scene video image stitching method
CN104519340B (en) * 2014-12-30 2016-08-17 余俊池 Panoramic video joining method based on many depth images transformation matrix
CN104519340A (en) * 2014-12-30 2015-04-15 余俊池 Panoramic video stitching method based on multi-depth image transformation matrix
CN105812649B (en) * 2014-12-31 2019-03-29 联想(北京)有限公司 A kind of image capture method and device
CN105812649A (en) * 2014-12-31 2016-07-27 联想(北京)有限公司 Photographing method and device
CN108352057B (en) * 2015-11-12 2021-12-31 罗伯特·博世有限公司 Vehicle camera system with multi-camera alignment
CN108352057A (en) * 2015-11-12 2018-07-31 罗伯特·博世有限公司 Vehicle camera system with polyphaser alignment
WO2017080206A1 (en) * 2015-11-13 2017-05-18 深圳大学 Video panorama generation method and parallel computing system
CN106851045A (en) * 2015-12-07 2017-06-13 北京航天长峰科技工业集团有限公司 A kind of image mosaic overlapping region moving target processing method
CN106504306B (en) * 2016-09-14 2019-09-24 厦门黑镜科技有限公司 A kind of animation segment joining method, method for sending information and device
CN106504306A (en) * 2016-09-14 2017-03-15 厦门幻世网络科技有限公司 A kind of animation fragment joining method, method for sending information and device
CN112738531A (en) * 2016-11-17 2021-04-30 英特尔公司 Suggested viewport indication for panoramic video
CN112738531B (en) * 2016-11-17 2024-02-23 英特尔公司 Suggested viewport indication for panoramic video
CN109769103A (en) * 2017-11-09 2019-05-17 株式会社日立大厦系统 Video surveillance system and video surveillance device
CN113557465A (en) * 2019-03-05 2021-10-26 脸谱科技有限责任公司 Apparatus, system, and method for wearable head-mounted display
CN116996695A (en) * 2023-09-27 2023-11-03 深圳大学 Panoramic image compression method, device, equipment and medium
CN116996695B (en) * 2023-09-27 2024-04-05 深圳大学 A panoramic image compression method, device, equipment and medium
CN119027639A (en) * 2024-10-30 2024-11-26 中国科学院长春光学精密机械与物理研究所 A method and system for detecting the target centroid in infrared image sequences

Similar Documents

Publication Publication Date Title
CN101626513A (en) Method and system for generating panoramic video
Won et al. Omnimvs: End-to-end learning for omnidirectional stereo matching
CN107959805B (en) Light field video imaging system and video processing method based on a hybrid camera array
CN104061907B (en) View-invariant gait recognition method based on coupled synthesis of three-dimensional gait contours
CN104408689B (en) Street-view patch optimization method based on panoramic images
JP5105481B2 (en) Lane detection device, lane detection method, and lane detection program
CN104794683B (en) Video Stitching Method Based on Plane Scanning Around Gradient Patchwork Area
CN105608667A (en) Method and device for panoramic stitching
CN105245841A (en) A CUDA-based panoramic video surveillance system
CN108932725B (en) Scene flow estimation method based on convolutional neural network
CN103020941A (en) Rotating-camera background modeling method and moving object detection method based on panoramic stitching
CN104517095B (en) A human head segmentation method based on depth images
JP6174104B2 (en) Method, apparatus and system for generating indoor 2D plan view
CN105005964A (en) Video sequence image based method for rapidly generating panorama of geographic scene
CN103024350A (en) Master-slave tracking method for binocular PTZ (Pan-Tilt-Zoom) visual system and system applying same
CN107341815B (en) Violent motion detection method based on multi-view stereo vision scene flow
CN105488777A (en) System and method for generating panoramic picture in real time based on moving foreground
CN104392416A (en) Video stitching method for sports scene
CN107909643A (en) Mixing scene reconstruction method and device based on model segmentation
CN104574443B (en) A cooperative tracking method for moving targets across panoramic cameras
Martínez-González et al. Residual pose: A decoupled approach for depth-based 3D human pose estimation
CN106251357A (en) Positioning system based on virtual reality and visual positioning
CN105303554A (en) Image feature point 3D reconstruction method and device
CN103873773B (en) Omnidirectional imaging method based on a primary-auxiliary synergic dual optical path design
Sun et al. Rolling shutter distortion removal based on curve interpolation

Legal Events

Code   Title
C06    Publication
PB01   Publication
C10    Entry into substantive examination
SE01   Entry into force of request for substantive examination
C12    Rejection of a patent application after its publication
RJ01   Rejection of invention patent application after publication

Application publication date: 2010-01-13