
CN101009021A - Video stabilizing method based on matching and tracking of characteristic - Google Patents


Info

Publication number
CN101009021A
CN101009021A (application CN200710036817A; granted as CN100530239C)
Authority
CN
China
Prior art keywords
video
frame
feature
feature points
sift
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 200710036817
Other languages
Chinese (zh)
Other versions
CN100530239C (en)
Inventor
胡蓉
施荣杰
沈一帆
陈文斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fudan University
Original Assignee
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fudan University
Priority to CNB2007100368174A
Publication of CN101009021A
Application granted
Publication of CN100530239C
Legal status: Expired - Fee Related
Anticipated expiration

Landscapes

  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The invention belongs to the technical field of computer digital image and video processing, and specifically discloses a video stabilization method based on feature matching and tracking. The invention applies SIFT-based feature matching to the video de-jittering problem. The steps are: find the SIFT feature points of each video frame; perform global parameter estimation using an affine model as the parameter-estimation model; smooth the motion of the video sequence with a combination of Gaussian filtering and curve fitting; and fill in the unknown regions. The method is robust, is little affected by environmental factors, estimates motion parameters with high accuracy, has small image-alignment error, and repairs video at low time cost.

Description

Video stabilization method based on feature matching and tracking
Technical field
The invention belongs to the technical field of computer digital image and video processing, and specifically relates to a video stabilization method based on feature-point matching and tracking.
Background technology
Video de-jittering (also called video stabilization) is a key video-enhancement technique. With the sharp drop in the price of digital imaging equipment and the rise in computing power, personal and mobile digital cameras have become increasingly common, and digital image and video processing has received growing attention. These devices make it easy to record what happens nearby anytime and anywhere, and large amounts of personally shot video are uploaded to the Internet for viewing and download. Because the imaging equipment is unstable, such home videos, as well as footage from surveillance equipment or UAVs (Unmanned Aerial Vehicles), usually contain significant high-frequency jitter, which blurs the picture and tires the viewer. On the other hand, stable video compresses better: if the entire image is vibrating, extra bits are spent recording the spurious motion changes, wasting storage space and transmission bandwidth. Stabilized images achieve a better compression ratio and quality, which benefits remote and web browsing. In recent years much research has addressed this problem, and many new methods and techniques have been proposed to improve the quality and speed of video stabilization.
Video jitter refers to the shaking and blurring of a video sequence caused by inconsistent motion noise of the camera during shooting. To eliminate this jitter, the true global motion parameters of the camera must be extracted, and a suitable transform is then applied to compensate for the camera motion so that the picture becomes smooth and stable; this technology is commonly called video de-jittering or video stabilization. Current de-jittering techniques fall into two classes: hardware methods and image-processing methods. The hardware method, also known as optical stabilization, comprises an optical system together with motion sensors that compensate for the camera's motion. Although very effective, it greatly increases the cost of the camera and can usually handle only small motions, so many cameras do not adopt it. Image-processing methods post-process the captured video clip to remove jitter produced by hand-held shooting or mechanical vibration. There are two main approaches: feature matching and optical flow (Optical Flow). The feature-matching approach extracts feature points from each frame, matches them between consecutive frames, computes the camera's global motion parameters from the matching results, and finally compensates the original sequence with the filtered global motion transforms. Its effectiveness depends largely on the precision of the feature matching, and its application is limited when the scene contains moving targets or lacks distinctive texture. The optical-flow approach first computes the optical flow between consecutive frames, obtains the global motion parameters from the flow by motion analysis, and then compensates the original sequence with the filtered motion parameters. Its advantage is that a motion vector is obtained for every pixel; however, if the scene contains inconsistent moving regions, global motion estimation usually has to be combined with video segmentation, and because every pixel must be analyzed, these methods generally require considerable computation. In addition, the aperture problem inherent to optical-flow computation must also be considered. Finally, because de-jittering translates or rotates the original video sequence, unknown regions appear at the frame borders, so a fast and effective video-repair method is also important. The main methods currently in use are video mosaicking (Video Mosaic) and motion-estimation-based video completion (Video Completion). Both have drawbacks: a simple mosaic method produces blur and ghosting, while the motion-estimation method, although it repairs better than the former, must compute a motion vector for every pixel and therefore carries a large time cost.
Summary of the invention
The objective of the invention is to propose a video stabilization method with low time cost and a robust algorithm.
The invention uses scale-invariant features (Scale-Invariant Feature Transform, SIFT) for inter-frame motion estimation. SIFT features are invariant to image scaling and rotation, and remain partially invariant to changes in illumination and in the 3D camera viewpoint. Because SIFT features are localized in both the spatial and frequency domains, the influence of occlusion, clutter, and noise is greatly reduced. These features are also highly distinctive, so they can be matched with high accuracy. These advantages give SIFT-based matching good robustness and reliability. Until now the method has been used only for image matching and panorama generation; the invention is the first to apply it to the video de-jittering problem, with satisfactory experimental results. Given a jittery video sequence, the de-jittering operation proceeds in the following steps.
1. Find the SIFT feature points of each frame and give each feature point a descriptor containing spatial- and frequency-domain feature descriptions. Each frame is smoothed with Gaussian functions of different scales; SIFT feature points are located at the extrema (maxima and minima) of the difference of adjacent scales. The image is then up-sampled, the same smoothing is applied, and so on, building a pyramid structure and finding feature points at every scale. Next, the gradient direction of each feature point is computed from the local features of the image, so that each feature point carries position, scale, and orientation information. A descriptor is then computed for the local region around each feature point, designed to remain as unchanged as possible when the environment changes, for example under variations in lighting or viewpoint. Specifically, as shown in Fig. 1, the gradient and orientation of each point are computed on the smoothed image at the scale of the feature point. In Fig. 1(a), the gradients and orientations around the feature point are sampled; the whole sampling window is divided into 2×2 sample regions, and an 8-bin orientation histogram is computed in each region, as shown in Fig. 1(b). Each local descriptor thus contains the magnitudes of all directions in the orientation histogram of each sample region, so the local descriptor of a feature point is expressed as a feature vector of length 2×2×8 = 32.
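The scale-space extremum search of step 1 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the scale ladder, the contrast threshold, and the use of `scipy.ndimage.gaussian_filter` are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_extrema(image, sigmas=(1.0, 1.6, 2.56, 4.1), thresh=0.5):
    """Find candidate keypoints as extrema of the difference of
    Gaussians (DoG) across adjacent scales."""
    levels = [gaussian_filter(image.astype(float), s) for s in sigmas]
    dog = np.stack([b - a for a, b in zip(levels, levels[1:])])
    keypoints = []
    # A pixel is a candidate if it is the max or min of its 3x3x3
    # neighbourhood: its own DoG layer plus the layers above/below.
    for k in range(1, dog.shape[0] - 1):
        for y in range(1, dog.shape[1] - 1):
            for x in range(1, dog.shape[2] - 1):
                v = dog[k, y, x]
                if abs(v) < thresh:        # contrast threshold (assumed)
                    continue
                cube = dog[k - 1:k + 2, y - 1:y + 2, x - 1:x + 2]
                if v == cube.max() or v == cube.min():
                    keypoints.append((x, y, sigmas[k]))
    return keypoints
```

On a constant image every DoG value is zero, so the contrast threshold correctly rejects all candidates.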
2. Global motion parameter estimation. An affine model is adopted as the motion-parameter estimation model:

A = | a1  a2  a3 |
    | a4  a5  a6 |
    |  0   0   1 |

In the model, a1, a2, a4, a5 describe scaling and rotation, while a3 and a6 describe translation. First, a fast nearest-neighbor algorithm matches the feature points described above, taking each point's nearest neighbor as its match. The Hough transform is then used, by a voting principle, to determine all feature points belonging to the same object, and finally the parameters of the motion model (6 in total) are determined from these feature points by least squares (least-square solution).
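As a concrete illustration of the final least-squares stage above, the six affine parameters can be recovered from matched point pairs as below. This is a sketch: the nearest-neighbor matching and Hough voting stages are omitted, and the point-array interface is an assumption.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares estimate of the 6 affine parameters a1..a6
    mapping src points (x, y) to dst points (x', y'):
        x' = a1*x + a2*y + a3
        y' = a4*x + a5*y + a6
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    n = len(src)
    # Build the 2n x 6 design matrix of the linear system.
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src       # rows for x': [x, y, 1, 0, 0, 0]
    M[0::2, 2] = 1.0
    M[1::2, 3:5] = src       # rows for y': [0, 0, 0, x, y, 1]
    M[1::2, 5] = 1.0
    b = dst.reshape(-1)
    params, *_ = np.linalg.lstsq(M, b, rcond=None)
    return params            # a1, a2, a3, a4, a5, a6
```

With three or more non-collinear correspondences the system is over-determined and the least-squares solution is unique.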
3. Smooth the motion of the video sequence by combining Gaussian filtering and curve fitting. Both techniques are widely used for smoothing and denoising signals, and each has its own strengths and weaknesses. Combining them exploits the advantages of both: on the one hand it guarantees the stability of the video stream, and on the other it overcomes the over-smoothing caused by using a large-window Gaussian filter alone, keeping the unknown border regions as small as possible. The stabilized video here is not completely motionless; the goal is that the motion of the generated video be smooth, giving the viewer a fluid, pleasant visual experience. The idea of combining these two methods has not appeared in previous work. Experiments show that the combination yields satisfactory stabilized video sequences. We first fit a quadratic curve to the motion curve estimated above, then apply Gaussian filtering to the fitted curve; the Gaussian kernel parameter σ need not be large (generally σ is between 0.6 and 1.2), which avoids over-smoothing. We compared the two possible orderings of the methods and found little difference between them; the key lies in the choice of parameters.
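The fit-then-filter combination of step 3 might be written as follows. This is a sketch under the assumptions stated above (quadratic fit first, then a small-σ Gaussian filter); `gaussian_filter1d` is one convenient way to apply the kernel.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_motion(curve, sigma=1.0):
    """Smooth one motion-parameter curve: quadratic fit, then
    Gaussian filtering with a small kernel (sigma in 0.6-1.2)."""
    t = np.arange(len(curve), dtype=float)
    # Step 1: least-squares quadratic fit captures the intended
    # low-frequency camera path.
    a, b, c = np.polyfit(t, curve, 2)
    fitted = a * t**2 + b * t + c
    # Step 2: a small Gaussian kernel removes residual
    # high-frequency jitter without over-smoothing.
    return gaussian_filter1d(fitted, sigma)
```

Applied to a jittery parameter curve, the output path has far smaller frame-to-frame variation than the input.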
4. Filling the unknown regions has always been a relatively difficult problem. Our method is as follows: on the stabilized video stream produced above, first align the neighbor frames of the target frame (the frame to be filled), about 4-6 frames on each side, to the target frame; compute the difference between each neighbor frame and the target frame; and sort the neighbors by difference (generally, the farther a frame is from the target frame, the larger the difference). The neighbor frame with the smallest difference is used first to fill the target frame. If unknown regions remain, the neighbor with the second-smallest difference fills them, and so on. Unlike the usual approach, we use dynamic programming (DP) on the difference image to find a minimum-difference path and stitch the two images along this path; to guarantee temporal continuity, the search is limited to a band (10 pixels) along the border of the unknown region. Combining the DP algorithm with the mosaic method is also an innovative point of our work.
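The dynamic-programming search for a minimum-difference stitching path can be sketched as follows. This is illustrative only: the patent restricts the search to a roughly 10-pixel band along the unknown-region border, which is omitted here.

```python
import numpy as np

def min_cost_seam(diff):
    """Find the top-to-bottom path through the difference image
    with minimal accumulated cost; the two images are then
    stitched along this path."""
    h, w = diff.shape
    cost = diff.astype(float).copy()
    back = np.zeros((h, w), dtype=int)
    for y in range(1, h):
        for x in range(w):
            # Allowed predecessors: the three pixels above.
            lo, hi = max(x - 1, 0), min(x + 2, w)
            j = np.argmin(cost[y - 1, lo:hi]) + lo
            back[y, x] = j
            cost[y, x] += cost[y - 1, j]
    # Trace the cheapest path back from the bottom row.
    seam = [int(np.argmin(cost[-1]))]
    for y in range(h - 1, 0, -1):
        seam.append(back[y, seam[-1]])
    return seam[::-1]  # seam[y] = column of the path in row y
```

If one column of the difference image is exactly zero, the recovered seam follows it.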
The advantages of the invention:
(1) The algorithm is robust, and is little affected by unfavorable factors such as illumination changes and occlusion.
(2) Motion parameters are estimated with high accuracy, and the image-alignment error is small.
(3) The combination of the two smoothing algorithms overcomes the shortcomings of each, producing good results.
(4) The video repair incorporates the DP algorithm, which guarantees continuity in time and space while greatly reducing the time cost compared with optical-flow methods.
Description of drawings
Fig. 1 illustrates the descriptor. In (a), the gradients and orientations are sampled around a feature point; (b) shows the 8-bin orientation histograms.
Fig. 2 illustrates the computation of the difference of Gaussians.
Fig. 3 illustrates the neighborhood used when searching for extrema of the difference of Gaussians.
Fig. 4 shows the feature points found in an image and their gradients.
Fig. 5 compares image filling results, where (a) is the result of the present invention and (b) is the result of the mosaic method.
Fig. 6 shows experimental results of the invention, where the first row is the original video stream, the second row shows the frames after de-jittering, and the third row shows the result after repair.
Embodiment
1. The test data is a jittery video clip captured with a hand-held camera.
2. Each frame is smoothed with Gaussian functions whose scale doubles from layer to layer, and the extrema of the differences between layers are taken as feature points. The up-sampled image is likewise smoothed with Gaussian functions of different scales, differences are taken and extrema are found, and so on. In our experiments the image is up-sampled 3 times. The computation of the difference of Gaussians is shown in Fig. 2; Fig. 3 illustrates the neighborhood of an extremum point, which includes the neighboring nodes in the same layer and in the layers above and below.
3. Compute the gradient direction of each feature point. The formulas are:

m(x, y) = sqrt( (L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))² )

θ(x, y) = tan⁻¹( (L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)) )

where L is the smoothed image at the scale of the feature point, m(x, y) is the gradient magnitude, and θ(x, y) is the gradient direction. Fig. 4 shows the feature points found in an image and their gradients.
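In code, these central-difference formulas amount to the following sketch (border pixels are left at zero; `arctan2` is used so the quadrant of the direction is resolved correctly):

```python
import numpy as np

def gradient_mag_ori(L):
    """Gradient magnitude m and direction theta of a smoothed
    image L, via the central differences
    dx = L(x+1, y) - L(x-1, y),  dy = L(x, y+1) - L(x, y-1)."""
    dx = np.zeros_like(L, dtype=float)
    dy = np.zeros_like(L, dtype=float)
    dx[:, 1:-1] = L[:, 2:] - L[:, :-2]
    dy[1:-1, :] = L[2:, :] - L[:-2, :]
    m = np.sqrt(dx**2 + dy**2)
    theta = np.arctan2(dy, dx)
    return m, theta
```

For an image that is a pure ramp in x with slope 2, the interior magnitude is 4 and the direction is 0, as the formulas predict.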
4. Using the formulas in step 3, compute the gradient at every point. As shown in Fig. 1, compute the gradient histogram (8 bins) in each sample region around the feature point. The gradient magnitudes in each direction of each region form a feature vector, which serves as the local descriptor of the feature point.
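A sketch of the 2×2-region, 8-bin descriptor described above; the window size and the bin boundaries are illustrative assumptions, not values stated in the patent.

```python
import numpy as np

def local_descriptor(m, theta, cx, cy, win=8):
    """Build the 2x2 regions x 8 orientation bins = 32-vector
    around pixel (cx, cy), from gradient magnitude m and
    direction theta (both 2-D arrays)."""
    desc = []
    half = win // 2
    for ry in (0, 1):
        for rx in (0, 1):
            y0, x0 = cy - half + ry * half, cx - half + rx * half
            mag = m[y0:y0 + half, x0:x0 + half].ravel()
            ang = theta[y0:y0 + half, x0:x0 + half].ravel()
            # 8-bin orientation histogram, weighted by magnitude.
            bins = ((ang + np.pi) / (2 * np.pi) * 8).astype(int) % 8
            hist = np.bincount(bins, weights=mag, minlength=8)
            desc.extend(hist)
    return np.asarray(desc)
```

With unit magnitudes and a constant direction of 0, each of the four regions contributes all 16 of its pixels to a single bin, giving a 32-vector with total weight 64.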
5. Using the nearest-neighbor algorithm, find the closest match of each feature point between two adjacent frames; then vote with the Hough transform for the most likely motion, and find the feature points that cast those votes, which should share the same motion. Finally, determine the 6 parameters of the affine model from these feature points by least squares.
6. After the motion path is found, smooth the motion curve of each of the 6 parameters with a quadratic fit. The quadratic form used here is y = ax² + bx + c; the coefficients a, b, c are determined by least squares.
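Step 6's least-squares quadratic fit corresponds directly to `numpy.polyfit`; the motion curve below is synthetic, for illustration only.

```python
import numpy as np

t = np.arange(10, dtype=float)      # frame indices
y = 2.0 * t**2 - 3.0 * t + 1.0      # synthetic parameter curve
a, b, c = np.polyfit(t, y, 2)       # least-squares fit of y = a*t^2 + b*t + c
smooth = a * t**2 + b * t + c       # fitted (smoothed) motion curve
```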
7. Further smooth the motion curve of each of the 6 parameters with a Gaussian function. The Gaussian kernel is G(k) = (1/√(2πσ²)) · exp(−k²/(2σ²)), where σ is the standard deviation (σ = 1 in the experiments) and k is the distance between the neighbor frame and the target frame. The smoothing is computed as follows:
T_i = Σ_{j∈N} A_i^j · G(j − i),    Î_i = T_i · I_i

where N = { j : i − k ≤ j ≤ i + k } is the neighborhood of frame i, A_i^j denotes the motion parameters from frame i to frame j, T_i is the resulting motion compensation applied to frame i, and Î_i is frame i after smoothing.
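Under the assumption that each A_i^j is stored as a 3×3 affine matrix (identity when i = j or when no estimate is available), the compensation T_i could be computed as below. Normalising the truncated kernel is an implementation choice not spelled out in the text; the `transforms` dictionary interface is hypothetical.

```python
import numpy as np

def compensation(transforms, i, k=5, sigma=1.0):
    """T_i = sum over neighbours j of A_i^j * G(j - i), with
    A_i^j the 3x3 affine transform from frame i to frame j,
    looked up in `transforms[(i, j)]` (identity if absent)."""
    G = lambda d: np.exp(-d**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
    T = np.zeros((3, 3))
    wsum = 0.0
    for j in range(i - k, i + k + 1):
        A = transforms.get((i, j), np.eye(3))
        w = G(j - i)
        T += w * A
        wsum += w
    return T / wsum  # normalised weighted average of the transforms
```

When every neighbour transform is the identity (a perfectly stable sequence), T_i reduces to the identity and the frame is left untouched.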
8. Finally, after motion compensation, fill the unknown regions produced at the borders. Using the motion parameters, the neighbor frames (5 on each side here) are aligned to the target frame. Each is then differenced with the target frame, the neighbors are sorted by difference from smallest to largest, and the frame with the smallest difference is used first for filling. On the difference image, within the band 10 pixels inside the border of the unknown region, a minimum-difference path is found with dynamic programming (DP), and the two images are spliced along this path. If unknown regions remain, the frame with the second-smallest difference is used for repair, and so on; if regions still remain, more neighbor frames are used. Fig. 5 compares the filling results: Fig. 5(a) is the result of our proposed method, and Fig. 5(b) is the result of the general mosaic method, which shows obvious stitching errors. The boxes mark the filled sub-regions for easy comparison.
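The ordered filling of step 8 (without the DP seam) might look like the following sketch. The (frame, valid-mask) pair is a hypothetical representation of a neighbour frame already warped into the target's coordinates.

```python
import numpy as np

def fill_unknown(target, unknown, neighbours):
    """Fill pixels where `unknown` is True using aligned neighbour
    frames, most-similar neighbour first. Each neighbour is a
    (frame, valid) pair; `valid` marks pixels the warp covered."""
    known = ~unknown

    def diff(nv):
        frame, valid = nv
        overlap = known & valid
        # Mean absolute difference on pixels both frames know.
        return np.abs(frame[overlap] - target[overlap]).mean()

    out = target.astype(float).copy()
    todo = unknown.copy()
    for frame, valid in sorted(neighbours, key=diff):
        take = todo & valid
        out[take] = frame[take]
        todo &= ~valid          # those pixels are now filled
        if not todo.any():
            break
    return out, todo            # `todo` marks pixels still unfilled
```

Given two fully valid neighbours, the one closer to the target in mean absolute difference supplies the filled pixels.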
9. Fig. 6 shows the final experimental results. Four frames of the experiment video are shown; the first row is the original video stream, and the second and third rows are the results after de-jittering and after repair, respectively. The coordinate grid makes the stabilized result easier to compare.

Claims (2)

1. A video stabilization method based on feature matching and tracking, characterized in that the concrete steps are as follows:
(1) For a jittery video sequence, find the SIFT feature points of each frame and give each feature point a descriptor containing spatial- and frequency-domain feature descriptions, where SIFT denotes scale-invariant features;
(2) Global motion parameter estimation, using an affine model as the motion-parameter estimation model, the affine model being expressed as:

A = | a1  a2  a3 |
    | a4  a5  a6 |
    |  0   0   1 |

in which a1, a2, a4, a5 describe scaling and rotation and a3, a6 describe translation; first match the above feature points with a fast nearest-neighbor algorithm, taking the closest neighbor as the matching point; then use the Hough transform, by a voting principle, to determine all feature points belonging to the same object; finally determine each parameter of the motion model from these feature points by least squares;
(3) For the curves estimated in step (2), smooth the video sequence using Gaussian filtering and curve fitting, with the Gaussian kernel parameter σ taking a value of 0.6-1.2;
(4) For the filling of unknown regions: on the stabilized video stream processed in step (3), first align the neighbor frames, 4-6 on each side of the target frame, to the target frame, compute the differences between the neighbor frames and the target frame, and sort them by difference; fill the target frame with the neighbor frame of smallest difference; if unknown regions remain, fill them with the neighbor frame of second-smallest difference, and so on.
2. The video stabilization method according to claim 1, characterized in that the step of finding the SIFT feature points of each frame is as follows: smooth each frame with Gaussian functions of different scales; the SIFT feature points are located at the extrema of the difference of adjacent scales; then up-sample the image, apply the same smoothing, and so on, building a pyramid structure and finding the feature points at every scale; and the step of giving each feature point a descriptor is as follows: compute the gradient direction of each feature point from the local features of the image to obtain the position, scale, and orientation of each feature point; then, according to the scale of the feature point, compute the gradient and orientation of each point on the smoothed image; sample the gradients and orientations around the feature point, divide the whole sampling window into 2×2 sample regions, and compute an 8-bin orientation histogram in each region, so that the local descriptor of a feature point is expressed as a feature vector of length 2×2×8 = 32.
CNB2007100368174A 2007-01-25 2007-01-25 Video stabilizing method based on matching and tracking of characteristic Expired - Fee Related CN100530239C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2007100368174A CN100530239C (en) 2007-01-25 2007-01-25 Video stabilizing method based on matching and tracking of characteristic

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2007100368174A CN100530239C (en) 2007-01-25 2007-01-25 Video stabilizing method based on matching and tracking of characteristic

Publications (2)

Publication Number Publication Date
CN101009021A true CN101009021A (en) 2007-08-01
CN100530239C CN100530239C (en) 2009-08-19

Family

ID=38697430

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2007100368174A Expired - Fee Related CN100530239C (en) 2007-01-25 2007-01-25 Video stabilizing method based on matching and tracking of characteristic

Country Status (1)

Country Link
CN (1) CN100530239C (en)

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101316368B (en) * 2008-07-18 2010-04-07 西安电子科技大学 Panoramic Image Stabilization Method Based on Global Feature Point Iteration
CN101281650B (en) * 2008-05-05 2010-05-12 北京航空航天大学 A Fast Global Motion Estimation Method for Video Stabilization
CN101394522B (en) * 2007-09-19 2010-07-21 中国科学院计算技术研究所 Method and system for detecting video copy
CN101839722A (en) * 2010-05-06 2010-09-22 南京航空航天大学 Method for automatically recognizing target at medium and low altitudes and positioning carrier with high accuracy
CN101957996A (en) * 2009-07-17 2011-01-26 上海遥薇实业有限公司 On-off passenger target tracing method based on stable characteristic
CN102099815A (en) * 2008-05-19 2011-06-15 巴黎高等理工学院 Method and device for the invariant affine recognition of shapes
CN102171722A (en) * 2008-12-30 2011-08-31 英特尔公司 Method and apparatus for noise reduction in video
CN101493889B (en) * 2008-01-23 2011-12-07 华为技术有限公司 Method and apparatus for tracking video object
CN102289670A (en) * 2011-08-31 2011-12-21 长安大学 Image characteristic extraction method with illumination robustness
CN102456225A (en) * 2010-10-22 2012-05-16 深圳中兴力维技术有限公司 Video monitoring system and moving target detecting and tracking method thereof
CN101399969B (en) * 2007-09-28 2012-09-05 三星电子株式会社 System,device and method for moving target detection and tracking based on moving camera
CN102908163A (en) * 2011-08-02 2013-02-06 深圳迈瑞生物医疗电子股份有限公司 Frame correlation method, frame correlation device and ultrasonic imaging system
CN101989352B (en) * 2009-08-06 2013-05-01 宋智礼 Image registration method based on improved scale invariant feature transform (SIFT) algorithm and Lissajous figure track
CN103390162A (en) * 2013-07-08 2013-11-13 中国科学院计算技术研究所 Detection method for station captions
CN103413444A (en) * 2013-08-26 2013-11-27 深圳市川大智胜科技发展有限公司 Traffic flow surveying and handling method based on unmanned aerial vehicle high-definition video
CN103413327A (en) * 2013-08-23 2013-11-27 北京理工大学 Video stabilizing method based on multiple planes
CN103761523A (en) * 2014-01-02 2014-04-30 北京理工大学 Automatic identification and tracking method for airborne remote sensing video in specific man-made area
CN103841296A (en) * 2013-12-24 2014-06-04 哈尔滨工业大学 Real-time electronic image stabilizing method with wide-range rotation and horizontal movement estimating function
CN104135597A (en) * 2014-07-04 2014-11-05 上海交通大学 Automatic detection method of jitter of video
WO2015010244A1 (en) * 2013-07-23 2015-01-29 Microsoft Corporation Adaptive path smoothing for video stabilization
CN104636745A (en) * 2013-11-08 2015-05-20 株式会社理光 Method and device for extracting scale-invariant features and method and device for recognizing objects
CN105306785A (en) * 2015-10-27 2016-02-03 武汉工程大学 Electronic image stabilizing method and system based on SIFT feature matching and VFC algorithm
CN105447841A (en) * 2014-05-28 2016-03-30 展讯通信(天津)有限公司 Image matching method and video processing method
CN105959696A (en) * 2016-04-28 2016-09-21 成都三零凯天通信实业有限公司 Video content safety monitoring method based on SIFT characteristic algorithm
CN106210448A (en) * 2016-07-22 2016-12-07 恒业智能信息技术(深圳)有限公司 A kind of video image dithering Processing for removing method
CN106327432A (en) * 2015-06-18 2017-01-11 北京大学 Image restoration method and device based on offset quantity
CN106550174A (en) * 2016-10-28 2017-03-29 大连理工大学 A kind of real time video image stabilization based on homography matrix
US9953400B2 (en) 2013-07-23 2018-04-24 Microsoft Technology Licensing, Llc Adaptive path smoothing for video stabilization
CN108520265A (en) * 2012-07-09 2018-09-11 西斯维尔科技有限公司 Method for converting image descriptor and associated picture processing equipment
CN109792530A (en) * 2016-09-23 2019-05-21 高通股份有限公司 Adapting to image processing in nobody the autonomous vehicles
CN109840917A (en) * 2019-01-29 2019-06-04 北京市商汤科技开发有限公司 Image processing method and device, network training method and device
CN110401796A (en) * 2019-07-05 2019-11-01 浙江大华技术股份有限公司 Method and device for jitter compensation of image acquisition device
CN111034193A (en) * 2017-08-30 2020-04-17 高通股份有限公司 Multi-source video stabilization
CN111279685A (en) * 2018-04-28 2020-06-12 深圳市大疆创新科技有限公司 Motion estimation

Cited By (55)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101394522B (en) * 2007-09-19 2010-07-21 中国科学院计算技术研究所 Method and system for detecting video copy
CN101399969B (en) * 2007-09-28 2012-09-05 三星电子株式会社 System,device and method for moving target detection and tracking based on moving camera
CN101493889B (en) * 2008-01-23 2011-12-07 华为技术有限公司 Method and apparatus for tracking video object
CN101281650B (en) * 2008-05-05 2010-05-12 北京航空航天大学 A Fast Global Motion Estimation Method for Video Stabilization
CN102099815A (en) * 2008-05-19 2011-06-15 巴黎高等理工学院 Method and device for the invariant affine recognition of shapes
CN101316368B (en) * 2008-07-18 2010-04-07 西安电子科技大学 Panoramic Image Stabilization Method Based on Global Feature Point Iteration
US8903191B2 (en) 2008-12-30 2014-12-02 Intel Corporation Method and apparatus for noise reduction in video
CN102171722A (en) * 2008-12-30 2011-08-31 英特尔公司 Method and apparatus for noise reduction in video
CN102171722B (en) * 2008-12-30 2014-08-13 英特尔公司 Method and apparatus for noise reduction in video
CN101957996A (en) * 2009-07-17 2011-01-26 上海遥薇实业有限公司 On-off passenger target tracing method based on stable characteristic
CN101989352B (en) * 2009-08-06 2013-05-01 宋智礼 Image registration method based on improved scale invariant feature transform (SIFT) algorithm and Lissajous figure track
CN101839722A (en) * 2010-05-06 2010-09-22 南京航空航天大学 Method for automatically recognizing target at medium and low altitudes and positioning carrier with high accuracy
CN102456225A (en) * 2010-10-22 2012-05-16 深圳中兴力维技术有限公司 Video monitoring system and moving target detecting and tracking method thereof
CN102456225B (en) * 2010-10-22 2014-07-09 深圳中兴力维技术有限公司 Video monitoring system and moving target detecting and tracking method thereof
CN102908163A (en) * 2011-08-02 2013-02-06 深圳迈瑞生物医疗电子股份有限公司 Frame correlation method, frame correlation device and ultrasonic imaging system
CN102908163B (en) * 2011-08-02 2016-01-20 深圳迈瑞生物医疗电子股份有限公司 A kind of frame correlation technique and device thereof and ultrasonic image-forming system
CN102289670B (en) * 2011-08-31 2013-03-20 长安大学 Image characteristic extraction method with illumination robustness
CN102289670A (en) * 2011-08-31 2011-12-21 长安大学 Image characteristic extraction method with illumination robustness
CN108520265B (en) * 2012-07-09 2021-09-07 新运环球有限公司 Method and related image processing apparatus for converting image descriptors
CN108520265A (en) * 2012-07-09 2018-09-11 西斯维尔科技有限公司 Method for converting image descriptor and associated picture processing equipment
CN103390162A (en) * 2013-07-08 2013-11-13 中国科学院计算技术研究所 Detection method for station captions
CN103390162B (en) * 2013-07-08 2016-08-10 中国科学院计算技术研究所 A kind of station caption detection method
US9953400B2 (en) 2013-07-23 2018-04-24 Microsoft Technology Licensing, Llc Adaptive path smoothing for video stabilization
CN105409196B (en) * 2013-07-23 2018-11-30 微软技术许可有限责任公司 Adaptive Path for video stabilization is smooth
CN105409196A (en) * 2013-07-23 2016-03-16 微软技术许可有限责任公司 Adaptive path smoothing for video stabilization
WO2015010244A1 (en) * 2013-07-23 2015-01-29 Microsoft Corporation Adaptive path smoothing for video stabilization
US9697587B2 (en) 2013-07-23 2017-07-04 Microsoft Technology Licensing, Llc Adaptive path smoothing for video stabilization
CN103413327A (en) * 2013-08-23 2013-11-27 北京理工大学 Video stabilizing method based on multiple planes
CN103413327B (en) * 2013-08-23 2016-05-18 北京理工大学 Video stabilizing method based on multiple planes
CN103413444B (en) * 2013-08-26 2015-08-19 深圳市川大智胜科技发展有限公司 Traffic flow surveying and handling method based on unmanned aerial vehicle high-definition video
CN103413444A (en) * 2013-08-26 2013-11-27 深圳市川大智胜科技发展有限公司 Traffic flow surveying and handling method based on unmanned aerial vehicle high-definition video
CN104636745A (en) * 2013-11-08 2015-05-20 株式会社理光 Method and device for extracting scale-invariant features and method and device for recognizing objects
CN104636745B (en) * 2013-11-08 2018-04-10 株式会社理光 Method and device for extracting scale-invariant features, and method and device for recognizing objects
CN103841296B (en) * 2013-12-24 2017-01-18 哈尔滨工业大学 Real-time electronic image stabilizing method with wide-range rotation and horizontal movement estimating function
CN103841296A (en) * 2013-12-24 2014-06-04 哈尔滨工业大学 Real-time electronic image stabilizing method with wide-range rotation and horizontal movement estimating function
CN103761523A (en) * 2014-01-02 2014-04-30 北京理工大学 Automatic identification and tracking method for airborne remote sensing video in specific man-made area
CN105447841B (en) * 2014-05-28 2019-06-07 展讯通信(天津)有限公司 Image matching method and method for processing video frequency
CN105447841A (en) * 2014-05-28 2016-03-30 展讯通信(天津)有限公司 Image matching method and video processing method
CN104135597A (en) * 2014-07-04 2014-11-05 上海交通大学 Automatic detection method of jitter of video
CN106327432A (en) * 2015-06-18 2017-01-11 北京大学 Image restoration method and device based on offset quantity
CN105306785A (en) * 2015-10-27 2016-02-03 武汉工程大学 Electronic image stabilizing method and system based on SIFT feature matching and VFC algorithm
CN105959696A (en) * 2016-04-28 2016-09-21 成都三零凯天通信实业有限公司 Video content safety monitoring method based on SIFT characteristic algorithm
CN106210448B (en) * 2016-07-22 2020-08-18 深圳市智慧城市建设运行管理有限公司 Video image jitter elimination processing method
CN106210448A (en) * 2016-07-22 2016-12-07 恒业智能信息技术(深圳)有限公司 Video image jitter elimination processing method
CN109792530A (en) * 2016-09-23 2019-05-21 高通股份有限公司 Adaptive image processing in unmanned autonomous vehicles
CN106550174B (en) * 2016-10-28 2019-04-09 大连理工大学 A real-time video stabilization method based on homography matrix
CN106550174A (en) * 2016-10-28 2017-03-29 大连理工大学 Real-time video stabilization method based on homography matrix
CN111034193B (en) * 2017-08-30 2021-04-20 高通股份有限公司 Multi-source video stabilization
CN111034193A (en) * 2017-08-30 2020-04-17 高通股份有限公司 Multi-source video stabilization
CN111279685A (en) * 2018-04-28 2020-06-12 深圳市大疆创新科技有限公司 Motion estimation
US11172218B2 (en) 2018-04-28 2021-11-09 SZ DJI Technology Co., Ltd. Motion estimation
CN109840917A (en) * 2019-01-29 2019-06-04 北京市商汤科技开发有限公司 Image processing method and device, network training method and device
CN110401796B (en) * 2019-07-05 2020-09-29 浙江大华技术股份有限公司 Jitter compensation method and device of image acquisition device
CN110401796A (en) * 2019-07-05 2019-11-01 浙江大华技术股份有限公司 Method and device for jitter compensation of image acquisition device
US12293494B2 (en) 2019-07-05 2025-05-06 Zhejiang Dahua Technology Co., Ltd. Methods and systems for video stabilization

Also Published As

Publication number Publication date
CN100530239C (en) 2009-08-19

Similar Documents

Publication Publication Date Title
CN101009021A (en) Video stabilizing method based on matching and tracking of characteristic
Yang et al. Seeing deeply and bidirectionally: A deep learning approach for single image reflection removal
Albanis et al. Pano3d: A holistic benchmark and a solid baseline for 360° depth estimation
CN111311666B (en) Monocular vision odometer method integrating edge features and deep learning
Kim et al. Dynamic video deblurring using a locally adaptive blur model
Liu et al. Satellite video super-resolution based on adaptively spatiotemporal neighbors and nonlocal similarity regularization
Duan et al. Guided event filtering: Synergy between intensity images and neuromorphic events for high performance imaging
Song et al. Robust video stabilization based on particle filtering with weighted feature points
CN101316368A (en) Panoramic Image Stabilization Method Based on Global Feature Point Iteration
CN103426182A (en) Electronic image stabilization method based on visual attention mechanism
Okade et al. Video stabilization using maximally stable extremal region features
CN108776971A (en) A kind of variation light stream based on layering nearest-neighbor determines method and system
Choi et al. A contour tracking method of large motion object using optical flow and active contour model
Liu et al. High-speed video generation with an event camera
Liu et al. Learning to see through obstructions with layered decomposition
Yang et al. Deep networks with detail enhancement for infrared image super-resolution
US20230394833A1 (en) Method, system and computer readable media for object detection coverage estimation
Pan et al. Depth map completion by jointly exploiting blurry color images and sparse depth maps
CN115019241A (en) Pedestrian identification and tracking method and device, readable storage medium and equipment
Kim et al. Dynamic scene deblurring using a locally adaptive linear blur model
Wang et al. Adaptive video stabilization based on feature point detection and full-reference stability assessment
Sarıgül A survey on digital video stabilization
Hu et al. Feature-based real-time video stabilization for vehicle video recorder system
Cheong et al. Reflection removal under fast forward camera motion
Li et al. Real-time and robust video stabilization based on block-wised gradient features

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090819

Termination date: 20120125