
CN103455983A - Image disturbance eliminating method in embedded type video system - Google Patents

Image disturbance eliminating method in embedded type video system

Info

Publication number
CN103455983A
CN103455983A, CN2013103903104A, CN201310390310A
Authority
CN
China
Prior art keywords
data
image
video system
point
unique point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2013103903104A
Other languages
Chinese (zh)
Inventor
潘大任
项芒
古映键
严晶
蒋晓钧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHENZHEN WISESOFT TECHNOLOGY Co Ltd
Original Assignee
SHENZHEN WISESOFT TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHENZHEN WISESOFT TECHNOLOGY Co Ltd filed Critical SHENZHEN WISESOFT TECHNOLOGY Co Ltd
Priority to CN2013103903104A
Publication of CN103455983A
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to an image disturbance elimination method for an embedded video system. The method comprises the following steps: capturing data representing the current frame image; searching the data for feature points, obtaining feature values that describe those feature points, and storing them; retrieving the feature values obtained when the previous frame was processed and matching them against the feature values of the current frame image data; estimating, from the matching result, the motion parameters of the feature points in the current frame relative to the previous frame and storing them; and performing displacement compensation on the current frame image data according to the motion parameters. The image disturbance elimination method for an embedded video system is easy to operate, accurate, compact and low in cost.

Description

Image disturbance eliminating method in embedded type video system
Technical field
The present invention relates to the processing of image data, and more particularly to an image disturbance elimination method in an embedded video system.
Background art
As image information has become an important information source, electronic camera systems are applied in an ever wider range of carriers, and most of these camera systems are embedded video systems. During real operation, the video images they produce are affected by moment-to-moment attitude changes or vibration of the carrier, which causes offsets or rotations between adjacent frames so that pixel coordinates no longer correspond one to one between images. Image disturbance elimination (electronic image stabilization) is a technique for correcting a video image sequence; its purpose is to remove the image disturbance caused by unintentional camera shake, so that the corrected output sequence is smooth and stable. In the prior art, image quality is usually improved by conventional optical stabilization or by combined electromechanical stabilization. Although these approaches can achieve disturbance elimination to some extent, they are inconvenient to operate, inaccurate, bulky and expensive.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the above defects of the prior art, namely inconvenient operation, poor accuracy, large volume and high price, and to provide an image disturbance elimination method in an embedded video system that is easy to operate, accurate, compact and inexpensive.
The technical solution adopted by the present invention to solve this problem is to construct an image disturbance elimination method in an embedded video system, comprising the following steps:
A) capturing data representing the current frame image, performing a feature search on it, and obtaining and storing feature values representing its feature points;
B) retrieving the feature values obtained when the previous frame image data was processed, and performing feature matching between them and the feature values of the current frame image data;
C) estimating, according to the result of the feature matching, the motion parameters of the feature points in the current frame relative to the previous frame, and storing them;
D) performing displacement compensation on the current frame image data according to the motion parameters.
Further, in step A), corner detection is used to find corner points as feature points, and a smallest-partition technique is used to filter out image noise and determine the effective feature points.
Further, in step A), the current frame image data is divided among a plurality of fast-memory block pointers for the corner decision.
Further, in step A), the feature search further comprises:
A1) selecting a data block of a set size from the grayscale data of the current frame and binarizing it; in this step, different data blocks are selected and binarized in turn until the data blocks cover all rows of pixels;
A2) setting the Gaussian parameter and performing Gaussian smoothing on the current frame image data, using fast shift instructions in place of division instructions;
A3) creating a Laplacian image and performing the feature point decision.
Further, in step A), the feature points are searched for within a 112x240-pixel range; the set block size in step A1) is 72x32 pixels, and 8 such data blocks are selected.
Further, in step B), a pyramid template matching method or a contour method is used to match the current frame image data against the feature values of the previous frame.
Further, the matching uses a 20x20-pixel search window to match the data features.
Further, in step C), an affine model and least-squares iteration are used to obtain the motion parameters, which comprise a displacement estimate and a motion direction.
Further, step C) further comprises:
C1) obtaining, according to the result of the feature matching, an estimated displacement offset of the feature point in the current frame relative to the previous frame;
C2) predicting the direction of motion of the feature point in the current frame relative to the previous frame image, and storing the direction of motion and the offset.
Further, the method also includes, before the data is processed, a step of converting floating-point numbers in the obtained data into fixed-point numbers, the conversion following xq = (long)(xf * 2^Q), where xq is the fixed-point number, xf is the floating-point number, and Q is one of Q0-Q15; Q0-Q15 denote constants for different positions of the binary point of the floating-point number: when the binary point lies to the right of bit 0, Q is Q0; when it lies to the right of bit 15, Q is Q15.
Implementing the image disturbance elimination method in an embedded video system of the present invention has the following beneficial effects: because steps such as feature point search and matching are used, and the motion parameters of the feature points in the current frame are estimated from the feature values of the previous frame (or frames), the inter-frame offset of the image sequence is determined and compensated purely by digital signal processing. Image disturbance elimination on the video is therefore easy to operate, more accurate, compact and low in cost.
Brief description of the drawings
Fig. 1 is a flowchart of the disturbance elimination process in an embodiment of the image disturbance elimination method in an embedded video system of the present invention;
Fig. 2 is a detailed flowchart of generating the binary image in the described embodiment;
Fig. 3 is a detailed flowchart of finding corner points using the binary image in the described embodiment.
Detailed description of the embodiments
Embodiments of the present invention are further described below with reference to the accompanying drawings.
As shown in Fig. 1, an embodiment of the image disturbance elimination method in an embedded video system of the present invention comprises the following steps:
Step S11: capture the current frame image, obtain its data, search for feature points, and obtain and store the feature values. In this step, data representing the current frame image is captured, a feature search is performed on it, and feature values representing its feature points are obtained and stored. In this embodiment, the embedded system opens a background thread to capture video frames asynchronously. While the processor is processing the current image, the MDMA (memory DMA) controller independently moves the newly captured next-frame image data into the history data buffer in the background without affecting the stabilization computation in progress. In this step, feature points must be found in the image data; an ideal feature point must be distinct and unique, remain consistent in the next frame, and provide new information to the stabilization handler. In this embodiment, Harris corner detection is used for feature selection, a smallest-partition method is used to filter image noise, and effective information is selected preferentially by feature priority. That is, in this embodiment the current frame image data is divided among a plurality of fast-memory block pointers for the corner decision. The feature point search is described in further detail later.
Step S12: retrieve the feature values of the previous frame and perform feature matching in the current data. In this step, the feature values obtained when the previous frame image data was processed are retrieved (these feature values were stored, so in this step they are read back from memory) and matched against the feature values of the current frame image data. In this embodiment, a pyramid template matching method or a contour method is used to match the current frame image data. When template matching is used, the matching templates are also set and stored in advance. Template matching copes well with the large amount of noise that appears in video, and its results are comparatively good and accurate.
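For illustration, the core of such a template match is a sum-of-absolute-differences (SAD) comparison; the C sketch below performs a plain single-level SAD search over the 20x20 window used later in this embodiment. The 8x8 template size, the function names and the row-major 8-bit grayscale layout are assumptions, and the pyramid (coarse-to-fine) refinement and fixed-point SAD optimizations described in this embodiment are not reproduced here.

#include <stdint.h>

/* Sum of absolute differences between an 8x8 template and a candidate
 * position in the current frame (row-major 8-bit grayscale). */
static uint32_t sad8x8(const uint8_t *tmpl, int tstride,
                       const uint8_t *img, int istride)
{
    uint32_t sum = 0;
    for (int y = 0; y < 8; y++)
        for (int x = 0; x < 8; x++) {
            int d = tmpl[y * tstride + x] - img[y * istride + x];
            sum += (d < 0) ? -d : d;
        }
    return sum;
}

/* Search a 20x20 window around (cx, cy) in the current frame for the best
 * match of the template taken from the previous frame; the best offset is
 * written to (*dx, *dy). */
void match_feature(const uint8_t *prev_tmpl, int tstride,
                   const uint8_t *cur, int cstride, int width, int height,
                   int cx, int cy, int *dx, int *dy)
{
    uint32_t best = UINT32_MAX;
    *dx = *dy = 0;
    for (int oy = -10; oy < 10; oy++)
        for (int ox = -10; ox < 10; ox++) {
            int x = cx + ox, y = cy + oy;
            if (x < 0 || y < 0 || x + 8 > width || y + 8 > height)
                continue;                       /* stay inside the image */
            uint32_t s = sad8x8(prev_tmpl, tstride,
                                cur + y * cstride + x, cstride);
            if (s < best) { best = s; *dx = ox; *dy = oy; }
        }
}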
Step S13: according to the result of the feature matching, obtain the motion parameters of the current frame feature points relative to the previous frame feature points and store them. In this step, an affine model and least-squares iteration are used to obtain and store the motion parameters of the current frame feature points relative to the previous frame feature points; the motion parameters comprise a displacement estimate and a motion direction. It is worth mentioning that there may be several feature points; this embodiment describes the whole disturbance elimination process using a single feature point as an example. When there are several feature points, they are processed one by one: the relevant steps are simply executed once for each feature point. The relevant steps are all steps except image data capture and the final display; for this embodiment, the image capture in step S11 is not repeated, the remaining steps are executed once per feature point, and the previous-frame feature value taken out each time is of course the one belonging to the feature point being processed. For example, if there are two feature points, the above steps are executed twice, whereas the image data capture is not. In this embodiment, the matching for one feature point proceeds as follows: according to the result of the feature matching, an estimated displacement offset of the feature point in the current frame relative to the previous frame is obtained; then the direction of motion of the feature point in the current frame relative to the previous frame image is predicted, and the direction of motion and the offset are stored. Specifically, a 20x20 search window is used to find the best match within the data block of the adjacent frame. The matching may also use the contour method instead of the original pyramid template matching, because in this embodiment the processor has fixed-point optimizations for the sum of absolute differences (SAD) and can perform several comparison operations within one instruction cycle, which greatly strengthens the video processing capability. In addition, the advanced very long instruction word (VLIW) structure of the processor core provides the very high performance required by the current application. Other optimizations include the use of built-in macros and memory alignment. The offset is calculated as follows:
N = NumberOfPairedFeatures
Motionx = ( Σ_{i=1..N} (Lastx_i − Currentx_i) ) / N
Motiony = ( Σ_{i=1..N} (Lasty_i − Currenty_i) ) / N
where N is the total number of paired feature points, Lastx/Lasty are the coordinates of a feature point in the previous frame, Currentx/Currenty are the coordinates of the matching feature point in the current frame, and Motionx/Motiony are the total offsets in the x and y directions.
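Written directly in C, this averaging amounts to the following; the array layout and the function name are illustrative only and not taken from the patent.

/* Average displacement (previous minus current) of n matched feature points; n must be > 0. */
void mean_motion(const int *last_x, const int *last_y,
                 const int *cur_x, const int *cur_y, int n,
                 float *motion_x, float *motion_y)
{
    long sx = 0, sy = 0;
    for (int i = 0; i < n; i++) {
        sx += last_x[i] - cur_x[i];
        sy += last_y[i] - cur_y[i];
    }
    *motion_x = (float)sx / (float)n;   /* total offset in x */
    *motion_y = (float)sy / (float)n;   /* total offset in y */
}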
For the calculating of unique point offset direction, adopt Parabolic Fit camera algorithm (PFC) to estimate the direction of motion of video by the history image queue.Here use 5 frames " data of front " and 30 frames " data of back ".Why use 5 frames " data of front " to be because it has enough fast speed to process direction transformation fast, and not there will be overcompensation.Use 30 frames " data of back " to be because allow historical data to affect these features and don't control it fully.Its concrete steps comprise: change parabolical shape for each new data arranges two parameters, calculate thus the data that need output; These two parameter the insides, one is there is no treated data, one is the data after processing.Each frame does not have the data of processing to put into " before processing " buffer zone, and these packets have contained can be eliminated sawtooth and make the movable information that image is more level and smooth.The video data of having play is put into " after processing " buffer zone, and these historical datas are for anticipation later direction of motion.Once, when having the deposit data of enough catching in two buffer zones, the image that each frame is new can calculate by the Parabolic Fit method position of expection." before processing " buffer data calculates later direction of motion with the anticipation track while finishing, and the video of " after processing " buffer zone output becomes more steady.
Step S14: perform displacement compensation on the current image data according to the motion parameters. In this step, the output of the preceding motion estimation is used as the displacement compensation of the last "front" frame, which is then displayed through the display port of the processor. (In this embodiment, a "front" frame is a frame that has been captured but not yet displayed, and a "back" frame is a frame that has already been displayed.)
In this embodiment, the search for feature points in the image data specifically comprises the following steps.
A data block of a set size is selected from the grayscale data of the current frame and binarized; different data blocks are selected and binarized in turn until the data blocks cover all rows of pixels. That is, in this embodiment the image is captured first; after the image data is obtained, if it is not grayscale data it is taken out and converted into grayscale image data. This grayscale data is not used for display but is converted specifically for the feature point search. In this embodiment, binary corner detection is used to find feature points. To increase speed, a 112x240-pixel range is used instead of the whole frame to find features; this keeps the video above 15 fps and makes the picture appear smoother. Because a grayscale image must be obtained and processed, the configurable MDMA transfer module of the processor is responsible for copying the grayscale values out of the raw UYVY image data. After the grayscale data of the image has been obtained, a 72x32 data block is chosen from it and sent by MDMA into the fast L1 memory of the processor for binarization. In this embodiment, 8 data blocks in total are needed to cover all 240 rows of pixels. Fig. 2 shows the concrete binarization steps of this embodiment: the captured current frame image signal is first converted into a grayscale signal, the grayscale signal is decomposed into 8 data blocks, each block is processed in turn, and it is then checked whether all 8 blocks have been processed; if so, the binary image is generated and the feature value search is performed, otherwise block processing continues until all 8 blocks are done, after which the binary image is generated and the feature value search is performed. As shown in Fig. 2, the data block is the 72x32-pixel block chosen from the grayscale data, and each block is processed as follows: transfer the block, obtaining a 72x32-pixel grayscale image block; perform Gaussian smoothing, obtaining a 72x32-pixel smoothed block; perform the binarization operation, obtaining a 72x32-pixel binary block; transfer the block into the binary image, obtaining 72x32 pixels of the binary image. Each processed block yields part of the binary image; when all 8 blocks have been processed, the binary image of the whole frame is obtained, and the feature points and feature values are searched for and obtained in it.
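Because the feature search works on grayscale data copied out of the raw UYVY frames, that copy is sketched below in C (in UYVY each pixel pair is stored as U0 Y0 V0 Y1, so the luma bytes sit at odd offsets). The function name and row-major layout are assumptions; on the actual hardware this copy is performed by the MDMA engine rather than by the CPU.

#include <stddef.h>

/* Copy the Y (luma) channel of a UYVY frame into an 8-bit grayscale buffer. */
void uyvy_to_gray(const unsigned char *uyvy, unsigned char *gray,
                  int width, int height)
{
    for (int y = 0; y < height; y++) {
        const unsigned char *src = uyvy + (size_t)y * width * 2;   /* 2 bytes per pixel */
        unsigned char *dst = gray + (size_t)y * width;
        for (int x = 0; x < width; x++)
            dst[x] = src[2 * x + 1];                               /* luma byte of pixel x */
    }
}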
The Gaussian parameter is set, and Gaussian smoothing is performed on the current frame image data using fast shift instructions instead of division instructions. In this embodiment the Gaussian parameter σ is set to 0.8, so that all divisors are powers of two; fast shift instructions can then replace the division instructions that consume many instruction cycles. The smoothing is as follows:
Smoothed(i, j) = ( I_{i,j-2}/4 + I_{i,j-1}/2 + I_{i,j} + I_{i,j+1}/2 + I_{i,j+2}/4 ) / 2
sum = 0
sum = sum + (I_{i,j-2} >> 2)
sum = sum + (I_{i,j-1} >> 1)
sum = sum + I_{i,j}
sum = sum + (I_{i,j+1} >> 1)
sum = sum + (I_{i,j+2} >> 2)
sum = sum >> 1
Smoothed(i, j) = sum
where I_{i,j} denotes the grayscale value of the image at row i, column j. Because all divisors are powers of two, the sum obtained with right-shift instructions replaces the division formula above.
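As a minimal sketch, the same shift-only smoothing applied along one image row might look like the following C routine; the row-major 8-bit buffer, the clamping and the decision to leave the two border columns untouched are assumptions for illustration, not details from the patent.

/* Shift-only smoothing of one row with the weights 1/4, 1/2, 1, 1/2, 1/4 (sigma = 0.8),
 * halved as in the formula above. Columns 0,1 and width-2,width-1 are copied unchanged. */
void smooth_row(const unsigned char *in, unsigned char *out, int width)
{
    for (int j = 0; j < width; j++)
        out[j] = in[j];                       /* default: copy, covers the borders */
    for (int j = 2; j < width - 2; j++) {
        unsigned int sum = 0;
        sum += in[j - 2] >> 2;                /* I(i,j-2) / 4 */
        sum += in[j - 1] >> 1;                /* I(i,j-1) / 2 */
        sum += in[j];                         /* I(i,j)       */
        sum += in[j + 1] >> 1;                /* I(i,j+1) / 2 */
        sum += in[j + 2] >> 2;                /* I(i,j+2) / 4 */
        sum >>= 1;
        out[j] = (unsigned char)(sum > 255 ? 255 : sum);  /* weights sum to 1.25, so clamp */
    }
}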
A Laplacian image is created and the feature point decision is performed. This step uses the following method:
laplacian_{i,j} = I_{i-1,j} + I_{i,j-1} + I_{i+1,j} + I_{i,j+1} − 4·I_{i,j}
where I_{i,j} denotes the grayscale value of the smoothed image at row i, column j, and laplacian_{i,j} denotes the value of the Laplacian binary image at row i, column j; this value is used to decide the positions of the corner points.
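The Laplacian decision can be sketched in C as follows; the patent does not give the threshold that turns the Laplacian response into the binary corner mask, so the threshold parameter, the function name and the 8-bit row-major layout are assumptions.

#include <stdlib.h>
#include <string.h>

/* Compute the 4-neighbour Laplacian of the smoothed image and mark pixels whose
 * absolute response exceeds a threshold as corner candidates (borders stay 0). */
void laplacian_mask(const unsigned char *img, unsigned char *mask,
                    int width, int height, int threshold)
{
    memset(mask, 0, (size_t)width * height);
    for (int i = 1; i < height - 1; i++)
        for (int j = 1; j < width - 1; j++) {
            int lap = img[(i - 1) * width + j] + img[i * width + (j - 1)]
                    + img[(i + 1) * width + j] + img[i * width + (j + 1)]
                    - 4 * img[i * width + j];
            mask[i * width + j] = (abs(lap) > threshold) ? 1 : 0;
        }
}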
In this embodiment, the whole image is divided among a plurality of fast-memory block pointers for binarization and the corner decision, which effectively reduces the multiplications occurring in memory addressing and increases the computation speed. Traditional binary corner detection uses a 37-pixel neighbourhood for the decision, which was designed for 720x480 video. Since only a 112x240 region is used here, the height of each corner neighbourhood is halved, so only 19 neighbouring pixels are needed to decide whether a point is a corner. Because the number of comparisons is halved, the whole stabilization algorithm is correspondingly faster. The algorithm stores all corner information in a queue. If the queue is not full, every corner can be added to it; if it is full, only corners whose brightness differs sufficiently from the centre brightness are added. After all features have been collected, the stabilization algorithm uses a smallest-partition technique to process them. In this embodiment the minimum distance adopted by the processor is 14, chosen because it guarantees that the whole region can be covered. Loop unrolling is also used in the corner detection: by unrolling the loops, the number of conditional branches is reduced and the execution time is shortened.
In addition, in this embodiment, in order to obtain higher processing speed and simpler operation, a series of optimizations is also applied to the processor and the data, including the following.
Although the processor used in this embodiment is a high-performance fixed-point DSP, video processing applications would otherwise need to convert floating-point numbers to fixed point before each computation, which adds considerable computation and hurts real-time performance. To improve the efficiency of floating-point handling, the Q format is used to convert floating-point values to fixed point. At the same time, solving the equations involves differentiating the image, so realizing efficient differentiation also becomes a key point; in particular, for high-definition images, differentiating the full image can seriously affect the real-time performance of the algorithm. The image size is also optimized for the specific target: useless pixels are discarded and the number of memory accesses is reduced. The embedded instruction set and zero-overhead loops are used to raise speed, and fixed-point arithmetic is used to reduce the complexity of high-precision computation.
In this embodiment, after the image data is obtained, all floating-point numbers in it are converted to fixed point as an optimization. The conversion involves choosing a constant Q: when the binary point of the floating-point number is in a different position, a different Q is chosen. Q ranges over Q0-Q15 and is selected as follows: if the binary point lies to the right of bit 0, the format is Q0; if it lies to the right of bit 15, the format is Q15; the rest follow by analogy. For example:
hexadecimal 2000h in Q0 = 8192;
hexadecimal 2000h in Q1 = 4096.0;
hexadecimal 2001h in Q1 = 4096.5;
hexadecimal 2001h in Q2 = 2048.25;
hexadecimal 2000h in Q15 = 0.25;
hexadecimal A000h in Q15 = 1.25; and so on.
The conversion between floating-point and fixed-point numbers is as follows:
to convert a floating-point number xf to a fixed-point number xq: xq = (long)(xf * 2^Q);
to convert a fixed-point number xq back to a floating-point number xf: xf = (float)xq / 2^Q.
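As an illustration, these two conversions can be written as small C helpers; the Q15 case below reproduces the 2000h = 0.25 example above. The macro names and the demonstration program are illustrative and not part of the patent.

#include <stdio.h>

/* Convert between float and Qn fixed point; n = 15 is used here as an example. */
#define FLOAT_TO_Q(xf, n)  ((long)((xf) * (1L << (n))))
#define Q_TO_FLOAT(xq, n)  ((float)(xq) / (float)(1L << (n)))

int main(void)
{
    long q = FLOAT_TO_Q(0.25f, 15);                    /* 0.25 in Q15 -> 8192 = 0x2000 */
    printf("0.25 in Q15 = %ld (0x%lX)\n", q, q);
    printf("back to float = %f\n", Q_TO_FLOAT(q, 15)); /* prints 0.250000 */
    return 0;
}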
Performing the corresponding computations after this float-to-fixed-point optimization improves the overall performance of the image disturbance elimination method.
To reduce the number of memory accesses, the differentiation is optimized. Suppose that differentiating 7 pixels of an image requires 21 memory reads and 21 multiplications; if these 7 pixels are read once and the differentials are computed in parallel, only 7 memory reads are needed and 5 multiplications are saved at the same time. Applying the same method to the whole image greatly reduces the amount of computation and the number of memory reads, and significantly improves the real-time performance of the algorithm.
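One common way to realize this read-once differentiation, given here only as an assumed illustration rather than the patent's exact differential scheme, is to keep a small sliding window of pixel values in local variables so that each pixel is loaded from memory a single time per row:

/* Horizontal central-difference gradient of one row, loading each pixel once. */
void row_gradient(const unsigned char *in, short *grad, int width)
{
    if (width < 3)
        return;
    int left = in[0], mid = in[1];
    grad[0] = grad[width - 1] = 0;           /* borders left as zero */
    for (int j = 1; j < width - 1; j++) {
        int right = in[j + 1];               /* the only memory read in this step */
        grad[j] = (short)(right - left);
        left = mid;                          /* slide the window */
        mid = right;
    }
}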
For handling the stored data, first the processor's internal direct-storage IDMA mode is used for fast internal data transfers; second, the Master DMA (MDMA) supported by the core is used to move memory blocks between low-speed and high-speed memory in the background, so that the next frame of image data is captured while the execution time of the algorithm is reduced.
In the prior art there are other video processing schemes that perform similar stabilization, but such image processing is often handed off over a wireless link to a fixed station, whose faster CPU and memory complete the video processing. However, wireless transmission increases the complexity of the processing, and it introduces noise into the image that hides and masks the image features between frames, making feature matching and tracking difficult. In addition, radio transmission causes a delay of about 300 ms. In this embodiment, by contrast, the video processing is performed directly on the video board: there is no radio-frequency interference, the board can analyse the captured video free of such noise, and there is no delay on the control side.
The above embodiments express only several implementations of the present invention, and their description is comparatively specific and detailed, but they should not be construed as limiting the scope of the claims of the invention. It should be pointed out that a person of ordinary skill in the art may make several variations and improvements without departing from the concept of the invention, and these all fall within the scope of protection of the invention. Therefore the scope of protection of this patent shall be determined by the appended claims.

Claims (10)

1. An image disturbance elimination method in an embedded video system, characterized in that it comprises the following steps:
A) capturing data representing the current frame image, performing a feature point search on it, and obtaining and storing feature values representing its feature points;
B) retrieving the feature values obtained when the previous frame image data was processed, and performing feature matching between them and the feature values of the current frame image data;
C) estimating, according to the result of the feature matching, the motion parameters of the feature points in the current frame relative to the previous frame, and storing them;
D) performing displacement compensation on the current frame image data according to the motion parameters.
2. The image disturbance elimination method in an embedded video system according to claim 1, characterized in that in step A) corner detection is used to find corner points as feature points, and a smallest-partition technique is used to filter out image noise and determine the effective feature points.
3. The image disturbance elimination method in an embedded video system according to claim 2, characterized in that in step A) the current frame image data is divided among a plurality of fast-memory block pointers for the corner decision.
4. The image disturbance elimination method in an embedded video system according to claim 3, characterized in that in step A) the feature search further comprises:
A1) selecting a data block of a set size from the grayscale data of the current frame and binarizing it; in this step, different data blocks are selected and binarized in turn until the data blocks cover all rows of pixels;
A2) setting the Gaussian parameter and performing Gaussian smoothing on the current frame image data, using fast shift instructions in place of division instructions;
A3) creating a Laplacian image and performing the feature point decision.
5. The image disturbance elimination method in an embedded video system according to claim 4, characterized in that in step A) the feature points are searched for within a 112x240-pixel range; the set block size in step A1) is 72x32 pixels, and 8 such data blocks are selected.
6. The image disturbance elimination method in an embedded video system according to claim 5, characterized in that in step B) a pyramid template matching method or a contour method is used to match the current frame image data against the feature values of the previous frame.
7. The image disturbance elimination method in an embedded video system according to claim 6, characterized in that the matching uses a 20x20-pixel search window to match the data features.
8. The image disturbance elimination method in an embedded video system according to any one of claims 1-7, characterized in that in step C) an affine model and least-squares iteration are used to obtain the motion parameters, which comprise a displacement estimate and a motion direction.
9. The image disturbance elimination method in an embedded video system according to claim 8, characterized in that step C) further comprises:
C1) obtaining, according to the result of the feature matching, an estimated displacement offset of the feature point in the current frame relative to the previous frame;
C2) predicting the direction of motion of the feature point in the current frame relative to the previous frame image, and storing the direction of motion and the offset.
10. The image disturbance elimination method in an embedded video system according to claim 9, characterized in that it also includes, before the data is processed, a step of converting floating-point numbers in the obtained data into fixed-point numbers, the conversion following xq = (long)(xf * 2^Q), where xq is the fixed-point number, xf is the floating-point number, and Q is one of Q0-Q15; Q0-Q15 denote constants for different positions of the binary point of the floating-point number: when the binary point lies to the right of bit 0, Q is Q0; when it lies to the right of bit 15, Q is Q15.
CN2013103903104A 2013-08-30 2013-08-30 Image disturbance eliminating method in embedded type video system Pending CN103455983A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2013103903104A CN103455983A (en) 2013-08-30 2013-08-30 Image disturbance eliminating method in embedded type video system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2013103903104A CN103455983A (en) 2013-08-30 2013-08-30 Image disturbance eliminating method in embedded type video system

Publications (1)

Publication Number Publication Date
CN103455983A true CN103455983A (en) 2013-12-18

Family

ID=49738315

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2013103903104A Pending CN103455983A (en) 2013-08-30 2013-08-30 Image disturbance eliminating method in embedded type video system

Country Status (1)

Country Link
CN (1) CN103455983A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104618627A (en) * 2014-12-31 2015-05-13 小米科技有限责任公司 Video processing method and device
CN106257911A (en) * 2016-05-20 2016-12-28 上海九鹰电子科技有限公司 Image stability method and device for video image
CN106447730A (en) * 2016-09-14 2017-02-22 深圳地平线机器人科技有限公司 Parameter estimation method, parameter estimation apparatus and electronic equipment
CN107454303A (en) * 2016-05-31 2017-12-08 宇龙计算机通信科技(深圳)有限公司 A kind of video anti-fluttering method and terminal device
CN110163353A (en) * 2018-02-13 2019-08-23 上海寒武纪信息科技有限公司 A kind of computing device and method
CN108876739B (en) * 2018-06-15 2020-11-24 Oppo广东移动通信有限公司 An image compensation method, electronic device and computer-readable storage medium
CN112492249A (en) * 2019-09-11 2021-03-12 瑞昱半导体股份有限公司 Image processing method and circuit
CN113378867A (en) * 2020-02-25 2021-09-10 北京轻舟智航智能技术有限公司 Asynchronous data fusion method and device, storage medium and electronic equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070236578A1 (en) * 2006-04-06 2007-10-11 Nagaraj Raghavendra C Electronic video image stabilization
CN102427505A (en) * 2011-09-29 2012-04-25 深圳市万兴软件有限公司 Video image stabilization method and system based on Harris Corner

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070236578A1 (en) * 2006-04-06 2007-10-11 Nagaraj Raghavendra C Electronic video image stabilization
CN102427505A (en) * 2011-09-29 2012-04-25 深圳市万兴软件有限公司 Video image stabilization method and system based on Harris Corner

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
彭小江 et al.: "Robust real-time electronic image stabilization based on feature matching and verification", Acta Photonica Sinica *
温晓峰: "Research on an electronic image stabilization system based on an embedded system", China Master's Theses Full-text Database, Information Science and Technology *
王兆军: "Research on video-based imaging de-jittering methods", China Master's Theses Full-text Database, Information Science and Technology *
王志民 et al.: "A survey of electronic image stabilization techniques", Journal of Image and Graphics *
隋龙 et al.: "A high-speed heuristic pyramid template matching algorithm", Chinese Journal of Scientific Instrument *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104618627A (en) * 2014-12-31 2015-05-13 小米科技有限责任公司 Video processing method and device
CN104618627B (en) * 2014-12-31 2018-06-08 小米科技有限责任公司 Method for processing video frequency and device
CN106257911A (en) * 2016-05-20 2016-12-28 上海九鹰电子科技有限公司 Image stability method and device for video image
CN107454303A (en) * 2016-05-31 2017-12-08 宇龙计算机通信科技(深圳)有限公司 A kind of video anti-fluttering method and terminal device
CN106447730A (en) * 2016-09-14 2017-02-22 深圳地平线机器人科技有限公司 Parameter estimation method, parameter estimation apparatus and electronic equipment
CN106447730B (en) * 2016-09-14 2020-02-28 深圳地平线机器人科技有限公司 Parameter estimation method and device and electronic equipment
CN110163353A (en) * 2018-02-13 2019-08-23 上海寒武纪信息科技有限公司 A kind of computing device and method
CN108876739B (en) * 2018-06-15 2020-11-24 Oppo广东移动通信有限公司 An image compensation method, electronic device and computer-readable storage medium
CN112492249A (en) * 2019-09-11 2021-03-12 瑞昱半导体股份有限公司 Image processing method and circuit
CN112492249B (en) * 2019-09-11 2024-04-09 瑞昱半导体股份有限公司 Image processing method and circuit
CN113378867A (en) * 2020-02-25 2021-09-10 北京轻舟智航智能技术有限公司 Asynchronous data fusion method and device, storage medium and electronic equipment
CN113378867B (en) * 2020-02-25 2023-08-22 北京轻舟智航智能技术有限公司 Asynchronous data fusion method and device, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN103455983A (en) Image disturbance eliminating method in embedded type video system
CN102148934A (en) Multi-mode real-time electronic image stabilizing system
Yang et al. Real-time object detection for streaming perception
Alzugaray et al. Haste: multi-hypothesis asynchronous speeded-up tracking of events
JP2003274416A (en) Adaptive motion estimation device and estimation method
CN109963048A (en) Noise-reduction method, denoising device and Dolby circuit system
CN102665041A (en) Digital image stabilization
CN103402045A (en) Image de-spin and stabilization method based on subarea matching and affine model
CN102427505B (en) Video image stabilization method and system on the basis of Harris Corner
CN109063659A (en) The detection and tracking and system of moving target
Stoll et al. Adaptive integration of feature matches into variational optical flow methods
CN111598918B (en) A Motion Estimation Method for Video Stabilization Based on Reference Frame Selection and Foreground and Background Separation
CN107749987A (en) A kind of digital video digital image stabilization method based on block motion estimation
CN113963204B (en) A twin network target tracking system and method
CN108230282A (en) A kind of multi-focus image fusing method and system based on AGF
CN111091101A (en) High-precision pedestrian detection method, system and device based on one-step method
CN106952304A (en) A Depth Image Calculation Method Using Frame-to-Frame Correlation in Video Sequence
CN106296732A (en) A kind of method for tracking moving object under complex background
CN111726526A (en) An image processing method, device, electronic device and storage medium
CN109765611A (en) Seismic data interpolation method and device
CN119313881A (en) Infrared small target detection method based on non-convex tensor Tucker decomposition
CN109523573A (en) The tracking and device of target object
JP2007228156A (en) Movement detection processing apparatus and method thereof
CN117112833B (en) Video static frame filtering method and device based on storage space optimization
JPS61201581A (en) Method and apparatus for detecting dynamic vector

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20131218

RJ01 Rejection of invention patent application after publication