CN116347232A - Real-time video image stabilizing method and device
- Publication number: CN116347232A
- Application number: CN202210152446.0A
- Authority: CN (China)
- Prior art keywords: points, image, matching, point, stabilizing
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/440281—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the temporal resolution, e.g. by frame skipping
Abstract
The invention discloses a real-time video image stabilization method and device, addressing the problem that existing electronic image stabilization cannot increase stabilization speed while maintaining accuracy. The method first acquires the previous frame and the current frame of a video and converts them to grayscale; it then extracts feature points by combining FAST with shi-Tomasi, checks the extracted feature points and matching points against set thresholds, filters out mismatched points, estimates the motion vector between the two frames from the matching points, and finally transforms the previous frame according to the motion vector to serve as the stabilized current frame, yielding a stable image with high stabilization speed and high accuracy. Through this scheme, the invention achieves both high stabilization accuracy and high speed.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a real-time video image stabilizing method and device.
Background
For a camera that is handheld or mounted on a moving carrier, hand tremor, carrier motion, or external disturbances cause image shake during imaging, which degrades imaging quality, hinders viewing, and impairs subsequent processing such as target tracking and detection. Research on video stabilization is therefore of great significance for production and daily life.
Video stabilization methods can be divided into three categories by principle: (1) mechanical image stabilization, which typically uses a stabilizing platform built from gyro sensors combined with servos to compensate for random shake of the camera system on its base; (2) optical image stabilization, which places optical anti-shake components such as prisms, mirrors, and optical modules into the imaging system and compensates for shake through adaptive adjustment of these optical elements; (3) electronic image stabilization, which reduces or even eliminates video blur caused by random camera vibration using integrated circuit technology and digital image processing. Mechanical and optical stabilization both require additional hardware and frequent adjustment, place high demands on installation precision and device feedback speed, increase cost, and enlarge the overall stabilization system, running counter to the trend toward miniaturized, portable camera equipment. Electronic image stabilization works at the image level: it computes the video jitter and compensates for it, keeping the equipment lightweight, and offers low cost, high accuracy, and high speed, making it suitable for many military and civilian applications.
The electronic image stabilization mainly comprises three modules: a motion estimation module, a motion filtering module and a motion compensation module.
Motion estimation methods fall into the following categories by principle: (1) block matching, suited to translational motion models; high accuracy but relatively slow; (2) bit-plane matching, suited to translational motion models; moderate accuracy and high speed; (3) gray projection, suited to translational models; both accurate and fast; (4) transform-domain methods, suited to affine/perspective models; computationally heavy and strongly affected by rotation; (5) feature point matching, suited to affine/perspective models; relatively accurate for rotational and translational motion and relatively fast.
Motion filtering smooths the estimated motion data so that the processed video frames play more smoothly and continuously; methods include mean filtering, curve fitting, Kalman filtering, and the like.
The motion compensation module performs image compensation by bilinear interpolation.
Current electronic image stabilization research focuses on accuracy and speed. Higher accuracy yields a steadier video and a more comfortable viewing experience; higher speed makes the method easier to port to embedded devices, enabling miniaturization and portability. Many existing methods cannot achieve both: they are either accurate but slow, or fast with a poor stabilization result. The open problem for existing stabilization methods is therefore to increase stabilization speed while maintaining accuracy.
Disclosure of Invention
The invention aims to provide a real-time video image stabilization method and device to solve the problem that existing electronic image stabilization cannot increase stabilization speed while maintaining accuracy.
In order to solve the problems, the invention provides the following technical scheme:
the real-time video image stabilizing method comprises the following steps:
s1, acquiring the previous frame and the current frame of the video, and converting them to grayscale;
s2, extracting feature points from the previous frame obtained in step S1 using the FAST method, and judging whether the number of feature points is smaller than a set feature point threshold; if yes, executing step S3, otherwise executing step S4;
s3, extracting feature points from the previous frame obtained in step S1 using the shi-Tomasi corner detection method, and judging whether the number of feature points is smaller than the set feature point threshold; if yes, skipping the frame, otherwise executing step S4;
s4, matching the feature points against the current frame, and judging whether the number of matching points is smaller than a set matching point threshold; if yes, skipping the frame, otherwise executing step S5;
s5, filtering out mismatched points, and judging whether the number of matching points is smaller than the set matching point threshold; if yes, skipping the frame, otherwise estimating the motion vector between the two frames from the matching points;
s6, transforming the previous frame according to the motion vector obtained in step S5 to serve as the stabilized current frame.
The method combines FAST and shi-Tomasi for feature point extraction, ensuring both the speed of extraction and the quality of the feature points. Checking the extracted feature points and matching points against set thresholds keeps the program stable at run time and avoids failures caused by too few points. Mismatched points are filtered out, the motion vector between the two frames is estimated from the matching points, and finally the previous frame is transformed according to the motion vector to serve as the stabilized current frame, yielding a stable image with high stabilization speed and high accuracy.
Further, in step S4, an optical flow method matches feature points between the two frames: the feature points of the previous frame are matched in the current frame, so no feature points need to be extracted from the current frame, saving time; the matches can also be scored, yielding higher-quality matching points.
Further, in step S5, the motion vector between the two frames is estimated from the matching points using a combination of a non-singular linear transformation and a translation, expressed in matrix form as:

$$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} t_x \\ t_y \end{pmatrix}$$

where $(x', y')$ is the target pixel position, $(x, y)$ is the original pixel position, $a_{11}$, $a_{12}$, $a_{21}$ and $a_{22}$ encode the scaling and rotation, and $t_x$, $t_y$ are the displacements.
Alternatively, since scaling is negligible in the scene, the transformation in step S5 can be represented by a rotation angle:

$$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} t_x \\ t_y \end{pmatrix}$$

where $(x', y')$ is the target pixel position, $(x, y)$ is the original pixel position, $\theta$ is the rotation angle, and $t_x$, $t_y$ are the displacements.
Further, the specific process of filtering mismatched points in step S5 is: during feature point matching, a feature point in the previous frame may have several successfully matched points in the current frame; the distance between the previous-frame feature point and each matched point in the current frame is computed, the match with the minimum distance is kept, and the other matches are filtered out.
further, the motion vector obtained in the step S5 is filtered out of the value outside the threshold range through the threshold range of the set motion vector, and then the final motion vector is obtained through Kalman filtering; setting a motion vector threshold range for judging jitter data, wherein a small motion vector can be regarded as no jitter, a large motion vector can be regarded as calculation error, limiting the size of the motion vector can enable the processed video to be more stable, filtering error data of rotation translation transformation of two frames of images and enabling the processed video to be more stable; the Kalman filtering uses the data smooth motion trail of all the previous frames, so that the processed video frames are more stable and natural while jitter is removed.
Further, after motion compensation is applied to the final motion vector, step S6 is performed. The specific process is: taking the previous frame as the reference image, an affine transformation with the final motion vector is applied to the pixels of the current frame to determine the value of each pixel, thereby obtaining the compensated image.
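To make the full flow concrete, the following is a minimal end-to-end sketch of steps S1 to S6 using OpenCV. It is not the patent's reference implementation: all thresholds and parameter values are assumptions, and cv2.estimateAffinePartial2D with RANSAC stands in for the mismatch filtering and motion estimation of step S5.

```python
# Minimal sketch of steps S1-S6 with OpenCV; every threshold and
# parameter value here is an illustrative assumption.
import cv2
import numpy as np

MIN_FEATURES, MIN_MATCHES = 50, 20  # assumed thresholds

def stabilize_pair(prev_bgr, curr_bgr):
    # S1: grayscale processing
    prev = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)

    # S2: FAST first; S3: fall back to shi-Tomasi if too few points
    kps = cv2.FastFeatureDetector_create(threshold=25).detect(prev)
    pts = np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)
    if len(pts) < MIN_FEATURES:
        pts = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                                      qualityLevel=0.01, minDistance=7)
        if pts is None or len(pts) < MIN_FEATURES:
            return None  # skip this frame

    # S4: match by pyramidal Lucas-Kanade optical flow
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None)
    good_prev = pts[status.ravel() == 1]
    good_curr = nxt[status.ravel() == 1]
    if len(good_prev) < MIN_MATCHES:
        return None  # skip this frame

    # S5: robust estimate of the rotation+translation (similarity) model
    M, _ = cv2.estimateAffinePartial2D(good_prev, good_curr,
                                       method=cv2.RANSAC)
    if M is None:
        return None

    # S6: transform the previous frame as the stabilized current frame
    h, w = prev.shape
    return cv2.warpAffine(prev_bgr, M, (w, h))
```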
A real-time video image stabilization device comprises a memory: for storing executable instructions;
a processor: for executing the executable instructions stored in the memory to implement the real-time video image stabilization method.
Compared with the prior art, the invention has the following beneficial effects:
(1) The invention extracts feature points with the FAST method and checks their number. FAST is very fast and saves a great deal of time, but the quality of its feature points is not always good, so when the feature points are few or blurred, the shi-Tomasi method is used to extract higher-quality feature points. Combining FAST with shi-Tomasi ensures both extraction speed and the effectiveness of the extracted feature points, giving a better stabilization result. The method suits scenes with translational and rotational jitter and performs well under complex motion patterns and large translations and rotations.
(2) The invention checks the extracted feature points and matching points against set thresholds, preventing errors caused by an insufficient number of points at run time and making the program more stable.
(3) The invention matches feature points with an optical flow method, finding matches in the current frame for the feature points of the previous frame, so no feature points need to be extracted from the current frame, saving time; the matches can also be scored, yielding higher-quality matching points.
(4) The invention uses Kalman filtering to smooth the motion trajectory with the data of all previous frames, so the processed video frames are steadier and more natural while jitter is removed.
Drawings
To describe the embodiments of the invention or the prior art more clearly, the drawings needed for that description are briefly introduced below. The drawings described here are only some embodiments of the invention; a person skilled in the art could obtain further drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of the present invention.
Fig. 2 is a schematic diagram of FAST method feature point selection.
Fig. 3 is a diagram showing the effect of an original frame (left) and a stabilized frame (right) of a certain frame of a test video.
Fig. 4 is a graph showing the effect of an original frame (left) and a stabilized frame (right) of a certain frame of a test video.
Fig. 5 is a diagram of the original camera path and the optimal path for the test video.
Detailed Description
To make the objects, technical solutions, and advantages of the invention clearer, the invention is described in further detail below with reference to figs. 1 to 5. The described embodiments should not be construed as limiting the invention; all other embodiments obtained by a person skilled in the art without inventive effort fall within the scope of protection of the invention.
Before the embodiments of the invention are described in further detail, the terms used in them are explained as they arise in the following description.
Example 1
As shown in figs. 1 to 5, the real-time video image stabilization method achieves high stabilization accuracy at high speed, can process long videos, and supports real-time data processing on a camera platform; the method comprises the following steps:
firstly, acquiring two video frames, and converting the previous frame and the current frame to grayscale;
secondly, extracting feature points from the previous frame processed in the first step using the FAST method;
the specific process of extracting feature points by the FAST method is as follows:
(a) Select a pixel P from the image, let its intensity be $I_P$, and set a threshold t;
(b) Draw a discretized circle of radius 3 pixels centered on P; the boundary of this circle contains 16 pixels;
(c) If n contiguous pixels on the circle all have values greater than $I_P + t$, or all have values less than $I_P - t$, then P is a corner, i.e. a feature point; n may be set to 12 or 9;
(d) Traverse the whole image to obtain all feature points;
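A minimal sketch of this extraction using OpenCV's built-in FAST detector; the input path and the threshold value are assumptions for illustration:

```python
# Sketch: FAST corner extraction with OpenCV. The input path and the
# threshold t are assumed values.
import cv2

prev_gray = cv2.imread("prev.png", cv2.IMREAD_GRAYSCALE)  # assumed input
fast = cv2.FastFeatureDetector_create(threshold=25,       # t in step (a)
                                      nonmaxSuppression=True)
keypoints = fast.detect(prev_gray)
print(f"FAST found {len(keypoints)} feature points")
```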
thirdly, checking the number of feature points from the second step: if it is smaller than the given feature point threshold, performing the fourth and fifth steps; if it is larger than the given threshold, performing the sixth step;
fourthly, extracting characteristic points of the previous frame of image by using a shi-Tomasi method;
the specific process of extracting feature points by the shi-Tomasi method is as follows:
(a) Calculating gradient I of pixel point I (x, y) in x and y directions in image x ,I y ;
(b) Calculating the product of gradients of the image in the x and y directions;
I x I y =I x +I y
(c) Using gaussian pairsI x I y Performing Gaussian weighting (sigma=2, ksize=3), and calculating a matrix M corresponding to a window W with a center point of (x, y);
(d) Calculating eigenvalue r of matrix M 1 ,r 2 And calculates a response value R at each pixel point (x, y)
R=min(r 1 ,r 2 );
(e) And setting a threshold t, and if R is smaller than t, taking the point as a corner point, namely a characteristic point.
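A minimal sketch using cv2.goodFeaturesToTrack, OpenCV's implementation of the shi-Tomasi response described above; all parameter values are assumptions:

```python
# Sketch: shi-Tomasi corner extraction. qualityLevel acts as the
# threshold t, expressed relative to the strongest response R in the
# image; maxCorners and minDistance are assumed values.
import cv2

prev_gray = cv2.imread("prev.png", cv2.IMREAD_GRAYSCALE)  # assumed input
corners = cv2.goodFeaturesToTrack(prev_gray,
                                  maxCorners=200,
                                  qualityLevel=0.01,
                                  minDistance=7,
                                  blockSize=3)
n = 0 if corners is None else len(corners)
print(f"shi-Tomasi found {n} feature points")
```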
Fifth, checking the number of feature points from the fourth step: if it is smaller than the given feature point threshold, skipping this frame, acquiring the next frame, and returning to the first step; if it is larger than the given threshold, performing the sixth step;
sixthly, performing feature point matching on the extracted feature points and the current frame image through an optical flow method, judging whether the matching points are smaller than a set matching point threshold value, if yes, the matching points skip the frame too little, otherwise, entering a seventh step;
the specific process of the optical flow method is as follows:
(a) Building a pyramid: establishing a plurality of layers of images with different resolutions, wherein each layer uses an image with one resolution, the resolution of the bottommost layer is highest, and the resolution is lower as the layer number is higher;
(b) Pyramid feature tracking: calculating the optical flow of the bottommost layer, transmitting the optical flow to the highest layer, and transmitting the optical flow obtained after the correction of the highest layer to the lower layer as an initial value, and finally transmitting the optical flow to the bottommost layer, namely an original image;
(c) Iteration: and (5) repeatedly obtaining an optimal solution.
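A minimal sketch of this pyramidal tracking with cv2.calcOpticalFlowPyrLK; the window size, pyramid depth, termination criteria, and input paths are assumed values:

```python
# Sketch: pyramidal Lucas-Kanade optical flow. Feature points from the
# previous frame are tracked into the current frame; status marks the
# successful matches and err scores their quality.
import cv2

prev_gray = cv2.imread("prev.png", cv2.IMREAD_GRAYSCALE)  # assumed inputs
curr_gray = cv2.imread("curr.png", cv2.IMREAD_GRAYSCALE)
pts = cv2.goodFeaturesToTrack(prev_gray, 200, 0.01, 7)

nxt, status, err = cv2.calcOpticalFlowPyrLK(
    prev_gray, curr_gray, pts, None,
    winSize=(21, 21), maxLevel=3,   # maxLevel=3 -> a 4-layer pyramid
    criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

matched_prev = pts[status.ravel() == 1]
matched_curr = nxt[status.ravel() == 1]
```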
Seventh, filtering out mismatched points and judging whether the number of matching points is smaller than the set matching point threshold; if so, too few matching points remain and the frame is skipped; otherwise, entering the eighth step. The specific process of filtering mismatched points is: during feature point matching, a feature point in the previous frame may have several successfully matched points in the current frame; the distance between the previous-frame feature point and each matched point in the current frame is computed, the match with the minimum distance is kept, and the other matches are filtered out;
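A minimal sketch of this minimum-distance filter; the data layout (one previous-frame point with a list of candidate matches) is an assumption for illustration:

```python
# Sketch: keep only the nearest of several candidate matches for one
# previous-frame feature point, filtering the rest as mismatches.
import numpy as np

def keep_nearest(prev_pt, candidates):
    """prev_pt: (2,) array; candidates: (k, 2) array of matched points."""
    d = np.linalg.norm(candidates - prev_pt, axis=1)
    return candidates[np.argmin(d)]

prev_pt = np.array([120.0, 80.0])                      # toy values
candidates = np.array([[121.5, 81.0], [180.0, 40.0]])
best = keep_nearest(prev_pt, candidates)               # -> [121.5, 81.0]
```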
eighth step, estimating motion vectors of the two frames of images according to the matching points; the specific process is as follows: the matrix is expressed as follows by combining non-singular linear transformation and translational transformation:
in the above equation, (x ', y') represents the target pixel position, (x, y) represents the original pixel position, a 11 、a 12 、a 21 And a 22 Indicating the zoom and rotation sizes, t x 、t y Representing the displacement magnitude; the above method has six degrees of freedom, can be calculated by using three coordinate point pairs, has little scaling in a scene, and can be represented by the following matrix if represented by a rotation angle:
in the above equation, (x ', y') represents the target pixel position, (x, y) represents the original pixel position, θ represents the rotation angle, t x 、t y Indicating the displacement magnitude.
Ninth, transforming the previous frame according to the motion vector to serve as the stabilized current frame.
Example 2
On the basis of embodiment 1, this embodiment further filters the motion vector of the seventh step through a set motion vector threshold range, removing values that are too large or too small. The threshold range is used to identify jitter data and filter out erroneous rotation-translation data between the two frames, making the processed video smoother.
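A minimal sketch of this threshold test; the bounds below are assumed illustrative values, not values given in the patent:

```python
# Sketch: reject motion parameters outside a plausible range. Very large
# values are treated as computation errors; very small ones as no jitter.
MAX_SHIFT = 40.0    # pixels (assumed bound)
MAX_ANGLE = 0.20    # radians (assumed bound)

def within_threshold(dx, dy, dtheta):
    return abs(dx) <= MAX_SHIFT and abs(dy) <= MAX_SHIFT \
        and abs(dtheta) <= MAX_ANGLE
```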
Example 3
On the basis of embodiment 1, this embodiment further obtains the final motion vector by applying Kalman filtering to the motion vector filtered by the threshold range. Kalman filtering smooths the motion trajectory using the data of all previous frames, so the processed video frames are steadier and more natural while jitter is removed.
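A minimal sketch of such smoothing: a one-dimensional Kalman filter with a static state model, applied independently to the accumulated x, y, and θ trajectory. The noise covariances q and r and the toy trajectory are assumed values:

```python
# Sketch: scalar Kalman smoothing of the accumulated camera trajectory.
class Kalman1D:
    def __init__(self, q=4e-3, r=0.25):  # assumed process/measurement noise
        self.x, self.p, self.q, self.r = 0.0, 1.0, q, r

    def update(self, z):
        self.p += self.q                  # predict (static state model)
        k = self.p / (self.p + self.r)    # Kalman gain
        self.x += k * (z - self.x)        # correct with measurement z
        self.p *= (1.0 - k)
        return self.x

kx, ky, ka = Kalman1D(), Kalman1D(), Kalman1D()
raw = [(1.0, 0.5, 0.01), (2.2, 0.9, 0.02), (2.8, 1.6, 0.02)]  # toy (x, y, theta)
smoothed = [(kx.update(x), ky.update(y), ka.update(a)) for x, y, a in raw]
```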
Example 4
On the basis of embodiment 1, this embodiment further applies motion compensation to the final motion vector before entering the ninth step. The specific process is: taking the previous frame as the reference image, an affine transformation with the final motion vector is applied to the pixels of the current frame to determine the value of each pixel, thereby obtaining the compensated image.
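A minimal sketch of this compensation step with cv2.warpAffine, whose bilinear interpolation matches the compensation module described earlier; sx, sy, sa denote the final (smoothed) shift and rotation and are assumed inputs:

```python
# Sketch: motion compensation by affine warping with bilinear interpolation.
import cv2
import numpy as np

def compensate(frame, sx, sy, sa):
    """frame: image to warp; (sx, sy, sa): final shift and rotation."""
    M = np.array([[np.cos(sa), -np.sin(sa), sx],
                  [np.sin(sa),  np.cos(sa), sy]], dtype=np.float32)
    h, w = frame.shape[:2]
    return cv2.warpAffine(frame, M, (w, h), flags=cv2.INTER_LINEAR)
```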
Example 5
A real-time video image stabilization device comprises a memory: for storing executable instructions; and a processor: for executing the executable instructions stored in the memory to implement the real-time video image stabilization method.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may also be implemented in other ways; the apparatus embodiments described above are merely illustrative. The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods, and computer program products according to embodiments of the invention. Each block in a flowchart or block diagram may represent a module, segment, or portion of code comprising one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in a block may occur out of the order noted in the figures; for example, two blocks shown in succession may in fact be executed substantially concurrently, or sometimes in the reverse order, depending on the functionality involved. Each block of the block diagrams and/or flowcharts, and combinations of such blocks, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present invention may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention. It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (8)
1. A real-time video image stabilizing method is characterized in that: the method comprises the following steps:
s1, acquiring the previous frame and the current frame of the video, and converting them to grayscale;
s2, extracting feature points from the previous frame obtained in step S1 using the FAST method, and judging whether the number of feature points is smaller than a set feature point threshold; if yes, executing step S3, otherwise executing step S4;
s3, extracting feature points from the previous frame obtained in step S1 using the shi-Tomasi corner detection method, and judging whether the number of feature points is smaller than the set feature point threshold; if yes, skipping the frame, otherwise executing step S4;
s4, matching the feature points against the current frame, and judging whether the number of matching points is smaller than a set matching point threshold; if yes, skipping the frame, otherwise executing step S5;
s5, filtering out mismatched points, and judging whether the number of matching points is smaller than the set matching point threshold; if yes, skipping the frame, otherwise estimating the motion vector between the two frames from the matching points;
s6, transforming the previous frame according to the motion vector obtained in step S5 to serve as the stabilized current frame.
2. The method for stabilizing video images in real time according to claim 1, wherein: in the step S4, the optical flow method is adopted to match the characteristic points of the two frames of images.
3. The method for stabilizing video images in real time according to claim 1, wherein: in step S5, the motion vector between the two frames is estimated from the matching points using a combination of a non-singular linear transformation and a translation, expressed in matrix form as:

$$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} t_x \\ t_y \end{pmatrix}$$

where $(x', y')$ is the target pixel position, $(x, y)$ is the original pixel position, $a_{11}$, $a_{12}$, $a_{21}$ and $a_{22}$ encode the scaling and rotation, and $t_x$, $t_y$ are the displacements.
4. The method for stabilizing video images in real time according to claim 1, wherein: in step S5, the motion vector between the two frames is estimated from the matching points using a combination of a non-singular linear transformation and a translation, expressed in matrix form as:

$$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} t_x \\ t_y \end{pmatrix}$$

where $(x', y')$ is the target pixel position, $(x, y)$ is the original pixel position, $\theta$ is the rotation angle, and $t_x$, $t_y$ are the displacements.
5. The method for stabilizing video images in real time according to claim 1, wherein: the specific process of filtering mismatched points in step S5 is: during feature point matching, a feature point in the previous frame may have several successfully matched points in the current frame; the distance between the previous-frame feature point and each matched point in the current frame is computed, the match with the minimum distance is kept, and the other matches are filtered out.
6. The method for stabilizing video images in real time according to claim 5, wherein: the motion vector obtained in step S5 is filtered by a set motion vector threshold range to remove values outside the range, and the final motion vector is then obtained by Kalman filtering.
7. The method for stabilizing video images in real time according to claim 6, wherein: after motion compensation is applied to the final motion vector, step S6 is performed; the specific process is: taking the previous frame as the reference image, an affine transformation with the final motion vector is applied to the pixels of the current frame to determine the value of each pixel, thereby obtaining the compensated image.
8. A real-time video image stabilization device, characterized by comprising:
a memory: for storing executable instructions;
a processor: for executing the executable instructions stored in the memory to implement the real-time video image stabilization method according to any one of claims 1-7.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202210152446.0A | 2022-02-18 | 2022-02-18 | Real-time video image stabilizing method and device
Publications (1)

Publication Number | Publication Date
---|---
CN116347232A | 2023-06-27
Family ID: 86884591
Cited By (1)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN119889057A | 2025-03-25 | 2025-04-25 | 上海同陆云交通科技有限公司 | Traffic flow statistics and congestion analysis method and system based on video analysis
Legal Events

- PB01: Publication
- SE01: Entry into force of request for substantive examination