CN109146833A - Video image stitching method, apparatus, terminal device and storage medium - Google Patents
Video image stitching method, apparatus, terminal device and storage medium Download PDF Info
- Publication number
- CN109146833A (application number CN201810874779.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- characteristic point
- matched
- frame
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Abstract
The invention discloses a video image stitching method, apparatus, terminal device and storage medium. The method includes: extracting two adjacent video image frames from a video to be processed, and denoising the video image frames to obtain two specific video image frames; detecting the specific video image frames with an edge detection method to determine the matching region of the video image frames; finding and extracting the feature points of each image to be matched; screening the feature points with an optical flow method, and spatially transforming the two images to be matched according to the spatial transform relation of the screened feature-point pairs; calibrating the overlap region of the images and stitching them according to the overlap region to obtain the target video image. Relying on the images alone, the invention detects and locates the river with video image recognition technology and fuses the images with video image fusion technology, so that the non-water regions of the video images are stitched and fused automatically, accurately, quickly and efficiently.
Description
Technical field
The present invention relates to the technical field of video image processing, and in particular to a video image stitching method, apparatus, terminal device and storage medium.
Background art
Commonly used video image stitching methods fall into two families: region-based stitching and feature-based stitching. Region-based methods can be further divided into algorithms based on spatial pixel matching and algorithms based on the frequency domain; feature-based methods are distinguished mainly by the features they extract, such as the contour features of early work and the later SIFT, SURF and ORB features.
Among region-based methods, pixel-matching algorithms determine the change parameters between two images mainly from the gray-level relation between their pixels. Early matching methods translated one image over the other in the overlapping portion and then compared the degree of match, which requires testing every translation; the computation is very large, and neither rotation nor scale change can be handled. Multi-resolution matching based on a pyramid structure performs an optimized matching search and can cope with scale change to some extent, but it still handles rotation poorly. Frequency-domain stitching algorithms apply a two-dimensional discrete Fourier transform to both images and recover the spatial-domain correlation by inverse-transforming their correlation in the frequency domain.
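As a concrete illustration of the frequency-domain family described above, the translation between two images can be recovered by phase correlation: take 2-D DFTs of both images, normalize their cross-power spectrum, and inverse-transform to obtain a correlation peak at the shift. This is a minimal NumPy sketch of the general technique, not code from the patent; the function name and parameters are illustrative only.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Recover the (dy, dx) translation between two same-size images from
    the frequency domain via the normalized cross-power spectrum."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12            # normalize -> phase correlation
    corr = np.real(np.fft.ifft2(R))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return int(dy), int(dx)

rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = np.roll(a, shift=(5, 7), axis=(0, 1))   # b is a circularly shifted copy
print(phase_correlation_shift(b, a))        # (5, 7)
```

For real overlapping frames the peak is blurred rather than exact, which is one reason the patent turns to feature-based matching instead.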
Feature-based stitching methods do not use all of the information in an image: features are extracted first, and the change relation between two images is obtained by comparing their features. The contour extraction approach first enhances the image by convolution, detects the points where the pixel sign changes (zero crossings) as boundary points, and builds feature descriptions after the contours are extracted. SIFT (Scale-Invariant Feature Transform) first builds a Gaussian pyramid of the image to remove the influence of scale change, then performs feature detection in scale space and determines the orientation of each feature point from the gradient directions in its key-point neighborhood, thereby solving the scale and rotation problems. In feature-point-based stitching algorithms, the choice of feature has the greatest influence on stitching quality and speed; later features such as SURF and ORB put more emphasis on speed while guaranteeing a certain quality.
In the prior art, video image stitching generally depends on parsing UAV telemetry such as longitude and latitude, altitude, speed, pitch and roll, and the image matching operations must refer to these data. In actual flight, however, the telemetry may be out of sync with the video, or some UAV telemetry may be unavailable, so traditional video image stitching methods cannot achieve true real-time stitching. The prior art therefore struggles to reach a real-time effect when pursuing stitching accuracy, and fails to handle the above complex situations when pursuing real-time performance; it is difficult to take both accuracy and speed into account.
In researching and practicing the prior art, the inventors found that although technicians have improved and optimized video image stitching from various directions and to different degrees, current video image stitching methods still generally have the following problems:
(1) the features used in the prior art can hardly reach a real-time effect, and their performance on stitching continuity and scale change is poor;
(2) the prior art over-relies on GPS position information, which may cause stitching to fail.
Summary of the invention
The technical problem to be solved by the embodiments of the present invention is to provide a video image stitching method, apparatus, terminal device and storage medium that can rely on the images alone, detect and locate the river with video image recognition and view-transformation technology, and thereby stitch and fuse the images of the non-water regions of the video in a real-time, fast and stable automatic manner.
To solve the above problems, in one aspect, an embodiment of the present invention provides a video image stitching method, suitable for execution on a computing device, comprising:
extracting two adjacent video image frames from a video to be processed, and denoising the video image frames to obtain two specific video image frames;
detecting the specific video image frames with an edge detection method, determining the matching region of the video image frames, and taking the image of the matching region as the image to be matched;
performing a feature-point search on each of the two images to be matched, and extracting their respective feature points;
screening the feature points with an optical flow method to obtain feature-point pairs that meet the image stitching and fusion conditions;
selecting from the feature-point pairs a combination of three feature-point pairs, and spatially transforming the two images to be matched according to the spatial transform relation of the feature-point pairs in the combination;
calibrating the overlap region of the two spatially transformed images to be matched, and stitching the images according to the overlap region to obtain the target video image.
Further, extracting two adjacent video image frames from the video to be processed and denoising them to obtain two specific video frames is specifically:
separately calculating the sharpness and color smoothness of each video image frame in the video to be processed, and extracting, according to the sharpness and the color smoothness, the two adjacent video image frames that meet the stitching conditions of the video images;
dynamically selecting a specific filter according to the image definition of the video image frames, filtering, and binarizing to obtain the two specific video image frames, wherein the specific filter includes a Gaussian filter, a median filter and a bilateral filter.
Further, detecting the specific video image frames with the edge detection method, determining the matching region of the video image frames and taking the image of the matching region as the image to be matched is specifically:
finding all possible edges of the specific video image frames with the edge detection method, and determining the connected domains of the specific video image frames from all the possible edges;
determining the edge of the matching region from the connected domains, and further optimizing the matching region according to its edge;
taking the image of the optimized matching region as the image to be matched.
Further, performing a feature-point search on each of the two images to be matched and extracting the respective feature points is specifically:
searching the feature points of the two images to be matched with the ORB, AKAZE and BRISK algorithms, and extracting the respective ORB, AKAZE and BRISK feature points.
Further, screening the feature points with the optical flow method to obtain the feature-point pairs that meet the image stitching and fusion conditions is specifically:
extracting, with the optical flow method, the feature points of the following image to be matched that correspond to the feature points of the preceding image to be matched;
calculating the distance between each feature point of the preceding image and the corresponding feature point of the following image;
judging whether the distance is equal to or less than a preset threshold, and if so, retaining the pair as a feature-point pair that meets the image stitching and fusion conditions.
Further, the spatial transform relation includes a scale transform relation, a translation transform relation and a rotation transform relation; selecting from the feature-point pairs a combination of three feature-point pairs and spatially transforming the two images to be matched according to the spatial transform relation of the feature-point pairs in the combination is specifically:
scaling the two images to be matched according to the altitude information of the UAV at the different moments, i.e. unifying the two images to the same altitude;
translating the two images to be matched according to the position information of the matching region in the video image frames shot by the UAV at the different moments, i.e. unifying the position of the matching region of the two images within the image;
obtaining the rotation transform relation of the images from the matching properties of the feature points of the two images, further obtaining the spatial transform relation of the feature points of the two images, and spatially transforming the two images according to the spatial transform relation of the feature points.
In another aspect, an embodiment of the present invention further provides a video image stitching apparatus, comprising:
a preprocessing module for extracting two adjacent video image frames from the video to be processed and denoising them to obtain two specific video image frames;
a matching region determining module for detecting the specific video image frames with the edge detection method, determining the matching region of the video image frames, and taking the image of the matching region as the image to be matched;
a feature point detection module for performing a feature-point search on each of the two images to be matched and extracting the respective feature points;
a screening module for matching the feature points with a basic feature-point matching method to obtain basic feature-point pairs, and screening the basic feature-point pairs with the optical flow method to obtain the screened feature-point pairs;
a transform module for spatially transforming the two images to be matched according to the spatial transform relation of the feature-point pairs;
a stitching module for calibrating the overlap region of the two spatially transformed images to be matched, and stitching the images according to the overlap region to obtain the target video image.
Further, the screening module is specifically used to match the feature points with the basic feature-point matching method to obtain basic feature-point pairs, extract with the optical flow method the feature points of the following image to be matched that correspond to the feature points of the preceding image, then calculate the distance between each feature point of the preceding image and the corresponding feature point of the following image, and further judge whether the distance is equal to or less than a preset threshold; if so, the pair is retained as a feature-point pair that meets the image stitching and fusion conditions.
In yet another aspect, an embodiment of the present invention further provides a terminal device comprising a processor and a memory, the memory storing a computer program configured to be executed by the processor, the processor implementing the video image stitching method of any one of claims 1 to 6 when executing the computer program.
In yet another aspect, an embodiment of the present invention further provides a computer-readable storage medium comprising a stored computer program, wherein, when the computer program runs, the device on which the computer-readable storage medium is located is controlled to execute the video image stitching method of any one of claims 1 to 6.
The implementation of the embodiments of the present invention has the following beneficial effects: the invention can rely on the images alone, detect and locate the river with video image recognition technology, and fuse and stitch the images with video image fusion technology, so that the non-water regions of the video images are stitched and fused automatically, accurately, quickly and efficiently.
Brief description of the drawings
Fig. 1 is a flow diagram of the video image stitching method provided by an embodiment of the present invention;
Fig. 2 is another flow diagram of the video image stitching method provided by an embodiment of the present invention;
Fig. 3 is a detailed flow diagram of step S106 in Fig. 1;
Fig. 4 shows the video image stitching apparatus provided by another embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art, without creative effort, on the basis of the embodiments of the present invention shall fall within the protection scope of the present invention.
First embodiment of the invention:
Please refer to Figs. 1-3.
As shown in Figs. 1-2, the video image stitching method provided in this embodiment, suitable for execution on a computing device, includes at least the following steps.
S101: extract two adjacent video image frames from the video to be processed, and denoise the video image frames to obtain two specific video image frames.
Specifically, the sharpness and color smoothness of each video image frame in the video to be processed are calculated separately, and the two adjacent video image frames that meet the stitching conditions of the video images are extracted according to the sharpness and the color smoothness.
A specific filter is dynamically selected according to the image definition of the video image frames for filtering, and binarization is applied to obtain the two specific video image frames, wherein the specific filter includes a Gaussian filter, a median filter and a bilateral filter.
In this embodiment, taking the stitching of UAV river images as an example, the video to be processed is aerial river footage shot by a UAV.
It can be understood that binarization sets the gray value of each point of the image to 0 or 255, i.e. presents the whole image in an obvious black-and-white effect; the regions are generally defined by closed, connected, non-overlapping boundaries. Every pixel whose gray level is greater than or equal to the threshold is judged to belong to the object and its gray value is set to 255; the remaining pixels are excluded from the object region and their gray value is set to 0, indicating background or an exceptional region. After binarization, the two specific video image frames are black-and-white images.
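The binarization described above can be sketched in a few lines of NumPy; the threshold value of 128 here is an arbitrary example, not a value specified by the patent:

```python
import numpy as np

def binarize(gray, threshold=128):
    """Set pixels >= threshold to 255 (object), the rest to 0 (background)."""
    gray = np.asarray(gray)
    return np.where(gray >= threshold, 255, 0).astype(np.uint8)

frame = np.array([[ 12, 200],
                  [128,  90]], dtype=np.uint8)
print(binarize(frame))
# [[  0 255]
#  [255   0]]
```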
S102: detect the specific video image frames with the edge detection method, determine the matching region of the video image frames, and take the image of the matching region as the image to be matched.
Specifically, all possible edges of the specific video image frames are found with the edge detection method, and the connected domains of the specific video image frames are determined from all the possible edges.
The edge of the matching region is determined from the connected domains, and the matching region is further optimized according to its edge.
The image of the optimized matching region is taken as the image to be matched.
In this embodiment, taking the stitching of UAV river images as an example, edge detection is a basic problem of image processing and computer vision: it marks the points of a digital image where the brightness changes sharply, and can thereby detect all possible edges of the image. The matching region is the non-water-surface area in the UAV video, i.e. the land regions on the two sides of the river, which can be judged from the shape-similarity relation between the river water surface detected in the video image and the expected riverbank contour. Finally, the regions on the two sides of the river are optimized with the edge optimization method.
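A toy illustration of edge detection in the sense used here, assuming a plain Sobel operator (the patent does not name a specific edge detector): convolve with the two Sobel kernels and threshold the gradient magnitude to mark points of sharp brightness change.

```python
import numpy as np

def sobel_edges(gray, threshold=100):
    """Return a boolean map of edge points: Sobel gradient magnitude
    computed over every interior 3x3 patch, then thresholded."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    g = np.asarray(gray, dtype=float)
    h, w = g.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = g[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy) >= threshold

# A vertical step edge (dark left half, bright right half) is detected:
img = np.zeros((5, 6)); img[:, 3:] = 255
print(sobel_edges(img).any())  # True
```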
S103: perform a feature-point search on each of the two images to be matched, and extract the respective feature points.
Specifically, the feature points of the two images to be matched are searched with the ORB, AKAZE and BRISK algorithms, and the respective ORB, AKAZE and BRISK feature points are extracted.
In this embodiment, the ORB, AKAZE and BRISK algorithms keep good performance under Gaussian blur, rotation, scale change and brightness change, and their processing time is short, realizing an effective feature-point search and image stitching.
S104: screen the feature points with the optical flow method to obtain the feature-point pairs that meet the image stitching and fusion conditions.
Specifically, the feature points of the following image to be matched that correspond to the feature points of the preceding image are extracted with the optical flow method, and the distance between each feature point of the preceding image and the corresponding feature point of the following image is calculated.
Whether the distance is equal to or less than a preset threshold is then judged; if so, the pair is retained as a feature-point pair that meets the image stitching and fusion conditions.
In this embodiment, optical flow is a concept of object-motion detection in the visual field: it describes the motion of an observed object, surface or edge caused by motion relative to the observer. The feature-based optical flow method mainly locates and tracks the target features continuously, and is robust to the motion and brightness change of the target.
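The distance screening of step S104 amounts to a vector-norm threshold on the tracked positions. A minimal NumPy sketch, where the 30-pixel threshold is an assumed example value, not one given by the patent:

```python
import numpy as np

def screen_pairs(prev_pts, next_pts, threshold=30.0):
    """Keep only feature-point pairs whose displacement between the
    preceding and following frame is <= threshold."""
    prev_pts = np.asarray(prev_pts, dtype=float)
    next_pts = np.asarray(next_pts, dtype=float)
    dist = np.linalg.norm(next_pts - prev_pts, axis=1)
    keep = dist <= threshold
    return prev_pts[keep], next_pts[keep]

prev_pts = [[0, 0], [10, 10], [50, 50]]
next_pts = [[3, 4], [200, 10], [55, 50]]   # second pair moved too far
kept_prev, kept_next = screen_pairs(prev_pts, next_pts)
print(len(kept_prev))  # 2
```

In practice the `next_pts` positions would come from a tracker such as pyramidal Lucas-Kanade rather than being given by hand.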
S105: select from the feature-point pairs a combination of three feature-point pairs, and spatially transform the two images to be matched according to the spatial transform relation of the feature-point pairs in the combination.
The spatial transform relation includes a scale transform relation, a translation transform relation and a rotation transform relation. Specifically, the two images to be matched are scaled according to the altitude information of the UAV at the different moments, i.e. unified to the same altitude. The two images to be matched are then translated according to the position information of the matching region in the video image frames shot by the UAV at the different moments, i.e. the position of the matching region in the two images is unified; the offset originates from the lateral movement of the UAV between the moments. The rotation transform relation of the images is obtained from the matching properties of the feature points of the two images; from it the spatial transform relation of the feature-point pairs of the two images is further obtained, and the two images are spatially transformed according to that spatial transform relation.
In this embodiment, the three selected feature pairs have the property of similar triangles, and the line connecting any two of the three points is neither horizontal nor vertical, which eliminates the deformation error caused by the UAV shooting angle. Spatially transforming the two images to be matched strengthens the descriptive and discriminative capacity for the image content and makes stitching and fusion easier.
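Three non-degenerate point pairs determine a 2x3 affine matrix that carries the scale, rotation and translation components at once. A NumPy sketch under the assumption that the three source points are not collinear (in practice OpenCV's `cv2.getAffineTransform` solves the same 3-point system); the example values are illustrative:

```python
import numpy as np

def affine_from_three_pairs(src, dst):
    """Solve the 2x3 affine matrix A such that A @ [x, y, 1] maps each
    of the three src points onto its dst partner."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    M = np.hstack([src, np.ones((3, 1))])   # rows [x, y, 1]
    # Solve M @ X = dst for both output coordinates at once; A = X.T.
    return np.linalg.solve(M, dst).T        # shape (2, 3)

# Three pairs related by a pure translation of (+5, -2); no connecting
# line is horizontal or vertical, per the constraint described above.
src = [[0, 1], [9, 4], [3, 10]]
dst = [[5, -1], [14, 2], [8, 8]]
A = affine_from_three_pairs(src, dst)
print(np.round(A, 6))  # recovers x' = x + 5, y' = y - 2
```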
S106: calibrate the overlap region of the two spatially transformed images to be matched, and fuse and stitch the images according to the overlap region to obtain the target video image.
In this embodiment, the spatial transformation parameters of the feature points are obtained from the feature-point pairs, the two images to be matched are numbered according to the spatial transformation parameters, and the feature-point pairs of the images are matched according to the numbering. The overlap region is the area defined by the upper and lower boundaries of the feature points. Image fusion stitching means that the image data about the same target collected through multi-source channels is processed by image processing and computer technology to extract the favorable information of each channel to the greatest extent, and is finally synthesized into a high-quality image; this improves the utilization rate of the image information, improves the precision and reliability of computer interpretation, raises the spatial and spectral resolution of the original image, and benefits monitoring.
In this embodiment, applying the additive fusion method to the overlap region stitches the two pictures together better than the usual cut-and-join method. The stitching trace of the resulting target image is inconspicuous, with stronger accuracy and compatibility.
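The additive fusion mentioned above can be illustrated as a cross-fade over the overlap columns. This is a simplified NumPy sketch assuming a purely horizontal overlap of known width; the patent itself defines the overlap by feature-point boundaries rather than a fixed column count:

```python
import numpy as np

def additive_blend(left, right, overlap):
    """Fuse two images that share `overlap` columns: outside the overlap
    the pixels are copied; inside it they are cross-faded with linearly
    ramping weights instead of a hard cut."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    h = left.shape[0]
    out_w = left.shape[1] + right.shape[1] - overlap
    out = np.zeros((h, out_w))
    out[:, :left.shape[1] - overlap] = left[:, :-overlap]
    out[:, left.shape[1]:] = right[:, overlap:]
    w = np.linspace(0, 1, overlap)        # 0 -> pure left, 1 -> pure right
    out[:, left.shape[1] - overlap:left.shape[1]] = (
        (1 - w) * left[:, -overlap:] + w * right[:, :overlap]
    )
    return out

left = np.full((2, 4), 100.0)
right = np.full((2, 4), 200.0)
print(additive_blend(left, right, overlap=2))
# each row ramps 100 -> 200 across the overlap instead of jumping
```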
The video image stitching method provided by this embodiment extracts two adjacent video image frames from the video to be processed and denoises them to obtain two specific video image frames; detects the specific video image frames with the edge detection method, determines the matching region of the video image frames, takes the image of the matching region as the image to be matched and extracts its feature points; spatially transforms the two images to be matched and matches their feature points according to the spatial transform relation of the feature points, calibrating the overlap region of the two images according to the matching result; and stitches and fuses the images according to the overlap region to obtain the target video image. Relying on the images alone, the invention detects and locates the river with video image recognition technology and fuses the images with video image fusion technology, so that the non-water regions of the video images are stitched and fused automatically, accurately, quickly and efficiently.
Second embodiment of the invention:
Please refer to Fig. 4.
As shown in Fig. 4, the video image stitching apparatus provided in this embodiment comprises:
a preprocessing module 201 for extracting two adjacent video image frames from the video to be processed and denoising them to obtain two specific video image frames.
Specifically, the sharpness and color smoothness of each video image frame in the video to be processed are calculated separately, and the two adjacent video image frames that meet the stitching conditions of the video images are extracted according to the sharpness and the color smoothness.
A specific filter is dynamically selected according to the image definition of the video image frames for filtering, and binarization is applied to obtain the two specific video image frames, wherein the specific filter includes a Gaussian filter, a median filter and a bilateral filter.
In this embodiment, taking the stitching of UAV river images as an example, the video to be processed is aerial river footage shot by a UAV.
It can be understood that binarization sets the gray value of every pixel of the image to either 0 or 255, so that the whole image shows a clear black-and-white effect, generally yielding closed, connected regions that do not overlap. Every pixel whose gray value is greater than or equal to the threshold is judged to belong to an object and its gray value is set to 255; the remaining pixels are excluded from the object region and set to 0, representing background or exceptional areas. After binarization, the two specific video image frames are black-and-white grayscale images.
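The thresholding step described above can be sketched as follows. This is a minimal illustration; the function name and the threshold value of 128 are assumptions, since the patent does not fix a specific threshold:

```python
import numpy as np

def binarize(gray: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Set every pixel whose gray value is at or above `threshold` to 255
    (object) and every other pixel to 0 (background), producing the
    black-and-white frame described in the text."""
    out = np.zeros_like(gray, dtype=np.uint8)
    out[gray >= threshold] = 255
    return out
```

The same result is obtained with `cv2.threshold(gray, threshold - 1, 255, cv2.THRESH_BINARY)` when OpenCV is available.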
Matching area determining module 202, configured to detect the specific video image frames with an edge detection method, determine the matching region of the video image frames, and take the image of the matching region as the image to be matched.
Specifically, all probable edges of the specific video image frame are found with the edge detection method, and the connected domains of the specific video image frame are determined from all probable edges.
The edge of the matching region is determined from the connected domains, and the matching region is then further optimized according to its edge.
The image of the optimized matching region is taken as the image to be matched.
In this embodiment, again taking UAV river images as the example, edge detection is a basic problem in image processing and computer vision: it marks the points of a digital image where brightness changes sharply, and can therefore detect all probable edges of the image. The matching region is the non-water region of the UAV video, i.e. the land on both sides of the river; it can be identified from the shape similarity between the water surface detected in the video frames and the expected riverbank profile. Finally, the regions on both banks are refined with an edge optimization method.
Characteristic point searching module 203, configured to perform feature point searches on the two frames to be matched and to extract the feature points of each frame.
Specifically, the feature points of the two frames to be matched are searched with the ORB, AKAZE and BRISK algorithms, and the ORB, AKAZE and BRISK feature points of each frame are extracted.
In this embodiment, the ORB, AKAZE and BRISK algorithms all maintain good performance under Gaussian blur, rotation, scale change and brightness change, and their processing time is short, enabling effective feature point search and image stitching.
Screening module 204, configured to screen the feature points with an optical flow method to obtain the feature point pairs that satisfy the image stitching and fusion condition.
Specifically: the optical flow method extracts, for each feature point of the previous frame to be matched, the corresponding feature point of the next frame to be matched, and the distance between each feature point of the previous frame and its corresponding feature point in the next frame is calculated.
Whether the distance is equal to or less than a preset threshold is judged; if so, the pair is kept by the screening as a feature point pair that satisfies the image stitching and fusion condition.
In this embodiment, optical flow is a concept from motion detection in the visual field: it describes the apparent motion of an observed object, surface or edge caused by the motion of the observer. A feature-based optical flow method continuously locates and tracks the target features, and is robust to target motion and brightness changes.
Conversion module 205, configured to select a combination of three feature point pairs from the feature point pairs, and to apply a spatial transformation to the two frames to be matched according to the spatial transform relation of the feature point pairs in that combination.
The spatial transform relation includes a scale transformation, a translation transformation and a rotation transformation.
Specifically, the two frames to be matched are scaled according to the altitude of the UAV at the two moments, i.e. brought to the same height. The two frames are translated according to the position of the matching region in the video image frames shot by the UAV at the two moments, i.e. aligned on the position of the matching region within the image; the lateral drift of the UAV between the two moments is the source of this offset. The rotation transformation of the images is obtained from the matching properties of the feature points of the two frames. Together these yield the spatial transform relation of the feature point pairs of the two frames, according to which the spatial transformation is applied to the two frames.
In this embodiment, the three selected feature point pairs form similar triangles, and the line through any two of the three points is neither horizontal nor vertical, which eliminates the deformation error introduced by the UAV's shooting angle. Applying the spatial transformation to the two frames strengthens the descriptive and discriminative power over the image content and makes stitching and fusion easier.
Splicing module 206, configured to calibrate the overlapping region of the two transformed frames to be matched and to stitch the images according to the overlapping region, obtaining the target video image.
The screening module is specifically configured to match the feature points with a basic feature point matching method to obtain basic feature point pairs; the optical flow method then extracts, for each feature point of the previous frame to be matched, the corresponding feature point of the next frame, the distance between the two is calculated, and whether the distance is equal to or less than the preset threshold is judged; if so, the pair is kept by the screening as a feature point pair that satisfies the image stitching and fusion condition.
In this embodiment, the overlapping region is the area delimited by the upper and lower boundaries of the feature points. Image fusion combines image data about the same target collected from multiple source channels, using image processing and computer technology to extract the most useful information from each channel and integrate it into a single high-quality image; this improves the utilization of the image information, improves the precision and reliability of computer interpretation, raises the spatial and spectral resolution of the original images, and benefits monitoring.
In this embodiment, the additive fusion of the overlapping region stitches the two images together better than the usual cut-and-join method: the stitching trace in the resulting target image is unobtrusive, and the method has stronger accuracy and compatibility.
In this embodiment, as shown in Fig. 3, the previous frame matches three feature point pairs and the next frame matches three feature point pairs. The spatial transformation applied to the feature point pairs comprises a scale transformation, a rotation transformation and a translation transformation. The transformed images are then stitched and fused; specifically, the overlapping region of the images to be matched is calibrated, the overlapping region being the area delimited by the boundaries of the feature points.
Further, the overlapping region is stitched and fused with an image fusion method.
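A minimal sketch of additive (weighted) fusion over a calibrated overlap, as opposed to a hard cut along a seam. The column-range interface and function name are assumptions standing in for the feature-point-delimited boundary:

```python
import numpy as np

def fuse_overlap(img_a, img_b, x0, x1):
    """Blend two aligned grayscale frames over the overlap columns [x0, x1):
    the weight shifts linearly from frame A to frame B across the overlap,
    so no hard stitching trace remains."""
    out = img_a.astype(np.float32).copy()
    width = max(x1 - x0, 1)
    for x in range(x0, x1):
        w = (x - x0) / width          # 0 at A's edge, approaching 1 at B's
        out[:, x] = (1.0 - w) * img_a[:, x] + w * img_b[:, x]
    out[:, x1:] = img_b[:, x1:]       # past the overlap, frame B only
    return out.astype(np.uint8)
```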
The apparatus for stitching video images provided in this embodiment identifies and locates the river with video image recognition and video image view transformation, and can stitch the non-water images of the video quickly, stably and automatically.
An embodiment of the present invention further provides a terminal device comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor; when the processor executes the computer program, the method for stitching video images described above is implemented.
An embodiment of the present invention further provides a computer-readable storage medium comprising a stored computer program; when the computer program runs, the device on which the computer-readable storage medium resides is controlled to execute the method for stitching video images described above.
The above are preferred embodiments of the present invention. It should be noted that those skilled in the art can make several improvements and variations without departing from the principle of the present invention, and such improvements and variations are also regarded as falling within the protection scope of the present invention.
Those of ordinary skill in the art will understand that all or part of the processes of the above method embodiments can be implemented by a computer program instructing the relevant hardware. The program can be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The storage medium can be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
Claims (10)
1. A method for stitching video images, adapted to be executed in a computing device, comprising:
extracting two adjacent video image frames from a video to be processed and denoising the video image frames to obtain two specific video image frames;
detecting the specific video image frames with an edge detection method, determining the matching region of the video image frames, and taking the image of the matching region as the image to be matched;
performing feature point searches on the two frames to be matched and extracting the feature points of each frame;
screening the feature points with an optical flow method to obtain the feature point pairs that satisfy the image stitching and fusion condition;
selecting a combination of three feature point pairs from the feature point pairs, and applying a spatial transformation to the two frames to be matched according to the spatial transform relation of the feature point pairs in the combination;
calibrating the overlapping region of the two transformed frames to be matched, and stitching the images according to the overlapping region to obtain the target video image.
2. The method for stitching video images according to claim 1, wherein extracting two adjacent video image frames from a video to be processed and denoising the video frames to obtain two specific video frames specifically comprises:
calculating the sharpness and color smoothness of each video image frame in the video to be processed, and extracting, according to the sharpness and the color smoothness, the two adjacent video image frames that satisfy the stitching condition;
dynamically selecting a specific filter according to the definition of the video image frame, filtering and then binarizing the frame, and obtaining two specific video image frames; wherein the specific filter comprises a Gaussian filter, a median filter and a bilateral filter.
3. The method for stitching video images according to claim 1, wherein detecting the specific video image frames with an edge detection method, determining the matching region of the video image frames, and taking the image of the matching region as the image to be matched specifically comprises:
finding all probable edges of the specific video image frame with the edge detection method, and determining the connected domains of the specific video image frame from all probable edges;
determining the edge of the matching region from the connected domains, and further optimizing the matching region according to the edge of the matching region;
taking the image of the optimized matching region as the image to be matched.
4. The method for stitching video images according to claim 1, wherein performing feature point searches on the two frames to be matched and extracting the feature points of each frame specifically comprises:
searching the feature points of the two frames to be matched with the ORB, AKAZE and BRISK algorithms, and extracting the ORB, AKAZE and BRISK feature points of each frame.
5. The method for stitching video images according to claim 1, wherein screening the feature points with an optical flow method to obtain the feature point pairs that satisfy the image stitching and fusion condition specifically comprises:
extracting, with the optical flow method, the feature point of the next frame to be matched corresponding to each feature point of the previous frame to be matched;
calculating the distance between each feature point of the previous frame and the corresponding feature point of the next frame;
judging whether the distance is equal to or less than a preset threshold, and if so, keeping the pair as a feature point pair that satisfies the image stitching and fusion condition.
6. The method for stitching video images according to claim 1, wherein the spatial transform relation comprises a scale transformation, a translation transformation and a rotation transformation; and
selecting a combination of three feature point pairs from the feature point pairs and applying a spatial transformation to the two frames to be matched according to the spatial transform relation of the feature point pairs in the combination specifically comprises:
scaling the two frames to be matched according to the altitude of the unmanned aerial vehicle at the two moments, i.e. bringing the two frames to the same height;
translating the two frames according to the position of the matching region in the video image frames shot by the unmanned aerial vehicle at the two moments, i.e. aligning the two frames on the position of the matching region within the image;
obtaining the rotation transformation of the images from the matching properties of the feature point pairs of the two frames, thereby obtaining the spatial transform relation of the feature points of the two frames, and applying the spatial transformation to the two frames according to that spatial transform relation.
7. An apparatus for stitching video images, comprising:
a preprocessing module, configured to extract two adjacent video image frames from a video to be processed and to denoise the video image frames, obtaining two specific video image frames;
a matching area determining module, configured to detect the specific video image frames with an edge detection method, determine the matching region of the video image frames, and take the image of the matching region as the image to be matched;
a characteristic point detection module, configured to perform feature point searches on the two frames to be matched and extract the feature points of each frame;
a screening module, configured to match the feature points with a basic feature point matching method to obtain basic feature point pairs, and to screen the basic feature point pairs with an optical flow method to obtain the screened feature point pairs;
a conversion module, configured to select a combination of three feature point pairs from the feature point pairs and apply a spatial transformation to the two frames to be matched according to the spatial transform relation of the feature point pairs in the combination;
a splicing module, configured to calibrate the overlapping region of the two transformed frames to be matched and to stitch the images according to the overlapping region, obtaining the target video image.
8. The apparatus for stitching video images according to claim 7, wherein
the screening module is specifically configured to match the feature points with the basic feature point matching method to obtain basic feature point pairs, extract with the optical flow method the feature point of the next frame to be matched corresponding to each feature point of the previous frame to be matched, calculate the distance between each feature point of the previous frame and the corresponding feature point of the next frame, and judge whether the distance is equal to or less than a preset threshold; if so, the pair is kept as a feature point pair that satisfies the image stitching and fusion condition.
9. A terminal device, comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, wherein the processor, when executing the computer program, implements the method for stitching video images according to any one of claims 1 to 6.
10. A computer-readable storage medium comprising a stored computer program, wherein, when the computer program runs, the device on which the computer-readable storage medium resides is controlled to execute the method for stitching video images according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810874779.8A CN109146833A (en) | 2018-08-02 | 2018-08-02 | A kind of joining method of video image, device, terminal device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810874779.8A CN109146833A (en) | 2018-08-02 | 2018-08-02 | A kind of joining method of video image, device, terminal device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109146833A true CN109146833A (en) | 2019-01-04 |
Family
ID=64791406
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810874779.8A Pending CN109146833A (en) | 2018-08-02 | 2018-08-02 | A kind of joining method of video image, device, terminal device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109146833A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050013507A1 (en) * | 2003-07-15 | 2005-01-20 | Samsung Electronics Co., Ltd. | Apparatus for and method of constructing multi-view face database, and apparatus for and method of generating multi-view face descriptor |
CN101504761A (en) * | 2009-01-21 | 2009-08-12 | 北京中星微电子有限公司 | Image splicing method and apparatus |
CN103745449A (en) * | 2013-12-24 | 2014-04-23 | 南京理工大学 | Rapid and automatic mosaic technology of aerial video in search and tracking system |
CN104134200A (en) * | 2014-06-27 | 2014-11-05 | 河海大学 | Mobile scene image splicing method based on improved weighted fusion |
CN104301630A (en) * | 2014-09-10 | 2015-01-21 | 天津航天中为数据系统科技有限公司 | A video image splicing method and device |
CN105787870A (en) * | 2016-02-21 | 2016-07-20 | 郑州财经学院 | Graphic image splicing fusion system |
- 2018-08-02 CN CN201810874779.8A patent/CN109146833A/en active Pending
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109859104A (en) * | 2019-01-19 | 2019-06-07 | 创新奇智(重庆)科技有限公司 | A kind of video generates method, computer-readable medium and the converting system of picture |
CN109948602A (en) * | 2019-01-21 | 2019-06-28 | 创新奇智(南京)科技有限公司 | A kind of method, computer-readable medium and identifying system identifying commodity |
CN109948602B (en) * | 2019-01-21 | 2023-03-03 | 创新奇智(南京)科技有限公司 | Method for identifying commodity, computer readable medium and identification system |
CN110264406A (en) * | 2019-05-07 | 2019-09-20 | 威盛电子股份有限公司 | The method of image processing apparatus and image procossing |
CN110264406B (en) * | 2019-05-07 | 2023-04-07 | 威盛电子(深圳)有限公司 | Image processing apparatus and image processing method |
CN112884817B (en) * | 2019-11-29 | 2022-08-02 | 中移物联网有限公司 | Dense optical flow calculation method, device, electronic device and storage medium |
CN112884817A (en) * | 2019-11-29 | 2021-06-01 | 中移物联网有限公司 | Dense optical flow calculation method, dense optical flow calculation device, electronic device, and storage medium |
CN111062984A (en) * | 2019-12-20 | 2020-04-24 | 广州市鑫广飞信息科技有限公司 | Method, device and equipment for measuring area of video image region and storage medium |
CN111062984B (en) * | 2019-12-20 | 2024-03-15 | 广州市鑫广飞信息科技有限公司 | Method, device, equipment and storage medium for measuring area of video image area |
CN111639658A (en) * | 2020-06-03 | 2020-09-08 | 北京维盛泰科科技有限公司 | Method and device for detecting and eliminating dynamic characteristic points in image matching |
CN111723713B (en) * | 2020-06-09 | 2022-10-28 | 上海合合信息科技股份有限公司 | Video key frame extraction method and system based on optical flow method |
CN111723713A (en) * | 2020-06-09 | 2020-09-29 | 上海合合信息科技股份有限公司 | Video key frame extraction method and system based on optical flow method |
CN111915587A (en) * | 2020-07-30 | 2020-11-10 | 北京大米科技有限公司 | Video processing method, video processing device, storage medium and electronic equipment |
CN111915587B (en) * | 2020-07-30 | 2024-02-02 | 北京大米科技有限公司 | Video processing method, device, storage medium and electronic equipment |
CN112614051A (en) * | 2020-12-08 | 2021-04-06 | 上海裕芮信息技术有限公司 | Building facade image splicing method, system, equipment and storage medium |
CN112906710A (en) * | 2021-03-26 | 2021-06-04 | 北京邮电大学 | Visual image feature extraction method based on BAKAZE-MAGSAC |
CN114418839A (en) * | 2021-12-09 | 2022-04-29 | 浙江大华技术股份有限公司 | Image stitching method, electronic device, and computer-readable storage medium |
WO2023237095A1 (en) * | 2022-06-09 | 2023-12-14 | 咪咕视讯科技有限公司 | Video synthesis method based on surround angle of view, and controller and storage medium |
CN119737929A (en) * | 2025-02-26 | 2025-04-01 | 洛阳润海建筑工程有限公司 | Municipal engineering mapping device and mapping method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109146833A (en) | A kind of joining method of video image, device, terminal device and storage medium | |
CN115439424B (en) | Intelligent detection method for aerial video images of unmanned aerial vehicle | |
Han et al. | KCPNet: Knowledge-driven context perception networks for ship detection in infrared imagery | |
CN112215925B (en) | Adaptive coal mining machine tracking multi-camera video stitching method | |
GB2569751A (en) | Static infrared thermal image processing-based underground pipe leakage detection method | |
CN109410207A (en) | A kind of unmanned plane line walking image transmission line faultlocating method based on NCC feature | |
CN109146832B (en) | Video image splicing method and device, terminal equipment and storage medium | |
CN110349207A (en) | A kind of vision positioning method under complex environment | |
CN111275696A (en) | A kind of medical image processing method, image processing method and device | |
CN111597930A (en) | Coastline extraction method based on remote sensing cloud platform | |
CN107194866B (en) | Image fusion method for reducing spliced image dislocation | |
Lipschutz et al. | New methods for horizon line detection in infrared and visible sea images | |
CN103903237A (en) | Dual-frequency identification sonar image sequence splicing method | |
Zhang et al. | Robust visual odometry in underwater environment | |
CN110021029A (en) | A kind of real-time dynamic registration method and storage medium suitable for RGBD-SLAM | |
CN106650663A (en) | Building true/false change judgement method and false change removal method comprising building true/false change judgement method | |
WO2024198528A1 (en) | Target tracking method and system based on direction feature driving | |
CN114359149B (en) | Dam bank dangerous case video detection method and system based on real-time image edge enhancement | |
CN106991682B (en) | Automatic port cargo ship extraction method and device | |
CN109299655A (en) | An online rapid identification method of marine oil spill based on UAV | |
CN106169086B (en) | A method for extracting damaged roads from high-resolution optical images with the aid of navigation data | |
CN113204986A (en) | Moving target detection method suitable for unmanned aerial vehicle | |
CN117455952A (en) | An optical image ship target positioning and tracking method | |
CN115035281B (en) | Rapid infrared panoramic image stitching method | |
CN106558065A (en) | The real-time vision tracking to target is realized based on color of image and texture analysiss |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190104 |