CN112215304A - Gray level image matching method and device for geographic image splicing - Google Patents
Gray level image matching method and device for geographic image splicing
- Publication number
- CN112215304A (application number CN202011226012.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- geographic
- matching
- geographic image
- data string
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
Abstract
The invention discloses a gray-scale image matching method and device for geographic image splicing. The method comprises: performing polar coordinate conversion on a first geographic image to obtain a first data string that uniquely represents the first geographic image; acquiring the contour of the target pattern in a second geographic image as a search area; selecting, within the target pattern search area, a matching area of the same size as the first geographic image and performing polar coordinate conversion on it to obtain a second data string that uniquely identifies the matching area; calculating the similarity between the first data string and the second data string; and, once every matching area has completed the similarity calculation, selecting the matching area with the greatest similarity as the target pattern and acquiring its position and direction. Preprocessing narrows the matching search range and increases the matching speed; combining the pixel gray values in a fixed order yields a compact, unique and accurate data string for each image, which improves both matching speed and accuracy.
Description
Technical Field
The invention relates to the field of image matching, in particular to a gray level image matching method and device for geographic image splicing.
Background
Image matching technology, one of the most important technologies in modern information processing, is applied in an ever-widening range of fields, including resource analysis, weather forecasting, medical diagnosis, industrial manufacturing, security monitoring, intelligent transportation and the like. Image matching refers to identifying homonymous (corresponding) points between two or more images through a matching algorithm; for example, in two-dimensional image matching, the correlation coefficients of equal-sized windows in the target area and the search area are compared, and the centre of the window with the maximum correlation coefficient in the search area is taken as the homonymous point. In essence, matching is an optimal-search problem over a matching criterion under the condition of primitive similarity. Gray-scale image matching is now widely used in everyday life and work, but traditional matching methods are time-consuming and cannot keep pace with today's rapid technological development, and they also suffer from low matching accuracy and are easily disturbed.
Existing image matching methods fall mainly into two categories: gray-scale-based matching and feature-based matching. They can only handle simple cases, in which the geometric differences (translation, scale, rotation and the like) and the radiometric differences between the reference image and the image to be matched are not too large. In most cases, however, factors such as shooting angle and weather give images captured by an unmanned aerial vehicle large translation, scale and rotation differences as well as large radiometric differences, so the prior art cannot match such images accurately. In addition, the traditional image matching method matches the target detection image against the full image to be detected, which makes matching slow.
Disclosure of Invention
Therefore, the technical problem to be solved by the present invention is to overcome the defect that the prior art cannot quickly and accurately match images with large geometric differences (translation, scale, rotation and the like) and large radiometric differences, and to provide a grayscale image matching method and device for geographic image stitching.
According to a first aspect, an embodiment of the present invention provides a grayscale image matching method for geographic image stitching, including:
performing polar coordinate conversion on the first geographic image, and converting the converted first geographic image data into a first data string which is used for uniquely representing the first geographic image;
repeatedly executing the following steps until the similarity calculation is completed for all the matching areas: acquiring the outline of the target pattern in the second geographic image as a target pattern search area; selecting a matching area with the same size as the first geographic image in the target pattern search area, performing polar coordinate conversion on the matching area, and converting the converted image data of the matching area into a second data string for uniquely identifying the matching area; calculating the similarity of the first data string and the second data string;
and selecting the matching area with the maximum similarity as a target pattern, and acquiring the position and the direction of the target pattern.
In the gray-scale image matching method for geographic image splicing provided by the embodiment of the invention, the first image is converted into polar coordinates and the converted first geographic image data is turned into a first data string that uniquely represents the geographic image. Because the data is reduced to a uniquely corresponding data string, the first image carries a uniquely determined identification code through the subsequent processing, which helps to improve both the speed and the accuracy of the matching calculation. The same conversion is used for the second geographic image, ensuring that the data strings uniquely representing each geographic image belong to the same system and that no error is introduced during similarity matching. In practice the image to be matched often covers a large area; taking the contour of the target pattern in the second geographic image as the target search area shrinks the area to be matched, accelerates matching, and saves resources and time. Selecting matching areas of the same size as the first geographic image within the target pattern search area guarantees strong correlation between the data strings: strings of the same size produced by the same conversion are consistent and systematic, providing more accurate and comparable data for the subsequent similarity calculation.
With reference to the first aspect, in a first embodiment of the first aspect, before polar coordinate transformation is performed on the first geographic image, geometric correction is further performed on the first geographic image and the second geographic image, comprising the following steps: preprocessing the first geographic image and the second geographic image, wherein the preprocessing is a geometric fine correction of the two images that includes: determining a mathematical model between the coordinates of the first and second geographic images and the ground coordinates according to their imaging modes; calculating transformation parameters by adjustment according to the ground control points and the corresponding image point coordinates, and evaluating the precision; and performing a geometric transformation calculation on the first and second geographic images and resampling the pixel gray values.
In the gray-scale image matching method for geographic image splicing provided by the embodiment of the invention, a mathematical model between the image coordinates and the ground coordinates is determined for the images to be matched from their imaging mode; transformation parameters are then computed by least-squares adjustment from the ground control points and the corresponding image point coordinates, and the precision is evaluated. Because the mathematical model and the parameters determined by the imaging mode are evaluated for precision, the error of the geometric transformation of the geographic images is minimized, so the matching result obtained when the geographic images are matched is more accurate. The imaging mode of the image to be matched includes the angle and position information at which the unmanned aerial vehicle captured it, and the like.
With reference to the first aspect, in a second implementation form of the first aspect, the converting the converted first geographic image data into a data string uniquely representing the first geographic image includes: carrying out graying processing on the first geographic image to obtain a first grayscale image; acquiring the gray value of each pixel point from the first gray image; and performing combined calculation on all gray values according to the sequence of the pixel points in the first gray image to obtain the first data string.
In this gray-scale image matching method for geographic image splicing, the first geographic image is grayed to obtain a first gray-scale image, the gray value of every pixel is read from that image, and all gray values are combined, in the order of the pixels, into the data string. Different images yield different gray-scale images after graying, with different gray values and pixel orders, so the data string obtained by recombining all gray values of the gray-scale image in pixel order serves as a unique identifier of the image; it represents the image more simply and more accurately and makes the similarity calculation faster and more accurate. The pixel order may follow the row-and-column arrangement or any other fixed, unchanging order.
With reference to the first aspect, in a third implementation form of the first aspect, the converting the converted first geographic image data into a first data string uniquely representing the first geographic image includes: carrying out graying processing on the first geographic image to obtain a first grayscale image; acquiring the gray value of each pixel point from the first gray image; taking pixel points with preset unit quantity as a unit, and calculating the gray value of the pixel points in each unit to obtain a corresponding numerical value; and combining the numerical values corresponding to each unit according to the arrangement sequence of the units to obtain a multi-dimensional array as the first data string.
In this gray-scale image matching method for geographic image splicing, the first geographic image is grayed to obtain a first gray-scale image and its pixel values; the gray-scale image is divided into units of a preset number of pixels, the gray values of each unit are calculated to obtain one corresponding value, and the unit values are combined in the arrangement order of the units to obtain a multi-dimensional array used as the first data string. In this process the gray values of the first gray-scale image are expressed as data, and the data string serves as the unique representation, and hence the unambiguous identity, of the first geographic image. Calculating the first gray-scale image unit by unit rather than pixel by pixel reduces the amount of data and speeds up matching. Because the first data string uniquely represents the first geographic image, applying the same processing to the other geographic images in the subsequent calculation yields their uniquely representative data strings, so the similarity results are more accurate.
According to the first aspect, in a fourth implementation form of the first aspect, the acquiring a contour of the target pattern in the second geographic image includes: setting a corresponding gray threshold to binarize the second geographic image so that it becomes black and white, performing connected-domain analysis on the binarized second geographic image by a pixel-labelling or run-length connectivity analysis method, and calculating the contour of the target pattern.
In the gray-scale image matching method for geographic image splicing provided by the embodiment of the invention, binarization simplifies the image, reduces the data volume, and highlights the contour of the target of interest so that the image can be processed further. The contour of the target pattern is then obtained by connected-domain analysis and used as the target pattern search area, which greatly narrows the search range, raises the search speed and saves time.
According to the first aspect, in a fifth implementation form of the first aspect, the target pattern search area is the range on the image, determined from prior knowledge and constraint conditions, in which conjugate points of the region to be matched may exist.
In embodiments of the invention, the prior knowledge and constraint conditions determine the possible range of the conjugate points of the region to be matched from the shooting mode, from invariant parameters of the shooting device, or from a fixed salient feature in the captured image. For example, in some scenarios the shooting angle does not change, and the gray-scale image obtained by graying highlights a region of interest that can be used as the target search area; alternatively, the range in which the pixel corresponding to a given point to be matched may exist is used as the target search area.
According to the first aspect, in a sixth implementation form of the first aspect, the first geographic image and the second geographic image are images of a geographic area captured by an unmanned aerial vehicle.
According to a second aspect, an embodiment of the present invention provides a grayscale image matching apparatus for geographic image stitching, including:
the conversion module is used for carrying out polar coordinate conversion on the first geographic image and converting the converted first geographic image data into a first data string which is used for uniquely representing the first geographic image;
the similarity calculation module is used for repeatedly executing the following steps until the similarity calculation is completed for all the matching areas: acquiring the outline of the target pattern in the second geographic image as a target pattern search area; selecting a matching area with the same size as the first geographic image in the target pattern search area, performing polar coordinate conversion on the matching area, and converting the converted image data of the matching area into a second data string for uniquely identifying the matching area; calculating the similarity of the first data string and the second data string;
and the acquisition module is used for selecting the matching area with the maximum similarity as a target pattern and acquiring the position and the direction of the target pattern.
In the gray-scale image matching device for geographic image splicing provided by the embodiment of the invention, and in view of the fact that images captured by an unmanned aerial vehicle exhibit geometric differences such as translation, scale and rotation as well as large radiometric differences, the search area of the pattern in the target image is determined by a preprocessing step of binarization and connected-domain analysis, which narrows the matching search range and increases the matching speed. The data string uniquely representing the image is then obtained by polar coordinate conversion and graying followed by combining the gray values in pixel order, which makes the data compact, unique and accurate and improves both the speed and the accuracy of matching.
According to a third aspect, the invention provides an electronic device comprising: the device comprises a memory and a processor, wherein the memory and the processor are mutually connected in a communication manner, the memory stores computer instructions, and the processor executes the computer instructions so as to execute the grayscale image matching method for the geographic image splicing.
According to a fourth aspect, the present invention provides a computer-readable storage medium storing computer instructions for causing a computer to execute the above grayscale image matching method for geographic image stitching.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of a grayscale image matching method for geographic image stitching according to embodiment 1 of the present invention;
fig. 2 is a flowchart illustrating a process of preprocessing a geographic image according to embodiment 1 of the present invention;
fig. 3 is a flowchart of calculating a data string by combining pixel point sequences in embodiment 1 of the present invention;
fig. 4 is a flowchart of calculating a data string by using a pixel unit arrangement order in embodiment 1 of the present invention;
fig. 5 is a schematic diagram of a grayscale image matching device for geographic image stitching according to embodiment 2 of the present invention;
fig. 6 is a schematic diagram of a hardware structure of an electronic device according to embodiment 3 of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; the two elements may be directly connected or indirectly connected through an intermediate medium, or may be communicated with each other inside the two elements, or may be wirelessly connected or wired connected. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Example 1
As shown in fig. 1, the present invention provides a grayscale image matching method for geographic image stitching, which includes the following steps:
S1, performing polar coordinate conversion on the first geographic image, and converting the converted first geographic image data into a first data string which is used for uniquely representing the first geographic image;
in the embodiment of the invention, the first geographic image is converted into polar coordinates with a normalized conversion calculation, and the result is converted into a first data string that uniquely represents the geographic image. Converting the first geographic image data, with all its complex interfering factors, into a simple first data string reduces the amount of data that must be matched and gives the image a unique identity, which further improves matching speed and accuracy.
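The patent itself gives no source code; the following is a minimal sketch of the polar coordinate conversion in this step, assuming OpenCV is available and that the conversion is taken about the image centre with a linear radial mapping (the helper name to_polar and these parameter choices are illustrative, not taken from the patent).

```python
import cv2
import numpy as np

def to_polar(image: np.ndarray) -> np.ndarray:
    """Map an image into polar coordinates about its centre.

    After this mapping, a rotation of the input becomes a circular shift
    along the angle axis of the output, which is what makes the later
    data-string comparison tolerant of rotation.
    """
    h, w = image.shape[:2]
    center = (w / 2.0, h / 2.0)
    max_radius = min(center)              # largest radius fully inside the image
    dsize = (w, h)                        # keep the original resolution
    return cv2.warpPolar(image, dsize, center, max_radius,
                         cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR)
```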
S2, acquiring the outline of the target pattern in the second geographic image as a target pattern search area;
in the embodiment of the invention, binarizing the second geographic image highlights the contour of the target of interest, which facilitates further processing of the image. The contour of the target pattern is then obtained by connected-domain analysis and used as the target pattern search area, greatly narrowing the search range, raising the search speed and saving time.
S3, selecting a matching area with the same size as the first geographic image in the target pattern search area, performing polar coordinate conversion on the matching area, and converting the converted image data of the matching area into a second data string for uniquely identifying the matching area;
in the embodiment of the invention, the target pattern search area is divided into matching areas of the same size as the first geographic image, i.e. each matching area has the same length and width as the first geographic image. The matching areas cut from the target pattern search area must match the length and width of the first geographic image so that, after polar coordinate conversion, the data strings obtained from the image data of the matching areas have consistent parameters. The second data string is computed by the same method as the first data string. Representing the image data of each matching area by a data string that uniquely identifies it simplifies the data and improves matching speed and accuracy.
S4, calculating the similarity of the first data string and the second data string;
in the embodiment of the present invention, the calculation methods for the first data string and the second data string must always be consistent. For example, if the first data string is obtained by combining, in order, the gray values of the first gray-scale image produced by graying the first image, the second data string must be computed by the same method; this keeps the parameters consistent, strengthens the association between the data strings, and makes the matching more accurate.
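The embodiment does not prescribe a particular similarity formula; a normalized correlation coefficient between the two data strings is one common choice and is sketched below, under the assumption that both strings have the same length, which step S3 guarantees by using matching areas of the same size as the first geographic image.

```python
import numpy as np

def string_similarity(first: np.ndarray, second: np.ndarray) -> float:
    """Normalized correlation between two data strings of equal length.

    Returns a value in [-1, 1]; larger means more similar.
    """
    a = first.astype(np.float64).ravel()
    b = second.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```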
S5, judging whether all the matching areas have completed the similarity calculation; if not, returning to S2 and repeating steps S2 to S5; if yes, executing step S6.
And S6, selecting the matching area with the maximum similarity as a target pattern, and acquiring the position and the direction of the target pattern.
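Putting steps S2 to S6 together, a minimal search loop might look like the sketch below, reusing the to_polar and string_similarity helpers sketched above; the stride, the window construction and the restriction to grayscale inputs are assumptions made for illustration rather than details fixed by the patent.

```python
import numpy as np

def find_target(first_gray: np.ndarray, search_region: np.ndarray, stride: int = 4):
    """Slide windows of the template's size over the search region and keep the best match.

    first_gray    : grayscale first geographic image (the template)
    search_region : grayscale target pattern search area cut from the second image
    Returns the top-left position of the best-matching window and its score.
    """
    th, tw = first_gray.shape[:2]
    first_string = to_polar(first_gray).ravel()          # data string of the template
    best_score, best_pos = -1.0, None
    h, w = search_region.shape[:2]
    for y in range(0, h - th + 1, stride):
        for x in range(0, w - tw + 1, stride):
            window = search_region[y:y + th, x:x + tw]
            window_string = to_polar(window).ravel()     # same conversion as the template
            score = string_similarity(first_string, window_string)
            if score > best_score:
                best_score, best_pos = score, (x, y)
    return best_pos, best_score
```

Recovering the target direction is omitted from the sketch: with a polar representation it amounts to finding the circular shift along the angle axis that maximizes the correlation.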
Optionally, as shown in fig. 2, before the polar coordinate transformation is performed on the first geographic image, preprocessing is further performed on the first geographic image and the second geographic image, that is, performing geometric fine correction on the first geographic image and the second geographic image, including the following steps:
S11, determining a mathematical model between the coordinates of the first geographical image and the second geographical image and the ground coordinates according to the imaging mode of the first geographical image and the second geographical image;
S12, calculating a transformation parameter by adjustment according to the ground control point and the corresponding image point coordinate, and evaluating the precision;
and S13, performing geometric transformation calculation on the first geographic image and the second geographic image, and resampling the pixel gray scale.
Specifically, because of the shooting angle and related factors, the coordinate system of an image captured by an unmanned aerial vehicle deviates from the ground coordinates, whereas the ground provides a relatively fixed reference coordinate system. A mathematical model between the two coordinate systems is therefore determined from the ground coordinates and the angle at which the unmanned aerial vehicle shot the image; transformation parameters are computed by least-squares adjustment from the coordinates of fixed points on the ground and the coordinates of those points in the image, and the precision of the alignment between the image coordinates and the ground coordinates is evaluated. The first geographic image and the second geographic image are then geometrically transformed with these parameters, and the transformed images are resampled in gray level to obtain the gray values of the corrected geographic images.
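As a concrete illustration of steps S11 to S13, the sketch below fits an affine model between image and ground coordinates by least-squares adjustment over the control points, reports the RMS residual as a simple precision figure, and resamples the gray values with bilinear interpolation; the affine model and the output size are assumptions, since the patent only requires a mathematical model between image and ground coordinates determined by the imaging mode.

```python
import cv2
import numpy as np

def geometric_fine_correction(image, image_points, ground_points):
    """Least-squares geometric correction from ground control points.

    image_points  : Nx2 control-point coordinates measured in the image
    ground_points : Nx2 coordinates of the same points in the ground system
    Returns the corrected image and the RMS residual of the control points
    as a simple precision figure.
    """
    src = np.asarray(image_points, dtype=np.float64)
    dst = np.asarray(ground_points, dtype=np.float64)
    # adjustment: least-squares solve for the 6 affine parameters
    A = np.hstack([src, np.ones((len(src), 1))])          # N x 3
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)           # 3 x 2 parameter matrix
    rms = float(np.sqrt(np.mean(np.sum((A @ M - dst) ** 2, axis=1))))
    h, w = image.shape[:2]
    # geometric transformation plus bilinear gray-level resampling
    corrected = cv2.warpAffine(image, M.T.astype(np.float32), (w, h),
                               flags=cv2.INTER_LINEAR)
    return corrected, rms
```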
Optionally, as shown in fig. 3, the converting the converted first geographical image data into a first data string uniquely representing the first geographical image includes the following steps:
S101, performing graying processing on the first geographic image to obtain a first grayscale image;
S102, acquiring the gray value of each pixel point from the first gray image;
S103, performing combined calculation on all gray values according to the sequence of the pixel points in the first gray image to obtain the first data string.
Specifically, in the embodiment of the present invention, the first geographic image is grayed to obtain a first gray-scale image, the gray value of each pixel is extracted, and all gray values are combined in the order of the pixels in the first gray-scale image to obtain a first data string that uniquely corresponds to and represents the first geographic image. Using the first data string as the unique identifier of the first geographic image reduces the probability of errors in subsequent matching; because it is built by combining the gray values in order, it also carries the image data of the first geographic image. Expressing the geographic image as a data string simplifies the geographic image data and speeds up subsequent matching. The gray-value order on which the data string is based may be a row-and-column ordering or any other fixed ordering. In the embodiment of the invention, the data strings of all geographic images must be computed by the same method so that the data remain related and systematic, which keeps errors low in the subsequent similarity calculation and thus guarantees matching accuracy.
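A minimal sketch of this pixel-order variant, assuming a row-major ordering and an OpenCV BGR input (both illustrative choices), is:

```python
import cv2
import numpy as np

def image_to_data_string(image: np.ndarray) -> np.ndarray:
    """Gray the image and combine its gray values in a fixed row-major order."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if image.ndim == 3 else image
    return gray.ravel()        # 1-D data string; pixel order = row by row, column by column
```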
Preferably, as shown in fig. 4, the another method for converting the converted first geographical image data into the first data string uniquely representing the first geographical image includes the following steps:
S201, performing graying processing on the first geographic image to obtain a first grayscale image;
S202, acquiring the gray value of each pixel point from the first gray image;
S203, taking the pixel points with the preset unit quantity as a unit, and calculating the gray value of the pixel points in each unit to obtain a corresponding numerical value;
and S204, combining the numerical values corresponding to each unit according to the arrangement sequence of the units to obtain a multi-dimensional array as the first data string.
Specifically, in this scheme pixels are grouped into units of a preset number, the gray values within each unit are reduced to one value, and the unit values are combined in the arrangement order of the units into a multi-dimensional array used as the data string; this is the preferred variant of building the data string from all pixel gray values in pixel order. Treating a preset number of pixels as one unit further reduces the amount of data and accelerates matching. Likewise, if the first geographic image is processed by this scheme to obtain the first data string, the second image must also be processed by this scheme to obtain the second data string; the arrangement order may follow rows and columns or any other fixed order.
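A minimal sketch of this unit-based variant, assuming square units and the mean as the per-unit calculation (the patent leaves the exact per-unit formula open), is:

```python
import cv2
import numpy as np

def image_to_unit_string(image: np.ndarray, unit: int = 8) -> np.ndarray:
    """Reduce each unit of `unit x unit` pixels to one value and keep the unit order."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if image.ndim == 3 else image
    h, w = gray.shape
    h, w = h - h % unit, w - w % unit                     # crop so the image tiles exactly
    units = gray[:h, :w].astype(np.float64)
    units = units.reshape(h // unit, unit, w // unit, unit)
    return units.mean(axis=(1, 3))                        # one value per unit, order preserved
```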
Optionally, acquiring the contour of the target pattern in the second geographic image includes: setting a corresponding gray threshold to binarize the second geographic image so that it becomes black and white, performing connected-domain analysis on the binarized second geographic image by a pixel-labelling or run-length connectivity analysis method, and calculating the contour of the target pattern.
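A minimal sketch of this contour-extraction step, assuming a fixed gray threshold and taking the largest external contour as the target pattern (both illustrative choices, not fixed by the patent), is:

```python
import cv2
import numpy as np

def target_contour(image: np.ndarray, threshold: int = 128):
    """Binarize the image and return the outline of the largest connected region."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if image.ndim == 3 else image
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return max(contours, key=cv2.contourArea)             # contour of the target pattern
```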
Optionally, the target pattern search area is the range on the image, determined from prior knowledge and constraint conditions, in which conjugate points of the region to be matched may exist.
Specifically, the prior knowledge and constraint conditions determine the possible range of the region to be matched from the shooting mode or from invariant parameters of the shooting device. For example, in some scenarios the shooting angle does not change, and the region of interest highlighted by the gray values is used as the target search area, or the range in which the pixel corresponding to a given point to be matched may exist is used as the target search area.
Optionally, the first geographic image and the second geographic image are images of a geographic area captured by an unmanned aerial vehicle.
Example 2
The present embodiment provides a grayscale image matching device for geographic image stitching, as shown in fig. 5, including:
301, a conversion module, configured to perform polar coordinate conversion on a first geographic image, and convert the converted first geographic image data into a first data string uniquely representing the first geographic image;
302, a similarity calculation module, configured to repeatedly perform the following steps until similarity calculation is completed for all matching regions: acquiring the outline of the target pattern in the second geographic image as a target pattern search area; selecting a matching area with the same size as the first geographic image in the target pattern search area, performing polar coordinate conversion on the matching area, and converting the converted image data of the matching area into a second data string for uniquely identifying the matching area; calculating the similarity of the first data string and the second data string;
303, an obtaining module, configured to select a matching area with the largest similarity as a target pattern, and obtain a position and a direction of the target pattern.
In the gray-scale image matching device for geographic image splicing provided by the embodiment of the invention, and in view of the fact that images captured by an unmanned aerial vehicle exhibit geometric differences such as translation, scale and rotation as well as large radiometric differences, the search area of the pattern in the target image is determined by a preprocessing step of binarization and connected-domain analysis, which narrows the matching search range and increases the matching speed. The data string uniquely representing the image is then obtained by polar coordinate conversion and graying followed by combining the gray values in pixel order, which makes the data compact, unique and accurate and improves both the speed and the accuracy of matching.
Example 3
An electronic device is provided in the embodiment of the present invention, as shown in fig. 6, the electronic device includes a processor 21 and a memory 22, where the processor 21 and the memory 22 may be connected by a bus or other means.
The processor 21 may be a Central Processing Unit (CPU). The processor 21 may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or a combination of such chips.
The memory 22 is a non-transitory computer readable storage medium, and can be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the grayscale image matching method or apparatus for geographic image stitching according to the embodiments of the present invention. The processor 21 executes various functional applications and data processing of the processor by running the non-transitory software programs, instructions and modules stored in the memory 22, that is, the grayscale image matching method for geographic image stitching in the above method embodiment is implemented.
The memory 22 may further include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by the processor 21, and the like. Further, the memory 22 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 22 may optionally include memory located remotely from the processor 21, and these remote memories may be connected to the processor 21 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The memory 22 stores one or more modules that, when executed by the processor 21, perform the grayscale image matching method for geographic image stitching in the embodiment shown in fig. 1.
Details of the electronic device can be understood by referring to the corresponding descriptions and effects of the embodiments shown in figs. 1 to 5, and are not repeated here.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic Disk, an optical Disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory (Flash Memory), a Hard Disk (Hard Disk Drive, abbreviated as HDD), a Solid State Drive (SSD), or the like; the storage medium may also comprise a combination of memories of the kind described above.
It should be understood that the above examples are only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. And are neither required nor exhaustive of all embodiments. And obvious variations or modifications therefrom are within the scope of the invention.
Claims (10)
1. A gray image matching method for geographic image splicing is characterized by comprising the following steps:
performing polar coordinate conversion on the first geographic image, and converting the converted first geographic image data into a first data string which is used for uniquely representing the first geographic image;
repeatedly executing the following steps until the similarity calculation is completed for all the matching areas: acquiring the outline of the target pattern in the second geographic image as a target pattern search area; selecting a matching area with the same size as the first geographic image in the target pattern search area, performing polar coordinate conversion on the matching area, and converting the converted image data of the matching area into a second data string for uniquely identifying the matching area; calculating the similarity of the first data string and the second data string;
and selecting the matching area with the maximum similarity as a target pattern, and acquiring the position and the direction of the target pattern.
2. The method of claim 1, further comprising, prior to polar transforming the first geographic imagery:
preprocessing the first geographic image and the second geographic image, wherein the preprocessing is to perform geometric fine correction on the first geographic image and the second geographic image, and the geometric fine correction comprises the following steps: determining a mathematical model between the coordinates of the first geographical image and the second geographical image and the ground coordinates according to the imaging mode of the first geographical image and the second geographical image; calculating a transformation parameter by adjustment according to the ground control point and the corresponding image point coordinate, and evaluating the precision; and performing geometric transformation calculation on the first geographic image and the second geographic image, and resampling the pixel gray scale.
3. The method of claim 1, wherein converting the converted first geographic image data into a first data string uniquely representing the first geographic image comprises:
carrying out graying processing on the first geographic image to obtain a first grayscale image;
acquiring the gray value of each pixel point from the first gray image;
and performing combined calculation on all gray values according to the sequence of the pixel points in the first gray image to obtain the first data string.
4. The method of claim 1, wherein converting the converted first geographic image data into a first data string uniquely representing the first geographic image comprises:
carrying out graying processing on the first geographic image to obtain a first grayscale image;
acquiring the gray value of each pixel point from the first gray image;
taking pixel points with preset unit quantity as a unit, and calculating the gray value of the pixel points in each unit to obtain a corresponding numerical value;
and combining the numerical values corresponding to each unit according to the arrangement sequence of the units to obtain a multi-dimensional array as the first data string.
5. The method of claim 1, wherein the obtaining the contour of the target pattern in the second geographic image comprises:
setting a corresponding gray threshold value to binarize the second geographic image, changing the binarized second geographic image into black and white, performing connected domain analysis on the binarized second geographic image by adopting a pixel marking or run connectivity analysis method, and calculating the outline of the target pattern.
6. The method of claim 1, wherein the target pattern search area is based on determining a range of conjugate points of the region to be matched existing on the image according to a priori knowledge and constraint conditions.
7. The method of claim 1, wherein the first and second geographic imagery are imagery of a geographic area taken with a drone.
8. A grayscale image matching apparatus for geographic image stitching, comprising:
the conversion module is used for carrying out polar coordinate conversion on the first geographic image and converting the converted first geographic image data into a first data string which is used for uniquely representing the first geographic image;
the similarity calculation module is used for repeatedly executing the following steps until the similarity calculation is completed for all the matching areas: acquiring the outline of the target pattern in the second geographic image as a target pattern search area; selecting a matching area with the same size as the first geographic image in the target pattern search area, performing polar coordinate conversion on the matching area, and converting the converted image data of the matching area into a second data string for uniquely identifying the matching area; calculating the similarity of the first data string and the second data string;
and the acquisition module is used for selecting the matching area with the maximum similarity as a target pattern and acquiring the position and the direction of the target pattern.
9. An electronic device, comprising:
a memory and a processor, the memory and the processor are connected with each other in communication, the memory stores computer instructions, the processor executes the computer instructions to execute the grayscale image matching method for geographical image stitching according to any one of claims 1-7.
10. A computer-readable storage medium storing computer instructions for causing a computer to execute the grayscale image matching method for geographical image stitching according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011226012.8A CN112215304A (en) | 2020-11-05 | 2020-11-05 | Gray level image matching method and device for geographic image splicing |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011226012.8A CN112215304A (en) | 2020-11-05 | 2020-11-05 | Gray level image matching method and device for geographic image splicing |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112215304A (en) | 2021-01-12 |
Family
ID=74058389
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011226012.8A Pending CN112215304A (en) | 2020-11-05 | 2020-11-05 | Gray level image matching method and device for geographic image splicing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112215304A (en) |
- 2020-11-05: CN application CN202011226012.8A filed (publication CN112215304A, status: pending)
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1926007A2 (en) * | 2006-09-05 | 2008-05-28 | Honeywell International, Inc. | Method and system for navigation of an unmanned aerial vehicle in an urban environment |
CN101339658A (en) * | 2008-08-12 | 2009-01-07 | 北京航空航天大学 | A Fast and Robust Registration Method for Aerial Traffic Video |
CN101950419B (en) * | 2010-08-26 | 2012-09-05 | 西安理工大学 | Quick image rectification method in presence of translation and rotation at same time |
CN103020945A (en) * | 2011-09-21 | 2013-04-03 | 中国科学院电子学研究所 | Remote sensing image registration method of multi-source sensor |
CN102982515A (en) * | 2012-10-23 | 2013-03-20 | 中国电子科技集团公司第二十七研究所 | Method of unmanned plane image real-time splicing |
CN103593838A (en) * | 2013-08-01 | 2014-02-19 | 华中科技大学 | Rapid cross-correlation grey-scale image coupling method and rapid cross-correlation grey-scale image coupling device |
CN103679674A (en) * | 2013-11-29 | 2014-03-26 | 航天恒星科技有限公司 | Method and system for splicing images of unmanned aircrafts in real time |
CN106023086A (en) * | 2016-07-06 | 2016-10-12 | 中国电子科技集团公司第二十八研究所 | Aerial photography image and geographical data splicing method based on ORB feature matching |
CN106898017A (en) * | 2017-02-27 | 2017-06-27 | 网易(杭州)网络有限公司 | Method, device and terminal device for recognizing image local area |
CN107808362A (en) * | 2017-11-15 | 2018-03-16 | 北京工业大学 | A kind of image split-joint method combined based on unmanned plane POS information with image SURF features |
CN108765298A (en) * | 2018-06-15 | 2018-11-06 | 中国科学院遥感与数字地球研究所 | Unmanned plane image split-joint method based on three-dimensional reconstruction and system |
CN110033411A (en) * | 2019-04-12 | 2019-07-19 | 哈尔滨工业大学 | The efficient joining method of highway construction scene panoramic picture based on unmanned plane |
CN110310248A (en) * | 2019-08-27 | 2019-10-08 | 成都数之联科技有限公司 | A kind of real-time joining method of unmanned aerial vehicle remote sensing images and system |
CN111583110A (en) * | 2020-04-24 | 2020-08-25 | 华南理工大学 | Splicing method of aerial images |
Non-Patent Citations (3)
Title |
---|
LIU Caiyun et al.: "A rotation image registration method using adaptive polar coordinate transformation", Journal of Yangzhou University (Natural Science Edition) * |
ZHANG Tingting (ed.): "Introduction to Remote Sensing Technology", Yellow River Water Conservancy Press, 31 July 2011 * |
DU Gang: "A processing method for UAV aerial images", Science and Technology Review * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112330727A (en) * | 2020-11-02 | 2021-02-05 | 珠海大横琴科技发展有限公司 | Image matching method and device, computer equipment and storage medium |
CN113611650A (en) * | 2021-03-19 | 2021-11-05 | 联芯集成电路制造(厦门)有限公司 | Method for aligning wafer pattern |
US11692946B2 (en) | 2021-03-19 | 2023-07-04 | United Semiconductor (Xiamen) Co., Ltd. | Method for aligning to a pattern on a wafer |
CN113611650B (en) * | 2021-03-19 | 2024-02-27 | 联芯集成电路制造(厦门)有限公司 | Method for aligning wafer pattern |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210112 |