CN117456542B - Image matching method, device, electronic equipment and storage medium - Google Patents
Info
- Publication number
- CN117456542B CN117456542B CN202311801766.5A CN202311801766A CN117456542B CN 117456542 B CN117456542 B CN 117456542B CN 202311801766 A CN202311801766 A CN 202311801766A CN 117456542 B CN117456542 B CN 117456542B
- Authority
- CN
- China
- Prior art keywords
- character
- image
- representation information
- structure representation
- region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/19007—Matching; Proximity measures
- G06V30/19013—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
- G06V30/1902—Shifting or otherwise transforming the patterns to accommodate for positional errors
- G06V30/19067—Matching configurations of points or features, e.g. constellation matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/19007—Matching; Proximity measures
- G06V30/19093—Proximity measures, i.e. similarity or distance measures
Abstract
The embodiment of the application provides an image matching method, an image matching device, electronic equipment and a storage medium. The method comprises the following steps: acquiring a first character image, wherein the first character image comprises at least one first character area, and each first character area corresponds to a single first character; acquiring first structure representation information corresponding to a first character image, wherein the first structure representation information comprises character representation information corresponding to each first character area and/or character relation information between adjacent first character areas; and determining a first matching result of whether the first character image and the second character image are matched by using the first structure representation information and the second structure representation information, wherein the second structure representation information is generated based on a second character area in the second character image. The scheme is beneficial to simplifying the calculation process of image matching and improving the accuracy of image matching.
Description
Technical Field
The present application relates to the field of image matching technology, and more particularly, to an image matching method, an image matching apparatus, an electronic device, and a storage medium.
Background
Image matching refers to methods for finding the same image target by analyzing the similarity and consistency of correspondences in image content, features, structures, relationships, textures, gray levels, and the like. At present, image matching is mainly performed by extracting features from an entire image and then comparing the features of two images to determine the matching result between them. This processing is complex, and the reliability of the matching result is poor.
Disclosure of Invention
The present application has been made in view of the above-described problems. The application provides an image matching method, an image matching device, an electronic device and a storage medium.
According to an aspect of the present application, there is provided an image matching method including: acquiring a first character image, wherein the first character image comprises at least one first character area, and each first character area corresponds to a single first character; acquiring first structure representation information corresponding to a first character image, wherein the first structure representation information comprises character representation information corresponding to each first character area and/or character relation information between adjacent first character areas; and determining a first matching result of whether the first character image and the second character image are matched by using the first structure representation information and the second structure representation information, wherein the second structure representation information is generated based on a second character area in the second character image.
In the above technical solution, the first matching result between the first character image and the second character image may be determined more accurately by using the first structure representation information corresponding to the first character image and the second structure representation information corresponding to the second character image. The scheme is beneficial to simplifying the calculation process of image matching and improving the accuracy of image matching.
Illustratively, the character representation information includes a location and/or size of each character area; the character relation information comprises a distance between adjacent character areas and/or an angle between connecting lines, wherein the connecting lines are connecting lines between the adjacent character areas.
According to the technical scheme, the position and/or the size of each character area are used as character representing information, and the distance between adjacent character areas and/or the angle between connecting lines are used as character relation information, so that the structure representing information corresponding to the character image can be accurately represented, and an accurate basis can be provided for image matching in the subsequent step.
Illustratively, acquiring first structure representation information corresponding to a first character image includes: character detection is performed on the first character image to obtain character representation information of each first character region, and first structure representation information is determined based on the character representation information.
In the above technical solution, the first structure representation information is determined by using the character representation information of each first character area, which is helpful to accurately obtain the first structure representation information of the first character image, so as to help to provide more accurate basis for the subsequent steps.
Illustratively, determining whether the first character image matches the second character image using the first structure representation information and the second structure representation information includes: drawing a first structure representation image corresponding to the first structure representation information and a second structure representation image corresponding to the second structure representation information, respectively; calculating a first similarity between the first structure representation image and the second structure representation image; and when the first similarity is greater than a first similarity threshold, determining that the first matching result indicates that the first structure representation information matches the second structure representation information, and otherwise determining that the first matching result indicates that they do not match.
According to the technical scheme, the first similarity between the first structural representation image and the second structural representation image is calculated, so that whether the first structural representation image is matched with the second structural representation image or not can be accurately determined based on the first similarity, and the first matching result can be accurately determined. According to the scheme, the matching calculation among the character geometric information is converted into the matching calculation among the images, so that the image matching calculation step is simplified, and the accuracy of the image matching calculation is improved.
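The patent does not fix a particular similarity formula for comparing the two structure representation images. A minimal sketch of one plausible measure — intersection-over-union of the foreground pixels of two equal-sized binary structure representation images — is shown below; the threshold value is an assumption for illustration only:

```python
def structure_similarity(img1, img2):
    # img1, img2: equal-sized 2-D lists of 0/1 pixels (binary structure
    # representation images). One possible first-similarity measure:
    # intersection-over-union of the foreground (character/line) pixels.
    inter = union = 0
    for row1, row2 in zip(img1, img2):
        for p1, p2 in zip(row1, row2):
            if p1 and p2:
                inter += 1
            if p1 or p2:
                union += 1
    return inter / union if union else 1.0

FIRST_SIMILARITY_THRESHOLD = 0.8  # assumed value; the patent leaves it unspecified

a = [[0, 1, 1], [0, 1, 0], [0, 0, 0]]
b = [[0, 1, 1], [0, 0, 0], [0, 0, 0]]
match = structure_similarity(a, b) > FIRST_SIMILARITY_THRESHOLD  # 2/3 overlap -> no match
```

Any other image-similarity measure (e.g. pixel-agreement ratio or template correlation) could be substituted without changing the overall scheme.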
Illustratively, drawing the first structure representation image corresponding to the first structure representation information and the second structure representation image corresponding to the second structure representation information, respectively, includes: for the first structure representation image: creating a first preset background image whose size is consistent with that of the first character image; determining, using the first structure representation information, first character mapping areas within the first preset background image and first connection lines between the first character mapping areas, wherein the position of each first character mapping area on the first preset background image is consistent with the position of the corresponding first character area on the first character image; and setting the pixel values of the pixels of the first character mapping areas and the first connection lines to a first pixel value, and setting the pixel values in the regions of the first preset background image other than the first character mapping areas and the first connection lines to a second pixel value, thereby generating the first structure representation image corresponding to the first structure representation information. For the second structure representation image: creating a second preset background image whose size is consistent with that of the second character image; determining, using the second structure representation information, second character mapping areas within the second preset background image and second connection lines between the second character mapping areas, wherein the position of each second character mapping area on the second preset background image is consistent with the position of the corresponding second character area on the second character image; and setting the pixel values of the pixels of the second character mapping areas and the second connection lines to the first pixel value, and setting the pixel values in the regions of the second preset background image other than the second character mapping areas and the second connection lines to the second pixel value, thereby generating the second structure representation image corresponding to the second structure representation information.
According to the technical scheme, the first structure representation image and the second structure representation image can be accurately drawn, so that accurate basis is provided for the determination of the first matching result in the subsequent step.
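The drawing procedure above can be sketched as follows. This is an illustrative rasterizer, not the patent's implementation: it assumes axis-aligned character mapping areas given as `(x, y, w, h)` rectangles, connection lines given as endpoint pairs (sampled point by point), and first/second pixel values of 255/0:

```python
def draw_structure_image(size, boxes, lines, fg=255, bg=0):
    """Rasterize a structure representation image.

    size:  (width, height) of the preset background image
    boxes: character mapping areas as (x, y, w, h) rectangles
    lines: connection lines as ((x0, y0), (x1, y1)) endpoint pairs
    fg/bg: first and second pixel values
    """
    w, h = size
    img = [[bg] * w for _ in range(h)]            # preset background image
    for x, y, bw, bh in boxes:                    # fill character mapping areas
        for yy in range(y, min(y + bh, h)):
            for xx in range(x, min(x + bw, w)):
                img[yy][xx] = fg
    for (x0, y0), (x1, y1) in lines:              # draw connection lines by sampling
        steps = max(abs(x1 - x0), abs(y1 - y0), 1)
        for i in range(steps + 1):
            xx = round(x0 + (x1 - x0) * i / steps)
            yy = round(y0 + (y1 - y0) * i / steps)
            if 0 <= xx < w and 0 <= yy < h:
                img[yy][xx] = fg
    return img

img = draw_structure_image((8, 8), boxes=[(0, 0, 2, 2), (6, 6, 2, 2)],
                           lines=[((1, 1), (6, 6))])
```

In practice a drawing library (e.g. Pillow's `ImageDraw`) would serve the same purpose; the point is only that regions and connection lines become foreground pixels on an otherwise uniform background.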
Illustratively, determining a first character mapping region within the first preset background image and a first connection line between the first character mapping regions using the first structure representation information includes: determining a first connection line between the first character mapping areas on the first preset background image based on a specific connection line setting rule; the specific connection line setting rule is used for specifying a distance range in which a distance between two character mapping areas connected by the connection line falls and/or a line width of the connection line; determining, using the second structural representation information, a second character mapping region within a second preset background image and a second connection line between the second character mapping regions, comprising: and determining second connection lines between the second character mapping areas on the second preset background image based on the specific connection line setting rule.
According to the technical scheme, the connecting lines can be generated based on the specific connecting line setting rules, so that the character relation information in the corresponding structure representation image can be accurately represented. The scheme is favorable for providing accurate basis for the determination of the first matching result in the subsequent step.
Illustratively, the specific connection line setting rule is related to the resolution, size and aspect ratio of the first character image/second character image, and to the number of the corresponding first character areas and/or second character areas.
According to the technical scheme, the specific connection line setting rules are set according to the resolution, the size and the length-width ratio of the character image and the number of the corresponding character areas, so that connection lines meeting requirements can be generated in the structural representation image, and the accuracy of the first matching result can be guaranteed.
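As one hypothetical illustration of such a rule — the patent does not give a concrete formula — the line width could grow with image resolution and shrink when many character areas would make thick lines overlap:

```python
def connection_line_width(image_size, num_regions):
    # Assumed heuristic, not specified by the patent: wider connection lines
    # for higher-resolution images, thinner lines when many character areas
    # are present, with a minimum width of 1 pixel.
    w, h = image_size
    base = max(1, min(w, h) // 256)
    return max(1, base - (1 if num_regions > 10 else 0))
```

Any monotone rule of this shape would satisfy the stated dependence on resolution, size, aspect ratio and region count.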
Illustratively, after determining the first matching result of whether the first character image and the second character image match using the first structure representation information and the second structure representation information, the method further comprises: and judging whether each first character area is matched with the corresponding second character area or not so as to determine a second matching result.
According to the technical scheme, whether the first character image is matched with the second character image can be further determined by judging whether each first character area is matched with the corresponding second character area. The scheme is helpful for further improving the accuracy of image matching.
Illustratively, determining whether each first character region matches a corresponding second character region to determine a second match result includes: for each first character area, calculating a second similarity between the image information contained in the first character area and the image information contained in the corresponding second character area; and when the second similarity is greater than a second similarity threshold, determining that the first character region matches the corresponding second character region.
According to the technical scheme, the similarity between each first character area and the corresponding second character area is calculated, so that whether each first character area is matched with the corresponding second character area or not can be accurately determined, and the second matching result can be accurately determined.
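The patent does not fix the second-similarity formula either. One common choice for comparing two equal-sized grayscale patches is zero-mean normalized cross-correlation; the sketch below (threshold value assumed) illustrates the per-region check:

```python
import math

def region_similarity(patch1, patch2):
    # Zero-mean normalized cross-correlation between two equal-sized
    # grayscale patches: 1.0 for identical patches, -1.0 for inverted ones.
    # This is one possible second-similarity measure, not the patent's own.
    v1 = [p for row in patch1 for p in row]
    v2 = [p for row in patch2 for p in row]
    m1 = sum(v1) / len(v1)
    m2 = sum(v2) / len(v2)
    num = sum((a - m1) * (b - m2) for a, b in zip(v1, v2))
    den = math.sqrt(sum((a - m1) ** 2 for a in v1) *
                    sum((b - m2) ** 2 for b in v2))
    return num / den if den else 1.0

SECOND_SIMILARITY_THRESHOLD = 0.9  # assumed value

p = [[10, 200], [200, 10]]
q = [[200, 10], [10, 200]]          # inverted pattern
matched = region_similarity(p, p) > SECOND_SIMILARITY_THRESHOLD
```

If the first and second character areas differ in size, they would need to be resampled to a common size before comparison.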
Illustratively, acquiring the first character image includes: inputting the character image to be detected into a target detection model to detect at least one suspected character area and corresponding character representation information, wherein the character representation information of the at least one suspected character area comprises the position of the at least one suspected character area; extracting at least one first image block containing the suspected character areas in one-to-one correspondence from the character image to be detected based on character representation information of the at least one suspected character area; inputting at least one first image block into a first classification model to obtain classification results corresponding to at least one suspected character region, wherein the classification results are used for indicating whether the suspected character region is a first character region or not; and processing the character image to be detected based on the classification result to acquire a first character image.
According to the technical scheme, the suspected character area in the character image to be detected is determined by using the target detection model, and whether the suspected character area is the first character area is judged by using the first classification model, so that the character image to be detected can be processed based on the classification result, and the first character image is further obtained. This scheme helps to achieve an improvement in the quality of the obtained first character image.
Illustratively, the classification result includes a first confidence that the suspected character region is a first character region and/or a second confidence that the suspected character region is not a character region; based on the classification result, processing the character image to be detected to obtain a first character image, including: for each suspected character area, when the corresponding first confidence coefficient is smaller than a first confidence coefficient threshold value or the second confidence coefficient is larger than a second confidence coefficient threshold value, extracting a second image block containing the suspected character area from the character image to be detected; performing image reconstruction on the second image block to obtain a reconstructed image block; and updating the character image to be detected in a mode of filling the reconstructed image block into the position of the suspected character area in the character image to be detected, so as to obtain a first character image.
According to the technical scheme, the image reconstruction is carried out on the second image block corresponding to the suspected character region with the first confidence coefficient smaller than the first confidence coefficient threshold value or with the second confidence coefficient larger than the second confidence coefficient threshold value, so that the image quality of the obtained first character image is improved, and the accuracy of the second matching result determined in the follow-up step is guaranteed.
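The detect/classify/reconstruct pipeline described above can be sketched as follows. The three model callables are hypothetical stand-ins — the patent names no concrete target detection, classification or reconstruction models — and the toy stubs at the bottom exist only to exercise the control flow:

```python
def crop(image, box):
    # Extract the image block covering a suspected character region
    x, y, w, h = box
    return [row[x:x + w] for row in image[y:y + h]]

def paste(image, box, patch):
    # Fill a reconstructed image block back into its original position
    x, y, w, h = box
    out = [row[:] for row in image]
    for dy in range(h):
        out[y + dy][x:x + w] = patch[dy]
    return out

def build_first_character_image(image, detect, classify, reconstruct,
                                conf1_thresh=0.5, conf2_thresh=0.5):
    # detect -> suspected character regions; classify -> (first confidence
    # that the region is a character, second confidence that it is not);
    # regions failing either threshold are reconstructed and pasted back.
    for box in detect(image):
        conf_char, conf_not_char = classify(crop(image, box))
        if conf_char < conf1_thresh or conf_not_char > conf2_thresh:
            image = paste(image, box, reconstruct(crop(image, box)))
    return image

# Toy demonstration on a 4x4 grayscale image with stub models
img = [[0] * 4 for _ in range(4)]
detect = lambda im: [(0, 0, 2, 2)]            # one suspected region
classify = lambda patch: (0.2, 0.9)           # low confidence -> reconstruct
reconstruct = lambda patch: [[128] * 2 for _ in range(2)]
result = build_first_character_image(img, detect, classify, reconstruct)
```

The confidence thresholds here are illustrative defaults; the patent leaves their values open.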
According to another aspect of the present application, there is provided an image matching apparatus including: a first acquisition module for acquiring a first character image, wherein the first character image comprises at least one first character area, and each first character area corresponds to a single first character; a second acquisition module for acquiring first structure representation information corresponding to the first character image, wherein the first structure representation information comprises character representation information corresponding to each first character area and/or character relation information between adjacent first character areas; and a determining module for determining, using the first structure representation information and second structure representation information, a first matching result of whether the first character image and the second character image match, wherein the second structure representation information is generated based on a second character area in the second character image.
In the above technical solution, the first matching result between the first character image and the second character image may be determined more accurately by using the first structure representation information corresponding to the first character image and the second structure representation information corresponding to the second character image. The scheme is beneficial to simplifying the calculation process of image matching and improving the accuracy of image matching.
According to yet another aspect of the present application, there is provided an electronic device comprising a processor and a memory, wherein the memory has stored therein computer program instructions which, when executed by the processor, are adapted to carry out the above-described image matching method.
In the above technical solution, the first matching result between the first character image and the second character image may be determined more accurately by using the first structure representation information corresponding to the first character image and the second structure representation information corresponding to the second character image. The scheme is beneficial to simplifying the calculation process of image matching and improving the accuracy of image matching.
According to still another aspect of the present application, there is provided a storage medium having stored thereon program instructions for executing the above-described image matching method when executed.
In the above technical solution, the first matching result between the first character image and the second character image may be determined more accurately by using the first structure representation information corresponding to the first character image and the second structure representation information corresponding to the second character image. The scheme is beneficial to simplifying the calculation process of image matching and improving the accuracy of image matching.
Drawings
The above and other objects, features and advantages of the present application will become more apparent from the following more particular description of embodiments of the present application, as illustrated in the accompanying drawings. The accompanying drawings are included to provide a further understanding of embodiments of the application and are incorporated in and constitute a part of this specification; they illustrate the application and, together with the embodiments of the application, serve to explain the application, and do not constitute a limitation of the application. In the drawings, like reference numerals generally refer to like parts or steps.
FIG. 1 shows a schematic flow chart of an image matching method according to one embodiment of the application;
FIG. 2 shows a schematic diagram of a character image according to one embodiment of the application;
Fig. 3 shows a schematic block diagram of an image matching apparatus according to an embodiment of the present application;
fig. 4 shows a schematic block diagram of an electronic device according to an embodiment of the application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, exemplary embodiments of the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some, not all, of the embodiments of the present application, and the present application is not limited by the example embodiments described herein. All other embodiments obtained by a person skilled in the art based on the embodiments described in the present application without inventive effort shall fall within the scope of protection of the present application.
In order to at least partially solve the above-mentioned problems, an embodiment of the present application provides an image matching method. Fig. 1 shows a schematic flow chart of an image matching method according to an embodiment of the application. As shown in fig. 1, the image matching method may include the following steps S110, S120, and S130.
In step S110, a first character image is acquired, wherein the first character image includes at least one first character region, each first character region corresponding to a single first character.
The first character image according to an embodiment of the present application may be any image including at least one first character area. In other words, at least one first character area may be included in the first character image. The number of first character areas may be arbitrary, each first character area corresponding to a single first character. The first character may be any type of character such as a number, letter, word, or punctuation mark, and the application is not limited thereto.
The number of first character areas in the first character image will be described in a specific embodiment. In a specific embodiment, the first character image includes the first characters "a", "B" and "S", and then the number of the first character areas in the first character image is three.
The first character image may be a black-and-white image or a color image, for example. The first character image may be an image of any size or resolution. Alternatively, the first character image may be an image satisfying a preset resolution requirement. In one example, the first character image may be a black-and-white image of 512 × 512 pixels. The requirements for the first character image may be set based on actual needs, which the present application does not limit.
The first character image may be an original image acquired by the image acquisition device, for example. According to the embodiment of the application, the first character image can be acquired by adopting any existing or future image acquisition mode. For example, a camera may be employed to acquire the first character image.
In another example, the first character image may be an image after the preprocessing operation is performed on the original image.
The preprocessing operation may be any operation that satisfies the needs of the subsequent steps, and may include operations for improving the visual effect of the image, improving the sharpness of the image, or highlighting certain features in the image, so as to facilitate the acquisition of the first structure representation information from the first character image. Optionally, the preprocessing operation may include denoising operations such as filtering, and may also include adjustment of image parameters such as gray scale, contrast, and brightness. Alternatively, the preprocessing operation may include pixel normalization of the first character image. For example, each pixel of the first character image may be divided by 255, so that the pixels of the preprocessed first character image fall in the range 0-1. This helps to improve the efficiency of subsequently acquiring the first structure representation information from the first character image.
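The normalization step mentioned above — dividing every 8-bit pixel by 255 — can be sketched in a few lines (a plain-Python illustration; real pipelines would vectorize this):

```python
def normalize_pixels(image):
    # Scale 8-bit grayscale pixels into the range 0-1, as described above
    return [[p / 255 for p in row] for row in image]

normed = normalize_pixels([[0, 128, 255]])  # pixels become 0.0 .. 1.0
```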
Illustratively, the preprocessing operation may further include an operation of cropping the image. For example, the original image may be cropped according to the position of the first character region in the original image, so that only the region of the original image where the first character region is located is reserved. This solution is advantageous for reducing the calculation amount of the subsequent steps.
In step S120, first structure representation information corresponding to the first character image is acquired, wherein the first structure representation information includes character representation information corresponding to each first character region and/or character relationship information between adjacent first character regions.
Alternatively, character representation information of the first character region is used to represent feature information of the first character region. The character relation information between the adjacent first character areas may be used to represent a positional relation between the adjacent first character areas. Illustratively, the character representation information includes a location and/or size of each character area; the character relation information comprises a distance between adjacent character areas and/or an angle between connecting lines, wherein the connecting lines are connecting lines between the adjacent character areas.
Alternatively, the position of each character region may be represented in the position of the character frame of the corresponding character region in the character image. For example, the first character region may be represented by coordinates of each vertex of a character frame of the first character region. For another example, the first character region may be represented by coordinates of a center point of the first character region.
Alternatively, the size of each character region may be expressed as the number of pixels in the character region, or as the size of the character frame of the corresponding character region. For example, if the size of the character frame of a character area is 5 × 5, the size of that character area is 5 × 5. The character frame may be determined using a character detection model described below.
Alternatively, the distance between adjacent character areas may be the shortest distance between the character frames to which the two adjacent character areas respectively correspond, or the distance between the center points of those character frames.
Alternatively, for any two adjacent character areas, the connection line may be a line between any two points on the two areas; for example, a line between the center points of the two adjacent character areas. The angle between connection lines may be the included angle between any two connection lines having a common intersection point.
The character representing information and the character relation information are described below in a specific embodiment. Fig. 2 shows a schematic diagram of a character image according to an embodiment of the application. As shown in fig. 2, the character image includes a character area a, a character area B, and a character area C. The connection line between the character area a and the character area C is L1, the connection line between the character area a and the character area B is L2, and the connection line between the character area B and the character area C is L3. In this embodiment, the character representation information may include vertex coordinates and sizes of character frames of each of the character area a, the character area B, and the character area C. The character relationship information may include: the distance between the character area a and the character area B (which may be the length of L2, for example), the distance between the character area a and the character area C (which may be the length of L1, for example), the distance between the character area B and the character area C (which may be the length of L3, for example), the included angle between L1 and L2, the included angle between L1 and L3, and the included angle between L2 and L3.
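The distances and included angles in the example above can be sketched as follows. This is an illustrative computation, not part of the patent text: character frames are assumed to be axis-aligned `(x, y, w, h)` rectangles, and distances and angles are taken between/at their center points:

```python
import math

def center(box):
    # box = (x, y, w, h): top-left corner plus width and height
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def distance(box_a, box_b):
    # Euclidean distance between the center points of two character frames
    (xa, ya), (xb, yb) = center(box_a), center(box_b)
    return math.hypot(xb - xa, yb - ya)

def angle_between(p, q, r):
    # Included angle (degrees) at vertex q between segments q->p and q->r
    v1 = (p[0] - q[0], p[1] - q[1])
    v2 = (r[0] - q[0], r[1] - q[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

# Hypothetical character frames for areas A, B and C
A, B, C = (0, 0, 10, 10), (30, 0, 10, 10), (0, 40, 10, 10)
d_AB = distance(A, B)   # length of connection line L2
d_AC = distance(A, C)   # length of connection line L1
angle_A = angle_between(center(B), center(A), center(C))  # angle between L1 and L2
```

With these hypothetical coordinates, L2 has length 30, L1 has length 40, and the angle between L1 and L2 at area A is 90 degrees.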
According to the above technical scheme, using the position and/or size of each character area as the character representation information, and the distance between adjacent character areas and/or the angle between connecting lines as the character relationship information, allows the structure representation information corresponding to the character image to be represented accurately, providing an accurate basis for image matching in the subsequent steps.
Illustratively, step S120, acquiring the first structure representation information corresponding to the first character image may include the steps of: character detection is performed on the first character image to obtain first structural representation information of the first character image.
Alternatively, the character detection may be implemented using any existing or future-developed trained character detection model. Character detection models include, but are not limited to, the region-based convolutional neural network (Region-based Convolutional Neural Network, RCNN), Faster RCNN, the single-stage detector YOLOv5 (You Only Look Once, version 5), and the like. In a specific embodiment, the character detection model may be trained on pre-labeled sample character images to obtain a trained character detection model.
The scheme can directly obtain the first structure representation information of the first character image, is simple to operate and is beneficial to improving the image matching efficiency.
Illustratively, the step S120 of acquiring the first structure representation information corresponding to the first character image may include the steps of: character detection is performed on the first character image to obtain character representation information of each first character region, and first structure representation information is determined based on the character representation information.
Alternatively, similar to the character detection method above, character detection on the first character image in this exemplary embodiment may be implemented using any existing or future-developed trained character detection model, whose output is the character representation information of each first character region. For another example, the character detection model may be the object detection model described below.
After obtaining the character representation information of each first character region, the first structure representation information may be determined based on the character representation information. For example, the character representation information may include positions and sizes of the first character areas, and the first structure representation information of the first character image may be determined by generating connection lines according to the positions of the respective first character areas and calculating distances between adjacent first character areas and angles between the connection lines.
In the above technical solution, the first structure representation information is determined from the character representation information of each first character area, which helps to obtain the first structure representation information of the first character image accurately, thereby providing a more accurate basis for the subsequent steps.
In step S130, a first matching result indicating whether the first character image and the second character image match is determined using the first structure representation information and the second structure representation information, where the second structure representation information is generated based on the second character region in the second character image.
Alternatively, the second character image may be any one of the sample character images in the pre-established sample character image library. In this embodiment, sample character images may be collected in advance, and respective corresponding structure representation information of each sample character image may be acquired, and then a sample character image library may be generated based on each sample character image and the respective corresponding structure representation information. In step S130, the respective corresponding structure representation information of each sample character image may be used as the second structure representation information, so as to obtain a first matching result between each sample character image and the first character image in the sample character image library. Thus, a sample character image that matches the first character image can be determined in the sample character image library based on each first matching result. The manner of acquiring the structure representation information corresponding to each sample character image is similar to that of acquiring the first structure representation information corresponding to the first character image, and is not repeated.
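The library lookup described above can be sketched as follows. The encoding of the structure representation information (a tuple of pairwise center distances) and the tolerance-based matcher are simplified stand-ins for the comparison methods described later, and all names and values are illustrative assumptions:

```python
def first_match(first_info, second_info, tol=5.0):
    """Toy matcher: structure representation info is encoded as a tuple of
    pairwise center distances; two images 'match' when every distance differs
    by less than tol pixels after sorting."""
    if len(first_info) != len(second_info):
        return False
    return all(abs(a - b) < tol for a, b in zip(sorted(first_info), sorted(second_info)))

# Hypothetical sample character image library:
# image id -> pre-computed structure representation information.
sample_library = {
    "sample_1": (100.0, 120.0, 156.2),
    "sample_2": (40.0, 60.0, 72.1),
}

# First structure representation information of the query (first) character image.
query_info = (101.0, 119.5, 155.8)

# First matching result against every sample in the library.
matches = [sid for sid, info in sample_library.items() if first_match(query_info, info)]
```

The samples whose first matching result is positive are the candidates matching the first character image.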
Alternatively, the second character image may be an image acquired by means such as an image acquisition device, a web crawler, or the like. In this embodiment, a second character image may be acquired, and second structure representation information corresponding to the second character image is acquired, and then step S130 is performed based on the second structure representation information, thereby determining a first matching result between the first character image and the second character image. The manner of acquiring the second structure representation information corresponding to the second character image is similar to the manner of acquiring the first structure representation information corresponding to the first character image, and is not repeated.
Alternatively, the first matching result may be determined by calculating a difference between the first structural representation information and the second structural representation information. For example, when the structural representation information includes the position of the character region, the first matching result may be determined by a position difference between the first character region in the first character image and the second character region in the second character image. Alternatively, the corresponding structural representation image may be generated based on the first structural representation information and the second structural representation information, respectively, and then the first matching result may be determined by calculating the similarity between the two structural representation images. The manner in which the corresponding structure representation image is generated based on the first structure representation information and the second structure representation information, respectively, is described in detail below, and is not described here again.
In the above technical solution, the first matching result between the first character image and the second character image may be determined more accurately by using the first structure representation information corresponding to the first character image and the second structure representation information corresponding to the second character image. The scheme is beneficial to simplifying the calculation process of image matching and improving the accuracy of image matching.
Illustratively, determining, using the first structure representation information and the second structure representation information, whether the first character image and the second character image match may include the following steps: drawing a first structure representation image corresponding to the first structure representation information and a second structure representation image corresponding to the second structure representation information; calculating a first similarity between the first structure representation image and the second structure representation image; and when the first similarity is greater than a first similarity threshold, determining that the first matching result is that the first structure representation information matches the second structure representation information, and otherwise determining that the first matching result is that they do not match.
Alternatively, the first structural representation image may include character frames corresponding to the respective first character areas and connecting lines between adjacent first character areas. Taking the embodiment shown in fig. 2 as an example, the first structural representation image may include a character area a, a character area B, and a character area C. The connection line between the character area a and the character area C is L1, the connection line between the character area a and the character area B is L2, and the connection line between the character area B and the character area C is L3.
Alternatively, the first similarity may be calculated by any of existing or future developed similarity calculation methods. For example, the first similarity may be calculated using any one of a cosine similarity algorithm, a Structural Similarity (SSIM) algorithm.
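A minimal pure-Python version of the cosine-similarity comparison between two structure representation images might look like this; the tiny 3x3 binary images and the 0.9 threshold are illustrative assumptions:

```python
import math

def cosine_similarity(img_a, img_b):
    """Cosine similarity between two images flattened to 1-D pixel vectors."""
    a = [p for row in img_a for p in row]
    b = [p for row in img_b for p in row]
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Two tiny binary "structure representation images" (0 = background, 255 = drawn).
img1 = [[0, 255, 0], [0, 255, 0], [0, 255, 0]]
img2 = [[0, 255, 0], [0, 255, 0], [255, 255, 0]]

sim = cosine_similarity(img1, img2)
matched = sim > 0.9  # compare against a first similarity threshold, e.g. 0.9
```

In practice an SSIM implementation (e.g. from an image-processing library) could be substituted for the cosine measure without changing the surrounding logic.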
Alternatively, the first similarity threshold may be set as desired. For example, the corresponding first similarity threshold may be set according to the user's image matching requirements. In a particular embodiment, the first similarity threshold may be in the range of [0.9, 1]. For example, it may be 0.95, 0.96, 0.97, 0.98, etc.
According to the above technical scheme, by calculating the first similarity between the first structure representation image and the second structure representation image, whether the two images match can be accurately determined based on the first similarity, and the first matching result can thus be determined accurately. By converting the matching calculation between character geometric information into a matching calculation between images, this scheme simplifies the image matching calculation and improves its accuracy.
Illustratively, drawing the first structural representation image corresponding to the first structural representation information and the second structural representation image corresponding to the second structural representation information, respectively, includes:
Representing an image for a first structure: creating a first preset background image, wherein the size of the first preset background image is consistent with the size of the first character image; determining a first character mapping area in a first preset background image and a first connecting line between the first character mapping areas by using the first structure representation information, wherein the position of the first character mapping area on the first preset background image is consistent with the position of the first character area on the first character image; setting pixel values of pixels of the first character mapping region and the first connection line to first pixel values and setting pixel values in regions other than the first character mapping region and the first connection line in the first preset background image to second pixel values, thereby generating a first structure representation image corresponding to the first structure representation information.
Representing an image for a second structure: creating a second preset background image, wherein the size of the second preset background image is consistent with the size of the second character image; determining a second character mapping area in a second preset background image and a second connecting line between the second character mapping areas by using second structure representation information, wherein the position of the second character mapping area on the second preset background image is consistent with the position of the second character area on the second character image; setting the pixel values of the pixels of the second character mapping region and the second connecting line to the first pixel value and setting the pixel values in the region other than the second character mapping region and the second connecting line in the second preset background image to the second pixel value, thereby generating a second structure representation image corresponding to the second structure representation information.
The process of generating a structural representation image is described in one specific embodiment. Taking the first structural representation image as an example, first, a first preset background image may be created that is consistent with the resolution of the first character image. The first preset background image may be a solid background. For example, it may be a pure black background (i.e., the pixel values in the region are all 0). After the creation is completed, the first character region in the first character image may be mapped into a first preset background image to form a first character mapping region. It will be appreciated that the position of the first character region in the first character image is the same as the position of the corresponding first character map region in the first preset background image. Accordingly, a first character map area corresponding to the first character area may be generated at a corresponding position in the first preset background image. The first character map area may be filled with a pure white color (i.e., the pixel values within the area are all 255). After the first character map area is obtained, connection lines of adjacent first character map areas may be further determined. For example, the center points of adjacent first character map areas may be connected to obtain a connecting line. Thereby, a first structure representation image is obtained. The drawing process of the second structure representation image is similar to that of the first structure representation image, and is not repeated.
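The drawing procedure described above can be sketched in pure Python. The brute-force per-pixel rasterization below is chosen for clarity only (a real implementation would typically use an image library's rectangle and line primitives), and the frame coordinates are hypothetical:

```python
def draw_structure_image(width, height, boxes, connections, line_width=3,
                         fg=255, bg=0):
    """Draw a structure representation image on a solid background.
    boxes: list of integer (x, y, w, h) character frames; connections: index
    pairs whose center points are joined by a line of the given width.
    Returns the image as a list of pixel rows."""
    img = [[bg] * width for _ in range(height)]

    # Fill each character mapping region with the foreground pixel value.
    for (x, y, w, h) in boxes:
        for row in range(max(0, y), min(height, y + h)):
            for col in range(max(0, x), min(width, x + w)):
                img[row][col] = fg

    # Draw each connecting line: mark pixels within line_width/2 of the segment
    # joining the two region centers (brute-force distance-to-segment test).
    centers = [(x + w / 2.0, y + h / 2.0) for (x, y, w, h) in boxes]
    half = line_width / 2.0
    for i, j in connections:
        (x1, y1), (x2, y2) = centers[i], centers[j]
        dx, dy = x2 - x1, y2 - y1
        seg_len_sq = dx * dx + dy * dy or 1.0
        for row in range(height):
            for col in range(width):
                t = max(0.0, min(1.0, ((col - x1) * dx + (row - y1) * dy) / seg_len_sq))
                px, py = x1 + t * dx, y1 + t * dy
                if (col - px) ** 2 + (row - py) ** 2 <= half * half:
                    img[row][col] = fg
    return img

# A 40x40 black background with two hypothetical character frames (filled white)
# joined by one connecting line between their center points.
image = draw_structure_image(40, 40, [(2, 2, 8, 8), (28, 28, 8, 8)], [(0, 1)])
```

The second structure representation image would be drawn the same way from the second structure representation information.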
According to the technical scheme, the first structure representation image and the second structure representation image can be accurately drawn, so that accurate basis is provided for the determination of the first matching result in the subsequent step.
Illustratively, determining the first character mapping region within the first preset background image and the first connection line between the first character mapping regions using the first structure representation information may include the steps of: determining a first connection line between the first character mapping areas on the first preset background image based on a specific connection line setting rule; wherein the specific connection line setting rule is used for specifying a distance range in which a distance between two character mapping regions connected by the connection line falls and/or a line width of the connection line.
Determining, using the second structural representation information, a second character mapping region within a second preset background image and a second connection line between the second character mapping regions, comprising: and determining second connection lines between the second character mapping areas on the second preset background image based on the specific connection line setting rule.
Alternatively, a specific connection line setting rule may be used to specify the distance range within which the distance between two connected character mapping regions must fall. For example, a first connection line between two character mapping regions may be generated when the distance between them falls within a preset distance range. Optionally, determining the first connection lines between the first character mapping areas on the first preset background image based on the specific connection line setting rule includes: for any two first character mapping areas, generating a first connection line between them when the distance between them is smaller than a preset distance threshold. The preset distance threshold may be set as needed; for example, it may be 100 pixels, and the user may choose an appropriate value according to the number and distribution of character areas in the character image. The second connection lines are generated in a similar manner and are not described in detail.
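The distance-based connection rule can be sketched as follows, assuming center-to-center distances and a hypothetical 100-pixel preset distance threshold:

```python
import math

def connections_by_rule(boxes, max_dist=100.0):
    """Generate connection index pairs only for character mapping regions whose
    center-to-center distance falls below the preset distance threshold."""
    centers = [(x + w / 2.0, y + h / 2.0) for (x, y, w, h) in boxes]
    pairs = []
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            if math.dist(centers[i], centers[j]) < max_dist:
                pairs.append((i, j))
    return pairs

# Three hypothetical character frames; only the two nearby ones get connected.
boxes = [(0, 0, 20, 50), (60, 0, 20, 50), (300, 0, 20, 50)]
pairs = connections_by_rule(boxes)
```

The resulting pairs can be passed directly to a drawing routine as the set of connecting lines to render.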
Alternatively, the preset distance threshold may be set according to the distances between character areas in the character image. For example, when the distance between any two character areas in the character image falls within 80-100 pixels, the preset distance threshold may be set to 100 pixels. It will be appreciated that character detection may introduce detection errors, so the detected character areas are not necessarily completely consistent or accurate, and the distances between character areas may therefore carry some error. For example, for the same character image, the detected position of a character area may vary within a small range. The solution of this embodiment helps to absorb this error and avoid error accumulation leading to a wrong final first matching result.
Alternatively, a particular connection rule may be used to specify the line width of the connection line. In this embodiment, the particular connection line rule may include a line width of each connection line. In some embodiments, the line width of the connection line may be in the range of [10 pixels, 40 pixels ]. For example, 20 pixels, 25 pixels, 30 pixels, etc. In this embodiment, by setting the line width of the connection line, the weight of the character relation information at the time of calculating the first image matching result can be adjusted. It can be understood that the larger the line width of the connection line, the more pixels the connection line occupies in the corresponding structure representation image, and the larger the weight the connection line occupies when participating in the calculation of the first image matching result. Alternatively, the user may set the line width of the connection line as needed. For example, when the number of first character areas in the first character image is small and the distribution is sparse, a larger line width of the connecting line may be set, thereby increasing the weight of the character relationship information.
According to the technical scheme, the connecting lines can be generated based on the specific connecting line setting rules, so that the character relation information in the corresponding structure representation image can be accurately represented. The scheme is favorable for providing accurate basis for the determination of the first matching result in the subsequent step.
Illustratively, the specific connection line setting rule is related to the resolution, size, aspect ratio of the first character image/the second character image and the number of the corresponding first character areas and/or second character areas.
Alternatively, the range of the connection line and the line width may be set according to the resolution, the size, the aspect ratio of the character image, and the number of the corresponding character areas. For example, when the resolution of the character image is large, a large preset distance threshold may be selected. In one embodiment, when the resolution of the character image is 512×512, the area of each character area in the character image is 20×50, and the number is 10. At this time, if the distribution of the character areas in the character image is sparse, a larger preset distance threshold may be set. For example, the preset distance threshold may be 100 pixels. Similarly, a larger line width of the connection line may be set so as to increase the weight of the character relation information. For example, the line width of the connection line may be set to 30 pixels.
According to the technical scheme, the specific connection line setting rules are set according to the resolution, the size and the length-width ratio of the character image and the number of the corresponding character areas, so that connection lines meeting requirements can be generated in the structural representation image, and the accuracy of the first matching result can be guaranteed.
Illustratively, after step S130 determines, using the first structure representation information and the second structure representation information, the first matching result indicating whether the first character image and the second character image match, the method may further include step S140. In step S140, it is determined whether each first character region matches the corresponding second character region, so as to determine a second matching result.
Optionally, in step S140, it is determined whether each first character area matches with a corresponding second character area to determine a second matching result, and the step may be performed after the first matching result indicates that the first character image matches with the second character image. In this embodiment, step S130 may be performed to determine a first matching result between the first character image and the second character image. If the first matching result indicates that the first character image is matched with the second character image, whether each first character area in the first character image is matched with the corresponding second character area in the second character image or not can be continuously judged so as to determine a second matching result.
It will be appreciated that when the first matching result indicates that the first character image matches the second character image, the first character image and the second character image have one-to-one correspondence of character areas therein. In one embodiment, the first character image includes a first character area a, a first character area B, and a first character area C, and the second character image includes a second character area D, a second character area E, and a second character area F, where the first character area a corresponds to the second character area D, the first character area B corresponds to the second character area E, and the first character area C corresponds to the second character area F. In step S140, it may be calculated whether each first character region matches a corresponding second character region. I.e. whether the first character area a matches the second character area D, whether the first character area B matches the second character area E, and whether the first character area C matches the second character area F can be calculated, thereby determining a second matching result between each first character area and the corresponding second character area.
According to the technical scheme, whether the first character image is matched with the second character image can be further determined by judging whether each first character area is matched with the corresponding second character area. The scheme is helpful for further improving the accuracy of image matching.
Illustratively, step S140, determining whether each first character region matches a corresponding second character region to determine a second matching result may include the steps of: for each first character area, calculating a second similarity between the image information contained in the first character area and the image information contained in the corresponding second character area; and when the second similarity is greater than a second similarity threshold, determining that the first character region matches the corresponding second character region.
Alternatively, the image information may be a gray value corresponding to the character region or a pixel value corresponding to each pixel point within the character region.
Alternatively, the second similarity may be calculated by any of the similarity calculation methods existing or developed in the future. For example, the second similarity may be calculated using any one of a cosine similarity algorithm, a Structural Similarity (SSIM) algorithm.
Alternatively, the second similarity threshold may be set as desired. For example, the corresponding second similarity threshold may be set according to the requirement of image matching of the user. In a particular embodiment, the second similarity threshold may be in the range of [0.9,1). For example, it may be 0.95, 0.96, 0.97, 0.98, etc.
Optionally, determining whether each first character region matches the corresponding second character region to determine the second matching result may include the following step: when the second similarity corresponding to every first character area is greater than the second similarity threshold, determining that the second matching result is that the first character image matches the second character image.
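The per-region second matching step might be sketched as follows, using cosine similarity over raw pixel values; the toy patches and the 0.95 threshold are illustrative assumptions:

```python
import math

def region_similarity(patch_a, patch_b):
    """Cosine similarity between the pixel values of two corresponding regions."""
    a = [p for row in patch_a for p in row]
    b = [p for row in patch_b for p in row]
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def second_match(first_regions, second_regions, threshold=0.95):
    """Second matching result: every first character region must be similar
    enough to its corresponding second character region."""
    return all(region_similarity(a, b) > threshold
               for a, b in zip(first_regions, second_regions))

# Hypothetical corresponding region pairs (tiny grayscale patches).
first_regions = [[[10, 200], [10, 200]], [[0, 255], [255, 0]]]
second_regions = [[[10, 200], [10, 200]], [[0, 255], [255, 0]]]
ok = second_match(first_regions, second_regions)  # identical patches match
```

A single dissimilar region pair is enough to make the second matching result negative, mirroring the "every region" condition above.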
According to the technical scheme, the similarity between each first character area and the corresponding second character area is calculated, so that whether each first character area is matched with the corresponding second character area or not can be accurately determined, and the second matching result can be accurately determined.
Illustratively, acquiring the first character image includes: inputting the character image to be detected into a target detection model to detect at least one suspected character area and corresponding character representation information, wherein the character representation information of the at least one suspected character area comprises the position of the at least one suspected character area; extracting at least one first image block containing the suspected character areas in one-to-one correspondence from the character image to be detected based on character representation information of the at least one suspected character area; inputting at least one first image block into a first classification model to obtain classification results corresponding to at least one suspected character region, wherein the classification results are used for indicating whether the suspected character region is a first character region or not; and processing the character image to be detected based on the classification result to acquire a first character image.
Alternatively, the object detection model may be implemented using any existing or future-developed trained character detection model. For example, it may be a region-based convolutional neural network (Region-based Convolutional Neural Network, RCNN), Faster RCNN, the single-stage detector YOLOv5 (You Only Look Once, version 5), or the like.
Alternatively, the first classification model may be obtained by training any existing or future-developed neural network having a classification function. For example, the first classification model may be trained using any one of a convolutional neural network (CNN), a recurrent neural network (RNN), and the like.
Optionally, processing the character image to be detected based on the classification result may include the following steps: and deleting the suspected character area which is not the first character area. In this embodiment, the calculation amount in the subsequent step can be reduced by deleting the suspected character region that is not the first character region, thereby contributing to an improvement in image matching efficiency. Alternatively, processing the character image to be detected based on the classification result may include the steps of: and when the classification result corresponding to the suspected character area is that the suspected character area is not the first character area, performing image reconstruction on the suspected character area, and filling the reconstructed suspected character area into a position corresponding to the suspected character area in the character image to be detected. The scheme of the embodiment can improve the image quality on the premise of not losing image information through image reconstruction, thereby being beneficial to improving the accuracy of image matching. Specific image reconstruction methods are described in detail below.
According to the technical scheme, the suspected character area in the character image to be detected is determined by using the target detection model, and whether the suspected character area is the first character area is judged by using the first classification model, so that the character image to be detected can be processed based on the classification result, and the first character image is further obtained. This scheme helps to achieve an improvement in the quality of the obtained first character image.
Illustratively, the classification result includes a first confidence that the suspected character region is a first character region and/or a second confidence that the suspected character region is not a character region; based on the classification result, processing the character image to be detected to obtain a first character image, including: for each suspected character area, when the corresponding first confidence coefficient is smaller than a first confidence coefficient threshold value or the second confidence coefficient is larger than a second confidence coefficient threshold value, extracting a second image block containing the suspected character area from the character image to be detected; performing image reconstruction on the second image block to obtain a reconstructed image block; and updating the character image to be detected in a mode of filling the reconstructed image block into the position of the suspected character area in the character image to be detected, so as to obtain a first character image.
It is understood that the first confidence may be used to represent a probability that the suspected character region is a character region and the second confidence may represent a probability that the suspected character region is not a character region. The first confidence and the second confidence are interrelated, e.g., when the first confidence is 0.6, the second confidence is 0.4.
Both the first confidence threshold and the second confidence threshold may be set as desired. It can be understood that when the first confidence coefficient is smaller than the first confidence coefficient threshold or the second confidence coefficient is larger than the second confidence coefficient threshold, the character defect degree corresponding to the current suspected character region is higher, and at this time, the second image block containing the suspected character region can be subjected to image reconstruction so as to ensure the integrity of the characters corresponding to the suspected character region.
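The confidence-based decision of which suspected regions to reconstruct can be sketched as follows. The complementary relation between the two confidences follows the example above (0.6 vs 0.4), while the 0.5 thresholds and region identifiers are assumptions for illustration:

```python
def needs_reconstruction(first_conf, conf_threshold_1=0.5, conf_threshold_2=0.5):
    """Decide whether a suspected character region should be reconstructed.
    The second confidence is taken as the complement of the first."""
    second_conf = 1.0 - first_conf
    return first_conf < conf_threshold_1 or second_conf > conf_threshold_2

# First confidence reported by the classifier for each suspected region.
regions = {"r1": 0.92, "r2": 0.35, "r3": 0.55}
to_rebuild = [rid for rid, c in regions.items() if needs_reconstruction(c)]
```

Only the regions in `to_rebuild` would be cropped as second image blocks, passed through the image reconstruction model, and pasted back into the character image.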
Alternatively, the second image block may be reconstructed using a trained image reconstruction model. The image reconstruction model may be obtained by training any existing or future-developed neural network usable for image reconstruction. For example, the trained image reconstruction model may be obtained by training any one of a convolutional neural network (CNN), a generative adversarial network (GAN), and the like.
According to the technical scheme, the image reconstruction is carried out on the second image block corresponding to the suspected character region with the first confidence coefficient smaller than the first confidence coefficient threshold value or with the second confidence coefficient larger than the second confidence coefficient threshold value, so that the image quality of the obtained first character image is improved, and the accuracy of the second matching result determined in the follow-up step is guaranteed.
According to another aspect of the present application, an image matching apparatus is provided. Fig. 3 shows a schematic block diagram of an image matching apparatus according to an embodiment of the present application. As shown in fig. 3, the image matching apparatus may include a first acquisition module 310, a second acquisition module 320, and a determination module 330.
The first acquisition module 310 is configured to acquire a first character image, where the first character image includes at least one first character area, and each first character area corresponds to a single first character.
The second acquisition module 320 is configured to acquire first structure representation information corresponding to the first character image, where the first structure representation information includes character representation information corresponding to each first character area and/or character relationship information between adjacent first character areas.
The determination module 330 is configured to determine, using the first structure representation information and second structure representation information, whether the first character image matches the second character image, where the second structure representation information is generated based on the second character region in the second character image.
According to another aspect of the present application, an electronic device is provided. Fig. 4 shows a schematic block diagram of an electronic device according to an embodiment of the application. As shown in fig. 4, the electronic device 400 includes a processor 410 and a memory 420. The memory 420 stores a computer program, and the processor 410 executes the computer program to implement the image matching method described above.
Alternatively, the processor may comprise any suitable processing device having data processing capabilities and/or instruction execution capabilities. For example, the processor may be implemented using one or a combination of a programmable logic controller (PLC), a digital signal processor (DSP), a field-programmable gate array (FPGA), a programmable logic array (PLA), a central processing unit (CPU), an application-specific integrated circuit (ASIC), a microcontroller unit (MCU), or other processing units.
According to yet another aspect of an embodiment of the present application, there is also provided a storage medium. The storage medium has stored therein a computer program/instruction which, when executed by a processor, implements the image matching method described above. The storage medium may include, for example, read-only memory (ROM), erasable programmable read-only memory (EPROM), portable compact disc read-only memory (CD-ROM), USB memory, or any combination of the preceding. The computer-readable storage medium may be any combination of one or more computer-readable storage media.
Those skilled in the art can understand the specific implementation schemes of the image matching apparatus, the electronic device, and the storage medium by reading the above description about the image matching method, and for brevity, the description is omitted here.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the above illustrative embodiments are merely illustrative and are not intended to limit the scope of the present application thereto. Various changes and modifications may be made therein by one of ordinary skill in the art without departing from the scope and spirit of the application. All such changes and modifications are intended to be included within the scope of the present application as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It should be noted that the above-mentioned embodiments illustrate rather than limit the application, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order. These words may be interpreted as names.
The foregoing description is merely illustrative of specific embodiments of the present application, and the scope of the present application is not limited thereto. Any person skilled in the art can readily conceive of variations or substitutions within the scope of the present application. The protection scope of the application is subject to the protection scope of the claims.
Claims (12)
1. An image matching method, comprising:
Acquiring a first character image, wherein the first character image comprises at least one first character area, and each first character area corresponds to a single first character;
Acquiring first structure representation information corresponding to the first character image, wherein the first structure representation information comprises character representation information corresponding to each first character area and/or character relation information between adjacent first character areas;
Determining a first matching result of whether the first character image and the second character image are matched by using the first structure representation information and the second structure representation information, wherein the second structure representation information is generated based on a second character area in the second character image;
the determining, by using the first structure representation information and the second structure representation information, a first matching result of whether the first character image and the second character image match, includes:
Drawing a first structure representation image corresponding to the first structure representation information and a second structure representation image corresponding to the second structure representation information respectively;
Calculating a first similarity between the first structural representation image and the second structural representation image;
When the first similarity is larger than a first similarity threshold, determining that the first matching result is that the first structure representation information matches the second structure representation information, and otherwise determining that the first matching result is that the first structure representation information does not match the second structure representation information;
The drawing of the first structure representation image corresponding to the first structure representation information and the second structure representation image corresponding to the second structure representation information respectively includes:
representing an image for a first structure:
Creating a first preset background image, wherein the size of the first preset background image is consistent with the size of the first character image;
Determining a first character mapping area and a first connecting line between the first character mapping areas in the first preset background image by using the first structure representation information, wherein the position of the first character mapping area on the first preset background image is consistent with the position of the first character area on the first character image;
Setting pixel values of pixels of the first character mapping region and a first connection line to a first pixel value and setting pixel values in a region other than the first character mapping region and the first connection line in the first preset background image to a second pixel value, thereby generating the first structure representation image corresponding to the first structure representation information;
representing an image for a second structure:
creating a second preset background image, wherein the size of the second preset background image is consistent with the size of the second character image;
Determining a second character mapping area in the second preset background image and a second connecting line between the second character mapping areas by using the second structure representation information, wherein the position of the second character mapping area on the second preset background image is consistent with the position of the second character area on the second character image;
Setting the pixel values of the pixels of the second character mapping region and the second connecting line to be first pixel values and setting the pixel values in the regions except the second character mapping region and the second connecting line in the second preset background image to be second pixel values, thereby generating a second structure representation image corresponding to the second structure representation information.
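The drawing and comparison steps of claim 1 can be sketched with NumPy under assumed conventions: character mapping regions are (x0, y0, x1, y1) boxes, the first pixel value is 255, the second is 0, and the similarity is taken as intersection-over-union of foreground pixels (the claims fix none of these choices):

```python
import numpy as np

def draw_structure_image(shape, boxes, lines, fg=255, bg=0):
    """Render a structure representation image: background pixels take the
    second pixel value `bg`; character mapping regions (given as
    (x0, y0, x1, y1) boxes) and connecting lines (given as pairs of (x, y)
    centre points) take the first pixel value `fg`."""
    img = np.full(shape, bg, dtype=np.uint8)
    for x0, y0, x1, y1 in boxes:
        img[y0:y1, x0:x1] = fg
    for (xa, ya), (xb, yb) in lines:
        # crude 1-pixel-wide line between two region centres
        n = max(abs(xb - xa), abs(yb - ya)) + 1
        xs = np.linspace(xa, xb, n).round().astype(int)
        ys = np.linspace(ya, yb, n).round().astype(int)
        img[ys, xs] = fg
    return img

def structure_similarity(a, b):
    """Intersection-over-union of foreground pixels; one plausible
    similarity measure, since the claim leaves it unspecified."""
    fa, fb = a > 0, b > 0
    union = np.logical_or(fa, fb).sum()
    return float(np.logical_and(fa, fb).sum() / union) if union else 1.0
```

A first match would then be declared when `structure_similarity` of the two rendered images exceeds the first similarity threshold.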
2. The image matching method according to claim 1, wherein the character representation information includes a position and/or a size of each character area; the character relation information comprises a distance between adjacent character areas and/or an angle between connecting lines, wherein the connecting lines are connecting lines between the adjacent character areas.
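The character relation information of claim 2 can be sketched as follows; using region centre points, Euclidean distance, and an angle in degrees relative to the horizontal axis are all assumptions, since the claim only names a distance and an angle:

```python
import math

def relation_info(center_a, center_b):
    """Distance between two adjacent character regions and the angle of
    their connecting line, from (x, y) centre points. Both measures are
    illustrative choices."""
    dx = center_b[0] - center_a[0]
    dy = center_b[1] - center_a[1]
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))
```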
3. The image matching method according to claim 2, wherein the acquiring first structural representation information corresponding to the first character image includes:
And performing character detection on the first character image to obtain character representation information of each first character area, and determining the first structure representation information based on the character representation information.
4. The image matching method according to claim 1, wherein,
The determining, by using the first structure representation information, a first character mapping region within the first preset background image and a first connection line between the first character mapping regions, includes:
Determining a first connection line between first character mapping areas on the first preset background image based on a connection line setting rule;
The connection line setting rule is used for specifying a distance range in which a distance between two character mapping areas connected by the connection line falls and/or a line width of the connection line;
The determining, by using the second structural representation information, a second character mapping region in the second preset background image and a second connection line between the second character mapping regions includes:
and determining a second connecting line between second character mapping areas on the second preset background image based on the connecting line setting rule.
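One way to read the connection-line setting rule of claim 4 is as a predicate over candidate region pairs; the distance bounds below are illustrative assumptions (the rule may also fix a line width, which a renderer would apply when drawing):

```python
import math

def allow_connection(center_a, center_b, min_dist=5.0, max_dist=200.0):
    """Connection-line setting rule sketch: connect two character mapping
    regions only when the distance between their (x, y) centres falls in
    [min_dist, max_dist]. The bounds are illustrative."""
    d = math.hypot(center_b[0] - center_a[0], center_b[1] - center_a[1])
    return min_dist <= d <= max_dist
```

Per claim 5, such bounds could be derived from the image resolution, size, aspect ratio, and the number of character areas rather than hard-coded.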
5. The image matching method according to claim 4, wherein the connection line setting rule is related to a resolution, a size, an aspect ratio of the first character image/the second character image, and the number of the corresponding first character areas and/or second character areas.
6. The image matching method according to claim 1, wherein after said determining a first matching result of whether said first character image and said second character image match using said first structural representation information and second structural representation information, said method further comprises:
Determining whether each first character area matches the corresponding second character area, so as to determine a second matching result.
7. The method of claim 6, wherein determining whether each first character region matches a corresponding second character region to determine a second matching result comprises:
For each of said first character areas,
Calculating a second similarity between the image information contained in the first character region and the image information contained in the corresponding second character region;
And when the second similarity is larger than a second similarity threshold value, determining that the first character area is matched with the corresponding second character area.
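The second similarity of claim 7 compares the image content of paired character regions. The claims leave the measure open; a normalised inverse mean absolute difference is one simple sketch (an assumption, not the patent's measure):

```python
import numpy as np

def region_similarity(patch_a, patch_b):
    """Pixel-level similarity in [0, 1] between two character-region
    patches of equal shape: 1 minus the mean absolute difference
    normalised by the 8-bit pixel range."""
    a = patch_a.astype(np.float64)
    b = patch_b.astype(np.float64)
    return 1.0 - np.abs(a - b).mean() / 255.0
```

The regions would match when this value exceeds the second similarity threshold.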
8. The image matching method according to any one of claims 1 to 7, wherein acquiring the first character image includes:
Inputting a character image to be detected into a target detection model to detect at least one suspected character area and corresponding character representation information, wherein the character representation information of the at least one suspected character area comprises the position of the at least one suspected character area;
Extracting at least one first image block containing the suspected character areas in one-to-one correspondence from the character image to be detected based on character representation information of the at least one suspected character area;
Inputting the at least one first image block into a first classification model to obtain classification results corresponding to the at least one suspected character region, wherein the classification results are used for indicating whether the suspected character region is a first character region or not;
and processing the character image to be detected based on the classification result to acquire the first character image.
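The block-extraction step of claim 8 amounts to cropping one image block per detected suspected region; this sketch assumes detections arrive as (x0, y0, x1, y1) boxes, which the claim does not specify:

```python
import numpy as np

def crop_suspected_blocks(image, boxes):
    """Extract one image block per suspected character region, given
    (x0, y0, x1, y1) boxes from the detection step. Boxes are clipped
    to the image bounds before cropping."""
    h, w = image.shape[:2]
    blocks = []
    for x0, y0, x1, y1 in boxes:
        x0, x1 = max(0, x0), min(w, x1)
        y0, y1 = max(0, y0), min(h, y1)
        blocks.append(image[y0:y1, x0:x1].copy())
    return blocks
```

Each block would then be fed to the first classification model to decide whether its region is a first character region.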
9. The image matching method according to claim 8, wherein the classification result includes a first confidence that the suspected character region is a first character region and/or a second confidence that the suspected character region is not a character region;
The processing the character image to be detected based on the classification result to obtain the first character image includes:
for each suspected character region,
When the corresponding first confidence is smaller than a first confidence threshold or the corresponding second confidence is larger than a second confidence threshold, extracting a second image block containing the suspected character area from the character image to be detected;
performing image reconstruction on the second image block to obtain a reconstructed image block;
And updating the character image to be detected by filling the reconstructed image block to the position of the suspected character region in the character image to be detected, so as to acquire the first character image.
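The fill-back step of claim 9 writes the reconstructed block over the suspected region's location; this sketch assumes the position is the region's top-left (x0, y0) corner and that the reconstructed block keeps the region's size:

```python
import numpy as np

def fill_reconstructed(image, block, position):
    """Update the character image by writing a reconstructed block back
    at the (x0, y0) position of its suspected character region. Returns
    a copy so the detection input stays untouched."""
    x0, y0 = position
    h, w = block.shape[:2]
    out = image.copy()
    out[y0:y0 + h, x0:x0 + w] = block
    return out
```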
10. An image matching apparatus, comprising:
A first acquisition module for acquiring a first character image, wherein the first character image comprises at least one first character area, and each first character area corresponds to a single first character;
A second obtaining module, configured to obtain first structure representation information corresponding to the first character image, where the first structure representation information includes character representation information corresponding to each first character area and/or character relationship information between adjacent first character areas;
A determining module, configured to determine, using the first structure representation information and second structure representation information, a first matching result of whether the first character image and the second character image match, where the second structure representation information is information generated based on a second character region in the second character image;
the determining module includes:
A drawing sub-module for drawing a first structure representation image corresponding to the first structure representation information and a second structure representation image corresponding to the second structure representation information, respectively;
a computing sub-module for computing a first similarity between the first structural representation image and the second structural representation image;
a determining submodule, configured to determine that the first structure representation information matches the second structure representation information when the first similarity is greater than a first similarity threshold, and otherwise that the first structure representation information does not match the second structure representation information;
The drawing submodule includes:
A first creating unit configured to create a first preset background image for a first structural representation image, the size of the first preset background image being identical to the size of the first character image;
A first determining unit configured to determine, for a first structure representation image, a first character mapping region within the first preset background image and a first connection line between the first character mapping regions, using the first structure representation information, wherein a position of the first character mapping region on the first preset background image is consistent with a position of the first character region on the first character image;
A first setting unit configured to set, for a first structure representation image, pixel values of pixels of the first character mapping region and a first connection line to first pixel values and set pixel values in a region other than the first character mapping region and the first connection line in the first preset background image to second pixel values, thereby generating the first structure representation image corresponding to the first structure representation information;
A second creating unit configured to create a second preset background image for a second structural representation image, the size of the second preset background image being identical to the size of the second character image;
a second determining unit configured to determine, for a second structure representation image, a second character mapping region within the second preset background image and a second connection line between the second character mapping regions using the second structure representation information, wherein a position of the second character mapping region on the second preset background image is consistent with a position of the second character region on the second character image;
A second setting unit configured to set, for a second structure representation image, pixel values of pixels of the second character mapping region and a second connection line to first pixel values and set pixel values in a region other than the second character mapping region and the second connection line in the second preset background image to second pixel values, thereby generating a second structure representation image corresponding to the second structure representation information.
11. An electronic device comprising a processor and a memory, wherein the memory has stored therein computer program instructions which, when executed by the processor, are adapted to carry out the image matching method of any of claims 1-9.
12. A storage medium having stored thereon program instructions for performing the image matching method of any of claims 1-9 when run.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311801766.5A CN117456542B (en) | 2023-12-26 | 2023-12-26 | Image matching method, device, electronic equipment and storage medium |
CN202410394072.2A CN118298441A (en) | 2023-12-26 | 2023-12-26 | Image matching method, device, electronic equipment and storage medium |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410394072.2A Division CN118298441A (en) | 2023-12-26 | 2023-12-26 | Image matching method, device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117456542A (en) | 2024-01-26
CN117456542B (en) | 2024-04-26
Family
ID=89593377
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410394072.2A Pending CN118298441A (en) | 2023-12-26 | 2023-12-26 | Image matching method, device, electronic equipment and storage medium |
CN202311801766.5A Active CN117456542B (en) | 2023-12-26 | 2023-12-26 | Image matching method, device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN118298441A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106874909A (en) * | 2017-01-18 | 2017-06-20 | 深圳怡化电脑股份有限公司 | A kind of recognition methods of image character and its device |
CN111860512A (en) * | 2020-02-25 | 2020-10-30 | 北京嘀嘀无限科技发展有限公司 | Vehicle identification method and device, electronic equipment and computer readable storage medium |
CN113255674A (en) * | 2020-09-14 | 2021-08-13 | 深圳怡化时代智能自动化系统有限公司 | Character recognition method, character recognition device, electronic equipment and computer-readable storage medium |
CN115223173A (en) * | 2022-09-20 | 2022-10-21 | 深圳市志奋领科技有限公司 | Object identification method and device, electronic equipment and storage medium |
CN115690803A (en) * | 2022-10-31 | 2023-02-03 | 中电金信软件(上海)有限公司 | Digital image recognition method, device, electronic device and readable storage medium |
CN116863484A (en) * | 2023-06-30 | 2023-10-10 | 支付宝(杭州)信息技术有限公司 | Character recognition method, device, storage medium and electronic equipment |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FI117217B (en) * | 2003-10-01 | 2006-07-31 | Nokia Corp | Enforcement and User Interface Checking System, Corresponding Device, and Software Equipment for Implementing the Process |
DE102019217733A1 (en) * | 2019-11-18 | 2021-05-20 | Volkswagen Aktiengesellschaft | Method for operating an operating system in a vehicle and operating system for a vehicle |
Also Published As
Publication number | Publication date |
---|---|
CN117456542A (en) | 2024-01-26 |
CN118298441A (en) | 2024-07-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN116994140B (en) | Cultivated land extraction method, device, equipment and medium based on remote sensing image | |
KR20230124713A (en) | Fault detection methods, devices and systems | |
CN110866871A (en) | Text image correction method and device, computer equipment and storage medium | |
CN111626190A (en) | Water level monitoring method for scale recognition based on clustering partitions | |
CN109343920B (en) | Image processing method and device, equipment and storage medium thereof | |
CN111523414A (en) | Face recognition method and device, computer equipment and storage medium | |
CN113628180B (en) | Remote sensing building detection method and system based on semantic segmentation network | |
US20230386023A1 (en) | Method for detecting medical images, electronic device, and storage medium | |
CN112364834A (en) | Form identification restoration method based on deep learning and image processing | |
CN118447322A (en) | Wire surface defect detection method based on semi-supervised learning | |
CN114463503B (en) | Method and device for integrating three-dimensional model and geographic information system | |
CN112801227B (en) | Typhoon identification model generation method, device, equipment and storage medium | |
CN109767431A (en) | Accessory appearance defect inspection method, device, equipment and readable storage medium storing program for executing | |
CN112528982A (en) | Method, device and system for detecting water gauge line of ship | |
JPH07220090A (en) | Object recognition method | |
CN116935369A (en) | Ship water gauge reading method and system based on computer vision | |
CN117557565A (en) | Detection method and device for lithium battery pole piece | |
CN112419208A (en) | Construction drawing review-based vector drawing compiling method and system | |
CN118470641B (en) | Ship overload determination method and device based on image recognition | |
CN117474932B (en) | Object segmentation method and device, electronic equipment and storage medium | |
CN114332870A (en) | Water level identification method, device, device and readable storage medium | |
CN117456542B (en) | Image matching method, device, electronic equipment and storage medium | |
CN118196168A (en) | Method and device for calculating length of dry beach of tailing pond, storage medium and product | |
CN118154529A (en) | Image defect target detection method, device, equipment and storage medium | |
CN114862761B (en) | Power transformer liquid level detection method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||