
CN112733829B - Feature block identification method, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN112733829B
CN112733829B (application CN202011627747.1A)
Authority
CN
China
Prior art keywords
image
identified
connected domain
information
block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011627747.1A
Other languages
Chinese (zh)
Other versions
CN112733829A (en)
Inventor
许广军
张军
黄国庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
iFlytek Co Ltd
Original Assignee
iFlytek Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by iFlytek Co Ltd filed Critical iFlytek Co Ltd
Priority to CN202011627747.1A priority Critical patent/CN112733829B/en
Publication of CN112733829A publication Critical patent/CN112733829A/en
Application granted granted Critical
Publication of CN112733829B publication Critical patent/CN112733829B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a feature block identification method, electronic equipment and a computer readable storage medium, wherein the method comprises the following steps: acquiring an image to be identified containing a feature block; wherein the feature block has a preset shape; performing contour detection on the image to be identified to obtain a final detection image containing foreground contour information of the image to be identified; and determining the feature blocks in the image to be identified based on the foreground contour information in the final detection image. By the method, the accuracy of feature block identification can be improved.

Description

Feature block identification method, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of intelligent test-paper marking, and in particular, to a feature block identification method, an electronic device, and a computer-readable storage medium.
Background
With the development of computer science and technology, automated information processing capability has improved remarkably. Education is likewise moving step by step toward information automation, and the emergence of intelligent marking systems has freed people from heavy mechanical labor. At present, intelligent marking mainly uses OMR technology: structural information on the answer sheet is extracted and identified through scanning and positioning, and compared with the standard answers of the corresponding objective questions, so that examinee information entry and objective-question scoring can be completed rapidly. Scanning and positioning generally rely on black rectangular or triangular blocks; if a positioning block is not successfully identified, positioning of the answer sheet image fails, and subsequent steps of the intelligent marking workflow, such as objective-question scoring, cannot proceed normally. How to improve the accuracy of positioning block identification in answer sheet images has therefore become a problem of great research value.
Disclosure of Invention
An embodiment of the present application provides a feature block identifying method, including: acquiring an image to be identified containing a feature block; wherein the feature block has a preset shape; performing contour detection on the image to be identified to obtain a final detection image containing foreground contour information of the image to be identified; and determining the feature blocks in the image to be identified based on the foreground contour information in the final detection image.
A second aspect of the embodiment of the present application provides an electronic device, where the electronic device includes a processor and a memory connected to the processor, where the memory is configured to store program data, and the processor is configured to execute the program data to implement the foregoing feature block identification method.
A third aspect of the embodiments of the present application provides a computer-readable storage medium having stored therein program data which, when executed by a processor, is configured to implement the aforementioned feature block identification method.
The beneficial effects of the application are as follows: compared with the prior art, an image to be identified containing feature blocks of a preset shape is acquired, and contour detection is performed on it to obtain a final detection image containing the foreground contour information of the image to be identified, based on which the feature blocks in the image to be identified can be determined. Contour detection filters out the interfering background in the image to be identified well, so the feature blocks can be accurately identified from the foreground contour information in the final detection image, improving the accuracy of feature block identification.
Drawings
In order to more clearly illustrate the technical solutions of the present application, the drawings required in the description of the embodiments will be briefly described below, it being obvious that the drawings described below are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art. Wherein:
FIG. 1 is a flow chart of an embodiment of a feature block identification method provided by the present application;
Fig. 2 is a first schematic diagram of an answer sheet image provided by the application;
fig. 3 is a schematic diagram of a first application scenario of the feature block identification method provided by the present application;
FIG. 4 is a flow chart of another embodiment of a feature block identification method provided by the present application;
FIG. 5 is a flowchart illustrating another embodiment of step S23 in FIG. 4 according to the present application;
FIG. 6 is a flowchart illustrating another embodiment of step S24 in FIG. 4 according to the present application;
fig. 7 is a second schematic diagram of an answer sheet image provided by the application;
fig. 8 is a schematic diagram of a second application scenario of the feature block identification method provided by the present application;
Fig. 9 is a third schematic diagram of an answer sheet image provided by the application;
fig. 10 is a schematic diagram of a third application scenario of the feature block identifying method provided by the present application;
FIG. 11 is a schematic diagram of a fourth application scenario of the feature block identification method provided by the present application;
FIG. 12 is a flowchart illustrating another embodiment of step S243 in the feature block identifying method according to the present application;
FIG. 13 is a flowchart illustrating another embodiment of step S3433 in the feature block identification method according to the present application;
Fig. 14 is a fourth schematic diagram of an answer sheet image provided by the application;
fig. 15 is a schematic diagram of a fifth application scenario of the feature block identifying method provided by the present application;
FIG. 16 is a schematic diagram of a frame of an embodiment of an electronic device provided by the present application;
FIG. 17 is a schematic diagram of a computer storage medium according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms "first" and "second" in the present application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
At present, the basic idea of positioning block identification in an answer sheet is to binarize the corresponding regions of the scanned image with the help of position information from the answer sheet template, and then search for rectangular or triangular positioning blocks via an OpenCV fitting function using the connected-domain area ratio. However, when the scanned image is degraded by unclear printing or poor scanner quality, global or locally adaptive binarization of the corresponding positions may erroneously erase the positioning block or introduce interference traces (scan smudges), causing positioning block identification to fail. In addition, when the answer sheet contains certain characters (rendered in translation as 'emblem', 'plain', 'universal') whose binarized images resemble rectangles, they are easily misidentified as positioning blocks by the connected-domain area ratio. The resulting low identification accuracy prevents subsequent steps of the intelligent marking workflow, such as objective-question scoring, from proceeding normally.
In this regard, the present application provides a feature block identification method that identifies feature blocks by searching for the foreground contour information of the image to be identified. It handles abnormal image quality such as light scans, dark scans, or uneven illumination well, and the extracted foreground contour information clearly reveals the contours of characters, so that characters can be distinguished from feature blocks. This prevents characters from being misidentified as feature blocks and improves the accuracy of feature block identification.
Referring to fig. 1 to 3, fig. 1 is a flowchart of an embodiment of a feature block recognition method provided by the present application, fig. 2 is a first schematic diagram of an answer sheet image provided by the present application, and fig. 3 is a schematic diagram of a first application scenario of the feature block recognition method provided by the present application. Specifically, the method may include the steps of:
step S11: acquiring an image to be identified containing a feature block; wherein the feature block has a preset shape.
In the embodiment of the disclosure, the feature block may be a positioning block in an answer sheet image.
Optionally, the answer sheet image may be obtained by scanning with an answer sheet reader, or by photographing with a high-speed camera, a mobile terminal, or the like. The answer sheet reader is not limited to an optical mark reader (Optical Mark Reader, OMR).
Optionally, before step S11, the answer sheet image may further be preprocessed, which may specifically include gray-scale conversion, image correction, and segmentation of the answer sheet image, so as to obtain the image to be identified containing the feature block. The number of feature blocks in the image to be identified may be one or more, which is not limited herein.
It can be understood that the feature blocks may divide the answer sheet image into two areas, a first area and a second area: the first area, bounded by the feature blocks, is the area carrying valuable information and generally includes the examinee's personal information and answer information; the second area is the blank area outside the feature blocks and generally carries no valuable information.
In an embodiment of the present disclosure, the feature block has a preset shape. A feature block may be a polygon comprising N vertices and N edges, N typically being 4. Specifically, the preset shape may include, but is not limited to, rectangle, square, triangle, and circle. In one implementation, the feature block is solid square in shape.
Step S12: and carrying out contour detection on the image to be identified to obtain a final detection image containing foreground contour information of the image to be identified.
An edge is where the local intensity of the image varies most sharply: in the first derivative, an edge corresponds to a local maximum; in the second derivative, an edge corresponds to a zero crossing.
In the embodiment of the disclosure, the edge detection operator may specifically be used to perform contour detection on the image to be identified, so as to obtain a final detection image containing foreground contour information of the image to be identified. Alternatively, the edge detection operator may include, but is not limited to: roberts, sobel, prewitt, laplacian, log/Marr, canny, kirsch, nevitia.
In the embodiment of the disclosure, a Canny edge detection operator may be adopted to perform contour detection on the image to be identified. The basic idea of the Canny operator for finding edge points is to smooth the original image with a filter and then apply non-maximum suppression to the smoothed image to obtain the final required edge image. The Canny operator can effectively suppress noise while accurately locating edges. Generally, it comprises four steps: Gaussian filtering to smooth the image, computing gradient magnitudes and directions, non-maximum suppression of the gradient magnitudes, and detecting and connecting edges using a double-threshold algorithm, described in detail in the disclosed embodiments below.
As shown in fig. 2 to 3, fig. 2 shows an answer sheet image captured in a dark scanning scene, and region 3a in fig. 3 is obtained from it by a global search. Fig. 3a shows the image to be identified containing the feature block region in fig. 2, specifically the left edge region of fig. 2; the image is so dark that the positioning block of the answer sheet cannot be clearly identified from it. Fig. 3b shows the image obtained by directly binarizing the image to be identified: because the image is dark, binarization can hardly filter out the interfering background or separate the true foreground from the background, so the position of the positioning block can scarcely be identified from fig. 3b. Fig. 3c shows the final detection image containing the foreground contour information of the image to be identified, obtained after contour detection with the feature block identification method provided by the present application; based on this final detection image, the position of the feature block can be accurately identified.
Step S13: and determining the feature blocks in the image to be identified based on the foreground contour information in the final detection image.
It can be understood that after the final detection image containing the foreground contour information of the image to be identified is obtained, the foreground contour information corresponding to the preset feature information of the feature block can be screened out from the final detection image according to the preset feature information of the feature block, so that the position information of the feature block in the image to be identified can be obtained, and the feature block in the image to be identified can be determined.
According to the above scheme, an image to be identified containing feature blocks of a preset shape is acquired, and contour detection is performed on it to obtain a final detection image containing the foreground contour information of the image to be identified; the feature blocks in the image to be identified can then be determined based on that foreground contour information.
Referring to fig. 4 to 11, fig. 4 is a schematic flow chart of another embodiment of the feature block identifying method provided by the present application, fig. 5 is a schematic flow chart of another embodiment of step S23 in fig. 4 provided by the present application, fig. 6 is a schematic flow chart of another embodiment of step S24 in fig. 4 provided by the present application, fig. 7 is a second schematic diagram of an answer sheet image provided by the present application, fig. 8 is a schematic diagram of a second application scenario of the feature block identifying method provided by the present application, fig. 9 is a third schematic diagram of an answer sheet image provided by the present application, fig. 10 is a schematic diagram of a third application scenario of the feature block identifying method provided by the present application, and fig. 11 is a schematic diagram of a fourth application scenario of the feature block identifying method provided by the present application.
Specifically, the method may include the steps of:
Step S21: acquiring an image to be identified containing a feature block; wherein the feature block has a preset shape.
Optionally, before step S22, the image to be identified may also be subjected to filtering processing, such as Gaussian filtering. Since edge detection is susceptible to noise, filtering the image to be identified before edge detection makes it easier to find the contours of the valid foreground.
In the embodiment of the disclosure, steps S22 and S23 are another implementation manner of the step S12 described above:
Step S22: and carrying out edge detection on the image to be identified by using a preset edge detection operator so as to obtain an initial detection image.
The preset edge detection operator can be selected according to an actual application scene, and is not limited herein. In the embodiment of the disclosure, the preset edge detection operator is a Canny edge detection operator, and the Canny edge detection operator can not only effectively inhibit noise, but also accurately position the edge, so that the Canny edge detection operator is utilized to carry out edge detection on the image to be identified, the accuracy of edge detection can be improved, and the accuracy of feature block identification can be improved.
Generally, the Canny edge detection operator comprises the steps of:
(1) Gaussian filtered smooth image
In general, the Canny edge detection algorithm uses a Gaussian smoothing filter to convolve the image and reduce noise; that is, the gray values of each pixel and its neighborhood are weighted and averaged according to a parameterized rule, effectively filtering image noise. Other filtering methods may also be applied to the image as needed. Gaussian filtering of the image can be implemented either as two passes of a one-dimensional Gaussian kernel or as a single two-dimensional Gaussian convolution.
Under otherwise identical conditions, the larger the Gaussian convolution kernel, the stronger the smoothing effect: the image becomes more blurred and some edge details are more easily lost. Conversely, the smaller the kernel, the weaker the smoothing, the clearer the image, and the less likely details are lost. The size of the Gaussian convolution kernel in the embodiments of the present disclosure may therefore be selected according to the actual scene, which is not limited herein.
(2) Calculating gradient magnitude and direction
Specifically, the partial derivatives of the image in the horizontal (x) and vertical (y) directions may be computed using convolution operators, and the gradient magnitude and direction of each pixel in the image to be identified are then calculated from these partial derivatives. The size of the convolution operator may be chosen for the actual scene, for example a 2×2 matrix or a 3×3 matrix.
(3) Non-maximum suppression of gradient amplitude
Suspected edge points can be roughly obtained from the element values of the image gradient magnitude matrix, but these are not necessarily true edge points; retaining local gradient maxima and suppressing non-maxima improves the accuracy of edge point identification. Non-maximum suppression uses the gradient direction: within each pixel's eight-neighborhood, its gradient magnitude is compared with those of the adjacent pixels along the gradient direction, and only a pixel whose magnitude is the local maximum is retained as an edge point, which eliminates the influence of other edge pixels.
(4) Detecting and connecting edges using a dual threshold algorithm
A high threshold and a low threshold are selected to operate on the image after non-maximum suppression. If the magnitude at a pixel position exceeds the high threshold, the pixel is retained as an edge pixel; if it is less than the low threshold, the pixel is excluded; if it lies between the two thresholds, the pixel is retained only when it is connected to a pixel above the high threshold.
The high-threshold result removes noise to a great extent but may also lose valid edge information, while the low-threshold result retains more edge information. The two results are therefore compared and combined: missing information is supplemented from the low-threshold result, and unclosed edges are linked into complete contours, so that the image edges are complete.
Step S23: and filtering the first connected domain in the initial detection image to obtain a final detection image containing foreground contour information of the image to be identified.
After edge detection is performed on the image to be identified, the foreground contours can be obtained preliminarily in the resulting initial detection image, but noise may still remain in it for various reasons. To further remove this noise and obtain more accurate foreground contours, the embodiment of the present disclosure denoises the initial detection image using the length characterization information of its connected domains. Specifically, as shown in fig. 5, step S23 may include the following steps S231 to S234:
step S231: and determining the length characterization information of the first connected domain in the initial detection image.
The first connected domain is a connected domain in the initial detection image, and the length characterization information may be the perimeter (C). A connected domain (Connected Component, or connected region) generally refers to an image region (blob) formed by foreground pixels that have the same pixel value and are adjacent in position. Connected component analysis (Connected Component Analysis, also called Connected Component Labeling) refers to finding and marking the individual connected regions in an image.
Specifically, connected domain analysis may be performed on the initial detection image so that each connected domain in it is labeled, from which the perimeter and area of each connected domain can be obtained. The perimeter of a connected domain is the length of the contour line enclosed by its outermost pixels, and the area of a connected domain is the area enclosed by that contour line.
Alternatively, in embodiments of the present disclosure, the perimeter and area may be in units of pixels (pixels).
Step S232: and obtaining fitting parameters based on the length characterization information of the first communication domain.
The fitting parameter may be a minimum fitting side length; specifically, the minimum fitting side length used in polygon fitting may be obtained based on the perimeter of the first connected domain, and is denoted eplison.
Step S232 may specifically include: acquiring a first ratio between the length characterization information of the first connected domain and a first value; selecting the smaller of the first ratio and a second value as the undetermined fitting side length; and selecting the larger of the undetermined fitting side length and a third value as the minimum fitting side length.
The magnitudes of the first value, the second value, and the third value may be selected according to the actual scene, which is not limited herein.
In some embodiments, a first ratio between the perimeter of the first connected domain and the first value is obtained; the smaller of the first ratio and the second value is selected as the undetermined fitting side length; and the larger of the undetermined fitting side length and the third value is selected as the minimum fitting side length, which may be expressed by the following formula:

eplison = max(min(C / α, β), γ)

wherein eplison denotes the minimum fitting side length, C denotes the perimeter of the first connected domain, α denotes the first value, β denotes the second value, and γ denotes the third value.
In some embodiments, where the first value is 20, the second value is 10, and the third value is 2, the minimum fitting side length eplison may be calculated using the following equation:

eplison = max(min(C / 20, 10), 2)

For example, when the perimeter C of the connected domain is 120, the minimum fitting side length eplison is 6. eplison represents the precision of the polygon fitting: for the same connected domain, the smaller eplison is, the higher the fitting precision and the more vertices the fitted polygon has; conversely, the larger eplison is, the lower the fitting precision and the fewer vertices the fitted polygon has. For example, when eplison is calculated to be 6 and the connected domain contains two connected segments meeting at a small angle, if each segment is shorter than 6 the two may be treated as one segment, whereas if each is longer than 6 they will very likely be approximated as two segments, subject to other factors in the polygon fitting function.
According to the embodiment of the disclosure, the minimum fitting side length is calculated from the perimeter of the first connected domain, so that a suitable minimum fitting side length can be computed accurately for each first connected domain, and the shape of the polygon fitted with it can more closely approximate the true shape of the first connected domain.
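The clamped formula above can be written directly as a small helper (the function name is mine, not the patent's):

```python
def min_fitting_side_length(perimeter, alpha=20.0, beta=10.0, gamma=2.0):
    """eplison = max(min(C / alpha, beta), gamma): the ratio C / alpha,
    clamped to at most beta and at least gamma."""
    return max(min(perimeter / alpha, beta), gamma)
```

With the example values from the text, a perimeter of 120 gives 6, and the result never leaves the [gamma, beta] range regardless of the perimeter.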
Step S233: and performing polygon fitting on the first connected domain based on the fitting parameters to obtain preset characteristic information of a fitting polygon corresponding to the first connected domain.
Optionally, the preset characteristic information of the fitted polygon includes at least one of the number of vertices and the area of the fitted polygon.
Optionally, the approxPolyDP function may be used to perform polygon fitting on the first connected domain, so that the number of vertices and the area of the fitted polygon corresponding to the first connected domain can be obtained. Other polygon fitting functions may also be used.
Step S234: and filtering out a first connected domain of which the preset characteristic information meets a first preset requirement in the initial detection image to obtain a final detection image.
Optionally, the first preset requirement may be selected according to the actual scene. For example, when the feature block is rectangular, the first preset requirement may be that the number of vertices of the fitted polygon is 4 and its area lies within a range of 300-350 pixels; if the number of vertices of the fitted polygon of a first connected domain in the initial detection image is not equal to 4, or its area is not within the 300-350 pixel range, the corresponding first connected domain is filtered out to obtain the final detection image.
Step S24: and determining the feature blocks in the image to be identified based on the foreground contour information in the final detection image.
As shown in fig. 6, in an embodiment of the present disclosure, step S24 may include substeps S241, S242, and S243:
Step S241: taking the final detection image as a reference image; or performing morphological operation on the final detection image to obtain a reference image.
In some embodiments, the final detection image may be directly used as a reference image, a reference block having a preset feature in the final detection image is detected, and a feature block in the image to be identified is determined based on the position of the reference block.
In other embodiments, if the feature block is rectangular, the first kernel function and the second kernel function may be used to perform a first morphological operation and a second morphological operation on the final detected image, respectively, to filter out diagonal segments and other small interference terms (e.g., segments having a width less than the size of the first kernel function, and segments having a length less than the size of the second kernel function).
Specifically, morphological operations are performed on the final detection image to obtain a reference image, including: performing a first morphological operation on the final detection image by using a first kernel function to obtain a first morphological image; performing a second morphological operation on the final detection image by using a second kernel function to obtain a second morphological image; and fusing the first morphological image and the second morphological image to obtain a reference image.
The size of the first kernel function is determined according to the width of the feature block in the template image, and the first morphological operation is transverse expansion corrosion; the second kernel is sized according to the height of the feature block in the template image and the second morphological operation is vertical dilation erosion.
Alternatively, the sizes of the first kernel function and the second kernel function may be set according to the actual situation. In some embodiments, the first kernel may be two-thirds of the width of the feature block in the template image, and the second kernel may be two-thirds of the height of the feature block in the template image. If the feature block is square, both the first and second kernels may be two-thirds of the feature block side length in the template image.
It can be understood that other interference items such as characters may exist in the final detection image. Characters usually appear as intermittent transverse or vertical line segments whose length is short, generally smaller than two thirds of the length and width of the feature block. Therefore, line segments whose width is smaller than two thirds of the feature block width, and line segments whose length is smaller than two thirds of the feature block length, can be filtered out of the final detection image by the above method, while the contour information of the feature block is retained.
As shown in fig. 7 to 8, fig. 7 shows an answer sheet image with uneven illumination, and the area 8a in the figure is the region obtained by local search. Fig. 8 includes 8a-8f. 8a shows an image to be identified containing a feature block area taken from fig. 7; specifically, 8a is the lower left corner edge area of fig. 7. 8b shows the image obtained by directly binarizing the image to be identified; because an interference background exists in the image to be identified, it can be seen in the directly binarized image that the feature block is covered by the interference background, so it is difficult to detect the feature block accurately.
Please continue to refer to 8c-8f in fig. 8. 8c is the initial detection image obtained after performing edge detection on the image to be identified with the preset edge detection operator. 8d is the first morphological image after transverse expansion corrosion is performed on 8c; as can be seen from 8d, the transverse expansion corrosion filters out the vertical and oblique line segments in the initial detection image, as well as the short transverse segments that do not meet the width requirement of the feature block, so that a clearer transverse contour line is obtained. 8e is the second morphological image after vertical expansion corrosion is performed on 8c; as can be seen from 8e, the vertical expansion corrosion filters out the transverse and oblique line segments in the initial detection image, as well as the short vertical segments that do not meet the length requirement of the feature block, so that a clearer vertical contour line is obtained. 8f is the reference image obtained by fusing the first morphological image 8d and the second morphological image 8e; after the first and second morphological operations, the oblique segments and other small interference items in the initial detection image have been filtered out.
As shown in fig. 9 and 10, fig. 9 shows an answer sheet image in which a feature block is disturbed, and the region 10a in fig. 10 is the region obtained by local search. Fig. 10 includes 10a to 10c. 10a shows an image to be identified containing a feature block region of fig. 9; specifically, 10a is the upper left corner edge region of fig. 9. 10b shows the image obtained by directly binarizing the image to be identified; because an interference background exists in the image to be identified (a folded corner of the test paper is connected with or shields the feature block), the connected domain of the feature block in the directly binarized image 10b mistakenly includes the folded corner region, so the area of the connected domain becomes large, and only a solid image of the whole connected domain can be detected while the disturbed feature block itself cannot be detected. With continued reference to 10c, 10c shows the reference image obtained by the feature block identification method of the present application: the folded corner (black triangle area) is converted into an oblique line by foreground contour extraction, and the oblique line is then filtered out by the transverse and vertical expansion corrosion, i.e. the folded corner is filtered out, so that the actual feature block is retained. In addition, it can be seen from 10c that the text "notice" is also effectively filtered out.
With continued reference to fig. 11, fig. 11 is a schematic diagram of a fourth application scenario of the feature block identification method. Answer sheets often include characters similar to a rectangle, such as "badge", "common", "same", "effect", etc., which are easily misidentified as feature blocks after direct binarization, reducing the accuracy of feature block identification. As shown in 11a to 11c, 11a shows an image to be recognized including feature blocks and approximately rectangular characters, 11b shows the directly binarized image, and 11c shows the image containing the foreground contour information of the image to be recognized. When the polygon fitting algorithm is applied to 11b, characters such as "common" and "high" are close to rectangles and the ratio of the foreground value to the connected-domain area is large, so these characters are easily misidentified as feature blocks. After horizontal and vertical expansion corrosion is applied to 11c, the characters are filtered out and the hollow rectangular frames are retained, eliminating interference factors such as characters and improving the accuracy of feature block identification.
Step S242: and detecting a reference block with preset characteristics in the reference image.
Step S243: feature blocks in the image to be identified are determined based on the locations of the reference blocks.
In general, rectangular feature blocks appear as hollow rectangular frames in the reference image, so preset features corresponding to a hollow rectangular frame can be set; the reference block in the reference image can then be detected based on the preset features, and the feature blocks in the image to be identified determined based on the positions of the reference blocks.
Wherein the preset features include, but are not limited to, four sides and four vertices.
According to the scheme, the image to be identified is subjected to edge detection by utilizing the preset edge detection operator to obtain the initial detection image, and then the first connected domain in the initial detection image is filtered to obtain the final detection image containing the foreground contour information of the image to be identified, so that the first connected domain which does not meet the corresponding requirement of the feature block can be filtered, and the accuracy of feature block identification is improved.
Further, determining length characterization information of the first connected domain in the initial detection image; obtaining fitting parameters based on the length characterization information of the first communication domain; performing polygon fitting on the first connected domain based on fitting parameters to obtain preset characteristic information of a fitting polygon corresponding to the first connected domain; the first connected domain, of which the preset characteristic information does not meet the first preset requirement, in the initial detection image is filtered, the final detection image is obtained, fitting parameters for fitting polygons can be obtained by utilizing the length characterization information of the connected domain, polygons which are closer to the real shapes can be fitted based on the fitting parameters, and therefore the first connected domain which is not a characteristic block can be accurately filtered based on the preset characteristic parameters of the polygons, and therefore accuracy of characteristic block identification is improved.
Further, the final detection image is taken as a reference image; or performing morphological operation on the final detection image to obtain a reference image; detecting a reference block with preset characteristics in a reference image; feature blocks in the image to be identified are determined based on the locations of the reference blocks. Specifically, performing a first morphological operation on the final detection image by using a first kernel function to obtain a first morphological image; performing a second morphological operation on the final detection image by using a second kernel function to obtain a second morphological image; and fusing the first morphological image and the second morphological image to obtain a reference image. The size of the first kernel function is determined according to the width of the feature block in the template image, and the first morphological operation is transverse expansion corrosion; the second kernel is sized according to the height of the feature block in the template image and the second morphological operation is vertical dilation erosion. By adopting the mode, the oblique line segments or other small interference items can be filtered, and the edge information image of the feature block is reserved.
Referring to fig. 12 to 15, fig. 12 is a flowchart of another embodiment of step S243 in the feature block recognition method provided by the present application, fig. 13 is a flowchart of another embodiment of step S3433 in the feature block recognition method provided by the present application, fig. 14 is a fourth schematic view of an answer sheet image provided by the present application, and fig. 15 is a schematic view of a fifth application scenario of the feature block recognition method provided by the present application.
Unlike the above embodiment, some images to be identified include both hollow feature blocks and solid feature blocks, such as the M area shown in fig. 7. The outlines of hollow and solid feature blocks in the reference image are both hollow rectangular frames and therefore cannot be distinguished there, while it is the solid feature blocks that are finally to be detected. Therefore, in the embodiment of the present disclosure, after the reference blocks in the reference image are detected, a hollow/solid detection is further performed on the reference blocks to distinguish the hollow feature blocks from the solid feature blocks, which reduces errors and improves the accuracy of feature block identification.
Specifically, in the embodiment of the present disclosure, step S243 may include steps S3431, S3432, and S3433:
step S3431: based on the position of the reference block, intercepting at least part of the image to be identified as a candidate detection image; wherein the candidate detection image includes an area of the reference block corresponding to the position.
In some embodiments, the image to be identified may be directly used as a candidate detection image, or a part of the image to be identified may be used as a candidate detection image, or the candidate detection image of the area corresponding to the image to be identified may be re-intercepted according to the answer sheet image.
Step S3432: and carrying out binarization processing on the candidate detection image to obtain a binarized image.
The image binarization process screens the whole image with a suitable threshold (called the binarization threshold): all gray values above the threshold are set to a maximum value (e.g., 255) and all gray values below the threshold are set to a minimum value (e.g., 0), so that the processed image contains only two gray values, black and white. The binarization threshold is the scale that separates the foreground from the background, and it should be selected so that the key information of the image is preserved while the interference of noise is reduced.
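The thresholding rule described above can be sketched in a few lines, assuming NumPy; the threshold value 127 is only illustrative, whereas the embodiment computes it from the image as described next:

```python
import numpy as np

def binarize(gray, threshold):
    # Pixels above the threshold become the maximum value (255),
    # the rest become the minimum value (0), as described above.
    return np.where(gray > threshold, 255, 0).astype(np.uint8)

gray = np.array([[10, 200], [128, 90]], np.uint8)
print(binarize(gray, 127))
```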
In the embodiment of the disclosure, the Rayleigh entropy of the candidate detection image can be used as the binarization threshold for binarizing the candidate detection image to obtain a binarized image. The Rayleigh entropy reflects the busyness, i.e. the stability, of an image well, and calculating the binarization threshold of the image through the Rayleigh entropy preserves the key information of the image to a great extent while reducing noise interference. The Rayleigh entropy R(A, x) can be calculated by the formula R(A, x) = (x^H A x) / (x^H x):
where x is a non-zero vector and A is an n×n Hermitian matrix (Hermite matrix). A Hermitian matrix is a matrix whose conjugate transpose equals the matrix itself, i.e. A^H = A; if A is a real matrix satisfying A^H = A (equivalently A^T = A), the matrix A is likewise Hermitian.
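For a non-zero vector x and Hermitian matrix A, the quantity described above can be written as R(A, x) = (x^H A x) / (x^H x). A minimal numerical sketch of this computation, assuming NumPy (the mapping from this quotient to a concrete gray-level threshold is not specified here and is omitted):

```python
import numpy as np

def rayleigh_quotient(A, x):
    # R(A, x) = (x^H A x) / (x^H x), defined for Hermitian A and x != 0.
    x = np.asarray(x, dtype=complex).reshape(-1)
    num = np.vdot(x, A @ x)   # np.vdot conjugates its first argument: x^H A x
    den = np.vdot(x, x)       # x^H x
    return (num / den).real   # real-valued for Hermitian A

A = np.array([[2.0, 0.0], [0.0, 5.0]])   # real symmetric, hence Hermitian
print(rayleigh_quotient(A, [1.0, 0.0]))  # 2.0: an eigenvalue, since [1,0] is an eigenvector
print(rayleigh_quotient(A, [1.0, 1.0]))  # 3.5: lies between the eigenvalues 2 and 5
```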
Step S3433: and obtaining the feature block in the image to be identified based on the second connected domain in the binarized image.
As shown in fig. 13, in some embodiments, step S3433 may include steps S34331, S34332, and S34333:
step S34331: and filtering out a second connected domain meeting a second preset requirement in the binarized image.
Specifically, a second ratio between the foreground value and the area of each second connected domain in the binarized image can be obtained, and the second connected domains whose second ratio does not meet a preset threshold can be filtered out, so that the hollow feature blocks are filtered.
It can be understood that, in the above binarized image, the foreground value corresponding to the hollow feature block is smaller than the foreground value corresponding to the solid feature block, so that, in the case of a small difference in area, the second ratio between the foreground value and the area of the second connected domain corresponding to the hollow feature block is smaller than the second ratio between the foreground value and the area of the second connected domain corresponding to the solid feature block. The units of the foreground value and the area can be pixels.
In general, in a print defect scene, the area of the connected domain corresponding to the print defect part is much larger than that of the connected domain of a normally printed part, so the calculated second ratio is smaller, and the second connected domain with the print defect can therefore be filtered out by setting a preset threshold.
Specifically, whether the second ratio meets the preset threshold may be judged by whether the second ratio is greater than the preset threshold, and the magnitude of the preset threshold may be selected according to the actual application scenario, which is not limited herein.
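The second-ratio test can be sketched as follows; the 0.8 threshold is an assumed, scene-dependent value, not one given in the embodiment:

```python
def is_solid_block(foreground_pixels, area_pixels, ratio_threshold=0.8):
    # Second ratio = foreground value / connected-domain area. A hollow
    # frame has far fewer foreground pixels than its enclosed area, so
    # its ratio falls below the threshold and the domain is filtered.
    return foreground_pixels / area_pixels >= ratio_threshold

print(is_solid_block(1160, 1200))  # solid block: ratio ~0.97, kept
print(is_solid_block(190, 1200))   # hollow frame: ratio ~0.16, filtered
```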
In some embodiments, because a solid connected domain may itself be an interference factor such as a black edge, the application may further filter such interference factors out of the binarized image using the length characterization information of the second connected domains. Specifically, the length characterization information of a second connected domain in the binarized image may be determined, and fitting parameters obtained based on the length characterization information of the second connected domain; polygon fitting is performed on the second connected domain based on the fitting parameters to obtain preset characteristic information of the fitting polygon corresponding to the second connected domain; and the second connected domains in the binarized image whose preset characteristic information meets a third preset requirement are filtered out.
The length characterization information is perimeter, and the preset characteristic information of the fitting polygon comprises at least one of the number of vertexes and the area of the fitting polygon.
Optionally, the third preset requirement may be selected according to the actual scene. For example, when the feature block is rectangular, the third preset requirement may be set as: the number of vertices of the fitted polygon is not equal to 4, or the area of the fitted polygon is not within a range of 300-350 pixels. The second connected domains in the binarized image meeting this requirement are filtered out; black edges, whose fitted vertex count is typically 2, are thereby filtered out.
In some embodiments, if the fitting parameter is a minimum fitting side length, the fitting parameter is obtained based on the length characterization information of the second connected domain, including: acquiring a first ratio between the length characterization information of the second connected domain and the first numerical value; selecting the smaller one from the first ratio and the second value as the undetermined fitting side length; and selecting the larger one from the undetermined fitting side length and the third numerical value as the smallest fitting side length. In the embodiment of the present disclosure, the above steps may be referred to step S232 in the above embodiment, which is not described herein again.
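The three-step clamping described above can be sketched as follows. The three numeric values are illustrative assumptions (the first value of 20 is inferred from the perimeter-120, epsilon-6 example given earlier; the embodiment does not fix the second and third values):

```python
def min_fitting_side(perimeter, first_value=20.0,
                     second_value=10.0, third_value=2.0):
    # (1) first ratio = length characterization / first value;
    # (2) pending side length = the smaller of the ratio and second value;
    # (3) minimum fitting side length = the larger of pending and third value.
    first_ratio = perimeter / first_value
    pending = min(first_ratio, second_value)
    return max(pending, third_value)

print(min_fitting_side(120))   # 6.0, matching the earlier example
print(min_fitting_side(1000))  # capped at the second value, 10.0
print(min_fitting_side(10))    # floored at the third value, 2.0
```

The cap and floor keep epsilon in a usable range for very large or very small connected domains.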
Step S34332: and selecting the remaining second connected domain matched with the position of the reference block in the binarized image as a candidate block.
Specifically, the position information of the reference block in the reference image and the position information of the remaining second connected domain in the binarized image may be acquired, and the position information of the reference block and the position information of the second connected domain are matched, so that the second connected domain in the binarized image, which is identical or substantially identical to the position information of the reference block, is selected as the candidate block. In other embodiments, a reference block in the reference image, which is identical or substantially identical to the position information of the second connected domain, may also be selected as the candidate block.
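One way to sketch the "identical or substantially identical position" test is a center-distance comparison on (x, y, w, h) boxes; the tolerance value is an assumption:

```python
def centers_match(box_a, box_b, tol=5):
    # Boxes are (x, y, w, h). "Substantially identical" positions are
    # approximated here as center coordinates within tol pixels.
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ca = (ax + aw / 2.0, ay + ah / 2.0)
    cb = (bx + bw / 2.0, by + bh / 2.0)
    return abs(ca[0] - cb[0]) <= tol and abs(ca[1] - cb[1]) <= tol

reference_block = (40, 60, 30, 20)
candidates = [(41, 59, 30, 21), (200, 60, 30, 20)]
matches = [c for c in candidates if centers_match(reference_block, c)]
print(matches)  # only the first candidate matches the reference position
```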
Step S34333: and obtaining the characteristic blocks in the image to be identified based on the candidate blocks.
Specifically, comparing the position information of the candidate block with the position information of the feature block in the template image; based on the comparison result, it is determined whether the candidate block is a feature block in the image to be identified.
As shown in the Q area of fig. 9, the Q area includes two solid rectangular blocks: the left one is the feature block to be detected, and the right one is an interference factor. After the above steps, both solid rectangular blocks may be retained as candidate blocks. Therefore, in order to identify the true feature block, the position information of the candidate blocks is finally compared with the position information of the feature block in the template image, and a candidate block whose position information is the same as that of the feature block in the template image is identified as the feature block in the image to be identified.
Referring to fig. 14 to 15, fig. 14 shows an answer sheet image with uneven illumination. Fig. 15 includes 15a to 15e. 15a shows an image to be identified containing a feature block area of fig. 14; specifically, 15a is the left edge area of fig. 14. 15b shows the image obtained by directly binarizing the image to be identified; because the uneven illumination in the image to be identified interferes with the background, it can be seen in the directly binarized image that the background at the lower boundary has become foreground, so the feature blocks are filtered out and the recognition effect is poor. 15c shows the final detection image processed by the feature block recognition method provided by the application, 15d shows the reference image after the final detection image has undergone the first and second morphological operations, and 15e shows the image after the reference image has undergone hollow feature block filtering and black edge processing. Compared with 15b, the scheme provided by the embodiment of the disclosure handles scenes with uneven illumination or a dirty scanner well, and the accuracy of feature block identification is higher.
According to the scheme, at least part of the image to be identified is taken as a candidate detection image based on the position of the reference block, wherein the candidate detection image includes an area corresponding to the position of the reference block; binarization processing is performed on the candidate detection image to obtain a binarized image, and the feature block in the image to be identified is then obtained based on the second connected domains in the binarized image. Specifically, the Rayleigh entropy of the candidate detection image is taken as the binarization threshold, and binarization processing is performed on the candidate detection image to obtain the binarized image; the second connected domains meeting the second preset requirement in the binarized image are filtered out; the remaining second connected domain matched with the position of the reference block in the binarized image is selected as a candidate block; and the feature block in the image to be identified is obtained based on the candidate block. In this way, the reference block can be subjected to hollow/solid detection to distinguish hollow feature blocks from solid feature blocks, which reduces errors and improves the accuracy of feature block identification. Moreover, calculating the Rayleigh entropy as the binarization threshold of the gray image allows the background and the foreground to be properly separated, so the binarization effect is better and the accuracy of feature block identification is higher.
Further, the fitting parameters are obtained by determining the length representation information of the second connected domain in the binarized image based on the length representation information of the second connected domain, then polygon fitting is carried out on the second connected domain based on the fitting parameters, the preset feature information of the fitting polygon corresponding to the second connected domain is obtained, and then the second connected domain, of which the preset feature information meets the third preset requirement, in the binarized image is filtered, so that the second connected domain corresponding to interference factors such as black edges can be filtered, and the accuracy of feature block identification is improved.
Referring to fig. 16, fig. 16 is a schematic diagram of a frame of an embodiment of an electronic device according to the present application.
The electronic device 600 includes: a processor 610 and a memory 620 connected to the processor 610, the memory 620 for storing program data, the processor 610 for executing the program data to implement the steps of any of the method embodiments described above.
Electronic device 600 includes, but is not limited to, televisions, desktop computers, laptop computers, handheld computers, wearable devices, head mounted displays, reader devices, portable music players, portable gaming devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks, cellular phones, personal digital assistants (PDA), augmented reality (AR) devices, and virtual reality (VR) devices.
In particular, the processor 610 is operative to control itself and the memory 620 to implement the steps in any of the method embodiments described above. The processor 610 may also be referred to as a CPU (Central Processing Unit). The processor 610 may be an integrated circuit chip having signal processing capabilities. The processor 610 may also be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 610 may be implemented jointly by a plurality of integrated circuit chips.
Referring to fig. 17, fig. 17 is a schematic diagram of a frame of an embodiment of a computer storage medium according to the present application.
The computer readable storage medium 700 stores program data 710 for implementing the steps of any of the method embodiments described above when the program data 710 is executed by a processor.
The computer readable storage medium 700 may be a medium capable of storing a computer program, such as a USB disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, or may be a server storing the computer program, where the server may send the stored computer program to another device for running, or may run the stored computer program itself.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical, or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in essence, or the part contributing to the prior art, or all or part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes: a USB disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program code.
The foregoing description is only illustrative of the present application and is not intended to limit the scope of the application, and all equivalent structures or equivalent processes or direct or indirect application in other related technical fields are included in the scope of the present application.

Claims (12)

1. A method for identifying a feature block, comprising:
acquiring an image to be identified containing a feature block, wherein the feature block has a preset shape;
performing contour detection on the image to be identified by using a preset edge detection operator to obtain a final detection image containing foreground contour information of the image to be identified, wherein the foreground contour information comprises edge information of each object in the image to be identified;
performing a first morphological operation on the final detection image by using a first kernel to obtain a first morphological image; performing a second morphological operation on the final detection image by using a second kernel to obtain a second morphological image; and fusing the first morphological image and the second morphological image to obtain a reference image, wherein the first morphological operation comprises lateral dilation and erosion, and the second morphological operation comprises vertical dilation and erosion;
detecting a reference block having preset characteristics in the reference image; and
determining the feature block in the image to be identified based on the position of the reference block.
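The morphology-and-fuse step of claim 1 (lateral dilation-erosion, vertical dilation-erosion, then fusion) can be sketched with plain NumPy. This is a minimal illustration, not the patented implementation: the kernel width `k`, the shift-based sliding-window morphology, and the wrap-around border handling via `np.roll` are all assumptions of this sketch.

```python
import numpy as np

def morph_1d(img, k, axis, op):
    # Slide a length-k window along `axis`; op=np.maximum gives 1-D dilation,
    # op=np.minimum gives 1-D erosion. np.roll wraps at the border (a
    # simplification acceptable for a sketch, not for production code).
    out = img.copy()
    r = k // 2
    for off in range(-r, r + 1):
        out = op(out, np.roll(img, off, axis=axis))
    return out

def fuse_reference(edges, k=3):
    # Lateral (horizontal) dilation followed by erosion -> first morphological image
    h = morph_1d(morph_1d(edges, k, axis=1, op=np.maximum), k, axis=1, op=np.minimum)
    # Vertical dilation followed by erosion -> second morphological image
    v = morph_1d(morph_1d(edges, k, axis=0, op=np.maximum), k, axis=0, op=np.minimum)
    # Fuse the two morphological images (pixel-wise OR for binary images)
    return np.maximum(h, v)
```

For a binary edge map, the horizontal pass bridges small gaps between horizontally adjacent edge pixels while the vertical pass does the same vertically; fusing the two yields the reference image in which rectangular reference blocks are easier to detect.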
2. The method according to claim 1, wherein the performing contour detection on the image to be identified by using the preset edge detection operator to obtain the final detection image containing the foreground contour information of the image to be identified comprises:
performing edge detection on the image to be identified by using the preset edge detection operator to obtain an initial detection image; and
filtering a first connected domain in the initial detection image to obtain the final detection image containing the foreground contour information of the image to be identified.
3. The method according to claim 2, wherein the filtering the first connected domain in the initial detection image to obtain the final detection image containing the foreground contour information of the image to be identified comprises:
determining length characterization information of the first connected domain in the initial detection image;
obtaining fitting parameters based on the length characterization information of the first connected domain;
performing polygon fitting on the first connected domain based on the fitting parameters to obtain preset characteristic information of a fitting polygon corresponding to the first connected domain; and
filtering out, from the initial detection image, a first connected domain whose preset characteristic information does not meet a first preset requirement, to obtain the final detection image.
4. The method according to claim 1, wherein the determining the feature block in the image to be identified based on the position of the reference block comprises:
intercepting at least part of the image to be identified as a candidate detection image based on the position of the reference block, wherein the candidate detection image comprises an area at a position corresponding to the reference block;
performing binarization processing on the candidate detection image to obtain a binarized image; and
obtaining the feature block in the image to be identified based on a second connected domain in the binarized image.
5. The method according to claim 4, wherein the performing binarization processing on the candidate detection image to obtain the binarized image comprises:
taking the Rayleigh entropy of the candidate detection image as a binarization threshold, and performing binarization processing on the candidate detection image to obtain the binarized image;
and/or, the obtaining the feature block in the image to be identified based on the second connected domain in the binarized image comprises:
filtering a second connected domain meeting a second preset requirement in the binarized image;
selecting, from the remaining second connected domains in the binarized image, a second connected domain matching the position of the reference block as a candidate block; and
obtaining the feature block in the image to be identified based on the candidate block.
6. The method according to claim 5, wherein the filtering the second connected domain meeting the second preset requirement in the binarized image comprises:
acquiring a second ratio between a foreground value and an area of each second connected domain in the binarized image, and filtering out a second connected domain whose second ratio does not meet a preset threshold; and/or
determining length characterization information of a second connected domain in the binarized image, and obtaining fitting parameters based on the length characterization information of the second connected domain; performing polygon fitting on the second connected domain based on the fitting parameters to obtain preset characteristic information of a fitting polygon corresponding to the second connected domain; and filtering out a second connected domain whose preset characteristic information meets a third preset requirement.
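The first branch of claim 6 (dropping connected domains whose foreground-to-area ratio fails a threshold) can be sketched as follows. The half-open `(r0, c0, r1, c1)` bounding-box convention, the `min_ratio` value, and the use of the bounding-box area as the denominator are assumptions of this sketch, not details taken from the patent.

```python
import numpy as np

def filter_by_density(mask, boxes, min_ratio=0.4):
    # Keep each connected-domain bounding box whose foreground density
    # (foreground pixels / box area) meets the threshold; drop the rest.
    kept = []
    for (r0, c0, r1, c1) in boxes:  # half-open bounds, an assumption
        patch = mask[r0:r1, c0:c1]
        ratio = patch.sum() / patch.size
        if ratio >= min_ratio:
            kept.append((r0, c0, r1, c1))
    return kept
```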
7. The method according to claim 3 or 6, wherein the length characterization information is a perimeter, and the preset characteristic information of the fitting polygon comprises at least one of a number of vertices of the fitting polygon and an area of the fitting polygon.
8. The method according to claim 3, wherein the fitting parameter is a minimum fitting side length, and the obtaining the fitting parameters based on the length characterization information of the first connected domain comprises:
acquiring a first ratio between the length characterization information of the first connected domain and a first value;
selecting the smaller of the first ratio and a second value as a pending fitting side length; and
selecting the larger of the pending fitting side length and a third value as the minimum fitting side length, wherein the first value, the second value and the third value are preset values.
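The clamp described in claims 8 and 9 reduces to a `min` followed by a `max`. A minimal sketch, assuming hypothetical preset values (the patent only states that the three values are preset; `50.0`, `20.0`, and `3.0` are invented here for illustration):

```python
def min_fit_side(perimeter, first=50.0, second=20.0, third=3.0):
    # first ratio = perimeter / first; clamp it into [third, second]
    pending = min(perimeter / first, second)   # the smaller -> pending side length
    return max(pending, third)                 # the larger -> minimum fitting side length
```

The effect is that the polygon-fitting tolerance scales with the connected domain's perimeter but is never allowed to leave a fixed band, so tiny domains are not over-fitted and huge domains are not under-fitted.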
9. The method according to claim 6, wherein the fitting parameter is a minimum fitting side length, and the obtaining the fitting parameters based on the length characterization information of the second connected domain comprises:
acquiring a first ratio between the length characterization information of the second connected domain and a first value;
selecting the smaller of the first ratio and a second value as a pending fitting side length; and
selecting the larger of the pending fitting side length and a third value as the minimum fitting side length, wherein the first value, the second value and the third value are preset values.
10. The method according to claim 5, wherein the obtaining the feature block in the image to be identified based on the candidate block comprises:
comparing position information of the candidate block with position information of a feature block in a template image; and
determining, based on the comparison result, whether the candidate block is the feature block in the image to be identified.
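Claim 10 leaves the comparison of position information open; one plausible realization is an intersection-over-union (IoU) check against the template box. The `(x0, y0, x1, y1)` box convention and the `thresh` value below are assumptions of this sketch, not details from the patent.

```python
def iou(a, b):
    # a, b: boxes as (x0, y0, x1, y1); returns intersection-over-union
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def is_feature_block(candidate, template, thresh=0.5):
    # Accept the candidate block when it sufficiently overlaps the
    # template feature block's position
    return iou(candidate, template) >= thresh
```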
11. An electronic device, comprising a processor and a memory coupled to the processor, wherein the memory is configured to store program data, and the processor is configured to execute the program data to implement the method according to any one of claims 1-10.
12. A computer-readable storage medium, wherein the computer-readable storage medium stores program data which, when executed by a processor, implements the method according to any one of claims 1-10.
CN202011627747.1A 2020-12-31 2020-12-31 Feature block identification method, electronic equipment and computer readable storage medium Active CN112733829B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011627747.1A CN112733829B (en) 2020-12-31 2020-12-31 Feature block identification method, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011627747.1A CN112733829B (en) 2020-12-31 2020-12-31 Feature block identification method, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112733829A CN112733829A (en) 2021-04-30
CN112733829B true CN112733829B (en) 2024-07-09

Family

ID=75608032

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011627747.1A Active CN112733829B (en) 2020-12-31 2020-12-31 Feature block identification method, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112733829B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114757961A (en) * 2022-04-07 2022-07-15 逸超医疗科技(武汉)有限公司 Method, device, equipment and storage medium for detecting skin contour

Citations (1)

Publication number Priority date Publication date Assignee Title
CN110232364A (en) * 2019-06-18 2019-09-13 华中师范大学 A kind of answering card page number recognition methods and device

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
JP2011146031A (en) * 2009-12-18 2011-07-28 Canon Software Inc Information processing apparatus, information processing method, computer program and recording medium
CN107679479A (en) * 2017-09-27 2018-02-09 武汉颂大教育科技股份有限公司 A kind of objective full-filling recognition methods based on morphological image process
CN108416345B (en) * 2018-02-08 2021-07-09 海南云江科技有限公司 Answer sheet area identification method and computing device
CN110751146B (en) * 2019-10-23 2023-06-20 北京印刷学院 Text area detection method, device, electronic terminal and computer-readable storage medium

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN110232364A (en) * 2019-06-18 2019-09-13 华中师范大学 A kind of answering card page number recognition methods and device

Also Published As

Publication number Publication date
CN112733829A (en) 2021-04-30

Similar Documents

Publication Publication Date Title
CN115908269B (en) Visual defect detection method, visual defect detection device, storage medium and computer equipment
Moghadam et al. Fast vanishing-point detection in unstructured environments
Yang et al. Binarization of low-quality barcode images captured by mobile phones using local window of adaptive location and size
Sörös et al. Blur-resistant joint 1D and 2D barcode localization for smartphones
US5647027A (en) Method of image enhancement using convolution kernels
CN110717489A (en) Method and device for identifying character area of OSD (on screen display) and storage medium
CN108805116A (en) Image text detection method and its system
JP2002133426A (en) Ruled line extraction device for extracting ruled lines from multi-valued images
CN112329756A (en) Method and device for extracting seal and recognizing characters
CN115205223B (en) Visual inspection method and device for transparent object, computer equipment and medium
CN109948521B (en) Image deviation rectifying method and device, equipment and storage medium
Chen et al. Decompose algorithm for thresholding degraded historical document images
CN111539238B (en) Two-dimensional code image restoration method and device, computer equipment and storage medium
CN107545223B (en) Image recognition method and electronic equipment
CN112419207A (en) Image correction method, device and system
CN109784328B (en) Method for positioning bar code, terminal and computer readable storage medium
CN117911338A (en) Image definition evaluation method, device, computer equipment and storage medium
CN112733829B (en) Feature block identification method, electronic equipment and computer readable storage medium
CN110135288B (en) Method and device for quickly checking electronic certificate
Bodnár et al. A novel method for barcode localization in image domain
Agrawal et al. Stroke-like pattern noise removal in binary document images
CN118520893B (en) A method, device and storage medium for barcode label recognition applied to AOI
CN113435219B (en) Anti-counterfeiting detection method and device, electronic equipment and storage medium
CN113076952B (en) Text automatic recognition and enhancement method and device
Dosil et al. A new radial symmetry measure applied to photogrammetry

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant