
CN113870182A - A Masked Otsu Thresholding Method - Google Patents

A Masked Otsu Thresholding Method

Info

Publication number
CN113870182A
CN113870182A
Authority
CN
China
Prior art keywords
image
mask
gray
threshold
threshold value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110991862.5A
Other languages
Chinese (zh)
Inventor
钟铭恩
赵保勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen University of Technology
Original Assignee
Xiamen University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen University of Technology filed Critical Xiamen University of Technology
Priority to CN202110991862.5A priority Critical patent/CN113870182A/en
Publication of CN113870182A publication Critical patent/CN113870182A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Probability & Statistics with Applications (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a masked Otsu thresholding method comprising the following steps. Step 1: determine the number of masks and the number of penetration regions in each mask; make a template of the same size as the original image, and fill the shielding regions and penetration regions of the template with a gray-scale shielding factor and a penetration factor respectively. Step 2: take the Hadamard product of the image and the mask as the extracted target region. Step 3: compute the maximum between-class variance within the target region to obtain the threshold for the thresholding operation of the target region. Step 4: perform threshold binarization with the threshold obtained in Step 3. The method first masks and permeates the image through a customized mask design, then extracts the target region with the Hadamard product, and finally determines the threshold by applying the maximum between-class-variance theory to the mask region alone. The principle is simple and the method is practical.

Description

Otsu thresholding method with mask
Technical Field
The invention relates to the technical field of digital image processing in machine vision, in particular to an Otsu thresholding method with a mask.
Background
With the rapid development of technologies such as machine vision, computer vision and artificial intelligence, digital image processing is being applied ever more widely, for example in industrial non-contact inspection, medical image processing, remote-sensing image processing, military security, intelligent transportation and smart homes. Because the accuracy of the segmentation result plays a key role in the final processing result, segmenting real and effective target information from the image is a central task in image processing. Commonly used conventional image segmentation methods include region segmentation, threshold segmentation and edge detection; among these, threshold segmentation is the most widely applied because it is simple to operate and fast to compute.
In threshold segmentation, the determination of the threshold is particularly important. An improper threshold causes inaccurate target segmentation: over-segmentation, in which a single object is split by thresholding into two or more parts, or under-segmentation, in which two or more objects are merged by thresholding into one. Currently, threshold-determination methods fall mainly into three categories: fixed-threshold methods, local-threshold methods and global-threshold methods.
The fixed-threshold method applies a specific, preset threshold to the image. It is convenient and fast, but the choice of threshold is influenced by subjective judgment, its reasonableness is hard to evaluate, and the resulting algorithm is not robust.
The local-threshold method divides the image into several sub-images and determines a threshold for each, or partitions the image into regular, equal-size regions for adaptive threshold calculation. It can reduce the influence of uneven illumination and background interference, but combining image splitting with thresholding increases the computational cost of the algorithm, and it adapts poorly to targets that occupy a relatively large fraction of the image or have irregular shapes.
The global-threshold method first computes a threshold from the overall gray-level information of the image using some mathematical criterion and then performs the thresholding operation with it. The threshold-calculation criterion is the key of the algorithm; common criteria include minimum error, maximum between-class variance and maximum entropy. The Otsu global-threshold algorithm adopts the maximum between-class-variance criterion and, owing to its simple principle and convenient operation, is widely used in digital image processing.
When applied to digital images, the Otsu algorithm is constrained by the acquisition process and the two-dimensional matrix representation: it processes only the gray values of the whole image or of rectangular regions within it. Moreover, because the algorithm uses the maximum between-class-variance value as the measure of the threshold result, it usually produces a good thresholding effect only on gray-level sets with a "hump" (bimodal) distribution characteristic. In practice, however, the geometric shape and gray-level distribution of the target object are often complex, the hump characteristic is rarely satisfied over the whole image or a rectangular region, and the background often contains considerable interference, so the thresholding effect of the algorithm is often less than ideal.
Chinese patent application CN201811294422.9 discloses a method that uses a mask to extract a region-of-interest image before Otsu thresholding: an annular mask is made from the inner- and outer-diameter parameters of a steel pipe obtained by least-squares fitting; the annular mask is intersected with the original image to extract a region-of-interest image containing only the end face and no chamfer; and an Otsu thresholding operation is performed on the extracted image to obtain a binary image. However, the purpose of that mask is only to extract the annular region of interest and set the background pixels outside it to 0; after extraction, the Otsu calculation is still performed over all pixels of the whole image, so the method merely applies the Otsu algorithm and involves no improvement of it. The 0-valued background pixels are still included in the Otsu calculation. It obtained good results in its example only because the gray values of the defects differ greatly from 0, so the 0-value distribution had little influence on the segmentation result; if the gray level of the defect were closer to 0, the method would cause serious mis-segmentation. In view of the above, the existing Otsu algorithm needs improvement and innovation.
Disclosure of Invention
The invention aims to provide a masked Otsu thresholding method that first masks and permeates the image through a customized mask design, then extracts the target region with the Hadamard product, and finally determines the threshold by applying the maximum between-class-variance theory to the mask region alone.
In order to achieve the above purpose, the solution of the invention is:
A masked Otsu thresholding method comprises the following steps:
step one, determine the number of masks f_m and the number of penetration regions r_m in each mask; from the original image S_{m×n} make an equal-size template M_{m×n}, and fill the shielding region M_0 and the penetration region M_1 of the template with the gray-scale shielding factor w_0 and the penetration factor w_1 respectively, wherein w_0 ∈ [0,1], w_1 ∈ [0,1], and w_0 and w_1 are not simultaneously 0 or simultaneously 1;
step two, take the Hadamard product of the image S_{m×n} and the mask M_{m×n} as the extracted target region;
step three, calculating the maximum inter-class variance in the target region to obtain a threshold value of the thresholding operation of the target region;
and step four, performing threshold binarization processing with the threshold obtained in step three.
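The four steps above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the patented implementation; it assumes the preferred factors w_0 = 0 and w_1 = 1 so that the penetration region is simply the set of mask elements equal to w_1:

```python
import numpy as np

def mask_otsu(image, mask, w1=1.0):
    """Steps 2-4 of the method: Hadamard extraction, threshold search by
    maximum between-class variance over the penetration region only,
    then binarization. `image` is a uint8 gray image; `mask` holds the
    shielding factor w0 in shielded regions and the penetration factor
    w1 in penetration regions (here the preferred w0 = 0, w1 = 1)."""
    target = image.astype(np.float64) * mask           # step two: Hadamard product
    pixels = image[mask == w1]                         # statistics over the mask only
    hist = np.bincount(pixels.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    i = np.arange(256, dtype=np.float64)
    best_t, best_var = 0, -1.0
    for t in range(255):                               # step three: exhaustive search
        q0, q1 = p[:t + 1].sum(), p[t + 1:].sum()
        if q0 == 0.0 or q1 == 0.0:
            continue
        mu0 = (i[:t + 1] * p[:t + 1]).sum() / q0
        mu1 = (i[t + 1:] * p[t + 1:]).sum() / q1
        var_b = q0 * q1 * (mu0 - mu1) ** 2             # between-class variance
        if var_b > best_var:
            best_var, best_t = var_b, t
    binary = np.where(target > best_t, 255, 0).astype(np.uint8)  # step four
    return best_t, binary
```

Because the histogram is built only from pixels inside the penetration region, background gray levels never enter the variance search, which is the essential difference from applying plain Otsu after masking.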
In step one, if the original image is the RectN image, its gray levels comprise black-gray, dark-gray and gray-white, and the target objects are R gray-white patches inside the dark-gray image block; set f_m = 1 and r_m = R. Obtain the binarization result of the black-gray and dark-gray parts with the Otsu algorithm, apply one image-closing operation to the result and use it as the mask template M, then fill the white area of the mask with w_1 = 1 and the black area with w_0 = 0.
In step one, if the original image is the Branch image, the image comprises a dark-gray background and a gray-white foreground and the gray distribution has a hump characteristic; set f_m = 1 and r_m = 1. Use the edge-detection result as the template M, fill the edge with w_1 = 1 and the rest with w_0 = 0, then dilate the filled result to obtain the mask design result.
In step one, if the original image is the Circle image, determine the number of masks f_m from the properties of the target object and the image-processing task, and take the number of penetration regions r_m in each mask as 1. Compute the gradient with the Sobel operator, binarize the gradient values, then extract each circle with Hough circle detection, and fill the pairwise-intersecting parts of the circles with w_1 = 1 and the rest with w_0 = 0.
In step two, the Hadamard product is expressed as follows:

S_{m×n} ∘ M_{m×n} = [p_{(i,j)} · m_{(i,j)}]_{m×n},  1 ≤ i ≤ m, 1 ≤ j ≤ n,

wherein p_{(i,j)} is the gray value at row i, column j of the image and m_{(i,j)} is the corresponding element of the mask.
The concrete process of step three is as follows:
step 31, compute the pixel count and probability distribution of each gray level;
step 32, taking the parameter t as a threshold, divide the gray-level set of the target region into two classes C_0 and C_1, with t ∈ [0, 255];
step 33, compute the intra-class probabilities of the two classes C_0 and C_1, and from them the gray-level means of C_0 and C_1;
step 34, compute the intra-class variances of the two classes C_0 and C_1, and the between-class variance of the two classes;
step 35, taking t' as the threshold, t' ∈ [0, t) ∪ (t, 255], repeat steps 32 to 34 until the maximum between-class variance appears; the corresponding value t is the sought threshold T; if f_m = 1, the threshold solving ends; if f_m > 1, the remaining masks are thresholded separately.
In step four, if f_m = 1, the image threshold segmentation is complete; if f_m > 1, the remaining target regions are binarized separately and the binary images are then intersected to complete the image threshold segmentation.
At the theoretical level, the maximum between-class-variance theory can be applied to a binary classification task over any value set. The invention therefore proposes a masked maximum between-class-variance thresholding algorithm, called the MaskOtsu algorithm. When computing the maximum between-class variance of the image gray levels, MaskOtsu shields and permeates the processed image with masks of arbitrary shape and number, so that only the region the user is most interested in is processed. This enables thresholding of regions of arbitrary shape, reduces the influence of the background gray distribution on the peaks of the foreground gray distribution, simplifies thresholding of images whose gray distribution is multimodal, and overcomes the shortcomings of the existing Otsu algorithm. In addition, the mask removes the influence of background pixels on the between-class-variance calculation, greatly reducing the amount of threshold search over the exhaustive gray-level space and improving the efficiency and accuracy of the algorithm.
With this scheme, the principle of the method is simple, and because the maximum between-class-variance computation is concentrated on the region where the target object is located, the anti-interference capability, adaptability, accuracy and computational efficiency of the algorithm are all improved.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of factor fill in mask design;
wherein (a) is a schematic of the template M_{m×n}, and (b) a schematic of the w_0 and w_1 factor filling;
FIG. 3 is a schematic diagram of an experiment-mask design;
wherein, (a) is an experimental material RectN image, and (b) is a mask design result;
FIG. 4 is a schematic diagram of an experimental two-mask design;
wherein, (a) is an experimental material Branch image, and (b) is a mask design result;
FIG. 5 is a schematic diagram of an experimental three-mask design;
wherein, (a) is an experimental material Circle image, and (b) is a gray distribution histogram;
FIG. 6 is a diagram illustrating the design results of the three masks of FIG. 5;
FIG. 7 is a graph showing the results of applying the present invention;
wherein (a) is the result of experiment one, (b) is the result of experiment two, and (c) is the result of experiment three.
Detailed Description
The technical solution and the advantages of the present invention will be described in detail with reference to the accompanying drawings.
As shown in FIG. 1, the invention provides a masked Otsu thresholding method that incorporates a mask into the application of the maximum between-class-variance (Otsu) method, and is called the MaskOtsu algorithm. Its core idea is to selectively shield and permeate the image with a mask, so that when computing the between-class variance the algorithm counts only the pixel values inside the mask; pixel values outside the mask are no longer fed into the algorithm, achieving a divide-and-conquer effect in the thresholding process. The invention specifically comprises the following steps:
step 1, designing a mask.
First, determine the number of masks according to the properties of the target object and the image-processing task, denoted f_m. Then determine the number of penetration regions in the mask according to the number of target objects, denoted r_m. Next, create an equal-size template M_{m×n} according to the size (m × n) of the original image. Finally, fill the shielding region M_0 and the penetration region M_1 of the template with the gray-scale shielding factor w_0 (w_0 ∈ [0,1]) and the penetration factor w_1 (w_1 ∈ [0,1]) respectively. When w_0 = 0 the gray level is completely shielded, and w_0 = 1 means no shielding; when w_1 = 0 the gray-level transmittance is 0, and when w_1 = 1 the transmittance is 100%. FIG. 2 is a schematic of the shielding-factor and penetration-factor filling.
To illustrate the specific design method of the mask, the mask design method under the condition of the regular region, the irregular region and the gray-scale multi-peak distribution characteristics is introduced in the following three experiments respectively.
Experiment one: image segmentation and localization
Purpose of the experiment: use the MaskOtsu algorithm to segment the 5 gray-white small squares tessellated within a dark-gray image block on a black-gray background, and locate them by connected-component analysis to verify the segmentation accuracy of the algorithm. The experimental material image is shown in FIG. 3(a), and the mask design result in FIG. 3(b).
The mask design steps are as follows:
(1) Image characteristic analysis. As can be seen from the RectN image, its gray levels fall into three classes: black-gray, dark-gray and gray-white. The target objects lie mainly in the dark-gray image block, so the black-gray part constitutes interference in the thresholding operation, and a mask must be designed to shield it.
(2) Target-object attribute analysis. The gray distribution of each target object is the same, and so is the local background, so a single mask is made, i.e. f_m = 1.
(3) Target-object location distribution. The target objects are distributed at 5 different positions in the image, so the number of penetration regions in the mask is r_m = 5.
(4) Mask making. The target objects are small and their probability density in the image is low, so the binarization result of the black-gray and dark-gray parts of the image is first obtained with the conventional Otsu algorithm; after one image-closing operation the result is used as the mask template M, whose white area is filled with w_1 = 1 and black area with w_0 = 0. The design result is shown in FIG. 3(b).
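The image-closing operation in mask making would ordinarily be done with a library call such as cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel); as a self-contained illustration, a closing with a cross-shaped (4-neighbour) structuring element can be hand-rolled in NumPy (a sketch under that assumption, not the patent's implementation):

```python
import numpy as np

def dilate(b):
    # Cross-shaped (4-neighbour) dilation of a boolean image via shifted ORs.
    out = b.copy()
    out[1:, :] |= b[:-1, :]; out[:-1, :] |= b[1:, :]  # vertical neighbours
    out[:, 1:] |= b[:, :-1]; out[:, :-1] |= b[:, 1:]  # horizontal neighbours
    return out

def erode(b):
    # Erosion is dilation of the complement.
    return ~dilate(~b)

def close_once(b):
    # One image-closing operation: dilation followed by erosion,
    # used here to turn a binarization result into the mask template M.
    return erode(dilate(b))
```

Closing fills small holes left by the Otsu binarization so that the penetration region of the template is a solid block.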
Experiment two: image segmentation and target object extraction
Purpose of the experiment: irregular curves are segmented and extracted from a strong interference background image by using a MaskOtsu algorithm. The experimental material image is shown in fig. 4(a), and the mask design result is shown in fig. 4 (b).
The mask design analysis follows the same idea as experiment one. Qualitative analysis of the Branch image shows that it mainly comprises a dark-gray background and a gray foreground, and the gray distribution has a hump characteristic, but the dark-gray background contains noise interference that is difficult to suppress, so this part must be shielded during thresholding. The number of masks is designed as f_m = 1 and the number of penetration regions as r_m = 1. Because the target edge is distinct, the edge-detection result is used as the template M: the edge is filled with w_1 = 1 and the rest with w_0 = 0, and the filled result is dilated to give the mask design result. Experiments show that dilation by a factor of 1.5 lets the penetration factor completely cover the target region while introducing little background interference.
Experiment three: image segmentation and geometric feature calculation
Purpose of the experiment: and segmenting pairwise intersecting parts in the three circles by using a MaskOtsu algorithm, and calculating the areas of the intersecting parts by means of Blob analysis to verify the applicability of the algorithm to the characteristic that the image gray scale is in multimodal distribution. The experimental material image is shown in fig. 5(a), and the histogram of the gradation distribution is shown in fig. 5 (b).
The mask design analysis follows the same idea as above. The Circle gray-distribution histogram shows that the image gray levels have a multimodal distribution, so several masks are needed to obtain a good thresholding effect; through the action of the different masks, the gray distribution of each target object is adjusted into a bimodal distribution. Since the target objects exhibit three gray levels, the number of masks f_m is taken as 3 and the number of penetration regions r_m in each mask as 1. Because each circle's edge has a large gray-level jump, the gradient is first computed with the Sobel operator, the gradient values are then binarized, and finally the three circles are extracted separately with Hough circle detection; the pairwise-intersecting parts are filled with w_1 = 1 and the rest with w_0 = 0. The design results of the three masks are shown in FIG. 6.
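The Sobel-gradient step that precedes binarization and circle extraction can be sketched in pure NumPy; the subsequent Hough circle detection (e.g. OpenCV's cv2.HoughCircles) is omitted here, and the function name is illustrative:

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via the 3x3 Sobel kernels, computed by
    explicit correlation over an edge-padded image."""
    f = img.astype(np.float64)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    pad = np.pad(f, 1, mode="edge")
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    for i in range(3):
        for j in range(3):
            win = pad[i:i + f.shape[0], j:j + f.shape[1]]
            gx += kx[i, j] * win  # horizontal gradient component
            gy += ky[i, j] * win  # vertical gradient component
    return np.hypot(gx, gy)
```

Thresholding the returned magnitude yields the binarized edge map from which the circles are then detected.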
And 2, extracting a target area from the original image.
Represent the image and the mask as two-dimensional matrices S_{m×n} and M_{m×n} respectively, and let p_{(i,j)} denote the gray value at row i, column j of the image. The two matrices have equal numbers of rows and columns, so the extraction operation can be represented by multiplication of corresponding matrix elements, i.e. as the Hadamard product:
S_{m×n} ∘ M_{m×n} = [p_{(i,j)} · m_{(i,j)}]_{m×n},  1 ≤ i ≤ m, 1 ≤ j ≤ n,

where m_{(i,j)} is the element of the mask at row i, column j, equal to w_0 in the shielding region and w_1 in the penetration region.
when w is0=0、w1When the value is 0, the Hadamard product is a zero matrix of m × n, that is, the extraction result is an image with a gray value of 0.
When w is0=1、w1At 1, the Hadamard product remains the original, i.e., the null mask.
When w is0=0、w1When not equal to 0, the Hadamard product has the original gray scale value and w in the osmotic factor action region1The product of (3) is 0 in the other regions, and the extraction result is the region where the osmotic factor acts.
When w is0≠0、w1When the value is equal to 0, the result is w0=0、w1Backward extraction when not equal to 0.
When w is0≠0、w1And when the mask is not equal to 0, the whole image is extracted at the moment, but the gray value is changed according to the mask design result, and the situation belongs to an effective mask.
Through the analysis, in order to make the mask design result have practical significance, the requirement w0And w1Not simultaneously 0 or 1.
Step 3, calculating the maximum inter-class variance under the mask, wherein the calculation process comprises the following steps:
(1) Calculate the pixel count and probability distribution of each gray level. Let the number of gray levels of the Hadamard product of the image and the mask be L_m, let the number of pixels of the i-th gray level be n_mi, and let the total number of pixels be

N = Σ_{i=0}^{L_m−1} n_mi.

From the mask design result, the numbers of shielding-factor and penetration-factor pixels can be counted, denoted N_{w0} and N_{w1} respectively; with N the three satisfy the relation

N = N_{w0} + N_{w1}.

When w_0 = 0 and w_1 ≠ 0, the probability distribution of the i-th gray level can be expressed as

p_i = n_mi / N_{w1};

when w_0 ≠ 0 and w_1 = 0, it can be expressed as

p_i = n_mi / N_{w0};

and when w_0 ≠ 0 and w_1 ≠ 0, it can be expressed as

p_i = n_mi / N.
(2) Calculate the intra-class probability distributions. Let the parameter t (t ∈ [0, 255]) be the threshold dividing the gray-level set of the extracted target region into two classes C_0 and C_1, where C_0 is the set of gray levels [0, …, t] and C_1 the set [t+1, …, L_m − 1]. Their intra-class probabilities are respectively:

ω_0 = Σ_{i=0}^{t} p_i,  ω_1 = Σ_{i=t+1}^{L_m−1} p_i = 1 − ω_0.

(3) Calculate the intra-class gray means. The gray means of C_0 and C_1 are computed as:

μ_0 = (1/ω_0) Σ_{i=0}^{t} i · p_i,  μ_1 = (1/ω_1) Σ_{i=t+1}^{L_m−1} i · p_i.

(4) Calculate the intra-class variances. The formulas are:

σ_0² = (1/ω_0) Σ_{i=0}^{t} (i − μ_0)² · p_i,  σ_1² = (1/ω_1) Σ_{i=t+1}^{L_m−1} (i − μ_1)² · p_i.

(5) Calculate the between-class variance. The formula is:

σ_B² = ω_0 · ω_1 · (μ_0 − μ_1)².
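Sub-steps (2) to (5) for one candidate threshold t translate directly into NumPy; a sketch, with the distribution p assumed to already be restricted to the penetration region as in sub-step (1):

```python
import numpy as np

def class_stats(p, t):
    """Intra-class probabilities, gray means, intra-class variances and
    the between-class variance for threshold t. `p` is the gray-level
    probability distribution over the penetration region (sums to 1)."""
    i = np.arange(p.size, dtype=np.float64)
    w0_cls = p[:t + 1].sum()                 # intra-class probability of C0
    w1_cls = p[t + 1:].sum()                 # intra-class probability of C1
    mu0 = (i[:t + 1] * p[:t + 1]).sum() / w0_cls
    mu1 = (i[t + 1:] * p[t + 1:]).sum() / w1_cls
    var0 = (((i[:t + 1] - mu0) ** 2) * p[:t + 1]).sum() / w0_cls
    var1 = (((i[t + 1:] - mu1) ** 2) * p[t + 1:]).sum() / w1_cls
    var_between = w0_cls * w1_cls * (mu0 - mu1) ** 2
    return w0_cls, w1_cls, mu0, mu1, var0, var1, var_between
```

Sweeping t over the gray range and keeping the t that maximizes var_between reproduces the threshold search of step 4.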
and 4, determining a threshold value. Taking t 'as threshold value, where t' is equal to [0, t ] U (t, 255)]. And (5) repeating the steps (2) to (5) in the step (3) until the maximum inter-class variance appears, wherein the corresponding value T is the required threshold value T. If fmIf the threshold value is 1, the threshold value solution is ended; if fm>1, the remaining masks need to be thresholded separately.
Step 5: threshold the target-region image. Use the threshold T obtained in Step 4 for threshold binarization of the target region, for example with the thresholding function cv::threshold() or another thresholding method. If f_m = 1, the algorithm ends; if f_m > 1, binarize the remaining target regions separately and then perform a simple intersection of the binary images to complete the image threshold segmentation.
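The intersection of the per-mask binary images when f_m > 1 is a logical AND; a minimal sketch with two assumed 2 × 2 binary results:

```python
import numpy as np

# Two per-mask binarization results (255 = foreground, 0 = background).
b1 = np.array([[255, 255], [0, 255]], dtype=np.uint8)
b2 = np.array([[255, 0], [0, 255]], dtype=np.uint8)

# Intersection: a pixel is kept only where every binary image keeps it.
segmented = np.where((b1 > 0) & (b2 > 0), 255, 0).astype(np.uint8)
```

For experiment three this is how the three per-mask results are merged into the final segmentation of the pairwise-intersecting circle regions.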
The experimental results are shown in FIG. 7. With the proposed method, the gray square targets are accurately segmented in experiment one; in experiment two the irregular curves are successfully extracted, and although sporadic interference remains in the result, it can be removed by further simple analysis and processing; in experiment three several masks and the divide-and-conquer idea reduce a complex object to simple ones, with a remarkable effect.
The parameters w_0 and w_1 can be chosen as required; by prior experience, w_0 = 0 and w_1 = 1 are preferred under ordinary conditions.
The above embodiments are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modifications made on the basis of the technical scheme according to the technical idea of the present invention fall within the protection scope of the present invention.

Claims (7)

1. A masked Otsu thresholding method, characterized by comprising the following steps:
step one, determine the number of masks f_m and the number of penetration regions r_m in each mask; from the original image S_{m×n} make an equal-size template M_{m×n}, and fill the shielding region M_0 and the penetration region M_1 of the template with the gray-scale shielding factor w_0 and the penetration factor w_1 respectively, wherein w_0 ∈ [0,1], w_1 ∈ [0,1], and w_0 and w_1 are not simultaneously 0 or simultaneously 1;
step two, take the Hadamard product of the image S_{m×n} and the mask M_{m×n} as the extracted target region;
step three, calculating the maximum inter-class variance in the target region to obtain a threshold value of the thresholding operation of the target region;
and step four, performing threshold binarization processing with the threshold obtained in step three.
2. The masked Otsu thresholding method according to claim 1, wherein: in step one, if the original image is the RectN image, its gray levels comprise black-gray, dark-gray and gray-white, and the target objects are R gray-white patches inside the dark-gray image block; f_m = 1 and r_m = R are set; the binarization result of the black-gray and dark-gray parts is obtained with the Otsu algorithm, the result after one image-closing operation is used as the mask template M, and the white area of the mask is filled with w_1 = 1 and the black area with w_0 = 0.
3. The masked Otsu thresholding method according to claim 1, wherein: in step one, if the original image is the Branch image, the image comprises a dark-gray background and a gray-white foreground and the gray distribution has a hump characteristic; f_m = 1 and r_m = 1 are set; the edge-detection result is used as the template M, the edge is filled with w_1 = 1 and the rest with w_0 = 0, and the filled result is dilated to obtain the mask design result.
4. The masked Otsu thresholding method according to claim 1, wherein: in step one, if the original image is the Circle image, the number of masks f_m is determined from the properties of the target object and the image-processing task, and the number of penetration regions r_m in each mask is taken as 1; the gradient is computed with the Sobel operator, the gradient values are binarized, each circle is then extracted with Hough circle detection, and the pairwise-intersecting parts of the circles are filled with w_1 = 1 and the rest with w_0 = 0.
5. The masked Otsu thresholding method according to claim 1, wherein: in said step two, the Hadamard product is expressed as follows:

S_(m×n) = (p_(i,j))_(m×n),  M_(m×n) = (w_(i,j))_(m×n),  w_(i,j) ∈ {w_0, w_1}

S_(m×n) ∘ M_(m×n) = (p_(i,j) · w_(i,j))_(m×n)

wherein p_(i,j) is the gray value at row i, column j of the image, and w_(i,j) is the corresponding masking or penetration factor of the mask.
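As a toy numeric check (values chosen by us), the element-wise product zeroes everything the mask shields and passes the penetration region unchanged when w_1 = 1:

```python
import numpy as np

# A 3x3 image S of gray values p_(i,j) and a mask M with w0 = 0, w1 = 1.
S = np.array([[10,  20,  30],
              [40, 200, 210],
              [50, 220, 230]])
M = np.array([[0, 0, 0],
              [0, 1, 1],
              [0, 1, 1]])
target = S * M   # Hadamard product: the shielded border becomes 0
```

`target` keeps only the 2×2 penetrated block, which is what the subsequent Otsu statistics are computed from.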
6. The masked Otsu thresholding method according to claim 1, wherein said step three comprises the following specific process:
step 31, counting the number of pixels at each gray level and their probability distribution;
step 32, dividing the gray-level set of the target region into two classes C_0 and C_1 with the parameter t as threshold, t ∈ [0, 255];
step 33, calculating the within-class probabilities of C_0 and C_1 respectively, and from them the gray-level means of C_0 and C_1;
step 34, calculating the within-class variances of C_0 and C_1, and the between-class variance of the two classes;
step 35, taking t′ as the threshold, t′ ∈ [0, t) ∪ (t, 255], and repeating steps 32-34 until the maximum between-class variance appears; the corresponding t is the solved threshold value T; if f_m = 1, the threshold solving ends; if f_m > 1, the remaining masks are each thresholded in the same way.
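Steps 31-35 can also be vectorized with cumulative sums instead of an explicit loop over t, using the standard Otsu identity σ_b²(t) = (μ_T·ω_0 − μ_t)² / (ω_0·ω_1) (a sketch; `otsu_threshold` is our name):

```python
import numpy as np

def otsu_threshold(levels):
    """Exhaustive search for the t maximizing between-class variance (steps 31-35)."""
    hist = np.bincount(levels.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()                    # step 31: probability of each gray level
    omega0 = np.cumsum(prob)                    # P(C0) for every candidate threshold t
    mu_t = np.cumsum(np.arange(256) * prob)     # cumulative first moment up to t
    mu_total = mu_t[-1]                         # global mean gray level
    omega1 = 1.0 - omega0                       # P(C1)
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega0 - mu_t) ** 2 / (omega0 * omega1)
    sigma_b = np.nan_to_num(sigma_b)            # degenerate one-class splits contribute 0
    return int(np.argmax(sigma_b))              # step 35: t with maximal variance
```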
7. The masked Otsu thresholding method according to claim 6, wherein: in said step four, if f_m = 1, binarizing the target region with T completes the image threshold segmentation; if f_m > 1, the remaining target regions are binarized respectively, and the resulting binary images are intersected to complete the image threshold segmentation.
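When f_m > 1, intersecting the per-mask binary images is a pixel-wise logical AND; in NumPy this could be sketched as follows (`combine_binaries` is our name):

```python
import numpy as np

def combine_binaries(binaries):
    """Intersect several 0/255 binary images: a pixel stays white only if it is
    white in every per-mask segmentation result."""
    out = binaries[0] > 0
    for b in binaries[1:]:
        out &= (b > 0)
    return out.astype(np.uint8) * 255
```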
CN202110991862.5A 2021-08-27 2021-08-27 A Masked Otsu Thresholding Method Pending CN113870182A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110991862.5A CN113870182A (en) 2021-08-27 2021-08-27 A Masked Otsu Thresholding Method


Publications (1)

Publication Number Publication Date
CN113870182A true CN113870182A (en) 2021-12-31

Family

ID=78988448


Country Status (1)

Country Link
CN (1) CN113870182A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107808386A * 2017-09-26 2018-03-16 上海大学 A kind of sea horizon detection method based on image semantic segmentation
EP2779089B1 (en) * 2010-07-30 2018-11-14 Fundação D. Anna Sommer Champalimaud E Dr. Carlos Montez Champalimaud Systems and methods for segmentation and processing of tissue images and feature extraction from same for treating, diagnosing, or predicting medical conditions
CN111383244A (en) * 2020-02-28 2020-07-07 浙江大华技术股份有限公司 Target detection tracking method
WO2020248439A1 (en) * 2019-06-11 2020-12-17 江苏农林职业技术学院 Crown cap surface defect online inspection method employing image processing


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YONGKANG YAO; YUCHENG ZHANG AND WENDU NIE: "Pest Detection in Crop Images Based on OTSU Algorithm and Deep Convolutional Neural Network", 2020 INTERNATIONAL CONFERENCE ON VIRTUAL REALITY AND INTELLIGENT SYSTEMS (ICVRIS), 19 July 2020 (2020-07-19) *
汤世福; 钟铭恩; 郑重港: "Traffic Light Image Recognition Based on Lamp-Body Feature Threshold Segmentation", Mechatronic Technology (机电技术), 28 February 2021 (2021-02-28) *
谷正气; 李健; 张勇; 夏威; 罗伦: "A Vehicle Target Detection Method in High-Resolution Visible-Light Remote Sensing Images", Bulletin of Surveying and Mapping (测绘通报), no. 01, 25 January 2015 (2015-01-25) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination