
CN109636784B - Image saliency object detection method based on maximum neighborhood and superpixel segmentation - Google Patents


Info

Publication number
CN109636784B
CN109636784B (application CN201811488182.6A)
Authority
CN
China
Prior art keywords
image
saliency
color
detected
superpixel
Prior art date
Legal status
Active
Application number
CN201811488182.6A
Other languages
Chinese (zh)
Other versions
CN109636784A (en)
Inventor
李洁
张航
王颖
王飞
陈聪
张敏
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201811488182.6A priority Critical patent/CN109636784B/en
Publication of CN109636784A publication Critical patent/CN109636784A/en
Application granted granted Critical
Publication of CN109636784B publication Critical patent/CN109636784B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract



The invention proposes an image salient-target detection method based on maximum neighborhood and superpixel segmentation, which solves the technical problem of low salient-target detection accuracy in the prior art. The implementation steps are: 1. perform superpixel segmentation on the image to be detected; 2. count the frequency of occurrence of each color in the image to be detected; 3. perform color substitution on the image to be detected; 4. preprocess the color-substituted image; 5. compute the initial saliency image of the image to be detected; 6. determine the saliency values of the K superpixel blocks; 7. obtain and output the final saliency image. The invention improves the accuracy of image salient-target detection, can uniformly highlight the salient target, and can be used for image preprocessing in the field of computer vision.


Description

Image saliency target detection method based on maximum neighborhood and super-pixel segmentation
Technical Field
The invention belongs to the technical field of computer image processing, relates to an image saliency target detection method, in particular to an image saliency target detection method based on maximum neighborhood and superpixel segmentation, and can be used for an image preprocessing process in the field of computer vision.
Background
When viewing an image, a human observer usually focuses only on its most salient portion. Computer simulations of the human visual system therefore mainly work by detecting salient regions in images. Image salient-target detection can improve the performance of many computer vision and image processing algorithms, and is used in particular in research fields such as image segmentation, target recognition, and image retrieval.
According to the detection principle, image salient-target detection methods can be divided into three types: models based on global comparison, models based on background prior, and models based on local comparison. A model based on global comparison computes the saliency value by comparing pixels with global features; it alleviates the problem of the target interior going undetected, but when the image foreground is complex and varied in appearance it cannot detect the target accurately. A model based on background prior identifies background information in the image to be detected through a background prior and then suppresses the detected background when computing the saliency feature values.
A model based on local comparison computes the saliency value by comparing each pixel with the features of its local region; it can detect small targets in an image, but for larger targets it detects only the target boundary and not the interior. For example, patent application publication No. CN103996195A, entitled "An image saliency detection method," discloses an algorithm that detects image saliency by fusing various feature values of the image into the same range. The image is partitioned into blocks of equal size, and the brightness, color, direction, depth, and sparse feature values of each block are computed; all feature values are quantized to the same interval range and fused to obtain the difference between each block and every other block. Weighting coefficients are then determined, the differences between each block and the remaining blocks are summed with these weights to obtain the saliency value of each block, and the image saliency detection result is finally obtained. This method provides rich feature values for the image sub-blocks, but because the saliency detection image is obtained by weighting differences between sub-blocks, non-target areas are retained when the salient target is detected, and the final detection accuracy is low.
As another example, in "Saliency detection using maximum symmetric surround," published by Achanta et al. at ICIP 2010, the color and brightness information of image pixels is used to detect the image's salient target based on the maximum symmetric neighborhood, providing a full-resolution saliency image. The method can detect the salient target but cannot remove non-target regions, resulting in low detection accuracy.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing an image saliency target detection method based on maximum neighborhood and superpixel segmentation, with the goal of improving the accuracy of image salient-target detection.
The technical idea of the invention is as follows: in Lab space, the two-norm of the difference between each pixel's color vector and the average color vector within the maximum neighborhood of that pixel's position is taken as the pixel's saliency value, yielding the initial saliency image of the image to be detected; the saliency value of each superpixel block is then determined from the initial saliency image and the superpixel segmentation result of the image to be detected, yielding the final saliency image. The specific implementation steps are as follows:
(1) performing superpixel segmentation on an image to be detected:
performing superpixel segmentation on an image to be detected to obtain K superpixel blocks and storing the K superpixel blocks, wherein K is more than or equal to 200;
(2) counting the frequency of each color in the image to be detected:
dividing each of the three color channels of the RGB color space into N equal parts, N ≥ 10, to obtain $N^3$ colors, and counting the frequency of occurrence in the image to be detected of each color corresponding to the $N^3$ colors;
(3) carrying out color substitution on an image to be detected:
arranging all counted colors in descending order of frequency of occurrence, accumulating the frequencies of the colors in this order until the accumulated result reaches 80% of the total number M of pixels of the image to be detected, retaining the representative colors $C=\{C_{p1},C_{p2},\ldots,C_{pi},\ldots,C_{pp}\}$ whose frequencies are contained in the accumulation, and replacing the colors $C'=\{C_{t1},C_{t2},\ldots,C_{tj},\ldots,C_{tt}\}$ corresponding to the frequencies not participating in the accumulation with the representative colors C, to obtain the color-substituted image;
(4) preprocessing the image after color substitution:
performing Gaussian filtering on the image after color replacement, and performing RGB-to-Lab color space conversion on the filtered image to obtain a preprocessed image in a Lab space;
(5) calculating an initial saliency image of an image to be detected:
(5a) carrying out color channel separation on the image preprocessed in the Lab space to obtain a color vector I (x, y) of each pixel point, wherein the (x, y) is a coordinate of the pixel point;
(5b) calculating the average color vector $I_\mu(x,y)$ within the maximum neighborhood of the position (x, y) of each pixel, and taking the two-norm of the difference between I(x, y) and $I_\mu(x,y)$ as the saliency value of the current pixel;
(5c) normalizing the significance values of all the pixel points to obtain an initial significance image sm of the image to be detected;
(6) determining significance values for K superpixel blocks:
(6a) taking the average significance value T of an initial significance image sm of an image to be detected as a threshold, marking the pixel points with the significance values larger than the threshold in sm as 1, and marking the rest pixel points as 0 to obtain the significance label of each pixel point;
(6b) judging whether more than half of the pixels in each superpixel block have saliency label 1; if so, taking 1 as the saliency value $K_l$ of that superpixel block, and otherwise taking 0 as $K_l$, to obtain the saliency values of the K superpixel blocks;
(7) acquiring a final saliency image and outputting:
assigning the saliency value of each of the K superpixel blocks to every pixel contained in that block to obtain the saliency map SM′, and outputting the largest connected component of SM′ as the final saliency image.
Compared with the prior art, the invention has the following advantages:
1) The invention adopts a saliency value calculation method based on maximum neighborhood and superpixel segmentation. After the initial saliency image is obtained through the maximum-neighborhood computation, the saliency value of each superpixel block is determined by combining the superpixel segmentation result with the initial saliency image, and that value is assigned to the pixels the block contains to obtain the saliency detection image. The maximum connected domain of this detection image is then taken as the finally output salient-target detection image, which effectively removes non-target regions. Simulation results show that the invention can accurately detect image salient targets and improves the accuracy of salient-target detection.
2) In the image preprocessing stage, a color substitution operation is performed on the image to be detected: the dominant colors of the image are retained and used to replace the non-dominant ones, which reduces the color interference of non-target areas and improves the accuracy of salient-target detection.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention;
FIG. 2 is an image to be detected as employed in an embodiment of the present invention;
fig. 3 shows the manually marked target result for the image to be detected used in the simulation experiments of the present invention, together with simulated detection results of the prior art and of the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and the specific embodiments.
Referring to fig. 1, an image saliency target detection method based on maximum neighborhood and super-pixel segmentation includes the following steps:
step 1) performing superpixel segmentation on an image to be detected:
the image to be detected is segmented with the SLIC superpixel segmentation method; SLIC is short for simple linear iterative clustering. The SLIC algorithm considers the spatial and the color distance between pixels simultaneously and partitions the image into superpixel blocks each containing several pixels, finally obtaining and saving K superpixel blocks. Comparing several commonly used values K = 200, 250, 300, 400 and 500, the best experimental effect was obtained with a segmentation number of K = 200. The image to be detected is shown in FIG. 2; the salient target in it is a flower, and the non-target area comprises the flower's leaves and branches;
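The patent relies on the standard SLIC algorithm. As a minimal illustration of its core idea, localized k-means over combined color and spatial features, a simplified NumPy sketch might look as follows; the function name, the compactness weight `m`, and the fixed iteration count are illustrative choices, not part of the patent:

```python
import numpy as np

def simple_slic(img, k=200, m=10.0, n_iters=5):
    """Simplified SLIC sketch: localized k-means over (position, color).

    img : float array (h, w, 3), e.g. Lab or RGB values
    k   : requested number of superpixels
    m   : compactness weight balancing color vs. spatial distance
    Returns an (h, w) integer label map.
    """
    h, w, _ = img.shape
    S = max(1, int(np.sqrt(h * w / k)))          # grid interval between seeds
    ys = np.arange(S // 2, h, S)
    xs = np.arange(S // 2, w, S)
    # seed centers as rows [y, x, c1, c2, c3]
    centers = np.array([[y, x, *img[y, x]] for y in ys for x in xs], dtype=float)

    yy, xx = np.mgrid[0:h, 0:w]
    feats = np.dstack([yy, xx, img[..., 0], img[..., 1], img[..., 2]]).astype(float)

    labels = np.zeros((h, w), dtype=int)
    for _ in range(n_iters):
        dist = np.full((h, w), np.inf)
        for idx, (cy, cx, *cc) in enumerate(centers):
            # restrict the search to a window around each center (SLIC's key idea)
            y0, y1 = max(0, int(cy) - S), min(h, int(cy) + S + 1)
            x0, x1 = max(0, int(cx) - S), min(w, int(cx) + S + 1)
            win = feats[y0:y1, x0:x1]
            dc = np.linalg.norm(win[..., 2:] - cc, axis=-1)        # color distance
            ds = np.linalg.norm(win[..., :2] - [cy, cx], axis=-1)  # spatial distance
            d = dc + (m / S) * ds
            mask = d < dist[y0:y1, x0:x1]
            dist[y0:y1, x0:x1][mask] = d[mask]
            labels[y0:y1, x0:x1][mask] = idx
        # move each center to the mean feature of its assigned pixels
        for idx in range(len(centers)):
            sel = feats[labels == idx]
            if len(sel):
                centers[idx] = sel.mean(axis=0)
    return labels
```

In practice a library implementation (e.g. scikit-image's `slic`) would be used; this sketch omits the connectivity enforcement step of full SLIC.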
step 2) counting the frequency of each color in the image to be detected:
each of the three color channels of the RGB color space is divided into N equal parts; the range of each RGB channel is 0 to 255, and the model of the RGB color space is a cube, so evenly dividing the edges of the cube partitions the RGB color space into $N^3$ color bins. Comparing the experimental effects of several common values N = 10, 14, 16 and 32, the best effect was obtained with N = 16, so the RGB color space is divided into $16^3$ colors, and the frequency of occurrence in the image to be detected of each color corresponding to these $16^3$ colors is counted;
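The $N^3$-bin color counting of step 2 can be sketched as follows; `color_frequencies` is a hypothetical helper, and packing the three per-channel bin indices into a single integer code is an illustrative implementation choice:

```python
import numpy as np

def color_frequencies(img, n=16):
    """Quantize each RGB channel (0-255) into n equal bins and count how
    often each of the n**3 quantized colors occurs.

    img : uint8 array (h, w, 3)
    Returns (codes, freq): per-pixel color code and per-code frequency.
    """
    bins = np.minimum(img.astype(int) * n // 256, n - 1)   # per-channel bin index
    codes = bins[..., 0] * n * n + bins[..., 1] * n + bins[..., 2]
    freq = np.bincount(codes.ravel(), minlength=n ** 3)
    return codes, freq
```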
step 3) carrying out color substitution on the image to be detected:
all counted colors are arranged in descending order of frequency of occurrence, and the frequencies of the colors in this ordered sequence are accumulated in turn until the accumulated result reaches 80% of the total number M of pixels of the image to be detected. The representative colors $C=\{C_{p1},C_{p2},\ldots,C_{pi},\ldots,C_{pp}\}$, whose frequencies are contained in the accumulation, are retained; the representative colors are the colors with higher frequency of occurrence in the image to be detected, including the colors of the image's salient target. The representative colors C are then used to replace the colors $C'=\{C_{t1},C_{t2},\ldots,C_{tj},\ldots,C_{tt}\}$ corresponding to the frequencies not participating in the accumulation:
The replacement of the colors $C'=\{C_{t1},C_{t2},\ldots,C_{tj},\ldots,C_{tt}\}$ corresponding to the frequencies not participating in the accumulation with the representative colors C proceeds as follows:
step 3a) calculating the Euclidean distance $d_{tj,pi}$ between each color $C_{tj}$ corresponding to a frequency not participating in the accumulation and the representative colors $C=\{C_{p1},C_{p2},\ldots,C_{pi},\ldots,C_{pp}\}$:

$$d_{tj,pi}=\sqrt{(C_{tj,R}-C_{pi,R})^2+(C_{tj,G}-C_{pi,G})^2+(C_{tj,B}-C_{pi,B})^2}$$

wherein $C_{tj,R}$ and $C_{pi,R}$ denote the R components, $C_{tj,G}$ and $C_{pi,G}$ the G components, and $C_{tj,B}$ and $C_{pi,B}$ the B components;
step 3b) selecting the Euclidean distance with the smallest value, $d_{tj,p'}$, and replacing the color $C_{tj}$ in the image to be detected with the corresponding representative color $C_{p'}$, where $d_{tj,p'}$ is selected by:

$$d_{tj,p'}=\min_{1\le i\le p}d_{tj,pi}$$
step 3c) through steps 3a and 3b, the low-frequency colors C′ in the image to be detected are replaced with the representative colors C, obtaining the color-substituted image, which contains only the representative colors C. The high-frequency colors include the colors of the target area, so this color substitution effectively reduces the color interference of non-target areas;
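Steps 3a to 3c can be sketched as follows. `substitute_colors` and its `keep_ratio` parameter are hypothetical names; the representative/rare split follows the 80% accumulation rule, and each rare color is remapped to its nearest representative by Euclidean distance:

```python
import numpy as np

def substitute_colors(codes, freq, centers, keep_ratio=0.8):
    """Keep the most frequent quantized colors covering keep_ratio of all
    pixels; remap every remaining color to its nearest representative.

    codes   : (h, w) int array of quantized color indices
    freq    : occurrence count per color index
    centers : (n_colors, 3) RGB value representing each color index
    """
    order = np.argsort(freq)[::-1]                  # colors by descending frequency
    cum = np.cumsum(freq[order])
    n_keep = int(np.searchsorted(cum, keep_ratio * codes.size)) + 1
    rep = order[:n_keep]                            # representative colors C
    rare = order[n_keep:]
    rare = rare[freq[rare] > 0]                     # only colors that actually occur
    mapping = np.arange(len(freq))
    for c in rare:                                  # steps 3a/3b: nearest representative
        d = np.linalg.norm(centers[rep] - centers[c], axis=1)
        mapping[c] = rep[int(np.argmin(d))]
    return mapping[codes]
```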
step 4) preprocessing the image after color substitution:
the color-substituted image is Gaussian filtered, which effectively smooths the image; a 3 × 3 filter template with σ = 0.5 is used. The filtered image is then converted from the RGB to the Lab color space to obtain the preprocessed image in Lab space. The Lab space carries both the luminance and the color information of the image and exposes the differences between colors more fully. The conversion follows the standard RGB → XYZ → CIELAB formulas:

$$\begin{bmatrix}X\\Y\\Z\end{bmatrix}=\begin{bmatrix}0.4124&0.3576&0.1805\\0.2126&0.7152&0.0722\\0.0193&0.1192&0.9505\end{bmatrix}\begin{bmatrix}R\\G\\B\end{bmatrix}$$

$$L=116\,f\!\left(\frac{Y}{Y_n}\right)-16,\quad a=500\left[f\!\left(\frac{X}{X_n}\right)-f\!\left(\frac{Y}{Y_n}\right)\right],\quad b=200\left[f\!\left(\frac{Y}{Y_n}\right)-f\!\left(\frac{Z}{Z_n}\right)\right]$$

where $f(t)=t^{1/3}$ for $t>(6/29)^3$ and $f(t)=\frac{1}{3}(29/6)^2\,t+\frac{4}{29}$ otherwise, $(X_n,Y_n,Z_n)$ is the reference white, R, G, B denote the red, green and blue color components, and L, a, b denote the luminance and the green-to-red and blue-to-yellow color components after color space conversion;
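A NumPy sketch of the standard RGB → Lab conversion follows. It assumes linear RGB values in 0-255 and normalizes by the white point implied by the matrix; the patent does not specify gamma handling, so no gamma correction is applied here:

```python
import numpy as np

# linear RGB (0-1) -> XYZ matrix (sRGB primaries)
_M = np.array([[0.4124, 0.3576, 0.1805],
               [0.2126, 0.7152, 0.0722],
               [0.0193, 0.1192, 0.9505]])
_WHITE = _M @ np.ones(3)          # XYZ of the reference white (R=G=B=1)

def rgb_to_lab(img):
    """Standard RGB -> CIELAB conversion (linear RGB assumed, values 0-255)."""
    rgb = img.astype(float) / 255.0
    xyz = rgb @ _M.T / _WHITE                         # normalize by white point
    eps = (6 / 29) ** 3
    f = np.where(xyz > eps, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.dstack([L, a, b])
```

As a sanity check, pure white maps to L = 100 with a = b = 0 and pure black to L = 0.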
step 5) calculating an initial saliency image of the image to be detected:
step 5a) color channel separation is carried out on the image preprocessed in the Lab space, and a color vector I (x, y) of each pixel point is obtained:
separating the preprocessed image in Lab space into the three channels L, a and b; I(x, y) is composed of the luminance component value L(x, y) and the color component values a(x, y) and b(x, y), combined as:
I(x,y)=(L(x,y),a(x,y),b(x,y))
wherein, (x, y) represents the coordinates of the pixel;
step 5b) calculating the average color vector $I_\mu(x,y)$ within the maximum neighborhood of the position of each pixel, and taking the two-norm of the difference between I(x, y) and $I_\mu(x,y)$ as the saliency value of the current pixel:
step 5b1) the maximum neighborhood is the largest rectangular region centered on the pixel position (x, y); it provides a more reasonable local area for computing the saliency value of the pixel (x, y). The average color vector $I_\mu(x,y)$ within the maximum neighborhood of each pixel position is computed as:

$$I_\mu(x,y)=\frac{1}{A}\sum_{i=x-x_0}^{x+x_0}\sum_{j=y-y_0}^{y+y_0}I(i,j)$$

$$x_0=\min(x,\,w-x),\qquad y_0=\min(y,\,h-y),\qquad A=(2x_0+1)(2y_0+1)$$

where w and h denote the width and height of the image to be detected, I(i, j) is the color vector at pixel coordinates (i, j), $x_0$ and $y_0$ denote half the width and half the height of the maximum neighborhood centered on (x, y), and A denotes the total number of pixels contained in that neighborhood;
step 5b2) the two-norm of the difference between I(x, y) and $I_\mu(x,y)$ is taken as the saliency value of the current pixel, computed as:

$$S(x,y)=\left\|I_\mu(x,y)-I(x,y)\right\|_2$$

where S(x, y) denotes the saliency value computed for the pixel at coordinates (x, y).
Step 5c) normalizing the significance values of all the pixel points obtained in the step 5b to 0-255 to obtain an initial significance image sm of the image to be detected, wherein the initial significance image sm is a detection result image similar to a gray image, and the greater the significance value of the pixel point is, the more likely the position of the pixel point is to be a significant target in the image;
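Steps 5a to 5c can be sketched with integral images, which give the mean of each maximum neighborhood in constant time. The 0-based half-widths `min(y, h-1-y)` and `min(x, w-1-x)` are the code-level counterpart of the patent's min(x, w−x) convention; `max_neighborhood_saliency` is a hypothetical helper:

```python
import numpy as np

def max_neighborhood_saliency(lab):
    """Initial saliency: for each pixel, the L2 norm of the difference
    between its Lab color and the mean Lab color of the largest centered
    rectangle that fits inside the image, normalized to 0-255."""
    h, w, _ = lab.shape
    # integral image with a leading zero row/column: integ[y, x] = sum of lab[:y, :x]
    integ = np.zeros((h + 1, w + 1, 3))
    integ[1:, 1:] = lab.cumsum(0).cumsum(1)

    y = np.arange(h)[:, None]
    x = np.arange(w)[None, :]
    y0 = np.minimum(y, h - 1 - y)               # half-height of max neighborhood
    x0 = np.minimum(x, w - 1 - x)               # half-width
    t, btm = y - y0, y + y0 + 1                 # rectangle bounds (exclusive bottom/right)
    l, r = x - x0, x + x0 + 1
    area = ((btm - t) * (r - l))[..., None]
    box = integ[btm, r] - integ[t, r] - integ[btm, l] + integ[t, l]
    mu = box / area                             # mean color vector I_mu(x, y)
    sal = np.linalg.norm(mu - lab, axis=-1)     # S(x, y) = ||I_mu - I||_2
    rng = sal.max() - sal.min()
    return (sal - sal.min()) / rng * 255 if rng > 0 else np.zeros_like(sal)
```

Note that at the image corners the maximum neighborhood degenerates to the pixel itself, so its saliency is zero.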
step 6) determining the significance values of the K superpixel blocks:
step 6a) taking the average saliency value T of the initial saliency image sm of the image to be detected as a threshold, marking the pixels of sm whose saliency value is greater than the threshold as 1 and the remaining pixels as 0, to obtain the saliency label of each pixel. The average saliency value T is an overall representation of the saliency of the initial saliency image sm, so using it as the threshold yields saliency labels that better reflect how salient each pixel of the image to be detected is:
step 6a1) the average saliency value T of the initial saliency image sm is computed as:

$$T=\frac{\lambda}{w\times h}\sum_{x=0}^{w-1}\sum_{y=0}^{h-1}sm(x,y)$$

where λ is a threshold parameter; among the common values λ = 1, 1.1, 1.2 and 1.4, λ = 1.2 gave the best experimental effect, and sm(x, y) denotes the saliency value at position (x, y) of the initial saliency image;
step 6a2) the saliency label of each pixel is computed as:

$$sm'(x,y)=\begin{cases}1, & sm(x,y)>T\\0, & \text{otherwise}\end{cases}$$

where sm′(x, y) is the saliency label at the position with coordinates (x, y);
step 6b) judge whether more than half of the pixels in each superpixel block have saliency label 1; if so, take 1 as the saliency value $K_l$ of that superpixel block, and otherwise take 0 as $K_l$, obtaining the saliency values of the K superpixel blocks. A superpixel block contains a series of pixels with similar color and brightness, so deciding by whether more than half of its pixel saliency labels are 1 represents the block's saliency more accurately and reduces the interference of non-salient pixels inside the block. $K_l$ is computed as:

$$K_l=\begin{cases}1, & \displaystyle\sum_{(x,y)\in l}sm'(x,y)>\frac{n}{2}\\0, & \text{otherwise}\end{cases}\qquad l=1,2,\ldots,K$$

where n is the number of pixels contained in the l-th superpixel block and K is the number of superpixel blocks;
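Steps 6a and 6b together can be sketched as follows; `superpixel_saliency` is a hypothetical helper, and `lam` corresponds to the threshold parameter λ = 1.2 above:

```python
import numpy as np

def superpixel_saliency(sm, labels, lam=1.2):
    """Threshold the initial saliency map at lam * mean, then give each
    superpixel saliency 1 iff more than half of its pixels pass (majority
    vote), and assign that block value back to every pixel."""
    T = lam * sm.mean()                 # average saliency scaled by lambda
    marks = (sm > T).astype(int)        # per-pixel saliency labels sm'
    k = labels.max() + 1
    ones = np.bincount(labels.ravel(), weights=marks.ravel(), minlength=k)
    total = np.bincount(labels.ravel(), minlength=k)
    block_sal = (ones > total / 2).astype(int)   # majority vote per block K_l
    return block_sal[labels]            # per-pixel assignment of block values
```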
and 7) acquiring and outputting a final saliency image:
assigning the significance value of each super-pixel block in the K super-pixel blocks to each pixel contained in the super-pixel blocks to obtain a significance map SM ', and taking the maximum connected domain in the SM' as a final significance image and outputting:
for each superpixel block, its saliency value is assigned to every pixel it contains as that pixel's saliency value, yielding the saliency map SM′. At this point SM′ contains the detection result of the salient target in the image to be detected together with small non-target areas; the salient target is the target in the image most likely to attract attention. The small non-target areas are removed by taking the maximum connected domain of the image, i.e. the connected region with the largest area (8-connected) among all connected domains of the binary image. Selecting the maximum connected domain improves the accuracy of salient-target detection, and the maximum connected domain of the saliency map SM′ is output as the final detection result.
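The final step can be sketched with SciPy's connected-component labelling, using 8-connectivity as described above; `largest_component` is a hypothetical helper:

```python
import numpy as np
from scipy import ndimage

def largest_component(binary):
    """Keep only the largest 8-connected component of a binary saliency map."""
    labeled, n = ndimage.label(binary, structure=np.ones((3, 3)))  # 8-connectivity
    if n == 0:
        return np.zeros_like(binary)
    sizes = np.bincount(labeled.ravel())
    sizes[0] = 0                       # ignore the background label
    return (labeled == sizes.argmax()).astype(binary.dtype)
```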
The technical effects of the present invention will be further described with reference to simulation experiments.
1. Simulation conditions: the simulations were run on a Windows 10 system using the MATLAB R2014a platform.
2. Simulation content and result analysis.
Simulation 1:
The image to be detected used in the embodiment of the present invention is shown in fig. 2; the salient target in it is a flower, and the non-target area comprises the flower's leaves and branches. Fig. 3 includes the manually marked target result diagram (a) for the image to be detected used in the simulation experiment, the detection result simulation diagram (b) of the prior art, and the detection result simulation diagram (c) of the present invention. Comparing result diagrams (b) and (c) shows that the present invention accurately detects the salient target in the image and suppresses the non-target area well.
Simulation 2:
the average accuracy of the prior art and the present invention on the MSRA1K dataset is shown in the table below, from which it can be seen that the present invention has a significant improvement in accuracy over the prior art.
             Prior art    The invention
Accuracy       0.803          0.847

Claims (7)

1.一种基于最大邻域和超像素分割的图像显著性目标检测方法,其特征在于包括以下步骤:1. an image saliency target detection method based on maximum neighborhood and superpixel segmentation, is characterized in that comprising the following steps: (1)对待检测图像进行超像素分割:(1) Perform superpixel segmentation on the image to be detected: 对待检测图像进行超像素分割,得到K个超像素块并保存,K≥200;Perform superpixel segmentation on the image to be detected, obtain K superpixel blocks and save them, K≥200; (2)统计待检测图像中每种颜色出现的频次:(2) Count the frequency of each color in the image to be detected: 将RGB颜色空间中的三种颜色通道分别划分为N个等份,N≥10,得到N3种颜色,并统计待检测图像中与N3种颜色对应的每种颜色出现的频次;Divide the three color channels in the RGB color space into N equal parts, N≥10, to obtain N 3 colors, and count the occurrence frequency of each color corresponding to the N 3 colors in the image to be detected; (3)对待检测图像进行颜色替代:(3) Replace the color of the image to be detected: 对统计出的所有颜色按照出现频次由大到小的顺序进行排列,并对排序得到的数列中各颜色出现的频次依次进行累加,直到累加结果为待检测图像总像素数M的80%,保留累加结果所包含频次的代表颜色C={Cp1,Cp2,…,Cpi,…,Cpp},同时通过代表颜色C对未参与累加的频次所对应的颜色C′={Ct1,Ct2,…Ctj,…,Ctt}进行替代,得到颜色替代后的图像;Arrange all the counted colors in descending order of occurrence frequency, and accumulate the frequency of occurrence of each color in the sequence obtained by sorting, until the accumulated result is 80% of the total number of pixels M of the image to be detected, keep it The representative color C={C p1 ,C p2 ,...,C pi ,...,C pp } of the frequencies included in the accumulation result, and the color C'={C t1 , C t2 ,…C tj ,…,C tt } is replaced to obtain an image after color substitution; (4)对颜色替代后的图像进行预处理:(4) Preprocess the image after color substitution: 对颜色替代后的图像进行高斯滤波,并对滤波后的图像进行RGB到Lab颜色空间转换,得到Lab空间下预处理后的图像;Perform Gaussian filtering on the color-replaced image, and convert the filtered image from RGB to Lab color space to obtain a preprocessed image in Lab space; (5)计算待检测图像的初始显著性图像:(5) Calculate the initial saliency image of the image to be detected: (5a)对Lab空间下预处理后的图像进行颜色通道分离,得到每个像素点的颜色向量I(x,y),(x,y)是像素点的坐标;(5a) Perform color channel 
separation on the preprocessed image in Lab space, and obtain the color vector I(x, y) of each pixel point, where (x, y) is the coordinate of the pixel point; (5b)计算每个像素点所在位置(x,y)的最大邻域内,即计算以每个像素点所在位置(x,y)为中心点的最大矩形区域内的平均颜色向量Iμ(x,y),并将I(x,y)和Iμ(x,y)差值的二范数作为当前像素点的显著性值;(5b) Calculate the maximum neighborhood of the position (x, y) of each pixel point, that is, calculate the average color vector I μ (x , y), and the two-norm of the difference between I (x, y) and I μ (x, y) is used as the saliency value of the current pixel; (5c)对所有像素点的显著性值进行归一化,得到待检测图像的初始显著性图像sm;(5c) Normalize the saliency values of all pixel points to obtain the initial saliency image sm of the image to be detected; (6)确定K个超像素块的显著性值:(6) Determine the saliency values of the K superpixel blocks: (6a)将待检测图像的初始显著性图像sm的平均显著性值T作为阈值,并将sm中像素点显著性值大于阈值的像素点标记为1,其余的像素点标记为0,得到每个像素点的显著性标签;(6a) The average saliency value T of the initial saliency image sm of the image to be detected is used as the threshold value, and the pixel points in sm whose saliency value is greater than the threshold value are marked as 1, and the rest of the pixels are marked as 0. saliency labels of pixels; (6b)判断每个超像素块内像素点显著性标签为1的像素点是否超过一半,若是,将1作为该超像素块的显著性值Kl,否则,将0作为该超像素块的显著性值Kl,得到K个超像素块的显著性值;(6b) Judging whether the pixels with the saliency label of 1 in each superpixel block are more than half, if so, take 1 as the saliency value K1 of the superpixel block, otherwise, take 0 as the saliency value of the superpixel block The saliency value K l , the saliency values of K superpixel blocks are obtained; (7)获取最终显著性图像并输出:(7) Obtain the final saliency image and output: 将K个超像素块中每个超像素块的显著性值赋于该超像素块包含的每个像素,得到显著性图SM′,并将SM′中的最大连通域作为最终显著性图像并输出。The saliency value of each superpixel block in the K superpixel blocks is assigned to each pixel contained in the superpixel block, and the saliency map SM' is obtained, and the maximum connected domain in SM' is used as the final saliency image. output. 
2.根据权利要求1所述的基于最大邻域和超像素分割的图像显著性目标检测方法,其特征在于,步骤(3)中所述的通过代表颜色C对未参与累加的频次所对应的颜色C′={Ct1,Ct2,…Ctj,…,Ctt}进行替代,实现步骤为:2. the image saliency target detection method based on maximum neighborhood and superpixel segmentation according to claim 1, is characterized in that, described in step (3) by representing color C corresponding to the frequency that does not participate in accumulating. Color C′={C t1 , C t2 ,…C tj ,…,C tt } is replaced, and the implementation steps are: (3a)计算未参与累加的频次所对应的颜色Ctj和代表颜色C={Cp1,Cp2,…,Cpi,…,Cpp}的欧氏距离
Figure FDA0003045378140000026
计算公式为:
(3a) Calculate the Euclidean distance between the color C tj corresponding to the frequency not participating in the accumulation and the representative color C={C p1 ,C p2 ,...,C pi ,...,C pp }
Figure FDA0003045378140000026
The calculation formula is:
Figure FDA0003045378140000021
Figure FDA0003045378140000021
where C_tj,R and C_pi,R denote the R components, C_tj,G and C_pi,G denote the G components, and C_tj,B and C_pi,B denote the B components;
(3b) Select the minimum value d(C_tj, C_p') among the computed Euclidean distances, and replace the color C_tj in the image to be detected with the corresponding color C_p', where d(C_tj, C_p') is selected by:

d(C_tj, C_p') = min_{1<=i<=p} d(C_tj, C_pi)
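Step (3) reduces to a nearest-neighbor lookup in RGB space. A minimal sketch (function names are ours, not from the patent):

```python
def euclidean_rgb(c1, c2):
    """Euclidean distance between two (R, G, B) triples, as in step (3a)."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def nearest_representative(color, representatives):
    """Step (3b): the representative color C_pi minimizing d(C_tj, C_pi)."""
    return min(representatives, key=lambda rep: euclidean_rgb(color, rep))
```

For example, a near-black color (10, 10, 10) is mapped to the representative (0, 0, 0) rather than (255, 255, 255), which is how rare colors get merged into the dominant palette.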
3. The image saliency object detection method based on maximum neighborhood and superpixel segmentation according to claim 1, wherein the RGB-to-Lab color space conversion of the filtered image in step (4) uses the conversion formulas:

[X]   [0.4124  0.3576  0.1805] [R]
[Y] = [0.2126  0.7152  0.0722] [G]
[Z]   [0.0193  0.1192  0.9505] [B]

L = 116·f(Y/Yn) - 16
a = 500·[f(X/Xn) - f(Y/Yn)]
b = 200·[f(Y/Yn) - f(Z/Zn)]

where f(t) = t^(1/3) when t > (6/29)^3 and f(t) = t/(3·(6/29)^2) + 4/29 otherwise, (Xn, Yn, Zn) is the reference white point, R, G, B denote the red, green and blue color components, and L, a, b denote, after the color space conversion, the luminance component and the green-to-red and blue-to-yellow color components, respectively.
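The conversion in claim 3 can be sketched directly from the CIE definition. The exact constants in the patent's formula image are not readable here, so the sketch below assumes the standard linear-RGB-to-XYZ matrix and the D65 reference white; the function name is ours:

```python
def rgb_to_lab(r, g, b):
    """Convert linear RGB in [0, 1] to CIE Lab (standard definition, D65 white).

    Assumed constants: sRGB primaries for the RGB->XYZ matrix and
    (Xn, Yn, Zn) = (0.9505, 1.0, 1.0890); the patent may use a variant.
    """
    # Linear RGB -> XYZ
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    xn, yn, zn = 0.9505, 1.0000, 1.0890

    def f(t):
        # Cube root above the linearity threshold, linear segment below it
        if t > (6 / 29) ** 3:
            return t ** (1 / 3)
        return t / (3 * (6 / 29) ** 2) + 4 / 29

    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

Sanity checks: pure white maps to L near 100 with a and b near 0, and pure black maps to (0, 0, 0), matching the interpretation of L as luminance and a, b as opponent color axes.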
4. The image saliency object detection method based on maximum neighborhood and superpixel segmentation according to claim 1, wherein the color vector I(x, y) of a pixel in step (5a) is obtained as follows:
separate the preprocessed image in the Lab space into the three channels L, a and b; I(x, y) is composed of the luminance component value L(x, y) and the color component values a(x, y) and b(x, y) as:

I(x, y) = (L(x, y), a(x, y), b(x, y))

where (x, y) denotes the coordinates of the pixel.
5. The image saliency object detection method based on maximum neighborhood and superpixel segmentation according to claim 1, wherein the average color vector Iμ(x, y) over the maximum neighborhood of each pixel position (x, y) in step (5b) is computed as:

Iμ(x, y) = (1/A) · Σ_{i=x-x0}^{x+x0} Σ_{j=y-y0}^{y+y0} I(i, j)
x0 = min(x, w - x)
y0 = min(y, h - y)
A = (2·x0 + 1)·(2·y0 + 1)

where w and h denote the width and height of the image to be detected, I(i, j) is the color vector of the pixel with coordinates (i, j), (x, y) denotes the coordinates of the pixel, x0 and y0 denote half the width and half the height of the maximum neighborhood centered at (x, y), and A denotes the total number of pixels contained in the maximum neighborhood centered at (x, y).
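Evaluating the rectangle mean of claim 5 naively costs O(w·h) per pixel; a summed-area (integral) image, a common optimization for exactly this kind of query, brings each mean down to O(1). A minimal single-channel sketch (function names are ours, not from the patent):

```python
def integral_image(img):
    """Summed-area table with a zero border row/column for clean indexing."""
    h, w = len(img), len(img[0])
    ii = [[0.0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = img[y][x] + ii[y][x + 1] + ii[y + 1][x] - ii[y][x]
    return ii

def rect_mean(ii, x1, y1, x2, y2):
    """Mean over the inclusive rectangle [x1, x2] x [y1, y2] in O(1)."""
    s = ii[y2 + 1][x2 + 1] - ii[y1][x2 + 1] - ii[y2 + 1][x1] + ii[y1][x1]
    return s / ((x2 - x1 + 1) * (y2 - y1 + 1))
```

With this, Iμ(x, y) is rect_mean(ii, x - x0, y - y0, x + x0, y + y0) applied per Lab channel, so the whole saliency map costs O(w·h) overall.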
6. The image saliency object detection method based on maximum neighborhood and superpixel segmentation according to claim 1, wherein the average saliency value T of the initial saliency image sm of the image to be detected in step (6a) is computed as:

T = (λ / (w·h)) · Σ_{x=0}^{w-1} Σ_{y=0}^{h-1} sm(x, y)

where λ is a threshold parameter, w and h denote the width and height of the image to be detected, (x, y) denotes the pixel coordinates, and sm(x, y) denotes the saliency value at position (x, y) of the initial saliency image.
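Step (6a) is a scaled-mean binarization. A minimal sketch (the function name is ours; the claim does not give a value for λ, so lam = 1.0 below is our assumption):

```python
def binarize(sm, lam=1.0):
    """Label each pixel 1 if its saliency exceeds T = lam * mean(sm), else 0."""
    h, w = len(sm), len(sm[0])
    T = lam * sum(sum(row) for row in sm) / (w * h)
    return [[1 if v > T else 0 for v in row] for row in sm]
```

For example, on [[0, 0], [0, 8]] the threshold is T = 2.0, so only the bright pixel is labeled 1.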
7. The image saliency object detection method based on maximum neighborhood and superpixel segmentation according to claim 1, wherein the saliency value K_l of a superpixel block in step (6b) is computed as:

K_l = 1 if Σ_{(x,y) in block l} sm'(x, y) > n/2, otherwise K_l = 0,  l = 1, 2, ..., K

where n is the number of pixels contained in the l-th superpixel block, sm'(x, y) is the saliency label of the pixel with coordinates (x, y), and K is the number of superpixel blocks.
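The per-block rule of claim 7 is a majority vote over the labels of step (6a). A minimal sketch (the function name and input layout are ours: labels maps pixel to 0/1, and blocks lists the pixel coordinates of each superpixel):

```python
def block_saliency(labels, blocks):
    """Claim 7: a superpixel is salient iff more than half its pixels are labeled 1."""
    out = []
    for coords in blocks:
        ones = sum(labels[y][x] for (x, y) in coords)      # count of label-1 pixels
        out.append(1 if ones > len(coords) / 2 else 0)     # strict majority vote
    return out
```

Each block's 0/1 result is then broadcast back to its pixels to form SM', from which the largest connected component is kept as the final detection.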
CN201811488182.6A 2018-12-06 2018-12-06 Image saliency object detection method based on maximum neighborhood and superpixel segmentation Active CN109636784B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811488182.6A CN109636784B (en) 2018-12-06 2018-12-06 Image saliency object detection method based on maximum neighborhood and superpixel segmentation

Publications (2)

Publication Number Publication Date
CN109636784A CN109636784A (en) 2019-04-16
CN109636784B true CN109636784B (en) 2021-07-27

Family

ID=66071740


Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110136142A (en) * 2019-04-26 2019-08-16 微梦创科网络科技(中国)有限公司 Image cropping method, device and electronic device
CN110175563B (en) * 2019-05-27 2023-03-24 上海交通大学 Metal cutting tool drawing mark identification method and system
CN110276350B (en) * 2019-06-25 2021-08-27 上海海事大学 Target detection method for marine ship
CN111028259B (en) * 2019-11-15 2023-04-28 广州市五宫格信息科技有限责任公司 Foreground extraction method adapted through image saliency improvement
CN111951949B (en) * 2020-01-21 2021-11-09 武汉博科国泰信息技术有限公司 Intelligent nursing interaction system for intelligent ward
CN111583279A (en) * 2020-05-12 2020-08-25 重庆理工大学 A Superpixel Image Segmentation Method Based on PCBA
CN111860534A (en) * 2020-06-12 2020-10-30 国家海洋局北海预报中心((国家海洋局青岛海洋预报台)(国家海洋局青岛海洋环境监测中心站)) A SAR image oil spill detection method based on image saliency analysis
CN111784703B (en) * 2020-06-17 2023-07-14 泰康保险集团股份有限公司 Image segmentation method and device, electronic equipment and storage medium
CN112418218B (en) * 2020-11-24 2023-02-28 中国地质大学(武汉) Target area detection method, device, equipment and storage medium
CN113469976A (en) * 2021-07-06 2021-10-01 浙江大华技术股份有限公司 Object detection method and device and electronic equipment
CN114638822B (en) * 2022-03-31 2022-12-13 扬州市恒邦机械制造有限公司 Method and system for detecting surface quality of automobile cover plate by using optical means
CN114998290A (en) * 2022-06-20 2022-09-02 佛山技研智联科技有限公司 Fabric flaw detection method, device, equipment and medium based on supervised mode
CN114998320B (en) * 2022-07-18 2022-12-16 银江技术股份有限公司 Method, system, electronic device and storage medium for visual saliency detection

Citations (5)

Publication number Priority date Publication date Assignee Title
CN102855622A (en) * 2012-07-18 2013-01-02 中国科学院自动化研究所 Infrared remote sensing image sea ship detecting method based on significance analysis
US8577182B1 (en) * 2010-07-13 2013-11-05 Google Inc. Method and system for automatically cropping images
CN103390279A (en) * 2013-07-25 2013-11-13 中国科学院自动化研究所 Target prospect collaborative segmentation method combining significant detection and discriminant study
CN105427314A (en) * 2015-11-23 2016-03-23 西安电子科技大学 SAR image target detection method based on Bayesian saliency
CN107169487A (en) * 2017-04-19 2017-09-15 西安电子科技大学 The conspicuousness object detection method positioned based on super-pixel segmentation and depth characteristic

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
AU2012244276A1 (en) * 2012-10-30 2014-05-15 Canon Kabushiki Kaisha Method, apparatus and system for detecting a supporting surface region in an image


Also Published As

Publication number Publication date
CN109636784A (en) 2019-04-16

Similar Documents

Publication Publication Date Title
CN109636784B (en) Image saliency object detection method based on maximum neighborhood and superpixel segmentation
CN109522908B (en) Image Saliency Detection Method Based on Region Label Fusion
CN105761266B (en) The method that Rectangle building is extracted from remote sensing images
CN111340824B (en) An Image Feature Segmentation Method Based on Data Mining
CN105354865B (en) Method and system for automatic cloud detection of multi-spectral remote sensing satellite images
CN108537239B (en) Method for detecting image saliency target
CN110120042B (en) A Method of Extracting Disease and Pest Areas of Crop Images Based on SLIC Superpixels and Automatic Threshold Segmentation
CN112633212B (en) A computer vision-based method for identifying and classifying tea sprouts
CN108629783B (en) Image segmentation method, system and medium based on image feature density peak search
CN106204509B (en) Infrared and visible light image fusion method based on regional characteristics
CN109389163B (en) Unmanned aerial vehicle image classification system and method based on topographic map
CN107392968B (en) Image saliency detection method fused with color contrast map and color space distribution map
CN106851437A (en) A kind of method for extracting video frequency abstract
CN103714181B (en) A kind of hierarchical particular persons search method
CN109446922B (en) Real-time robust face detection method
CN104715251B (en) A kind of well-marked target detection method based on histogram linear fit
CN102254326A (en) Image segmentation method by using nucleus transmission
CN106157330B (en) Visual tracking method based on target joint appearance model
CN110415208A (en) An adaptive target detection method and its device, equipment, and storage medium
CN103778435A (en) Pedestrian fast detection method based on videos
CN104657980A (en) Improved multi-channel image partitioning algorithm based on Meanshift
CN103971367B (en) Hydrologic data image segmenting method
CN108492296B (en) Intelligent counting system and method of wheat ears based on superpixel segmentation
CN115147746B (en) Saline-alkali geological identification method based on unmanned aerial vehicle remote sensing image
CN109886146A (en) Remote sensing intelligent collection method and equipment for flood disaster information based on machine vision detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant