[go: up one dir, main page]

CN113393458A - Hand wound detection method based on wound weighting significance algorithm - Google Patents

Hand wound detection method based on wound weighting significance algorithm Download PDF

Info

Publication number
CN113393458A
CN113393458A CN202110794700.2A CN202110794700A CN113393458A CN 113393458 A CN113393458 A CN 113393458A CN 202110794700 A CN202110794700 A CN 202110794700A CN 113393458 A CN113393458 A CN 113393458A
Authority
CN
China
Prior art keywords
wound
color
image
region
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110794700.2A
Other languages
Chinese (zh)
Inventor
袁玉波
车云云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
East China University of Science and Technology
Original Assignee
East China University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by East China University of Science and Technology filed Critical East China University of Science and Technology
Priority to CN202110794700.2A priority Critical patent/CN113393458A/en
Publication of CN113393458A publication Critical patent/CN113393458A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/162Segmentation; Edge detection involving graph-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30088Skin; Dermal

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

本文提出一种基于伤口加权显著性的手部伤口检测方法。一般情况下,显著性算法对于图像的显著效果对于专门区域的提取相对不明显,并且有很多局限性。本方法主要是先使用GrabCut进行前景提取,然后对手部图像分别进行初始量化、高级颜色筛选及转换颜色空间与GB分割,主要在不同的分割区域可以在Lab空间下进行区域之间距离的影响计算,并引入伤口颜色特征的加权,使伤口更突显,除此之外,还引入移动视觉焦点,计算图像中真正的视觉焦点并重新计算显著值,得到最后的突出伤口的显著图。大量实验证明该方法对于伤口检测很有效。

Figure 202110794700

In this paper, a method for hand wound detection based on wound weighted saliency is proposed. In general, the salient effect of saliency algorithms on images is relatively insignificant for the extraction of specialized regions, and has many limitations. This method mainly uses GrabCut for foreground extraction, and then performs initial quantization, advanced color screening, conversion color space and GB segmentation on the hand image respectively, and the influence of the distance between regions can be calculated in different segmentation regions in Lab space. , and the weighting of the wound color feature is introduced to make the wound more prominent. In addition, the moving visual focus is also introduced, the real visual focus in the image is calculated and the saliency value is recalculated to obtain the final saliency map of the prominent wound. A large number of experiments have proved that this method is very effective for wound detection.

Figure 202110794700

Description

一种基于伤口加权显著性算法的手部伤口检测方法A hand wound detection method based on wound weighted saliency algorithm

技术领域technical field

本发明主要涉及图像处理技术,具体涉及一种基于伤口加权显著性的手部伤口检测方法The invention mainly relates to image processing technology, in particular to a hand wound detection method based on wound weighted significance

背景技术Background technique

近些年来,随着“智慧城市”“智慧厨房”等概念层出不穷,一方面反映了人们对科技进步的需求更胜从前;另一方面,关于手部的研究也随之有了新的发展进程。如今,智慧厨房[4]的建设也是一个热点,缘由在于人们迫切的想要将最新的技术融入到自己的日常生活中,希望自己的生活“高科技”。智慧厨房的设计概念及物化实施过程中包含五个原则,分别是厨房设施智能化、操作人性化、低碳化、交流平台开放化、餐厨一体化。而关于设施智能化最为重要的设施可能就是晨检仪了,晨检仪就是检测厨师手部是否合格的智能设备,将手部内容的检测算法与晨检仪结合在一起才能实现其目的。但是纵观国内外,对于手部的研究无非就是手势的识别与关键点的检测,而对于我们手部内容的研究少之又少。因此在这种相对比较窘迫的情势下,手部彩色图像内容检测技术就显得意义重大。In recent years, with the emergence of concepts such as "smart city" and "smart kitchen", on the one hand, it reflects that people's demand for scientific and technological progress is greater than before; on the other hand, the research on the hand also has a new development process. . Nowadays, the construction of smart kitchen [4] is also a hot topic, because people are eager to integrate the latest technology into their daily life and hope that their life will be "high-tech". The design concept and materialization implementation process of smart kitchen includes five principles, namely, intelligent kitchen facilities, humanized operation, low carbonization, open communication platform, and integration of dining and kitchen. The most important facility for facility intelligence may be the morning detector. The morning detector is an intelligent device that detects whether the chef's hand is qualified. The detection algorithm of the hand content and the morning detector can be combined to achieve its purpose. However, throughout the country and abroad, the research on the hand is nothing more than the recognition of gestures and the detection of key points, and there is very little research on the content of our hands. Therefore, in this relatively embarrassing situation, the detection technology of hand color image content is of great significance.

手部彩色图像内容检测中非常重要的一个内容就是对手部伤口的检测。对于手部彩色图像来说,如果手部有伤口,无论是新鲜伤口还是已经结痂的伤口,都相对于整张手部图像较为显著,在人们的正常视力范围内,是比较显眼的。因此手部彩色图像伤口检测部分主要使用的显著性算法,而对于先将手部提取出来的彩色图像应用则为区域显著性算法。A very important content detection in hand color image content is the detection of hand wounds. For the color image of the hand, if there is a wound on the hand, whether it is a fresh wound or a scabbed wound, it is more conspicuous than the whole hand image, and it is more conspicuous within the normal vision range of people. Therefore, the saliency algorithm is mainly used in the wound detection part of the color image of the hand, and the regional saliency algorithm is applied to the color image extracted from the hand first.

显著性算法比较著名的有基于生物模型融合特征整合理论的Itti算法、基于颜色特征的中央-周边算子视觉显著性检测方法FT算法等,也有近几年出现的基于全局区域对比度RC算法引入引力模型检测人眼中最为显著部分的MRC算法等,但是这些算法对于手部图像中的伤口检测没有比较好的效果,从而基于MRC算法提出DRC显著性算法。The more famous saliency algorithms are the Itti algorithm based on the fusion feature integration theory of biological models, the central-peripheral operator visual saliency detection method FT algorithm based on color features, etc. There are also RC algorithms based on global area contrast that have introduced gravity in recent years. The model detects the MRC algorithm of the most salient part of the human eye, etc., but these algorithms do not have a good effect on wound detection in hand images, so a DRC saliency algorithm is proposed based on the MRC algorithm.

发明内容SUMMARY OF THE INVENTION

本发明的目的在于提出一种基于伤口加权显著性的手部伤口检测方法,以达到在复杂背景的手部图像中相对准确的凸显手部的伤口区域。在该发明中,这个实现过程都是由计算机自动完成的,用户只需要输入手部图像,便可以自己检测图像中的伤口区域。The purpose of the present invention is to propose a hand wound detection method based on wound weighted saliency, so as to achieve relatively accurate highlighting of the wound area of the hand in the hand image with complex background. In this invention, the realization process is automatically completed by the computer, and the user only needs to input the hand image, and then he can detect the wound area in the image by himself.

本发明的技术方案如下:The technical scheme of the present invention is as follows:

步骤S1,使用GrabCut技术对原图像进行提取前景,获得提取到手部的分割图K;Step S1, use the GrabCut technology to extract the foreground of the original image, and obtain the segmentation map K extracted to the hand;

步骤S2,对步骤S1获得的分割图K进行初始量化及高级颜色筛选,得到量化后的视觉差异小但颜色数目减少的图像集K2In step S2, initial quantization and advanced color screening are performed on the segmentation map K obtained in step S1, to obtain an image set K 2 with small visual differences after quantization but with a reduced number of colors;

高级颜色筛选其实是进一步的图像量化,主要目的是为了保证保留原图像的95%的颜色与高质量;Advanced color screening is actually a further image quantization, the main purpose is to ensure that 95% of the original image color and high quality are preserved;

同时对得到的图像集K2进行颜色空间转换,得到图像集K3At the same time, color space conversion is performed on the obtained image set K 2 to obtain an image set K 3 ;

想要从最常见的RGB颜色空间转为Lab颜色空间,中间要经过XYZ颜色空间,转换颜色空间的目的是为了更加符合人类的视觉;If you want to convert from the most common RGB color space to the Lab color space, you need to go through the XYZ color space in the middle. The purpose of converting the color space is to be more in line with human vision;

步骤S3,对步骤S1获得的分割图K使用GB技术进行分割,得到颜色分割图CmIn step S3, the segmentation map K obtained in step S1 is segmented using GB technology to obtain a color segmentation map C m ;

GB分割技术是最常用的一种图像分割技术,主要利用的是像素点与像素点之间的相似度,以及将图像具化为图进行区域划分,并且在分割的过程中可以对不同区域进行不同颜色的着色,以更好的显示分割效果;GB segmentation technology is the most commonly used image segmentation technology. It mainly uses the similarity between pixels and pixels, and converts the image into a map for region division. In the process of segmentation, different regions can be divided into different regions. Coloring in different colors to better display the segmentation effect;

步骤S4,在步骤S2获得的Lab颜色空间图像K3上根据步骤S3获得的颜色分割图Cm计算区域之间的颜色距离,并引入伤口特征加权得到初始显著图K4In step S4, on the Lab color space image K3 obtained in step S2, the color distance between regions is calculated according to the color segmentation map Cm obtained in step S3, and wound feature weighting is introduced to obtain an initial saliency map K4 ;

计算初始显著值主要利用的原理是区域离的越远彼此影响越小,离得越近影响越大;The main principle used to calculate the initial significant value is that the farther the regions are, the less they affect each other, and the closer they are, the greater the impact;

伤口加权主要是计算手部图像的显著区域与标准伤口区域的相似度,越相似就说明这个区域越接近伤口,从而增加这个权重值,更加凸显伤口区域;Wound weighting mainly calculates the similarity between the salient area of the hand image and the standard wound area. The more similar the area is, the closer the area is to the wound, thus increasing the weight value and highlighting the wound area.

步骤S5,对步骤S4得到的初始显著图K4进行重新计算图像视觉焦点与真正的手部伤口显著图K5In step S5, the initial saliency map K4 obtained in step S4 is recalculated for the visual focus of the image and the real hand wound saliency map K5 .

初始视觉焦点通常是图像的中心,因为大部分图像的重心不与中心重合,为了更加准确的计算显著值,在这里引入显著性引力模型进行最终视觉焦点的计算,主要计算该点的方向与距离得到最终的视觉焦点。The initial visual focus is usually the center of the image, because the center of gravity of most images does not coincide with the center. In order to calculate the saliency value more accurately, the saliency gravity model is introduced here to calculate the final visual focus, mainly calculating the direction and distance of the point. Get the ultimate visual focus.

附图说明Description of drawings

读者在参照附图阅读了本发明的具体实施方式以后,将会更清楚地了解本发明的各个方面。其中,Various aspects of the present invention will be more clearly understood by the reader after reading the detailed description of the invention with reference to the accompanying drawings. in,

图1是本发明基于MRC显著性算法的DRC显著性算法的流程图;Fig. 1 is the flow chart of the DRC saliency algorithm based on the MRC saliency algorithm of the present invention;

图2为本发明的具体实施过程图;Fig. 2 is a specific implementation process diagram of the present invention;

图3为最终和其他方法伤口检测对比图。Figure 3 is a comparison chart of wound detection between the final and other methods.

具体实施方式Detailed ways

步骤S1,使用GrabCut技术对原图像进行提取前景,获得提取到手部的分割图像;Step S1, use the GrabCut technology to extract the foreground of the original image, and obtain the segmented image extracted to the hand;

步骤S2,对步骤S1获得的分割图进行初始量化,其量化过程为在RGB颜色空间下将图像中每个通道的颜色值量化为12个不同的值,初始量化算子为:P:Rm×n→Rm×n,对于每一幅图像的每一个像素来说,量化后的颜色值为:In step S2, initial quantization is performed on the segmentation image obtained in step S1, and the quantization process is to quantize the color value of each channel in the image into 12 different values in the RGB color space, and the initial quantization operator is: P:R m ×n →R m×n , for each pixel of each image, the quantized color value is:

Figure BDA0003162219760000031
Figure BDA0003162219760000031

其中,s=1,2,…,m;t=1,2,…,n,

Figure BDA0003162219760000032
分别表示原始图像中像素点在RGB空间每个通道的颜色值。
Figure BDA0003162219760000033
分别表示量化后图像中每个像素点在RGB空间的颜色通道值;Int(.)表示浮点型转换为整型。Among them, s=1,2,...,m; t=1,2,...,n,
Figure BDA0003162219760000032
Respectively represent the color value of each channel of the pixel in the original image in the RGB space.
Figure BDA0003162219760000033
Respectively represent the color channel value of each pixel in the RGB space after quantization; Int(.) represents the conversion of floating point type to integer type.

经过初级量化后的图像是达不到我们的要求的,为了保证量化后的图像包含原图像95%的像素点且不影响质量,因此还需要高级颜色筛选,也就是进一步量化。对于每一个像素

Figure BDA0003162219760000034
量化函数如下:The image after primary quantization cannot meet our requirements. In order to ensure that the quantized image contains 95% of the pixels of the original image and does not affect the quality, advanced color screening is also required, that is, further quantization. for every pixel
Figure BDA0003162219760000034
The quantization function is as follows:

Figure BDA0003162219760000035
Figure BDA0003162219760000035

其中,

Figure BDA0003162219760000036
分别表示量化后图像中像素点在RGB空间的每个通道的颜色值。in,
Figure BDA0003162219760000036
Respectively represent the color value of each channel in the RGB space of the pixel in the quantized image.

经过量化的图像集颜色数目大大减少,从而减少了计算的时间,加快了速度。The number of colors in the quantized image set is greatly reduced, which reduces the calculation time and speeds up the speed.

同时,要计算图像的显著值,需要先将图像的颜色空间进行转变,转换过程为先将图像从RGB颜色空间转换到XYZ颜色空间下,转换公式为:At the same time, to calculate the salient value of the image, it is necessary to convert the color space of the image first. The conversion process is to convert the image from the RGB color space to the XYZ color space first. The conversion formula is:

Figure BDA0003162219760000037
Figure BDA0003162219760000037

其中,

Figure BDA0003162219760000038
分别表示的是经过量化后得到的图像中像素点
Figure BDA0003162219760000039
在RGB颜色空间下三个颜色通道(RGB)的颜色值,
Figure BDA00031622197600000310
分别表示的是经过以上公式转换得到的图像中像素点Hs,t在XYZ颜色空间下三个颜色通道(XYZ)的颜色值。并且B为转换矩阵,定义如下:in,
Figure BDA0003162219760000038
Respectively represent the pixels in the image obtained after quantization
Figure BDA0003162219760000039
The color values of the three color channels (RGB) in the RGB color space,
Figure BDA00031622197600000310
Respectively represent the color values of the three color channels (XYZ) of the pixel point H s,t in the XYZ color space in the image converted by the above formula. And B is the transformation matrix, defined as follows:

Figure BDA00031622197600000311
Figure BDA00031622197600000311

然后将图像从XYZ颜色空间转换到Lab颜色空间下,转换公式如下:Then convert the image from XYZ color space to Lab color space, the conversion formula is as follows:

Figure BDA0003162219760000041
Figure BDA0003162219760000041

Figure BDA0003162219760000042
Figure BDA0003162219760000042

其中,

Figure BDA0003162219760000043
分别表示的是经过转换后得到的图像中像素点
Figure BDA0003162219760000044
在Lab颜色空间下每个通道的颜色值。in,
Figure BDA0003162219760000043
Respectively represent the pixels in the image obtained after conversion
Figure BDA0003162219760000044
The color value of each channel in the Lab color space.

转换颜色空间的目的是为了更符合人类视觉对颜色的差异感知,方便计算显著值。The purpose of converting the color space is to be more in line with the difference perception of color by human vision, and to facilitate the calculation of salient values.

步骤S3,对步骤S1获得的分割图进行GB分割,其核心思想是基于图的贪心聚类算法,将图像具象为图,在顶点、边、权、树、最小生成树等概念上进行相似像素点的区域划分,这样就得到了一块块的分割区域图像。In step S3, GB segmentation is performed on the segmentation graph obtained in step S1. The core idea is a graph-based greedy clustering algorithm, which concretizes the image as a graph, and performs similar pixels in concepts such as vertices, edges, weights, trees, and minimum spanning trees. The area of the point is divided, so that the image of the segmented area of the block is obtained.

[00]像素点的划分规则核心主要是利用不相似度阈值,其公式为:[00] The core of the division rule of pixels is to use the dissimilarity threshold, and its formula is:

Figure BDA0003162219760000045
Figure BDA0003162219760000045

其中,wi,j为当前合并后区域最大的边,即Int(Ci∪Cj)=wi,j。Cx代表各个顶点。Wherein, w i,j is the largest edge of the current merged region, that is, Int(C i ∪C j )=wi ,j . Cx represents each vertex.

步骤S4,得到步骤S3的分割图与步骤S2的Lab空间图便可以根据距离近的区域影响大远的影响小来进行初步的显著值计算,计算公式如下:In step S4, after obtaining the segmentation map of step S3 and the Lab space map of step S2, a preliminary significant value calculation can be performed according to the area with a short distance and a small influence, and the calculation formula is as follows:

Figure BDA0003162219760000046
Figure BDA0003162219760000046

其中,M(ck)表示的是分割区域ck的显著值;w(ci)是区域ci的权值,代表着其对区域ck显著值的影响;Lc(ck,ci)是区域ck和ci之间的空间距离,也就是加入的这个空间信息;σ2用来控制空间权值强度,如果σ2值过大,随之空间权值的影响强度也会增加,并且距离较远的区域也会对当前区域有较强的影响,这儿σ2的值取0.4;其中Ll(ck,ci)表示的是区域ck和ci之间的颜色距离度量,而f(nk,i)是分割区域ck中第i种颜色在该区域所有颜色nk出现的频数,k={1,2};n1和n2分别表示区域c1和区域c2的颜色总数目。Among them, M(c k ) represents the significant value of the segmented region c k ; w(c i ) is the weight of the region c i , representing its influence on the significant value of the region c k ; L c (c k ,c i ) is the spatial distance between the regions ck and ci , that is, the added spatial information; σ 2 is used to control the strength of the spatial weight, if the value of σ 2 is too large, the influence of the spatial weight will also be increases, and the farther area will also have a strong influence on the current area, where the value of σ 2 is 0.4; where L l ( ck , ci ) represents the color between the area ck and ci Distance metric, and f(n k,i ) is the frequency of the i-th color in the segmented area ck appearing in all colors n k in this area, k={1,2}; n 1 and n 2 represent the area c 1 respectively and the total number of colors in area c 2 .

为了更加凸显计算伤口对于图像显著性的影响,计算初始显著值引入了伤口权重,将伤口图像与图像集中的分割区域进行相似度计算,越相似权重越大,从而增加伤口的显著性,并将伤口权重w(ci)重新命名为W(ci)且其计算公式为:In order to highlight the influence of calculating the wound on the saliency of the image, the wound weight is introduced to calculate the initial saliency value, and the similarity between the wound image and the segmented area in the image set is calculated. The wound weight w (ci ) is renamed W( ci ) and its calculation formula is:

Figure BDA0003162219760000051
Figure BDA0003162219760000051

其中,l(ci,cj)指的是分割区域ci与标准伤口图像cj的相似度。Wherein, l( ci , c j ) refers to the similarity between the segmented region c i and the standard wound image c j .

在计算伤口图像的相似度过程中,使用的是伤口的颜色特征,一阶颜色距,因为这个值可以反映图片的整体敏感程度,计算公式如下:In the process of calculating the similarity of the wound image, the color feature of the wound and the first-order color distance are used, because this value can reflect the overall sensitivity of the image, and the calculation formula is as follows:

Figure BDA0003162219760000052
Figure BDA0003162219760000052

其中,D代表的是某一指定区域像素点的个数,x代表的是指定区域中像素点的值。Among them, D represents the number of pixels in a specified area, and x represents the value of pixels in the specified area.

步骤S5,对步骤S4得到的初始显著图进行重新计算显著值,具体主要引入移动视觉焦点,这是因为大多图像的重心不在图像中心,使用图像中心来作为视觉焦点计算显著值会有一定偏差。为了得到最终的视觉焦点,引入显著性引力模型,每个像素点都会受到引力的作用,分为水平方向和垂直方向的引力:In step S5, the initial saliency map obtained in step S4 is recalculated for the saliency value. Specifically, the moving visual focus is mainly introduced. This is because the center of gravity of most images is not in the center of the image, and the calculation of the saliency value using the image center as the visual focus will have a certain deviation. In order to get the final visual focus, a saliency gravity model is introduced, and each pixel will be affected by gravity, which is divided into horizontal and vertical gravity:

Figure BDA0003162219760000053
Figure BDA0003162219760000053

其中,P指的是万有引力,而θ指的是当前像素点与视觉焦点的角度。Among them, P refers to the gravitational force, and θ refers to the angle between the current pixel and the visual focus.

Figure BDA0003162219760000054
Figure BDA0003162219760000054

其中,ax和ay表示的是当前像素点的坐标(水平方向和垂直方向);VPx和VPy表示的是视觉焦点的坐标(水平方向和垂直方向);O(ai,VP)指的便是当前像素点ai到视觉焦点VP的欧式距离。Among them, a x and a y represent the coordinates of the current pixel (horizontal and vertical directions); VP x and VP y represent the coordinates of the visual focus (horizontal and vertical directions); O(a i , VP) It refers to the Euclidean distance from the current pixel point a i to the visual focus VP.

要得到最后的视觉焦点,便要计算初始焦点与最终视觉焦点的距离,主要通过力的移动计算,两个力的方向相反则朝力更大的方向移动。To get the final visual focus, it is necessary to calculate the distance between the initial focus and the final visual focus, which is mainly calculated by the movement of the force, and the direction of the two forces is opposite to the direction of the larger force.

最后实用最终的视觉焦点来重新计算显著值,最终的计算公式为:Finally, the final visual focus is used to recalculate the saliency value. The final calculation formula is:

Figure BDA0003162219760000061
Figure BDA0003162219760000061

Jf=exp(-τ×Lc(ck,VPf)) (13)J f =exp(-τ×L c (c k ,VP f )) (13)

其中,Lc(ck,VPf)表示的是分割区域ck与最终的视觉焦点VPf之间的空间距离,τ用来控制视觉焦点的强度,增加τ视觉焦点的作用力也会变大,导致Jf增大,从而增加靠近视觉焦点的区域的显著值,从而更容易将视觉焦点周围区域提取出来。Among them, L c (c k , VP f ) represents the spatial distance between the segmented area ck and the final visual focus VP f , τ is used to control the intensity of the visual focus, and the force of increasing the τ visual focus will also increase , resulting in an increase in J f , which increases the salient value of the area close to the visual focus, making it easier to extract the area around the visual focus.

Claims (6)

1. A hand wound detection method based on wound weighted significance is characterized by comprising the following steps: wound weighting the saliency images, performing the following steps:
step S1, extracting the foreground of the original image by using GrabCont technology to obtain a segmentation image K for extracting the hand;
step S2, the segmentation chart K obtained in step S1 is initially quantized and subjected to advanced color screening to obtain an image set K with small visual difference and reduced color number after quantization2And performing color space conversion to obtain an image set K3
Step S3, the segmentation chart K obtained in the step S1 is segmented by using the GB technology to obtain a color segmentation chart Cm
Step S4, Lab color space image K obtained in step S23The color segmentation chart C obtained according to the step S3mCalculating color distance between regions and introducing wound featuresWeighting to obtain an initial saliency map K4
Step S5, the initial saliency map K obtained in step S44Recalculating visual focus of image and actual salient map K of hand wound5
2. The wound-weighted significance-based hand wound detection method of claim 1, wherein: in step S2, the segmentation map K obtained in step S1 is initially quantized, in which the color value of each channel in the image is quantized into 12 different values in RGB color space, and the initial quantization operator is: p is Rm×n→Rm×nFor each pixel of each image, the quantized color value is:
Figure FDA0003162219750000011
wherein s is 1,2, …, m; t is 1,2, …, n,
Figure FDA0003162219750000012
and respectively representing the color value of each channel of the pixel points in the original image in the RGB space.
Figure FDA0003162219750000013
Respectively representing the color channel value of each pixel point in the quantized image in the RGB space; int (.) denotes the conversion of floating point type to integer type.
In order to ensure that the quantized image contains 95% of the pixels of the original image and does not affect the quality, further quantization, i.e. advanced color screening, is required for each pixel
Figure FDA0003162219750000014
The quantization function is as follows:
Figure FDA0003162219750000015
wherein,
Figure FDA0003162219750000016
and respectively representing the color value of each channel of the pixel point in the quantized image in the RGB space.
3. The wound-weighted significance-based hand wound detection method of claim 1, wherein: step S2 of the image set K obtained in step two2The color space conversion is carried out, the RGB color space is converted into the Lab color space, and the main purpose is to better accord with the difference perception of human vision to color and facilitate calculation. The conversion process requires an excess space XYZ.
When converting from RGB color space to XYZ color space, the conversion formula is:
Figure FDA0003162219750000021
wherein,
Figure FDA0003162219750000022
respectively represent the pixel points in the image obtained after quantization
Figure FDA0003162219750000023
Color values of three color channels (RGB) in the RGB color space,
Figure FDA0003162219750000024
respectively represents the pixel points H in the image obtained by the conversion of the formulas,tColor values of three color channels (XYZ) in the XYZ color space. And B is a transformation matrix defined as follows:
Figure FDA0003162219750000025
then, the image is converted from the XYZ color space to the Lab color space, and the conversion formula is as follows:
Figure FDA0003162219750000026
Figure FDA0003162219750000027
wherein,
Figure FDA0003162219750000028
respectively representing the pixel points in the image obtained by conversion
Figure FDA0003162219750000029
Color values for each channel in Lab color space.
4. The wound-weighted significance-based hand wound detection method of claim 1, wherein: and step S3, performing GB segmentation on the image set K obtained in the step S1, and realizing region judgment on each vertex by the algorithm based on a greedy clustering algorithm of the image to divide the similar pixel points into a region, thereby finally obtaining the segmentation regions with different colors.
5. The wound-weighted significance-based hand wound detection method of claim 1, wherein: the Lab color space image K obtained in step S23The color segmentation chart C obtained according to the step S3mCalculating the color distance between the regions, wherein the obvious value calculation formula is as follows:
Figure FDA0003162219750000031
wherein, M (c)k) Showing a divided region ckA significance value of;w(ci) Is a region ciThe weight of (c) represents its corresponding region ckThe effect of significance; l isc(ck,ci) Is a region ckAnd ciThe spatial distance between them, i.e. this spatial information added; sigma2For controlling the spatial weight strength, if σ2If the value is too large, the influence strength of the space weight will also increase, and the regions with longer distance will also have stronger influence on the current region, where σ2The value of (A) is 0.4; wherein L isl(ck,ci) Denotes the region ckAnd ciThe color distance measure between, and f (n)k,i) Is a divided region ckAll colors n in the region of the ith colorkThe frequency of occurrence, k ═ 1,2 }; n is1And n2Respectively represent the regions c1And region c2Total number of colors of (a).
Calculating the initial significance value introduces wound weight, similarity calculation is carried out on the wound image and the segmentation region in the image set, the more similar weight is larger, the significance of the wound is increased, and the wound weight w (c)i) Renamed as W (c)i) And the calculation formula is as follows:
Figure FDA0003162219750000032
wherein l (c)i,cj) Refers to the divided region ciAnd standard wound image cjThe similarity of (c).
6. The wound-weighted significance-based hand wound detection method of claim 1, wherein: for the initial saliency map K obtained in step S44Recalculating the visual focus of the image, and obtaining a final hand wound saliency map, wherein the formula is as follows:
Figure FDA0003162219750000033
Jf=exp(-τ×Lc(ck,VPf)) (9)
wherein L isc(ck,VPf) Showing a divided region ckAnd the final visual focus VPfThe spatial distance between τ and τ is used to control the intensity of the visual focus, and the force to increase τ is also greater, resulting in JfIncreasing thereby increasing the saliency of the area near the visual focus and thus making it easier to focus the area around the visual focus.
CN202110794700.2A 2021-07-14 2021-07-14 Hand wound detection method based on wound weighting significance algorithm Pending CN113393458A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110794700.2A CN113393458A (en) 2021-07-14 2021-07-14 Hand wound detection method based on wound weighting significance algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110794700.2A CN113393458A (en) 2021-07-14 2021-07-14 Hand wound detection method based on wound weighting significance algorithm

Publications (1)

Publication Number Publication Date
CN113393458A true CN113393458A (en) 2021-09-14

Family

ID=77626079

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110794700.2A Pending CN113393458A (en) 2021-07-14 2021-07-14 Hand wound detection method based on wound weighting significance algorithm

Country Status (1)

Country Link
CN (1) CN113393458A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103810707A (en) * 2014-01-28 2014-05-21 华东理工大学 Mobile visual focus based image vision salient detection method
CN108573221A (en) * 2018-03-28 2018-09-25 重庆邮电大学 A Vision-based Saliency Detection Method for Robotic Target Parts
CN110866896A (en) * 2019-10-29 2020-03-06 中国地质大学(武汉) Image saliency object detection method based on k-means and level set superpixel segmentation
WO2020082686A1 (en) * 2018-10-25 2020-04-30 深圳创维-Rgb电子有限公司 Image processing method and apparatus, and computer-readable storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103810707A (en) * 2014-01-28 2014-05-21 华东理工大学 Mobile visual focus based image vision salient detection method
CN108573221A (en) * 2018-03-28 2018-09-25 重庆邮电大学 A Vision-based Saliency Detection Method for Robotic Target Parts
WO2020082686A1 (en) * 2018-10-25 2020-04-30 深圳创维-Rgb电子有限公司 Image processing method and apparatus, and computer-readable storage medium
CN110866896A (en) * 2019-10-29 2020-03-06 中国地质大学(武汉) Image saliency object detection method based on k-means and level set superpixel segmentation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
张文康;朱倩;陈潇君;: "拟人视觉系统的显著性检测方法", 电子技术应用, no. 11 *
李然;李记鹏;宋超;: "基于显著性检测的协同图像分割研究", 现代计算机(专业版), no. 24 *

Similar Documents

Publication Publication Date Title
CN108197587B (en) Method for performing multi-mode face recognition through face depth prediction
CN109815826B (en) Method and device for generating face attribute model
WO2020108362A1 (en) Body posture detection method, apparatus and device, and storage medium
CN106920243B (en) Sequenced Image Segmentation Method of Ceramic Material Parts with Improved Fully Convolutional Neural Network
CN111680706B (en) A Two-channel Output Contour Detection Method Based on Encoding and Decoding Structure
CN110853026B (en) Remote sensing image change detection method integrating deep learning and region segmentation
CN108537239B (en) Method for detecting image saliency target
CN108280397B (en) Human body image hair detection method based on deep convolutional neural network
CN108491835A (en) Binary channels convolutional neural networks towards human facial expression recognition
CN110544251A (en) Dam crack detection method based on multi-transfer learning model fusion
CN109446922B (en) Real-time robust face detection method
CN104361313B (en) A kind of gesture identification method merged based on Multiple Kernel Learning heterogeneous characteristic
CN107066916B (en) Scene semantic segmentation method based on deconvolution neural network
CN111080656A (en) Image processing method, image synthesis method and related device
CN109657612B (en) Quality sorting system based on facial image features and application method thereof
CN109359527B (en) Hair region extraction method and system based on neural network
CN101211356A (en) An Image Query Method Based on Salient Regions
CN110827304B (en) Traditional Chinese medicine tongue image positioning method and system based on deep convolution network and level set method
CN102117413A (en) Automatic filtering method of bad image based on multi-layer feature
CN109993803A (en) An Intelligent Analysis and Evaluation Method of Urban Tones
CN107169508A (en) A kind of cheongsam Image emotional semantic method for recognizing semantics based on fusion feature
CN108230297B (en) Color collocation assessment method based on garment replacement
CN112950780A (en) Intelligent network map generation method and system based on remote sensing image
CN112906550A (en) Static gesture recognition method based on watershed transformation
CN109741358B (en) A Superpixel Segmentation Method Based on Adaptive Hypergraph Learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210914