CN101246554A - Multi-object Image Segmentation Method Based on Pixel Labeling - Google Patents
- Publication number
- CN101246554A (application numbers CNA2008101017042A, CN200810101704A)
- Authority
- CN
- China
- Prior art keywords
- target
- pixel
- scanning
- image
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention discloses a multi-target image segmentation method based on pixel labeling. The specific steps are as follows: take the image captured by a video imaging sensor as the input signal; binarize the image; perform a first scan over the binarized image; build the equivalence array; perform a second scan to obtain information such as the number of targets and their area, perimeter and position coordinates. The invention can perform target labeling and edge labeling on a full-field-of-view binary image and is therefore fairly general, providing a sufficient basis for subsequent high-level processing such as computing target feature quantities and image understanding. Moreover, the invention is not limited by the number of targets, and the processing area can be freely reduced or enlarged according to the available hardware resources, giving it great flexibility.
Description
Technical Field
The invention belongs to the field of photoelectric detection and tracking measurement, and in particular relates to a description method based on multi-target segmentation, used for multi-target labeling and feature extraction in imaging video tracking.
Background Art
In image processing, image segmentation is an important step: it extracts the targets the user cares about (also called the foreground) and provides the data needed for subsequent image understanding. Image segmentation methods mainly include edge-based methods, region-based methods, and hybrid methods that combine the two.
One purpose of image segmentation is to lay the foundation for automatic target recognition, so extracting target feature quantities is of vital importance to the system. To this end, parameters such as the area, perimeter, centroid coordinates and even the position coordinates of every pixel of each segmented target must be obtained at the same time. Commonly used methods for this are run-length connectivity analysis and pixel labeling. Run-length connectivity analysis labels targets by analyzing the connectivity of runs obtained from successive scan lines; it requires the image to be preprocessed into blocks of pixels with equal grey level, i.e. the runs must already exist before scanning. Pixel labeling, by contrast, scans the binary image without any prior information and decides which target each pixel belongs to from the connectivity between pixels, so that in the end every pixel carries a label stating which target it belongs to. The area, centroid coordinates and other features of each target can then be obtained by accumulating statistics over its pixels.
Summary of the Invention
Technical problem to be solved: the invention extends the traditional pixel labeling method so that every pixel carries not only a label stating which target it belongs to but also a flag stating whether it is a boundary point of that target. By counting these boundary points, the perimeter of the target's single-pixel-wide boundary is obtained, avoiding the inaccurate perimeters produced when a multi-pixel-wide boundary from edge detection is counted. Given the importance of the perimeter in target measurement (for example in measuring circularity), this improvement is meaningful.
The technical solution adopted by the invention to solve this problem is a multi-target image segmentation method based on pixel labeling, characterized by the following steps:
(1) Take the image captured by the video imaging sensor as the input signal (denoted lpraw);
(2) Obtain a binary image (denoted lpImg01) from the captured image by threshold segmentation;
Let lpImg01[i][j] denote the pixel value at coordinate (i, j): lpImg01[i][j]=0 means the point (i, j) is part of the background, and lpImg01[i][j]=1 means the point (i, j) belongs to a target region. Further, let Perimeter[m] denote the perimeter of the m-th target; Area[m] the area of the m-th target; Posx[m] the x position coordinate of the m-th target in the whole image; Posy[m] the y position coordinate of the m-th target in the whole image; Object[i][j] which target the pixel at (i, j) belongs to; Edge[i][j] whether the pixel at (i, j) is a boundary point of its target; Equ[i][j]=1 that pixels with target attributes i and j belong to the same target; and Access[i] that pixels whose target attribute is i after the first scan actually belong to target number Access[i];
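As a concrete illustration of this bookkeeping, a minimal set of declarations in C might look as follows; the image size IMG_H×IMG_W and the label bound MAX_LABELS are assumed values for the sketch, not part of the method:

```c
#include <string.h>

#define IMG_H      480    /* image height (assumed for the sketch)        */
#define IMG_W      640    /* image width  (assumed for the sketch)        */
#define MAX_LABELS 1024   /* upper bound on first-scan labels (assumed)   */

static unsigned char lpImg01[IMG_H][IMG_W];   /* binary image: 0 = background, 1 = target  */
static int           Object[IMG_H][IMG_W];    /* which target each pixel belongs to        */
static unsigned char Edge[IMG_H][IMG_W];      /* 1 if the pixel is a target boundary point */
static unsigned char Equ[MAX_LABELS][MAX_LABELS]; /* Equ[i][j]=1: labels i, j, same target */
static int           Access[MAX_LABELS];      /* first-scan label -> final target number   */
static int           Area[MAX_LABELS];        /* per-target pixel count                    */
static int           Perimeter[MAX_LABELS];   /* per-target boundary-pixel count           */
static double        Posx[MAX_LABELS];        /* per-target centroid, row direction        */
static double        Posy[MAX_LABELS];        /* per-target centroid, column direction     */

/* Clear all labelling state before processing a new frame. */
static void reset_labels(void)
{
    memset(Object,    0, sizeof Object);
    memset(Edge,      0, sizeof Edge);
    memset(Equ,       0, sizeof Equ);
    memset(Access,    0, sizeof Access);
    memset(Area,      0, sizeof Area);
    memset(Perimeter, 0, sizeof Perimeter);
}
```

The later sketches in this description operate on arrays of the same shape, passed in as parameters.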
(3) Perform a first scan of the resulting binarized image;
The scan proceeds from left to right and from top to bottom. Let the variable TTs count the number of distinct target labels issued during the first scan. When the scan reaches pixel t, the points at positions T1–T4 have already been scanned, so the relationship between t and these four points must be considered. When the grey level of t is 1, i.e. t is a target pixel, the connectivity between the current pixel and its 8-neighbourhood is handled as follows:
(1) When the grey values at T1–T4 are all zero, the point has no connectivity with the four previously scanned neighbouring pixels, so the current pixel is given a new label: TTs is incremented by 1 and assigned to Object[i][j] of the current pixel t, i.e. Object[i][j]=TTs;
(2) When exactly one of the grey values at T1–T4 is 1, let Tm (m=1, 2, 3, 4, Tm non-zero) be that pixel; the Object value of Tm is assigned to the Object value of t, meaning the current point has the same target attribute as Tm and belongs to the same target as Tm;
(3) When more than one of the grey values at T1–T4 is 1, the smallest Object value among the non-zero Tm (m=1, 2, 3, 4), denoted Object_min, is assigned to the Object value of the current point t (let Object_min denote the smallest and Object_max the largest Object value among the Tm). At the same time, Equ[Object_min][Object_max]=1 is set in the equivalence matrix (with Object_min<Object_max), indicating that pixels whose Object values after the first scan are Object_min and Object_max actually belong to the same target.
At the same time, for the boundary attribute of a pixel, let the currently scanned pixel be t. If at least one of Tm (m=1–8) has grey value zero, then Edge[i][j] of t is set to 1, i.e. the boundary attribute of the pixel at coordinate (i, j) is set to 1, marking it as a boundary point; otherwise Edge[i][j]=0, marking it as a non-boundary point. Thus, after the first scan, the single-pixel-wide edge information of every target has been obtained.
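For the boundary attribute, a small C sketch of the 8-neighbour test might read as follows; treating pixels outside the image as background is an assumption the text does not state explicitly:

```c
#include <stdbool.h>

/* Returns true if the target pixel at (i, j) has at least one background
 * pixel among its eight neighbours T1..T8.  Pixels outside the image are
 * treated as background (an assumption for handling the image border). */
bool is_boundary_point(int H, int W, const unsigned char img[H][W], int i, int j)
{
    for (int di = -1; di <= 1; di++) {
        for (int dj = -1; dj <= 1; dj++) {
            if (di == 0 && dj == 0)
                continue;                      /* skip the centre pixel t    */
            int ni = i + di, nj = j + dj;
            if (ni < 0 || ni >= H || nj < 0 || nj >= W || img[ni][nj] == 0)
                return true;                   /* a zero neighbour: Edge = 1 */
        }
    }
    return false;                              /* all eight neighbours are 1 */
}
```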
(4) Organize the discrete equivalence matrix obtained from the first scan into the planned equivalence array;
After the first scan, let all target attributes belonging to the same target be denoted TT1, TT2, ..., TTn with TT1<TT2<...<TTn. The equivalence array then falls into one of three cases:
(1) All target attributes already satisfy Equ[TT1][TTi]=1, i=2, 3, ..., n; no further processing is needed;
(2) Equ[TT1][TTi]=1 and Equ[TTi][TTj]=1 hold; then Equ[TT1][TTj]=1 must be set;
(3) Equ[TT1][TTj]=1 and Equ[TTi][TTj]=1 hold; then Equ[TT1][TTi]=1 must be set;
After this processing the expected requirement is fully met and the second scan is fully prepared: every label belonging to a given target now has an equivalence relation with the smallest label of that target.
(5) Then perform a second scan to obtain target information such as the area, perimeter and position coordinates of the desired targets;
After the first scan and the planning of the equivalence array, the second scan is performed in two steps. Step 1: according to the processed equivalence array Equ, all pixels belonging to the same target are given the same target attribute value Object, namely the smallest label belonging to that target after the first scan; at the same time it is recorded which target (counting from top to bottom and from left to right in the image) this smallest label points to. Step 2 finishes the work: using the Access array, the target attribute Object of every pixel is finally replaced by the true target number, i.e. for the target attribute value Object of every pixel, if Access[Object[i][j]]>0 then Object[i][j]=Access[Object[i][j]]; otherwise Object keeps its original value. At the same time, from Object[i][j] and Edge[i][j], target characteristic parameters such as the actual area, perimeter and centroid coordinates of each target are computed and the target edge contours are outlined.
Compared with the prior art, the invention has the following advantages:
1. Using labeling based on the 8-neighbourhood of each pixel, the single-pixel-wide edge information of each target is obtained, and hence the perimeter of the target's single-pixel-wide boundary, avoiding the inaccurate perimeters produced by counting a multi-pixel-wide boundary after edge detection;
2. The invention accurately provides information such as target area and position coordinates, laying the foundation for further image analysis. Target feature extraction is accurate, and the method is simple, real-time and practical;
3. The number of targets counted by the invention can be arbitrarily large, limited only by the hardware resources of the real-time system.
Brief Description of the Drawings
Figure 1 is a flow chart of the pixel labeling algorithm of the invention;
Figure 2 is the scanning relationship diagram based on 8-neighbourhood pixel labeling in the invention;
In the figure, t denotes the currently scanned pixel, and T1–T8 denote the neighbouring pixels of t in its 8-connected neighbourhood;
Figure 3 is a flow chart of step 1 of the second scan in the invention.
Detailed Description of the Embodiments
The invention is described in detail below with reference to the accompanying drawings and a specific embodiment.
The invention labels multiple targets and, while obtaining the single-pixel-wide target boundaries, acquires characteristic signals such as target perimeter, area and position coordinates.
The flow chart of the implementation is shown in Figure 1; the specific steps are as follows:
(1) First, the image captured by the video imaging sensor is taken as the input signal;
(2) The original input video signal is then binarized, producing a binary image represented by '0' and '1';
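A minimal binarization sketch in C; the fixed threshold THRESH is an assumption, since the patent leaves the choice of threshold open:

```c
#define THRESH 128   /* assumed fixed threshold; the method does not fix how it is chosen */

/* Produce the 0/1 image lpImg01 from the raw grey-level input lpraw. */
void binarize(int H, int W, const unsigned char lpraw[H][W], unsigned char lpImg01[H][W])
{
    for (int i = 0; i < H; i++)
        for (int j = 0; j < W; j++)
            lpImg01[i][j] = (lpraw[i][j] >= THRESH) ? 1 : 0;
}
```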
(3) The binarized image is then scanned for the first time, from left to right and from top to bottom (the origin of the image is taken to be its upper-left corner);
The first scan gives every pixel of the binarized image its first label, records, according to connectivity, the target attribute values of regions that received different target attributes, and marks the boundary attribute of pixels that lie on a target edge. Concretely: let the variable TTs count the number of distinct target labels issued during the first scan, and let lpImg01[i][j] denote the pixel value at coordinate (i, j); lpImg01[i][j]=0 means the point (i, j) is part of the background and lpImg01[i][j]=1 means it belongs to a target region. Object[i][j] denotes which target the pixel at (i, j) belongs to; Edge[i][j] whether the pixel at (i, j) is a boundary point of its target; Equ[i][j]=1 that pixels with target attributes i and j belong to the same target; and Access[i] that pixels whose target attribute is i after the first scan actually belong to target number Access[i]. To label the currently scanned pixel, the 8-neighbourhood pixel labeling method checks its connectivity with the four neighbouring pixels scanned before it, as shown in Figure 2, where t denotes the currently scanned pixel and T1–T8 its eight neighbours. The scan proceeds from left to right and from top to bottom; when the scan reaches t, the points at positions T1–T4 have already been scanned, so the relationship between t and these four points must be considered. When the grey level of t is 1, i.e. t is a target pixel:
(1) When the grey values at T1–T4 are all zero, the point has no connectivity with the four previously scanned neighbouring pixels, so the current pixel is given a new label: TTs is incremented by 1 and assigned to Object[i][j] of the current pixel t, i.e. Object[i][j]=TTs;
(2) When exactly one of the grey values at T1–T4 is 1, let Tm (m=1, 2, 3, 4, Tm non-zero) be that pixel; the Object value of Tm is assigned to the Object value of t, meaning the current point has the same target attribute as pixel Tm and belongs to the same target as Tm;
(3) When more than one of the grey values at T1–T4 is 1, the smallest Object value among the non-zero Tm (m=1, 2, 3, 4), denoted Object_min, is assigned to the Object value of the current point t (here Object_min denotes the smallest and Object_max the largest Object value among the Tm). At the same time, Equ[Object_min][Object_max]=1 is set in the equivalence matrix (with Object_min<Object_max), indicating that pixels whose Object values after the first scan are Object_min and Object_max actually belong to the same target.
At the same time, for the boundary attribute of a pixel, let the currently scanned pixel be t. If at least one of Tm (m=1–8) has grey value zero, then Edge[i][j] of t is set to 1, marking it as a boundary point; otherwise Edge[i][j]=0, marking it as a non-boundary point. Thus, after the first scan, the single-pixel-wide edge information of every target has been obtained.
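Putting the three labeling cases and the boundary test together, the body of the first scan might be sketched in C as follows; is_boundary_point is the 8-neighbour test sketched earlier, and the treatment of pixels outside the image border (label 0, background) is again an assumption:

```c
#include <stdbool.h>

/* 8-neighbour background test from the earlier sketch. */
bool is_boundary_point(int H, int W, const unsigned char img[H][W], int i, int j);

/* First scan: give every target pixel a provisional label, record
 * equivalences between labels that meet, and mark boundary points.
 * Returns TTs, the number of provisional labels handed out. */
int first_scan(int H, int W, int MAXL, const unsigned char img[H][W],
               int Object[H][W], unsigned char Edge[H][W],
               unsigned char Equ[MAXL][MAXL])
{
    int TTs = 0;
    for (int i = 0; i < H; i++) {
        for (int j = 0; j < W; j++) {
            if (img[i][j] == 0) {                     /* background pixel       */
                Object[i][j] = 0;
                Edge[i][j] = 0;
                continue;
            }
            /* labels of the already-scanned neighbours T1..T4:
             * left, upper-left, upper, upper-right (0 outside the image)      */
            int nb[4] = { 0, 0, 0, 0 };
            if (j > 0)              nb[0] = Object[i][j - 1];
            if (i > 0 && j > 0)     nb[1] = Object[i - 1][j - 1];
            if (i > 0)              nb[2] = Object[i - 1][j];
            if (i > 0 && j < W - 1) nb[3] = Object[i - 1][j + 1];

            int object_min = 0, object_max = 0;
            for (int k = 0; k < 4; k++) {
                if (nb[k] == 0) continue;
                if (object_min == 0 || nb[k] < object_min) object_min = nb[k];
                if (nb[k] > object_max)                    object_max = nb[k];
            }
            if (object_min == 0) {
                Object[i][j] = ++TTs;                 /* case (1): new label    */
            } else {
                Object[i][j] = object_min;            /* cases (2) and (3)      */
                if (object_min != object_max)
                    Equ[object_min][object_max] = 1;  /* record the equivalence */
            }
            Edge[i][j] = is_boundary_point(H, W, img, i, j) ? 1 : 0;
        }
    }
    return TTs;
}
```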
(4) Organize the discrete equivalence matrix obtained from the first scan into the planned equivalence array;
Planning the equivalence array is the core of the invention. Its function is to organize the discrete Equ matrix obtained from the first scan and group all target attributes of the same target region into one class. With the real-time performance and simplicity of the second scan in mind, after all target attributes of a target have been grouped into one class, the Equ matrix must guarantee that every one of these target attributes is equivalent to the smallest of them.
The equivalence array is planned as follows: let all target attributes belonging to the same target be denoted TT1, TT2, ..., TTn with TT1<TT2<...<TTn. The equivalence array then falls into one of three cases:
(1) All target attributes already satisfy Equ[TT1][TTi]=1, i=2, 3, ..., n; no further processing is needed;
(2) Equ[TT1][TTi]=1 and Equ[TTi][TTj]=1 hold; then Equ[TT1][TTj]=1 must be set;
(3) Equ[TT1][TTj]=1 and Equ[TTi][TTj]=1 hold; then Equ[TT1][TTi]=1 must be set.
After this planning of the equivalence array, all pixels belonging to the same target have been grouped into one target, which fully prepares the second scan. Every label belonging to a given target now has an equivalence relation with the smallest label of that target.
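One simple way to obtain the required property — every label equivalent to the smallest label of its target — is a fixpoint (transitive-closure) pass over the Equ matrix, sketched below. The patent describes the goal rather than this particular loop structure, which favours clarity over speed:

```c
/* Flatten the equivalence matrix: after this pass, for every set of labels
 * that belongs to one target, the smallest label TT1 of the set satisfies
 * Equ[TT1][TTi] = 1 for every other label TTi of the set.
 * TTs is the number of labels handed out by the first scan. */
void resolve_equivalences(int MAXL, unsigned char Equ[MAXL][MAXL], int TTs)
{
    int changed = 1;
    while (changed) {                       /* iterate until no new pairs appear */
        changed = 0;
        for (int a = 1; a <= TTs; a++) {
            for (int b = a + 1; b <= TTs; b++) {
                if (!Equ[a][b])
                    continue;
                for (int c = 1; c <= TTs; c++) {
                    if (c == a || c == b)
                        continue;
                    /* a ~ b and b ~ c together imply a ~ c                      */
                    int bc = (c > b) ? Equ[b][c] : Equ[c][b];
                    if (!bc)
                        continue;
                    int lo = (a < c) ? a : c;
                    int hi = (a < c) ? c : a;
                    if (!Equ[lo][hi]) {
                        Equ[lo][hi] = 1;
                        changed = 1;
                    }
                }
            }
        }
    }
}
```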
(5) Perform the second scan;
The second scan is completed in two steps. Step 1: according to the processed equivalence array Equ, all pixels belonging to the same target are given the same target attribute value Object, namely the smallest label belonging to that target after the first scan; at the same time it is recorded which target (counting from top to bottom and from left to right in the image) this smallest label points to. The algorithm of step 1 is shown in Figure 3. Let the integer variable ObjectNumber denote the actual number of targets in the image, and initialize the Access[] array to all zeros. The key of step 1 is to distinguish which pixel labels mark genuinely new targets and which are merely one of several target attributes of the same target. Since the scan proceeds from left to right and from top to bottom, a target attribute with no equivalent attribute smaller than itself necessarily marks the appearance of a genuinely new target; call such an attribute a new-target starting attribute. It therefore suffices to search the planned equivalence matrix for an equivalent attribute smaller than the current pixel's target attribute to decide whether that attribute is a new-target starting attribute. Furthermore, the first time the scan meets a pixel carrying a new-target starting attribute, that pixel is necessarily the first pixel of the new target in scan order, so the actual target count ObjectNumber is incremented by 1 and assigned to the current pixel's target attribute, i.e. Access[Object[i][j]]=ObjectNumber. After step 1 the image contains ObjectNumber targets, each with a single target label, namely the minimum of all the labels that originally belonged to it; the Access array records which target each of these labels actually corresponds to.
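A C sketch of step 1, under the assumption that the Equ matrix has already been flattened as above; a label is a new-target starting attribute exactly when no smaller equivalent label exists:

```c
/* Second scan, step 1: replace every provisional label by the smallest label
 * equivalent to it, and number the real targets in scan order through the
 * Access[] array (assumed to be all zero on entry).
 * Returns ObjectNumber, the actual number of targets in the image. */
int second_scan_step1(int H, int W, int MAXL, int Object[H][W],
                      const unsigned char Equ[MAXL][MAXL], int Access[MAXL])
{
    int ObjectNumber = 0;
    for (int i = 0; i < H; i++) {
        for (int j = 0; j < W; j++) {
            int lab = Object[i][j];
            if (lab == 0)
                continue;                       /* background pixel            */

            /* smallest label equivalent to lab; lab itself if none is smaller */
            int min_lab = lab;
            for (int k = 1; k < lab; k++) {
                if (Equ[k][lab]) { min_lab = k; break; }
            }
            Object[i][j] = min_lab;

            /* a label with no smaller equivalent is a new-target starting
             * attribute; its first pixel in scan order opens a new target     */
            if (min_lab == lab && Access[lab] == 0)
                Access[lab] = ++ObjectNumber;
        }
    }
    return ObjectNumber;
}
```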
Step 2 finishes the work: using the Access array, the target attribute Object of every pixel is finally replaced by the true target number. That is, for the target attribute value Object of every pixel, if Access[Object[i][j]]>0 then Object[i][j]=Access[Object[i][j]]; otherwise Object keeps its original value.
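Step 2 is then a single relabelling pass, sketched here:

```c
/* Second scan, step 2: map the remaining minimum labels onto the final target
 * numbers recorded in Access[]; labels without an entry keep their value. */
void second_scan_step2(int H, int W, int MAXL,
                       int Object[H][W], const int Access[MAXL])
{
    for (int i = 0; i < H; i++)
        for (int j = 0; j < W; j++)
            if (Object[i][j] > 0 && Access[Object[i][j]] > 0)
                Object[i][j] = Access[Object[i][j]];
}
```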
(6) Finally, target information such as the area, perimeter and position coordinates of the desired targets is obtained.
Let Perimeter[m] denote the perimeter of the m-th target, Area[m] its area, Posx[m] its x position coordinate in the whole image and Posy[m] its y position coordinate in the whole image. From the relationships between Object[i][j], the total label count TTs and Edge[i][j], the target characteristic parameters of each target, such as its actual area, perimeter and centroid coordinates, are computed as follows. Over the whole image, for the pixels whose target attribute is TTi (where TTi is greater than zero and less than or equal to TTs): count the pixels whose target attribute is TTi and store the total in Area[TTi], giving the area of the current target; count the single-pixel-wide boundary points with Edge[i][j]=1 and store the total in Perimeter[TTi], giving the perimeter of the current target; accumulate the row index i and the column index j of the pixels whose target attribute is TTi into the temporary variables SPosx[TTi] and SPosy[TTi] respectively, then divide SPosx[TTi] by Area[TTi] and store the result in Posx[TTi], the position of the current target in the row direction, and divide SPosy[TTi] by Area[TTi] and store the result in Posy[TTi], the position of the current target in the column direction. Finally, the centroid coordinates of the current target in the whole image are (Posx[TTi], Posy[TTi]).
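The statistics can be gathered in one pass over the labelled image. The sketch below indexes the result arrays by the final target number produced by the second scan (the text above indexes them by the first-scan label TTi), and assumes every numbered target has at least one pixel:

```c
/* Accumulate area, perimeter and centroid for targets 1..ObjectNumber from
 * the final labels in Object[][] and the boundary flags in Edge[][]. */
void measure_targets(int H, int W, int MAXL,
                     const int Object[H][W], const unsigned char Edge[H][W],
                     int ObjectNumber, int Area[MAXL], int Perimeter[MAXL],
                     double Posx[MAXL], double Posy[MAXL])
{
    double SPosx[MAXL], SPosy[MAXL];               /* row / column index sums  */
    for (int m = 0; m <= ObjectNumber; m++) {
        Area[m] = 0;
        Perimeter[m] = 0;
        SPosx[m] = 0.0;
        SPosy[m] = 0.0;
    }
    for (int i = 0; i < H; i++) {
        for (int j = 0; j < W; j++) {
            int m = Object[i][j];
            if (m == 0)
                continue;                          /* background pixel         */
            Area[m]      += 1;                     /* pixel count -> area      */
            Perimeter[m] += Edge[i][j];            /* single-pixel-wide border */
            SPosx[m]     += i;                     /* row-direction count      */
            SPosy[m]     += j;                     /* column-direction count   */
        }
    }
    for (int m = 1; m <= ObjectNumber; m++) {      /* centroid = mean position */
        Posx[m] = SPosx[m] / Area[m];
        Posy[m] = SPosy[m] / Area[m];
    }
}
```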
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNA2008101017042A CN101246554A (en) | 2008-03-11 | 2008-03-11 | Multi-object Image Segmentation Method Based on Pixel Labeling |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNA2008101017042A CN101246554A (en) | 2008-03-11 | 2008-03-11 | Multi-object Image Segmentation Method Based on Pixel Labeling |
Publications (1)
Publication Number | Publication Date |
---|---|
CN101246554A true CN101246554A (en) | 2008-08-20 |
Family
ID=39946993
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNA2008101017042A Pending CN101246554A (en) | | | |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101246554A (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102376094A (en) * | 2010-08-17 | 2012-03-14 | 上海宝康电子控制工程有限公司 | Fast image marking method for video detection |
CN102376094B (en) * | 2010-08-17 | 2016-03-09 | 上海宝康电子控制工程有限公司 | For the fast image marking method that video detects |
CN103218808A (en) * | 2013-03-26 | 2013-07-24 | 中山大学 | Method for tracking binary image profile, and device thereof |
CN103400125B (en) * | 2013-07-08 | 2017-02-01 | 西安交通大学 | Double-scanning double-labeling method for image connected domain |
CN103400125A (en) * | 2013-07-08 | 2013-11-20 | 西安交通大学 | Double-scanning double-labeling method for image connected domain |
CN104318543A (en) * | 2014-01-27 | 2015-01-28 | 郑州大学 | Board metering method and device based on image processing method |
CN105006002B (en) * | 2015-08-31 | 2018-11-13 | 北京华拓金融服务外包有限公司 | Automated graphics scratch drawing method and device |
CN105006002A (en) * | 2015-08-31 | 2015-10-28 | 北京华拓金融服务外包有限公司 | Automatic picture matting method and apparatus |
CN105635583A (en) * | 2016-01-27 | 2016-06-01 | 宇龙计算机通信科技(深圳)有限公司 | Shooting method and device |
CN107424155A (en) * | 2017-04-17 | 2017-12-01 | 河海大学 | A kind of focusing dividing method towards light field refocusing image |
CN107424155B (en) * | 2017-04-17 | 2020-04-21 | 河海大学 | A focus segmentation method for light field refocusing images |
CN112446918A (en) * | 2019-09-04 | 2021-03-05 | 三赢科技(深圳)有限公司 | Method and device for positioning target object in image, computer device and storage medium |
CN113297893A (en) * | 2021-02-05 | 2021-08-24 | 深圳高通半导体有限公司 | Method for extracting stroke contour point set |
CN113297893B (en) * | 2021-02-05 | 2024-06-11 | 深圳高通半导体有限公司 | Method for extracting stroke outline point set |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101246554A (en) | Multi-object Image Segmentation Method Based on Pixel Labeling | |
CN105205488B (en) | Word area detection method based on Harris angle points and stroke width | |
CN110120042B (en) | A Method of Extracting Disease and Pest Areas of Crop Images Based on SLIC Superpixels and Automatic Threshold Segmentation | |
CN102999886B (en) | Image Edge Detector and scale grating grid precision detection system | |
CN101408937B (en) | Character line positioning method and device | |
CN105825169B (en) | A Pavement Crack Recognition Method Based on Road Image | |
CN110569774B (en) | An Automatic Digitization Method of Line Chart Image Based on Image Processing and Pattern Recognition | |
CN106169080A (en) | A kind of combustion gas index automatic identifying method based on image | |
CN101593277A (en) | A method and device for automatic positioning of text regions in complex color images | |
CN112215790A (en) | KI67 index analysis method based on deep learning | |
CN104680531B (en) | A kind of connection amount statistical information extracting method and VLSI structure | |
CN113887378A (en) | Digital pathological image detection method and system for cervix liquid-based cells | |
CN117788790A (en) | Material installation detection method, system, equipment and medium for general scene | |
CN110473174A (en) | A method of pencil exact number is calculated based on image | |
CN110443811B (en) | A fully automatic segmentation method for leaf images with complex background | |
CN113096099A (en) | Permeable asphalt mixture communication gap identification method based on color channel combination | |
CN115797344B (en) | Machine room equipment identification management method based on image enhancement | |
CN104504385B (en) | The recognition methods of hand-written adhesion numeric string | |
CN110458042A (en) | A kind of number of probes detection method in fluorescence CTC | |
CN110473250A (en) | Accelerate the method for Blob analysis in a kind of processing of machine vision | |
CN102073868A (en) | Digital image closed contour chain-based image area identification method | |
CN113780168B (en) | Automatic extraction method for hyperspectral remote sensing image end member beam | |
CN106295642B (en) | License plate positioning method based on fault tolerance rate and texture features | |
Deidda et al. | An automatic system for rainfall signal recognition from tipping bucket gage strip charts | |
CN109919863B (en) | Full-automatic colony counter, system and colony counting method thereof |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | C06 | Publication | |
| | PB01 | Publication | |
| | C10 | Entry into substantive examination | |
| | SE01 | Entry into force of request for substantive examination | |
| | C02 | Deemed withdrawal of patent application after publication (patent law 2001) | |
| | WD01 | Invention patent application deemed withdrawn after publication | Open date: 20080820 |