
CN101246554A - Multi-object Image Segmentation Method Based on Pixel Labeling - Google Patents

Multi-object Image Segmentation Method Based on Pixel Labeling

Info

Publication number
CN101246554A
CN101246554A CNA2008101017042A CN200810101704A
Authority
CN
China
Prior art keywords
target
pixel
scanning
image
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2008101017042A
Other languages
Chinese (zh)
Inventor
陈忠碧
张启衡
彭先蓉
蔡敬菊
徐智勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Optics and Electronics of CAS
Original Assignee
Institute of Optics and Electronics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Optics and Electronics of CAS filed Critical Institute of Optics and Electronics of CAS
Priority to CNA2008101017042A priority Critical patent/CN101246554A/en
Publication of CN101246554A publication Critical patent/CN101246554A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a multi-target image segmentation method based on pixel labeling. The specific steps are as follows: take the image captured by a video imaging sensor as the input signal; binarize the image; perform a first scan of the processed image; build the equivalence array; perform a second scan to obtain information such as the number of targets and their areas, perimeters and position coordinates. The invention can perform target labeling and edge labeling on a binary image over the whole field of view, has a degree of generality, and provides sufficient conditions for subsequent high-level processes such as computing target feature quantities and image understanding. Moreover, the invention is not limited by the number of targets and, depending on the available hardware resources, the processing area can be freely reduced or enlarged, giving the method great flexibility.


Description

Multi-object Image Segmentation Method Based on Pixel Labeling

Technical Field

The invention belongs to the field of photoelectric detection and tracking measurement, and in particular relates to a description method based on multi-target segmentation, used for multi-target labeling and feature extraction in imaging video tracking.

Background Art

Image segmentation is an important part of image processing: it extracts the targets (also called the foreground) that the user cares about and provides the data needed for subsequent image understanding. The main approaches to image segmentation are edge-based methods, region-based methods, and hybrid methods that combine the two.

One purpose of image segmentation is to lay the foundation for automatic target recognition, so extracting target feature quantities is of vital importance to the system. This requires obtaining, together with the segmentation result, parameters such as each target's area, perimeter, centroid coordinates, and even the position coordinates of all of its pixels. Two methods are commonly used for this: run-length connectivity analysis and pixel labeling. Run-length connectivity analysis labels targets by analyzing the connectivity of runs obtained from consecutive scan lines; it requires the image to be processed beforehand into blocks of pixels with the same gray level, i.e. the runs must already exist before scanning. Pixel labeling, by contrast, scans the binary image without any prior information and decides which target each pixel belongs to from the connectivity between pixels, so that in the end every pixel carries a label identifying its target. The area, centroid coordinates and other features of each target can then be obtained by accumulating statistics over its pixels.

Summary of the Invention

Technical problem to be solved: the present invention extends the traditional pixel labeling method so that every pixel carries not only a label indicating which target it belongs to, but also a flag indicating whether it is a boundary point of that target. By counting these boundary points, the perimeter of the target's single-pixel-wide boundary is obtained directly, avoiding the inaccurate perimeters that result from counting a multi-pixel-wide boundary after edge detection. Given the importance of perimeter in target measurement (e.g. circularity), this improvement is meaningful.

The technical solution adopted by the present invention is a multi-target image segmentation method based on pixel labeling, characterized by the following steps:

(1) Take the image captured by the video imaging sensor as the input signal (denoted lpraw);

(2) Obtain a binary image (denoted lpImg01) from the captured image by threshold segmentation;

Let lpImg01[i][j] denote the value of the pixel at coordinates (i, j): lpImg01[i][j]=0 means point (i, j) is part of the background, and lpImg01[i][j]=1 means point (i, j) is part of a target region. Further, Perimeter[m] denotes the perimeter of the m-th target; Area[m] its area; Posx[m] and Posy[m] its x and y position coordinates in the whole image; Object[i][j] records which target the pixel at (i, j) belongs to; Edge[i][j] records whether the pixel at (i, j) is a boundary point of a target; Equ[i][j]=1 indicates that pixels with target attributes i and j belong to the same target; and Access[i] records which actual target the pixels whose target attribute after the first scan is i belong to;
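To make these definitions concrete, the following sketch shows one possible in-memory layout of the arrays named above. Python with NumPy is used purely for illustration; the frame size and the cap on provisional labels are assumptions not fixed by the text.

```python
import numpy as np

H, W = 480, 640            # illustrative frame size; not specified by the text
MAX_LABELS = 1024          # illustrative cap on provisional labels

lpImg01 = np.zeros((H, W), dtype=np.uint8)   # binary image: 0 background, 1 target
Object  = np.zeros((H, W), dtype=np.int32)   # Object[i][j]: which target pixel (i, j) belongs to
Edge    = np.zeros((H, W), dtype=np.uint8)   # Edge[i][j]: 1 if pixel (i, j) is a boundary point

Equ    = np.zeros((MAX_LABELS, MAX_LABELS), dtype=np.uint8)  # Equ[i][j]=1: labels i, j are one target
Access = np.zeros(MAX_LABELS, dtype=np.int32)                # provisional label -> actual target index

Perimeter = np.zeros(MAX_LABELS, dtype=np.int32)    # Perimeter[m]: perimeter of target m
Area      = np.zeros(MAX_LABELS, dtype=np.int32)    # Area[m]: area of target m
Posx      = np.zeros(MAX_LABELS, dtype=np.float64)  # Posx[m]: x (row) position of target m
Posy      = np.zeros(MAX_LABELS, dtype=np.float64)  # Posy[m]: y (column) position of target m
```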

(3) Perform the first scan of the resulting binarized image;

The scan proceeds from left to right and from top to bottom. Let the variable TTs count the number of distinct target labels issued during the first scan. When the scan reaches pixel t, the points at positions T1~T4 have already been scanned, so the relationship between t and these four points must be considered. When the gray level of t is 1, i.e. t is a target pixel, the connectivity between the current scan point and its 8-neighborhood is handled as follows (a code sketch of this pass is given after these rules):

(1) If the gray values at positions T1~T4 are all zero, the point has no connectivity with the four previously scanned neighbors; the current pixel is given a new label: TTs is incremented by 1 and assigned to Object[i][j] of the current pixel t, i.e. Object[i][j]=TTs;

(2) If exactly one of the gray values at positions T1~T4 is 1, let Tm (m=1, 2, 3, 4 and Tm non-zero) be that pixel; the Object value of Tm is assigned to the Object value of t, meaning the current point has the same target attribute as pixel Tm and belongs to the same target as Tm;

(3) If more than one of the gray values at positions T1~T4 is 1, the smallest Object value among the Tm (m=1, 2, 3, 4 and Tm non-zero), denoted Object_min, is assigned to the Object value of the current point t (with Object_max denoting the largest Object value among the Tm). At the same time, Equ[Object_min][Object_max]=1 is set in the equivalence matrix (where Object_min<Object_max), indicating that pixels labeled Object_min and Object_max in the first scan actually belong to the same target.

At the same time, for the boundary attribute of the current pixel t: whenever at least one of Tm (m=1~8) has gray value zero, set Edge[i][j]=1, i.e. mark the pixel at coordinates (i, j) as a boundary point; otherwise set Edge[i][j]=0 for a non-boundary point. Thus, after the first scan, the single-pixel-wide edge information of each target has been obtained.
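A minimal sketch of this first pass, assuming the arrays defined in step (2); pixels outside the frame are treated as background, a simplification the text does not spell out:

```python
def first_scan(lpImg01, Object, Edge, Equ):
    """First pass (sketch): provisional labels, equivalences and edge flags."""
    H, W = lpImg01.shape
    TTs = 0  # number of distinct provisional labels issued so far
    for i in range(H):
        for j in range(W):
            if lpImg01[i, j] != 1:
                continue
            # T1..T4: already-scanned neighbours (left, upper-left, up, upper-right)
            prev = [(i, j - 1), (i - 1, j - 1), (i - 1, j), (i - 1, j + 1)]
            labels = [Object[r, c] for r, c in prev
                      if 0 <= r < H and 0 <= c < W and lpImg01[r, c] == 1]
            if not labels:                       # case (1): no connected predecessor
                TTs += 1
                Object[i, j] = TTs
            else:                                # cases (2) and (3)
                Object[i, j] = min(labels)
                if min(labels) != max(labels):   # record Equ[Object_min][Object_max] = 1
                    Equ[min(labels), max(labels)] = 1
            # boundary attribute: any background (or out-of-frame) pixel in the
            # full 8-neighbourhood makes the current pixel an edge point
            nbrs8 = [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                     if (di, dj) != (0, 0)]
            Edge[i, j] = int(any(not (0 <= r < H and 0 <= c < W) or lpImg01[r, c] == 0
                                 for r, c in nbrs8))
    return TTs
```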

(4) Organize the discrete equivalence matrix obtained from the first scan into a planned equivalence array;

After the first scan, let all target attributes belonging to the same target be labeled TT1, TT2, ..., TTn, where TT1<TT2<...<TTn. The equivalence array then falls into one of three cases (a code sketch of the planning step follows these cases):

(1) All target attributes already satisfy Equ[TT1][TTi]=1 for i=2, 3, ..., n; no further processing is needed;

(2) Equ[TT1][TTi]=1 and Equ[TTi][TTj]=1 hold; then Equ[TT1][TTj]=1 must be set;

(3) Equ[TT1][TTj]=1 and Equ[TTi][TTj]=1 hold; then Equ[TT1][TTi]=1 must be set;

After this processing the expected requirement is fully met and the second scan is fully prepared: every label belonging to a given target now stands in an explicit equivalence relation with the smallest label of that target.
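One way to reach the same end state as the three rules above is a small union-find pass; the sketch below is an equivalent formulation for illustration, not the literal procedure of the patent:

```python
def plan_equivalences(Equ, TTs):
    """Collapse chained equivalences so that every provisional label is tied to
    the smallest label of its target (union-find sketch)."""
    parent = list(range(TTs + 1))

    def find(x):                       # representative of x, with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):                   # always keep the smaller representative
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    for lo in range(1, TTs + 1):
        for hi in range(lo + 1, TTs + 1):
            if Equ[lo, hi] == 1:
                union(lo, hi)

    # make every label directly equivalent to the smallest label of its target
    for k in range(1, TTs + 1):
        root = find(k)
        if root != k:
            Equ[root, k] = 1
    return [find(k) for k in range(TTs + 1)]   # smallest equivalent label per label
```

Because the representative of each set is always its smallest label, rewriting Equ from the roots reproduces exactly the condition the three rules aim for: every label directly equivalent to the minimum label of its target.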

(5) Then perform the second scan and obtain target information such as the area, perimeter and position coordinates of each target;

After the first scan and the planning of the equivalence array, the second scan is performed. This process is completed in two steps. Step 1: according to the planned equivalence array Equ, all pixels belonging to the same target are given the same target attribute value Object, namely the smallest label that belonged to this target after the first scan; at the same time, it is recorded which target (counting from top to bottom and left to right in the image) this smallest label points to. Step 2 does the final clean-up: using the Access array, the target attribute Object of each pixel is finally assigned the true target index, i.e. for the target attribute value Object of every pixel, if Access[Object[i][j]]>0 then Object[i][j]=Access[Object[i][j]]; otherwise Object keeps its original value. At the same time, from the relationship between Object[i][j] and Edge[i][j], the actual area, perimeter, centroid coordinates and other characteristic parameters of each target are computed, and the target edge contours are outlined.
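An outline of the two steps, assuming the `smallest` label mapping produced by the planning sketch above; again an illustration rather than the patented flow chart of Fig. 3:

```python
def second_scan(lpImg01, Object, Access, smallest):
    """Second pass (sketch). Step 1: give every pixel its smallest equivalent
    label and number the targets in scan order via Access. Step 2: rewrite
    every pixel's label to its final target index."""
    H, W = lpImg01.shape
    ObjectNumber = 0                         # actual number of targets found
    for i in range(H):                       # step 1
        for j in range(W):
            if lpImg01[i, j] != 1:
                continue
            Object[i, j] = smallest[Object[i, j]]
            label = int(Object[i, j])
            if Access[label] == 0:           # first pixel of a genuinely new target
                ObjectNumber += 1
                Access[label] = ObjectNumber
    for i in range(H):                       # step 2: final clean-up
        for j in range(W):
            if lpImg01[i, j] == 1 and Access[Object[i, j]] > 0:
                Object[i, j] = Access[Object[i, j]]
    return ObjectNumber
```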

Compared with the prior art, the present invention has the following advantages:

1. Using the 8-neighborhood pixel labeling method, the single-pixel-wide edge information of each target is obtained, and with it the perimeter of the target's single-pixel-wide boundary; this avoids the inaccurate perimeter obtained by counting a multi-pixel-wide boundary after edge detection;

2. The invention accurately provides information such as target area and position coordinates, laying a foundation for further analysis and processing of the image. Target feature extraction is accurate, and the method is simple, real-time and practical;

3. The number of targets counted by the invention can be arbitrarily large, limited only by the hardware resources of the real-time system.

Brief Description of the Drawings

Fig. 1 is a flow chart of the pixel labeling algorithm of the present invention;

Fig. 2 shows the scanning relationship of the 8-neighborhood pixel labeling used in the present invention;

In the figure, t denotes the currently scanned pixel, and T1~T8 denote the neighbors of t in its 8-connected neighborhood;

Fig. 3 is a flow chart of step 1 of the second scan of the present invention.

Detailed Description of the Embodiments

The present invention is described in detail below with reference to the accompanying drawings and a specific embodiment.

The invention labels multiple targets and, while obtaining single-pixel-wide target boundaries, extracts characteristic quantities such as target perimeter, area and position coordinates.

The flow chart of the method is shown in Fig. 1; the specific implementation steps are as follows:

(1) First, take the image captured by the video imaging sensor as the input signal;

(2) Binarize the original input video signal to produce a binary image represented by '0' and '1';
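A minimal thresholding sketch for this step; the fixed threshold of 128 is an illustrative assumption, since the text does not prescribe how the threshold is chosen:

```python
import numpy as np

def binarize(lpraw, threshold=128):
    """Threshold the raw grey-level frame lpraw into the 0/1 image lpImg01 (sketch)."""
    return (lpraw >= threshold).astype(np.uint8)
```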

(3) Then perform the first scan of the binarized image from left to right and top to bottom (the origin of the image is taken to be its upper-left corner);

The first scan labels each pixel of the binarized image for the first time, records the target attribute values of regions with different target attributes according to connectivity, and marks the boundary attribute of pixels on target edges. Concretely: let the variable TTs count the number of distinct target labels issued during the first scan, and let lpImg01[i][j] denote the value of the pixel at coordinates (i, j); lpImg01[i][j]=0 means point (i, j) is part of the background, and lpImg01[i][j]=1 means point (i, j) is part of a target region. As before, Object[i][j] records which target the pixel at (i, j) belongs to; Edge[i][j] records whether that pixel is a boundary point of a target; Equ[i][j]=1 indicates that pixels with target attributes i and j belong to the same target; and Access[i] records which actual target the pixels whose target attribute after the first scan is i belong to. To label the currently scanned pixel with the 8-neighborhood pixel labeling method, its connectivity to the four previously scanned neighbors must be checked, as shown in Fig. 2, where t is the currently scanned pixel and T1~T8 are the neighbors of t in its 8-connected neighborhood. The scan proceeds from left to right and top to bottom; when the scan reaches t, the points at positions T1~T4 have already been scanned, so the relationship between t and these four points must be considered. When the gray level of t is 1, i.e. t is a target pixel:

(1) If the gray values at positions T1~T4 are all zero, the point has no connectivity with the four previously scanned neighbors; the current pixel is given a new label: TTs is incremented by 1 and assigned to Object[i][j] of the current pixel t, i.e. Object[i][j]=TTs;

(2) If exactly one of the gray values at positions T1~T4 is 1, let Tm (m=1, 2, 3, 4 and Tm non-zero) be that pixel; the Object value of Tm is assigned to the Object value of t, meaning the current point has the same target attribute as pixel Tm and belongs to the same target as Tm;

(3) If more than one of the gray values at positions T1~T4 is 1, the smallest Object value among the Tm (m=1, 2, 3, 4 and Tm non-zero), denoted Object_min, is assigned to the Object value of the current point t (here Object_min denotes the smallest and Object_max the largest Object value among the Tm). At the same time, Equ[Object_min][Object_max]=1 is set in the equivalence matrix (where Object_min<Object_max), indicating that pixels labeled Object_min and Object_max in the first scan actually belong to the same target.

At the same time, for the boundary attribute of the current pixel t: whenever at least one of Tm (m=1~8) has gray value zero, set Edge[i][j]=1, marking the point as a boundary point; otherwise set Edge[i][j]=0 for a non-boundary point. Thus, after the first scan, the single-pixel-wide edge information of each target has been obtained.

(4) Organize the discrete equivalence matrix obtained from the first scan into a planned equivalence array;

Planning the equivalence array is the core of the invention. Its function is to tidy up the discrete Equ matrix obtained after the first scan and to group all target attributes of the same target region into one class. With the real-time performance and simplicity of the second scan in mind, after the target attributes of each target have been grouped together, it must be ensured that all of these target attributes are recorded in the Equ matrix as equivalent to the smallest attribute among them.

The planning process is as follows: let all target attributes belonging to the same target be labeled TT1, TT2, ..., TTn, where TT1<TT2<...<TTn. The equivalence array then falls into one of three cases:

(1) All target attributes already satisfy Equ[TT1][TTi]=1 for i=2, 3, ..., n; no further processing is needed;

(2) Equ[TT1][TTi]=1 and Equ[TTi][TTj]=1 hold; then Equ[TT1][TTj]=1 must be set;

(3) Equ[TT1][TTj]=1 and Equ[TTi][TTj]=1 hold; then Equ[TT1][TTi]=1 must be set.

After this planning of the equivalence array, all pixels belonging to the same target have been grouped into one target, which fully prepares the second scan. At this point every label belonging to a given target stands in an explicit equivalence relation with the smallest label of that target.

(5) Perform the second scan;

The second scan is completed in two steps. Step 1: according to the planned equivalence array Equ, all pixels belonging to the same target are given the same target attribute value Object, namely the smallest label that belonged to this target after the first scan; at the same time, it is recorded which target (counting from top to bottom and left to right in the image) this smallest label points to. The algorithm of step 1 is shown in Fig. 3. Let the integer variable ObjectNumber denote the actual number of targets in the image, and initialize the Access[] array to all zeros. The key of step 1 is to distinguish which labels mark genuinely new targets and which are merely one of several attributes of the same target. Since the scan runs from left to right and top to bottom, a target attribute that has no smaller equivalent attribute necessarily marks the appearance of a truly new target; call such an attribute a new-target starting attribute. Hence, whether the target attribute of the current pixel is a new-target starting attribute can be determined simply by checking the planned equivalence matrix for an equivalent attribute smaller than it. Moreover, when the scan first meets a pixel carrying a new-target starting attribute, that pixel must be the first pixel of the new target in scan order; the actual target count ObjectNumber is then incremented by 1 and assigned to the current pixel's target attribute, i.e. Access[Object[i][j]]=ObjectNumber. After step 1 the image contains ObjectNumber targets, each with a single target identifier, namely the minimum of all identifiers that originally belonged to it, and the Access array records which actual target each of these identifiers corresponds to.

Step 2 does the final clean-up: using the Access array, the target attribute Object of each pixel is finally assigned the true target index. That is, for the target attribute value Object of every pixel, if Access[Object[i][j]]>0 then Object[i][j]=Access[Object[i][j]]; otherwise Object keeps its original value.

(6) Finally, obtain target information such as the area, perimeter and position coordinates of each target.

Let Perimeter[m] denote the perimeter of the m-th target, Area[m] its area, and Posx[m] and Posy[m] its x and y position coordinates in the whole image. From the relationships among Object[i][j], the label count TTs and Edge[i][j], the actual area, perimeter, centroid coordinates and other characteristic parameters of each target are computed as follows. Over the whole image, for the pixels whose target attribute is TTi (with TTi greater than zero and less than or equal to TTs): count the total number of pixels with target attribute TTi and store it in Area[TTi], giving the area of the current target; count the number of single-pixel-wide boundary points with Edge[i][j]=1 and store it in Perimeter[TTi], giving the perimeter of the current target; accumulate the row index i and the column index j of the pixels with target attribute TTi into the temporary variables SPosx[TTi] and SPosy[TTi] respectively; then divide SPosx[TTi] by Area[TTi] and store the result in Posx[TTi], the position of the current target in the row direction, and divide SPosy[TTi] by Area[TTi] and store the result in Posy[TTi], the position in the column direction. Finally, the centroid coordinates of the current target in the whole image are (Posx[TTi], Posy[TTi]).
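These statistics can be accumulated in a single pass over the labeled image. In the sketch below the accumulators are indexed by the final target number held in Object[i][j] after the second scan (the text's TTi indexing is analogous), and NumPy is again assumed:

```python
import numpy as np

def measure_targets(lpImg01, Object, Edge, ObjectNumber, Area, Perimeter, Posx, Posy):
    """Accumulate per-target area, perimeter and centroid coordinates (sketch).
    Assumes Area/Perimeter/Posx/Posy are zero-initialised and indexable up to ObjectNumber."""
    H, W = lpImg01.shape
    SPosx = np.zeros(ObjectNumber + 1)       # row-coordinate sums per target
    SPosy = np.zeros(ObjectNumber + 1)       # column-coordinate sums per target
    for i in range(H):
        for j in range(W):
            if lpImg01[i, j] != 1:
                continue
            m = Object[i, j]                 # final target index after the second scan
            Area[m] += 1
            if Edge[i, j] == 1:
                Perimeter[m] += 1            # length of the single-pixel-wide boundary
            SPosx[m] += i
            SPosy[m] += j
    for m in range(1, ObjectNumber + 1):
        if Area[m] > 0:
            Posx[m] = SPosx[m] / Area[m]     # centroid row coordinate
            Posy[m] = SPosy[m] / Area[m]     # centroid column coordinate
```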

Claims (6)

1. A multi-target image segmentation method based on pixel labeling, characterized by comprising the following steps:
(1) taking the image captured by a video imaging sensor, denoted lpraw, as the input signal;
(2) performing threshold segmentation to obtain a binary image; letting lpImg01[i][j] denote the value of the pixel at coordinates (i, j) of the binary image, lpImg01[i][j]=0 meaning point (i, j) is part of the background and lpImg01[i][j]=1 meaning point (i, j) is part of a target region;
(3) performing a first scan of the resulting binary image;
(4) planning the equivalence array, i.e. grouping all equivalent labels into equivalence classes;
(5) then performing a second scan;
(6) obtaining target information such as the required area, perimeter and position coordinates.
2. The multi-target image segmentation method based on pixel labeling according to claim 1, characterized in that: in step (3), the first scan of the resulting binary image proceeds from left to right and top to bottom, with the starting point of the scan at the upper-left corner of the image.
3. The multi-target image segmentation method based on pixel labeling according to claim 1, characterized in that: in step (3), the first scan of the resulting binary image uses the 8-neighborhood pixel labeling method, with the variable TTs counting the number of distinct target labels in the first scan; supposing the current scan point is t, when the gray level at position t is 1, i.e. the point at position t is a target pixel, the connectivity between the current scan point and its 8-neighborhood is handled as follows:
(1) when the gray values at positions T1~T4 are all zero, the point has no connectivity with the four previously scanned neighbors; the current pixel is given a new label, i.e. TTs is incremented by 1 and assigned to Object[i][j] of the current pixel t, i.e. Object[i][j]=TTs;
(2) when exactly one of the gray values at positions T1~T4 is 1, letting Tm (m=1, 2, 3, 4 and Tm non-zero) be that pixel, the Object value of Tm is assigned to the Object value of t, indicating that the target attribute of the current point is the same as that of pixel Tm and that it belongs to the same target as pixel Tm;
(3) when more than one of the gray values at positions T1~T4 is 1, the smallest Object value among the Tm (m=1, 2, 3, 4 and Tm non-zero), namely Object_min, is assigned to the Object value of the current point t, and Equ[Object_min][Object_max]=1 is set in the equivalence matrix, indicating that pixels whose Object values in the first scan are Object_min and Object_max actually belong to the same target; here Object_min denotes the smallest and Object_max the largest Object value among the Tm, with Object_min<Object_max.
4. The multi-target image segmentation method based on pixel labeling according to claim 1, characterized in that: during the first scan in step (3), for each current point with gray level 1, all 8 neighbors of its 8-connected neighborhood are examined as follows: if at least one of the 8 pixels is a background point, i.e. has gray level 0, the current point is a target boundary point; otherwise it is a target interior point; what is obtained after this processing is a single-pixel-wide edge.
5. The multi-target image segmentation method based on pixel labeling according to claim 1, characterized in that: planning the equivalence array in step (4) tidies up the discrete equivalence matrix obtained after the first scan and groups all target attributes of the same target region into one class; after the planning of the equivalence array, all pixels belonging to the same target have been grouped into one target.
6. The multi-target image segmentation method based on pixel labeling according to claim 1, characterized in that the second scan of step (5) is realized in two steps:
Step 1: according to the previously planned equivalence array, all pixels belonging to the same target are given the same target attribute value, namely the smallest label that belonged to this target after the first scan, while recording which target (counting from top to bottom and left to right in the image) this object is;
Step 2: the final clean-up: according to which actual target the pixels carrying each target attribute after the first scan belong to, the target attribute of each pixel is finally assigned the true target index; at the same time, from the target attributes and their relationship with the boundary attributes, the actual area, perimeter, centroid coordinates and other characteristic parameters of each target are computed, and the target edge contours are outlined.
CNA2008101017042A 2008-03-11 2008-03-11 Multi-object Image Segmentation Method Based on Pixel Labeling Pending CN101246554A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA2008101017042A CN101246554A (en) 2008-03-11 2008-03-11 Multi-object Image Segmentation Method Based on Pixel Labeling

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNA2008101017042A CN101246554A (en) 2008-03-11 2008-03-11 Multi-object Image Segmentation Method Based on Pixel Labeling

Publications (1)

Publication Number Publication Date
CN101246554A true CN101246554A (en) 2008-08-20

Family

ID=39946993

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2008101017042A Pending CN101246554A (en) 2008-03-11 2008-03-11 Multi-object Image Segmentation Method Based on Pixel Labeling

Country Status (1)

Country Link
CN (1) CN101246554A (en)


Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102376094A (en) * 2010-08-17 2012-03-14 上海宝康电子控制工程有限公司 Fast image marking method for video detection
CN102376094B (en) * 2010-08-17 2016-03-09 上海宝康电子控制工程有限公司 For the fast image marking method that video detects
CN103218808A (en) * 2013-03-26 2013-07-24 中山大学 Method for tracking binary image profile, and device thereof
CN103400125B (en) * 2013-07-08 2017-02-01 西安交通大学 Double-scanning double-labeling method for image connected domain
CN103400125A (en) * 2013-07-08 2013-11-20 西安交通大学 Double-scanning double-labeling method for image connected domain
CN104318543A (en) * 2014-01-27 2015-01-28 郑州大学 Board metering method and device based on image processing method
CN105006002B (en) * 2015-08-31 2018-11-13 北京华拓金融服务外包有限公司 Automated graphics scratch drawing method and device
CN105006002A (en) * 2015-08-31 2015-10-28 北京华拓金融服务外包有限公司 Automatic picture matting method and apparatus
CN105635583A (en) * 2016-01-27 2016-06-01 宇龙计算机通信科技(深圳)有限公司 Shooting method and device
CN107424155A (en) * 2017-04-17 2017-12-01 河海大学 A kind of focusing dividing method towards light field refocusing image
CN107424155B (en) * 2017-04-17 2020-04-21 河海大学 A focus segmentation method for light field refocusing images
CN112446918A (en) * 2019-09-04 2021-03-05 三赢科技(深圳)有限公司 Method and device for positioning target object in image, computer device and storage medium
CN113297893A (en) * 2021-02-05 2021-08-24 深圳高通半导体有限公司 Method for extracting stroke contour point set
CN113297893B (en) * 2021-02-05 2024-06-11 深圳高通半导体有限公司 Method for extracting stroke outline point set

Similar Documents

Publication Publication Date Title
CN101246554A (en) Multi-object Image Segmentation Method Based on Pixel Labeling
CN105205488B (en) Word area detection method based on Harris angle points and stroke width
CN110120042B (en) A Method of Extracting Disease and Pest Areas of Crop Images Based on SLIC Superpixels and Automatic Threshold Segmentation
CN102999886B (en) Image Edge Detector and scale grating grid precision detection system
CN101408937B (en) Character line positioning method and device
CN105825169B (en) A Pavement Crack Recognition Method Based on Road Image
CN110569774B (en) An Automatic Digitization Method of Line Chart Image Based on Image Processing and Pattern Recognition
CN106169080A (en) A kind of combustion gas index automatic identifying method based on image
CN101593277A (en) A method and device for automatic positioning of text regions in complex color images
CN112215790A (en) KI67 index analysis method based on deep learning
CN104680531B (en) A kind of connection amount statistical information extracting method and VLSI structure
CN113887378A (en) Digital pathological image detection method and system for cervix liquid-based cells
CN117788790A (en) Material installation detection method, system, equipment and medium for general scene
CN110473174A (en) A method of pencil exact number is calculated based on image
CN110443811B (en) A fully automatic segmentation method for leaf images with complex background
CN113096099A (en) Permeable asphalt mixture communication gap identification method based on color channel combination
CN115797344B (en) Machine room equipment identification management method based on image enhancement
CN104504385B (en) The recognition methods of hand-written adhesion numeric string
CN110458042A (en) A kind of number of probes detection method in fluorescence CTC
CN110473250A (en) Accelerate the method for Blob analysis in a kind of processing of machine vision
CN102073868A (en) Digital image closed contour chain-based image area identification method
CN113780168B (en) Automatic extraction method for hyperspectral remote sensing image end member beam
CN106295642B (en) License plate positioning method based on fault tolerance rate and texture features
Deidda et al. An automatic system for rainfall signal recognition from tipping bucket gage strip charts
CN109919863B (en) Full-automatic colony counter, system and colony counting method thereof

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Open date: 20080820