CN107886507A - A salient region detection method based on image background and spatial location - Google Patents
- Publication number: CN107886507A (application CN201711122796.8A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T 7/0002: Inspection of images, e.g. flaw detection
- G06F 18/22: Matching criteria, e.g. proximity measures
- G06F 18/23213: Non-hierarchical clustering techniques using statistics or function optimisation with a fixed number of clusters, e.g. K-means clustering
- G06F 18/25: Fusion techniques
- G06T 7/11: Region-based segmentation
- G06V 10/443: Local feature extraction by matching or filtering
- G06V 10/56: Extraction of image or video features relating to colour
- G06T 2207/20021: Dividing image into blocks, subimages or windows
Abstract
The present invention provides a salient region detection method based on image background and spatial location. The method mainly comprises the following steps: performing superpixel segmentation of the target image at M scales to obtain M layers of target sub-images; extracting superpixel block feature vectors, including the color and texture features of each superpixel block; clustering the background superpixel blocks; computing a saliency value for each superpixel block from its spatial location and its difference from the background superpixels; and fusing the multi-scale superpixel block saliency values. Advantages: the method can accurately identify the salient regions of an image, performs well, effectively improves detection accuracy and computational efficiency, and provides technical support for the screening and analysis of the massive image and video data of the Internet and cloud computing.
Description
Technical Field
The invention belongs to the technical field of image salient region detection, and in particular relates to a salient region detection method based on image background and spatial location.
Background Art
From the perspective of advancing the development of intelligent robots, salient region detection enables an intelligent robot to select, from the large volume of video data received at any one time, the portion most relevant to its current task for processing. This effectively simulates the selectivity and focus of human visual perception and lays a foundation for completing intelligent tasks. From the perspective of promoting intelligent applications in the vision field, applying salient region detection to the screening and analysis of the massive image or video data of the Internet and cloud computing can effectively improve detection accuracy and computational efficiency. Applied to reconnaissance aircraft and video surveillance, it can pre-mark key regions for algorithms such as target recognition and hotspot tracking, improving their computational efficiency. Applied to image or video transmission, the key regions of an image or video can be compressed selectively, improving transmission efficiency. In addition, salient region detection can be widely applied to path navigation, unmanned aerial vehicles, and other fields.
In recent years, many methods have been proposed for detecting salient regions or objects in images. To improve computational efficiency and ignore unnecessary image detail, most of these methods first extract perceptually homogeneous elements of the image, such as superpixels or regions (some methods operate directly on pixels), then compute their local contrast, global contrast, or sparse noise to obtain a saliency value for each element, and finally integrate these values to segment the whole salient object. Judging from recent research trends, global cues have received more attention than local contrast because they can assign contrasting saliency values across similar image regions.
The patent titled "A video salient region detection method and system", publication number CN 104424642 A, discloses a video salient region detection method and system. It separately obtains pixel-level static saliency features, local-region-level static and dynamic saliency features, and global-level static and dynamic saliency features; modulates these features using the correlation between video frames; uses a 3D-MRF on the modulated features to set the salient regions of the video frames; and then uses graph cuts to select and segment the optimal salient regions. This method applies complementary priors of salient region detection to improve performance, but when the border region of the image does not describe the background well, for example when the features along the border differ greatly, computing the background features from the whole border taken together yields inaccurate background features.
The patent titled "A detection method for salient regions" discloses a salient region detection method that defines the basic elements of the difference comparison as regions, putting them on the same order of magnitude as the final detection result and thereby improving the efficiency of salient region detection. However, that invention only applies local contrast such as color-space conversion and graph segmentation, and performs poorly when the image target is not distinct.
The patent titled "A deep-learning image salient region detection method" discloses a deep-learning method that combines the outputs of different network layers to obtain image features at different scales, yielding better detection performance, and uses image segmentation for superpixel threshold learning. However, the method is affected by the image categories (complex or simple background, single or multiple targets) and the size of its training set; it is prone to overfitting and may perform poorly when the image categories change.
It can be seen that the above image salient region detection methods all have certain limitations, resulting in low detection accuracy and overly complex detection algorithms.
Summary of the Invention
In view of the defects of the prior art, the present invention provides a salient region detection method based on image background and spatial location that effectively solves the above problems.
The technical scheme adopted by the present invention is as follows.
The present invention provides a salient region detection method based on image background and spatial location, comprising the following steps:
Step 1: perform superpixel segmentation of the target image at M scales, where M is the total number of scale layers, obtaining M layers of target sub-images; each layer of target sub-image is composed of multiple superpixel blocks.
Step 2: for each layer of target sub-image, perform the following steps 2.1 to 2.3:
Step 2.1: extract the feature vector of each superpixel block in the target sub-image, obtaining the superpixel block feature vectors.
Step 2.2: treat the border region of the target sub-image as the image background; superpixel blocks belonging to the image background are called background superpixel blocks.
Cluster the background superpixel blocks to obtain n clusters: the 1st cluster, the 2nd cluster, ..., the n-th cluster. The cluster-center feature vector of the 1st cluster is B_1, that of the 2nd cluster is B_2, and so on up to B_n for the n-th cluster; thus the set of cluster-center feature vectors is B = {B_1, B_2, ..., B_n}.
Step 2.3: for each superpixel block of the target sub-image, denoted superpixel block p, compute its saliency value s from the following quantities:
D(p, B_i), the distance between superpixel block p and the cluster-center feature vector B_i of the i-th cluster, i ∈ {1, 2, ..., n}, with σ a scale factor;
w, a weight measuring the distance between superpixel block p and the center of the current layer's target sub-image, where (x, y) are the coordinates of the center of p and (x', y') are the coordinates of the center of the sub-image.
In this way the saliency value of every superpixel block in every layer of target sub-image is obtained.
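The equations for s and w themselves are not reproduced above. One reading consistent with the stated definitions (an assumption, not the patent's verbatim formulas) takes s as the w-weighted average of the distances to the n background cluster centers, with w decaying with the distance of p from the sub-image center under the scale factor σ:

```latex
% Hypothetical reconstruction, assuming normalized center coordinates.
s(p) = w(p)\cdot\frac{1}{n}\sum_{i=1}^{n} D(p, B_i),
\qquad
w(p) = \exp\left(-\frac{(x-x')^2+(y-y')^2}{\sigma^2}\right)
```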
Step 3: fuse the multi-scale superpixel block saliency values to obtain the final saliency map, and detect the salient region on the saliency map. Specifically:
Step 3.1: compute the saliency value of each pixel j on the fused saliency map.
The saliency value s_j of pixel j is the average of the saliency values of the superpixel blocks containing j at all scales, where s_l denotes the saliency value of the superpixel block of the l-th layer target sub-image that contains pixel j.
Step 3.2: the saliency values of all pixels j form the image saliency map; on this map, the region exceeding a set threshold is the finally detected salient region.
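Written out, the fusion rule of step 3.1 is the plain average of the M per-scale saliency values of the blocks containing pixel j:

```latex
s_j = \frac{1}{M}\sum_{l=1}^{M} s_l
```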
Preferably, in step 1, the SLIC algorithm is used to perform the M-scale superpixel segmentation of the target image.
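Step 1 can be sketched as follows. SLIC itself is provided by scikit-image as `skimage.segmentation.slic`; to keep the sketch dependency-free, a plain rectangular-grid partition stands in for it here, so only the multi-scale structure of the step is illustrated (the scale values are hypothetical):

```python
import numpy as np

def grid_superpixels(image, n_segments):
    """Partition an H x W image into roughly n_segments rectangular
    blocks. A dependency-free stand-in for SLIC (available as
    skimage.segmentation.slic), which additionally clusters on color."""
    h, w = image.shape[:2]
    side = max(1, int(np.sqrt(h * w / n_segments)))  # target block side
    rows = np.arange(h) // side                      # block row index
    cols = np.arange(w) // side                      # block column index
    n_cols = cols.max() + 1
    return rows[:, None] * n_cols + cols[None, :]    # (h, w) label map

def multiscale_segmentation(image, scales=(100, 200, 300)):
    """Step 1: segment the image at M scales; one label map per layer."""
    return [grid_superpixels(image, n) for n in scales]

image = np.random.rand(120, 160, 3)   # hypothetical target image
layers = multiscale_segmentation(image)
```

Each element of `layers` is an integer label map of the same shape as the image; finer scales produce more blocks, matching the M-layer structure the patent describes.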
Preferably, in step 2.1, extracting the feature vector of each superpixel block in the target sub-image consists of extracting the color and texture features of each superpixel block; the feature vector of each superpixel block comprises the 3 components of the RGB mean, the 256 components of the RGB histogram, the 3 components of the HSV mean, the 256 components of the HSV histogram, the 3 components of the Lab mean, the 256 components of the Lab histogram, and the 48 components of the LM filter responses.
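A minimal sketch of the per-superpixel feature extraction, covering only the RGB portion (the HSV and Lab means/histograms and the 48 LM filter responses would be concatenated the same way); the 8-bin per-channel histogram is an illustrative simplification of the 256-component histograms named above:

```python
import numpy as np

def superpixel_features(image, labels, bins=8):
    """Per-superpixel feature vectors: RGB mean (3 components) plus a
    normalized per-channel RGB histogram. The patent's full vector also
    concatenates HSV and Lab means/histograms and 48 LM filter
    responses, built the same way; only the RGB part is sketched."""
    feats = {}
    for lab in np.unique(labels):
        pix = image[labels == lab]            # (n_pixels, 3) in [0, 1]
        mean = pix.mean(axis=0)               # RGB mean, 3 components
        hist = np.concatenate([
            np.histogram(pix[:, ch], bins=bins, range=(0, 1))[0]
            for ch in range(3)
        ]).astype(float)
        hist /= hist.sum()                    # normalize the histogram
        feats[lab] = np.concatenate([mean, hist])
    return feats

# Hypothetical 60 x 80 image split into 12 rectangular "superpixels".
image = np.random.rand(60, 80, 3)
labels = (np.arange(60)[:, None] // 20) * 4 + np.arange(80)[None, :] // 20
feats = superpixel_features(image, labels)
```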
Preferably, in step 2.2, the background superpixel blocks are clustered using an improved K-means clustering algorithm.
Preferably, clustering the background superpixel blocks with the improved K-means clustering algorithm comprises:
Step 2.2.1: set the initial number of clusters of the improved K-means clustering algorithm to z, i.e., the clustering starts from z clusters (the final number of clusters is at most z).
Step 2.2.2: use the K-means clustering algorithm to perform initial clustering, obtaining several initial clusters. During initial clustering, the distance between any two superpixel blocks is computed as follows.
For any two superpixel blocks in the target sub-image, denoted superpixel block u and superpixel block v:
Let the extracted features of superpixel block u be: RGB mean f_1^u, RGB histogram f_2^u, HSV mean f_3^u, HSV histogram f_4^u, Lab mean f_5^u, Lab histogram f_6^u, and LM filter response f_7^u.
Likewise, let the extracted features of superpixel block v be: RGB mean f_1^v, RGB histogram f_2^v, HSV mean f_3^v, HSV histogram f_4^v, Lab mean f_5^v, Lab histogram f_6^v, and LM filter response f_7^v.
The distance D(u, v) between superpixel block u and superpixel block v combines the per-feature distances, where N(·) denotes normalization.
The distance of the a-th feature between superpixel blocks u and v, where a = 1, 3, 5, 7 respectively denotes the RGB mean feature, the HSV mean feature, the Lab mean feature, and the LM filter response feature, is computed over the feature's dimensions: m is the total number of dimensions of the feature and e indexes them (the RGB, HSV, and Lab mean features each have dimension 3; the LM filter response feature has dimension 48); f_{a,e}^u is the e-th dimension component of the a-th feature of superpixel block u, and f_{a,e}^v is that of superpixel block v.
The distance of the c-th feature between superpixel blocks u and v, where c = 2, 4, 6 respectively denotes the RGB histogram, the HSV histogram, and the Lab histogram, is computed over the histogram bins: b is the number of histogram bins and d indexes them; h_{c,d}^u is the d-th histogram value of the c-th feature of superpixel block u, and h_{c,d}^v is that of superpixel block v.
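The distance formulas themselves are not reproduced above. A reconstruction consistent with the definitions (an assumption: Euclidean distance for the mean and filter-response features, bin-wise L1 distance for the histograms, each normalized by N(·) before summation) would be:

```latex
% Hypothetical reconstruction of the superpixel distance.
d_a(u,v) = \sqrt{\sum_{e=1}^{m}\left(f_{a,e}^{u}-f_{a,e}^{v}\right)^{2}},
\quad a \in \{1,3,5,7\},
\qquad
d_c(u,v) = \sum_{d=1}^{b}\left|h_{c,d}^{u}-h_{c,d}^{v}\right|,
\quad c \in \{2,4,6\},

D(u,v) = \sum_{a\in\{1,3,5,7\}} N\bigl(d_a(u,v)\bigr)
       + \sum_{c\in\{2,4,6\}} N\bigl(d_c(u,v)\bigr)
```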
Then compute the cluster-center feature vector of each initial cluster: the cluster center is obtained by averaging each feature over all superpixels in the cluster.
Step 2.2.3: select the Euclidean distance as the similarity measure between the initial clusters, and compute the difference values between the cluster centers.
Step 2.2.4: judge whether the difference between any two cluster centers is smaller than a threshold θ. Let the set of cluster centers be A; then D(g, h) denotes the Euclidean distance between cluster centers g and h, with g, h ∈ A.
Step 2.2.5: if the result of step 2.2.4 is "yes", decrease the number of clusters N by 1 and return to step 2.2.2 to re-cluster.
Step 2.2.6: if the result of step 2.2.4 is "no", proceed to step 2.2.7.
Step 2.2.7: record the number of clusters and the feature vectors of the cluster centers.
Step 2.2.8: the procedure ends.
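Steps 2.2.1 to 2.2.8 can be sketched as follows (a numpy sketch under assumptions: a plain K-means inner loop, Euclidean center distances, and hypothetical values for z and θ):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means (numpy), returning the k cluster centers."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):           # skip empty clusters
                centers[j] = X[assign == j].mean(axis=0)
    return centers

def improved_kmeans(X, z=3, theta=0.5):
    """Steps 2.2.1-2.2.8: start from z clusters and re-run K-means with
    one fewer cluster whenever two centers lie within theta of each
    other (Euclidean), until all centers are well separated."""
    k = z
    while k > 1:
        centers = kmeans(X, k)
        d = np.linalg.norm(centers[:, None] - centers[None, :], axis=2)
        d[np.diag_indices(k)] = np.inf
        if d.min() >= theta:                  # all centers distinct
            return centers
        k -= 1                                # re-cluster with k - 1
    return kmeans(X, 1)

# Hypothetical background features: two well-separated groups, so the
# initial z = 3 clusters should collapse to 2.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
centers = improved_kmeans(X, z=3, theta=1.0)
```

In the patent the points of X would be the full superpixel feature vectors and the inner distance would be the composite feature distance of step 2.2.2; plain Euclidean distance is used here to keep the sketch short.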
The salient region detection method based on image background and spatial location provided by the present invention has the following advantages:
The method can accurately identify the salient regions of an image, performs well, effectively improves detection accuracy and computational efficiency, and provides technical support for the screening and analysis of the massive image and video data of the Internet and cloud computing.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the overall flow of the salient region detection method based on image background and spatial location provided by the present invention;
Fig. 2 is a schematic diagram of the hierarchical superpixel segmentation results of the method;
Fig. 3 is a flowchart of the improved K-means clustering algorithm provided by the present invention;
Fig. 4 is a schematic comparison of salient region detection results.
Detailed Description of the Embodiments
To make the technical problems solved, the technical scheme, and the beneficial effects of the present invention clearer, the invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention and are not intended to limit it.
The present invention provides a salient region detection method based on image background and spatial location. The method mainly comprises: performing superpixel segmentation of the target image at M scales to obtain M layers of target sub-images; extracting superpixel block feature vectors, including the color and texture features of each superpixel block; clustering the background superpixel blocks; computing a saliency value for each superpixel block from its spatial location and its difference from the background superpixels; and fusing the multi-scale superpixel block saliency values. The method can accurately identify the salient regions of an image, performs well, effectively improves detection accuracy and computational efficiency, and provides technical support for the screening and analysis of the massive image and video data of the Internet and cloud computing.
In the salient region detection method based on image background and spatial location, background superpixel clustering is performed on each image layer, superpixel feature vectors are computed, and the saliency value of each superpixel is computed from its spatial location and its difference from the background superpixels. Finally, the superpixel saliency values of all layers are fused to obtain the final saliency map. Referring to Fig. 1, the method comprises the following steps.
Step 1: perform superpixel segmentation of the target image at M scales, where M is the total number of scale layers, obtaining M layers of target sub-images; each layer of target sub-image is composed of multiple superpixel blocks.
In this step, the SLIC (Simple Linear Iterative Clustering) algorithm is used to perform the M-scale superpixel segmentation of the target image. Hierarchical superpixel segmentation simulates the different visual granularities of the different visual cells of the human eye to achieve better results: the saliency of image pixels is judged at different scales, finally yielding a fair and objective saliency map. Balancing human-eye characteristics against algorithm performance, a three-layer hierarchical superpixel segmentation can be adopted.
Step 2: for each layer of target sub-image, perform the following steps 2.1 to 2.3:
Step 2.1: extract the feature vector of each superpixel block in the target sub-image, obtaining the superpixel block feature vectors. Specifically, extract the color and texture features of each superpixel block; the feature vector of each superpixel block comprises the 3 components of the RGB mean, the 256 components of the RGB histogram, the 3 components of the HSV mean, the 256 components of the HSV histogram, the 3 components of the Lab mean, the 256 components of the Lab histogram, and the 48 components of the LM filter responses.
Step 2.2: treat the border region of the target sub-image as the image background; superpixel blocks belonging to the image background are called background superpixel blocks.
Cluster the background superpixel blocks to obtain n clusters: the 1st cluster, the 2nd cluster, ..., the n-th cluster. The cluster-center feature vector of the 1st cluster is B_1, that of the 2nd cluster is B_2, and so on up to B_n for the n-th cluster; thus the set of cluster-center feature vectors is B = {B_1, B_2, ..., B_n}.
In general, clustering the background superpixel blocks into 1 to 3 cluster sets prevents errors in the background feature vector calculation caused by large differences among the superpixels along the image border, giving the background nodes a more accurate evaluation.
In this step, the background superpixel blocks are clustered using the improved K-means clustering algorithm; referring to Fig. 3, this comprises the following steps:
Step 2.2.1: set the initial number of clusters of the improved K-means clustering algorithm to z; typically z = 3.
Step 2.2.2: use the K-means clustering algorithm to perform initial clustering, obtaining several initial clusters. During initial clustering, the distance between any two superpixel blocks is computed as follows.
For any two superpixel blocks in the target sub-image, denoted superpixel block u and superpixel block v, let the extracted features of u be: RGB mean f_1^u, RGB histogram f_2^u, HSV mean f_3^u, HSV histogram f_4^u, Lab mean f_5^u, Lab histogram f_6^u, and LM filter response f_7^u; and likewise f_1^v through f_7^v for superpixel block v.
The distance D(u, v) between superpixel blocks u and v combines the per-feature distances, where N(·) denotes normalization.
The distance of the a-th feature between u and v, where a = 1, 3, 5, 7 respectively denotes the RGB mean feature, the HSV mean feature, the Lab mean feature, and the LM filter response feature, is computed over the feature's dimensions: m is the total number of dimensions of the feature and e indexes them (the RGB, HSV, and Lab mean features each have dimension 3; the LM filter response feature has dimension 48); f_{a,e}^u is the e-th dimension component of the a-th feature of superpixel block u, and f_{a,e}^v is that of superpixel block v.
The distance of the c-th feature between u and v, where c = 2, 4, 6 respectively denotes the RGB histogram, the HSV histogram, and the Lab histogram, is computed over the histogram bins: b is the number of histogram bins and d indexes them; h_{c,d}^u is the d-th histogram value of the c-th feature of superpixel block u, and h_{c,d}^v is that of superpixel block v.
Then compute the cluster-center feature vector of each initial cluster: the cluster center is obtained by averaging each feature over all superpixels in the cluster.
Step 2.2.3: select the Euclidean distance as the similarity measure between the initial clusters, and compute the difference values between the cluster centers.
Step 2.2.4: judge whether the difference between any two cluster centers is smaller than a threshold θ. Let the set of cluster centers be A; then D(g, h) denotes the Euclidean distance between cluster centers g and h, with g, h ∈ A.
Step 2.2.5: if the result of step 2.2.4 is "yes", decrease the number of clusters N by 1 and return to step 2.2.2 to re-cluster.
Step 2.2.6: if the result of step 2.2.4 is "no", proceed to step 2.2.7.
Step 2.2.7: record the number of clusters and the feature vectors of the cluster centers.
Step 2.2.8: the procedure ends.
In this step, the background prior is based on the principle of photography: the four border regions of the image are treated as the image background. Most current algorithms that use a background prior take the entire border of the image as the background and extract a single background-region feature vector; this approach cannot exploit the differences within the image-border background. Investigation shows that the border region of many images can be divided into 1 to 3 parts, and generally no more than 3. Therefore, to describe the image background region well, on the basis of the superpixel segmentation of the image, the present invention applies the improved K-means clustering algorithm to the set of background superpixel blocks along the image border, clustering the background superpixel blocks on the four borders into 1 to 3 sets that serve as the image background region. All background superpixel blocks on the four borders form the background superpixel block set, and the color and texture features of all superpixel blocks are extracted to describe the superpixel block information.
Step 2.3: for each superpixel block of the target sub-image, denoted p, compute its saliency value s as the weighted average of its distances to the background cluster centers:

s = w · (1/n) · Σ_{i=1}^{n} D(p, B_i)

where:

D(p, B_i) is the distance between superpixel block p and the cluster-center feature vector B_i of the i-th cluster, i = {1, 2, …, n};

σ is a scale factor, usually set to 0.5;

w is a weight that measures the distance between superpixel block p and the center point of this layer's target sub-image, decreasing as that distance grows with a falloff controlled by σ; (x, y) denotes the center-point coordinates of superpixel block p, and (x', y') denotes the center-point coordinates of this layer's target sub-image.
The saliency value of every superpixel block of every layer's target sub-image is computed in this way.

In this step, the saliency of a superpixel block is computed from its spatial position and its difference from the background superpixels. Specifically: background superpixel blocks are clustered on each layer's target sub-image, and the saliency value of a block is computed from the block's spatial position and its difference from the background superpixel blocks. The saliency value of superpixel block p is the weighted average of its differences from all background superpixel cluster centers, where the weight depends on the block's distance to the center point of the layer image: the smaller the distance, the larger the weight.
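The per-block computation of step 2.3 can be sketched as below. The Gaussian form of the weight w is an assumption: the patent only states that w grows as the block center (x, y) approaches the sub-image center (x', y'), with σ as a scale factor.

```python
import numpy as np

def superpixel_saliency(p_feat, p_xy, centers, img_center_xy, sigma=0.5):
    """Saliency of one superpixel block: the spatially weighted mean of
    its feature distances to the n background cluster centers B_i."""
    # D(p, B_i): Euclidean distance to each background cluster center
    dists = np.linalg.norm(np.asarray(centers) - np.asarray(p_feat), axis=1)
    # w: closer to the sub-image center -> larger weight
    # (coordinates assumed normalized to [0, 1], so sigma = 0.5 is sensible)
    dx = p_xy[0] - img_center_xy[0]
    dy = p_xy[1] - img_center_xy[1]
    w = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma ** 2))
    return w * dists.mean()
```

A block that matches the background clusters gets small distances and hence low saliency; a block far (in feature space) from every background cluster and near the image center gets the highest score.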
Step 3: fuse the multi-scale superpixel block saliency values into the final saliency map and detect the salient region on it. Specifically:

Step 3.1: compute the saliency value of an arbitrary pixel j on the fused saliency map. The saliency value s_j of pixel j is the average of the saliency values of the superpixel blocks containing j at all scales:

s_j = (1/L) · Σ_{l=1}^{L} s_l

where s_l is the saliency value of the superpixel block of the l-th layer target sub-image that contains pixel j, and L is the number of layers.

Step 3.2: the saliency values of all pixels j form the image saliency map; on this map, the region whose values exceed a set threshold is the finally detected salient region.
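Steps 3.1 and 3.2 amount to a per-pixel average over scales followed by thresholding; a minimal sketch (function names and data layout are illustrative):

```python
import numpy as np

def fuse_saliency(label_maps, block_saliency):
    """Fuse per-layer superpixel saliency into one per-pixel map.

    label_maps: list of (H, W) integer arrays, one per scale; entry l
        maps each pixel to its superpixel index at layer l.
    block_saliency: list of 1-D arrays; block_saliency[l][k] is the
        saliency of superpixel k at layer l.
    Each pixel's value is the mean of its block saliencies over scales.
    """
    layers = [np.asarray(s)[m] for m, s in zip(label_maps, block_saliency)]
    return np.mean(layers, axis=0)

def threshold_regions(smap, thresh):
    """Binary mask of the finally detected salient region (step 3.2)."""
    return smap > thresh
```

Indexing the 1-D saliency array by the label map broadcasts each block's score onto its pixels, so the fusion is two vectorized operations regardless of the number of scales.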
The salient-region detection method BSP based on image background and spatial position proposed by the present invention, the classic algorithm GR, and the classic algorithm SF were each applied to the original images in Fig. 4; the detection results are shown in Fig. 4. As the figure shows, the BSP algorithm of the present invention detects salient regions well and is clearly better than the classic algorithms GR and SF. In addition, the mean absolute error (MAE) and the area under the ROC curve (AUC) were computed for the three detection algorithms; the results are shown in the table below. The MAE of BSP is lower than that of GR and SF, and the AUC of BSP is higher than that of GR and SF, which shows that the overall performance of the BSP method is good.
Table: comparison of the three detection algorithms
The salient-region detection method based on image background and spatial position provided by the present invention has the following advantages:

The method can accurately identify the salient region of an image, effectively improves detection accuracy and computational efficiency, and provides technical support for the screening and analysis of the massive image and video data of the Internet and cloud computing; it therefore has good application prospects.

The above is only a preferred embodiment of the present invention. It should be pointed out that those of ordinary skill in the art can make several improvements and refinements without departing from the principle of the present invention, and such improvements and refinements shall also fall within the protection scope of the present invention.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711122796.8A CN107886507B (en) | 2017-11-14 | 2017-11-14 | A kind of salient region detecting method based on image background and spatial position |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107886507A true CN107886507A (en) | 2018-04-06 |
CN107886507B CN107886507B (en) | 2018-08-21 |
Family
ID=61776610
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711122796.8A Active CN107886507B (en) | 2017-11-14 | 2017-11-14 | A kind of salient region detecting method based on image background and spatial position |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107886507B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040179742A1 (en) * | 2003-03-13 | 2004-09-16 | Sharp Laboratories Of America, Inc. | Compound image compression method and apparatus |
CN1770161A (en) * | 2004-09-29 | 2006-05-10 | 英特尔公司 | K-means clustering using t-test computation |
CN105913456A (en) * | 2016-04-12 | 2016-08-31 | 西安电子科技大学 | Video significance detecting method based on area segmentation |
CN106203430A (en) * | 2016-07-07 | 2016-12-07 | 北京航空航天大学 | A kind of significance object detecting method based on foreground focused degree and background priori |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108921820A (en) * | 2018-05-30 | 2018-11-30 | 咸阳师范学院 | A kind of saliency object detection method based on feature clustering and color contrast |
CN108921820B (en) * | 2018-05-30 | 2021-10-29 | 咸阳师范学院 | A Salient Object Detection Method Based on Color Features and Clustering Algorithm |
CN110866896A (en) * | 2019-10-29 | 2020-03-06 | 中国地质大学(武汉) | Image saliency object detection method based on k-means and level set superpixel segmentation |
CN112017158A (en) * | 2020-07-28 | 2020-12-01 | 中国科学院西安光学精密机械研究所 | Spectral characteristic-based adaptive target segmentation method in remote sensing scene |
CN112017158B (en) * | 2020-07-28 | 2023-02-14 | 中国科学院西安光学精密机械研究所 | Spectral characteristic-based adaptive target segmentation method in remote sensing scene |
CN112085020A (en) * | 2020-09-08 | 2020-12-15 | 北京印刷学院 | Visual saliency target detection method and device |
CN112085020B (en) * | 2020-09-08 | 2023-08-01 | 北京印刷学院 | A visually salient target detection method and device |
CN112418147A (en) * | 2020-12-02 | 2021-02-26 | 中国人民解放军军事科学院国防科技创新研究院 | Track ballast identification method and device based on aerial images |
CN113378873A (en) * | 2021-01-13 | 2021-09-10 | 杭州小创科技有限公司 | Algorithm for determining attribution or classification of target object |
CN113901929A (en) * | 2021-10-13 | 2022-01-07 | 河北汉光重工有限责任公司 | Dynamic target detection and identification method and device based on significance |
CN114612336A (en) * | 2022-03-21 | 2022-06-10 | 北京达佳互联信息技术有限公司 | An image processing method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN107886507B (en) | 2018-08-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107886507B (en) | A kind of salient region detecting method based on image background and spatial position | |
CN104537647B (en) | A kind of object detection method and device | |
CN105869173B (en) | A kind of stereoscopic vision conspicuousness detection method | |
CN103413347B (en) | Based on the extraction method of monocular image depth map that prospect background merges | |
CN113065558A (en) | Lightweight small target detection method combined with attention mechanism | |
CN109614985A (en) | A target detection method based on densely connected feature pyramid network | |
CN108960404B (en) | Image-based crowd counting method and device | |
Asokan et al. | Machine learning based image processing techniques for satellite image analysis-a survey | |
CN105976378A (en) | Graph model based saliency target detection method | |
CN105678278A (en) | Scene recognition method based on single-hidden-layer neural network | |
CN109255375A (en) | Panoramic picture method for checking object based on deep learning | |
CN112288758B (en) | Infrared and visible light image registration method for power equipment | |
CN105335725A (en) | Gait identification identity authentication method based on feature fusion | |
CN111738114B (en) | Vehicle target detection method based on accurate sampling of remote sensing images without anchor points | |
CN107977660A (en) | Region of interest area detecting method based on background priori and foreground node | |
Ma et al. | A multi-scale progressive collaborative attention network for remote sensing fusion classification | |
CN107392968A (en) | The image significance detection method of Fusion of Color comparison diagram and Color-spatial distribution figure | |
CN106407943A (en) | Pyramid layer positioning based quick DPM pedestrian detection method | |
Liu et al. | Extended faster R-CNN for long distance human detection: Finding pedestrians in UAV images | |
CN106682678A (en) | Image angle point detection and classification method based on support domain | |
CN106780727A (en) | A kind of headstock detection model method for reconstructing and device | |
CN104050674B (en) | Salient region detection method and device | |
Azaza et al. | Context proposals for saliency detection | |
CN104268595A (en) | General object detecting method and system | |
CN110910497B (en) | Method and system for realizing augmented reality map |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||