
CN105488812A - Motion-feature-fused space-time significance detection method - Google Patents


Info

Publication number
CN105488812A
CN105488812A (application CN201510823908.7A)
Authority
CN
China
Prior art keywords
superpixel
saliency
image
motion
superpixels
Prior art date
Legal status
Pending
Application number
CN201510823908.7A
Other languages
Chinese (zh)
Inventor
于凤芹 (Yu Fengqin)
Current Assignee
Jiangnan University
Original Assignee
Jiangnan University
Priority date
Filing date
Publication date
Application filed by Jiangnan University filed Critical Jiangnan University
Priority to CN201510823908.7A priority Critical patent/CN105488812A/en
Publication of CN105488812A publication Critical patent/CN105488812A/en
Pending legal-status Critical Current


Landscapes

  • Image Analysis (AREA)

Abstract

The invention belongs to the field of image and video processing, and in particular relates to a spatio-temporal saliency detection method that fuses spatial saliency with motion features. The method comprises the following steps: first, a superpixel segmentation algorithm represents each frame as a series of superpixels, and superpixel-level color histograms are extracted as features; next, a spatial saliency map is computed from the global contrast and spatial distribution of colors; then, a temporal saliency map is obtained by optical flow estimation and block matching; finally, a dynamic fusion strategy combines the spatial and temporal saliency maps into the final spatio-temporal saliency map. Because the method fuses spatial saliency with motion features, the algorithm applies to saliency detection in both dynamic and static scenes.

Description

A spatio-temporal saliency detection method fusing motion features

1. Technical field

The invention belongs to the field of image and video processing, and is in particular a spatio-temporal saliency detection method that fuses motion features. The invention builds on a region-based saliency detection model. First, a superpixel segmentation algorithm represents each frame as a series of superpixels, and superpixel-level color histograms are extracted as features. Then, a motion saliency map is obtained by optical flow estimation and block matching, and a spatial saliency map is obtained from the global contrast and spatial distribution of colors. Finally, a dynamic fusion strategy merges the motion and spatial saliency maps into the final spatio-temporal saliency map. Because it incorporates motion features into saliency detection, the method can be applied in both static and dynamic scenes.

2. Background

Saliency detection refers to accurately locating and extracting the regions of videos and images that contain a large amount of information and attract human attention. During image processing, assigning higher processing priority to salient regions both reduces the complexity of the computation and improves processing efficiency. Saliency detection therefore has broad application prospects in object recognition, image retrieval, video coding, and other fields.

Depending on the information source processed, saliency detection models can be divided into spatial models and spatio-temporal models. For spatial saliency detection in static scenes, the main approaches are: methods based on biologically inspired models and feature-integration theory, which obtain saliency by computing feature differences across scales with a center-surround operator; image saliency detection based on local contrast analysis, which computes the feature distance between a pixel and its surrounding pixels; and salient-region detection methods that segment the image into regions and compute global contrast from the spatial characteristics and color contrast of the image.

Spatio-temporal saliency detection in dynamic scenes must consider not only spatially salient regions but also the factors that affect saliency along the time axis, such as object motion, changing natural conditions, and camera movement. By computation style, spatio-temporal saliency detection algorithms fall into four main categories: (1) feature-fusion models, which add motion features to an image saliency model and compute the difference between two consecutive frames to obtain motion-induced saliency; (2) spatial-plane models, which extend the spectral-residual method by observing that, along the time axis, the pixels of the frame sequence satisfy the spectral-residual property in the X-T and Y-T planes, treat those planes as two-dimensional matrices, apply low-rank/sparse decomposition to each, and fuse the results into the final saliency map; (3) frequency-domain models, which combine brightness, color, and motion features into a quaternion, obtain the spatio-temporal phase spectrum of the video through the quaternion Fourier transform, and use that phase spectrum to detect saliency; (4) background-detection models, in which a Gaussian mixture model is used for spatio-temporal saliency detection and, in a single scene, the foreground region computed by the model is taken as the salient region. These methods either fuse static and dynamic saliency with a simple linear combination or emphasize only motion features; they ignore the joint dynamic and spatial characteristics of the scene and struggle to obtain accurate salient regions.

3. Summary of the invention

Building on a region-based saliency detection model, the invention incorporates optical-flow vector block comparison to extract the motion features of the image sequence, and proposes a strategy that dynamically fuses spatial and temporal saliency, so that the method can be used in both static and dynamic natural scenes.

(1) Superpixel segmentation and feature extraction

Simple linear iterative clustering (SLIC) represents each frame F_t as a series of superpixels. Using superpixels as the basic processing unit reduces the number of units to process and ensures that the final detection result highlights salient objects uniformly. Color features are extracted as a color histogram: each channel of the Lab color space is quantized into 12 values, reducing the number of colors to q_c = 12^3 = 1728. For each superpixel, the mean Lab value of all its pixels is computed and quantized to obtain the color histogram CH_t, which is finally normalized so that

\sum_{k=1}^{q_c} CH_t(k) = 1.
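The quantization and histogram step above can be sketched with NumPy (a minimal illustration, assuming Lab channels pre-scaled to [0, 1] and contiguous superpixel labels; the function name and array layout are ours, not the patent's, and the segmentation itself is assumed to come from an external SLIC implementation):

```python
import numpy as np

Q = 12  # quantization levels per Lab channel, so q_c = 12**3 = 1728

def superpixel_color_histogram(lab, labels):
    """Frame-level color histogram CH_t over superpixel mean colors.

    lab    : (H, W, 3) float array, Lab channels scaled to [0, 1] (assumption)
    labels : (H, W) int array of superpixel ids 0..n-1 (e.g. from SLIC)
    """
    flat = labels.ravel()
    n_sp = flat.max() + 1
    counts = np.bincount(flat, minlength=n_sp)
    # mean Lab value of all pixels inside each superpixel
    means = np.stack([
        np.bincount(flat, weights=lab[..., c].ravel(), minlength=n_sp) / counts
        for c in range(3)
    ], axis=1)
    # quantize each channel to Q levels and combine into a single color index
    q = np.clip((means * Q).astype(int), 0, Q - 1)
    idx = (q[:, 0] * Q + q[:, 1]) * Q + q[:, 2]
    ch = np.bincount(idx, minlength=Q ** 3).astype(float)
    return ch / ch.sum()  # normalized so that sum_k CH_t(k) = 1
```

The histogram is built over superpixel mean colors rather than raw pixels, matching the text's choice of the superpixel as the basic unit.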

(2) Temporal saliency based on motion features

The invention extracts the motion features of the image sequence by optical-flow motion estimation and block matching. The basic idea of optical-flow estimation is to take the moving-image function f(x, y, t) as the basic function, establish the optical-flow constraint equation from the principle of image intensity conservation, and compute the motion parameters by solving that equation. For the current frame F_t, its previous frame F_{t-1} serves as the reference frame, and optical-flow estimation yields the motion vector field (u(x, y), v(x, y)) of F_t. For each superpixel sp_t^i of the current frame, its average motion vector magnitude mmv_t^i is computed.
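Given a dense flow field from any optical-flow estimator, the per-superpixel average motion magnitude can be computed as below (a sketch; the flow field itself is assumed to be supplied by an external estimator, and the function name is ours):

```python
import numpy as np

def mean_motion_magnitude(u, v, labels):
    """Average optical-flow magnitude per superpixel (the mmv of the text).

    u, v   : (H, W) components of the flow field of F_t w.r.t. F_{t-1}
    labels : (H, W) superpixel ids 0..n-1
    """
    mag = np.hypot(u, v).ravel()          # per-pixel motion magnitude
    flat = labels.ravel()
    n_sp = flat.max() + 1
    return (np.bincount(flat, weights=mag, minlength=n_sp)
            / np.bincount(flat, minlength=n_sp))
```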

To overcome the influence of background motion and camera shake, the invention uses block matching to find, in the previous frame, the superpixel that best matches each superpixel of the current frame, and takes the relative motion between that superpixel and its background superpixels as the saliency value. Concretely: in frame F_{t-1}, the best-matching superpixel is selected and, together with the superpixels connected to it, forms the associated superpixel set ψ_i; μ_t^i and μ_{t-1}^j denote the center positions of superpixels i and j. The temporal saliency of superpixel sp_t^i is then defined as:

S_{motion}(sp_t^i) = \frac{\sum_{sp_{t-1}^j \in \psi_i} \lambda(sp_t^i, sp_{t-1}^j) \cdot S_{motion}(sp_{t-1}^j)}{\sum_{sp_{t-1}^j \in \psi_i} \lambda(sp_t^i, sp_{t-1}^j)}    (1)

where the inter-frame correlation is

\lambda(sp_t^i, sp_{t-1}^j) = c(i) \cdot c(j) \cdot \left[1 - \frac{\|\mu_t^i - \mu_{t-1}^j + mmv_t^i\|}{d}\right],

and c(i) and c(j) denote the quantized Lab color values of superpixels i and j, respectively.
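Equation (1) together with the correlation λ can be sketched as follows (an illustration under our own data layout: ψ is given as index lists, mmv is treated as a 2-D mean motion vector so that the norm inside λ is well defined, and negative correlations are clamped to zero as an assumption):

```python
import numpy as np

def temporal_saliency(psi, c_t, c_prev, mu_t, mu_prev, mmv_t, s_prev, d):
    """Temporal saliency of each superpixel of F_t, Eq. (1).

    psi     : list of index arrays; psi[i] is the associated set of
              superpixel i in F_{t-1} (best match plus its neighbours)
    c_t, c_prev   : (n,), (m,) quantized color values c(i), c(j)
    mu_t, mu_prev : (n, 2), (m, 2) superpixel center positions
    mmv_t   : (n, 2) mean motion vectors of F_t's superpixels (assumption)
    s_prev  : (m,) motion saliency of F_{t-1}'s superpixels
    d       : normalizing distance, e.g. the image diagonal
    """
    s = np.zeros(len(psi))
    for i, js in enumerate(psi):
        disp = mu_t[i] - mu_prev[js] + mmv_t[i]        # displacement inside lambda
        lam = c_t[i] * c_prev[js] * (1 - np.linalg.norm(disp, axis=1) / d)
        lam = np.maximum(lam, 0.0)                     # clamp (assumption)
        if lam.sum() > 0:
            # weighted average of previous-frame motion saliency, Eq. (1)
            s[i] = (lam * s_prev[js]).sum() / lam.sum()
    return s
```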

(3) Spatial saliency

The invention computes the spatial saliency of each frame from the global contrast and spatial distribution of colors. For a superpixel sp_t^i of the current frame, its global color-contrast saliency is defined as:

S_{GC}(i) = \sum_{j=1}^{n_t} f_j \cdot \|c(i) - c(j)\|    (2)

where f_j is the probability that superpixel j's histogram color occurs in the whole image, and c(i) and c(j) denote the quantized Lab color values of superpixels i and j, respectively.
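Equation (2) can be sketched directly with a pairwise color-distance matrix (a minimal illustration; quantized colors are represented here as scalars, so the norm reduces to an absolute difference):

```python
import numpy as np

def global_contrast(c, f):
    """Global color-contrast saliency S_GC, Eq. (2).

    c : (n,) quantized color value per superpixel
    f : (n,) probability of each superpixel's histogram color in the frame
    """
    diff = np.abs(c[:, None] - c[None, :]).astype(float)  # ||c(i) - c(j)||
    return diff @ f                                        # sum_j f_j * diff
```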

The spatial distribution of a color also affects its saliency: the more compact the distribution, the higher the saliency. The spatial-distribution saliency of a color is therefore defined as:

S_{SC}(i) = \frac{\sum_{j=1}^{n_t} \lambda(sp_t^i, sp_t^j) \cdot D(sp_t^j)}{\sum_{j=1}^{n_t} \lambda(sp_t^i, sp_t^j)}    (3)

where D(sp_t^j) is the distance from the center of superpixel j to the image center, and \lambda(sp_t^i, sp_t^j) = c(i) \cdot c(j) \cdot \left[1 - \frac{\|\mu_t^i - \mu_t^j\|}{d}\right]; the closer the colors of superpixels i and j and the smaller the distance between them, the larger λ becomes.
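Equation (3) admits a compact vectorized sketch (an illustration under the same scalar-color convention as above; names are ours):

```python
import numpy as np

def spatial_distribution(c, mu, center, d):
    """Color spatial-distribution saliency S_SC, Eq. (3).

    c      : (n,) quantized color values
    mu     : (n, 2) superpixel center positions
    center : (2,) image center
    d      : normalizing distance, e.g. the image diagonal
    """
    D = np.linalg.norm(mu - center, axis=1)             # D(sp_t^j)
    dist = np.linalg.norm(mu[:, None, :] - mu[None, :, :], axis=2)
    lam = np.outer(c, c) * (1.0 - dist / d)             # lambda(sp_t^i, sp_t^j)
    # weighted average of center distances, Eq. (3)
    return (lam * D[None, :]).sum(axis=1) / lam.sum(axis=1)
```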

Finally, the global color-contrast saliency and the spatial-distribution saliency are fused to obtain the spatial saliency of the image:

S_{spatial}(sp_t^i) = S_{GC}(sp_t^i) \cdot S_{SC}(sp_t^i)    (4)

(4) Spatio-temporal saliency

The temporal and spatial saliency are adaptively and linearly fused into the spatio-temporal saliency map:

\mathrm{Saliency}(sp_t^i) = \alpha \cdot S_{spatial}(sp_t^i) + (1 - \alpha) \cdot S_{temporal}(sp_t^i)    (5)

The weight α is defined as:

\alpha = \max\left[\frac{\mathrm{med}(MS_{t-f}, \ldots, MS_{t-1})}{\mathrm{med}(MS_{t-f}, \ldots, MS_{t-1}) + MS_t},\ 0.5\right]    (6)

where … denotes the number of pixels in superpixel i. In a dynamic scene, the more pronounced the motion, the larger the weight on temporal saliency; in a static scene, α is set directly to 1.
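Equations (5) and (6) can be sketched together (a minimal illustration; MS is taken as a per-frame scalar motion measure whose exact definition is partly elided in the text, and a positive denominator is assumed):

```python
import numpy as np

def fuse_spatiotemporal(s_spatial, s_temporal, ms_history, ms_t, static=False):
    """Adaptive linear fusion of spatial and temporal saliency, Eqs. (5)-(6).

    s_spatial, s_temporal : per-superpixel saliency maps of the frame
    ms_history : frame-level motion measures MS_{t-f}, ..., MS_{t-1}
    ms_t       : motion measure of the current frame (med + ms_t > 0 assumed)
    """
    if static:
        alpha = 1.0                               # static scene: spatial map only
    else:
        med = np.median(ms_history)
        alpha = max(med / (med + ms_t), 0.5)      # Eq. (6)
    # Eq. (5): stronger motion -> smaller alpha -> larger temporal weight
    return alpha * np.asarray(s_spatial) + (1.0 - alpha) * np.asarray(s_temporal)
```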

4. Brief description of the drawing

The accompanying drawing shows the principle and implementation steps of the invention.

5. Specific implementation

Figure 1 is the implementation flowchart of the invention; the specific steps are:

(1) Image preprocessing: the SLIC superpixel segmentation algorithm divides each input frame into a series of uniform, compact superpixels that serve as the basic processing units for saliency detection.

(2) Color feature extraction: for each frame, with the superpixel as the unit, the mean Lab value of all pixels in each superpixel is computed and quantized to obtain the color histogram CH_t, which is normalized so that \sum_{k=1}^{q_c} CH_t(k) = 1.

(3) Temporal saliency computation: optical-flow estimation computes the motion vector field (u(x, y), v(x, y)) of F_t relative to the previous frame F_{t-1}, and the average motion magnitude within each superpixel is obtained. Block matching then finds, in the previous frame, each current superpixel's best match and its associated superpixel set, and formula (1) yields the temporal saliency map of the image.

(4) Spatial saliency computation: based on the region-based static saliency detection model, formulas (2) and (3) give the superpixel-level global color-contrast saliency and color spatial-distribution saliency, which formula (4) fuses into the final spatial saliency map.

(5) Spatio-temporal saliency computation: formula (5) adaptively and linearly fuses the temporal and spatial saliency into the spatio-temporal saliency map.

Claims (5)

1. A spatio-temporal saliency detection method fusing motion features, characterized in that: each frame is segmented into superpixels and a color histogram is extracted with the superpixel as the basic unit; the spatial saliency of each frame is computed from the global contrast and spatial distribution of colors; optical-flow motion estimation and block matching extract the motion features of the image sequence; and spatial and temporal saliency are dynamically fused, so that the algorithm applies to the detection of motion features in both static and dynamic natural scenes.

2. The method of claim 1, characterized in that the superpixel segmentation and superpixel-level color-histogram extraction are: linear iterative clustering represents each frame F_t as a series of superpixels; using superpixels as the basic processing unit reduces the number of units and ensures that the detection result highlights salient objects uniformly; each channel of the Lab color space is quantized into 12 values, reducing the number of colors to q_c = 12^3 = 1728; for each superpixel, the mean Lab value of all its pixels is computed and quantized to obtain the color histogram CH_t, which is finally normalized so that \sum_{k=1}^{q_c} CH_t(k) = 1.

3. The method of claim 1, characterized in that the spatial saliency is computed as: for a superpixel sp_t^i of the current frame, the global color-contrast saliency is S_{GC}(i) = \sum_{j=1}^{n_t} f_j \cdot \|c(i) - c(j)\|, where f_j is the probability that superpixel j's histogram color occurs in the whole image and c(i), c(j) are the quantized Lab color values of superpixels i and j; the spatial-distribution saliency is S_{SC}(i) = \frac{\sum_{j=1}^{n_t} \lambda(sp_t^i, sp_t^j) \cdot D(sp_t^j)}{\sum_{j=1}^{n_t} \lambda(sp_t^i, sp_t^j)}, where D(sp_t^j) is the distance from the center of superpixel j to the image center, and λ grows as the colors of superpixels i and j become closer and their distance smaller; finally the two are fused as S_{spatial}(sp_t^i) = S_{GC}(sp_t^i) \cdot S_{SC}(sp_t^i).

4. The method of claim 1, characterized in that the motion features of the image sequence are extracted by optical-flow motion estimation and block matching as follows: the moving-image function f(x, y, t) is taken as the basic function, the optical-flow constraint equation is established from the principle of image intensity conservation, and solving it yields the motion parameters; for the current frame F_t, the previous frame F_{t-1} serves as the reference frame, optical-flow estimation gives the motion vector field (u(x, y), v(x, y)), and the average motion vector magnitude mmv_t^i of each superpixel is computed; to overcome background motion and camera shake, block matching finds each superpixel's best match in the previous frame, which together with the superpixels connected to it forms the associated set ψ_i, and the temporal saliency is computed by formula (1) with inter-frame correlation \lambda(sp_t^i, sp_{t-1}^j) = c(i) \cdot c(j) \cdot \left[1 - \frac{\|\mu_t^i - \mu_{t-1}^j + mmv_t^i\|}{d}\right], where c(i) and c(j) are the quantized Lab color values of superpixels i and j.

5. The method of claim 1, characterized in that the temporal and spatial saliency are adaptively and linearly fused into the spatio-temporal saliency map with weight \alpha = \max\left[\frac{\mathrm{med}(MS_{t-f}, \ldots, MS_{t-1})}{\mathrm{med}(MS_{t-f}, \ldots, MS_{t-1}) + MS_t},\ 0.5\right]; in a dynamic scene, the more pronounced the motion, the larger the weight on temporal saliency, and in a static scene α is set directly to 1.
CN201510823908.7A 2015-11-24 2015-11-24 Motion-feature-fused space-time significance detection method Pending CN105488812A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510823908.7A CN105488812A (en) 2015-11-24 2015-11-24 Motion-feature-fused space-time significance detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510823908.7A CN105488812A (en) 2015-11-24 2015-11-24 Motion-feature-fused space-time significance detection method

Publications (1)

Publication Number Publication Date
CN105488812A true CN105488812A (en) 2016-04-13

Family

ID=55675778

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510823908.7A Pending CN105488812A (en) 2015-11-24 2015-11-24 Motion-feature-fused space-time significance detection method

Country Status (1)

Country Link
CN (1) CN105488812A (en)

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105898278A (en) * 2016-05-26 2016-08-24 杭州电子科技大学 Stereoscopic video saliency detection method based on binocular multidimensional perception characteristic
CN106097392A (en) * 2016-06-13 2016-11-09 西安电子科技大学 High-precision optical flow estimation method based on two-stage edge sensitive filtering
CN106210449A (en) * 2016-08-11 2016-12-07 上海交通大学 The frame rate up-conversion method for estimating of a kind of Multi-information acquisition and system
CN106250895A (en) * 2016-08-15 2016-12-21 北京理工大学 A kind of remote sensing image region of interest area detecting method
CN106372636A (en) * 2016-08-25 2017-02-01 上海交通大学 HOG-TOP-based video significance detection method
CN106778776A (en) * 2016-11-30 2017-05-31 武汉大学深圳研究院 A kind of time-space domain significance detection method based on location-prior information
CN107085725A (en) * 2017-04-21 2017-08-22 河南科技大学 A Method for Clustering Image Regions via Adaptive Codebook Based LLC
CN107220616A (en) * 2017-05-25 2017-09-29 北京大学 A kind of video classification methods of the two-way Cooperative Study based on adaptive weighting
CN107392968A (en) * 2017-07-17 2017-11-24 杭州电子科技大学 The image significance detection method of Fusion of Color comparison diagram and Color-spatial distribution figure
CN107392917A (en) * 2017-06-09 2017-11-24 深圳大学 A kind of saliency detection method and system based on space-time restriction
CN107507225A (en) * 2017-09-05 2017-12-22 明见(厦门)技术有限公司 Moving target detecting method, device, medium and computing device
CN107767400A (en) * 2017-06-23 2018-03-06 北京理工大学 Remote sensing images sequence moving target detection method based on stratification significance analysis
CN108052947A (en) * 2017-11-08 2018-05-18 北京航空航天大学 A kind of dynamic background suppressing method based on multiple dimensioned space-time consistency
CN108241854A (en) * 2018-01-02 2018-07-03 天津大学 A Deep Video Saliency Detection Method Based on Motion and Memory Information
CN108833920A (en) * 2018-06-04 2018-11-16 四川大学 A DVC Side Information Fusion Method Based on Optical Flow and Block Matching
CN109146925A (en) * 2018-08-23 2019-01-04 郑州航空工业管理学院 Conspicuousness object detection method under a kind of dynamic scene
CN109191485A (en) * 2018-08-29 2019-01-11 西安交通大学 A kind of more video objects collaboration dividing method based on multilayer hypergraph model
CN109255793A (en) * 2018-09-26 2019-01-22 国网安徽省电力有限公司铜陵市义安区供电公司 A monitoring and early warning system based on visual features
CN109446976A (en) * 2018-10-24 2019-03-08 闽江学院 A kind of video big data information extracting method based on wavelet transform and Characteristic Contrast
CN109711417A (en) * 2018-12-06 2019-05-03 重庆邮电大学 A video saliency detection method based on low-level saliency fusion and geodesics
CN110827193A (en) * 2019-10-21 2020-02-21 国家广播电视总局广播电视规划院 Panoramic video saliency detection method based on multi-channel features
CN110866896A (en) * 2019-10-29 2020-03-06 中国地质大学(武汉) Image saliency object detection method based on k-means and level set superpixel segmentation
CN110969605A (en) * 2019-11-28 2020-04-07 华中科技大学 Method and system for detecting moving small target based on space-time saliency map
CN111723715A (en) * 2020-06-10 2020-09-29 东北石油大学 A video saliency detection method and device, electronic device and storage medium
CN115953419A (en) * 2023-03-09 2023-04-11 天津艾思科尔科技有限公司 Dynamic video detection preprocessing method based on superpixel analysis

Citations (6)

Publication number Priority date Publication date Assignee Title
CN102509308A (en) * 2011-08-18 2012-06-20 上海交通大学 Motion segmentation method based on mixtures-of-dynamic-textures-based spatiotemporal saliency detection
CN102903120A (en) * 2012-07-19 2013-01-30 中国人民解放军国防科学技术大学 Time-space condition information based moving object detection method
CN103020992A (en) * 2012-11-12 2013-04-03 华中科技大学 Video image significance detection method based on dynamic color association
CN103208125A (en) * 2013-03-14 2013-07-17 上海大学 Visual salience algorithm of color and motion overall contrast in video frame image
CN103747240A (en) * 2013-12-25 2014-04-23 浙江大学 Fusion color and motion information vision saliency filtering method
CN104063872A (en) * 2014-07-04 2014-09-24 西安电子科技大学 Method for detecting salient regions in sequence images based on improved visual attention model

Non-Patent Citations (3)

Title
Liu Xiaohui et al., "Salient region detection fusing motion and spatial-relation features", Journal of Huazhong University of Science and Technology (Natural Science Edition) *
Zhang Yan et al., "Particle-filter multi-target tracking algorithm based on dynamic saliency features", Acta Electronica Sinica *
Wang Guoyou et al., "Sequential saliency-feature sea-surface target detection algorithm under complex background", Journal of Huazhong University of Science and Technology (Natural Science Edition) *

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105898278A (en) * 2016-05-26 2016-08-24 杭州电子科技大学 Stereoscopic video saliency detection method based on binocular multidimensional perception characteristic
CN105898278B (en) * 2016-05-26 2017-10-27 杭州电子科技大学 A kind of three-dimensional video-frequency conspicuousness detection method based on binocular Multidimensional Awareness characteristic
CN106097392A (en) * 2016-06-13 2016-11-09 西安电子科技大学 High-precision optical flow estimation method based on two-stage edge sensitive filtering
CN106210449A (en) * 2016-08-11 2016-12-07 上海交通大学 The frame rate up-conversion method for estimating of a kind of Multi-information acquisition and system
CN106210449B (en) * 2016-08-11 2020-01-07 上海交通大学 A frame rate up-conversion motion estimation method and system for multi-information fusion
CN106250895A (en) * 2016-08-15 2016-12-21 北京理工大学 A kind of remote sensing image region of interest area detecting method
CN106250895B (en) * 2016-08-15 2019-07-26 北京理工大学 A method for detecting regions of interest in optical remote sensing images
CN106372636A (en) * 2016-08-25 2017-02-01 上海交通大学 HOG-TOP-based video significance detection method
CN106778776A (en) * 2016-11-30 2017-05-31 武汉大学深圳研究院 A kind of time-space domain significance detection method based on location-prior information
CN106778776B (en) * 2016-11-30 2020-04-10 武汉大学深圳研究院 Time-space domain significance detection method based on position prior information
CN107085725A (en) * 2017-04-21 2017-08-22 河南科技大学 A Method for Clustering Image Regions via Adaptive Codebook Based LLC
CN107085725B (en) * 2017-04-21 2020-08-14 河南科技大学 Method for clustering image areas through LLC based on self-adaptive codebook
CN107220616A (en) * 2017-05-25 2017-09-29 北京大学 A kind of video classification methods of the two-way Cooperative Study based on adaptive weighting
CN107392917A (en) * 2017-06-09 2017-11-24 深圳大学 A kind of saliency detection method and system based on space-time restriction
CN107392917B (en) * 2017-06-09 2021-09-28 深圳大学 Video significance detection method and system based on space-time constraint
CN107767400A (en) * 2017-06-23 2018-03-06 北京理工大学 Remote sensing images sequence moving target detection method based on stratification significance analysis
CN107767400B (en) * 2017-06-23 2021-07-20 北京理工大学 A moving target detection method for remote sensing image sequences based on hierarchical saliency analysis
CN107392968B (en) * 2017-07-17 2019-07-09 杭州电子科技大学 Image saliency detection method fused with color contrast map and color space distribution map
CN107392968A (en) * 2017-07-17 2017-11-24 杭州电子科技大学 Image saliency detection method fusing a color contrast map and a color spatial distribution map
CN107507225B (en) * 2017-09-05 2020-10-27 明见(厦门)技术有限公司 Moving object detection method, device, medium and computing equipment
CN107507225A (en) * 2017-09-05 2017-12-22 明见(厦门)技术有限公司 Moving target detection method, device, medium and computing device
CN108052947A (en) * 2017-11-08 2018-05-18 北京航空航天大学 A dynamic background suppression method based on multi-scale spatio-temporal consistency
CN108052947B (en) * 2017-11-08 2019-12-27 北京航空航天大学 Dynamic background suppression method based on multi-scale space-time consistency
CN108241854B (en) * 2018-01-02 2021-11-09 天津大学 Depth video saliency detection method based on motion and memory information
CN108241854A (en) * 2018-01-02 2018-07-03 天津大学 A Deep Video Saliency Detection Method Based on Motion and Memory Information
CN108833920B (en) * 2018-06-04 2022-02-11 四川大学 DVC side information fusion method based on optical flow and block matching
CN108833920A (en) * 2018-06-04 2018-11-16 四川大学 A DVC Side Information Fusion Method Based on Optical Flow and Block Matching
CN109146925B (en) * 2018-08-23 2022-09-06 郑州航空工业管理学院 Method for detecting salient target in dynamic scene
CN109146925A (en) * 2018-08-23 2019-01-04 郑州航空工业管理学院 A salient object detection method for dynamic scenes
CN109191485A (en) * 2018-08-29 2019-01-11 西安交通大学 A multi-video-object collaborative segmentation method based on a multilayer hypergraph model
CN109255793B (en) * 2018-09-26 2019-07-05 国网安徽省电力有限公司铜陵市义安区供电公司 A monitoring and early warning system based on visual features
CN109255793A (en) * 2018-09-26 2019-01-22 国网安徽省电力有限公司铜陵市义安区供电公司 A monitoring and early warning system based on visual features
CN109446976A (en) * 2018-10-24 2019-03-08 闽江学院 A video big data information extraction method based on wavelet transform and feature contrast
CN109711417A (en) * 2018-12-06 2019-05-03 重庆邮电大学 A video saliency detection method based on low-level saliency fusion and geodesics
CN109711417B (en) * 2018-12-06 2022-12-27 重庆邮电大学 Video saliency detection method based on low-level saliency fusion and geodesic
CN110827193A (en) * 2019-10-21 2020-02-21 国家广播电视总局广播电视规划院 Panoramic video saliency detection method based on multi-channel features
CN110827193B (en) * 2019-10-21 2023-05-09 国家广播电视总局广播电视规划院 Panoramic video significance detection method based on multichannel characteristics
CN110866896A (en) * 2019-10-29 2020-03-06 中国地质大学(武汉) Image saliency object detection method based on k-means and level set superpixel segmentation
CN110866896B (en) * 2019-10-29 2022-06-24 中国地质大学(武汉) Image saliency target detection method based on k-means and level set super-pixel segmentation
CN110969605A (en) * 2019-11-28 2020-04-07 华中科技大学 Method and system for detecting moving small target based on space-time saliency map
CN111723715A (en) * 2020-06-10 2020-09-29 东北石油大学 A video saliency detection method and device, electronic device and storage medium
CN111723715B (en) * 2020-06-10 2022-03-15 东北石油大学 Video saliency detection method and device, electronic equipment and storage medium
CN115953419A (en) * 2023-03-09 2023-04-11 天津艾思科尔科技有限公司 Dynamic video detection preprocessing method based on superpixel analysis

Similar Documents

Publication Publication Date Title
CN105488812A (en) Motion-feature-fused space-time significance detection method
KR102138950B1 (en) Depth map generation from a monoscopic image based on combined depth cues
CN107680116B (en) A method for monitoring moving objects in video images
CN106529419B (en) An automatic object detection method for video saliency stack aggregation
CN102542571B (en) Moving target detecting method and device
CN107330390B (en) People counting method based on image analysis and deep learning
CN105528794A (en) Moving object detection method based on Gaussian mixture model and superpixel segmentation
Shen et al. Real-time and robust compressive background subtraction for embedded camera networks
Cai et al. Real-time video dehazing based on spatio-temporal mrf
CN109215053A (en) A method for detecting moving vehicles, including those in a halted state, in UAV video
US20130342559A1 (en) Temporally consistent superpixels
Hsiao et al. Background initialization and foreground segmentation for bootstrapping video sequences
US20120237126A1 (en) Apparatus and method for determining characteristic of motion picture
CN107358189B (en) An object detection method in indoor environment based on multi-view target extraction
CN107368784A (en) A novel background-subtraction moving target detection method based on wavelet blocks
Wang et al. COME for no-reference video quality assessment
Jiang et al. Surveillance video analysis using compressive sensing with low latency
Li et al. Gaussian-based codebook model for video background subtraction
Mohanty et al. A survey on moving object detection using background subtraction methods in video
Wang Image matting with transductive inference
CN110163149A (en) Method, device and storage medium for acquiring LBP features
CN113553957B (en) A multi-scale prediction behavior recognition system and method
US20140270567A1 (en) Selective perceptual masking via scale separation in the spatial and temporal domains using intrinsic images for use in data compression
CN105741315B (en) A statistical background subtraction method based on a down-sampling strategy
CN103824297B (en) A multithreading-based method for rapidly updating background and foreground in complex, highly dynamic environments

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20160413