
CN101510299B - Image self-adapting method based on vision significance - Google Patents


Info

Publication number
CN101510299B
CN101510299B CN2009100469761A CN200910046976A
Authority
CN
China
Prior art keywords
image
energy
original
new
piece
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN2009100469761A
Other languages
Chinese (zh)
Other versions
CN101510299A (en)
Inventor
刘志
颜红波
韩忠民
沈礼权
张兆杨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Anyan Information Technology Co Ltd
State Grid Shanghai Electric Power Co Ltd
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology
Priority to CN2009100469761A
Publication of CN101510299A
Application granted
Publication of CN101510299B
Active legal-status Current
Anticipated expiration legal-status

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses an image adaptive method based on visual saliency. The method first calculates the energy of the original image; it then extracts the salient objects in the original image and enhances their relative energy; finally, it uses dynamic programming to find the lowest-energy vertical and horizontal seams in the image and removes them, thereby adapting the image. Because the method removes the lowest-energy seams in the image, the overall energy loss of the image is minimal; because the relative energy of the salient objects is high, the removed seams do not pass through them, so the salient objects remain intact after adaptation. The method therefore minimizes the visual distortion of pictures on smart mobile devices with low resolution and small screens while preserving the integrity of the salient objects, giving viewers exactly the same visual effect as on a high-resolution display.

Description

Image adaptive method based on visual saliency

Technical Field

The present invention relates to an image adaptive method based on visual saliency. The method is considered mainly from the perspective of the visual quality of the displayed image: it adaptively processes images that suit the display environment of a high-definition television or wide-screen monitor so that, on the low-resolution, small-screen displays of smart mobile devices, the overall distortion of the image content is minimized while the integrity of the salient content of the original image is preserved.

Background Art

With the rapid development of multimedia technology, browsing pictures and playing TV programs in real time on mobile devices has become a reality, and as people's purchasing power grows, more and more consumers own smart mobile devices. Compared with ordinary mobile communication devices, the biggest advantage of smart mobile devices is that they integrate more media applications. Using such a device, for example Apple's iPhone shown in Figure 8, people can obtain the information they need almost anytime and anywhere. However, the limitations of these devices, such as a very limited screen size and a resolution much lower than that of high-definition televisions and wide-screen monitors, cause great inconvenience when users browse online pictures or edit photo collections. This shows up mainly as distortion of the picture content, reduced resolution, and loss of fine detail. To address these problems, an adaptive method that keeps the original picture content undistorted while improving the visual effect is urgently needed.

Generally speaking, the images people encounter in daily life can be divided into three categories: landscape images, geometric-structure images, and salient-object images, of which landscape images and salient-object images are the most common. When original high-resolution, large-size images, for example the landscape image in Figure 2a, the geometric-structure image in Figure 2b, and the salient-object image in Figure 2c, are delivered to a smart mobile device for display, the images shown on the device are the landscape, geometric-structure, and salient-object images of Figures 9a, 9b, and 9c. The landscape image in Figure 9a shows little distortion, while the geometric-structure image in Figure 9b and the salient-object image in Figure 9c are heavily distorted. The smaller the screen and the lower the resolution of the smart mobile device, the more noticeable the distortion.

Summary of the Invention

The purpose of the present invention is to address the defects of the prior art by proposing an image adaptive method based on visual saliency that minimizes the distortion of a picture's visual appearance in the low-resolution, small-screen environment of a smart mobile device while preserving the integrity of its salient objects, giving viewers the same visual effect as on a high-resolution display.

To achieve the above object, the present invention adopts the following scheme.

Dynamic programming is commonly used to solve optimization problems, and picking out the lowest-energy seam in an image is also an optimization problem (there are many ways to find a seam, but seams found by different methods have different energies; the energy of a seam is the sum of the energies of its pixels), so the present invention uses dynamic programming to find these seams.

The above image adaptive method based on visual saliency is characterized in that it first calculates the energy of the original image; next it extracts the salient objects in the original image and enhances their relative energy; it then uses dynamic programming to find the lowest-energy vertical and horizontal seams in the image and removes them, realizing the adaptation of the image.

The specific implementation steps are:

A. Calculate the energy of the original image: convert the original color image into a grayscale image, then compute the gradient of the grayscale image; the magnitude of each pixel's gradient is the energy value of the corresponding pixel of the original image;

B. Extract the salient objects and increase their relative energy: perform color decomposition on the original color image, then differential color recombination; partition the recombined images into blocks and compute the corresponding block means and block variances; finally compute the information entropy of the block means and of the block variances, and determine the salient objects from the consistency of the computed entropies.

C. Use dynamic programming to find the lowest-energy vertical and horizontal seams in the image and remove them: define the seams in the vertical and horizontal directions of the image, use dynamic programming to repeatedly find the locally optimal seam, and remove it.

The energy of the original image in step A of the above method is calculated according to formula (1):

A gradient operator is used as the energy function. Assume the input original image is I(m, n), where m and n are respectively the height and width of the original image.

E(I) = |∂I/∂x| + |∂I/∂y|    (1)

where E(I) denotes the energy of the original image (hereinafter the energy map), |·| denotes the absolute value, and ∂I/∂x and ∂I/∂y are the partial derivatives of the image in the x and y directions, respectively.
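
As an illustration only (not part of the patent text), formula (1) can be realized in a few lines of Python with NumPy; the function name and the use of np.gradient for the partial derivatives are choices made for this sketch, and the grayscale conversion is assumed to have been done beforehand:

```python
import numpy as np

def energy_map(gray):
    """Minimal sketch of formula (1): E(I) = |dI/dx| + |dI/dy|.

    `gray` is assumed to be a 2-D float array holding the grayscale image."""
    gray = gray.astype(np.float64)
    dy, dx = np.gradient(gray)        # partial derivatives along rows (y) and columns (x)
    return np.abs(dx) + np.abs(dy)    # per-pixel energy as the sum of absolute derivatives
```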

Extracting the salient objects in the original image and increasing their relative energy in step B of the above method proceeds as follows:

Rapid changes in the original image are detected by the gradient operator, and energy from low to high is represented by values from 0 to 255, with 0 the lowest energy and 255 the highest. However, the gradient operator can only detect rapid changes; the interior of a salient object varies slowly, so its relative energy is comparatively low, which would cause the salient object to be destroyed during adaptation. This is not the result that saliency-based image adaptation expects: the salient regions must be detected and their relative energy raised so that the image content keeps its integrity during adaptation. The method detects and enhances salient objects in the following steps.

B1. Perform color decomposition on the original color image.

If the original image is not an RGB (R: red, G: green, B: blue) image, convert it to an RGB image, then perform color decomposition according to formula (2).

R_new = r − (g + b)/2
G_new = g − (r + b)/2
B_new = b − (r + g)/2
Y_new = (r + g)/2 − |r − g|/2 − b    (2)

where r, g, b denote the three color channel values of the original RGB image (red, green, and blue), and R_new, G_new, B_new, Y_new denote the separated single-color images: red, green, blue, and yellow.
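
For illustration, formula (2) maps directly onto array arithmetic; the sketch below assumes an H×W×3 input with channels in R, G, B order (the function name and ordering are assumptions, not stated in the text):

```python
import numpy as np

def color_decompose(rgb):
    """Sketch of formula (2); `rgb` is an HxWx3 array with channels in R, G, B order."""
    r, g, b = [rgb[..., k].astype(np.float64) for k in range(3)]
    R_new = r - (g + b) / 2
    G_new = g - (r + b) / 2
    B_new = b - (r + g) / 2
    Y_new = (r + g) / 2 - np.abs(r - g) / 2 - b
    return R_new, G_new, B_new, Y_new
```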

B2. Perform pairwise difference operations on the separated single-color images, obtaining six difference images in total.

The separated single-color images R_new, G_new, B_new, Y_new are differenced against one another; Θ denotes the difference operation between images, and the specific operations are shown in formula (3).

RG_diff = R_new Θ G_new
RB_diff = R_new Θ B_new
RY_diff = R_new Θ Y_new
GB_diff = G_new Θ B_new
GY_diff = G_new Θ Y_new
BY_diff = B_new Θ Y_new    (3)

RG_diff, RB_diff, RY_diff, GB_diff, GY_diff, BY_diff correspond to the red-green, red-blue, red-yellow, green-blue, green-yellow, and blue-yellow difference images, respectively.
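
The text does not spell out the operator Θ; one plausible reading is an element-wise absolute difference, and under that assumption the six difference images of formula (3) could be formed as follows (the dictionary keys are illustrative):

```python
import numpy as np

def difference_images(R_new, G_new, B_new, Y_new):
    """Sketch of formula (3), assuming Θ is an element-wise absolute difference."""
    diff = lambda a, b: np.abs(a - b)
    return {
        "RG": diff(R_new, G_new),
        "RB": diff(R_new, B_new),
        "RY": diff(R_new, Y_new),
        "GB": diff(G_new, B_new),
        "GY": diff(G_new, Y_new),
        "BY": diff(B_new, Y_new),
    }
```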

B3. Compute the block mean and block variance of each difference image and binarize them.

Each difference image is partitioned into blocks, denoted Block(i, j), each of size N×M. The block mean and block variance of the difference image are then calculated according to formula (4).

σ_{i,j} = Σ_{x=0}^{N−1} Σ_{y=0}^{M−1} (I_{i,j}(x, y) − μ_{i,j})² / (N×M)
μ_{i,j} = [Σ_{x=0}^{N−1} Σ_{y=0}^{M−1} I_{i,j}(x, y)] / (N×M)    (4)

σ_{i,j} and μ_{i,j} denote the variance and the mean of block Block(i, j), respectively, and I_{i,j}(x, y) denotes a pixel inside Block(i, j). The method uses a block-based local quantization: if σ_{i,j} and μ_{i,j} are respectively greater than T_σ^up and T_μ^up, set σ_{i,j} = 255 and μ_{i,j} = 255; if σ_{i,j} and μ_{i,j} are respectively less than T_σ^low and T_μ^low, set σ_{i,j} = 0 and μ_{i,j} = 0. The information content of the quantized mean images and variance images is then computed, and the best difference image is selected by comparing the information content, yielding the saliency map.
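
A possible implementation of the block statistics of formula (4) and the block-based local quantization is sketched below; the block size N×M and the four thresholds T_σ^up, T_μ^up, T_σ^low, T_μ^low are free parameters the text does not fix, so the defaults here are placeholders:

```python
import numpy as np

def block_stats_and_quantize(diff_img, N=8, M=8,
                             t_sigma_up=40.0, t_mu_up=40.0,
                             t_sigma_low=10.0, t_mu_low=10.0):
    """Sketch of formula (4) plus the block-based local quantization.

    Returns the quantized block-variance and block-mean maps (one value per block).
    Block size and threshold values are illustrative placeholders."""
    h, w = diff_img.shape
    rows, cols = h // N, w // M
    sigma = np.empty((rows, cols))
    mu = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            block = diff_img[i * N:(i + 1) * N, j * M:(j + 1) * M].astype(np.float64)
            mu[i, j] = block.mean()                         # block mean of formula (4)
            sigma[i, j] = ((block - mu[i, j]) ** 2).mean()  # block variance of formula (4)
    # local quantization: push confidently high blocks to 255 and low blocks to 0
    sigma_q, mu_q = sigma.copy(), mu.copy()
    up = (sigma > t_sigma_up) & (mu > t_mu_up)
    low = (sigma < t_sigma_low) & (mu < t_mu_low)
    sigma_q[up] = 255
    mu_q[up] = 255
    sigma_q[low] = 0
    mu_q[low] = 0
    return sigma_q, mu_q
```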

Following Shannon's information theory, the information entropy of each mean image and each variance image is computed; the consistency of the entropies gives the likelihood that each difference image is the saliency image. The image containing the salient object is selected and the relative energy of the salient object is increased. The information entropy is computed as in formula (5).

Entropy = −log(P(x))    (5)

where Entropy denotes the information entropy and P(x) denotes the proportion of all pixels that take the maximum value in the mean image or variance image corresponding to a given difference image. The consistency criterion is: the closer the entropies of the quantized mean image and variance image, the greater the probability that the corresponding difference image is the saliency image; the saliency image is found and its energy enhanced, giving the final saliency image with raised relative energy.
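
The entropy of formula (5) and the consistency-based selection can be sketched as follows; interpreting P(x) as the fraction of pixels at the image maximum follows the sentence above, and taking the smallest entropy gap as the winner is an assumed reading of the stated criterion:

```python
import numpy as np

def entropy_of_max(img):
    """Formula (5): Entropy = -log(P(x)), with P(x) the fraction of pixels at the image maximum."""
    p = np.mean(img == img.max())
    return -np.log(p)

def pick_saliency_map(quantized_pairs):
    """Consistency criterion sketch: the difference image whose quantized mean image and
    variance image have the closest entropies is taken as the saliency map.

    `quantized_pairs` maps a name (e.g. "RG") to a (mu_q, sigma_q) pair."""
    best_name, best_gap = None, np.inf
    for name, (mu_q, sigma_q) in quantized_pairs.items():
        gap = abs(entropy_of_max(mu_q) - entropy_of_max(sigma_q))
        if gap < best_gap:
            best_name, best_gap = name, gap
    return best_name
```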

The steps in step C of the above method for determining the lowest-energy vertical and horizontal seams in the image with dynamic programming and removing them are as follows:

C1. Definition of a seam.

The method requires the vertical and horizontal seams in the image to strictly satisfy the following two conditions:

a. Every seam, whether vertical or horizontal, occupies exactly one pixel in each row or column; that is, the seam is strictly monotonic;

b. The seam must be 8-connected. These two constraints are imposed to avoid local distortion of the image during the adaptation process.

A seam in the vertical direction is defined as:

Seam^y = {seam_j^y}_{j=1}^{n} = {(j, y(j))}_{j=1}^{n},  |y(j) − y(j−1)| ≤ 1    (6)

A seam in the horizontal direction is defined as:

Seam^x = {seam_i^x}_{i=1}^{m} = {(i, x(i))}_{i=1}^{m},  |x(i) − x(i−1)| ≤ 1    (7)

where Seam denotes a seam, x(i) and y(j) denote the mappings in the vertical and horizontal directions respectively, and m, n, and |·| correspond to the height and width of the original image and the absolute value; seams in the vertical and horizontal directions are shown in Figures 6a and 6b, respectively.

C2. Use dynamic programming to find the lowest-energy seam.

The purpose of finding a seam is to remove it and thereby change the image size adaptively. To minimize the distortion of the adapted image and preserve the integrity of the salient objects, an optimal method must be used to determine the seams in the image; this method uses dynamic programming to determine the optimal seam.

Dynamic programming in the vertical direction:

Task(i, j) = E(i, j) + S(i, j) + min(Task(i−1, j−1), Task(i−1, j), Task(i−1, j+1))    (8)

Dynamic programming in the horizontal direction:

Task(i, j) = E(i, j) + S(i, j) + min(Task(i−1, j−1), Task(i, j−1), Task(i+1, j−1))    (9)

Task(i, j), E(i, j), S(i, j), and min(x, y, z) denote the image computed by dynamic programming, the energy image of the original image, the saliency image, and the minimum of three numbers, with i ∈ [0, height), j ∈ [0, width), where height and width correspond to the height and width of the original image. Take the determination of a vertical seam as an example: first compute Task from row 0 to row M−1 according to the vertical dynamic-programming recursion; then find the minimum of row M−1; finally, backtrack from row M−1 to row 0 following the dynamic-programming idea. As shown in Figures 10 and 11, suppose the matrix of Figure 10 corresponds to an image; the three red arrows indicate the order of computation (top to bottom in this scheme), and the colored squares indicate values that have already been computed. Suppose the value at the second row, second column of Figure 10 is to be computed. By the connectivity and monotonicity in the definition of a seam above, the value at the current position depends only on the three nearest positions in the previous row; those three already-computed values determine the value at this position, and the computed value is guaranteed to be the smallest among all possible ways of computing it. Referring to Figure 10, dynamic programming gives the value at this position as 7 = 2 + min(5, 8, 12), where min(a, b, c) is the minimum of three numbers, "2" is the preliminary energy at the second row, second column of the matrix, and "5", "8", and "12" are the values already computed at the corresponding positions; a value computed in any other way must be greater than or equal to this one. Once the value at the second row, second column is computed, the value at the next position (the second row, third column) is computed, and so on until the last row and last column. In Figure 11, the red line marks the trajectory of the dynamic-programming operation in the image, that is, the seam repeatedly referred to in this method. By repeatedly finding and removing these seams, image adaptation is achieved.
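
A sketch of the vertical-direction recursion of formula (8) plus the backtracking described above is given below; the combined cost E(i, j) + S(i, j) is assumed to be passed in as a single array, and the helper name is illustrative:

```python
import numpy as np

def find_vertical_seam(cost):
    """Sketch of formula (8) and the backtracking step.

    `cost` is the per-pixel cost E(i, j) + S(i, j); the return value gives, for every row i,
    the column index of the pixel belonging to the lowest-energy 8-connected vertical seam."""
    h, w = cost.shape
    task = cost.astype(np.float64).copy()
    # forward pass: row 0 keeps its costs, every later row adds the cheapest of its three neighbours above
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 1, w - 1)
            task[i, j] += task[i - 1, lo:hi + 1].min()
    # backtracking: start at the minimum of the last row and walk back up to row 0
    seam = np.empty(h, dtype=np.int64)
    seam[h - 1] = int(task[h - 1].argmin())
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 1, w - 1)
        seam[i] = lo + int(task[i, lo:hi + 1].argmin())
    return seam
```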

C3. Carry out the adaptation of the image.

In this method, image adaptation is carried out by the following steps. Since width adaptation and height adaptation are exactly the same, the steps below are described for width adaptation (the steps for height adaptation are identical). Let the input original image be I(m, n), where m and n correspond to the height and width of the original image, and let the size of the target image be m′, n′. ΔM = m′ − m denotes the width or height that needs to change;

C31. Find one vertical seam by the method described above, denoted Seam;

C32. Remove this seam: starting from the first row, find the position of the pixel of Seam belonging to row i, where i ∈ [0, m); shift the pixels to the right of this position one unit to the left, one by one, and delete all pixels in the last column. After this step the width of the image is reduced by one, and ΔM = ΔM − 1;

C33. If ΔM = 0, stop; otherwise repeat the operations of C31 and C32 until ΔM = 0.

Through the operations of C31, C32, and C33 above, the width adaptation of the image is completed. The same idea applies to height adaptation.
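
Putting C31 to C33 together, the width-adaptation loop could look like the following sketch; the seam-finding routine is the one sketched above, the cost function is assumed to return E + S for an image, and recomputing the cost map after every removal is an assumption, since the text does not state how often it is refreshed:

```python
import numpy as np

def adapt_width(image, target_width, cost_fn, find_vertical_seam):
    """Sketch of steps C31-C33 for width adaptation (height adaptation is symmetric).

    `image` is an HxWx3 array, `cost_fn` returns the per-pixel cost E + S for an image,
    and `find_vertical_seam` is the dynamic-programming routine sketched above."""
    out = image.copy()
    while out.shape[1] > target_width:           # ΔM not yet zero
        seam = find_vertical_seam(cost_fn(out))  # C31: one vertical seam
        h, w = out.shape[:2]
        keep = np.ones((h, w), dtype=bool)
        keep[np.arange(h), seam] = False         # C32: drop the seam pixel in every row
        out = out[keep].reshape(h, w - 1, -1)    # shift the remaining pixels left
    return out
```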

Compared with the prior art, the visual-saliency-based image adaptive method of the present invention has the following advantages: because the method removes the lowest-energy seams in the image, the overall energy loss of the image is minimal; because the relative energy of the salient objects is high, the removed seams do not pass through them, so the structure and shape of the salient objects remain intact after adaptation. The method therefore minimizes the visual distortion of pictures on smart mobile devices with low resolution and small screens while preserving the integrity of the salient objects, giving viewers exactly the same visual effect as on a high-resolution display.

Brief Description of the Drawings

Figure 1 is a flowchart of the image adaptive method based on visual saliency of the present invention;

Figures 2a, 2b, and 2c are a typical landscape image, geometric-structure image, and salient-object image;

Figure 3 is the gradient image (preliminary energy map) obtained in step A of the above method of the present invention;

Figures 4a, 4b, 4c, 4d, 4e, and 4f are the difference images of step B2 within step B of the above method;

Figures 5a, 5b, and 5c are the mean image, the variance image, and the final energy map of step B2 within step B of the above method;

Figures 6a and 6b show, respectively, a horizontal seam and a vertical seam found in the image;

Figures 7a, 7b, and 7c show the adaptation results of step C of the above method;

Figure 8 shows the appearance of Apple's iPhone;

Figures 9a, 9b, and 9c are the distorted landscape image, geometric-structure image, and salient-object image displayed on a smart mobile device without applying the method of the present invention;

Figure 10 illustrates, from a microscopic point of view, the effect of dynamic programming on each pixel, giving the basic operation and a demonstration of dynamic programming;

Figure 11 illustrates, from a macroscopic point of view, the effect of dynamic programming on the image, giving the basic operation and a demonstration of dynamic programming.

Detailed Description

An embodiment of the image adaptive method based on visual saliency of the present invention is described in detail below with reference to the accompanying drawings:

Referring to Figure 1, which shows a flowchart of the image adaptive method based on visual saliency of the present invention, the method adapts image content to a low-resolution, small-screen environment. It was programmed and run on a PC test platform with a 1.66 GHz CPU and 1024 MB of memory, and some intermediate results of the processing are given. The specific implementation steps are:

A. Calculate the energy of the original image: for the original images shown in Figures 2a, 2b, and 2c, convert the original color image into a grayscale image, then compute the gradient of the grayscale image; the magnitude of each pixel's gradient is the energy value of the corresponding pixel of the original image;

B. Extract the salient objects and increase their relative energy: perform color decomposition on the original color image, then differential color recombination; partition the recombined images into blocks and compute the corresponding block means and block variances; finally compute the information entropy of the block means and of the block variances, and determine the salient objects from the consistency of the computed entropies;

C. Use dynamic programming to find the lowest-energy vertical and horizontal seams in the image and remove them: define the seams in the vertical and horizontal directions of the image, use dynamic programming to repeatedly find the locally optimal seam, and remove these seams to change the image size.

The energy of the original image in step A of the above method is calculated according to formula (1):

The core idea of this saliency-based adaptive method is to remove certain low-energy pixels from the original image (a seam is made up of low-energy pixels satisfying certain conditions). The first question is therefore: how do we know which pixels have low energy and which have high energy? Intuitively, the best approach is to remove the "insignificant" pixels. Moreover, when people browse a picture, the boundaries and edges in the image attract their attention more easily. The gradient operator is based on local derivatives of the image function, which are large where the image function changes rapidly (at edges), and its role is to reveal those locations in the image. In this method, the gradient operator is used as the energy function. For the original image shown in Figure 2c, the input original image is I(m, n), where m and n correspond to the height and width of the original image,

E(I) = |∂I/∂x| + |∂I/∂y|    (1)

where E(I) denotes the energy of the original image (hereinafter the energy map), |·| denotes the absolute value, and ∂I/∂x and ∂I/∂y are the partial derivatives of the image in the x and y directions, respectively.

First the original color image is converted into a grayscale image; then the gradient operation of formula (1) is applied to the resulting grayscale image to obtain the energy of the image. The result is shown in Figure 3.

Extracting the salient objects in the original image and increasing their relative energy in step B of the above method proceeds as follows:

Rapid changes in the original image are detected by the gradient operator, and energy from low to high is represented by values from 0 to 255, with 0 the lowest energy and 255 the highest. However, the gradient operator can only detect rapid changes; the interior of a salient object varies slowly, so its relative energy is comparatively low, which would cause the salient object to be destroyed during adaptation. This is not the result that saliency-based image adaptation expects: the salient regions must be detected and their relative energy raised so that the image content keeps its integrity during adaptation. The method detects and enhances salient objects in the following steps.

The extraction of the salient objects in the image and the enhancement of their energy in step B of the above method are implemented as follows:

B1. Perform color decomposition on the original color image;

B2. Apply the difference operation to the decomposed images to obtain the difference images; the results are shown in Figures 4a, 4b, 4c, 4d, 4e, and 4f.

B3. Compute the mean image and variance image of each difference image, as shown in Figures 5a and 5b.

B4. Compute the information entropy of the mean image and of the variance image of each difference image.

B5. Compare the mean-image entropy and the variance-image entropy of each difference image, obtain the saliency image, and enhance it; the result is shown in Figure 5c.

The implementation of step C of the above method, which uses dynamic programming to find the lowest-energy vertical and horizontal seams in the image and removes them to complete the image adaptation, is as follows:

C1. On the basis of the energy and saliency images computed in the first two steps, define the vertical and horizontal seams according to formulas (6) and (7);

C2. Use dynamic programming to find the seams in the vertical and horizontal directions, remove a seam, and shift the pixels to the right of or below the seam as a whole, obtaining the images of Figures 6a and 6b from which the seam has been removed;

C3. Obtain the target image size required by the application and repeat C1 and C2 until the target is met; the final adaptation result is shown in Figure 7b.

As described above, the method can minimize the visual distortion of pictures on smart mobile devices with low resolution and small screens while keeping the salient objects intact, effectively solving the problems of lost detail and reduced viewing quality caused by lowering the resolution. Following the program flowchart of Figure 1, an implementation example is given below; the type of picture is not restricted in any way and can be a landscape image, a geometric-structure image, or a salient-object image. Figures 4 to 7 show the corresponding results of the processing. The individual parts of the test are described below in connection with the program flowchart.

Test: the present invention mainly addresses the degradation of image quality caused by traditional adaptation methods. Compared with traditional methods, this method can change the image size while keeping the salient objects of the image complete, and it is divided into three main parts. Following the flowchart of Figure 1: first the preliminary energy of the image is computed, then the salient objects are extracted and their energy is enhanced, and finally dynamic programming is used to find and remove the lowest-energy seams in the image. Figure 3 corresponds to the preliminary energy map of the original image mentioned in step A of the above method, and Figure 5c corresponds to the map obtained after extracting the salient objects from the original image and enhancing their energy. Figure 7a is the original image, Figure 7b is the adapted image obtained with the method of the present invention, and Figure 7c corresponds to the adapted image obtained with a traditional method. It can clearly be seen that, when the width is changed, the method of the present invention preserves the integrity of the salient object (the child in the figure), whereas the traditional method distorts it; the method of the present invention brings a large improvement in visual quality. Figure 9 shows that, in a low-resolution, small-screen environment, the geometric-structure image and the salient-object image suffer large content distortion, while the distortion of the landscape image is relatively small.

Claims (3)

1. An image adaptive method based on visual saliency, characterized in that it first calculates the energy of the original image; next extracts the salient objects in the original image and enhances their relative energy; and then uses dynamic programming to find the lowest-energy vertical and horizontal seams in the image and removes these seams, realizing the adaptation of the image, the specific implementation steps being:
A. Calculate the energy of the original image: convert the original color image into a grayscale image, then compute the gradient of the grayscale image; the magnitude of each pixel's gradient is the energy value of the corresponding pixel of the original image;
B. Extract the salient objects and increase their relative energy: perform color decomposition on the original color image, then differential color recombination; partition the recombined images into blocks and compute the corresponding block means and block variances; finally compute the information entropy of the block means and of the block variances, determine the salient objects from the consistency of the computed entropies, and increase the relative energy of the salient objects;
C. Use dynamic programming to find the lowest-energy vertical and horizontal seams in the image and remove them: define the seams in the vertical and horizontal directions of the image, use dynamic programming to repeatedly find the locally optimal seam, and remove these seams.
2. The image adaptive method based on visual saliency according to claim 1, characterized in that the energy of the original image in step A of said method is calculated according to formula (1): a gradient operator is used as the energy function; assuming the input original image is I(m, n), where m and n are respectively the height and width of the original image,
E(I) = |∂I/∂x| + |∂I/∂y|    (1)
where E(I) denotes the energy of the original image, |·| denotes the absolute value, and ∂I/∂x and ∂I/∂y are the partial derivatives of the image in the x and y directions, respectively.
3. The image adaptive method based on visual saliency according to claim 1, characterized in that extracting the salient objects in the original image and increasing their relative energy in step B of said method detects and enhances the salient objects according to the following steps:
B1. Perform color decomposition on the original color image: if the original image is not an RGB image, convert it into an RGB image, then perform color decomposition according to formula (2),
R_new = r − (g + b)/2
G_new = g − (r + b)/2
B_new = b − (r + g)/2
Y_new = (r + g)/2 − |r − g|/2 − b    (2)
where r, g, b denote the three color channel values of the original RGB image (red, green, and blue), and R_new, G_new, B_new, Y_new denote the separated single-color images: red, green, blue, and yellow;
B2. Perform pairwise difference operations on the separated single-color images, obtaining six difference images in total: the separated single-color images R_new, G_new, B_new, Y_new are differenced against one another, with Θ denoting the difference operation between images, as shown in formula (3),
RG_diff = R_new Θ G_new
RB_diff = R_new Θ B_new
RY_diff = R_new Θ Y_new
GB_diff = G_new Θ B_new
GY_diff = G_new Θ Y_new
BY_diff = B_new Θ Y_new    (3)
where RG_diff, RB_diff, RY_diff, GB_diff, GY_diff, BY_diff correspond to the red-green, red-blue, red-yellow, green-blue, green-yellow, and blue-yellow difference images, respectively;
B3. Compute the block mean and block variance of each difference image and binarize them: partition each difference image into blocks, denoted Block(i, j), each of size N×M, then calculate the block mean and block variance of the difference image according to formula (4),
σ_{i,j} = Σ_{x=0}^{N−1} Σ_{y=0}^{M−1} (I_{i,j}(x, y) − μ_{i,j})² / (N×M)
μ_{i,j} = [Σ_{x=0}^{N−1} Σ_{y=0}^{M−1} I_{i,j}(x, y)] / (N×M)    (4)
where σ_{i,j} and μ_{i,j} denote the variance and the mean of block Block(i, j), respectively, and I_{i,j}(x, y) denotes a pixel inside Block(i, j); the method uses a block-based local quantization: if σ_{i,j} and μ_{i,j} are respectively greater than T_σ^up and T_μ^up, set σ_{i,j} = 255 and μ_{i,j} = 255; if σ_{i,j} and μ_{i,j} are respectively less than T_σ^low and T_μ^low, set σ_{i,j} = 0 and μ_{i,j} = 0; compute the information content of the quantized mean images and variance images, select the best difference image by comparing the information content, and obtain the saliency map;
following Shannon's information theory, compute the information entropy of each mean image and each variance image, obtain from the consistency of the entropies the likelihood that each difference image is the saliency image, pick out the image containing the salient object, and increase the relative energy of the salient object, the information entropy being computed as in formula (5),
Entropy = −log(P(x))    (5)
where Entropy denotes the information entropy and P(x) denotes the proportion of all pixels that take the maximum value in the mean image or variance image corresponding to a given difference image; the consistency criterion is: the closer the entropies of the quantized mean image and variance image, the greater the probability that the corresponding difference image is the saliency image; find the saliency image and then enhance the energy of the salient object.
CN2009100469761A 2009-03-04 2009-03-04 Image self-adapting method based on vision significance Active CN101510299B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009100469761A CN101510299B (en) 2009-03-04 2009-03-04 Image self-adapting method based on vision significance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2009100469761A CN101510299B (en) 2009-03-04 2009-03-04 Image self-adapting method based on vision significance

Publications (2)

Publication Number Publication Date
CN101510299A CN101510299A (en) 2009-08-19
CN101510299B true CN101510299B (en) 2011-07-20

Family

ID=41002691

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009100469761A Active CN101510299B (en) 2009-03-04 2009-03-04 Image self-adapting method based on vision significance

Country Status (1)

Country Link
CN (1) CN101510299B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101917557B (en) * 2010-08-10 2012-06-27 浙江大学 Method for dynamically adding subtitles based on video content
CN102779338B (en) * 2011-05-13 2017-05-17 欧姆龙株式会社 Image processing method and image processing device
CN102509072B (en) * 2011-10-17 2013-08-28 上海大学 Method for detecting salient object in image based on inter-area difference
CN103226824B (en) * 2013-03-18 2016-07-06 上海交通大学 Maintain the video Redirectional system of vision significance
CN103218606A (en) * 2013-04-10 2013-07-24 哈尔滨工程大学 Multi-pose face recognition method based on face mean and variance energy images
CN103247038B (en) * 2013-04-12 2016-01-20 北京科技大学 A kind of global image information synthesis method of visual cognition model-driven
US20160253574A1 (en) 2013-11-28 2016-09-01 Pavel S. Smirnov Technologies for determining local differentiating color for image feature detectors
EP3074925A1 (en) * 2013-11-28 2016-10-05 Intel Corporation Method for determining local differentiating color for image feature detectors
CN104123720B (en) * 2014-06-24 2017-07-04 小米科技有限责任公司 Image method for relocating, device and terminal
US9665925B2 (en) 2014-06-24 2017-05-30 Xiaomi Inc. Method and terminal device for retargeting images
CN104318514B (en) * 2014-10-17 2017-05-17 合肥工业大学 Three-dimensional significance based image warping method
CN104680143B (en) * 2015-02-28 2018-02-27 武汉烽火众智数字技术有限责任公司 A kind of fast image retrieval method for video investigation
CN104967922A (en) * 2015-06-30 2015-10-07 北京奇艺世纪科技有限公司 Subtitle adding position determining method and device
CN107038699B (en) * 2016-11-09 2019-07-23 重庆医科大学 Enhance image fault rate detection method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1439704A2 (en) * 1997-03-17 2004-07-21 Matsushita Electric Industrial Co., Ltd. Method and apparatus for processing, transmitting and receiving dynamic image data
CN1615020A (en) * 2004-11-10 2005-05-11 华中科技大学 Method for pridicting sortable complex in frame
CN1922890A (en) * 2004-09-30 2007-02-28 日本电信电话株式会社 Stepwise reversible video encoding method, stepwise reversible video decoding method, stepwise reversible video encoding device, stepwise reversible video decoding device, program therefore, and recor

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1439704A2 (en) * 1997-03-17 2004-07-21 Matsushita Electric Industrial Co., Ltd. Method and apparatus for processing, transmitting and receiving dynamic image data
CN1922890A (en) * 2004-09-30 2007-02-28 日本电信电话株式会社 Stepwise reversible video encoding method, stepwise reversible video decoding method, stepwise reversible video encoding device, stepwise reversible video decoding device, program therefore, and recor
CN1615020A (en) * 2004-11-10 2005-05-11 华中科技大学 Method for pridicting sortable complex in frame

Also Published As

Publication number Publication date
CN101510299A (en) 2009-08-19


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: STATE GRID SHANGHAI ELECTRIC POWER COMPANY

Free format text: FORMER OWNER: SHANGHAI UNIVERSITY

Effective date: 20141211

Owner name: SHANGHAI ANYAN INFORMATION TECHNOLOGY CO., LTD.

Effective date: 20141211

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 200444 BAOSHAN, SHANGHAI TO: 200122 PUDONG NEW AREA, SHANGHAI

TR01 Transfer of patent right

Effective date of registration: 20141211

Address after: 200122 No. 1671 South Pudong Road, Shanghai, Pudong New Area

Patentee after: State Grid Shanghai Municipal Electric Power Company

Patentee after: Shanghai Anyan Information Technology Co., Ltd.

Address before: 200444 Baoshan District Road, Shanghai, No. 99

Patentee before: Shanghai University