
CN108460751A - 4K ultra-high-definition image quality evaluation method based on visual nerve - Google Patents

4K ultra-high-definition image quality evaluation method based on visual nerve

Info

Publication number
CN108460751A
Authority
CN
China
Prior art keywords
video
frame
visual nerve
ultra high-definition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710098240.3A
Other languages
Chinese (zh)
Inventor
袁政
许颖浩
褚灵伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WENGUANG INTERDYANMIC TV CO Ltd SHANGHAI
Original Assignee
WENGUANG INTERDYANMIC TV CO Ltd SHANGHAI
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WENGUANG INTERDYANMIC TV CO Ltd SHANGHAI filed Critical WENGUANG INTERDYANMIC TV CO Ltd SHANGHAI
Priority to CN201710098240.3A priority Critical patent/CN108460751A/en
Publication of CN108460751A publication Critical patent/CN108460751A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a 4K ultra-high-definition image quality evaluation method based on visual nerve, comprising the following steps: Step 1, divide the original video and the video under test into frame images; Step 2, compute the saliency map of each frame image of the video under test; Step 3, compute the structural similarity map of each frame of the original video; Step 4, combine the saliency map and the structural similarity map of each corresponding frame by weighted pooling to compute the image quality of the video under test. The method of the invention is fast, low in overhead, and can be embedded into the video system itself, and it applies a visual saliency algorithm to video quality evaluation.

Description

4K ultra-high-definition image quality evaluation method based on visual nerve
Technical field
The present invention relates to video quality evaluation methods, and more specifically to a 4K ultra-high-definition image quality evaluation method based on visual nerve.
Background art
Visual attention is an important psychological regulation mechanism in the information processing of the human visual system: it allows people to selectively acquire the salient features of the objects they observe, thereby greatly reducing the amount of information to be processed. Visual attention plays an important role in computer vision. Many computer vision tasks, such as scene analysis, object detection, video tracking, object recognition, retrieval, motion estimation and image restoration, have exploited visual attention to improve performance.
"What you see is what you want to see" is a basic premise of visual saliency research. At any moment, the visual information presented by the environment far exceeds what the human eye can process, and visual saliency allows people to select the information relevant to their current behavior. To cope with this potential burden, the human brain has a corresponding set of saliency mechanisms, mainly in two respects: saliency is first used to select relevant information and ignore irrelevant information, and it can also be used to infer information; in addition, saliency can adjust and enhance the selected information according to behavioral goals. Saliency research can be divided into many types according to different settings, and spatio-temporal saliency is one of the more important among them.
When we observe the surrounding environment, driven by behavioral goals or local scene cues, attention often concentrates selectively on one or several objects, so that a certain point or region is selected as representative of the scene. The essence of visual attention is a psychological regulation mechanism of human vision, by which humans select regions of interest from the massive information input from the outside world and, to a certain extent, selectively acquire the salient features of the targets of interest, thereby reducing the amount of information to be processed.
Introducing saliency into image quality evaluation has already been studied, and thanks to visual saliency, objective quality results that better match human perception have been obtained. However, current visual saliency algorithms are mainly designed for still images, while video, especially ultra-high-definition video, is far more complex than an image; saliency algorithms for images cannot be applied directly, because the third dimension introduces temporal features and motion information that must be taken into account.
Summary of the invention
The object of the present invention is to provide a 4K ultra-high-definition image quality evaluation method based on visual nerve, solving the problem that in the prior art visual saliency algorithms are only used for image evaluation and cannot be used for video evaluation.
To achieve the above object, the present invention adopts the following technical solution:
A 4K ultra-high-definition image quality evaluation method based on visual nerve, comprising the following steps: Step 1, divide the original video and the video under test into frame images; Step 2, compute the saliency map of each frame image of the video under test; Step 3, compute the structural similarity map of each frame of the original video; Step 4, combine the saliency map and the structural similarity map of each corresponding frame by weighted pooling to compute the image quality of the video under test.
Preferably, step 2 further comprises: Step 2.1, decompose the video, regarded as a volume formed by consecutive frames, into 3 saliency volumes Ci (i = 1, 2, 3), corresponding to 3 different features: intensity, color and motion; Step 2.2, decompose each saliency volume into different scales j, building a Gaussian pyramid C = {Cij}, where i = 1, 2, 3 and j = 1, …, L; Step 2.3, minimize an energy function E comprising a data term Ed and a smoothness term Es, where E(C) = λd·Ed(C) + λs·Es(C), and λd and λs are weighting parameters; Step 2.4, the final saliency is the average over all scales and features: S = (1/(3L))·Σi Σj Cij, where i = 1, 2, 3 and j = 1, …, L.
Preferably, step 3 further comprises: Let Vo and Vd denote the original and distorted videos respectively, both of dimension M*N*F, and let g and f denote corresponding frame images of Vo and Vd. To compute the structural similarity map SSf of the distorted video, g and f are divided into many small blocks x and y, and the SSIM value of x and y is computed as: SSIM(x, y) = [(2μxμy + C1)·(2σxy + C2)] / [(μx² + μy² + C1)·(σx² + σy² + C2)], where μx, μy and σx², σy² are the means and variances of x and y, σxy is the covariance of x and y, and C1, C2 are constants. The SSIM values of all small blocks in one frame image constitute the structural similarity map SSf (f = 1, …, F).
Preferably, step 4 further comprises: the quality of each frame is Qf = [Σ Sf·SSf] / [Σ Sf], where the sums run over all positions of the maps.
Preferably, the final video quality value VQ is: VQ = (1/F)·Σf Qf, with f = 1, …, F.
With the above technical solution, the 4K ultra-high-definition image quality evaluation method based on visual nerve of the present invention is fast, low in overhead, and can be embedded into the video system itself, and it applies a visual saliency algorithm to video evaluation.
Description of the drawings
Fig. 1 is the flow chart of the method for the present invention.
Detailed description of the embodiments
The technical solution of the present invention is further described below with reference to the accompanying drawings and embodiments.
The method of the present invention is based on two important visual saliency principles, namely spatial saliency and temporal saliency.
Spatial saliency: in any visual search task, human gaze can be driven in two ways, intrinsic and extrinsic. The intrinsic-attention hypothesis holds that the target object controls visual attention; this is the so-called top-down, goal-driven visual attention, a voluntary process that requires effort. Visual attention can also be drawn automatically to a location by an external stimulus; this is called bottom-up, stimulus-driven visual attention. A flashing light on a highway captures attention through such external driving: extrinsic visual attention is automatic and its response is rapid. Bottom-up, extrinsic visual attention covers many kinds of driving attributes. For example, in visual search, spatial cues and abrupt visual onsets can capture attention, and singleton features (a red object against a green background, or a vertical target among horizontal ones) can attract attention effectively.
Temporal saliency: this is the temporal dimension of visual selection. Visual input changes continuously over time, and the observer must extract behavior-relevant information from the visual input stream. A standard technique for studying temporal saliency is to present a sequence of objects at a rate of 20 objects per second; this procedure lets researchers determine the presentation speed at which visual information can still be extracted.
In view of this, the present invention draws on the dense spatio-temporal saliency model (Dense Spatiotemporal Salient Model, abbreviated KR model) proposed by Konstantinos Rapantzikos et al., one of the saliency models suitable for video. The model characterizes video with multi-scale volumes and performs spatio-temporal operations in three dimensions. Saliency is computed through a global minimization whose constraints involve a series of visual feature information, including spatial proximity, scale, and feature similarity (intensity, color, motion). The extrema of the saliency response are selected as distinguishing features, and it has been shown that this method achieves a good balance between intensity and information.
Based on the above principles, and as shown in Fig. 1, the present invention discloses a 4K ultra-high-definition image quality evaluation method based on visual nerve, comprising the following main steps:
S1: divide the original video and the video under test into frame images.
S2: compute the saliency map of each frame image of the video under test.
S3: compute the structural similarity map of each frame of the original video.
S4: combine the saliency map and the structural similarity map of each corresponding frame by weighted pooling to compute the image quality of the video under test.
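The four steps S1 to S4 can be sketched as the following pipeline. This is an illustrative sketch only, not the patented implementation: compute_saliency_map and compute_ssim_map are hypothetical placeholders for the S2 and S3 procedures described below, and the pooling in S4 follows the saliency-weighted averaging the text describes.

```python
import numpy as np

def evaluate_video(original_frames, test_frames,
                   compute_saliency_map, compute_ssim_map):
    """S1-S4 pipeline sketch: per-frame saliency-weighted SSIM pooling,
    averaged over all frames to produce the video quality value VQ."""
    per_frame_quality = []
    for g, f in zip(original_frames, test_frames):  # S1: paired frame images
        s = compute_saliency_map(f)                 # S2: saliency map of test frame
        ss = compute_ssim_map(g, f)                 # S3: structural similarity map
        # S4: saliency-weighted average of the SSIM map (maps assumed same shape)
        q = float((s * ss).sum() / (s.sum() + 1e-12))
        per_frame_quality.append(q)
    return float(np.mean(per_frame_quality))
```

With constant dummy maps, the score reduces to the constant SSIM value, which makes the pooling easy to sanity-check.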
Suppose V represents a video composed of consecutive frames and q = (x, y, t) represents an individual spatio-temporal point; q then denotes a voxel of the video volume. Let V(q) denote the pixel value of V at point q. The main steps for computing the saliency map in S2 are:
S2.1: decompose the video, regarded as a volume formed by consecutive frames, into 3 saliency volumes Ci (i = 1, 2, 3), corresponding to 3 different features: intensity, color and motion;
S2.2: decompose each saliency volume into different scales j, building a Gaussian pyramid C = {Cij}, where i = 1, 2, 3 and j = 1, …, L;
S2.3: minimize an energy function E comprising a data term Ed and a smoothness term Es, where E(C) = λd·Ed(C) + λs·Es(C), and λd and λs are weighting parameters;
S2.4: the final saliency is the average over all scales and features: S = (1/(3L))·Σi Σj Cij, where i = 1, 2, 3 and j = 1, …, L.
In the above steps, suppose the result of the first scale is chosen as the final saliency map, i.e. let j = 1; then S = (1/3)·Σi Ci1.
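The multi-scale decomposition of S2.2 can be illustrated with a minimal per-frame pyramid. This is a simplified stand-in under stated assumptions: 2x2 mean pooling replaces the Gaussian blur plus decimation a real implementation would use, and a full implementation would apply it to each feature volume (intensity, color, motion) rather than a single 2-D map. The function name is hypothetical.

```python
import numpy as np

def build_pyramid(feature_map, levels):
    """Minimal scale pyramid for one frame of a saliency volume.
    2x2 mean pooling stands in for Gaussian filtering plus downsampling."""
    pyramid = [feature_map]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        h = prev.shape[0] // 2 * 2   # crop to even dimensions before pooling
        w = prev.shape[1] // 2 * 2
        pooled = prev[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(pooled)
    return pyramid
```

Averaging the (upsampled) levels of such pyramids over the three feature volumes then yields the final saliency S of step S2.4.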
Clearly, if some part of a video image stands out, the human eye pays more attention to it, so this part should by rights carry a larger weight in the evaluation result of the video. Salient regions should therefore occupy an important position in determining the overall video evaluation value. Based on this idea, the present invention executes S3 in parallel while executing S2.
Let Vo and Vd denote the original and distorted videos respectively, both of dimension M*N*F, and let g and f denote corresponding frame images of Vo and Vd. To compute the structural similarity map SSf of the distorted video, g and f are divided into many small blocks x and y, and the SSIM value of x and y is computed as:
SSIM(x, y) = [(2μxμy + C1)·(2σxy + C2)] / [(μx² + μy² + C1)·(σx² + σy² + C2)],
where μx, μy and σx², σy² are the means and variances of x and y, σxy is the covariance of x and y, and C1, C2 are constants. The SSIM values of all small blocks in one frame image constitute the structural similarity map SSf (f = 1, …, F).
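The block-wise SSIM map can be sketched as follows. The block size and the constants C1 = 6.5025 and C2 = 58.5225 (the common choice K1 = 0.01, K2 = 0.03 with a dynamic range of 255) are assumptions for illustration; the patent only states that C1 and C2 are constants, and it does not specify whether the blocks overlap (non-overlapping blocks are assumed here).

```python
import numpy as np

def ssim_map(ref, dist, block=8, C1=6.5025, C2=58.5225):
    """Per-block SSIM values forming the structural similarity map SS_f.
    ref and dist are corresponding grayscale frames g and f."""
    H = ref.shape[0] // block * block
    W = ref.shape[1] // block * block
    rows, cols = H // block, W // block
    ss = np.empty((rows, cols))
    for r in range(rows):
        for c in range(cols):
            x = ref[r*block:(r+1)*block, c*block:(c+1)*block]
            y = dist[r*block:(r+1)*block, c*block:(c+1)*block]
            mx, my = x.mean(), y.mean()
            vx, vy = x.var(), y.var()
            cxy = ((x - mx) * (y - my)).mean()   # covariance of the block pair
            ss[r, c] = ((2*mx*my + C1) * (2*cxy + C2)) / \
                       ((mx**2 + my**2 + C1) * (vx + vy + C2))
    return ss
```

For identical reference and distorted blocks the numerator and denominator coincide, so every entry of the map is 1, which is a quick correctness check.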
Finally, the saliency map is used as the weighting factor for the structural similarity map, and the final video quality value of the entire video is the average of the per-frame weighted maps. Specifically, the quality of each frame is:
Qf = [Σ Sf·SSf] / [Σ Sf],
where the sums run over all positions of the maps. According to the above assumption, the result of the first scale is chosen as the final saliency map, i.e. Sf = (1/3)·Σi Ci1 at frame f, and the quality Qf of each frame is computed with this Sf. The final video quality value VQ is:
VQ = (1/F)·Σf Qf, with f = 1, …, F.
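The pooling above can be written out directly. A minimal sketch, assuming the two maps share one resolution: Qf is the SSIM map averaged with the saliency map as weights, and VQ is the mean of Qf over the F frames (both forms reconstructed from the textual description, since the original formula images are not preserved here).

```python
import numpy as np

def frame_quality(saliency, ssim_vals):
    """Qf = sum(Sf * SSf) / sum(Sf): saliency-weighted pooling of one frame."""
    return float((saliency * ssim_vals).sum() / saliency.sum())

def video_quality(saliency_maps, ssim_maps):
    """VQ = (1/F) * sum_f Qf: average per-frame quality over all frames."""
    return float(np.mean([frame_quality(s, ss)
                          for s, ss in zip(saliency_maps, ssim_maps)]))
```

For example, with saliency weights [1, 3] and SSIM values [0.8, 0.4], the frame quality is (1·0.8 + 3·0.4) / (1 + 3) = 0.5, so the more salient block dominates the score.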
Those of ordinary skill in the art should appreciate that the above embodiments are intended merely to illustrate the present invention and not to limit it; as long as they remain within the spirit of the present invention, changes and modifications to the above embodiments shall all fall within the scope of the claims of the present invention.

Claims (5)

1. A 4K ultra-high-definition image quality evaluation method based on visual nerve, characterized in that it comprises the following steps:
Step 1, divide the original video and the video under test into frame images;
Step 2, compute the saliency map of each frame image of the video under test;
Step 3, compute the structural similarity map of each frame of the original video;
Step 4, combine the saliency map and the structural similarity map of each corresponding frame by weighted pooling to compute the image quality of the video under test.
2. The 4K ultra-high-definition image quality evaluation method based on visual nerve according to claim 1, characterized in that step 2 further comprises:
Step 2.1, decompose the video, regarded as a volume formed by consecutive frames, into 3 saliency volumes Ci (i = 1, 2, 3), corresponding to 3 different features: intensity, color and motion;
Step 2.2, decompose each saliency volume into different scales j, building a Gaussian pyramid C = {Cij}, where i = 1, 2, 3 and j = 1, …, L;
Step 2.3, minimize an energy function E comprising a data term Ed and a smoothness term Es, where:
E(C) = λd·Ed(C) + λs·Es(C), and λd and λs are weighting parameters;
Step 2.4, the final saliency is the average over all scales and features:
S = (1/(3L))·Σi Σj Cij, where i = 1, 2, 3 and j = 1, …, L.
3. The 4K ultra-high-definition image quality evaluation method based on visual nerve according to claim 2, characterized in that step 3 further comprises:
Let Vo and Vd denote the original and distorted videos respectively, both of dimension M*N*F, and let g and f denote corresponding frame images of Vo and Vd. To compute the structural similarity map SSf of the distorted video, g and f are divided into many small blocks x and y, and the SSIM value of x and y is computed as:
SSIM(x, y) = [(2μxμy + C1)·(2σxy + C2)] / [(μx² + μy² + C1)·(σx² + σy² + C2)],
where μx, μy and σx², σy² are the means and variances of x and y, σxy is the covariance of x and y, and C1, C2 are constants. The SSIM values of all small blocks in one frame image constitute the structural similarity map SSf (f = 1, …, F).
4. The 4K ultra-high-definition image quality evaluation method based on visual nerve according to claim 3, characterized in that step 4 further comprises:
The quality of each frame is Qf = [Σ Sf·SSf] / [Σ Sf], where the sums run over all positions of the maps.
5. The 4K ultra-high-definition image quality evaluation method based on visual nerve according to claim 4, characterized in that the final video quality value VQ is:
VQ = (1/F)·Σf Qf, with f = 1, …, F.
CN201710098240.3A 2017-02-22 2017-02-22 4K ultra-high-definition image quality evaluation method based on visual nerve Pending CN108460751A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710098240.3A CN108460751A (en) 2017-02-22 2017-02-22 4K ultra-high-definition image quality evaluation method based on visual nerve

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710098240.3A CN108460751A (en) 2017-02-22 2017-02-22 4K ultra-high-definition image quality evaluation method based on visual nerve

Publications (1)

Publication Number Publication Date
CN108460751A true CN108460751A (en) 2018-08-28

Family

ID=63220850

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710098240.3A Pending CN108460751A (en) 2017-02-22 2017-02-22 4K ultra-high-definition image quality evaluation method based on visual nerve

Country Status (1)

Country Link
CN (1) CN108460751A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110312124A (en) * 2019-07-31 2019-10-08 中国矿业大学 A quality correction method for mobile inspection video based on salient multi-feature fusion
CN110312124B (en) * 2019-07-31 2020-09-08 中国矿业大学 A mobile inspection video quality correction method based on saliency multi-feature fusion
CN111385567A (en) * 2020-03-12 2020-07-07 上海交通大学 Ultra-high-definition video quality evaluation method and device
CN114373085A (en) * 2021-12-31 2022-04-19 江苏任务网络科技有限公司 Calculation Method of Image Similarity Based on Neighborhood Similarity
CN115239647A (en) * 2022-07-06 2022-10-25 杭州电子科技大学 Full-reference video quality evaluation method based on two stages of self-adaptive sampling and multi-scale time sequence
CN117934354A (en) * 2024-03-21 2024-04-26 共幸科技(深圳)有限公司 Image processing method based on AI algorithm
CN117934354B (en) * 2024-03-21 2024-06-11 共幸科技(深圳)有限公司 Image processing method based on AI algorithm

Similar Documents

Publication Publication Date Title
Jin et al. Pedestrian detection with super-resolution reconstruction for low-quality image
Wang et al. Multi-scale dilated convolution of convolutional neural network for crowd counting
Song et al. Deep sliding shapes for amodal 3d object detection in rgb-d images
CN108460751A (en) 4K ultra-high-definition image quality evaluation method based on visual nerve
US8503770B2 (en) Information processing apparatus and method, and program
KR101333347B1 (en) Image processing method and image processing device
CN103065326B (en) Target detection method based on time-space multiscale motion attention analysis
CN111491187B (en) Video recommendation method, device, equipment and storage medium
CN112580576A (en) Face spoofing detection method and system based on multiscale illumination invariance texture features
EP2246807A1 (en) Information processing apparatus and method, and program
WO2011148562A1 (en) Image information processing apparatus
Ip et al. Saliency-assisted navigation of very large landscape images
CN111260738A (en) Multi-scale target tracking method based on relevant filtering and self-adaptive feature fusion
CN110827193A (en) Panoramic video saliency detection method based on multi-channel features
CN105701467A (en) Many-people abnormal behavior identification method based on human body shape characteristic
CN103247038B (en) A kind of global image information synthesis method of visual cognition model-driven
CN105513080B (en) An Infrared Image Target Saliency Evaluation Method
CN110781962A (en) Target detection method based on lightweight convolutional neural network
CN105303571A (en) Time-space saliency detection method for video processing
CN114067428A (en) Multi-view multi-target tracking method and device, computer equipment and storage medium
Chaabouni et al. ChaboNet: Design of a deep CNN for prediction of visual saliency in natural video
Singh et al. Learning to Predict Video Saliency using Temporal Superpixels.
Chen et al. SRCBTFusion-Net: An efficient fusion architecture via stacked residual convolution blocks and transformer for remote sensing image semantic segmentation
CN108460794A (en) A kind of infrared well-marked target detection method of binocular solid and system
CN110245660B (en) Webpage glance path prediction method based on saliency feature fusion

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180828

WD01 Invention patent application deemed withdrawn after publication