Disclosure of Invention
The application aims to provide a security monitoring video compression storage method. The monitoring area is divided into a plurality of sub-areas, and the image presented by each frame of the video monitoring is divided in the same way according to the divided sub-areas, so that each sub-area presents a plurality of images. Similarity matching is then performed on the images presented in each sub-area, and images with high similarity are classified into the same type. Image parameters of the same type of images are acquired, a data analysis model is established according to the image parameters, and image quality screening evaluation coefficients are generated. The image quality screening evaluation coefficients obtained for the same type of images are ordered, the image with the optimal image quality in the same type of images is screened out, and this image is compressed as a reference image; after compression it completely replaces the other images of the same type. In this way only the image with the optimal image quality in each type needs to be compressed once, which reduces the number of compressions of the same type of images and the overall number of compressions, greatly reduces the compression time and improves the compression efficiency; in addition, because the other images of the same type are replaced by the compressed image with the optimal image quality, the quality of the compressed images is effectively improved.
In order to achieve the above object, the present application provides the following technical solutions: a security monitoring video compression storage method comprises the following steps:
dividing a monitoring area into a plurality of sub-areas, dividing the image presented by each frame of video monitoring according to the divided sub-areas in the same way, and presenting a plurality of images in each sub-area;
performing similarity matching on a plurality of images presented in each sub-area, and classifying images with high similarity;
acquiring image parameters in the same type of image, establishing a data analysis model according to the image parameters, and generating an image quality screening evaluation coefficient;
and screening out the images with the optimal image quality in the same type of images, compressing the images with the optimal image quality in the same type of images serving as reference images, and completely replacing other images in the same type of images after compressing the images.
Preferably, the monitoring area is divided into a plurality of sub-areas, the number of the sub-areas is set to be N, N is a positive integer greater than or equal to 2, the images displayed by each frame of the video monitoring are divided according to the divided sub-areas in the same way, and if the number of frames in the video is m, and m is a positive integer, m images are displayed in each sub-area.
Preferably, the similarity of a plurality of images is evaluated based on a feature vector method, which comprises the following steps:
s1: feature extraction
For each image, firstly extracting key points and feature vectors thereof by using a feature extraction algorithm;
s2: feature matching
Matching each image with all other images by calculating the distance or similarity between the feature vectors;
s3: feature aggregation
Aggregating the matching results of each image to form a feature vector related to all the images;
s4: similarity calculation
Obtaining a similarity matrix between all images by calculating the distance or similarity between the feature vectors;
s5: similarity ordering
And sequencing each image according to the similarity of the images with other images to obtain a plurality of images with the highest similarity.
Preferably, the image parameters include a noise duty ratio, an image contrast, an image blur degree and a color distortion value in the image, and after acquisition the noise duty ratio, the image contrast, the image blur degree and the color distortion value are calibrated as ZSZi, DBDi, MHZi and SZZi respectively.
Preferably, the noise duty ratio is the proportion of the image area occupied by noise regions, and its expression is $ZSZ_i = \frac{\sum_{x=1}^{n} y_x}{S}$, where $y_x$ represents the area of the x-th noise region, x = 1, 2, 3, ..., n, n is a positive integer, and $S$ represents the total area of the image.
Preferably, the image contrast refers to the degree of difference between the brightest pixel and the darkest pixel in the image, and the image contrast can be calculated by the following formula: $DBD_i = \frac{L_{max} - L_{min}}{L_{max} + L_{min}}$, where $L_{max}$ represents the luminance value of the brightest pixel in the image, $L_{min}$ represents the luminance value of the darkest pixel in the image, and the contrast range is between 0 and 1.
Preferably, the image blur degree is an index for describing the sharpness of an image and is commonly used for evaluating image quality; the image blur degree is measured by a gradient method, and the measuring steps are as follows:
A. converting the image to grayscale;
B. calculating the gradient value of each pixel point in the image, using an operator such as the Sobel operator or the Prewitt operator;
C. averaging the gradient values of all pixel points to obtain the average gradient value of the image, which is taken as the image blur degree.
Preferably, the color distortion is measured using a DeltaE value, calculated as $\Delta E = \sqrt{(L_1 - L_2)^2 + (a_1 - a_2)^2 + (b_1 - b_2)^2}$, wherein L1, a1, b1 are respectively the L, a, b values of the actual color, and L2, a2, b2 are respectively the L, a, b values in the image; the color distortion value is obtained directly from the DeltaE value.
Preferably, after the noise duty ratio ZSZi, the image contrast DBDi, the image blur degree MHZi and the color distortion value SZZi are obtained, a data analysis model is established and an image quality screening evaluation coefficient PGXi is generated according to the following formula: $PGX_i = a_2 \times DBD_i - (a_1 \times ZSZ_i + a_3 \times MHZ_i + a_4 \times SZZ_i)$, wherein a1, a2, a3 and a4 are the preset proportionality coefficients of the noise duty ratio, the image contrast, the image blur degree and the color distortion value respectively, and a1, a2, a3 and a4 are all greater than 0.
Preferably, after the image quality screening evaluation coefficients of the same type of images are obtained, they are sorted from large to small or from small to large, the image corresponding to the maximum value of the image quality screening evaluation coefficient is screened out as the image with the optimal image quality, this image is compressed as the compression reference image, and the compressed image then completely replaces the other images of the same type.
In the above technical scheme, the application has the following technical effects and advantages:
according to the application, the monitoring area is divided into a plurality of sub-areas, and the image presented by each frame of the video monitoring is divided in the same way according to the divided sub-areas, so that each sub-area presents a plurality of images; similarity matching is then performed on the images presented in each sub-area, images with high similarity are classified into the same type, the image parameters of the same type of images are acquired, a data analysis model is established according to the image parameters, and an image quality screening evaluation coefficient is generated; the image quality screening evaluation coefficients obtained for the same type of images are ordered, the image with the optimal image quality in the same type of images is screened out, this image is compressed as the reference image, and the compressed image completely replaces the other images of the same type; because only the image with the optimal image quality in each type needs to be compressed once, the number of compressions of the same type of images and the overall number of compressions are effectively reduced, the compression time is greatly reduced and the compression efficiency is improved; in addition, because all the other images of the same type are replaced by the compressed image with the optimal image quality, the quality of the compressed images is effectively improved.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these example embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art.
The application provides a security monitoring video compression storage method as shown in fig. 1, which comprises the following steps:
dividing a monitoring area into a plurality of sub-areas, dividing the image presented by each frame of video monitoring according to the divided sub-areas in the same way, and presenting a plurality of images in each sub-area;
when the video monitoring system is in practical use, the monitoring area is divided into a plurality of sub-areas, the number of the sub-areas is set to be N, N is a positive integer greater than or equal to 2, the images displayed by each frame of video monitoring are divided according to the divided sub-areas in the same way, and if the number of frames in the video is m, and m is a positive integer, m images are displayed in each sub-area;
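As an illustration of the sub-area division described above, the following is a minimal sketch in Python; the use of NumPy, a rectangular rows × cols grid (with N = rows × cols) and the function name are illustrative assumptions, since the application does not prescribe a particular library or grid shape:

```python
import numpy as np

def split_frame_into_subareas(frame: np.ndarray, rows: int, cols: int):
    """Divide one video frame into rows * cols sub-area images (N = rows * cols).

    Every frame is divided in exactly the same way, so the i-th sub-area of one
    frame can later be compared with the i-th sub-area of every other frame.
    """
    height, width = frame.shape[:2]
    sub_h, sub_w = height // rows, width // cols
    subareas = []
    for r in range(rows):
        for c in range(cols):
            tile = frame[r * sub_h:(r + 1) * sub_h, c * sub_w:(c + 1) * sub_w]
            subareas.append(tile)
    return subareas  # length N
```

Applying this division to all m frames yields, for each of the N sub-areas, a sequence of m images that is then passed to the similarity matching step;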
performing similarity matching on a plurality of images presented in each sub-area, and classifying images with high similarity;
the similarity of a plurality of images is evaluated based on a feature vector method, and the method comprises the following steps:
s1: feature extraction
For each image, extracting key points and feature vectors thereof by using a feature extraction algorithm (such as SIFT, SURF or ORB, etc.);
s2: feature matching
Matching each image with all other images by calculating the distance or similarity between the feature vectors;
s3: feature aggregation
Aggregating the matching results of each image to form a feature vector related to all the images;
s4: similarity calculation
Obtaining a similarity matrix between all images by calculating the distance or similarity between the feature vectors;
s5: similarity ordering
Sequencing each image according to the similarity of the images with other images to obtain a plurality of images with highest similarity;
classifying the images according to the similarity based on a feature vector method, and further processing the same type of images with high similarity;
it should be noted that, the plurality of images with the highest similarity are one type of images, and the pictures of the images in the same type of images are identical;
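a minimal sketch of steps S1 to S5 is given below, assuming OpenCV's ORB detector with brute-force Hamming matching (SIFT or SURF could be substituted); the similarity measure (fraction of matched keypoints) and the grouping threshold of 0.7 are illustrative assumptions, not values prescribed by the application:

```python
import cv2
import numpy as np

def similarity_matrix(images):
    """S1-S4: extract ORB features and build a pairwise similarity matrix."""
    orb = cv2.ORB_create(nfeatures=500)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    feats = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        _, des = orb.detectAndCompute(gray, None)
        feats.append(des)

    n = len(images)
    sim = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            if feats[i] is None or feats[j] is None:
                continue
            matches = matcher.match(feats[i], feats[j])
            # similarity = fraction of keypoints that found a match (illustrative measure)
            sim[i, j] = sim[j, i] = len(matches) / max(len(feats[i]), len(feats[j]))
    return sim

def group_similar_images(images, threshold=0.7):
    """S5: greedily group images whose pairwise similarity exceeds the threshold."""
    sim = similarity_matrix(images)
    groups, assigned = [], set()
    for i in range(len(images)):
        if i in assigned:
            continue
        group = [i] + [j for j in range(len(images))
                       if j != i and j not in assigned and sim[i, j] >= threshold]
        assigned.update(group)
        groups.append(group)
    return groups  # each group is one "type" of images for the following steps
```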
acquiring image parameters in the same type of image, establishing a data analysis model according to the image parameters, and generating an image quality screening evaluation coefficient;
the image parameters include a noise duty ratio, an image contrast, an image blur degree and a color distortion value in the image, and after acquisition the noise duty ratio, the image contrast, the image blur degree and the color distortion value are calibrated as ZSZi, DBDi, MHZi and SZZi respectively;
the noise duty ratio is the proportion of the image area occupied by noise regions, and its expression is $ZSZ_i = \frac{\sum_{x=1}^{n} y_x}{S}$, where $y_x$ represents the area of the x-th noise region, x = 1, 2, 3, ..., n, n is a positive integer, and $S$ represents the total area of the image; during image acquisition and processing, interference from various factors causes some unrealistic pixels or pixel values to appear in the image, which are called noise; noise can be classified into various types such as Gaussian noise, salt-and-pepper noise and speckle noise, wherein Gaussian noise is the most common noise type and is generated by the electronic noise of the image acquisition equipment or other environmental factors, salt-and-pepper noise is generated by signal loss or errors during image transmission or storage, and speckle noise is caused by damage to or failure of certain pixels of the image acquisition equipment; noise has a great influence on image processing and analysis;
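a sketch of one possible way to estimate the noise duty ratio ZSZi is shown below; detecting noise pixels via a median-filter residual on an 8-bit grayscale image and counting their area is an assumption made purely for illustration, since the application defines the ratio itself but not a specific noise detector:

```python
import cv2
import numpy as np

def noise_duty_ratio(gray: np.ndarray, residual_threshold: int = 20) -> float:
    """Estimate ZSZ_i = (total area of noise regions) / (total image area).

    Pixels that differ strongly from a median-filtered copy are treated as noise;
    the threshold of 20 grey levels is an illustrative assumption.
    """
    denoised = cv2.medianBlur(gray, 3)
    residual = cv2.absdiff(gray, denoised)
    noise_mask = residual > residual_threshold      # True where a pixel is judged to be noise
    noise_area = int(np.count_nonzero(noise_mask))  # total noise area in pixels
    total_area = gray.shape[0] * gray.shape[1]      # S, the total image area
    return noise_area / total_area
```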
the image contrast refers to the degree of difference between the brightest pixel and the darkest pixel in the image; an image with high contrast has a large difference between the brightest and darkest parts, and its colors and details are more vivid, whereas an image with low contrast has a smaller difference between the brightest and darkest parts, and its colors and details appear blurred or unclear;
the image contrast can be calculated by the following formula: $DBD_i = \frac{L_{max} - L_{min}}{L_{max} + L_{min}}$, where $L_{max}$ represents the luminance value of the brightest pixel in the image and $L_{min}$ represents the luminance value of the darkest pixel in the image; the value range of the contrast is between 0 and 1, and the larger the value, the higher the contrast and the more vivid the colors and details of the image;
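the contrast calculation above can be written directly as the following short sketch; treating the input as an 8-bit grayscale image is an assumption made for illustration:

```python
import numpy as np

def image_contrast(gray: np.ndarray) -> float:
    """DBD_i = (L_max - L_min) / (L_max + L_min), in the range 0..1."""
    l_max = float(gray.max())
    l_min = float(gray.min())
    if l_max + l_min == 0:   # completely black image
        return 0.0
    return (l_max - l_min) / (l_max + l_min)
```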
the image blur degree is an index for describing the image definition, and is commonly used for evaluating the image quality, the smaller the image blur degree is, the clearer the detail information in the image is, the better the image quality is, and the image blur degree is commonly measured by adopting a gradient method, wherein the measuring steps are as follows:
A. converting the image to grayscale;
B. calculating the gradient value of each pixel point in the image, using an operator such as the Sobel operator or the Prewitt operator;
C. averaging the gradient values of all pixel points to obtain the average gradient value of the image, which is taken as the image blur degree;
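the gradient-based measurement in steps A to C above can be sketched as follows; OpenCV's Sobel operator is used here as one of the operators mentioned, and the resulting average gradient value is reported as the blur-degree measure MHZi:

```python
import cv2
import numpy as np

def image_blur_degree(image: np.ndarray) -> float:
    """Steps A-C: grayscale -> per-pixel gradient -> average gradient value (MHZ_i)."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)    # A. convert to grayscale
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)   # B. horizontal gradient (Sobel)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)   #    vertical gradient (Sobel)
    gradient_magnitude = np.sqrt(gx ** 2 + gy ** 2)
    return float(gradient_magnitude.mean())           # C. average gradient value
```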
color distortion means that the colors in the image differ from the colors of the actual object, or that abnormal color stripes appear; the color distortion is calculated using a DeltaE value, which represents the degree of difference between the actual color and the color in the image, and the common DeltaE calculation is $\Delta E = \sqrt{(L_1 - L_2)^2 + (a_1 - a_2)^2 + (b_1 - b_2)^2}$, wherein L1, a1, b1 are respectively the L, a, b values of the actual color, and L2, a2, b2 are respectively the L, a, b values in the image;
the L, a and b values are the three parameters of the CIELab color space, a standardized color space proposed by the International Commission on Illumination (CIE) in 1976 for describing the color characteristics of objects;
L represents brightness (luminance), with a value range from 0 to 100 covering the range from black to white; a and b represent the color range: the a-axis is the red-green axis, with a value range from -128 to 127, where negative values represent green and positive values represent red, and the b-axis is the blue-yellow axis, also with a value range from -128 to 127, where negative values represent blue and positive values represent yellow; therefore, the brightness, red-green tone and blue-yellow tone of a color can be accurately described by its L, a and b values;
the color distortion value SZZi is directly obtained through the DeltaE value, and the color distortion value is used for indicating the degree of difference between the actual color and the color in the image, so that the larger the distortion value is, the larger the difference between the color in the image and the actual color is, namely the worse the color reduction capability of the image is, and the worse the image quality is;
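a sketch of the SZZi computation is shown below; converting a BGR image to CIELab with OpenCV and comparing it against a reference image carrying the actual colors (for example a color-checker capture), then averaging the per-pixel DeltaE into a single distortion value, are illustrative assumptions, since the application specifies only the DeltaE formula itself:

```python
import cv2
import numpy as np

def color_distortion_value(image_bgr: np.ndarray, reference_bgr: np.ndarray) -> float:
    """SZZ_i: mean DeltaE = sqrt((L1-L2)^2 + (a1-a2)^2 + (b1-b2)^2) over all pixels.

    `reference_bgr` stands for the actual colors. Note that OpenCV stores 8-bit Lab
    with L scaled to 0..255 and a/b offset by 128; rescale if exact CIELab units
    are required.
    """
    lab_img = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB).astype(np.float64)
    lab_ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2LAB).astype(np.float64)
    delta_e = np.sqrt(np.sum((lab_ref - lab_img) ** 2, axis=2))
    return float(delta_e.mean())
```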
after the noise duty ratio ZSZi, the image contrast DBDi, the image blur degree MHZi and the color distortion value SZZi are obtained, a data analysis model is established and an image quality screening evaluation coefficient PGXi is generated according to the following formula: $PGX_i = a_2 \times DBD_i - (a_1 \times ZSZ_i + a_3 \times MHZ_i + a_4 \times SZZ_i)$, wherein a1, a2, a3 and a4 are the preset proportionality coefficients of the noise duty ratio, the image contrast, the image blur degree and the color distortion value respectively, and a1, a2, a3 and a4 are all greater than 0;
as can be seen from the formula, when the noise duty ratio in the image is higher, the image contrast is lower, the image blur degree is higher and the color distortion value is higher, the value of the image quality screening evaluation coefficient is lower and the image quality is poorer; conversely, when the noise duty ratio in the image is lower, the image contrast is higher, the image blur degree is lower and the color distortion value is lower, the value of the image quality screening evaluation coefficient is higher and the image quality is better;
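the evaluation coefficient can then be computed from the four parameters as in the sketch below; the weighted form mirrors the formula given above, and the default coefficient values of 1.0 are placeholders only, since the application leaves the preset proportionality coefficients to be set according to the actual situation:

```python
def quality_coefficient(zsz: float, dbd: float, mhz: float, szz: float,
                        a1: float = 1.0, a2: float = 1.0,
                        a3: float = 1.0, a4: float = 1.0) -> float:
    """PGX_i = a2*DBD_i - (a1*ZSZ_i + a3*MHZ_i + a4*SZZ_i).

    Higher contrast raises the coefficient; more noise, blur and color distortion
    lower it, so a larger PGX_i indicates better image quality. The default
    coefficients of 1.0 are illustrative, not prescribed values.
    """
    return a2 * dbd - (a1 * zsz + a3 * mhz + a4 * szz)
```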
screening out the images with the optimal image quality in the same type of images, compressing the images with the optimal image quality in the same type of images as reference images, and completely replacing other images in the same type of images after compressing the images;
after the image quality screening evaluation coefficients of the same type of images are obtained, they are sorted from large to small or from small to large, the image corresponding to the maximum value of the image quality screening evaluation coefficient is selected as the image with the optimal image quality in the same type of images, this image is compressed as the compression reference image, and the compressed image then replaces all the other images of the same type; in this way only the image with the optimal image quality in each type needs to be compressed once, so the number of compressions of the same type of images is effectively reduced, the overall number of compressions is further reduced, the compression time is greatly reduced and the compression efficiency is improved; in addition, because all the other images of the same type are replaced by the compressed image with the optimal image quality serving as the reference image, the quality of the compressed images can be effectively improved.
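Putting the screening and replacement step together, a minimal sketch might look like the following; JPEG encoding via OpenCV is used only as a stand-in for whichever compression codec the monitoring system actually employs, and `quality_scores` refers to the PGXi values computed by the sketch above:

```python
import cv2

def compress_group(images, quality_scores, jpeg_quality=85):
    """Select the best-quality image of one type, compress it once, and let the
    compressed result stand in for every image of that type.

    JPEG and the quality factor of 85 are illustrative stand-ins for the real
    codec settings; `quality_scores` are the PGX_i coefficients of the images.
    """
    # sort by evaluation coefficient and take the image with the maximum value
    best_index = max(range(len(images)), key=lambda i: quality_scores[i])
    ok, compressed = cv2.imencode(".jpg", images[best_index],
                                  [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
    if not ok:
        raise RuntimeError("compression failed")
    # one compressed buffer is stored for the whole group, replacing the other images
    return {"reference_index": best_index,
            "replaced_count": len(images) - 1,
            "compressed_bytes": compressed.tobytes()}
```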
According to the application, the monitoring area is divided into a plurality of sub-areas, and the image presented by each frame of the video monitoring is divided in the same way according to the divided sub-areas, so that each sub-area presents a plurality of images; similarity matching is then performed on the images presented in each sub-area, images with high similarity are classified into the same type, the image parameters of the same type of images are acquired, a data analysis model is established according to the image parameters, and an image quality screening evaluation coefficient is generated; the image quality screening evaluation coefficients obtained for the same type of images are ordered, the image with the optimal image quality in the same type of images is screened out, this image is compressed as the reference image, and the compressed image completely replaces the other images of the same type; because only the image with the optimal image quality in each type needs to be compressed once, the number of compressions of the same type of images and the overall number of compressions are effectively reduced, the compression time is greatly reduced and the compression efficiency is improved; in addition, because all the other images of the same type are replaced by the compressed image with the optimal image quality, the quality of the compressed images is effectively improved.
The above formulas are all obtained by removing dimensions and taking their numerical values for calculation; they are derived by collecting a large amount of data for software simulation so as to approximate the real situation, and the preset parameters in the formulas are set by those skilled in the art according to the actual situation.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any other combination. When implemented in software, the above-described embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product comprises one or more computer instructions or computer programs. When the computer instructions or computer program are loaded or executed on a computer, the processes or functions described in accordance with embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from one website site, computer, server, or data center to another website site, computer, server, or data center by wired or wireless means (e.g., infrared, wireless, microwave, etc.). The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains one or more sets of available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium. The semiconductor medium may be a solid state disk.
It should be understood that the term "and/or" merely describes an association relationship between associated objects and means that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone, wherein A and B may be singular or plural. In addition, the character "/" herein generally indicates that the associated objects are in an "or" relationship, but may also indicate an "and/or" relationship, which can be understood from the context.
In the present application, "at least one" means one or more, and "a plurality" means two or more. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or plural.
It should be understood that, in various embodiments of the present application, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed method may be implemented in other manners. For example, the embodiments described above are merely illustrative, e.g., the division of the elements is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.