CN105654470A - Image selection method, device and system
- Publication number: CN105654470A (application CN201510988384.7A)
- Authority: CN (China)
- Legal status: Granted
Classifications
- G06T7/0002: Image analysis; inspection of images, e.g. flaw detection
- G06T2207/10004: Still image; photographic image
- G06T2207/20024: Filtering details
- G06T2207/30168: Image quality inspection
Abstract
The invention relates to an image selection method, device, and system. The method comprises: calculating the brightness mean value of each image in the images to be selected, and calculating a brightness evaluation value of each image from that mean; calculating a first weight map mean value of each image in the time domain and an energy occupancy ratio of each image in the frequency domain, and calculating a sharpness weight map of each image from the first weight map mean value and the energy occupancy ratio; and determining a feature fusion value of each image from the brightness evaluation value and the sharpness weight map, taking the image with the highest feature fusion value among the images to be selected as the final selected image. By combining the brightness information and sharpness information of the images, the method selects the image of optimal quality in a given scene.
Description
Technical Field
The present disclosure relates to image processing technologies, and in particular, to an image selection method, apparatus, and system.
Background
In daily life, photographing has become habitual, and multiple pictures are often taken of the same scene. Because shooting conditions differ, the quality of those pictures differs as well, so the picture with the best quality needs to be selected for presentation, particularly from among similar pictures of the same scene.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides an image selection method, device and system.
According to a first aspect of the embodiments of the present disclosure, there is provided an image selecting method, including:
calculating the brightness mean value of each image in the images to be selected, and calculating a brightness evaluation value of each image according to the brightness mean value;
calculating a first weight map mean value of each image in the images to be selected in the time domain, calculating an energy occupancy ratio of each image in the frequency domain, and calculating a sharpness weight map of each image according to the first weight map mean value and the energy occupancy ratio;
and determining a feature fusion value of each image according to the brightness evaluation value and the sharpness weight map, and taking the image with the maximum feature fusion value among the images to be selected as the final selected image.
With reference to the first aspect, in a first possible implementation manner of the first aspect, calculating the brightness mean value of each image in the images to be selected, and calculating the brightness evaluation value of the image according to the brightness mean value, includes:
converting each image f(x, y) in the images to be selected into log space using the formula f'(x, y) = log(f(x, y) + 1);
taking the mean of each image in log space and converting it back to the original space as the brightness mean value v, using the formula v = exp(mean(f'(x, y))) - 1;
calculating the brightness evaluation value L1 of the image from the brightness mean value v, using the formula L1 = 1 - abs(v - 0.5);
where exp(mean(f'(x, y))) denotes an exponential function with the natural constant e as base and mean(f'(x, y)) as exponent, mean(f'(x, y)) denotes the mean value in log space of each image f'(x, y) in the images to be selected, and abs(v - 0.5) denotes the absolute value of the difference between the brightness mean and the reference point 0.5.
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, calculating the first weight map mean value of each image in the images to be selected in the time domain includes:
dividing each image in the images to be selected into at least one region block;
calculating, according to a formula [not reproduced in this text], the omnidirectional difference value w(x) of the pixels in each region block;
where Xi and Xj denote pixel values in each region block of an image to be selected, Ω denotes the region formed in the time domain by all region blocks of each image in the images to be selected, i and j are positive integers greater than or equal to 1, and i ≠ j;
taking the maximum omnidirectional difference value within a region block as the omnidirectional difference value of that block, the omnidirectional difference values of all region blocks forming an omnidirectional difference map;
calculating the first weight map S from the omnidirectional difference map of the region blocks, according to a formula [not reproduced] that takes the maximum of the omnidirectional difference values v(Ω1) of the pixels in a region block;
calculating the first weight map mean value S1 using the formula S1 = mean(S), where mean(S) denotes the mean value of the first weight map S.
With reference to the second possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, before taking the maximum omnidirectional difference value in a region block as the omnidirectional difference value of that block, the method further includes:
calculating the maximum omnidirectional difference value in the region block from the omnidirectional difference values of the pixels within it.
With reference to the third possible implementation manner of the first aspect, in a fourth possible implementation manner of the first aspect, calculating the energy occupancy ratio of each image in the images to be selected in the frequency domain includes:
converting each image I in the images to be selected to the frequency domain using the formula F = fft(I);
where fft(I) denotes the Fourier transform of each image I in the images to be selected;
filtering each image in the images to be selected in the frequency domain;
calculating the energy values F1 of each filtered image that exceed a preset threshold T, using the formula F1 = F(abs(F) > T);
where abs(F) denotes the modulus of the energy values F of each filtered image, and the preset threshold T is 5;
calculating the energy occupancy ratio S2 of each image in the frequency domain according to a formula [not reproduced].
With reference to the fourth possible implementation manner of the first aspect, in a fifth possible implementation manner of the first aspect, calculating the sharpness weight map of each image in the images to be selected according to the first weight map mean value and the energy occupancy ratio includes:
calculating the sharpness weight map L2 of each image from the first weight map mean value S1 and the energy occupancy ratio S2, using the formula L2 = S1*β + S2*(1 - β);
where β is the weight given to the first weight map mean value.
With reference to the fifth possible implementation manner of the first aspect, in a sixth possible implementation manner of the first aspect, determining the feature fusion value of the image according to the brightness evaluation value and the sharpness weight map includes:
calculating the feature fusion value L of the image from the brightness evaluation value L1 and the sharpness weight map L2, using the formula L = L1*L2.
With reference to the first aspect to the sixth possible implementation manner of the first aspect, in a seventh possible implementation manner of the first aspect, before the calculating a luminance mean value of each of the images to be selected, the method further includes:
at least two images to be selected are obtained.
According to a second aspect of the embodiments of the present disclosure, there is provided an image selecting apparatus including:
the first calculation module is configured to calculate the brightness mean value of each image in the images to be selected and determine a brightness evaluation value of each image according to the brightness mean value;
the second calculation module is configured to calculate a first weight map mean value of each image in the time domain, calculate an energy occupancy ratio of each image in the frequency domain, and calculate a sharpness weight map of each image according to the first weight map mean value and the energy occupancy ratio;
the selection module is configured to determine a feature fusion value of each image according to the brightness evaluation value calculated by the first calculation module and the sharpness weight map calculated by the second calculation module, and to take the image with the maximum feature fusion value among the images to be selected as the final selected image.
With reference to the second aspect, in a first possible implementation manner of the second aspect, the first computing module includes:
a space conversion submodule configured to convert each image f(x, y) in the images to be selected into log space using the formula f'(x, y) = log(f(x, y) + 1);
a first determination submodule configured to take the mean of each image in the log space produced by the space conversion submodule and convert it back to the original space as the brightness mean value v, using the formula v = exp(mean(f'(x, y))) - 1;
a first calculation submodule configured to calculate the brightness evaluation value L1 of the image from the brightness mean value v determined by the first determination submodule, using the formula L1 = 1 - abs(v - 0.5);
where exp(mean(f'(x, y))) denotes an exponential function with the natural constant e as base and mean(f'(x, y)) as exponent, mean(f'(x, y)) denotes the mean value in log space of each image f'(x, y) in the images to be selected, and abs(v - 0.5) denotes the absolute value of the difference between the brightness mean and the reference point 0.5.
With reference to the first possible implementation manner of the second aspect, in a second possible implementation manner of the second aspect, the second calculating module includes:
a blocking submodule configured to divide each image in the images to be selected into at least one region block;
a block calculation submodule configured to calculate, according to a formula [not reproduced], the omnidirectional difference value w(x) of the pixels in each region block divided by the blocking submodule;
where Xi and Xj denote pixel values in each region block of an image to be selected, Ω denotes the region formed in the time domain by all region blocks of each image, i and j are positive integers greater than or equal to 1, and i ≠ j;
a second determination submodule configured to take the maximum omnidirectional difference value within a region block as the omnidirectional difference value of that block, the omnidirectional difference values of all region blocks forming an omnidirectional difference map;
a weight map calculation submodule configured to calculate the first weight map S from the omnidirectional difference map determined by the second determination submodule, according to a formula [not reproduced] that takes the maximum of the omnidirectional difference values v(Ω1) of the pixels in a region block;
a mean calculation submodule configured to calculate the first weight map mean value S1 from the first weight map S, using the formula S1 = mean(S), where mean(S) denotes the mean value of the first weight map S.
With reference to the second possible implementation manner of the second aspect, in a third possible implementation manner of the second aspect, the second calculating module further includes: a maximum value calculation submodule;
a maximum value calculation submodule configured to calculate the maximum omnidirectional difference value in a region block from the omnidirectional difference values of the pixels within it, before the second determination submodule takes that value as the omnidirectional difference value of the block.
With reference to the third possible implementation manner of the second aspect, in a fourth possible implementation manner of the second aspect, the second calculating module further includes:
a frequency domain transformation submodule configured to convert each image I in the images to be selected to the frequency domain using the formula F = fft(I);
where fft(I) denotes the Fourier transform of each image I in the images to be selected;
a filtering submodule configured to filter, in the frequency domain, each image converted by the frequency domain transformation submodule;
an energy value calculation submodule configured to calculate the energy values F1 of each image filtered by the filtering submodule that exceed the preset threshold T, using the formula F1 = F(abs(F) > T);
where abs(F) denotes the modulus of the energy values F of each filtered image, and the preset threshold T is 5;
an occupancy ratio calculation submodule configured to calculate, from the energy values F1 computed by the energy value calculation submodule and according to a formula [not reproduced], the energy occupancy ratio S2 of each image in the frequency domain.
With reference to the fourth possible implementation manner of the second aspect, in a fifth possible implementation manner of the second aspect, the second calculating module further includes:
a second calculation submodule configured to calculate the sharpness weight map L2 of each image in the images to be selected from the first weight map mean value S1 calculated by the mean calculation submodule and the energy occupancy ratio S2 calculated by the occupancy ratio calculation submodule, using the formula L2 = S1*β + S2*(1 - β);
where β is the weight given to the first weight map mean value.
With reference to the fifth possible implementation manner of the second aspect, in a sixth possible implementation manner of the second aspect, the selecting module includes:
a selection calculation submodule configured to calculate the feature fusion value L of the image from the brightness evaluation value L1 calculated by the first calculation submodule and the sharpness weight map L2 calculated by the second calculation submodule, using the formula L = L1*L2.
With reference to the second aspect to the sixth possible implementation manner of the second aspect, in a seventh possible implementation manner of the second aspect, the apparatus further includes:
the acquisition module is configured to acquire at least two images to be selected.
The technical solutions provided by the embodiments of the present disclosure can have the following beneficial effects:
In one embodiment, the brightness mean value of each image in the images to be selected is calculated and a brightness evaluation value is derived from it; the first weight map mean value of each image in the time domain and the energy occupancy ratio of each image in the frequency domain are calculated, and a sharpness weight map is derived from them; a feature fusion value is then determined from the brightness evaluation value and the sharpness weight map, and the image with the maximum feature fusion value among the images to be selected is taken as the final selected image. The images to be selected are thus ranked by quality using both brightness and sharpness information, and the image of optimal quality is selected, so that the image selected in a given scene, especially among similar images, is the optimal one.
In another embodiment, each image f(x, y) in the images to be selected is converted into log space using the formula f'(x, y) = log(f(x, y) + 1); the mean is taken in log space and converted back to the original space as the brightness mean value v, using the formula v = exp(mean(f'(x, y))) - 1; and the brightness evaluation value L1 is calculated from v using the formula L1 = 1 - abs(v - 0.5). This better matches the visual characteristics of the human eye, so the acquired image brightness information is more accurate.
In another embodiment, each image in the images to be selected is divided into at least one region block; the omnidirectional difference value w(x) of the pixels in each region block is calculated; the maximum omnidirectional difference value in each block is taken as the value of that block, and the values of all blocks form an omnidirectional difference map; the first weight map S is calculated from the omnidirectional difference map, and its mean is taken as S1 = mean(S). This realizes the calculation of the first weight map of the images to be selected in the time domain and ensures its accuracy, thereby improving the accuracy of the sharpness weight map of the image.
In another embodiment, before the maximum omnidirectional difference value in a region block is used as the omnidirectional difference value of each block, it is calculated from the omnidirectional difference values of the pixels within the block, realizing the calculation of the per-block maximum and further ensuring the accuracy of the first weight map.
In another embodiment, each image in the images to be selected is converted to the frequency domain using the formula F = fft(I) and filtered there; the energy values F1 of each filtered image exceeding the preset threshold T are calculated using the formula F1 = F(abs(F) > T), and the energy occupancy ratio S2 of each image in the frequency domain is calculated from them. This realizes the calculation of the energy occupancy ratio of the images to be selected in the frequency domain and ensures its accuracy, thereby improving the accuracy of the sharpness weight map of the image.
In another embodiment, the sharpness weight map L2 of each image in the images to be selected is calculated from the first weight map mean value and the energy occupancy ratio using the formula L2 = S1*β + S2*(1 - β), where β is the weight of the first weight map mean value. This realizes the calculation of the sharpness weight map of the images to be selected and ensures its accuracy, thereby improving the accuracy of the feature fusion value of the image.
In another embodiment, the feature fusion value L of the image is calculated from the brightness evaluation value and the sharpness weight map using the formula L = L1*L2, ensuring the accuracy of the feature fusion value and that the image of optimal quality is selected, so that the image selected in a given scene, especially among similar images, is the optimal one.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow diagram illustrating an image selection method according to an exemplary embodiment;
FIG. 2 is a flow diagram illustrating an image selection method according to another exemplary embodiment;
FIG. 3 is a block diagram illustrating an image selection apparatus according to an exemplary embodiment;
FIG. 4 illustrates a block diagram of an image selection apparatus according to another exemplary embodiment;
FIG. 5 is a block diagram illustrating an apparatus for image selection according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, like numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; rather, they are merely examples of apparatus and methods consistent with aspects of the invention as recited in the appended claims.
FIG. 1 is a flow chart illustrating a method of image selection according to an exemplary embodiment. As shown in fig. 1, the image selecting method according to this embodiment is applied to a terminal, where the terminal may be a mobile phone, a tablet computer, a notebook computer, etc. with a photographing function, or may be other devices with a photographing function, such as a video camera. The image selection method comprises the following steps.
In step S11, a luminance mean value of each of the images to be selected is calculated, and a luminance evaluation value of the image is calculated based on the luminance mean value.
In general, in the same scene, image brightness differs with the lighting conditions under which each image is captured. In the embodiment of the disclosure, the terminal analyzes the brightness of the images to be selected, calculates the brightness mean value of each image, and calculates a brightness evaluation value from it. The brightness evaluation value describes the brightness information of the image well and effectively reflects its illumination condition, such as whether the image deviates from the range optimal for the human eye and exhibits overexposure or excessive darkness.
In step S12, a first weight map mean value of each image in the images to be selected in the time domain is calculated, an energy occupancy ratio of each image in the frequency domain is calculated, and a sharpness weight map of each image is calculated according to the first weight map mean value and the energy occupancy ratio.
In general, in the same scene, the sharpness of an image differs with conditions such as focusing at capture time. In the embodiment of the disclosure, the terminal analyzes each image to be selected in the time domain and calculates its first weight map mean value; this mean value describes local regions of the image well and effectively captures its edge structure information. The terminal also analyzes each image in the frequency domain and calculates its energy occupancy ratio; this ratio describes the whole image well, including regions of lower contrast, and effectively captures the image's global information. The sharpness weight map of the image is then calculated from the time-domain first weight map mean value and the frequency-domain energy occupancy ratio. By combining the time-domain information, which attends to local detail, with the frequency-domain information, which attends to the image as a whole, the terminal describes image sharpness better and can effectively reflect conditions such as defocus and motion blur.
In step S13, the feature fusion value of each image is determined according to the brightness evaluation value and the sharpness weight map, and the image with the maximum feature fusion value among the images to be selected is used as the final selected image.
In the embodiment of the disclosure, the quality of the images to be selected is evaluated by combining their brightness and sharpness information. The terminal determines the feature fusion value of each image from the brightness evaluation value and the sharpness weight map and takes the image with the maximum feature fusion value as the final selected image. The optimal image can thus be chosen from the images to be selected in the same scene, especially from similar images; the user can be intelligently prompted as to which image has the best quality, and the remaining images can be intelligently deleted.
The image selection method of this embodiment calculates the brightness mean value of each image in the images to be selected and derives a brightness evaluation value from it; calculates the first weight map mean value of each image in the time domain and the energy occupancy ratio of each image in the frequency domain, and derives a sharpness weight map from them; and determines a feature fusion value from the brightness evaluation value and the sharpness weight map, taking the image with the maximum feature fusion value as the final selected image. The images to be selected are ranked by quality using both brightness and sharpness information and the image of optimal quality is selected, so that the image selected in a given scene, especially among similar images, is the optimal one.
FIG. 2 is a flow chart illustrating an image selection method according to another exemplary embodiment. Building on the embodiment shown in FIG. 1, this embodiment details how the brightness evaluation value and the sharpness weight map of an image are calculated and how the feature fusion value is determined from them. As shown in FIG. 2, the image selection method is used in a terminal and includes the following steps.
In step S21, at least two images to be selected are acquired.
In the embodiment of the disclosure, the terminal may acquire the images to be selected by shooting with its camera, or may retrieve images to be selected stored in advance in its storage module. The images to be selected are a plurality of images with similar content shot in the same scene.
In step S22, each image in the images to be selected is converted into log space; the mean is taken in log space and converted back to the original space as the brightness mean value; and the brightness evaluation value of the image is calculated from that mean using the formula L1 = 1 - abs(v - 0.5). Step S26 is then executed.
Specifically, each image f(x, y) in the images to be selected is converted into log space using the formula f'(x, y) = log(f(x, y) + 1); the mean is taken in log space and converted back to the original space as the brightness mean value v, using the formula v = exp(mean(f'(x, y))) - 1; and the brightness evaluation value L1 is calculated from v using the formula L1 = 1 - abs(v - 0.5). Here exp(mean(f'(x, y))) denotes an exponential function with the natural constant e as base and mean(f'(x, y)) as exponent, mean(f'(x, y)) denotes the mean value in log space of each image f'(x, y) in the images to be selected, and abs(v - 0.5) denotes the absolute value of the difference between the brightness mean and the reference point 0.5.
In the embodiment of the disclosure, each image in the images to be selected is mapped into log space, averaged there, and mapped back, and the result serves as the calculated brightness mean value; this matches the visual characteristics of the human eye better, so the acquired brightness information is more accurate. Meanwhile, as the evaluation criterion for image brightness, the formula L1 = 1 - abs(v - 0.5) compares the brightness mean of the image with the reference point 0.5 and judges the distance between them. The larger the distance, the smaller the brightness evaluation value L1 and the more over-exposed or dark the image appears; the smaller the distance, the larger L1 and the less over-exposed or dark the image.
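To make step S22 concrete, the following minimal Python/NumPy sketch (ours, not the patent's; it assumes a grayscale image normalized to [0, 1], which the 0.5 reference point suggests) computes the brightness evaluation value:

```python
import numpy as np

def luminance_evaluation(img):
    """Brightness evaluation value L1 = 1 - abs(v - 0.5).

    Assumes img is a grayscale array normalized to [0, 1]; the
    reference point 0.5 in the patent suggests this range.
    """
    f_log = np.log(img + 1.0)        # f'(x, y) = log(f(x, y) + 1)
    v = np.exp(f_log.mean()) - 1.0   # v = exp(mean(f'(x, y))) - 1
    return 1.0 - abs(v - 0.5)        # nearer mid-brightness scores higher
```

An image whose log-space mean sits near mid-brightness scores close to 1, while an over-exposed or very dark image scores lower.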
In step S23, each image in the images to be selected is divided into at least one region block; the omnidirectional difference value of the pixels in each region block is calculated; the maximum omnidirectional difference value in a region block is taken as the omnidirectional difference value of each block, and the values of all blocks form an omnidirectional difference map; a first weight map is calculated from the omnidirectional difference map, and the first weight map mean value is calculated using the formula S1 = mean(S). Step S25 is then executed.
Specifically, the omnidirectional difference value w(x) of the pixels in each region block is calculated according to a formula [not reproduced], where Xi and Xj denote pixel values in each region block of an image to be selected, Ω denotes the region formed in the time domain by all region blocks of each image, i and j are positive integers greater than or equal to 1, and i ≠ j. The first weight map S is calculated from the omnidirectional difference map according to a formula [not reproduced] that takes the maximum of the omnidirectional difference values v(Ω1) of the pixels in a region block, and the first weight map mean value is S1 = mean(S), where mean(S) denotes the mean value of the first weight map S.
In the embodiment of the disclosure, the terminal first divides each image in the images to be selected into several region blocks, each containing multiple pixels, and calculates the omnidirectional difference value of the pixels at preset positions within each block from the block's pixel values. For example, if each block contains 8 × 8 (64) pixels, the omnidirectional difference values of the 7 × 7 (49) interior pixels bounded on the right, bottom, and lower right can be calculated. Computing the values only at these preset positions removes the boundary points of each block and preserves the precision of the per-pixel omnidirectional difference values. Second, the omnidirectional difference value of a pixel expresses how much it differs from the surrounding pixels; the larger the difference, the higher the contrast in the block containing that pixel. The terminal takes the maximum omnidirectional difference value in a region block as the value of each block, so that the values within each block are the same, a large contrast difference between the pixels of each block is guaranteed, and every block exhibits good contrast. For example, if an image to be judged is divided into 4 region blocks with computed omnidirectional difference values of 0.2, 0.3, 0.5, and 0.8, the maximum value 0.8 is taken as the omnidirectional difference value of the 4 blocks, i.e., each block's value is 0.8. Third, the omnidirectional difference map serves as the first weight map of the image in the time domain, and the two are in direct correspondence; the terminal calculates the first weight map S from the difference map and takes its mean S1 = mean(S). The first weight map mean S1 describes the sharpness of the image in the time domain: the larger S1, the sharper the image; the smaller S1, the less sharp the image.
Further, before the maximum omnidirectional difference value in a region block is taken as the omnidirectional difference value of each block in step S23, the method further includes: calculating the maximum omnidirectional difference value in the block from the omnidirectional difference values of the pixels within it.
In the embodiment of the disclosure, the maximum omnidirectional difference value in a region block may be determined from the per-pixel values by one-by-one comparison, or computed with a maximum function; this is not limited or elaborated here.
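The per-pixel omnidirectional difference formula survives only as an image in the original, so the sketch below is a hedged approximation: w(x) is taken as the sum of absolute differences between a pixel and its 8 neighbours, and each block is assigned its own interior maximum (the per-block reading of the third implementation manner; the 4-block example above can also be read as a global maximum). The block size of 8 follows the 8 × 8 example.

```python
import numpy as np

def first_weight_map_mean(img, block=8):
    """Time-domain first weight map mean S1 (a sketch under assumptions).

    The patent's omnidirectional difference formula is not reproduced in
    the text, so w(x) is approximated as the sum of absolute differences
    between each pixel and its 8 neighbours; the per-block maximum then
    stands in for the whole block, per step S23.
    """
    h, w = img.shape
    pad = np.pad(img, 1, mode='edge')
    diff = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            # shifted copy of the image gives the (dy, dx) neighbour
            diff += np.abs(img - pad[1 + dy:1 + dy + h, 1 + dx:1 + dx + w])
    S = np.zeros_like(img)
    for y in range(0, h, block):          # each block takes its own maximum
        for x in range(0, w, block):
            S[y:y + block, x:x + block] = diff[y:y + block, x:x + block].max()
    return S.mean()                        # S1 = mean(S)
```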
In step S24, each image in the images to be selected is converted to the frequency domain and filtered there; the energy values of each filtered image that exceed a preset threshold are calculated, and the energy occupancy ratio of each image in the frequency domain is calculated from them according to a formula [not reproduced]. Step S25 is then executed.
Specifically, each image I in the images to be selected is converted to the frequency domain using the formula F = fft(I), where fft(I) denotes the Fourier transform of I. The energy values F1 of each filtered image that exceed the preset threshold T are calculated using the formula F1 = F(abs(F) > T), where abs(F) denotes the modulus of the energy values F of each filtered image and the preset threshold T is 5.
In the embodiment of the disclosure, the terminal first Fourier-transforms each image in the images to be selected using the formula F = fft(I), moving it from the time domain to the frequency domain. It then applies one pass of filtering in the frequency domain, computes the energy values whose modulus exceeds the preset threshold, and calculates the energy occupancy ratio S2 of each image in the frequency domain. The larger the energy occupancy ratio S2, the sharper the image and the higher its sharpness.
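The exact occupancy-ratio formula is likewise an image in the original; a natural reading, used in this sketch, is the share of spectral energy whose modulus exceeds the preset threshold T = 5, with the intermediate filtering pass treated as identity:

```python
import numpy as np

def energy_occupancy(img, T=5.0):
    """Frequency-domain energy occupancy S2 (a sketch under assumptions).

    The ratio formula is not reproduced in the text; this sketch reads S2
    as the share of spectral energy above the preset threshold T, and
    treats the patent's filtering pass as identity.
    """
    F = np.fft.fft2(img)              # F = fft(I)
    mag = np.abs(F)                   # abs(F): modulus of the spectrum
    F1 = mag[mag > T]                 # F1 = F(abs(F) > T)
    return F1.sum() / mag.sum()       # assumed ratio of retained energy
```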
In step S25, the sharpness weight map of each image in the images to be selected is calculated from the first weight map mean value and the energy occupancy ratio using the formula L2 = S1*β + S2*(1 - β).
Here β is the weight given to the first weight map mean value.
In the embodiment of the disclosure, the time domain and the frequency domain of the image are combined by weighted fusion: the terminal calculates the sharpness weight map of each image from the first weight map mean value obtained in step S23 and the energy occupancy ratio obtained in step S24. The larger the sharpness weight map L2, the sharper the image; the smaller L2, the less sharp the image.
In step S26, the feature fusion value of the image is calculated from the brightness evaluation value and the sharpness weight map using the formula L = L1*L2.
In the embodiment of the disclosure, the brightness and sharpness features of the image are fused: the terminal calculates the feature fusion value from the brightness evaluation value obtained in step S22 and the sharpness weight map obtained in step S25. The larger the feature fusion value L, the better the combined characteristics of the image and the better its quality.
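Putting steps S25 and S26 together, a small sketch follows (β is not fixed by the text, so 0.5 is an illustrative default of ours; S1 and S2 here are the scalar summaries computed above):

```python
def feature_fusion(L1, S1, S2, beta=0.5):
    """Feature fusion value L = L1 * L2, with L2 = S1*beta + S2*(1-beta).

    beta, the weight of the time-domain term, is not specified by the
    text; 0.5 is an illustrative default.
    """
    L2 = S1 * beta + S2 * (1.0 - beta)  # weighted time/frequency fusion
    return L1 * L2                      # brightness x sharpness
```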
In step S27, the image with the largest feature fusion value in the images to be selected is used as the final selected image.
In the embodiment of the disclosure, the quality of the images to be selected is evaluated by combining their brightness and sharpness information: the terminal determines the feature fusion value of each image from the brightness evaluation value and the sharpness weight map via the formula L = L1*L2 and takes the image with the maximum feature fusion value as the final selected image. The optimal image can thus be chosen from the images to be selected in the same scene, especially from similar images; the user can be intelligently prompted as to which image has the best quality, and the remaining images can be intelligently deleted.
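A usage example tying the sketches together for step S27 (it assumes the helper functions defined above and a list of normalized grayscale arrays):

```python
import numpy as np

def select_best(images):
    """Pick the image with the maximum feature fusion value (step S27).

    Reuses the sketch functions above; images is a list of normalized
    grayscale arrays of similar content shot in the same scene.
    """
    scores = [feature_fusion(luminance_evaluation(im),
                             first_weight_map_mean(im),
                             energy_occupancy(im))
              for im in images]
    return images[int(np.argmax(scores))]
```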
The image selection method of this embodiment ranks the images to be selected by quality using both brightness and sharpness information and selects the image of optimal quality, so that the image selected in a given scene, especially among similar images, is the optimal one. Converting each image into log space, averaging there, and converting back to obtain the brightness mean value, then applying the formula L1 = 1 - abs(v - 0.5), matches the visual characteristics of the human eye and makes the acquired brightness information more accurate. Dividing each image into region blocks, computing the omnidirectional difference values, taking the block maximum, assembling the omnidirectional difference map, deriving the first weight map, and taking S1 = mean(S) realizes the calculation of the first weight map in the time domain and ensures its accuracy, which in turn improves the accuracy of the sharpness weight map. Converting each image to the frequency domain, filtering it there, and computing the energy values above the preset threshold realizes the calculation of the frequency-domain energy occupancy ratio and ensures its accuracy, further improving the accuracy of the sharpness weight map of the image.
FIG. 3 is a block diagram illustrating an image selection device according to an exemplary embodiment. Referring to fig. 3, the apparatus includes: a first calculation module 31, a second calculation module 32 and a selection module 33.
The first calculating module 31 is configured to calculate a brightness mean value of each image in the images to be selected, and determine a brightness evaluation value of the image according to the brightness mean value.
The second calculating module 32 is configured to calculate a first weight map mean value of each image in the image to be selected in the time domain, calculate an energy occupancy ratio of each image in the image to be selected in the frequency domain, and calculate a sharpness weight map of each image in the image to be selected according to the first weight map mean value and the energy occupancy ratio.
The selecting module 33 is configured to determine a feature fusion value of the image according to the luminance evaluation value calculated by the first calculating module 31 and the sharpness weight map calculated by the second calculating module 32, and take the image with the largest feature fusion value in the images to be selected as a final selected image.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The image selection apparatus of this embodiment calculates the brightness mean value of each image in the images to be selected and derives a brightness evaluation value from it; calculates the first weight map mean value of each image in the time domain and the energy occupancy ratio of each image in the frequency domain, and derives a sharpness weight map from them; and determines a feature fusion value from the brightness evaluation value and the sharpness weight map, taking the image with the maximum feature fusion value as the final selected image. The images to be selected are ranked by quality using both brightness and sharpness information and the image of optimal quality is selected, so that the image selected in a given scene, especially among similar images, is the optimal one.
FIG. 4 illustrates a block diagram of an image selection apparatus according to another exemplary embodiment. Referring to fig. 4, on the basis of the embodiment shown in fig. 3, the apparatus further includes: an acquisition module 34.
The acquisition module 34 is configured to acquire at least two images to be selected.
The first calculation module 31 includes: a space conversion submodule 311, a first determination submodule 312, and a first calculation submodule 313.
The space conversion submodule 311 is configured to convert each image f(x, y) in the images to be selected into log space using the formula f'(x, y) = log(f(x, y) + 1).
The first determination submodule 312 is configured to take the mean of each image in the log space produced by the space conversion submodule 311 and convert it back to the original space as the brightness mean value v, using the formula v = exp(mean(f'(x, y))) - 1.
Here exp(mean(f'(x, y))) denotes an exponential function with the natural constant e as base and mean(f'(x, y)) as exponent, and mean(f'(x, y)) denotes the mean value in log space of each image f'(x, y) in the images to be selected.
The first calculation submodule 313 is configured to calculate the brightness evaluation value L1 of the image from the brightness mean value v determined by the first determination submodule 312, using the formula L1 = 1 - abs(v - 0.5).
Here abs(v - 0.5) denotes the absolute value of the difference between the brightness mean and the reference point 0.5.
The second calculation module 32 includes: a blocking submodule 3211, a block calculation submodule 3212, a second determination submodule 3213, a weight map calculation submodule 3214, and a mean calculation submodule 3215.
The blocking submodule 3211 is configured to divide each image in the images to be selected into at least one region block.
The block calculation submodule 3212 is configured to calculate, according to a formula [not reproduced], the omnidirectional difference value w(x) of the pixels in each region block divided by the blocking submodule 3211.
Here Xi and Xj denote pixel values in each region block of an image to be selected, Ω denotes the region formed in the time domain by all region blocks of each image, i and j are positive integers greater than or equal to 1, and i ≠ j.
The second determination submodule 3213 is configured to take the maximum omnidirectional difference value within a region block as the omnidirectional difference value of that block, the omnidirectional difference values of all region blocks forming an omnidirectional difference map.
The weight map calculation submodule 3214 is configured to calculate the first weight map S from the omnidirectional difference map determined by the second determination submodule 3213, according to a formula [not reproduced] that takes the maximum of the omnidirectional difference values v(Ω1) of the pixels in a region block.
The mean calculation submodule 3215 is configured to calculate the first weight map mean value S1 from the first weight map S produced by the weight map calculation submodule 3214, using the formula S1 = mean(S), where mean(S) denotes the mean value of the first weight map S.
Further, the second calculation module 32 further includes a maximum value calculation submodule, configured to calculate the maximum omnidirectional difference value in a region block from the omnidirectional difference values of the pixels within it, before the second determination submodule 3213 takes that value as the omnidirectional difference value of the block.
Further, the second calculation module further includes: a frequency domain transformation submodule 3221, a filtering submodule 3222, an energy value calculation submodule 3223, and an occupancy ratio calculation submodule 3224.
The frequency domain transformation submodule 3221 is configured to convert each image I in the images to be selected to the frequency domain using the formula F = fft(I).
Here fft(I) denotes the Fourier transform of each image I in the images to be selected.
The filtering submodule 3222 is configured to filter, in the frequency domain, each image converted by the frequency domain transformation submodule 3221.
The energy value calculation submodule 3223 is configured to calculate the energy values F1 of each image filtered by the filtering submodule 3222 that exceed the preset threshold T, using the formula F1 = F(abs(F) > T).
Here abs(F) denotes the modulus of the energy values F of each filtered image, and the preset threshold T is 5.
The occupancy ratio calculation submodule 3224 is configured to calculate, from the energy values F1 computed by the energy value calculation submodule 3223 and according to a formula [not reproduced], the energy occupancy ratio S2 of each image in the frequency domain.
Further, the second calculation module further comprises: a second calculation submodule 323.
The second calculation submodule 323 is configured to calculate the sharpness weight map L2 of each image in the images to be selected from the first weight map mean value S1 calculated by the mean calculation submodule 3215 and the energy occupancy ratio S2 calculated by the occupancy ratio calculation submodule 3224, using the formula L2 = S1*β + S2*(1 - β).
Here β is the weight given to the first weight map mean value.
Further, the selecting module 33 includes: a selection calculation submodule 331.
The selection calculation submodule 331 is configured to calculate the feature fusion value L of the image from the brightness evaluation value L1 calculated by the first calculation submodule 313 and the sharpness weight map L2 calculated by the second calculation submodule 323, using the formula L = L1*L2.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The image selection apparatus of this embodiment ranks the images to be selected by quality using both brightness and sharpness information and selects the image of optimal quality, so that the image selected in a given scene, especially among similar images, is the optimal one. Converting each image into log space, averaging there, and converting back to obtain the brightness mean value, then applying the formula L1 = 1 - abs(v - 0.5), matches the visual characteristics of the human eye and makes the acquired brightness information more accurate. Dividing each image into region blocks, computing the omnidirectional difference values, taking the block maximum, assembling the omnidirectional difference map, deriving the first weight map, and taking S1 = mean(S) realizes the calculation of the first weight map in the time domain and ensures its accuracy, which in turn improves the accuracy of the sharpness weight map. Converting each image to the frequency domain, filtering it there, and computing the energy values above the preset threshold realizes the calculation of the frequency-domain energy occupancy ratio and ensures its accuracy, further improving the accuracy of the sharpness weight map of the image.
FIG. 5 is a block diagram illustrating an apparatus for image selection according to an exemplary embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 5, the apparatus 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation at the device 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power component 806 provides power to the various components of device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the device 800. For example, the sensor assembly 814 may detect the open/closed state of the device 800 and the relative positioning of components, such as the display and keypad of the device 800. The sensor assembly 814 may also detect a change in position of the device 800 or of a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in temperature of the device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, communications component 816 further includes a Near Field Communications (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the device 800 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium having instructions therein, which when executed by a processor of a mobile terminal, enable the mobile terminal to perform a method of image selection, the method comprising:
calculating the brightness mean value of each image in the images to be selected, and calculating the brightness evaluation value of the images according to the brightness mean value;
calculating a first weight graph mean value of each image in the images to be selected in the time domain, calculating an energy occupation ratio of each image in the images to be selected in the frequency domain, and calculating a definition weight graph of each image in the images to be selected according to the first weight graph mean value and the energy occupation ratio;
and determining a feature fusion value of the image according to the brightness evaluation value and the definition weight graph, and taking the image with the maximum feature fusion value in the image to be selected as a final selected image.
The method for calculating the brightness mean value of each image in the images to be selected and calculating the brightness evaluation value of the image according to the brightness mean value comprises the following steps:
converting each image f(x, y) in the images to be selected into log space by adopting the formula f'(x, y) = log(f(x, y) + 1);
taking the mean value of each image in log space and converting it back into the original space as the brightness mean value v, by adopting the formula v = exp(mean(f'(x, y))) - 1;
calculating, according to the brightness mean value v, the luminance evaluation value L₁ of the image by adopting the formula L₁ = 1 - abs(v - 0.5);
wherein exp(mean(f'(x, y))) represents an exponential function with the natural constant e as the base and mean(f'(x, y)) as the exponent, and mean(f'(x, y)) represents the mean value of each image f'(x, y) in the images to be selected in log space; abs(v - 0.5) represents the absolute value of the difference between the brightness mean value and the reference point 0.5.
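For illustration only, here is a minimal sketch of this brightness evaluation in Python with NumPy, assuming grayscale images with intensities normalized to [0, 1] (the 0.5 reference point suggests normalized values); the function name brightness_score is ours, not the patent's:

```python
import numpy as np

def brightness_score(img):
    """Brightness evaluation value L1 as described above.

    img: 2-D array of gray values, assumed normalized to [0, 1].
    """
    # Log-space conversion: f'(x, y) = log(f(x, y) + 1)
    log_img = np.log(img + 1.0)
    # Mean in log space, mapped back to the original space: v = exp(mean) - 1
    v = np.exp(log_img.mean()) - 1.0
    # Images whose mean brightness is near mid-gray (0.5) score highest
    return 1.0 - abs(v - 0.5)
```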
The calculating of the first weight graph mean value of each image in the images to be selected in the time domain includes:
dividing each image in the images to be selected into at least one area block;
calculating the omnibearing differential value w(x) of the pixels in each area block;
wherein Xi and Xj represent pixel values in each area block of each image in the images to be selected, Ω represents the region formed by all the area blocks of each image in the images to be selected in the time domain, i and j are each positive integers greater than or equal to 1, and i is not equal to j;
taking the maximum omnibearing differential value in the area blocks as the omnibearing differential value of each area block, and forming an omnibearing differential graph of the area blocks by the omnibearing differential values of all the area blocks;
calculating a first weight map S according to the omnibearing difference map of the area blocks;
calculating the first weight map mean S₁ by adopting the formula S₁ = mean(S);
wherein the first weight map takes the maximum of the omnibearing difference values v(ω₁) of the pixels in the area blocks, and mean(S) represents taking the mean of the first weight map S.
Before the maximum omni-directional difference value in the region block is used as the omni-directional difference value of each region block, the method further comprises the following steps:
the maximum omni-directional differential value in a region block is calculated from the omni-directional differential values of the pixels within each region block.
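As a hedged illustration of this time-domain step, the sketch below assumes the omnibearing differential value of a block is the maximum absolute difference between any pair of its pixels (which reduces to max minus min), and collapses the unreproduced first-weight-map formula into a direct mean over the block values; block_size and first_weight_mean are illustrative names, not the patent's:

```python
import numpy as np

def first_weight_mean(img, block_size=8):
    """Sketch of the time-domain first weight map mean S1.

    Assumes each block's omnibearing differential value is the maximum
    absolute pixel-pair difference within it; the patent's exact
    formulas are not reproduced in this text.
    """
    h, w = img.shape
    block_values = []
    for y in range(0, h, block_size):
        for x in range(0, w, block_size):
            block = img[y:y + block_size, x:x + block_size]
            # Max |Xi - Xj| over a block equals max(block) - min(block)
            block_values.append(block.max() - block.min())
    # The block values form the difference map; S1 = mean(S)
    return float(np.mean(block_values))
```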
The method for calculating the energy occupancy ratio of each image in the images to be selected in the frequency domain comprises the following steps:
transferring each image I in the images to be selected to the frequency domain by adopting the formula F = fft(I);
wherein fft(I) represents that each image I in the images to be selected is subjected to a Fourier transform;
filtering each image in the images to be selected in the frequency domain;
calculating, by adopting the formula F₁ = abs(F) > T, the energy values F₁ of each filtered image in the images to be selected that are greater than the preset threshold T;
wherein abs(F) represents the modulus of the energy value F of each filtered image to be selected, and the preset threshold T is 5;
calculating the energy occupancy ratio S₂ of each image in the images to be selected in the frequency domain.
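A similarly hedged sketch of the frequency-domain step: the exact ratio formula and the filtering operation are not reproduced above, so the code assumes S₂ is the fraction of Fourier coefficients whose modulus exceeds the threshold T and omits any separate filter:

```python
import numpy as np

def energy_ratio(img, T=5.0):
    """Sketch of the frequency-domain energy occupancy ratio S2.

    Assumes S2 is the fraction of coefficients with abs(F) > T;
    the description's filtering step is elided here.
    """
    F = np.fft.fft2(img)   # F = fft(I)
    mask = np.abs(F) > T   # F1 = abs(F) > T, with T = 5 per the text
    return float(mask.mean())
```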
The method for calculating the definition weight map of each image in the images to be selected according to the first weight map mean value and the energy occupation ratio comprises the following steps:
calculating, according to the first weight map mean S₁ and the energy occupancy ratio S₂, the definition weight map L₂ of each image in the images to be selected by adopting the formula L₂ = S₁*β + S₂*(1 - β);
Wherein β is a weight value of the first weight map mean.
Determining a feature fusion value of the image according to the brightness evaluation value and the definition weight map, wherein the method comprises the following steps:
calculating the feature fusion value L of the image by adopting the formula L = L₁*L₂, according to the luminance evaluation value L₁ and the sharpness weight map L₂.
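The two features then combine exactly as the formulas above state; the text leaves β open, so the default below is an arbitrary placeholder:

```python
def fusion_value(L1, S1, S2, beta=0.5):
    """Feature fusion value L = L1 * L2, where L2 = S1*beta + S2*(1 - beta)."""
    L2 = S1 * beta + S2 * (1.0 - beta)
    return L1 * L2
```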
Before calculating the brightness mean value of each image in the images to be selected, the method further comprises the following steps:
at least two images to be selected are obtained.
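Combining the sketches above into a hypothetical end-to-end selector over a burst of candidates (all names and defaults are ours):

```python
import numpy as np

def select_best(images, beta=0.5, block_size=8, T=5.0):
    """Return the candidate image with the largest feature fusion value L.

    images: list of 2-D arrays normalized to [0, 1]; relies on the
    illustrative helpers sketched earlier.
    """
    scores = [
        fusion_value(brightness_score(img),
                     first_weight_mean(img, block_size),
                     energy_ratio(img, T),
                     beta)
        for img in images
    ]
    # The image with the maximum fusion value is the final selection
    return images[int(np.argmax(scores))]
```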
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.
Claims (16)
1. An image selection method, comprising:
calculating the brightness mean value of each image in the images to be selected, and calculating the brightness evaluation value of the images according to the brightness mean value;
calculating a first weight graph mean value of each image in the images to be selected in a time domain, calculating an energy occupation ratio of each image in the images to be selected in a frequency domain, and calculating a definition weight graph of each image in the images to be selected according to the first weight graph mean value and the energy occupation ratio;
and determining a feature fusion value of the image according to the brightness evaluation value and the definition weight map, and taking the image with the maximum feature fusion value in the image to be selected as a final selected image.
2. The method according to claim 1, wherein the calculating a brightness mean value of each image in the images to be selected and calculating a brightness evaluation value of the image according to the brightness mean value comprises:
converting each image f(x, y) in the images to be selected into log space by adopting the formula f'(x, y) = log(f(x, y) + 1);
taking the mean value of each image in log space and converting it back into the original space as the brightness mean value v, by adopting the formula v = exp(mean(f'(x, y))) - 1;
calculating, according to the brightness mean value v, the luminance evaluation value L₁ of the image by adopting the formula L₁ = 1 - abs(v - 0.5);
wherein exp(mean(f'(x, y))) represents an exponential function with the natural constant e as the base and mean(f'(x, y)) as the exponent, and mean(f'(x, y)) represents the mean value of each image f'(x, y) in the images to be selected in log space; abs(v - 0.5) represents the absolute value of the difference between the brightness mean value and the reference point 0.5.
3. The method according to claim 2, wherein the calculating the first weight map mean value of each image in the image to be selected in the time domain comprises:
dividing each image in the images to be selected into at least one area block;
calculating the omnibearing difference value w(x) of the pixels in each area block;
wherein Xi and Xj represent pixel values in each area block of each image in the images to be selected, Ω represents the region formed by all the area blocks of each image in the images to be selected in the time domain, i and j are each positive integers greater than or equal to 1, and i is not equal to j;
taking the maximum omnibearing differential value in the area blocks as the omnibearing differential value of each area block, and forming an omnibearing differential graph of the area blocks by the omnibearing differential values of all the area blocks;
calculating the first weight map S according to the omnibearing difference map of the area blocks;
calculating the first weight map mean S₁ by adopting the formula S₁ = mean(S);
wherein the first weight map takes the maximum of the omnibearing difference values v(ω₁) of the pixels in the area blocks, and mean(S) represents taking the mean of the first weight map S.
4. The method of claim 3, wherein before the using the largest omni-directional difference value in the region blocks as the omni-directional difference value for each region block, further comprising:
the maximum omni-directional differential value in a region block is calculated from the omni-directional differential values of the pixels within each region block.
5. The method according to claim 4, wherein the calculating the energy occupancy ratio of each image in the images to be selected in the frequency domain comprises:
transferring each image I in the images to be selected to the frequency domain by adopting the formula F = fft(I);
wherein fft(I) represents that each image I in the images to be selected is subjected to a Fourier transform;
filtering each image in the images to be selected in the frequency domain;
calculating, by adopting the formula F₁ = abs(F) > T, the energy values F₁ of each filtered image in the images to be selected that are greater than the preset threshold T;
wherein abs(F) represents the modulus of the energy value F of each filtered image to be selected, and the preset threshold T is 5;
calculating the energy occupancy ratio S₂ of each image in the images to be selected in the frequency domain.
6. The method according to claim 5, wherein the calculating the sharpness weight map of each image of the images to be selected according to the first weight map mean and the energy occupancy ratio comprises:
calculating, according to the first weight map mean S₁ and the energy occupancy ratio S₂, the definition weight map L₂ of each image in the images to be selected by adopting the formula L₂ = S₁*β + S₂*(1 - β);
Wherein β is a weight value of the first weight map mean.
7. The method according to claim 6, wherein determining a feature fusion value of an image according to the luminance evaluation value and the sharpness weight map comprises:
calculating the feature fusion value L of the image by adopting the formula L = L₁*L₂, according to the brightness evaluation value L₁ and the sharpness weight map L₂.
8. The method according to any one of claims 1 to 7, wherein before calculating the mean value of the luminance of each image of the images to be selected, the method further comprises:
at least two images to be selected are obtained.
9. An image selecting apparatus, comprising:
the first calculation module is configured to calculate a brightness mean value of each image in the images to be selected and determine a brightness evaluation value of the images according to the brightness mean value;
the second calculation module is configured to calculate a first weight map mean value of each image in the images to be selected in the time domain, calculate an energy occupancy ratio of each image in the images to be selected in the frequency domain, and calculate a definition weight map of each image in the images to be selected according to the first weight map mean value and the energy occupancy ratio;
the selecting module is configured to determine a feature fusion value of an image according to the brightness evaluation value calculated by the first calculating module and the definition weight map calculated by the second calculating module, and take the image with the maximum feature fusion value in the images to be selected as a final selected image.
10. The apparatus of claim 9, wherein the first computing module comprises:
the space conversion sub-module is configured to convert each image f(x, y) in the images to be selected into log space by adopting the formula f'(x, y) = log(f(x, y) + 1);
a first determining sub-module configured to take the mean value of the images in the log space converted by the space conversion sub-module and convert it back into the original space as the brightness mean value v, by adopting the formula v = exp(mean(f'(x, y))) - 1;
a first calculation sub-module configured to calculate, according to the brightness mean value v determined by the first determining sub-module, the luminance evaluation value L₁ of the image by adopting the formula L₁ = 1 - abs(v - 0.5);
wherein exp(mean(f'(x, y))) represents an exponential function with the natural constant e as the base and mean(f'(x, y)) as the exponent, and mean(f'(x, y)) represents the mean value of each image f'(x, y) in the images to be selected in log space; abs(v - 0.5) represents the absolute value of the difference between the brightness mean value and the reference point 0.5.
11. The apparatus of claim 10, wherein the second computing module comprises:
the blocking submodule is configured to divide each image in the images to be selected into at least one area block;
a block calculation sub-module configured to calculate the omnibearing differential value w(x) of the pixels in each area block divided by the partitioning sub-module;
wherein Xi and Xj represent pixel values in each area block of each image in the images to be selected, Ω represents the region formed by all the area blocks of each image in the images to be selected in the time domain, i and j are each positive integers greater than or equal to 1, and i is not equal to j;
a second determination submodule configured to take a maximum omnidirectional difference value in the region blocks as an omnidirectional difference value of each region block, the omnidirectional difference values of all the region blocks constituting an omnidirectional difference map of the region blocks;
a weight map calculation sub-module configured to calculate the first weight map S according to the omni-directional difference map of the area blocks determined by the second determination sub-module;
a mean value calculation sub-module configured to calculate the first weight map mean S₁ by adopting the formula S₁ = mean(S), according to the first weight map S calculated by the weight map calculation sub-module;
wherein the first weight map takes the maximum of the omnibearing difference values v(ω₁) of the pixels in the area blocks, and mean(S) represents taking the mean of the first weight map S.
12. The apparatus of claim 11, wherein the second computing module further comprises: a maximum value calculation submodule;
a maximum value calculating submodule configured to calculate a maximum omnidirectional difference value in the region block from the omnidirectional difference values of the pixels within each region block before the maximum omnidirectional difference value in the region block is taken as the omnidirectional difference value of each region block by the second determining submodule.
13. The apparatus of claim 12, wherein the second computing module further comprises:
the frequency domain transformation sub-module is configured to convert each image I in the images to be selected to the frequency domain by adopting the formula F = fft(I);
wherein fft(I) represents that each image I in the images to be selected is subjected to a Fourier transform;
the filtering submodule is configured to filter each image to be selected in the frequency domain converted by the frequency domain conversion submodule;
an energy value calculation sub-module configured to calculate, by adopting the formula F₁ = abs(F) > T, the energy values F₁ of each image in the images to be selected filtered by the filtering sub-module that are greater than the preset threshold T;
wherein abs(F) represents the modulus of the energy value F of each filtered image to be selected, and the preset threshold T is 5;
an occupancy ratio calculation sub-module configured to calculate, according to the energy values F₁ calculated by the energy value calculation sub-module, the energy occupancy ratio S₂ of each image in the images to be selected in the frequency domain.
14. The apparatus of claim 13, wherein the second computing module further comprises:
a second calculation sub-module configured to calculate, according to the first weight map mean S₁ calculated by the mean value calculation sub-module and the energy occupancy ratio S₂ calculated by the occupancy ratio calculation sub-module, the definition weight map L₂ of each image in the images to be selected by adopting the formula L₂ = S₁*β + S₂*(1 - β);
Wherein β is a weight value of the first weight map mean.
15. The apparatus of claim 14, wherein the selecting module comprises:
a selection calculation sub-module configured to calculate the feature fusion value L of the image by adopting the formula L = L₁*L₂, according to the luminance evaluation value L₁ calculated by the first calculation sub-module and the sharpness weight map L₂ calculated by the second calculation sub-module.
16. The apparatus according to any one of claims 9 to 15, further comprising:
the acquisition module is configured to acquire at least two images to be selected.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510988384.7A CN105654470B (en) | 2015-12-24 | 2015-12-24 | Image choosing method, apparatus and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105654470A true CN105654470A (en) | 2016-06-08 |
CN105654470B CN105654470B (en) | 2018-12-11 |
Family ID: 56476785
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510988384.7A Active CN105654470B (en) | 2015-12-24 | 2015-12-24 | Image choosing method, apparatus and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105654470B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101419666A (en) * | 2007-09-28 | 2009-04-29 | 富士胶片株式会社 | Image processing apparatus, image capturing apparatus, image processing method and recording medium |
CN102209196A (en) * | 2010-03-30 | 2011-10-05 | 株式会社尼康 | Image processing device and image estimating method |
CN103218778A (en) * | 2013-03-22 | 2013-07-24 | 华为技术有限公司 | Image and video processing method and device |
CN103618855A (en) * | 2013-12-03 | 2014-03-05 | 厦门美图移动科技有限公司 | Photographing method and device for automatically selecting optimal image |
US20150071547A1 (en) * | 2013-09-09 | 2015-03-12 | Apple Inc. | Automated Selection Of Keeper Images From A Burst Photo Captured Set |
CN104604214A (en) * | 2012-09-25 | 2015-05-06 | 三星电子株式会社 | Method and apparatus for generating photograph image |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110807745A (en) * | 2019-10-25 | 2020-02-18 | 北京小米智能科技有限公司 | Image processing method and device and electronic equipment |
CN110807745B (en) * | 2019-10-25 | 2022-09-16 | 北京小米智能科技有限公司 | Image processing method and device and electronic equipment |
CN111161198A (en) * | 2019-12-11 | 2020-05-15 | 国网北京市电力公司 | Control method, device, storage medium, and processor of imaging device |
CN111369531A (en) * | 2020-03-04 | 2020-07-03 | 浙江大华技术股份有限公司 | Image definition grading method, equipment and storage device |
CN111369531B (en) * | 2020-03-04 | 2023-09-01 | 浙江大华技术股份有限公司 | Image definition scoring method, device and storage device |
CN116678827A (en) * | 2023-05-31 | 2023-09-01 | 天芯电子科技(江阴)有限公司 | A detection system for LGA packaging pins of large current power supply module |
Also Published As
Publication number | Publication date |
---|---|
CN105654470B (en) | 2018-12-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11061202B2 (en) | Methods and devices for adjusting lens position | |
RU2628494C1 (en) | Method and device for generating image filter | |
CN106778773B (en) | Method and device for positioning target object in picture | |
CN106331504B (en) | Shooting method and device | |
EP3179711A2 (en) | Method and apparatus for preventing photograph from being shielded | |
CN108154465B (en) | Image processing method and device | |
CN107944367B (en) | Face key point detection method and device | |
CN107948510B (en) | Focal length adjusting method and device and storage medium | |
CN105631803B (en) | The method and apparatus of filter processing | |
CN108154466B (en) | Image processing method and device | |
CN109784164B (en) | Foreground identification method and device, electronic equipment and storage medium | |
CN111756989A (en) | Method and device for controlling focusing of lens | |
KR20210053121A (en) | Method and apparatus for training image processing model, and storage medium | |
CN105654470B (en) | Image choosing method, apparatus and system | |
CN108040204B (en) | Image shooting method and device based on multiple cameras and storage medium | |
CN105528765A (en) | Method and device for processing image | |
CN109408022A (en) | Display methods, device, terminal and storage medium | |
CN106469446B (en) | Depth image segmentation method and segmentation device | |
CN105472228B (en) | Image processing method and device and terminal | |
CN108108668B (en) | Age prediction method and device based on image | |
CN111275641A (en) | Image processing method and device, electronic equipment and storage medium | |
CN118214950A (en) | Image stitching method, device and storage medium | |
CN105391942B (en) | Automatic photographing method and device | |
CN112733599A (en) | Document image processing method and device, storage medium and terminal equipment | |
CN109447929B (en) | Image synthesis method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |