CN119211736A - Image processing method and related device - Google Patents
Image processing method and related device
- Publication number: CN119211736A
- Application number: CN202411598877.5A
- Authority: CN (China)
- Prior art keywords: brightness, value, image, luminance, frame image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/71—Circuitry for evaluating the brightness variation
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/76—Circuitry for compensating brightness variation in the scene by influencing the image signals
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
Abstract
The application provides an image processing method and a related device. In the method, before the brightness value of an Mth frame image is adjusted, an electronic device can acquire the M-1 frame images captured before the Mth frame image, where M is an integer greater than 1. The electronic device then determines a target brightness threshold according to the brightness values respectively corresponding to the M-1 frame images. When the brightness value of the Mth frame image and the target brightness threshold meet a preset condition, the electronic device adjusts the brightness value of the Mth frame image. In this way, on the basis that the brightness of the adjusted Mth frame image approaches the real brightness, the occurrence of brightness jumps in the Mth frame image is reduced, and the user's photographing experience is improved.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and a related device.
Background
With the continuous development of photography technology in electronic devices (such as smartphones), higher requirements are also being placed on the quality of images acquired by the electronic devices. For example, in the case where there is a deviation between the brightness of an image and the true brightness of a photographed scene, the electronic device can improve the imaging quality of the image by adjusting the brightness value of the image.
However, if the electronic device adjusts the brightness value of every frame of image, the brightness of one or more frames may differ greatly from that of adjacent frames, causing abrupt brightness changes and degrading the user's shooting experience.
Disclosure of Invention
In the image processing method and related device provided by the application, the electronic device adjusts the brightness value only of images that meet a preset condition, so that the occurrence of brightness jumps is reduced and imaging quality is improved.
In a first aspect, the present application provides an image processing method, the method comprising:
acquiring an M-1 frame image and an M-th frame image, wherein M is an integer greater than 1;
determining a target brightness threshold based on brightness values respectively corresponding to the M-1 frame images;
adjusting the brightness value of the Mth frame image according to the target brightness threshold value to obtain a first image;
and displaying the first image.
Optionally, the target luminance threshold is used to determine a difference between the luminance value of the mth frame image and the luminance value corresponding to the previous M-1 frame image, so that it may be determined whether the luminance value of the mth frame image needs to be adjusted based on the difference.
In the above method, the brightness jump phenomenon arises from the difference between the brightness of a frame image before processing and the brightness of the previous frame or frames; if the electronic device adjusts the brightness value of such a frame image, the difference may be increased further, so that the brightness of the processed frame image differs greatly from that of the previous frame or frames, causing a brightness jump. Therefore, before adjusting the brightness value of the Mth frame image, the electronic device can acquire the M-1 frame images captured before the Mth frame image, and then determine the target brightness threshold according to the brightness values respectively corresponding to the M-1 frame images. The difference between the brightness of the Mth frame image and the brightness of the previous M-1 frame images is judged based on the target brightness threshold, thereby determining whether to adjust the brightness value of the Mth frame image. Compared with adjusting the brightness value of every frame of image, adjusting the brightness value of an image based on the target brightness threshold reduces the occurrence of brightness jumps while keeping the brightness of the adjusted image close to the real brightness, and improves the user's photographing experience.
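For illustration only (the application does not specify an implementation), the flow above can be sketched as follows; the quantile-based threshold and the gain-based adjustment are assumptions introduced here, not the patent's prescribed method, and an 8-bit luminance scale is assumed:

```python
import numpy as np

def adjust_mth_frame(prev_frames, frame_m, quantile=0.25):
    """Illustrative sketch of the overall flow: derive a target
    brightness threshold from the brightness values of the previous
    M-1 frames, and brighten the Mth frame only when its brightness
    falls below that threshold, so the adjusted result stays close
    to the recent frames and no brightness jump is introduced."""
    # Brightness values respectively corresponding to the M-1 frames
    prev_luma = np.array([float(f.mean()) for f in prev_frames])
    # Assumed choice: a low quantile of the recent brightness values
    # serves as the target brightness threshold.
    target_threshold = np.quantile(prev_luma, quantile)
    mean_m = float(frame_m.mean())
    if mean_m < target_threshold:  # preset condition (assumed form)
        gain = target_threshold / max(mean_m, 1e-6)
        frame_m = np.clip(frame_m * gain, 0, 255)
    return frame_m  # the "first image" that is displayed
```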
In a possible implementation manner of the first aspect, the target luminance threshold includes a first luminance threshold and a second luminance threshold, and the adjusting the luminance value of the mth frame image according to the target luminance threshold includes:
when a first brightness average value of the Mth frame image is smaller than the first brightness threshold value, adjusting the brightness value of the Mth frame image, wherein the first brightness average value is determined according to the brightness value corresponding to the pixel contained in a first area in the Mth frame image, and the brightness value corresponding to the pixel contained in the first area is smaller than a first preset threshold value and larger than a second preset threshold value;
When a second brightness average value of the Mth frame image is larger than the second brightness threshold value, adjusting the brightness value of the Mth frame image, wherein the second brightness average value is determined according to the brightness value corresponding to the pixel contained in a second area in the Mth frame image, and the brightness value corresponding to the pixel contained in the second area is smaller than or equal to the second preset threshold value;
and adjusting the brightness value of the M-th frame image under the condition that the first brightness average value is smaller than the first brightness threshold value and the second brightness average value is larger than the second brightness threshold value, wherein the first brightness threshold value is larger than the second brightness threshold value.
Optionally, the target brightness threshold is related to an area of the electronic device for brightness adjustment of the image. For example, if the electronic device performs brightness adjustment on a region (e.g., a first region) in the image, the target brightness threshold corresponds to the first region, and the target brightness threshold may be determined based on a brightness value corresponding to a pixel included in the first region. For another example, if the electronic device performs brightness adjustment on a plurality of regions (for example, the first region and the second region) in the image, the target brightness threshold value corresponds to the plurality of regions, and the target brightness threshold value may be determined based on brightness values corresponding to pixels included in the plurality of regions, so the number of target brightness threshold values may be plural. For example, the target luminance threshold includes a first luminance threshold corresponding to the first region and a second luminance threshold corresponding to the second region.
In the above method, when the electronic device needs to perform brightness adjustment on the first region of the Mth frame image, it compares the first brightness average value with the first brightness threshold. When the electronic device needs to perform brightness adjustment on the second region of the Mth frame image, it compares the second brightness average value with the second brightness threshold. When the electronic device needs to perform brightness adjustment on both the first region and the second region of the Mth frame image, it compares the first brightness average value with the first brightness threshold and the second brightness average value with the second brightness threshold. In this way, the electronic device compares the target brightness threshold with the brightness values of the Mth frame image only for the regions to be processed, which makes the image processing more targeted and reduces the computational load of the electronic device.
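A minimal sketch of this region-wise decision, assuming an 8-bit luminance scale and example preset thresholds of 180 (first preset threshold) and 60 (second preset threshold); neither value is specified in the application:

```python
import numpy as np

# Assumed example values; the application does not specify them.
FIRST_PRESET, SECOND_PRESET = 180, 60  # 8-bit luminance scale

def regions_to_adjust(frame_m, first_threshold, second_threshold):
    """Decide, per region, whether the Mth frame needs adjustment:
    - first region (transition): SECOND_PRESET < Y < FIRST_PRESET
    - second region (dark):      Y <= SECOND_PRESET
    """
    y = frame_m.astype(np.float32)
    first_region = y[(y > SECOND_PRESET) & (y < FIRST_PRESET)]
    second_region = y[y <= SECOND_PRESET]
    # First brightness average vs. first luminance threshold
    adjust_first = first_region.size > 0 and first_region.mean() < first_threshold
    # Second brightness average vs. second luminance threshold
    adjust_second = second_region.size > 0 and second_region.mean() > second_threshold
    return adjust_first, adjust_second
```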
In a possible implementation manner of the first aspect, the determining the target brightness threshold based on the brightness values respectively corresponding to the M-1 frame images includes:
respectively obtaining brightness values corresponding to pixels contained in the first region in the M-1 frame image and brightness values corresponding to pixels contained in the second region in the M-1 frame image;
respectively carrying out average processing on brightness values corresponding to pixels contained in the first region in the M-1 frame image to obtain M-1 third brightness average values;
respectively carrying out average processing on brightness values corresponding to pixels contained in the second region in the M-1 frame image to obtain M-1 fourth brightness average values;
and determining the target brightness threshold according to M-1 target brightness averages, wherein the target brightness averages comprise the third brightness average and the fourth brightness average, the third brightness average is used for determining the first brightness threshold, and the fourth brightness average is used for determining the second brightness threshold.
In the above method, since an image contains many pixels and each pixel corresponds to one brightness value, the image contains many brightness values, which is unwieldy for the electronic device to analyze. The electronic device performs averaging processing on the brightness values corresponding to the pixels in the M-1 frame images in order to obtain values that can characterize the overall brightness of each image. For example, the electronic device averages the brightness values corresponding to the pixels included in the first region of an image to obtain a third brightness average value, and the brightness of the first region can then be represented by this average. In this way, the electronic device can represent a large number of per-pixel brightness values with a small number of brightness averages (for example, the third brightness average value and the fourth brightness average value), and analyzing this smaller set of averages relieves the computational load of the electronic device.
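A sketch of this averaging step under the same assumed preset thresholds (180 and 60 are illustrative values, not from the application):

```python
import numpy as np

def target_brightness_averages(prev_frames, first_preset=180, second_preset=60):
    """For each of the M-1 frames, reduce the per-pixel brightness
    values to two scalars: the third brightness average (first region)
    and the fourth brightness average (second region)."""
    thirds, fourths = [], []
    for f in prev_frames:
        y = f.astype(np.float32)
        first_region = y[(y > second_preset) & (y < first_preset)]
        second_region = y[y <= second_preset]
        thirds.append(first_region.mean() if first_region.size else np.nan)
        fourths.append(second_region.mean() if second_region.size else np.nan)
    return np.array(thirds), np.array(fourths)  # M-1 values each
```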
In a possible implementation manner of the first aspect, the determining the target brightness threshold according to the M-1 target brightness average value includes:
determining a first function according to the M-1 target brightness average value;
And determining the target brightness threshold according to the integral value of the brightness value corresponding to the M-1 frame image in the first function.
Optionally, the electronic device may use a kernel density estimation algorithm (Kernel Density Estimation, KDE) to perform density estimation on the M-1 target brightness average values to obtain a kernel density estimation result. The kernel density estimation result is a probability density function estimate of the sample, i.e. the first function of the embodiment of the present application.
In this method, a continuous first function is fitted from the discrete M-1 target brightness average values, and the distribution of the brightness values corresponding to the M-1 frame images can be characterized by this continuous first function. A target brightness threshold determined from this distribution conforms to the distribution law of the brightness values respectively corresponding to the M-1 frame images, so the target brightness threshold can be used to judge whether the brightness value of the Mth frame image conforms to that distribution law.
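The application does not name a particular library; as one possible illustration, SciPy's gaussian_kde can play the role of the kernel density estimator, with invented sample values standing in for the M-1 target brightness averages:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Invented example data standing in for the M-1 target brightness averages
third_averages = np.array([72.0, 75.5, 70.1, 74.2, 73.8, 71.6])

# Fit the continuous "first function" (a probability density estimate)
# from the discrete target brightness averages.
first_function = gaussian_kde(third_averages)

# Evaluate the estimated density along the 8-bit luminance axis
xs = np.linspace(0, 255, 256)
density = first_function(xs)  # density[i] ~ f_hat(xs[i])
```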
In a possible implementation manner of the first aspect, the determining the target luminance threshold according to an integral value of luminance values corresponding to the M-1 frame images in the first function includes:
integrating the first function based on a target integral value to determine a target brightness interval, wherein the target brightness interval comprises a plurality of brightness values, and the target integral value is used for representing the probability of brightness values corresponding to the M-1 frame images in the target brightness interval;
and determining the target brightness threshold according to the target brightness interval.
Alternatively, the first function may be a probability density function fitted from an average of M-1 target luminance values. Because the integral value obtained after the probability density function is subjected to the integral processing is used for representing the probability of the whole sample (for example, the luminance values corresponding to the M-1 frame images respectively) in the integral interval, the electronic equipment integrates the first function based on a preset target integral value, namely, the preset probability, so that the determined target luminance interval (namely, the integral interval) represents the distribution condition of the whole sample (for example, the luminance values corresponding to the M-1 frame images respectively) under the preset probability.
Optionally, in the case where the electronic device needs to increase the luminance value of the first area in the mth frame image, in order to make the luminance value of the first area in the increased mth frame image not greatly different from the M-1 third luminance average values, a smaller luminance value may be determined from a first function corresponding to the first area fitted based on the M-1 third luminance average values, to be used as the first luminance threshold.
Optionally, when the luminance value of the first region in the mth frame image is smaller than the first luminance threshold, the luminance value of the first region in the mth frame image is smaller than most of the M-1 third luminance average values. Therefore, even if the electronic device increases the brightness value of the first area in the mth frame image, the difference between the brightness value of the first area in the increased mth frame image and the average value of M-1 third brightness is not large, so that the phenomenon of brightness jump of the mth frame image can be avoided.
Optionally, in the case where the electronic device needs to reduce the luminance value of the second area in the mth frame image, in order to make the luminance value of the second area in the reduced mth frame image not greatly different from the M-1 fourth luminance average values, a larger luminance value may be determined from the first function corresponding to the second area fitted based on the M-1 fourth luminance average values, to be used as the second luminance threshold.
Optionally, when the luminance value of the second region in the mth frame image is greater than the second luminance threshold, it is indicated that the luminance value of the second region in the mth frame image is greater than most of the M-1 fourth luminance average values. Therefore, even if the electronic device reduces the brightness value of the second area in the mth frame image, the difference between the brightness value of the second area in the reduced mth frame image and the average value of M-1 fourth brightness is not large, so that the phenomenon of brightness jump of the mth frame image can be avoided.
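Reading the above together: the first luminance threshold can be taken as a low quantile of the first function fitted to the third brightness averages, and the second luminance threshold as a high quantile of the first function fitted to the fourth brightness averages. A sketch with assumed target integral values of 0.10 and 0.90 (illustrative probabilities, not from the application) and invented sample data:

```python
import numpy as np
from scipy.stats import gaussian_kde

def threshold_from_density(samples, target_integral):
    """Numerically integrate the fitted first function from the left
    until the accumulated probability reaches target_integral; the end
    of that target brightness interval is returned as the threshold."""
    kde = gaussian_kde(np.asarray(samples, dtype=float))
    xs = np.linspace(0.0, 255.0, 2561)
    pdf = kde(xs)
    cdf = np.cumsum(pdf) * (xs[1] - xs[0])  # left Riemann sum of the density
    idx = min(int(np.searchsorted(cdf, target_integral)), len(xs) - 1)
    return xs[idx]

third_avgs = [72.0, 75.5, 70.1, 74.2, 73.8, 71.6]   # invented example data
fourth_avgs = [22.0, 25.5, 20.1, 24.2, 23.8, 21.6]
first_threshold = threshold_from_density(third_avgs, 0.10)   # lower tail: a smaller value
second_threshold = threshold_from_density(fourth_avgs, 0.90) # upper tail: a larger value
```

Choosing a low quantile for the first threshold and a high quantile for the second matches the reasoning above: only frames whose region brightness sits at the edge of the recent distribution are safe to adjust without creating a jump.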
In a possible implementation manner of the first aspect, the determining a first function according to the M-1 target brightness average value includes:
Determining a second function according to the number of the M-1 target brightness average values that are smaller than or equal to the target brightness threshold, wherein the second function is used for representing the probability that a target brightness average value is smaller than or equal to the target brightness threshold;
determining M-1 third functions according to the change rate of the second function within a width interval, wherein the positions of the center points of the M-1 third functions on the coordinate axis respectively correspond one-to-one to the M-1 target brightness average values, and the width of any one of the M-1 third functions is determined according to the length of the width interval;
The first function is determined by superimposing the M-1 third functions.
Optionally, the kernel density estimation function $\hat{f}_h(x)$, i.e. the first function, may also be expressed in terms of a kernel function (i.e. a third function), in particular by the following expression:

$$\hat{f}_h(x) = \frac{1}{nh}\sum_{i=1}^{n} K\left(\frac{x - x_i}{h}\right)$$

wherein $h$ represents the bandwidth of the kernel function and is used to determine the width of the kernel function; $K\left(\frac{x - x_i}{h}\right)$ denotes the kernel function centered on $x_i$, the $i$-th of the $n$ target brightness average values (here $n$ = M-1), and $x_i$ is used to determine the location of the center point of the kernel function on the coordinate axis; $\sum_{i=1}^{n}$ denotes summing the $n$ kernel functions.
As can be seen from the above expression, the electronic device can determine the $n$ kernel functions $K\left(\frac{x - x_i}{h}\right)$ from the $n$ sample points $x_i$, namely determine the M-1 third functions from the M-1 target brightness average values obtained in the embodiment of the application. Then, the kernel density estimation function is determined by superimposing the $n$ kernel functions (i.e., the M-1 third functions). Since the kernel density estimation function $\hat{f}_h(x)$ may be approximately equal to the probability density function $f(x)$, the first function may be determined by superimposing the M-1 third functions.
In the above method, since the result of the kernel density estimation (e.g., the first function) is a continuous density curve, it can provide a smoother and more accurate probability density estimate. The electronic device can therefore comprehensively capture the trend of brightness change across the M-1 frame images through kernel density estimation, which helps smooth the brightness change, reduces the influence of single-frame noise on the estimation result (i.e., the first function), and improves the accuracy of the estimation result, so that density estimates obtained with different kernel functions differ little. In addition, an accurate first function provides a reliable basis for subsequently deciding whether to perform brightness correction on the Mth frame image, making the improvement of image detail more effective.
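The superposition in the expression above can be written out directly. A sketch using a Gaussian kernel (the application does not fix a particular kernel shape; the sample values and bandwidth are invented):

```python
import numpy as np

def kde_by_superposition(x, samples, h):
    """f_hat(x) = (1 / (n*h)) * sum_i K((x - x_i) / h): one kernel
    (third function) is centered on each target brightness average
    x_i, and the kernels are superimposed."""
    x = np.asarray(x, dtype=float)
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    u = (x[:, None] - samples[None, :]) / h                  # (x - x_i) / h
    kernels = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)   # Gaussian K(u)
    return kernels.sum(axis=1) / (n * h)                     # superimpose, normalize

xs = np.linspace(0, 255, 256)
density = kde_by_superposition(xs, [72.0, 75.5, 70.1, 74.2], h=3.0)
```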
In a possible implementation manner of the first aspect, the adjusting the brightness value of the mth frame image according to the target brightness threshold includes:
increasing a brightness value corresponding to a pixel contained in the first region in the M-th frame image under the condition that the first brightness average value is smaller than the first brightness threshold value;
reducing a brightness value corresponding to a pixel contained in the second region in the M-th frame image under the condition that the second brightness average value is larger than the second brightness threshold value;
And when the first brightness average value is smaller than the first brightness threshold value and the second brightness average value is larger than the second brightness threshold value, increasing the brightness value corresponding to the pixel contained in the first area in the Mth frame image, and reducing the brightness value corresponding to the pixel contained in the second area in the Mth frame image.
Optionally, if the comparison result is that the first brightness average value is greater than or equal to the first brightness threshold, increasing the brightness of the first region of the Mth frame image would make its brightness differ greatly from the brightness of the first region in the M-1 frame images, so the electronic device does not adjust the brightness of the Mth frame image, thereby avoiding a brightness jump after adjustment.
Optionally, if the comparison result is that the second brightness average value is smaller than or equal to the second brightness threshold, decreasing the brightness of the second region of the Mth frame image would make its brightness differ greatly from the brightness of the second region in the M-1 frame images, so the electronic device does not adjust the brightness of the Mth frame image, thereby avoiding a brightness jump after adjustment.
In the method, the electronic device increases the brightness value of the pixels included in the first area in the Mth frame image so as to improve the brightness detail of the first area in the Mth frame image, so that the Mth frame image is easier for a user to watch. The electronic device reduces the brightness value of the pixels included in the second area in the Mth frame image so as to improve details of the second area in the Mth frame image, so that the Mth frame image can display more detail contents in shooting scenes.
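A sketch of the adjustment itself; the additive offset and the preset thresholds are illustrative choices, since the application does not specify the exact adjustment curve:

```python
import numpy as np

def adjust_regions(frame_m, adjust_first, adjust_second,
                   first_preset=180, second_preset=60, delta=10.0):
    """Increase the brightness values of pixels in the first
    (transition) region and decrease those in the second (dark)
    region; delta is an illustrative additive offset."""
    out = frame_m.astype(np.float32)
    first_mask = (out > second_preset) & (out < first_preset)
    second_mask = out <= second_preset
    if adjust_first:
        out[first_mask] += delta   # brighten the transition region
    if adjust_second:
        out[second_mask] -= delta  # darken the dark region to recover texture
    return np.clip(out, 0, 255).astype(frame_m.dtype)
```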
In a second aspect, an embodiment of the present application provides an electronic device, including one or more processors and one or more memories, where the one or more memories are coupled to the one or more processors, the one or more memories are configured to store computer program code, where the computer program code includes computer instructions that are invoked by the one or more processors to cause the electronic device to perform the method of image processing described in the first aspect or any possible implementation of the first aspect.
In a third aspect, the present application provides a chip or chip system comprising at least one processor and a communication interface, the communication interface and the at least one processor being interconnected by wires, the at least one processor being adapted to execute a computer program or instructions to perform the method of image processing described in the first aspect or any one of the possible implementations of the first aspect. The communication interface in the chip can be an input/output interface, a pin, a circuit or the like.
In one possible implementation, the chip or chip system described above in the embodiments of the present application further includes at least one memory, where the at least one memory stores instructions. The memory may be a memory unit within the chip, such as a register, a cache, etc., or may be a memory unit of the chip (e.g., a read-only memory, a random access memory, etc.).
In a fourth aspect, embodiments of the present application provide a computer storage medium storing a computer program which, when executed by a processor, causes the computer to perform a method of image processing as described in the first aspect or any one of the possible implementations of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product which, when run on a communication device, causes the communication device to perform a method of image processing as described in the first aspect or any one of the possible implementations of the first aspect.
It should be appreciated that the description of technical features, aspects, benefits or similar language in the present application does not imply that all of the features and advantages may be realized with any single embodiment. Conversely, it should be understood that the description of features or advantages is intended to include, in at least one embodiment, the particular features, aspects, or advantages. Therefore, the description of technical features, technical solutions or advantageous effects in this specification does not necessarily refer to the same embodiment. Furthermore, the technical features, technical solutions and advantageous effects described in the present embodiment may also be combined in any appropriate manner. Those of skill in the art will appreciate that an embodiment may be implemented without one or more particular features, aspects, or benefits of a particular embodiment. In other embodiments, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments.
Drawings
The drawings used in the embodiments of the present application are described below.
FIG. 1A is a scene graph of a captured image provided by an embodiment of the present application;
FIG. 1B is an image of a sudden change in brightness provided by an embodiment of the present application;
FIG. 2A is a scene graph of another captured image provided by an embodiment of the present application;
FIG. 2B is an image of another abrupt change in brightness provided by an embodiment of the present application;
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 4 is a schematic software architecture of an electronic device according to an embodiment of the present application;
Fig. 5 is a schematic flow chart of an image processing method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a first function corresponding to a first region according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a first function corresponding to a second region according to an embodiment of the present application;
FIG. 8 is a flowchart of another image processing method according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a first function corresponding to a first region according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a first function corresponding to a second region according to an embodiment of the present application;
FIG. 11 is a flowchart for adjusting brightness of an image according to an embodiment of the present application;
FIG. 12A is a system desktop of an electronic device, provided by an embodiment of the present application;
FIGS. 12B, 12C and 12D are preview interfaces of a set of camera applications provided by embodiments of the present application;
FIG. 13A is a system desktop of an electronic device according to an embodiment of the present application;
fig. 13B, 13C, and 13D are preview interfaces of another set of camera applications provided by embodiments of the present application.
Detailed Description
The terminology used in the following embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates to the contrary. It should also be understood that the term "and/or" as used in this disclosure refers to and encompasses any or all possible combinations of one or more of the listed items.
The terms "first," "second," and the like, are used below for descriptive purposes only and are not to be construed as implying or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature, and in the description of embodiments of the application, unless otherwise indicated, the meaning of "a plurality" is two or more.
In order to facilitate understanding of the embodiments of the present application, the following first analyzes and proposes a technical problem to be solved by the present application.
With the continuous development of photography technology in electronic devices (such as smartphones), higher requirements are also being placed on the quality of images acquired by the electronic devices. Typically, the electronic device adjusts the exposure parameters according to the difference between the brightness of the image and the preset brightness through an automatic exposure (Automatic Exposure, AE) technique such that the brightness of the image acquired based on the adjusted exposure parameters is equal to or close to the preset brightness. However, in the case where a large-area dark area or a large-area bright area is included in the photographed scene, the electronic device adjusts the brightness of the entire image to a preset brightness, which may cause a deviation between the brightness of the image display and the real brightness of the photographed scene, affecting the quality of the image.
The electronic device usually uses 18% neutral gray as the preset brightness because the brightness and color of an object are determined by the object's reflectivity to light: for example, the reflectivity of a pure black object is 0, the reflectivity of a pure white object is 100%, and the reflectivity of a neutral gray object is 18%. Neutral gray lies at the mean of all gray levels in the spectrum, so the average reflectivity of objects in nature is generally considered to be that of 18% neutral gray. Thus, adjusting the brightness of an image to 18% neutral gray causes the image to present an average brightness.
For example, in the case where a large-area dark region exists in the shooting scene, the average brightness of the image is less than 18% neutral gray, and the electronic device adjusts the exposure parameters to increase the brightness of the image, which may result in an image with excessive brightness being acquired based on the adjusted exposure parameters, so that the detail texture of the dark region of the image is poorly imaged.
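As a rough illustration of this AE behavior (a sketch of the prior-art mechanism described above, not the patent's method), the device can be thought of as computing the gain that pulls the frame's mean linear luminance to the 18% gray target:

```python
import numpy as np

GRAY_18_TARGET = 0.18  # 18% neutral gray on a linear 0..1 luminance scale

def ae_exposure_gain(frame_linear):
    """Naive AE sketch: the gain that would pull the frame's mean
    linear luminance to the 18% neutral-gray target. For a scene
    dominated by a dark area, mean < 0.18, so gain > 1 and the whole
    picture is brightened, which can wash out dark-area detail."""
    mean_luma = float(np.mean(frame_linear))
    return GRAY_18_TARGET / max(mean_luma, 1e-6)
```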
Referring to fig. 1A and fig. 1B, fig. 1A is a scene diagram of a shot image according to an embodiment of the present application, and fig. 1B is an image with abrupt brightness change according to an embodiment of the present application.
As shown in fig. 1A, the electronic device 100 invokes a camera to capture shooting scene A, and then displays the captured image in a preview interface of the camera application. Shooting scene A includes a dark-colored table 1011, on which a white cup 1012 and a box 1013 are placed. Illustratively, the table 1011 has the darkest color and the lowest brightness relative to the cup 1012 and the box 1013, and the brightness of the box 1013 is between that of the table 1011 and the cup 1012. Optionally, the area where the table 1011 is located may be referred to as a dark area, the area where the box 1013 is located may be referred to as a transition area, and the area where the cup 1012 is located may be referred to as a bright area. It can be seen that the table 1011 occupies a relatively large area in shooting scene A, so shooting scene A includes a large-area dark region; optionally, the average brightness of the picture acquired by the electronic device is less than 18% neutral gray. Illustratively, in the case where the average brightness of the acquired picture is less than 18% neutral gray, the electronic device generally increases the brightness of the picture by increasing the luminous flux so that the brightness of the picture approaches 18% neutral gray. For example, the electronic device obtains exposure parameters by increasing the luminous flux, acquires the image 102 based on the above exposure parameters, and then displays the image 102 in the preview interface 103 shown in fig. 1A.
For example, the brightness corresponding to the objects in shooting scene A may be regarded as the real brightness, which corresponds to accurate exposure. The image 102 is an image collected by the electronic device based on 18% neutral gray; it can be seen that the brightness of the whole image 102 is higher compared with shooting scene A, i.e., there is a deviation between the brightness of the image collected by the electronic device and the real brightness. Optionally, the table 1021 shown in the image 102 is brighter than the table 1011 shown in shooting scene A, so the table 1021 shown in the image 102 loses part of the detail texture compared with the table 1011 shown in shooting scene A, which does not conform to the actual scene and thereby affects imaging quality. To address the deviation between the brightness of the collected image and the real brightness, the electronic device may first adjust the brightness value of an image after collecting it for shooting scene A, and then display the adjusted image in the preview interface, so as to reduce the deviation between the brightness of the image and the real brightness.
For example, the electronic device captures image a after image 102, then adjusts the brightness value of image a based on the difference between the brightness of image 102 and the actual brightness, resulting in image 104, and displays image 104 in preview interface 105 shown in fig. 1B.
As shown in fig. 1B, the overall brightness of image 104 is darker than that of image 102, and the degree of change is large. Optionally, the table 1041 shown in image 104 is darker than the table 1021 shown in image 102. Illustratively, the electronic device typically captures multiple consecutive images at short intervals and then displays them in the preview interface. The interval between displaying image 102 and displaying image 104 is therefore short, meaning that shortly after displaying image 102 the electronic device displays image 104, whose brightness differs greatly from that of image 102, resulting in a brightness "jump" phenomenon and affecting the user's photographing experience.
The reason why the brightness "jumps" is caused is that, for the shooting scene a, the brightness difference between the image collected by the electronic device and the image 102 in fig. 1A is generally small, and the electronic device may perform brightness adjustment on one or more frames of images corresponding to the shooting scene a based on the same adjustment manner, so that the brightness of the adjusted image approaches to the real brightness. However, when the brightness difference between the image acquired by the electronic device for the shooting scene a and the image 102 is large (for example, the brightness difference between the acquired image a and the image 102 is large), if the electronic device adjusts the brightness of the image a according to the above adjustment manner, the brightness difference between the adjusted image a and the image 102 may be further increased, which results in a brightness "jump" phenomenon on the image 104 obtained after the adjustment of the image a.
Referring to fig. 2A and 2B, fig. 2A is a scene diagram of another shot image provided by an embodiment of the present application, and fig. 2B is another image with abrupt brightness change provided by an embodiment of the present application.
As shown in fig. 2A, the electronic device 100 invokes the camera to capture shooting scene B, and then displays the captured image in a preview interface of the camera application. Shooting scene B includes a light-colored desktop 2011, a black box 2012 placed on the desktop 2011, and a notebook 2013. Illustratively, the desktop 2011 has the lightest color and the highest brightness relative to the box 2012 and the notebook 2013, and the brightness of the notebook 2013 is between that of the desktop 2011 and the box 2012. Optionally, in the embodiment of the present application, the area where the notebook 2013 is located may be referred to as a transition area, the area where the desktop 2011 is located may be referred to as a bright area, and the area where the box 2012 is located may be referred to as a dark area. Optionally, the box 2012 occupies a relatively small area in shooting scene B, so the dark area in shooting scene B is small while the transition area and the bright area are large. Optionally, the average brightness of the picture collected by the electronic device is greater than 18% neutral gray. Illustratively, in the case where the average brightness of the acquired picture is greater than 18% neutral gray, the electronic device generally decreases the brightness of the picture by reducing the luminous flux so that the brightness of the picture approaches 18% neutral gray. For example, the electronic device obtains exposure parameters by reducing the luminous flux, acquires the image 202 based on the above exposure parameters, and then displays the image 202 in the preview interface 203 shown in fig. 2A.
For example, the brightness corresponding to the objects in shooting scene B may be regarded as the real brightness, which corresponds to accurate exposure. The image 202 is an image collected by the electronic device based on 18% neutral gray; it can be seen that the brightness of the whole image 202 is lower compared with shooting scene B, i.e., there is a deviation between the brightness of the image collected by the electronic device and the real brightness. Optionally, the notebook 2021 shown in the image 202 has a lower brightness than the notebook 2013 shown in shooting scene B, which does not conform to the actual scene and thereby affects imaging quality. To address the deviation between the brightness of the collected image and the real brightness, the electronic device may first adjust the brightness value of an image after collecting it for shooting scene B, and then display the adjusted image in the preview interface, so as to reduce the deviation between the brightness of the image and the real brightness.
For example, the electronic device captures image B after image 202, then adjusts the brightness value of image B based on the difference between the brightness of image 202 and the actual brightness, resulting in image 204, and displays image 204 in preview interface 205 shown in fig. 2B.
As shown in fig. 2B, the overall brightness of image 204 is brighter than that of image 202, and the degree of change is large. Optionally, the notebook 2041 shown in image 204 is brighter than the notebook 2021 shown in image 202. Illustratively, the electronic device typically captures multiple consecutive images at short intervals and then displays them in the preview interface. The interval between displaying image 202 and displaying image 204 is therefore short, meaning that shortly after displaying image 202 the electronic device displays image 204, whose brightness differs greatly from that of image 202, resulting in a brightness "jump" phenomenon and affecting the user's photographing experience.
The reason why the brightness "jumps" is caused is that, for the shooting scene B, the brightness difference between the image collected by the electronic device and the image 202 in fig. 2A is generally small, and the electronic device may perform brightness adjustment on one or more frames of images corresponding to the shooting scene B based on the same adjustment manner, so that the brightness of the adjusted image approaches to the real brightness. However, when the brightness difference between the image acquired by the electronic device for the shooting scene B and the image 202 is large (for example, the brightness difference between the acquired image B and the image 202 is large), if the electronic device adjusts the brightness of the image B according to the above adjustment manner, the brightness difference between the adjusted image B and the image 202 may be further increased, which results in a brightness "jump" phenomenon for the image 204 obtained after the adjustment of the image B.
In summary, in the present application, when the brightness of a first image differs greatly from that of a second image acquired after it (for example, the brightness difference of the dark region in the second image is larger than a first difference threshold and/or the brightness difference of the transition region in the second image is larger than a second difference threshold), the electronic device adjusting the brightness value of the second image may further increase the brightness difference between the first image and the adjusted second image, thereby producing a brightness jump phenomenon and affecting the user's photographing experience.
In view of this, an embodiment of the present application provides an image processing method. Before the brightness value of the Mth frame image is adjusted, the electronic device may acquire the M-1 frame images captured before the Mth frame image, where M is an integer greater than 1. The electronic device then determines a target brightness threshold according to the brightness values respectively corresponding to the M-1 frame images. When the brightness value of the Mth frame image and the target brightness threshold meet a preset condition, indicating that the difference between the brightness of the Mth frame image and that of the previous M-1 frame images is small, the electronic device can adjust the brightness value of the Mth frame image. On the basis that the brightness of the adjusted Mth frame image approaches the real brightness, the occurrence of brightness jumps in the Mth frame image is reduced, and the user's photographing experience is improved.
The image processing method according to the embodiment of the present application is described below with reference to a hardware structure and a software structure of an electronic device.
In some embodiments, the electronic device includes, but is not limited to, a mobile phone, a tablet (Portable Android Device, PAD), a personal digital assistant (Personal Digital Assistant, PDA), a handheld device with wireless communication capabilities, a computing device, an in-vehicle device, a wearable device, a virtual reality (Virtual Reality, VR) terminal device, an augmented reality (Augmented Reality, AR) terminal device, a wireless terminal in industrial control (Industrial Control), a wireless terminal in self-driving (Self Driving), a wireless terminal in remote medical (Remote Medical), a wireless terminal in smart home (Smart Home), and the like, whether a mobile terminal or a fixed terminal. The form of the electronic device is not particularly limited in the embodiments of the application.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application, as shown in fig. 3, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may generate operation control signals according to the instruction operation code and the timing signals to complete instruction fetching and instruction execution control.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it may be called directly from memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 determines a first function based on the M-1 frame image, the first function being used to characterize the luminance distribution of the M-1 frame image. The processor 110 determines a target brightness threshold according to the first function, then compares the target brightness threshold with the brightness value of the mth frame image, and when the comparison result meets the preset condition, the processor 110 adjusts the brightness value of the mth frame image. When the comparison result does not satisfy the preset condition, the processor 110 does not adjust the brightness value of the mth frame image. For example, the target luminance threshold includes a first luminance threshold corresponding to the first region and a second luminance threshold corresponding to the second region. When the average value of the first luminance corresponding to the first region of the mth frame image is smaller than the first luminance threshold value and the average value of the second luminance corresponding to the second region of the mth frame image is larger than the second luminance threshold value, the comparison result satisfies the preset condition, and the processor 110 may perform luminance adjustment on the first region and the second region of the mth frame image.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (general-purpose input/output, GPIO) interface, and a subscriber identity module (subscriber identity module, SIM) interface.
The UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is typically used to connect the processor 110 with the wireless communication module 160.
The MIPI interface may be used to connect the processor 110 to peripheral devices such as a display 194, a camera 193, and the like. The MIPI interfaces include a camera serial interface (camera serial interface, CSI), a display serial interface (display serial interface, DSI), and the like.
In some embodiments, processor 110 and display 194 communicate via a DSI interface to implement the display functionality of electronic device 100.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example, the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The wireless communication module 160 may provide solutions for wireless communication applied to the electronic device 100, including wireless local area networks (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication (near field communication, NFC), infrared (IR), etc.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The internal memory 121 may be used to store computer-executable program code that includes instructions. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. The processor 110 performs various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
In one embodiment of the present application, the internal memory 121 may store therein M-1 frame images acquired by the camera 193 or luminance values corresponding to the M-1 frame images, respectively.
The pressure sensor 180A is used to sense a pressure signal and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are various types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates of conductive material. The capacitance between the electrodes changes when a force is applied to the pressure sensor 180A, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic device 100 detects the intensity of the touch operation through the pressure sensor 180A. The electronic device 100 may also calculate the location of the touch based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch location but with different touch operation intensities may correspond to different operation instructions.
The touch sensor 180K is also referred to as a "touch device". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 together form a touch screen. The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a location different from that of the display 194.
The software system of the electronic device (such as a mobile phone) can adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture. In the embodiment of the application, an Android system with a layered architecture is taken as an example to illustrate the software architecture of a mobile phone. Referring to fig. 4, fig. 4 is a schematic diagram of a software architecture of an electronic device according to an embodiment of the application.
As shown in fig. 4, the layered architecture divides the software into several layers, each with a clear role and division of work. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into five layers, from top to bottom: an application layer (application), an application framework layer (framework), a hardware abstraction layer (HAL), a driver layer, and a hardware layer. Wherein:
the application layer (application) may comprise a series of application packages. For example, the applications may include a camera, a gallery, and the like. The camera application may include, but is not limited to, a UI module, a photographing module, a gallery module, and the like. The UI module may be a cameraUI module and may be mainly responsible for the human-computer interaction of the camera application, for example, controlling the display of the preview interface and the preview screen therein, and receiving and responding to user operations occurring in the preview interface. The photographing module is used to provide a photographing function, a focusing function, and the like. The gallery module may be used to store photos taken by the user in a file system or a specific database of the electronic device for retrieval by applications such as the gallery.
An application framework layer (framework) provides an application programming interface (application programming interface, API) and a programming framework for application programs of the application layer. It mainly relates to a camera framework, which may include camera access interfaces such as a camera extension library and a camera service. Serving as a bridge between the layers above and below it, the camera framework can interact with the camera application through the application API and can also interact with the HAL through the HAL interface definition language (HAL interface definition language, HIDL). The application framework layer may also include a window manager, with whose support the camera application and gallery application may present the taken photos to the user.
A Hardware Abstraction Layer (HAL) is an interface layer located between the application framework layer and the driver layer, providing a virtual hardware platform for the operating system. By way of example, the hardware abstraction layer may include a camera hardware abstraction layer and an image processing module. The camera hardware abstraction layer may provide, among other things, virtual hardware of the camera device 1 (first camera), the camera device 2 (second camera), and more camera devices.
The image processing module is used to determine whether to adjust the brightness value of the Mth frame image and to perform the adjustment. The image processing module includes a kernel density estimation algorithm, by which a first function can be determined based on the brightness values respectively corresponding to the M-1 frame images, so that the image processing module can determine a target brightness threshold according to the first function.
In one implementation, the image processing module may determine a first function from the luminance values corresponding to the pixels included in the M-1 frame images by a kernel density estimation algorithm, and then determine the target luminance threshold based on the first function. The image processing module compares the brightness value of the Mth frame image with the target brightness threshold, and when the comparison result meets a preset condition, the image processing module adjusts the brightness value of the Mth frame image so that the adjusted brightness value of the Mth frame image approaches the true value.
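For illustration only, the following is a minimal Python sketch of this decision flow for a single region, assuming a Gaussian kernel (via scipy) and an illustrative target integral value of 0.05; the function and parameter names here are assumptions, not the module's actual interface.

```python
import numpy as np
from scipy.stats import gaussian_kde

def should_adjust_region(history_means, frame_mean, target_integral=0.05):
    """history_means: luminance means of the M-1 history frames;
    frame_mean: luminance mean of the same region in the Mth frame."""
    # Fit the first function (a probability density estimate) to the
    # M-1 historical means with kernel density estimation.
    first_function = gaussian_kde(history_means)
    margin = 3 * np.std(history_means) + 1e-6
    grid = np.linspace(min(history_means) - margin,
                       max(history_means) + margin, 2048)
    cdf = np.cumsum(first_function(grid))
    cdf /= cdf[-1]
    # The target luminance threshold is the point where the integral of
    # the first function, taken from its left limit, reaches the target.
    threshold = grid[np.searchsorted(cdf, target_integral)]
    return frame_mean < threshold
```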
The driver layer is a layer between hardware and software and includes drivers for various hardware. The driver layer may include a camera device driver, a digital signal processor driver, an image processor driver, and the like. The camera device driver is used to drive an image sensor of one or more cameras in the camera module to acquire images, and to drive the image signal processor to preprocess the images. The digital signal processor driver is used to drive the digital signal processor to process images. The image processor driver is used to drive the image processor to process images.
The hardware layer may include a camera module, an image signal processor, a digital signal processor, an image processor.
The camera module may include one or more camera image sensors (e.g., image sensor 1, image sensor 2, camera 193 shown in fig. 3, etc.).
The workflow of the electronic device software and hardware is illustrated below in connection with the context of a camera application.
In the embodiment of the present application, after the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the user operation into an original input event (including information such as touch coordinates and a timestamp of the touch operation) and identifies the control corresponding to the input event. Taking a touch operation acting on the camera application as an example: the camera application calls the camera access interface of the application framework layer to start, and then sends an instruction to start the camera capability by calling a camera device in the camera hardware abstraction layer, such as the camera device 1. The instruction is sent to the camera device driver of the driver layer, which can start the sensor corresponding to the camera device, such as the sensor 1; the sensor 1 then acquires image light signals to obtain the Mth frame image.
In one implementation, the image signal processor passes the Mth frame image back to the hardware abstraction layer through the camera device driver, and the hardware abstraction layer sends the Mth frame image to the image processing module. With the support of the image signal processor and the digital signal processor, the image processing module in the hardware abstraction layer can, in combination with the shooting parameters of the M-1 frame images reported by the camera module and/or other sensors, determine through the kernel density estimation algorithm a first function used for representing the brightness distribution of the M-1 frame images, and then determine a target brightness threshold based on the first function. The image processing module compares the brightness value of the Mth frame image with the target brightness threshold, and when the comparison result meets the preset condition, the image processing module adjusts the brightness value of the Mth frame image to obtain a first image.
For example, the target luminance threshold includes a first luminance threshold corresponding to the first region and a second luminance threshold corresponding to the second region. When the first brightness average value corresponding to the first area of the Mth frame image is smaller than the first brightness threshold value and the second brightness average value corresponding to the second area of the Mth frame image is larger than the second brightness threshold value, the comparison result meets the preset condition.
The hardware abstraction layer returns the first image data to the camera application through the camera interface. The camera application may then present the first image to the user with the support of the window manager; the first image has a better brightness display effect than the Mth frame image.
The image processing method provided by the embodiment of the application can be applied to electronic equipment with the hardware structure shown in fig. 3 and the software structure shown in fig. 4. The electronic equipment may alternatively include more or fewer components than illustrated, combine some components, split some components, or arrange the components differently in its hardware and software configurations.
The image processing scheme provided by the embodiment of the present application will be specifically described with reference to fig. 5. The application scenario of the scheme is one in which, after the electronic equipment starts the automatic exposure function, the shooting component is called to collect images, and the images are then displayed on the preview interface.
Referring to fig. 5, fig. 5 is a flowchart of an image processing method according to an embodiment of the present application, where the method includes the following steps S501 to S504.
Step S501, an M-1 frame image and an Mth frame image are acquired.
The electronic equipment can acquire multiple frames of images at intervals of a preset time, and then display the acquired multiple frames of images in the preview interface of the camera application program. If the multi-frame images include an image whose brightness varies greatly, a brightness "jump" phenomenon may be caused, which specifically manifests as the difference between the brightness of that frame image and the brightness of the previous frame or frames being larger than a difference threshold, and which the user perceives visually as sudden brightening or sudden darkening.
As can be seen from the embodiments corresponding to fig. 1B and fig. 2B, the brightness "jump" phenomenon arises from the difference between the brightness of a processed frame image and the brightness of the previous frame or frames: if the electronic device adjusts the brightness value of a frame image, the brightness difference may be increased, so that the difference between the brightness of the processed frame image and the brightness of the previous frame or frames becomes large and causes a brightness jump. Therefore, before adjusting the brightness value of the Mth frame image, the electronic device may acquire the M-1 frame images acquired before the Mth frame image, and then comprehensively consider the brightness of the M-1 frame images and the brightness of the Mth frame image according to steps S502 and S503 to determine whether to adjust the brightness value of the Mth frame image.
Where M is an integer greater than 1, it may be understood that the electronic device acquires at least two frames of images when executing step S501. For example, if M is equal to 2, the electronic device acquires the first frame image and the second frame image, where the acquisition time of the first frame image is earlier than the acquisition time of the second frame image. Optionally, the first frame image is used for determining whether the electronic device needs to adjust a brightness value of the second frame image, and the second frame image is an image to be processed. In the embodiment of the application, the M-1 frame image can be an image which is not processed after being acquired by the electronic equipment, or can be an image which is processed after being acquired by the electronic equipment, and the method is not limited.
Step S502, determining a target brightness threshold based on brightness values corresponding to the M-1 frame images respectively.
Specifically, after the electronic device acquires the M-1 frame image, the target brightness threshold may be determined based on brightness values corresponding to the M-1 frame image respectively. The target brightness threshold is used for judging the difference between the brightness value of the Mth frame image and the brightness value corresponding to the previous M-1 frame image, so as to determine whether the brightness value of the Mth frame image needs to be adjusted based on the difference according to step S503.
Wherein the target brightness threshold is related to an area where the electronic device performs brightness adjustment on the image. For example, if the electronic device performs brightness adjustment on a region (e.g., a first region) in the image, the target brightness threshold corresponds to the first region, and the target brightness threshold may be determined based on a brightness value corresponding to a pixel included in the first region. For another example, if the electronic device performs brightness adjustment on a plurality of regions (for example, the first region and the second region) in the image, the target brightness threshold value corresponds to the plurality of regions, and the target brightness threshold value may be determined based on brightness values corresponding to pixels included in the plurality of regions, so the number of target brightness threshold values may be plural. For example, the target luminance threshold includes a first luminance threshold corresponding to the first region and a second luminance threshold corresponding to the second region.
Illustratively, the target luminance threshold includes a first luminance threshold and a second luminance threshold, where the first luminance threshold corresponds to the first region and the second luminance threshold corresponds to the second region. The luminance values corresponding to the pixels included in the first region are smaller than a first preset threshold and larger than a second preset threshold, which can be understood as a region whose brightness takes middle values in the image; the first region may also be called a transition region. The luminance values corresponding to the pixels included in the second region are smaller than or equal to the second preset threshold, which can be understood as a region with lower brightness in the image; the second region may also be called a dark region. Accordingly, the first luminance threshold is determined based on the transition region in the M-1 frame images, and the second luminance threshold is determined based on the dark region in the M-1 frame images.
In one possible implementation manner, the electronic device obtains luminance values corresponding to pixels included in a first area of the M-1 frame image respectively, and then the electronic device performs average processing on the luminance values corresponding to the pixels included in the first area of the M-1 frame image respectively to obtain M-1 third luminance average values, and determines the first luminance threshold according to the M-1 third luminance average values.
The electronic device performs the averaging process, and the obtained luminance average value includes various forms, for example, the luminance average value includes, but is not limited to, an arithmetic average value, a geometric average value, a square average value, a harmonic average value, a weighted average value, and the like, which is not limited in the embodiment of the present application.
In one possible implementation, the electronic device determines the first function corresponding to the first area according to the M-1 third luminance average values, and then determines the first luminance threshold according to the integral value of the luminance values respectively corresponding to the M-1 frame images in the first function corresponding to the first area.
The first function corresponding to the first region may be a probability density function fitted according to the M-1 third luminance average. Because the integral value obtained after the integration processing is performed on the probability density function is used for representing the probability of the whole sample (for example, the luminance values corresponding to the first areas in the M-1 frame image respectively) in the integral interval, the electronic equipment integrates the first function corresponding to the first area based on the preset target integral value, namely, the preset probability, so that the determined target luminance interval (for example, the integral interval) represents the distribution condition of the whole sample (for example, the luminance values corresponding to the first areas in the M-1 frame image respectively) under the preset probability.
For example, if the preset probability is large, the determined target luminance interval includes a large portion of luminance values in the whole sample (for example, luminance values corresponding to the first regions in the M-1 frame image respectively), which means that the number of luminance values corresponding to the first regions in the M-1 frame image respectively distributed in the target luminance interval is large. For another example, if the preset probability is smaller, the determined target brightness interval includes a smaller part of brightness values in the whole sample (for example, brightness values corresponding to the first areas in the M-1 frame image respectively), which means that the number of brightness values corresponding to the first areas in the M-1 frame image respectively distributed in the target brightness interval is smaller.
By way of example, the process by which the electronic device determines the first luminance threshold according to the first function corresponding to the first region will be described in detail below in connection with fig. 6. The first function shown in fig. 6 is a possible example, and the shape and type of the first function are not limited in the embodiment of the present application, where the process of determining, by the electronic device, the first function corresponding to the first area may refer to the embodiment corresponding to fig. 9.
Referring to fig. 6, fig. 6 is a schematic diagram of a first function corresponding to a first area according to an embodiment of the application. As shown in fig. 6, the horizontal axis of the coordinate axis represents the luminance value, and the vertical axis represents the probability density. The first function corresponding to the first region represents a distribution of the M-1 third luminance average values, for example, the larger the ordinate in the first function corresponding to the first region, the greater the density of the M-1 third luminance average values distributed in the vicinity of the luminance value corresponding to the ordinate.
In the embodiment of the present application, based on the concept of the probability density function, the integral value of the probability density function over the target luminance interval (for example, the first luminance interval shown in fig. 6), i.e., the area under the curve over that interval, may be taken as the probability that the whole sample is distributed in the target luminance interval. Thus, the electronic device may perform the integration processing on the first function corresponding to the first region shown in fig. 6 based on the preset probability (i.e., the target integral value), or determine the area enclosed by that first function and the horizontal axis, so as to determine the target luminance interval corresponding to the preset probability and thereby the distribution of the luminance values under the preset probability.
In one implementation, referring to fig. 6, after determining the target integrated value, the electronic device may perform an integration process on the first function corresponding to the first area based on the target integrated value, where a lower limit of the integration process is a left limit (a point) of the first function corresponding to the first area, and the determined upper limit (c point) is the first brightness threshold.
In another implementation, referring to fig. 6, the electronic device may calculate an area surrounded by the first function corresponding to the first area and the horizontal axis from the left limit (point a) of the first function corresponding to the first area. When the above-described area is equal to the target integrated value, the electronic device may set the section of the area on the horizontal axis as the first luminance section, and the maximum value (point c) of the first luminance section as the first luminance threshold value.
Alternatively, other luminance values in the first luminance interval may also be used as the first luminance threshold, which is not limited herein.
It is understood that the length of the first luminance section shown in fig. 6 obtained by the electronic device depends on the magnitude of the probability (i.e., the target integrated value) set in advance, and the length of the first luminance section increases as the target integrated value increases. The electronic device may determine the first luminance threshold based on different shooting scenes, selecting a target integrated value corresponding to the shooting scene, such that the resulting first luminance threshold meets the current shooting scene. By way of example, the target integrated value corresponding to the first region may include one or more of 0.01, 0.05, 0.1, etc., without limitation.
For example, if the current shooting scene makes the difference between the brightness of the first region in the Mth frame image and the real brightness large, for example, larger than the first scene threshold, the electronic device needs to adjust the brightness of the first region in the Mth frame image to a large extent. To avoid a large brightness difference between the adjusted Mth frame image and the M-1 frame images, the electronic device may choose to adjust the brightness of the first region in the Mth frame image only when the brightness value of that region is small. The electronic device may select a target integral value as small as possible from the plurality of target integral values (for example, 0.01), so that a shorter first luminance interval is determined based on the smaller target integral value, and the first luminance threshold obtained from that shorter interval belongs to the smallest values among the M-1 third luminance averages. When the luminance value of the first region of the Mth frame image is smaller than such a small first luminance threshold, even if the electronic device adjusts the brightness of the first region of the Mth frame image to a large extent, the difference between the brightness of the adjusted Mth frame image and the brightness of the M-1 frame images can remain small.
For another example, if the current shooting scene makes the difference between the brightness of the first region in the Mth frame image and the real brightness small, for example, smaller than or equal to the first scene threshold, the electronic device only needs to adjust the brightness of the first region in the Mth frame image to a small extent, so the probability of a brightness "jump" in the adjusted Mth frame image is small, and the target integral value may be larger (for example, 0.1).
Note that, in the above embodiment, the electronic device performs the integration processing or calculates the area from the left limit of the first function corresponding to the first area as shown in fig. 6. The electronic device may perform the integration processing or calculate the area from the right limit (for example, point b in fig. 6) of the first function corresponding to the first region as shown in fig. 6, and if the electronic device starts processing from the right limit, the sum of the target integrated value and the integrated value corresponding to the processing from the left limit is 1. For example, if the above-described integral values corresponding to the processing from the left limit include, but are not limited to, 0.01, 0.05, 0.1, etc., the integral values corresponding to the processing from the right limit may include one or more of 0.99, 0.95, 0.9, etc.
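As a numerical illustration of the integration described above (a sketch only, assuming the first function has been sampled on a uniform luminance grid; the function names are illustrative):

```python
import numpy as np

def threshold_from_left(grid, density, target_integral):
    # Accumulate the area under the first function from its left limit
    # (point a); the luminance where the area first reaches the target
    # integral value is the first luminance threshold (point c).
    area = np.cumsum(density) * (grid[1] - grid[0])
    return grid[np.searchsorted(area, target_integral)]

def threshold_from_right(grid, density, right_integral):
    # Starting from the right limit (point b) with integral value q is
    # equivalent to starting from the left with 1 - q, as noted above.
    return threshold_from_left(grid, density, 1.0 - right_integral)
```

For example, threshold_from_left(grid, density, 0.01) and threshold_from_right(grid, density, 0.99) return the same luminance value.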
In one possible implementation manner, the electronic device obtains luminance values corresponding to pixels included in a second region of the M-1 frame image respectively, and then the electronic device performs average processing on the luminance values corresponding to the pixels included in the second region of the M-1 frame image respectively to obtain M-1 fourth luminance average values, and determines the second luminance threshold according to the M-1 fourth luminance average values.
In one possible implementation, the electronic device determines the first function corresponding to the second area according to the M-1 fourth luminance average value, and then determines the second luminance threshold according to the integral value of the luminance value corresponding to each M-1 frame image in the first function corresponding to the second area.
The process by which the electronic device determines the second luminance threshold from the first function corresponding to the second region will be described in detail below in conjunction with fig. 7. The first function shown in fig. 7 is a possible example, and the shape and type of the first function are not limited in the embodiment of the present application, and the process of determining, by the electronic device, the first function corresponding to the second area may refer to the embodiment corresponding to fig. 10.
Referring to fig. 7, fig. 7 is a schematic diagram of a first function corresponding to a second area according to an embodiment of the application. As shown in fig. 7, the horizontal axis of the coordinate axis represents the luminance value, and the vertical axis represents the probability density. The first function corresponding to the second region represents the distribution of the M-1 fourth luminance average values, and for example, the larger the ordinate in the first function corresponding to the second region, the greater the density of the M-1 fourth luminance average values distributed in the vicinity of the luminance value corresponding to the ordinate is.
In one implementation, referring to fig. 7, after determining the target integrated value, the electronic device may perform an integration process on the first function corresponding to the second area based on the target integrated value, where a lower limit of the integration process is a left limit (d point) of the first function corresponding to the second area, and the determined upper limit (f point) is the second brightness threshold.
In another implementation, referring to fig. 7, the electronic device may calculate an area surrounded by the first function corresponding to the second area and the horizontal axis from the left limit (point d) of the first function corresponding to the second area. When the area is equal to the target integrated value, the electronic device may use the section of the area on the horizontal axis as the second luminance section (i.e., the target luminance section in the embodiment of the present application), and use the maximum value (f point) of the second luminance section as the second luminance threshold.
Alternatively, other luminance values in the second luminance interval may also be used as the second luminance threshold, which is not limited herein.
For example, the electronic device may select a target integrated value corresponding to a photographed scene based on a different photographed scene to determine a second brightness threshold based on a first function corresponding to a second region shown in fig. 7 such that the resulting second brightness threshold meets the current photographed scene. By way of example, the target integrated value corresponding to the second region may include one or more of 0.99, 0.95, 0.9, etc., without limitation.
For example, if the current shooting scene makes the difference between the brightness of the second region in the Mth frame image and the real brightness large, for example, larger than the second scene threshold, the electronic device needs to adjust the brightness of the second region in the Mth frame image to a large extent. To avoid a large brightness difference between the adjusted Mth frame image and the M-1 frame images, the electronic device may choose to adjust the brightness of the second region in the Mth frame image only when the brightness value of that region is large. The electronic device may select a target integral value as large as possible from the plurality of target integral values (for example, 0.99), so that a longer second luminance interval is determined based on the larger target integral value, and the second luminance threshold obtained from that longer interval belongs to the largest values among the M-1 fourth luminance averages. When the luminance value of the second region of the Mth frame image is greater than such a large second luminance threshold, even if the electronic device adjusts the brightness of the second region of the Mth frame image to a large extent, the difference between the brightness of the adjusted Mth frame image and the brightness of the M-1 frame images can remain small.
For another example, if the current shooting scene makes the difference between the brightness of the second region in the Mth frame image and the real brightness small, for example, smaller than or equal to the second scene threshold, the electronic device only needs to adjust the brightness of the second region in the Mth frame image to a small extent, so the probability of a brightness "jump" in the adjusted Mth frame image is small, and the target integral value may be smaller (for example, 0.9).
It should be noted that, in the above embodiment, the electronic device performs the integration processing or calculates the area from the left limit of the first function corresponding to the second area as shown in fig. 7. The electronic device may perform the integration processing or calculate the area from the right limit (e.g., point e in fig. 7) of the first function corresponding to the second region as shown in fig. 7, and if the electronic device starts processing from the right limit, the sum of the target integrated value and the integrated value corresponding to the processing from the left limit is 1. For example, if the above-described integral values corresponding to the processing from the left limit include, but are not limited to, 0.99, 0.95, 0.9, etc., the integral values corresponding to the processing from the right limit may include one or more of 0.01, 0.05, 0.1, etc.
Optionally, when the electronic device performs brightness adjustment on an area in the M-th frame image, the target brightness average value includes an average value corresponding to the area, and the target brightness threshold includes a brightness threshold corresponding to the area. For example, the electronic device performs brightness adjustment on a first region in the mth frame image, the target brightness average value includes a third brightness average value, and the target brightness threshold includes a first brightness threshold. For another example, the electronic device performs brightness adjustment on the second region in the mth frame image, the target brightness average value includes a fourth brightness average value, and the target brightness threshold includes a second brightness threshold.
Optionally, when the electronic device performs brightness adjustment on a plurality of regions (for example, a first region and a second region) in the M-th frame image, the target brightness average value includes a third brightness average value corresponding to the first region and a fourth brightness average value corresponding to the second region, and the target brightness threshold includes a first brightness threshold corresponding to the first region and a second brightness threshold corresponding to the second region.
It can be appreciated that the continuous first function is fitted based on the discrete M-1 target luminance average values, and the distribution of the luminance values corresponding to the M-1 frame images can be represented by the continuous first function. The target brightness threshold determined based on the distribution situation accords with the distribution rule of brightness values corresponding to the M-1 frame images respectively, so that the target brightness threshold based on the distribution rule can be used for judging whether the brightness value of the M frame image accords with the distribution rule. Alternatively, the process of determining the first function by the electronic device may refer to the corresponding embodiment of fig. 9 and fig. 10, which will not be described herein.
Step S503, the brightness value of the Mth frame image is adjusted according to the target brightness threshold value, so as to obtain a first image.
Specifically, after obtaining the target luminance threshold (e.g., the first luminance threshold and the second luminance threshold) according to step S502, the electronic device may determine whether to adjust the luminance value of the mth frame image according to the target luminance threshold. If the brightness value of the Mth frame image is determined to be required to be adjusted, the brightness value of the Mth frame image can be adjusted, and the first image is obtained.
In one possible implementation manner, the electronic device determines a first luminance average according to the luminance values corresponding to the pixels included in the first region of the Mth frame image, and determines a second luminance average according to the luminance values corresponding to the pixels included in the second region of the Mth frame image. Then, the electronic device determines whether to adjust the brightness of the Mth frame image by combining the first luminance average, the second luminance average, and the target luminance threshold.
In one possible implementation, the electronic device adjusts the luminance value of the Mth frame image in the event that the first luminance average is less than the first luminance threshold.
Specifically, when the electronic device performs brightness adjustment on the first region of the Mth frame image, the electronic device compares the first luminance average with the first luminance threshold. When the comparison result is that the first luminance average is smaller than the first luminance threshold, the electronic device adjusts the luminance value of the first region in the Mth frame image; for example, the electronic device increases the luminance values of the pixels included in the first region of the Mth frame image, so as to enhance the brightness details of the first region and make the Mth frame image easier for the user to view. Optionally, when the comparison result is that the first luminance average is greater than or equal to the first luminance threshold, the electronic device does not adjust the luminance value of the first region in the Mth frame image.
The electronic device performs brightness adjustment on the first region in cases including, but not limited to, when the first region occupies a relatively large proportion of the Mth frame image (e.g., greater than a first proportion threshold); in that case, adjusting the first region effectively adjusts the Mth frame image.
In one possible implementation, the electronic device adjusts the luminance value of the Mth frame image if the second luminance average is greater than the second luminance threshold.
Specifically, when the electronic device performs brightness adjustment on the second region of the Mth frame image, the electronic device compares the second luminance average with the second luminance threshold. Optionally, when the comparison result is that the second luminance average is greater than the second luminance threshold, the electronic device adjusts the luminance value of the second region in the Mth frame image; for example, the electronic device reduces the luminance values of the pixels included in the second region of the Mth frame image, so as to improve the details of the second region and make the Mth frame image easier for the user to view. Optionally, when the comparison result is that the second luminance average is less than or equal to the second luminance threshold, the electronic device does not adjust the luminance value of the second region in the Mth frame image.
The electronic device performs brightness adjustment on the second region in cases including, but not limited to, when the second region occupies a relatively large proportion of the Mth frame image (e.g., greater than a second proportion threshold); in that case, adjusting the second region effectively adjusts the Mth frame image.
In one possible implementation, the luminance value of the Mth frame image is adjusted in the event that the first luminance average is less than the first luminance threshold and the second luminance average is greater than the second luminance threshold.
Specifically, in the case where the electronic device needs to perform brightness adjustment on both the first region and the second region of the Mth frame image, the electronic device compares the first luminance average with the first luminance threshold, and compares the second luminance average with the second luminance threshold. When the comparison result is that the first luminance average is smaller than the first luminance threshold and the second luminance average is larger than the second luminance threshold, the luminance value of the Mth frame image is adjusted. Optionally, when the comparison result is that the first luminance average is greater than or equal to the first luminance threshold, or the second luminance average is less than or equal to the second luminance threshold, the luminance value of the Mth frame image is not adjusted.
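Summarizing the above comparisons, a one-line sketch of this preset condition follows (variable names are illustrative):

```python
def preset_condition_met(first_avg, second_avg,
                         first_threshold, second_threshold):
    # Both must hold: the transition region of the Mth frame is darker
    # than most history frames, and its dark region is brighter.
    return first_avg < first_threshold and second_avg > second_threshold
```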
It should be noted that the first luminance threshold belongs to the smaller values among the M-1 third luminance averages; when the luminance value of the first region in the Mth frame image is smaller than the first luminance threshold, the luminance value of the first region in the Mth frame image is smaller than the majority of the M-1 third luminance averages. Therefore, even if the electronic device increases the luminance value of the first region in the Mth frame image, the difference between the increased luminance value of the first region and the M-1 third luminance averages is not large, so the brightness "jump" phenomenon of the Mth frame image can be avoided.
Wherein the first luminance threshold value belongs to a smaller value in the M-1 third luminance average values, see fig. 6, it can be seen that the integral value in the first luminance section (i.e., the target luminance section corresponding to the first luminance threshold value) in the first function corresponding to the first region is smaller, so that the probability that the M-1 third luminance average values fall in the first luminance section is smaller, thereby explaining that the luminance values of the M-1 third luminance average values distributed in the first luminance section are smaller. The minimum value of the first brightness interval is the left limit of the first function corresponding to the first area, and the brightness values in the first brightness interval are arranged from small to large, which means that the brightness value in the first brightness interval belongs to a smaller value in the M-1 third brightness average values. Since the M-1 third luminance average values have fewer luminance values distributed in the first luminance interval, it can be explained that the maximum value of the first luminance interval is smaller than most of the M-1 third luminance average values, that is, the first luminance threshold value determined based on the first luminance interval belongs to a smaller value among the M-1 third luminance average values.
It should be noted that the second luminance threshold belongs to the larger values among the M-1 fourth luminance averages; when the luminance value of the second region in the Mth frame image is greater than the second luminance threshold, the luminance value of the second region in the Mth frame image is greater than the majority of the M-1 fourth luminance averages. Therefore, even if the electronic device reduces the luminance value of the second region in the Mth frame image, the difference between the reduced luminance value of the second region and the M-1 fourth luminance averages is not large, so the brightness "jump" phenomenon of the Mth frame image can be avoided.
As shown in fig. 7, it can be seen that the integral value of the second luminance threshold value in the second luminance section (i.e., the target luminance section corresponding to the second luminance threshold value) is larger in the first function corresponding to the second region, so that the probability that the M-1 fourth luminance average value falls in the second luminance section is larger, and thus it can be explained that the M-1 fourth luminance average value distribution has more luminance values in the second luminance section. The minimum value of the second brightness interval is the left limit of the first function corresponding to the second area, the brightness values in the second brightness interval are arranged from small to large, and the fact that the M-1 fourth brightness average values are distributed in the second brightness interval is more can indicate that the maximum value of the second brightness interval is larger than the M-1 fourth brightness average values of most of the second brightness interval, namely, the second brightness threshold value determined based on the second brightness interval belongs to a larger value in the M-1 fourth brightness average values.
Step S504, displaying the first image on the preview interface.
Specifically, the electronic device may aggregate the results of the brightness adjustment of the pixels of the Mth frame image to obtain the first image, and then display the first image in the preview interface, as shown, for example, in fig. 12D and fig. 13D.
The embodiment shown in fig. 5 above includes a plurality of possible solutions, one of which is described below for ease of understanding. For example, the scheme provided by the embodiment of the application is described by taking as an example the electronic device adjusting the brightness of the first region and the second region in the Mth frame image. It should be understood that for explanations of some terms, logic, etc. in the scheme shown in fig. 8, reference may be made to the embodiment shown in fig. 5.
Referring to fig. 8, fig. 8 is a flowchart of another image processing method according to an embodiment of the application. The method shown in fig. 8 can be applied to an electronic device having a hardware structure as shown in fig. 3 and a software structure as shown in fig. 4. The method comprises one or more steps from step S801 to step S810, each of which is specifically as follows:
s801, acquiring an M-1 frame image and an Mth frame image in a preview stream.
For example, after the electronic device starts the camera application, the camera may be called to acquire one frame of image at a preset time interval, and the frame is then displayed in the preview interface of the camera application. In the embodiment of the present application, the continuous multi-frame images acquired by the electronic device at intervals of the preset time may be referred to as a preview stream; for example, the preview stream shown in fig. 8 includes images $I_1$, $I_2$ to $I_{M-1}$ (M-1 frames in total) and image $I_M$. The electronic device can update the history frames in a First In, First Out manner. For example, when the electronic device performs brightness calibration on the Mth frame image (i.e., image $I_M$), the electronic device can acquire the previous M-1 frame images (i.e., images $I_1$ to $I_{M-1}$, M-1 frames in total). Alternatively, when the electronic device performs brightness calibration on the (M+1)th frame image (i.e., image $I_{M+1}$), the electronic device can acquire the previous M-1 frame images (i.e., images $I_2$ to $I_M$, M-1 frames in total). In the embodiment of the application, the first-in, first-out manner keeps the shooting scene of the acquired M-1 frame images highly similar to the shooting scene of the Mth frame image, which further improves the accuracy of the target brightness threshold determined based on the M-1 frame images in step S502.
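The first-in, first-out update of the history frames can be sketched as follows (illustrative only; the stored payload, here the per-frame region luminance means, is an assumption):

```python
from collections import deque

M = 16                            # illustrative history size
history = deque(maxlen=M - 1)     # the oldest entry is evicted first

def on_new_frame(third_avg, fourth_avg):
    # Append the (transition-region, dark-region) luminance means of
    # the newest frame; with maxlen set, the deque drops the oldest
    # frame automatically, implementing the first-in, first-out update.
    history.append((third_avg, fourth_avg))
```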
S802, determining a data set of the third brightness average value and a data set of the fourth brightness average value.
Specifically, the electronic device may determine the third luminance average value and the fourth luminance average value by using luminance values corresponding to pixels in the image, so that M-1 third luminance average values corresponding to M-1 frame images respectively form a data set, and M-1 fourth luminance average values corresponding to M-1 frame images respectively form a data set.
The brightness value corresponding to the pixel in the image may be any information capable of reflecting the brightness of the image, which is not limited herein, and the manner in which the electronic device obtains the brightness value is described below by way of example.
In one possible implementation, if the color mode of the M-1 frame images is a color mode having a luminance component, such as the YUV (luminance Y, chrominance U, chroma V) mode, the electronic device may directly obtain the Y value of each pixel in the M-1 frame images, divide the pixels whose Y values are smaller than the first preset threshold and greater than the second preset threshold into the first region, and divide the pixels whose Y values are smaller than or equal to the second preset threshold into the second region. Then, the Y values of the pixels in the first region and in the second region are averaged separately to obtain the third luminance average corresponding to the first region and the fourth luminance average corresponding to the second region.
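A sketch of this region split on a YUV frame follows; the concrete preset thresholds used here (170 and 85 on a 0-255 Y scale) are invented for illustration and are not values given by the embodiment:

```python
import numpy as np

def region_means(y, first_preset=170, second_preset=85):
    # y: 2-D array of per-pixel Y values for one frame.
    transition = y[(y < first_preset) & (y > second_preset)]  # first region
    dark = y[y <= second_preset]                              # second region
    third_avg = float(transition.mean()) if transition.size else None
    fourth_avg = float(dark.mean()) if dark.size else None
    return third_avg, fourth_avg
```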
In another possible implementation, if the color mode of the M-1 frame image is not the color mode having the luminance value component, the color mode of the M-1 frame image may be converted into the color mode having the luminance value component, and then the luminance value corresponding to the color mode having the luminance value component is processed.
For example, in the case where the color mode of the M-1 frame images is a red (R) green (G) blue (B) mode, a hue (H) saturation (S) brightness (B) mode, or the like, the electronic device may convert the color mode of the M-1 frame images into the YUV mode according to a color mode conversion formula, so that the Y value of a pixel in the converted M-1 frame images corresponds to the luminance value of that pixel, and the electronic device may perform the averaging process based on the Y values obtained after the conversion.
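For instance, one common conversion (BT.601 luma weights) can be sketched as follows; the embodiment does not mandate a specific conversion formula:

```python
import numpy as np

def rgb_to_y(rgb):
    # BT.601 luma: Y = 0.299 R + 0.587 G + 0.114 B.
    rgb = np.asarray(rgb, dtype=float)
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
```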
It can be understood that, since the image includes more pixels, and each pixel corresponds to one brightness value, the image includes more brightness values, which is not beneficial to the analysis of the electronic device. The electronic device performs an averaging process on luminance values corresponding to pixels in the M-1 frame image in order to obtain a value that can characterize the overall luminance of the image. For example, the electronic device performs an average process on luminance values corresponding to pixels included in the first area in the image to obtain a third luminance average value, and the luminance of the first area may be represented by the third luminance average value. Therefore, the electronic device can represent the brightness value corresponding to the pixel with a larger number by the brightness average value with a smaller number (for example, the third brightness average value and the fourth brightness average value), and then the operation pressure of the electronic device can be relieved by analyzing and processing the brightness average value with a smaller number.
In one implementation, the electronic device may perform the averaging process to determine the third luminance average according to the following expression:

$$A_i = \frac{1}{N_i}\sum_{j=1}^{N_i} Y_{i,j}$$

where $A_i$ is the third luminance average corresponding to the i-th frame image, and $i$ is any integer from 1 to M-1. $N_i$ denotes the number of pixels included in the transition region of the i-th frame image. $Y_{i,j}$ denotes the luminance value of the j-th pixel included in the transition region of the i-th frame image (e.g., the Y value of the j-th pixel), where $j$ is a positive integer less than or equal to $N_i$. $\sum_{j=1}^{N_i} Y_{i,j}$ denotes the sum of the luminance values of the $N_i$ pixels as $j$ runs from 1 to $N_i$.

Therefore, the electronic device can obtain the M-1 third luminance averages respectively corresponding to the M-1 frame images based on the above expression, and the data set formed by the M-1 third luminance averages is $\{A_1, A_2, \ldots, A_{M-1}\}$.
In one implementation, the electronic device may perform the averaging process to determine the fourth luminance average according to the following expression:

$$B_i = \frac{1}{K_i}\sum_{k=1}^{K_i} Y'_{i,k}$$

where $B_i$ is the fourth luminance average corresponding to the i-th frame image, and $i$ is any integer from 1 to M-1. $K_i$ denotes the number of pixels included in the dark region of the i-th frame image. $Y'_{i,k}$ denotes the luminance value of the k-th pixel included in the dark region of the i-th frame image (e.g., the Y value of the k-th pixel), where $k$ is a positive integer less than or equal to $K_i$. $\sum_{k=1}^{K_i} Y'_{i,k}$ denotes the sum of the luminance values of the $K_i$ pixels as $k$ runs from 1 to $K_i$.

Therefore, the electronic device can obtain the M-1 fourth luminance averages respectively corresponding to the M-1 frame images based on the above expression, and the data set formed by the M-1 fourth luminance averages is $\{B_1, B_2, \ldots, B_{M-1}\}$.
S803, determining a first function corresponding to the first area according to the M-1 third brightness average values.
Specifically, the electronic device may perform density estimation on the M-1 third luminance averages by using a kernel density estimation (kernel density estimation, KDE) algorithm to obtain a kernel density estimation result. The kernel density estimation result is a probability density function estimate of the sample, i.e., the first function of the embodiment of the present application.
For example, referring to fig. 9, fig. 9 is a schematic diagram of the first function corresponding to the first region according to an embodiment of the application. As shown in fig. 9, the horizontal axis of the coordinate axes represents the luminance value, and the vertical axis represents the probability density. The M-1 third luminance averages are distributed on the horizontal axis, e.g., as shown in fig. 9, $A_1$, $A_2$, $A_3$, $A_4$, $A_5$ to $A_{M-1}$, a total of M-1 third luminance averages.
Based on expression (7), the kernel density estimation function $\hat{f}(x)$ is related to one or more of the class of the first kernel function, the length of the bandwidth, the target luminance average, and the like. For example, the class of the first kernel function includes, but is not limited to, a Gaussian kernel, a quadratic kernel (Epanechnikov), a uniform kernel, and the like; the embodiment of the present application does not limit the class of the first kernel function. The description of expression (7) can be found in the following examples.
For example, first, the electronic device may obtain the class and bandwidth of the preset first kernel function; then, on the coordinate axes shown in fig. 9, one first kernel function is applied to each third luminance average, yielding M-1 first kernel functions. For example, as shown in fig. 9, $A_1$, $A_2$, $A_3$, $A_4$, $A_5$ to $A_{M-1}$ each correspond to one first kernel function, and the positions of the center points of the M-1 first kernel functions on the coordinate axes correspond one-to-one to the M-1 third luminance averages; for example, the center point of the first kernel function 1 in fig. 9 corresponds to the first third luminance average ($A_1$). The width of the M-1 first kernel functions is determined according to the length of the width interval (e.g., 2h in expression (3)), i.e., the width of a first kernel function is equal to the bandwidth h. Finally, the electronic device sums the M-1 first kernel functions to obtain the first function corresponding to the first region as shown in fig. 9. Expression (3) and the width interval are described in the following examples.
Optionally, after obtaining the M-1 first kernel functions, the electronic device may further calculate a probability density contribution of the first kernel function at each location, and then sum the probability density contributions to obtain a probability density estimate of the location. Summing each position can result in a first function corresponding to the first region.
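Step S803 (and, analogously, step S804 below) can be sketched by hand as the sum of per-point kernel contributions; Gaussian kernels are assumed here, while the embodiment allows other kernel classes:

```python
import numpy as np

def first_function(averages, h, grid):
    # Place one kernel of bandwidth h on each historical average and
    # sum the probability density contributions at every grid point,
    # normalizing by (M-1) * h so the estimate integrates to one.
    averages = np.asarray(averages, dtype=float)
    u = (grid[:, None] - averages[None, :]) / h
    kernels = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)  # Gaussian K(u)
    return kernels.sum(axis=1) / (averages.size * h)
```

For example, evaluating first_function(third_averages, h, np.linspace(0, 255, 256)) yields a curve of the kind shown in fig. 9.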
S804, determining a first function corresponding to the second area according to the M-1 fourth brightness average values.
Specifically, the electronic device may perform density estimation on the M-1 fourth luminance average values by using a kernel density estimation algorithm, to obtain a kernel density estimation result. The kernel density estimation result is a probability density function estimation of the sample, i.e. a first function of the embodiment of the present application.
For example, referring to fig. 10, fig. 10 is a schematic diagram of a first function corresponding to a second region according to an embodiment of the application. As shown in fig. 10, the horizontal axis represents the luminance value and the vertical axis represents the probability density. The M-1 fourth luminance averages are distributed on the horizontal axis, e.g., as shown in fig. 10, $B_1, B_2, B_3, B_4, B_5, \ldots, B_{M-1}$, a total of M-1 fourth luminance averages.
First, the electronic device may acquire the preset class and bandwidth of the second kernel function, and then apply a second kernel function to each fourth luminance average on the coordinate axis shown in fig. 10, obtaining M-1 second kernel functions; for example, as shown in fig. 10, $B_1, B_2, B_3, B_4, B_5, \ldots, B_{M-1}$ each correspond to one second kernel function. The positions of the center points of the M-1 second kernel functions on the coordinate axis correspond one-to-one to the M-1 fourth luminance averages; for example, the center point of second kernel function 1 in fig. 10 corresponds to the first fourth luminance average ($B_1$). The width of each of the M-1 second kernel functions is determined by the length of the width interval (e.g., 2h in expression (3)), i.e., it is governed by the bandwidth h. Finally, the electronic device sums the M-1 second kernel functions to obtain the first function corresponding to the second region, as shown in fig. 10. Expression (3) and the width interval are described in the examples below.
Optionally, after obtaining the M-1 second kernel functions, the electronic device may instead calculate, at each position, the probability density contribution of each second kernel function and sum these contributions to obtain the probability density estimate at that position; evaluating every position in this way yields the first function corresponding to the second region.
It should be noted that the choice of kernel class has some effect on the final density estimation result, but the results given by different kernels do not differ greatly. For example, when M is 2, the electronic device performs the probability density estimation based on only the 1st frame image, so the result depends strongly on the chosen kernel class. The Gaussian kernel is smooth and well suited to continuous data, but may produce larger estimation errors at the edges; the uniform kernel gives the same weight to all data points and is simple to use, but may yield an insufficiently smooth estimate. However, when the number of the M-1 frame images is sufficiently large (greater than a threshold), the electronic device can comprehensively capture the luminance variation trend of the M-1 frame images through kernel density estimation. This helps smooth the luminance variation and reduces the influence of single-frame noise on the estimation result (i.e., the first function), improving its accuracy, so that the density estimates obtained with different kernels differ little. In addition, an accurate first function provides a reliable basis for subsequently deciding whether to perform brightness correction on the M-th frame image, making the image detail improvement more effective.
The above reduction of single-frame noise can be understood as follows: through the kernel function, the electronic device may filter out target luminance averages that would interfere with the result. For example, suppose that among the M-1 frame images, the shooting scene corresponding to the K-th and (K+1)-th frame images is the second scene, while the shooting scene corresponding to the remaining images and the M-th frame image is the first scene. Since the second scene differs from the first scene, the K-th and (K+1)-th frame images may interfere with the result. The electronic device can filter out the target luminance averages corresponding to these two frames through the kernel function, reducing noise interference and improving the accuracy of the first function.
It should be noted that the above description of determining the first function by the kernel density estimation algorithm is merely an example. Because the result of kernel density estimation (e.g., the first function) is a continuous density curve, it provides a smoother and more accurate probability density estimate; the embodiments of the present application are therefore described using kernel density estimation as an example. Other density estimation algorithms may also be used, which is not limited herein.
S805, determining a first brightness threshold according to a first function corresponding to the first area.
Specifically, the electronic device may integrate the first function corresponding to the first region based on a first integral value to obtain a first brightness interval, and then determine the first brightness threshold from the first brightness interval. For the specific process by which the electronic device determines the first brightness threshold from the first function corresponding to the first region, refer to the embodiment corresponding to fig. 6.
S806, determining a second brightness threshold according to the first function corresponding to the second area.
Specifically, the electronic device may integrate the first function corresponding to the second region based on a second integral value to obtain a second brightness interval, and then determine the second brightness threshold from the second brightness interval. For the specific process by which the electronic device determines the second brightness threshold from the first function corresponding to the second region, refer to the embodiment corresponding to fig. 7.
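One plausible reading of S805 and S806 (the exact procedures live in the embodiments of figs. 6 and 7, which are not reproduced here): accumulate the estimated density until the target integral value is covered and take the boundary of the resulting luminance interval as the threshold. The left-to-right accumulation and the example integral values below are assumptions.

```python
import numpy as np

def threshold_from_density(xs, density, target_integral):
    """Integrate the first function from the left until `target_integral`
    probability mass is covered; return the right endpoint of that
    luminance interval as the threshold (assumed convention)."""
    cdf = np.cumsum(density)
    cdf /= cdf[-1]                       # normalize to a proper CDF
    idx = int(np.searchsorted(cdf, target_integral))
    return float(xs[min(idx, len(xs) - 1)])

# first_threshold  = threshold_from_density(xs, density_first_region, 0.9)
# second_threshold = threshold_from_density(xs, density_second_region, 0.1)
```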
S807, a first luminance average value and a second luminance average value are determined.
Specifically, the electronic device determines a first luminance average value according to an average value of luminance values corresponding to pixels included in a first region in the M-th frame image, and determines a second luminance average value according to an average value of luminance values corresponding to pixels included in a second region. The brightness value corresponding to the pixels contained in the first area is smaller than a first preset threshold value and larger than a second preset threshold value, and the brightness value corresponding to the pixels contained in the second area is smaller than or equal to the second preset threshold value. The process of determining the first luminance average value and the second luminance average value by the electronic device is consistent with the manner of determining the third luminance average value and the fourth luminance average value in step S802, and will not be described herein.
S808, judging whether the first brightness average value is smaller than a first brightness threshold value.
Specifically, after obtaining the first luminance average value, the electronic device compares it with the first luminance threshold. If the first luminance average value is smaller than the first luminance threshold, S809 is executed. If the first luminance average value is greater than or equal to the first luminance threshold, the brightness of the first region in the M-th frame image differs greatly from the brightness of the first regions of the M-1 frame images; in that case the electronic device does not adjust the brightness of the M-th frame image, so as to avoid a brightness jump after adjustment, and may display the M-th frame image in the preview interface. Alternatively, the electronic device may acquire the (M+1)-th frame image and determine whether to adjust its brightness based on the M-1 frames preceding the (M+1)-th frame image.
S809, judging whether the second brightness average value is larger than a second brightness threshold value.
Specifically, after obtaining the second luminance average value, the electronic device compares it with the second luminance threshold. If the second luminance average value is greater than the second luminance threshold and, per S808, the first luminance average value is smaller than the first luminance threshold, then the brightness of the first region in the M-th frame image differs little from that of the first regions of the M-1 frame images, and the brightness of the second region differs little from that of the second regions of the M-1 frame images; the electronic device may then execute S810. If the second luminance average value is smaller than or equal to the second luminance threshold, the brightness of the second region in the M-th frame image differs greatly from that of the second regions of the M-1 frame images; in that case the electronic device does not adjust the brightness of the M-th frame image, so as to avoid a brightness jump caused by the adjustment, and may display the M-th frame image in the preview interface. Alternatively, the electronic device may acquire the (M+1)-th frame image and determine whether to adjust its brightness based on the M-1 frames preceding it.
It should be noted that, the embodiment of the present application does not limit the sequence of executing S808 and S809 by the electronic device, and the electronic device may execute S808 and then S809, or execute S808 and S809 simultaneously.
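The gate formed by S808 to S810 might be sketched as below; the helpers `adjust_brightness` and `display` are hypothetical placeholders, not names from the patent.

```python
def should_adjust(first_avg, second_avg, first_threshold, second_threshold):
    """Adjust the M-th frame only when both S808 and S809 pass;
    otherwise display it unchanged to avoid a brightness jump."""
    return first_avg < first_threshold and second_avg > second_threshold

# if should_adjust(first_avg, second_avg, t_first, t_second):
#     first_image = adjust_brightness(frame_m)   # S810
# else:
#     display(frame_m)                           # show the unadjusted frame
```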
And S810, performing brightness adjustment on the Mth frame image to obtain a first image.
Specifically, when the electronic device obtains, based on S808 and S809, that the first luminance average value is smaller than the first luminance threshold value and the second luminance average value is greater than the second luminance threshold value, the electronic device may perform luminance adjustment on the first area and the second area of the mth frame image respectively, so as to improve the detail luminance of the mth frame image.
For example, in order to match the nonlinear response of the human eye to light intensity, electronic devices typically perform gamma conversion on an image's brightness after capturing it, converting the image's luminance values into nonlinear luminance values. Therefore, if the luminance values of the M-th frame image are nonlinear, the electronic device needs to convert them into linear luminance values by the inverse gamma transformation before adjusting them, and then adjust the linear luminance values. Accordingly, taking the luminance values of the M-th frame image as nonlinear as an example, the embodiment of the present application describes, with reference to fig. 11, the process by which the electronic device adjusts the brightness of the M-th frame image.
Referring to fig. 11, fig. 11 is a flowchart of adjusting brightness of an image according to an embodiment of the application. As shown in fig. 11, the flowchart includes steps S1101 to S1103, and the specific flow is as follows:
S1101, partitioning pixels of an mth frame image.
Illustratively, taking the color mode of the M-th frame image as the YUV mode (Y for luminance, U and V for chrominance) as an example, the process of transforming a nonlinear luminance value into a linear luminance value through the inverse gamma transformation can be written as the following expression:

$Y_i' = (Y_i)^{\gamma}$

Wherein, $Y_i$ is the luminance value corresponding to the i-th pixel in the M-th frame image (e.g., the Y value of the i-th pixel), $\gamma$ is the gamma coefficient, and $Y_i'$ is the result of applying the inverse gamma transformation to $Y_i$. The electronic device can thus obtain the inverse-gamma-transformed result for every pixel contained in the M-th frame image according to this expression.
Illustratively, the electronic device partitions pixels of the mth frame image according to a policy for brightness adjustment of the mth frame image. For example, the strategies for brightness adjustment by the electronic device involve a first region (e.g., a transition region), a second region (e.g., a dark region), and a third region (e.g., a light region). Therefore, the electronic device may divide the pixels included in the M-th frame image by the first region, the second region, and the third region.
For example, when the result of inverse gamma transformation of a pixel is less than or equal to the first transformation threshold, the pixel is divided into a second region. And dividing the pixel into a first area when the result of the inverse gamma transformation of the pixel is larger than a first transformation threshold and smaller than a second transformation threshold. And dividing the pixel into a third region when the result of the inverse gamma transformation of the pixel is greater than or equal to the second transformation threshold. Thus, the electronic device can obtain the pixels corresponding to the first region, the second region and the third region respectively.
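A sketch of S1101, assuming Y values normalized to [0, 1], a gamma coefficient of 2.2, and the two transformation thresholds passed in as parameters; none of these constants are fixed by the patent.

```python
import numpy as np

GAMMA = 2.2  # assumed gamma coefficient

def inverse_gamma(y_nonlinear: np.ndarray) -> np.ndarray:
    """Decode nonlinear Y values in [0, 1] to linear luminance."""
    return np.power(y_nonlinear, GAMMA)

def partition(y_linear: np.ndarray, conv_t1: float, conv_t2: float):
    """Split pixels by the first/second transformation thresholds:
    second region (dark):       y <= conv_t1
    first region (transition):  conv_t1 < y < conv_t2
    third region (bright):      y >= conv_t2
    """
    dark = y_linear <= conv_t1
    trans = (y_linear > conv_t1) & (y_linear < conv_t2)
    bright = y_linear >= conv_t2
    return dark, trans, bright
```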
S1102, performing brightness fitting on the pixels according to different partition results.
Specifically, after the electronic device obtains the pixels corresponding to the first region, the second region and the third region respectively, the electronic device may perform brightness fitting on the pixels according to adjustment policies of different regions.
The electronic device performs luminance fitting on the pixels of the second region using a linear fitting function, i.e., a straight line with a slope smaller than 1, so as to reduce the luminance values corresponding to the pixels of the second region and obtain the luminance fitting result of the second region.
The electronic device performs luminance fitting on the pixels of the first area by using a polynomial fitting method or a table look-up fitting method to increase the luminance value corresponding to the pixels of the first area, so as to obtain a luminance fitting result of the first area.
For example, the electronic device performs luminance fitting on the pixels of the third area through a straight line with a slope of 1, so that the luminance value corresponding to the pixels of the third area remains unchanged, and a luminance fitting result of the third area is obtained.
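The per-region fitting of S1102 could look like the following sketch. The slope 0.9 and the quadratic lift are illustrative stand-ins: the patent requires only a slope smaller than 1 for the second region, a polynomial or look-up-table fit that raises the first region, and a slope of 1 for the third region.

```python
import numpy as np

def fit_brightness(y_linear, dark, trans, bright):
    """Apply the per-region adjustment strategy of S1102 to linear Y."""
    out = y_linear.copy()
    # second region: straight line with slope < 1 suppresses dark luminance
    out[dark] = 0.9 * y_linear[dark]
    # first region: illustrative polynomial that lifts transition luminance
    t = y_linear[trans]
    out[trans] = t + 0.15 * t * (1.0 - t)
    # third region: slope-1 line leaves bright luminance unchanged
    out[bright] = y_linear[bright]
    return np.clip(out, 0.0, 1.0)
```

A production implementation would presumably also keep the three segments continuous at the region boundaries, which this sketch does not enforce.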
And S1103, adjusting the brightness value of the Mth frame image according to the brightness fitting result to obtain a first image.
Specifically, the electronic device adjusts the inverse-gamma-transformed results of the pixels of the first region in the M-th frame image according to the luminance fitting result of the first region, so as to raise the brightness of the first region and enhance its detail display effect. It adjusts the inverse-gamma-transformed results of the pixels of the second region according to the luminance fitting result of the second region, so as to suppress the brightness of the second region and enhance its detail display effect. It adjusts the inverse-gamma-transformed results of the pixels of the third region according to the luminance fitting result of the third region.
Then, the electronic device performs the gamma transformation on the adjusted inverse-gamma-transformed results, converting the linear luminance values back into nonlinear luminance values suited to human viewing.
For example, the electronic device may perform the gamma transformation based on the following expression to obtain the first image:

$Y_i'' = (\hat{Y}_i')^{1/\gamma}$

Wherein, $\hat{Y}_i'$ is the adjusted inverse-gamma-transformed result of the i-th pixel, $\gamma$ is the gamma coefficient, and $Y_i''$ is the luminance value corresponding to the i-th pixel in the first image.
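Finally, S1103 re-encodes the adjusted linear luminance; a sketch continuing the assumptions above (normalized Y, gamma 2.2, and the hypothetical helpers from the earlier sketches).

```python
import numpy as np

GAMMA = 2.2  # same assumed gamma coefficient as above

def gamma_encode(y_linear: np.ndarray) -> np.ndarray:
    """Re-encode adjusted linear luminance into the nonlinear domain."""
    return np.power(y_linear, 1.0 / GAMMA)

# Putting S1101-S1103 together (names from the sketches above):
# y_lin   = inverse_gamma(y_plane / 255.0)               # S1101
# regions = partition(y_lin, CONV_T1, CONV_T2)           # S1101
# y_fit   = fit_brightness(y_lin, *regions)              # S1102
# first_image_y = np.round(255.0 * gamma_encode(y_fit))  # S1103
```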
It should be noted that, when the luminance value of the mth frame image is linear, the electronic device may not perform inverse gamma transformation on the luminance value corresponding to the pixel included in the mth frame image, and perform luminance adjustment on the pixels in different areas of the mth frame image according to the adjustment policy in step S1102, so as to obtain the first image.
The process by which the electronic device determines the first function by means of the kernel density estimation is described in detail below.
In one possible implementation, the electronic device may obtain the first function from the M-1 target luminance averages by means of non-parametric estimation, so as to determine, from the first function, the distribution of the luminance values corresponding to the M-1 frame images. Non-parametric estimation, also called non-parametric testing, is a collective term for methods that perform statistical tests and analysis directly from a priori knowledge of known learning samples (e.g., the M-1 target luminance averages) without considering the original overall distribution (e.g., without considering the distribution of the luminance values corresponding to the M-1 frame images) and without making assumptions about its parameters.
Illustratively, the electronic device determines the second function based on the number of the M-1 target luminance averages that are less than or equal to the target luminance threshold. Since the target luminance threshold is an unknown quantity here, it can be taken as the argument of the second function. For example, the second function can be written as the following expression (1):

$F_n(x) = \frac{1}{n}\sum_{i=1}^{n} I(x_i \le x)$   (1)

Wherein, $x$ is the target luminance threshold (the argument), $x_i$ is the i-th of the n target luminance averages, and $n$ is the number of target luminance averages; in the embodiment of the present application the electronic device collects M-1 frame images, so n is M-1 (i.e., n third luminance averages or n fourth luminance averages).
$I(\cdot)$ in expression (1) is an indicator function, given by the following expression (2):

$I(x_i \le x) = \begin{cases} 1, & x_i \le x \\ 0, & x_i > x \end{cases}$   (2)
As can be seen from expression (2), when the target luminance average $x_i$ is less than or equal to $x$, the value of $I(x_i \le x)$ is 1; when $x_i$ is not less than or equal to $x$, the value is 0. Therefore, based on $\sum_{i=1}^{n} I(x_i \le x)$, the number (i.e., frequency) of the n target luminance averages that are less than or equal to $x$ can be determined.
$F_n(x)$ is the empirical distribution function, i.e., the number of the n target luminance averages less than or equal to $x$ divided by $n$; in non-parametric estimation it is approximately equal to the cumulative distribution function $F(x)$ of the samples. The cumulative distribution function $F(x)$ characterizes the probability that a sample drawn from the whole population (whose size is greater than the n target luminance averages used here to fit the overall distribution) is less than or equal to $x$.
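Expressions (1) and (2) together are the standard empirical distribution function; a brief sketch:

```python
import numpy as np

def empirical_cdf(samples, x):
    """F_n(x) = (1/n) * #{ x_i <= x }, per expressions (1) and (2)."""
    samples = np.asarray(samples, dtype=float)
    return np.count_nonzero(samples <= x) / samples.size

# With n = 3 target luminance averages:
# empirical_cdf([62.0, 64.5, 63.1], 63.5)  # -> 2/3 (two averages <= 63.5)
```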
The cumulative distribution function $F(x)$ has a probability density function $f(x)$, which is given by the following expression (3):

$f(x) = F'(x) = \lim_{h \to 0} \frac{F(x+h) - F(x-h)}{2h}$   (3)

Wherein, $f(x)$ is the derivative of $F(x)$ with respect to $x$: $F(x+h) - F(x-h)$ represents the increment of the output value of the cumulative distribution function, $2h$ represents the increment of the argument, and their ratio represents the rate of change of $F(x)$ over the width interval $[x-h, x+h]$. $\lim_{h \to 0}$ denotes taking the limit as the increment of the argument approaches 0.
In one possible implementation, the electronic device may approximate the probability density function $f(x)$ by a kernel density estimation function $\hat{f}(x)$; see the following expression (4):

$\hat{f}(x) \approx \frac{F(x+h) - F(x-h)}{2h}$   (4)

Wherein, for the explanations of $F(x+h)$ and $F(x-h)$, see expression (3) above; in the kernel density estimation function, $h$ is the bandwidth (also called the window width) of the kernel function, and the explanation of the bandwidth can be found in the description of the kernel function below.
Since the second function $F_n(x)$ of expression (1) is approximately equal to the cumulative distribution function $F(x)$, $F_n(x+h)$ and $F_n(x-h)$ can be substituted for $F(x+h)$ and $F(x-h)$ when calculating expression (4), which yields expression (5):

$\hat{f}(x) = \frac{F_n(x+h) - F_n(x-h)}{2h} = \frac{1}{2nh}\sum_{i=1}^{n} I(x-h \le x_i \le x+h)$   (5)

Wherein, $n$, $h$, $x$, $x_i$, and $I(\cdot)$ are as described in the expressions above. The indicator function can also be written in terms of a kernel function, as in the following expression (6):
$K(u) = \begin{cases} 1, & -1 \le u \le 1 \\ 0, & u < -1 \ \text{or} \ u > 1 \end{cases}$   (6)

In expression (6), $K(\cdot)$ is the kernel function, i.e., the third function in the embodiment of the present application, with $u = \frac{x - x_i}{h}$. When $-1 \le u \le 1$, the kernel function takes the value 1; when $u$ is smaller than -1 or larger than 1, the kernel function takes the value 0.
Therefore, the kernel density estimation function $\hat{f}(x)$ in expression (5) can also be expressed in terms of the kernel function, as in the following expression (7):

$\hat{f}_h(x) = \frac{1}{2nh}\sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right)$   (7)

Wherein, $h$ represents the bandwidth of the kernel function and determines the width of the kernel function; $K\left(\frac{x - x_i}{h}\right)$ is the kernel function whose center point location on the coordinate axis is determined by $x_i$, the i-th of the n target luminance averages; and $\sum_{i=1}^{n}$ denotes summing the n kernel functions.
From expression (7), the electronic device can determine n kernel functions from the n target luminance averages; that is, M-1 third functions are determined from the M-1 target luminance averages obtained in the embodiment of the present application.
Then, the kernel density estimation function is determined by superimposing the n kernel functions (i.e., the M-1 third functions). Since the kernel density estimation function $\hat{f}_h(x)$ is approximately equal to the probability density function $f(x)$, the first function can be determined by superimposing the M-1 third functions.
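Putting expressions (5) to (7) together, a from-scratch sketch of the estimator with the boxcar kernel of expression (6), where K takes the value 1 on [-1, 1] and the 1/(2nh) factor provides the normalization (each sample contributes total mass 2h/(2nh) = 1/n, so the estimate integrates to 1):

```python
import numpy as np

def kde_boxcar(samples, xs, h):
    """f_hat(x) = 1/(2nh) * sum_i K((x - x_i) / h), K(u) = 1 for |u| <= 1."""
    samples = np.asarray(samples, dtype=float)[None, :]  # shape (1, n)
    xs = np.asarray(xs, dtype=float)[:, None]            # shape (m, 1)
    u = (xs - samples) / h
    K = (np.abs(u) <= 1.0).astype(float)                 # expression (6)
    return K.sum(axis=1) / (2.0 * samples.size * h)      # expression (7)
```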
The process of adjusting the image brightness of the electronic device will be described with reference to fig. 12A to 12D.
Fig. 12A is a system desktop of an electronic device, and fig. 12B, fig. 12C, and fig. 12D are a set of preview interfaces of a camera application provided by an embodiment of the present application.
Fig. 12A is a system desktop 1102 of the electronic device 100 according to an embodiment of the present application. As shown in fig. 12A, the system desktop 1102 may include a status bar 1101, a page indicator 1103, and a plurality of application icons.
Among other things, status bar 1101 may include one or more signal strength indicators of mobile communication signals (also called cellular signals), such as 5th Generation Mobile Communication Technology (5G) signals, Wireless Fidelity (Wi-Fi) signal strength indicators, a battery status indicator, a time indicator (e.g., 8:00), and the like.
The page indicator 1103 may be used to indicate a positional relationship of a currently displayed page with other pages.
The plurality of application icons may include a time application icon (e.g., 08:00), a date application icon (e.g., January 1, Friday), a weather application icon (e.g., 5 ℃), an application marketplace application icon, a memo application icon, a mall application icon, a browser application icon, a phone application icon, an information application icon, a camera application icon, a settings application icon, and the like. Not limited to the above icons, the system desktop 1102 may also include other application icons, which are not listed here. Multiple application icons may be distributed across multiple pages. The page indicator 1103 may be used to indicate which of the multiple pages carrying the applications the user is currently viewing. The user can browse other pages through left and right sliding touch operations.
It will be appreciated that the user interface of fig. 12A and the following description illustrate one possible user interface style of an electronic device, such as a cell phone, and should not be construed as limiting the embodiments of the application.
As shown in fig. 12A, the electronic device 100 may receive a user operation for starting the camera application, for example, an operation of clicking a desktop icon of the camera application, and in response to the operation, the electronic device may display a preview interface of the camera application as shown in fig. 12B.
Fig. 12B illustrates a user interface for a capture and display service provided by an embodiment of the present application, also referred to as preview interface 103. In the preview interface 103 shown in fig. 12B, the electronic device 100 displays an image 102, a function selection area 1031, a mode selection area 1036, a zoom function 1032, a gallery 1033, a photographing button 1034, and a switching button 1035. The function selection area 1031 includes at least one function setting button. Exemplary function setting buttons include, but are not limited to, a visual button, a flash button, an Artificial Intelligence (AI) camera button, a color button, and a settings button. Optionally, if the electronic apparatus 100 detects that the user touches any function setting button in the function selection area 1031, it configures the corresponding function according to the touched button; for example, if it detects that the user touches the flash button, it performs the flash setting. Illustratively, the mode selection area 1036 includes at least one shooting mode button, including but not limited to an iris mode button, a night scene mode button, a portrait mode button, a photo mode button, a video mode button, a professional mode button, a more mode button, and the like. Optionally, if the electronic apparatus 100 detects that the user touches any shooting mode button in the mode selection area 1036, it switches to the corresponding shooting mode. Illustratively, the gallery 1033 provides the user with access to the captured pictures: the user clicks the gallery 1033 to enter the interface for reviewing captured pictures. Illustratively, the photographing button 1034 is used to take a photograph, and the switching button 1035 is used to switch between the front and rear cameras when shooting.
As shown in fig. 12B, after the electronic apparatus 100 switches from the system desktop 1102 shown in fig. 12A to the preview interface 103 shown in fig. 12B in response to a user operation, the electronic apparatus 100 may receive a user operation to switch modes, for example, left/right sliding in the mode selection area 1036, and change the currently used photographing mode according to the operation. By default, the electronic device first uses the "photo" mode. In one implementation, if the currently used shooting mode of the electronic device 100 is not the "photo" mode, the electronic device 100 may switch to the "photo" mode when receiving a left/right sliding operation that drags the mode selection area 1036 and stops the selection indicator at the "photo" option.
As shown in fig. 12B, in the "photo" mode, the electronic device 100 captures an image corresponding to the shooting scene a shown in fig. 12B, and then displays the captured image on the preview interface of the camera application. For example, the electronic device 100 captures the image 102 shown in fig. 1A, and displays the image 102 in the preview interface 103 shown in fig. 12B. For the descriptions of the shooting scene a, the table 1011, the cup 1012, the box 1013 included in the shooting scene a, and the image 102, refer to the above embodiment corresponding to fig. 1A, which is not repeated here.
In one possible implementation, the electronic device determines, after the acquired image 102, that there is a difference in brightness of the image 102 from the actual brightness of the shooting scene a based on the image 102. The electronic device may adjust the brightness value of the image after the image corresponding to the shooting scene a is acquired based on the difference between the brightness of the image 102 and the real brightness, and then display the adjusted image in the preview interface, so as to reduce the difference between the brightness of the image and the real brightness by adjusting the brightness value.
In one possible implementation, the electronic device acquires an image a after the image 102, where the image a is an mth frame image in the embodiment of the present application, and the electronic device needs to determine whether to perform brightness adjustment on the image a. Firstly, the electronic equipment acquires M-1 frame images before the image A, and determines a target brightness threshold according to brightness values respectively corresponding to the M-1 frame images. Then, the target luminance threshold value is compared with the luminance value of the image a. For example, based on image a and image 102, the average luminance of the dark area of image a is smaller than the average luminance of the dark area of image 102, failing to satisfy the preset condition. Therefore, the electronic device does not adjust the brightness of the image a, and displays the image a (i.e., the image 106 shown in fig. 12C) in the preview interface 107 shown in fig. 12C. The dark areas of the image a and the image 102 may be referred to the explanation of the dark areas of the shooting scene a in fig. 1A, and will not be described here.
As shown in fig. 12C, an image 106 is displayed in the preview interface 107. It can be seen that the luminance difference between the image 106 shown in fig. 12C and the image 102 shown in fig. 12B is a first luminance difference, the luminance difference between the image 104 (the image after the luminance adjustment of the image a) and the image 102 shown in fig. 1B described above is a second luminance difference, and the first luminance difference is smaller than the second luminance difference. Optionally, the difference in brightness between table 1061 in image 106 and table 1021 in image 102 is less than the difference in brightness between table 1041 in image 104 and table 1021 in image 102. Therefore, when the difference between the M-th frame image (e.g., image a) and the M-1 frame image (e.g., image 102) is large, the brightness "jump" phenomenon can be reduced if the electronic device does not perform brightness adjustment on the M-th frame image.
In one possible implementation, the electronic device, after acquiring image 106, acquires image C for shooting scene a shown in fig. 12B. The image C is an mth frame image in the embodiment of the present application, and the electronic device needs to determine whether to perform brightness adjustment on the image C. Firstly, the electronic device acquires an M-1 frame image before the image C, and determines a target brightness threshold (comprising a first brightness threshold corresponding to a first area and a second brightness threshold corresponding to a second area) according to brightness values respectively corresponding to the M-1 frame image. Then, the target luminance threshold value is compared with the luminance value of the image C. If the first brightness average value corresponding to the first area in the image C is smaller than the first brightness threshold value and the second brightness average value corresponding to the second area is larger than the second brightness threshold value, the electronic device adjusts the brightness of the M-th frame image. The electronic device then displays the adjusted image C (i.e., image 108) in the preview interface 109 as shown in fig. 12D.
As shown in fig. 12D, the preview interface 109 displays the image 108, and it can be seen that the brightness difference between the image 108 and the images 106 and 102 is smaller, and the brightness "jump" phenomenon does not occur. Further, the brightness of the image 108 approaches the actual brightness of the shooting scene a shown in fig. 12B. For example, the luminance of the table 1081 in the image 108 approaches the table 1011 in the shooting scene a shown in fig. 12B, thereby realizing correction of the image luminance value.
The process of adjusting the image brightness of the electronic device will be described with reference to fig. 13A to 13D.
Fig. 13A is a system desktop of an electronic device, and fig. 13B, fig. 13C, and fig. 13D are another set of preview interfaces of a camera application provided by an embodiment of the present application.
Fig. 13A is a system desktop 210 of the electronic device 100 according to an embodiment of the present application. As shown in fig. 13A, the system desktop 210 may include a status bar 1101, a page indicator 1103, and a plurality of application icons. The status bar 1101, the page indicator 1103, and the plurality of application icons included in the system desktop 210 may be referred to in the related description of fig. 12A, and will not be described herein.
It will be appreciated that the user interface of fig. 13A and the following description illustrate one possible user interface style of an electronic device, such as a cell phone, and should not be construed as limiting the embodiments of the present application.
As shown in fig. 13A, the electronic device 100 may receive a user operation for starting the camera application, for example, an operation of clicking a desktop icon of the camera application, and in response to the operation, the electronic device may display a preview interface of the camera application as shown in fig. 13B.
Fig. 13B illustrates a user interface, also referred to as preview interface 203, for a capture and display service provided by an embodiment of the present application. In the preview interface 203 shown in fig. 13B, the electronic device 100 displays an image 202, a function selection area 1031, a mode selection area 1036, a zoom function 1032, a gallery 1033, a photographing button 1034, and a switching button 1035. The function selection area 1031, the mode selection area 1036, the zoom function 1032, the gallery 1033, the photographing button 1034, and the switching button 1035 may be described with reference to fig. 12B, and will not be described herein.
As shown in fig. 13B, after the electronic apparatus 100 switches from the system desktop 210 shown in fig. 13A to the preview interface 203 shown in fig. 13B in response to a user operation, the electronic apparatus 100 may receive a user operation to switch modes, for example, left/right sliding in the mode selection area 1036, and change the currently used photographing mode according to the operation. By default, the electronic device first uses a "photo" mode.
As shown in fig. 13B, in the "photo" mode, an image frame captured by the camera for the shooting scene B, for example, the image 202 shown in fig. 2A described above, is displayed in the preview interface 203. For the descriptions of the shooting scene B, the desktop 2011, the box 2012, and the notebook 2013 included in the shooting scene B, and the image 202, refer to the above embodiment corresponding to fig. 2A, which is not repeated here.
In one possible implementation, the electronic device determines, after the acquired image 202, that there is a difference in brightness of the image 202 from the actual brightness of the shooting scene B based on the image 202. The electronic device may adjust the brightness value of the image after the image corresponding to the shooting scene B is acquired based on the difference between the brightness of the image 202 and the real brightness, and then display the adjusted image in the preview interface, so as to reduce the difference between the brightness of the image and the real brightness by adjusting the brightness value.
In one possible implementation, the electronic device acquires an image B after the image 202, where the image B is the M-th frame image in the embodiment of the present application, and the electronic device needs to determine whether to perform brightness adjustment on the image B. First, the electronic device acquires the M-1 frame images before the image B, and determines a target brightness threshold according to the brightness values respectively corresponding to the M-1 frame images. Then, the target brightness threshold is compared with the brightness value of the image B. For example, the average brightness of the transition region of image B is greater than the average brightness of the transition region of image 202, so the preset condition is not satisfied. Therefore, the electronic device does not adjust the brightness of the image B, and displays the image B (i.e., the image 206 shown in fig. 13C) in the preview interface 207 shown in fig. 13C. For the transition regions of the image B and the image 202, refer to the explanation of the transition region of the shooting scene B in fig. 2A.
As shown in fig. 13C, an image 206 is displayed in the preview interface 207. It can be seen that the luminance difference between the image 206 shown in fig. 13C and the image 202 shown in fig. 13B is a third luminance difference, the luminance difference between the image 204 shown in fig. 2B (the image after the luminance adjustment of the image B) and the image 202 described above is a fourth luminance difference, and the third luminance difference is smaller than the fourth luminance difference. Optionally, the difference in brightness between notebook 2061 in image 206 and notebook 2021 in image 202 is less than the difference in brightness between notebook 2041 in image 204 and notebook 2021 in image 202. Therefore, when the difference between the M-th frame image (e.g., image B) and the M-1 frame image (e.g., image 202) is large, the brightness "jump" phenomenon can be reduced if the electronic device does not perform brightness adjustment on the M-th frame image.
In one possible implementation, the electronic device acquires image D for shooting scene B shown in fig. 13B after acquiring image 206. The image D is an mth frame image in the embodiment of the present application, and the electronic device needs to determine whether to perform brightness adjustment on the image D. Firstly, the electronic device acquires M-1 frame images before the image D, and determines a target brightness threshold (comprising a first brightness threshold corresponding to a first area and a second brightness threshold corresponding to a second area) according to brightness values corresponding to the M-1 frame images. Then, the target luminance threshold value is compared with the luminance value of the image D. If the first brightness average value corresponding to the first area in the image D is smaller than the first brightness threshold value and the second brightness average value corresponding to the second area is larger than the second brightness threshold value, the electronic device adjusts the brightness of the M-th frame image. The electronic device then displays the adjusted image D (i.e., image 208) in a preview interface 209 as shown in fig. 13D.
As shown in fig. 13D, the preview interface 209 displays the image 208, and it can be seen that the brightness differences between the image 208 and the images 206 and 202 are smaller, and the brightness "jump" phenomenon does not occur. Further, the brightness of the image 208 approaches the true brightness of the shooting scene B shown in fig. 13B. For example, the brightness of the notebook 2081 in the image 208 approaches the notebook 2013 in the shooting scene B shown in fig. 13B, thereby realizing correction for the image brightness value.
It should be understood that each step in the above method embodiments provided by the present application may be implemented by an integrated logic circuit of hardware in a processor or an instruction in software form. The steps of the method disclosed in connection with the embodiments of the present application may be embodied directly in a hardware processor for execution, or in a combination of hardware and software modules in the processor for execution.
The application also provides an electronic device, which may include a memory and a processor. Wherein the memory is operable to store a computer program and the processor is operable to invoke the computer program in the memory to cause the electronic device to perform the method of any of the embodiments described above.
The application also provides a chip system comprising at least one processor for implementing the functions involved in the method executed by the electronic device in any of the above embodiments.
In one possible design, the system on a chip further includes a memory to hold program instructions and data, the memory being located either within the processor or external to the processor.
The chip system may be formed of a chip or may include a chip and other discrete devices.
Alternatively, the processor in the system-on-chip may be one or more. The processor may be implemented in hardware or in software. When implemented in hardware, the processor may be a logic circuit, an integrated circuit, or the like. When implemented in software, the processor may be a general purpose processor, implemented by reading software code stored in a memory.
Alternatively, the memory in the system-on-chip may be one or more. The memory may be integral with the processor or separate from the processor, and embodiments of the present application are not limited. The memory may be a non-transitory processor, such as a ROM, which may be integrated on the same chip as the processor, or may be separately provided on different chips, and the type of memory and the manner of providing the memory and the processor are not particularly limited in the embodiments of the present application.
Illustratively, the chip system may be a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a system on chip (SoC), a central processing unit (CPU), a network processor (NP), a digital signal processor (DSP), a microcontroller unit (MCU), a programmable logic device (PLD), or another integrated chip.
The present application also provides a computer program product comprising a computer program (which may also be referred to as code, or instructions) which, when executed, causes a computer to perform the method performed by the electronic device in any of the embodiments described above.
The present application also provides a computer-readable storage medium storing a computer program (which may also be referred to as code, or instructions). The computer program, when executed, causes a computer to perform the method performed by the electronic device in any of the embodiments described above.
The embodiments of the present application may be arbitrarily combined to achieve different technical effects.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk, SSD), etc.
Those of ordinary skill in the art will appreciate that all or part of the above-described method embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the processes of the above-described method embodiments. The storage medium includes a ROM, a random access memory (RAM), a magnetic disk, an optical disk, or various other media capable of storing program code.
In summary, the foregoing description is only exemplary embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made according to the disclosure of the present invention should be included in the protection scope of the present invention.
Claims (11)
1. An image processing method, the method comprising:
acquiring an M-1 frame image and an M-th frame image, wherein M is an integer greater than 1;
Determining a target brightness threshold value based on brightness values respectively corresponding to the M-1 frame images;
adjusting the brightness value of the Mth frame image according to the target brightness threshold value to obtain a first image;
And displaying the first image on a preview interface.
2. The method of claim 1, wherein the target luminance threshold comprises a first luminance threshold and a second luminance threshold, and wherein adjusting the luminance value of the Mth frame image according to the target luminance threshold comprises:
when a first brightness average value of the Mth frame image is smaller than the first brightness threshold value, adjusting the brightness value of the Mth frame image, wherein the first brightness average value is determined according to the brightness value corresponding to the pixel contained in a first area in the Mth frame image, and the brightness value corresponding to the pixel contained in the first area is smaller than a first preset threshold value and larger than a second preset threshold value;
When a second brightness average value of the Mth frame image is larger than the second brightness threshold value, adjusting the brightness value of the Mth frame image, wherein the second brightness average value is determined according to the brightness value corresponding to the pixel contained in a second area in the Mth frame image, and the brightness value corresponding to the pixel contained in the second area is smaller than or equal to the second preset threshold value;
and adjusting the brightness value of the M-th frame image under the condition that the first brightness average value is smaller than the first brightness threshold value and the second brightness average value is larger than the second brightness threshold value, wherein the first brightness threshold value is larger than the second brightness threshold value.
3. The method according to claim 2, wherein determining the target luminance threshold based on the luminance values respectively corresponding to the M-1 frame images comprises:
respectively obtaining brightness values corresponding to pixels contained in the first region in the M-1 frame image and brightness values corresponding to pixels contained in the second region in the M-1 frame image;
respectively carrying out average processing on brightness values corresponding to pixels contained in the first region in the M-1 frame image to obtain M-1 third brightness average values;
respectively carrying out average processing on brightness values corresponding to pixels contained in the second region in the M-1 frame image to obtain M-1 fourth brightness average values;
and determining the target brightness threshold according to M-1 target brightness averages, wherein the target brightness averages comprise the third brightness average and the fourth brightness average, the third brightness average is used for determining the first brightness threshold, and the fourth brightness average is used for determining the second brightness threshold.
4. A method according to claim 3, wherein said determining said target luminance threshold from M-1 target luminance averages comprises:
determining a first function according to the M-1 target brightness average value;
And determining the target brightness threshold according to the integral value of the brightness value corresponding to the M-1 frame image in the first function.
5. The method according to claim 4, wherein determining the target luminance threshold value from the integral value of the luminance values corresponding to the M-1 frame images, respectively, in the first function includes:
integrating the first function based on a target integral value to determine a target brightness interval, wherein the target brightness interval comprises a plurality of brightness values, and the target integral value is used for representing the probability of brightness values corresponding to the M-1 frame images in the target brightness interval;
and determining the target brightness threshold according to the target brightness interval.
6. The method of claim 4 or 5, wherein said determining a first function from said M-1 target luminance averages comprises:
Determining a second function according to the number of the target brightness thresholds which are smaller than or equal to the M-1 target brightness average values, wherein the second function is used for representing the probability of the target brightness thresholds which are smaller than or equal to the M-1 target brightness average values;
Determining M-1 third functions according to the change rate of the second functions in a width interval, wherein the positions of central points of the M-1 third functions in coordinate axes are respectively in one-to-one correspondence with the M-1 target brightness average values, and the width of any one of the M-1 third functions is determined according to the length of the width interval;
The first function is determined by superimposing the M-1 third functions.
7. The method of claim 2, wherein said adjusting the luminance value of the Mth frame image in accordance with the target luminance threshold value comprises:
increasing a brightness value corresponding to a pixel contained in the first region in the M-th frame image under the condition that the first brightness average value is smaller than the first brightness threshold value;
reducing a brightness value corresponding to a pixel contained in the second region in the M-th frame image under the condition that the second brightness average value is larger than the second brightness threshold value;
And when the first brightness average value is smaller than the first brightness threshold value and the second brightness average value is larger than the second brightness threshold value, increasing the brightness value corresponding to the pixel contained in the first area in the Mth frame image, and reducing the brightness value corresponding to the pixel contained in the second area in the Mth frame image.
8. An electronic device comprising one or more processors and one or more memories, wherein the one or more memories are coupled to the one or more processors, the one or more memories to store computer program code comprising computer instructions that the one or more processors invoke to cause the electronic device to perform the method of any of claims 1-7.
9. A chip system for application to an electronic device, the chip system comprising one or more processors for invoking computer instructions to cause the electronic device to perform the method of any of claims 1 to 7.
10. A computer program product comprising instructions which, when run on an electronic device, cause the electronic device to perform the method of any one of claims 1 to 7.
11. A computer readable storage medium comprising instructions which, when run on an electronic device, cause the electronic device to perform the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411598877.5A CN119211736B (en) | 2024-11-11 | 2024-11-11 | Image processing method and related device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411598877.5A CN119211736B (en) | 2024-11-11 | 2024-11-11 | Image processing method and related device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN119211736A true CN119211736A (en) | 2024-12-27 |
CN119211736B CN119211736B (en) | 2025-04-11 |
Family
ID=94054827
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202411598877.5A Active CN119211736B (en) | 2024-11-11 | 2024-11-11 | Image processing method and related device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN119211736B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070115372A1 (en) * | 2005-11-24 | 2007-05-24 | Cheng-Yu Wu | Automatic exposure control method and automatic exposure compensation apparatus |
CN101651786A (en) * | 2008-08-14 | 2010-02-17 | 深圳华为通信技术有限公司 | Method for restoring brightness change of video sequence and video processing equipment |
CN111225160A (en) * | 2020-01-17 | 2020-06-02 | 中国科学院西安光学精密机械研究所 | Automatic exposure control method based on image multi-threshold control |
CN113923373A (en) * | 2021-09-30 | 2022-01-11 | 北京地平线信息技术有限公司 | Image flicker detection method and detection device |
CN117241145A (en) * | 2022-06-15 | 2023-12-15 | 荣耀终端有限公司 | Terminal device and method for creating/displaying HDR image |
-
2024
- 2024-11-11 CN CN202411598877.5A patent/CN119211736B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070115372A1 (en) * | 2005-11-24 | 2007-05-24 | Cheng-Yu Wu | Automatic exposure control method and automatic exposure compensation apparatus |
CN101651786A (en) * | 2008-08-14 | 2010-02-17 | 深圳华为通信技术有限公司 | Method for restoring brightness change of video sequence and video processing equipment |
CN111225160A (en) * | 2020-01-17 | 2020-06-02 | 中国科学院西安光学精密机械研究所 | Automatic exposure control method based on image multi-threshold control |
CN113923373A (en) * | 2021-09-30 | 2022-01-11 | 北京地平线信息技术有限公司 | Image flicker detection method and detection device |
CN117241145A (en) * | 2022-06-15 | 2023-12-15 | 荣耀终端有限公司 | Terminal device and method for creating/displaying HDR image |
Also Published As
Publication number | Publication date |
---|---|
CN119211736B (en) | 2025-04-11 |
Similar Documents
Publication | Title
---|---
CN109829864B (en) | Image processing method, device, equipment and storage medium
US20230043815A1 (en) | Image Processing Method and Electronic Device
CN112598594A (en) | Color consistency correction method and related device
CN112287852B (en) | Face image processing method, face image display method, face image processing device and face image display equipment
CN112887582A (en) | Image color processing method and device and related equipment
US20250150707A1 (en) | Photographing frame rate control method, electronic device, chip system, and readable storage medium
CN113810603A (en) | Point light source image detection method and electronic device
CN116916151B (en) | Photography methods, electronic equipment and storage media
CN113873083A (en) | Duration determination method and device, electronic equipment and storage medium
CN112184581B (en) | Image processing method, device, computer equipment and medium
CN115330610B (en) | Image processing method, device, electronic device and storage medium
CN112634155A (en) | Image processing method, image processing device, electronic equipment and storage medium
WO2022267608A1 (en) | Exposure intensity adjusting method and related apparatus
CN114172596B (en) | Channel noise detection method and related device
CN113781959A (en) | Interface processing method and device
CN119211736B (en) | Image processing method and related device
CN115460343B (en) | Image processing method, device and storage medium
CN114390195B (en) | Automatic focusing method, device, equipment and storage medium
CN113891008B (en) | Exposure intensity adjusting method and related equipment
CN108881739B (en) | Image generation method, device, terminal and storage medium
CN117710265B (en) | Image processing method and related equipment
CN117395495B (en) | Image processing method and electronic device
CN119255115B (en) | Image generation method, electronic device, and storage medium
CN118509715B (en) | Light spot display method, device and electronic equipment
CN115908221B (en) | Image processing method, electronic device and storage medium
Legal Events
Code | Title | Description
---|---|---
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
CB02 | Change of applicant information | Country or region after: China; Address after: Unit 3401, unit a, building 6, Shenye Zhongcheng, No. 8089, Hongli West Road, Donghai community, Xiangmihu street, Futian District, Shenzhen, Guangdong 518040; Applicant after: Honor Terminal Co., Ltd.; Address before: 3401, unit a, building 6, Shenye Zhongcheng, No. 8089, Hongli West Road, Donghai community, Xiangmihu street, Futian District, Shenzhen, Guangdong; Applicant before: Honor Device Co., Ltd.; Country or region before: China
GR01 | Patent grant |