Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present application are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the application and should not be taken as limiting the application with respect to other embodiments that are not detailed herein.
In the description that follows, specific embodiments of the present application will be described with reference to steps and symbols executed by one or more computers, unless otherwise indicated. Accordingly, these steps and operations will at times be referred to as being performed by a computer, where the computer performs operations involving a processing unit of the computer on electronic signals representing data in a structured form. These operations transform the data or maintain it at locations in the computer's memory system, which may be reconfigured or otherwise altered in a manner well known to those skilled in the art. The data is maintained in a data structure, which is a physical location of the memory that has particular properties defined by the data format. However, while the principles of the application are described in the foregoing language, this is not intended to limit the application to the specific forms set forth herein, and it will be recognized by those of ordinary skill in the art that various of the steps and operations described below may also be implemented in hardware.
The terms "first", "second", and "third", etc. in this application are used to distinguish between different objects and not to describe a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or modules is not limited to only those steps or modules listed, but rather, some embodiments may include other steps or modules not listed or inherent to such process, method, article, or apparatus.
Referring to fig. 1, fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present disclosure. The image processing method provided by the embodiment of the application is applied to the electronic equipment, and the specific flow can be as follows:
step 101, obtaining a preview image of a current shooting scene.
In an embodiment, the image processing method may be implemented in a scene of taking a picture on an electronic device. When a user wants to take a picture, the imaging device of the electronic device is started, where the imaging device may be a front camera, a rear camera, a dual camera, and the like. After the imaging device of the electronic device is started, the imaging device enters a photographing preview mode, the photographed object is displayed in a display window of the electronic device, and the picture displayed in the display window at this time is defined as a preview image.
The imaging device generally comprises five hardware parts: a housing (with a motor), a lens, an infrared filter, an image sensor (e.g., a CCD or CMOS sensor), and a flexible printed circuit board (FPCB). In the shooting preview mode, while the preview image is displayed, the motor drives the lens to move, and the photographed object is imaged on the image sensor through the lens. The image sensor converts the optical signal into an electric signal through photoelectric conversion and transmits the electric signal to the image processing circuit for subsequent processing. The image processing circuit may be implemented using hardware and/or software components, and may include various processing units that define an ISP (Image Signal Processing) pipeline.
Step 102, performing region segmentation processing on the preview image to obtain a plurality of scene regions of the preview image.
In an embodiment, the plurality of scene areas in the preview image may include a foreground area and a background area. In practical applications, a subject to be photographed and a background are often included in a viewing range when a user photographs through an electronic device, wherein the subject to be photographed may be a foreground region, and the background may be a background region.
The region segmentation processing may be performed on the preview image in various ways. In an embodiment, the regions may be segmented according to depth information. Specifically, the depth information of the preview image may be obtained first. In general, if the depth information indicates that an object is close to the plane where the primary and secondary cameras are located, i.e., its depth value is small, the object may be determined to be foreground; if the depth information indicates that an object is far from that plane, i.e., its depth value is large, the object may be determined to be background. In this embodiment, a depth value of a target object may be determined in the preview image, where the target object may be the object corresponding to the central point of the preview image or the object corresponding to the focus point (for example, a position where the user touches the screen). The preview image is then divided into a foreground region and a background region according to the depth value of the target object: for example, a depth value range is generated around the depth value of the target object, objects within the range are determined to be foreground, and objects outside the range are determined to be background, so as to obtain the foreground region and the background region of the preview image. That is, the step of performing the region segmentation processing on the preview image to obtain the plurality of scene regions of the preview image includes:
acquiring depth information of the preview image;
determining a depth value of a target object in the preview image according to the depth information;
and segmenting the preview image into a foreground region and a background region according to the depth value of the target object.
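The three steps above can be sketched as follows; the depth map, the target coordinates, the band half-width, and the function name are illustrative assumptions for this sketch, not values taken from the disclosure.

```python
import numpy as np

def segment_by_depth(depth_map, target_xy, band=0.5):
    """Split a depth map into foreground/background masks around a target object.

    depth_map: 2-D array of per-pixel depth values (metres).
    target_xy: (row, col) of the target object, e.g. the image centre
               or a point the user touched on the screen.
    band: half-width of the depth range treated as foreground
          (an illustrative tuning constant).
    """
    target_depth = depth_map[target_xy]
    lo, hi = target_depth - band, target_depth + band
    # Pixels within the depth range around the target are foreground.
    foreground = (depth_map >= lo) & (depth_map <= hi)
    background = ~foreground
    return foreground, background

# Toy 4x4 depth map: a near object (1 m) on a far background (5 m).
depth = np.full((4, 4), 5.0)
depth[1:3, 1:3] = 1.0
fg, bg = segment_by_depth(depth, target_xy=(1, 1))
```

A real implementation would operate on a dense depth map produced by the dual cameras, but the thresholding logic is the same.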
The depth information of the preview image can be acquired by the two cameras. Specifically, because there is a certain distance between the primary camera and the secondary camera, the two cameras have parallax, and images captured by the different cameras differ. The main image is captured by the primary camera and the sub-image is captured by the secondary camera, so there is some difference between the main image and the sub-image. According to the principle of triangulation, the depth information of the same object in the main image and the sub-image, namely the distance between the object and the plane where the primary and secondary cameras are located, can be calculated.
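The triangulation relation mentioned above reduces to depth = focal length × baseline / disparity; the sketch below illustrates it with made-up numbers.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulation: depth = focal length * baseline / disparity.

    focal_px: focal length in pixels.
    baseline_m: distance between the primary and secondary cameras.
    disparity_px: horizontal shift of the same object between the
                  main image and the sub-image.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 1000 px focal length, 2 cm baseline,
# 10 px disparity -> the object is 2 m from the camera plane.
d = depth_from_disparity(1000.0, 0.02, 10.0)
```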
In other embodiments, the foreground region and the background region of the preview image may also be determined by means of image recognition. Specifically, the prediction can be performed through a convolutional neural network (CNN), where the CNN is a neural network model developed from the conventional multilayer neural network for image classification and recognition; compared with the conventional multilayer neural network, the CNN introduces convolution and pooling operations. In the embodiment of the application, the neural network can be trained first, and the preview image can be predicted after the training is finished. That is, the step of performing the region segmentation processing on the preview image to obtain the plurality of scene regions of the preview image includes:
inputting the preview image into a preset neural network model for region segmentation prediction to obtain a prediction result, wherein the prediction result comprises prediction information of a foreground region and prediction information of a background region in the preview image;
and determining the foreground region and the background region of the preview image according to the prediction result.
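A minimal sketch of turning the prediction result into the two regions; the probability map below stands in for the output of the trained network, whose architecture is not specified in the source, and the 0.5 threshold is an assumption.

```python
import numpy as np

def split_regions(prob_map, threshold=0.5):
    """Turn a per-pixel foreground-probability map (the network's
    prediction result) into foreground/background masks."""
    fg = prob_map >= threshold
    return fg, ~fg

# Stand-in prediction: high probability at the centre (the subject),
# low probability at the edges (the background).
pred = np.array([[0.1, 0.2, 0.1],
                 [0.2, 0.9, 0.2],
                 [0.1, 0.2, 0.1]])
fg, bg = split_regions(pred)
```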
Step 103, respectively determining exposure combinations corresponding to a plurality of scene areas, wherein the exposure combination of each scene area includes a plurality of exposure parameters.
In an embodiment, after obtaining a plurality of scene areas in the preview image, the exposure combinations corresponding to the plurality of scene areas are respectively determined, specifically, the corresponding exposure combinations may be determined according to the brightness of the plurality of scene areas, and the exposure combinations corresponding to the plurality of scene areas may also be determined according to the light ratio. For example, after obtaining the plurality of scene areas, since the respective areas are independent from each other, the light ratio statistics may be performed for each scene area, so as to obtain the exposure combination corresponding to the area. Wherein the exposure combination of each scene area comprises a plurality of exposure parameters.
As shown in fig. 2, fig. 2 is a schematic diagram of exposure combinations corresponding to different scene areas according to an embodiment of the present application. The preview image comprises a scene area 1 and a scene area 2, the scene area 1 corresponds to an exposure combination 1, the scene area 2 corresponds to an exposure combination 2, the exposure combination 1 comprises three exposure parameters of exposure 1, exposure 2 and exposure 3, and the exposure combination 2 comprises three exposure parameters of exposure 4, exposure 5 and exposure 6. The exposure parameter includes an exposure value (EV value) or an exposure duration.
The exposure combinations corresponding to different scene areas may be the same or different. For example, if the light conditions of different scene areas are the same, or different scene areas are at the same depth of field, the conditions of the different scene areas are similar, and the exposure combinations corresponding to these scene areas are the same.
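One plausible reading of the per-region light-ratio statistics, as a hedged sketch: the contrast within a region widens or narrows its EV bracket. The percentile-based light ratio, the threshold of 4, and the spread factor are assumptions for illustration only, not values from the disclosure.

```python
import numpy as np

def exposure_combination(region, base_evs=(-2.0, 0.0, 2.0)):
    """Pick an EV bracket for one scene region from its light ratio.

    region: 2-D array of pixel luminance values for one scene area.
    Returns a tuple of EV values (the region's exposure combination).
    """
    bright = np.percentile(region, 95)
    dark = max(np.percentile(region, 5), 1.0)
    light_ratio = bright / dark            # contrast within the region
    # Heuristic: a strong light ratio gets a wider bracket.
    spread = 1.0 if light_ratio < 4 else 2.0
    return tuple(ev * spread for ev in base_evs)

flat_region = np.full((8, 8), 120.0)        # evenly lit area
contrasty = np.array([[10.0, 240.0]] * 8)   # strong light ratio
```

Since the regions are independent of each other, this statistic can be computed separately per region, yielding a possibly different combination for each.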
And 104, acquiring a plurality of scene images of the shooting scene according to different exposure parameters included in the plurality of exposure combinations, and synthesizing the plurality of scene images to obtain a high dynamic range image of the shooting scene.
As an optional implementation manner, when acquiring a plurality of scene images of a shooting scene according to different exposure parameters, the electronic device may capture the plurality of scene images in a manner in which frames with a preset short exposure time and frames with a preset long exposure time alternate. In other words, of two adjacently exposed scene images, one is a short-exposure image and the other is a long-exposure image. Therefore, the high dynamic range image of the shooting scene can be synthesized by combining the long and short exposures.
For example, the electronic device obtains a short-exposure-duration scene image and a long-exposure-duration scene image, and because the short-exposure-duration scene image retains the features of the brighter region in the shooting scene and the long-exposure-duration scene image retains the features of the darker region in the shooting scene, when synthesizing, the high-dynamic-range image of the shooting scene can be synthesized by using the features of the darker region in the shooting scene retained by the long-exposure-duration scene image and the features of the brighter region in the shooting scene retained by the short-exposure-duration scene image.
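The long/short fusion described above can be sketched as a per-pixel weighted blend; the weight formula and the constants 200 and 55 are illustrative assumptions, not the method claimed in the source.

```python
import numpy as np

def fuse_long_short(short_img, long_img):
    """Blend a short-exposure and a long-exposure frame.

    Bright areas (where the long exposure clips) are taken from the
    short exposure, which retains highlight detail; dark areas are
    taken from the long exposure, which retains shadow detail.
    """
    # Weight in [0, 1]: approaches 1 where the long exposure saturates.
    w = np.clip((long_img.astype(np.float64) - 200.0) / 55.0, 0.0, 1.0)
    return w * short_img + (1.0 - w) * long_img

short = np.array([[50.0, 100.0]])   # keeps the brighter-region features
long_ = np.array([[255.0, 120.0]])  # keeps the darker-region features
hdr = fuse_long_short(short, long_)
```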
As another optional implementation manner, when acquiring a plurality of scene images of a shooting scene according to different exposure parameters, the electronic device may acquire the plurality of scene images according to different exposure values. For example, the electronic device may expose the shooting scene according to a preset overexposure value and a preset underexposure value to obtain two scene images, or expose the shooting scene according to a preset overexposure value, a preset normal exposure value, and a preset underexposure value to obtain three scene images, and so on.
For example, the electronic device may control the camera to expose the shooting scene according to a preset normal exposure value EV0, a preset underexposure value EV-2, and a preset overexposure value EV2, so as to obtain three scene images of the shooting scene, which are a normal exposure image, an overexposure image, and an underexposure image, respectively, and synthesize a high dynamic range image of the shooting scene in a bracketing synthesis manner.
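Each EV step in the bracket corresponds to a doubling or halving of the exposure relative to the normal frame EV0, which the following sketch makes concrete (the base exposure time is illustrative).

```python
def bracket_exposure_times(base_seconds, evs=(-2, 0, 2)):
    """Exposure time for each bracketed frame: each EV step scales
    the exposure by a factor of two relative to EV0."""
    return [base_seconds * (2.0 ** ev) for ev in evs]

# Normal exposure 1/100 s -> EV-2 = 1/400 s, EV+2 = 1/25 s.
times = bracket_exposure_times(0.01)
```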
In one embodiment, the step of acquiring a plurality of scene images of the captured scene according to different exposure parameters included in the plurality of exposure combinations includes:
And acquiring a plurality of scene images of a shooting scene through the first camera and the second camera according to different exposure parameters included in the plurality of exposure combinations.
Referring to fig. 3, in the embodiment of the present application, a first camera and a second camera are disposed on the same side of an electronic device.
For example, when the electronic device obtains a plurality of scene images of a shooting scene according to different exposure parameters, it may use the first camera and the second camera to obtain the scene images with different exposure parameters at the same time. This improves the acquisition efficiency of the scene images and thereby the synthesis efficiency of the high dynamic range image. For example, the electronic device exposes the shooting scene according to the short exposure duration through the first camera and according to the long exposure duration through the second camera, so that two scene images of the shooting scene, namely a short-exposure image and a long-exposure image, can be acquired in one exposure operation.
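The one-exposure-operation idea can be sketched with two concurrent captures; the `capture` function below is a hypothetical stand-in for the real camera API, which the source does not specify.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def capture(camera_id, exposure_s):
    """Hypothetical stand-in for a real per-camera capture call."""
    time.sleep(exposure_s)  # simulate the exposure duration
    return {"camera": camera_id, "exposure": exposure_s}

# One "exposure operation": the first camera takes the short-exposure
# frame while the second camera takes the long-exposure frame.
with ThreadPoolExecutor(max_workers=2) as pool:
    short_f = pool.submit(capture, "first", 0.001)
    long_f = pool.submit(capture, "second", 0.004)
    short_img, long_img = short_f.result(), long_f.result()
```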
In an embodiment, the image processing method provided by the present application may further include:
and carrying out video coding according to the high dynamic range image to obtain a video of a shooting scene.
The electronic equipment can also perform video coding according to the high dynamic range image to obtain the video of the shooting scene, namely when the shooting scene is recorded, the video obtained by recording has the effect of high dynamic range.
It should be noted that, as to what video coding format is used for video coding, the embodiment of the present application is not particularly limited, and a person skilled in the art can select a suitable video coding format according to actual needs, including but not limited to h.264, h.265, MPEG-4, and so on.
In an embodiment, the image processing method provided by the present application further includes:
and before the high dynamic range image is obtained through synthesis, performing down-sampling processing on the plurality of scene images according to the current resolution of the screen.
It will be appreciated by those skilled in the art that the actual resolution of the preview image is usually greater than the resolution of the screen display, and that displaying at a resolution higher than the screen resolution yields no better display effect than displaying at a resolution equal to the screen resolution.
Therefore, before the electronic device synthesizes and obtains the high dynamic range image, the current resolution of the screen is firstly obtained, and then the plurality of scene images are subjected to down-sampling processing according to the current resolution of the screen, so that the resolutions of the plurality of scene images are consistent with the current resolution of the screen, and further the resolution of the synthesized high dynamic range image is consistent with the current resolution of the screen. Thus, the efficiency of combining the high dynamic range images can be improved, and the display effect of the high dynamic range images is not reduced when the high dynamic range images are displayed as preview images.
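As a hedged sketch of the down-sampling step, the nearest-neighbour resize below matches each scene image to an assumed screen resolution; a production ISP would use a proper resampling filter, and all sizes here are illustrative.

```python
import numpy as np

def downsample_to(img, screen_hw):
    """Nearest-neighbour down-sampling of a scene image so its
    resolution matches the current screen resolution."""
    h, w = img.shape[:2]
    sh, sw = screen_hw
    # Map each screen pixel back to a source pixel.
    rows = (np.arange(sh) * h) // sh
    cols = (np.arange(sw) * w) // sw
    return img[rows][:, cols]

full = np.arange(16).reshape(4, 4)   # "sensor" image, 4x4
small = downsample_to(full, (2, 2))  # assumed 2x2 "screen"
```

Because each scene image is reduced before synthesis, the HDR merge operates on fewer pixels, which is where the efficiency gain described above comes from.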
As can be seen from the above, the image processing method provided in this embodiment of the present application may obtain a preview image of a current shooting scene, perform region segmentation processing on the preview image to obtain a plurality of scene regions of the preview image, respectively determine exposure combinations corresponding to the plurality of scene regions, where the exposure combination of each scene region includes a plurality of exposure parameters, obtain a plurality of scene images of the shooting scene according to different exposure parameters included in the plurality of exposure combinations, and synthesize the plurality of scene images to obtain a high dynamic range image of the shooting scene. According to the embodiment of the application, the preview image is divided into the plurality of areas, different exposure combinations can be obtained, then shooting is carried out through the exposure parameters in the exposure combinations, and the high dynamic range effect of image shooting can be effectively improved.
The image processing method of the present application will be further described below on the basis of the method described in the above embodiment. Referring to fig. 4, fig. 4 is another schematic flow chart of an image processing method according to an embodiment of the present application, where the image processing method includes:
in step 201, the electronic device obtains a preview image of a current shooting scene.
In an embodiment, the image processing method may be implemented in a scene of taking a picture on an electronic device. When a user wants to take a picture, the imaging device of the electronic device is started, where the imaging device may be a front camera, a rear camera, a dual camera, and the like. After the imaging device of the electronic device is started, the imaging device enters a photographing preview mode, the photographed object is displayed in a display window of the electronic device, and the picture displayed in the display window at this time is defined as a preview image.
In step 202, the electronic device performs region segmentation processing on the preview image to obtain a plurality of scene regions of the preview image.
In an embodiment, the plurality of scene areas in the preview image may include a foreground area and a background area. In practical applications, a subject to be photographed and a background are often included in a viewing range when a user photographs through an electronic device, wherein the subject to be photographed may be a foreground region, and the background may be a background region.
For example, the depth information of the preview image may be obtained first, and a depth value of a target object is determined in the preview image, where the target object may be the object corresponding to the central point of the preview image or the object corresponding to the focus point (e.g., a position where the user touches the screen), and then the preview image is divided into a foreground region and a background region according to the depth value of the target object.
In other embodiments, the foreground region and the background region of the preview image may also be determined by means of image recognition. Specifically, the prediction can be performed through a convolutional neural network, the neural network can be trained first, and the preview image can be predicted after the training is completed, so that the foreground region and the background region of the preview image are determined.
In step 203, the electronic device determines a first region with a brightness greater than a first preset brightness value and a second region with a brightness less than a second preset brightness value among the scene regions.
In an embodiment, the first preset brightness value is greater than the second preset brightness value. The electronic device may determine the first region and the second region in various ways. For example, after the electronic device acquires the preview image of the shooting scene, it may acquire the brightness of each region in the preview image, determine a region whose brightness is greater than the first preset brightness value as the first region, and determine a region whose brightness is less than the second preset brightness value as the second region. The first preset brightness value and the second preset brightness value may be values preset in the electronic device.
In other embodiments, after the electronic device acquires a preview image of a shooting scene, the user may observe the preview image; through a first input on a region with higher brightness in the preview image, the user may trigger the electronic device to determine the region corresponding to the first input as the first region, and through a second input on a region with lower brightness, trigger the electronic device to determine the region corresponding to the second input as the second region.
Optionally, in the above manners, the first input and the second input may be input manners such as a single-click, a double-click, or a long press on the interface where the preview image is displayed on the screen of the electronic device. This may be determined according to actual use requirements, and is not further limited in the embodiment of the present application.
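The threshold-based determination of the first and second regions can be sketched as two masks over a luminance map; the two preset values below are illustrative, with the first deliberately greater than the second as the embodiment requires.

```python
import numpy as np

def find_bright_dark_regions(luma, first_thresh=200.0, second_thresh=60.0):
    """Masks for the first region (brightness above the first preset
    value) and the second region (brightness below the second preset
    value). Thresholds are illustrative preset values."""
    assert first_thresh > second_thresh
    first = luma > first_thresh    # over-bright: needs less exposure
    second = luma < second_thresh  # over-dark: needs more exposure
    return first, second

luma = np.array([[250.0, 120.0],
                 [30.0, 120.0]])
first, second = find_bright_dark_regions(luma)
```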
In step 204, the electronic device calculates an exposure combination according to the areas of the first region and the second region, wherein the exposure combination of each scene region includes a plurality of exposure parameters.
In an embodiment, after obtaining a plurality of scene regions in the preview image, the exposure combinations corresponding to the plurality of scene regions are respectively determined, and specifically, the corresponding exposure combinations may be determined according to areas of a first region and a second region in each scene region.
The exposure combinations corresponding to different scene areas may be the same or different. For example, if the light conditions of different scene areas are the same, or different scene areas are at the same depth of field, the conditions of the different scene areas are similar, and the exposure combinations corresponding to these scene areas are the same.
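One way the areas of the first and second regions could feed into the exposure combination, as a hedged heuristic (the scaling rule and `max_ev` are assumptions, not the formula claimed in the source): the larger the over-bright area, the stronger the underexposure EV, and the larger the over-dark area, the stronger the overexposure EV.

```python
import numpy as np

def exposure_from_areas(first_mask, second_mask, max_ev=2.0):
    """Illustrative heuristic: scale the under- and over-exposure EVs
    by the fraction of the region that is too bright or too dark."""
    total = first_mask.size
    bright_frac = first_mask.sum() / total   # needs an underexposed frame
    dark_frac = second_mask.sum() / total    # needs an overexposed frame
    return (-max_ev * bright_frac, 0.0, max_ev * dark_frac)

first = np.zeros((4, 4), dtype=bool); first[:2] = True   # top half bright
second = np.zeros((4, 4), dtype=bool); second[3] = True  # bottom row dark
combo = exposure_from_areas(first, second)
```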
In step 205, the electronic device obtains a plurality of scene images of the shooting scene according to different exposure parameters included in the plurality of exposure combinations, and synthesizes the plurality of scene images to obtain a high dynamic range image of the shooting scene.
As an optional implementation manner, when acquiring a plurality of scene images of a shooting scene according to different exposure parameters, the electronic device may capture the plurality of scene images in a manner in which frames with a preset short exposure time and frames with a preset long exposure time alternate. In other words, of two adjacently exposed scene images, one is a short-exposure image and the other is a long-exposure image. Therefore, the high dynamic range image of the shooting scene can be synthesized by combining the long and short exposures.
As another optional implementation manner, when acquiring a plurality of scene images of a shooting scene according to different exposure parameters, the electronic device may acquire the plurality of scene images according to different exposure values. For example, the electronic device may expose the shooting scene according to a preset overexposure value and a preset underexposure value to obtain two scene images, or expose the shooting scene according to a preset overexposure value, a preset normal exposure value, and a preset underexposure value to obtain three scene images, and so on.
And step 206, the electronic equipment displays the high dynamic range image as a target preview image of the shooting scene.
In the embodiment of the application, after the electronic device obtains the high dynamic range image through synthesis, the high dynamic range image obtained through synthesis is displayed as a preview image of a shooting scene.
The high dynamic range image is displayed as the preview image of the shooting scene, so that the user can see the high dynamic range effect of the image obtained by shooting the shooting scene in advance, and the user is helped to better shoot.
As can be seen from the above, in the image processing method provided in this embodiment of the present application, the electronic device may obtain a preview image of a current shooting scene, perform region segmentation processing on the preview image to obtain a plurality of scene regions of the preview image, determine, in the scene regions, a first region having a luminance greater than a first preset luminance value and a second region having a luminance less than a second preset luminance value, calculate an exposure combination according to areas of the first region and the second region, where the exposure combination of each scene region includes a plurality of exposure parameters, obtain a plurality of scene images of the shooting scene according to different exposure parameters included in the plurality of exposure combinations, synthesize the plurality of scene images to obtain a high dynamic range image of the shooting scene, and show the high dynamic range image as a target preview image of the shooting scene. According to the embodiment of the application, the preview image is divided into the plurality of areas, different exposure combinations can be obtained, then shooting is carried out through the exposure parameters in the exposure combinations, and the high dynamic range effect of image shooting can be effectively improved.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure. Wherein the image processing apparatus 30 comprises:
an obtaining module 301, configured to obtain a preview image of a current shooting scene.
In one embodiment, when the user wants to take a picture, the imaging device of the electronic device is activated, where the imaging device may be a front camera, a rear camera, a dual camera, and the like. After the imaging device of the electronic device is started, the imaging device enters a photographing preview mode, the photographed object is displayed in a display window of the electronic device, the picture displayed in the display window at this time is defined as a preview image, and the obtaining module 301 obtains the preview image.
A processing module 302, configured to perform region segmentation processing on the preview image to obtain a plurality of scene regions of the preview image.
In an embodiment, the plurality of scene areas in the preview image may include a foreground area and a background area. In practical applications, a subject to be photographed and a background are often included in a viewing range when a user photographs through an electronic device, wherein the subject to be photographed may be a foreground region, and the background may be a background region.
The processing module 302 may perform region segmentation processing on the preview image in various ways. In an embodiment, the processing module 302 may perform region segmentation through depth information, and in other embodiments, may perform scene region segmentation through an image recognition method.
A determining module 303, configured to determine exposure combinations corresponding to the multiple scene areas, respectively, where an exposure combination of each scene area includes multiple exposure parameters.
In an embodiment, after obtaining a plurality of scene regions in the preview image, the determining module 303 determines exposure combinations corresponding to the plurality of scene regions, specifically, the exposure combinations corresponding to the plurality of scene regions may be determined according to brightness of the plurality of scene regions, and the exposure combinations corresponding to the plurality of scene regions may also be determined according to a light ratio. For example, after obtaining the plurality of scene areas, since the respective areas are independent from each other, the light ratio statistics may be performed for each scene area, so as to obtain the exposure combination corresponding to the area. Wherein the exposure combination of each scene area comprises a plurality of exposure parameters.
A synthesizing module 304, configured to obtain multiple scene images of a shooting scene according to different exposure parameters included in the multiple exposure combinations, and synthesize the multiple scene images to obtain a high dynamic range image of the shooting scene.
When acquiring multiple scene images of a shooting scene according to different exposure parameters, the electronic device may acquire the multiple scene images of the shooting scene in a manner that a preset short exposure time and a preset long exposure time are overlapped, may also acquire the multiple scene images of the shooting scene according to different exposure values, and then synthesizes the multiple scene images by the synthesis module 304 to obtain a high dynamic range image of the shooting scene.
In one embodiment, as shown in fig. 6, the processing module 302 includes:
an obtaining submodule 3021 configured to obtain depth information of the preview image;
a first determining sub-module 3022, configured to determine a depth value of a target object in the preview image according to the depth information;
a processing sub-module 3023, configured to segment the preview image into a foreground region and a background region according to the depth value of the target object.
In one embodiment, with continued reference to fig. 6, the determining module 303 includes:
a second determining submodule 3031 configured to determine, among the scene regions, a first region having a luminance greater than a first preset luminance value and a second region having a luminance less than a second preset luminance value;
a calculation submodule 3032, configured to calculate the exposure combination according to the areas of the first region and the second region.
As can be seen from the above description, the image processing apparatus 30 according to the embodiment of the present application may acquire a preview image of a current shooting scene, perform region segmentation processing on the preview image to obtain a plurality of scene regions of the preview image, respectively determine exposure combinations corresponding to the plurality of scene regions, where the exposure combination of each scene region includes a plurality of exposure parameters, acquire a plurality of scene images of the shooting scene according to different exposure parameters included in the plurality of exposure combinations, and synthesize the plurality of scene images to obtain a high dynamic range image of the shooting scene. According to the embodiment of the application, the preview image is divided into the plurality of areas, different exposure combinations can be obtained, then shooting is carried out through the exposure parameters in the exposure combinations, and the high dynamic range effect of image shooting can be effectively improved.
In the embodiment of the present application, the image processing apparatus and the image processing method in the foregoing embodiment belong to the same concept, and any method provided in the embodiment of the image processing method may be executed on the image processing apparatus, and a specific implementation process thereof is described in detail in the embodiment of the image processing method, and is not described herein again.
The term "module" as used herein may be considered a software object executing on the computing system. The different components, modules, engines, and services described herein may be considered objects implemented on the computing system. The apparatus and method described herein may be implemented in software, but may also be implemented in hardware, and both are within the scope of the present application.
The embodiment of the present application also provides a storage medium, on which a computer program is stored, which, when running on a computer, causes the computer to execute the above-mentioned image processing method.
The embodiment of the application also provides an electronic device, such as a tablet computer, a mobile phone, and the like. The processor in the electronic device loads instructions corresponding to the processes of one or more application programs into the memory, and runs the application programs stored in the memory, thereby realizing various functions as follows:
acquiring a preview image of a current shooting scene;
performing region segmentation processing on the preview image to obtain a plurality of scene regions of the preview image;
respectively determining exposure combinations corresponding to the plurality of scene areas, wherein the exposure combination of each scene area comprises a plurality of exposure parameters;
and acquiring a plurality of scene images of a shooting scene according to different exposure parameters included by the exposure combinations, and synthesizing the scene images to obtain a high dynamic range image of the shooting scene.
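The four steps above can be sketched end-to-end as follows. This is an illustrative assumption of how the final synthesis step might work, not the claimed implementation: after one scene image is captured per exposure parameter, the aligned images are fused with weights that favor well-exposed (mid-tone) pixels.

```python
# Sketch of the synthesis step: pixel-wise weighted fusion of aligned
# single-channel images captured at different exposures.

def weight(v):
    """Triangle (hat) weight: highest for mid-tones, zero at 0 and 255."""
    return min(v, 255 - v)

def fuse_hdr(images):
    """Fuse aligned single-channel images into one high-dynamic-range result."""
    h, w = len(images[0]), len(images[0][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[y][x] for img in images]
            ws = [weight(v) for v in vals]
            total = sum(ws) or 1  # avoid division by zero when all pixels clip
            out[y][x] = sum(w_ * v for w_, v in zip(ws, vals)) / total
    return out

# Toy 2x2 scene images captured with the exposure parameters of one
# exposure combination (hypothetical values).
under  = [[10, 40], [200, 5]]    # short exposure: shadows crushed
normal = [[60, 120], [250, 30]]  # mid exposure
over   = [[140, 230], [255, 90]] # long exposure: highlights clipped
hdr = fuse_hdr([under, normal, over])
```

Production implementations would additionally align the scene images and tone-map the fused result; the hat-shaped weighting shown here is one common choice for exposure fusion.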
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 7, the electronic device 400 includes a processor 401 and a memory 402. The processor 401 is electrically connected to the memory 402.
The processor 401 is a control center of the electronic device 400, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device 400 by running or loading a computer program stored in the memory 402, calling data stored in the memory 402, and processing the data, thereby performing overall monitoring of the electronic device 400.
The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by running the computer programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, a computer program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to use of the electronic device, and the like. Further, the memory 402 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 with access to the memory 402.
In this embodiment, the processor 401 in the electronic device 400 loads instructions corresponding to one or more processes of the computer program into the memory 402 according to the following steps, and the processor 401 runs the computer program stored in the memory 402, so as to implement various functions, as follows:
acquiring a preview image of a current shooting scene;
performing region segmentation processing on the preview image to obtain a plurality of scene regions of the preview image;
respectively determining exposure combinations corresponding to the plurality of scene areas, wherein the exposure combination of each scene area comprises a plurality of exposure parameters;
and acquiring a plurality of scene images of a shooting scene according to different exposure parameters included by the exposure combinations, and synthesizing the scene images to obtain a high dynamic range image of the shooting scene.
Referring to fig. 8, in some embodiments, the electronic device 400 may further include: a display 403, radio frequency circuitry 404, audio circuitry 405, and a power supply 406. The display 403, the rf circuit 404, the audio circuit 405, and the power source 406 are electrically connected to the processor 401.
The display 403 may be used to display information entered by or provided to the user as well as various graphical user interfaces, which may be made up of graphics, text, icons, video, and any combination thereof. The display 403 may include a display panel, and in some embodiments, the display panel may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The radio frequency circuit 404 may be used for transceiving radio frequency signals, so as to establish wireless communication with a network device or other electronic devices and to transmit and receive signals with the network device or the other electronic devices. In general, the radio frequency circuit 404 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like.
The audio circuit 405 may be used to provide an audio interface between the user and the electronic device through a speaker and a microphone. The audio circuit 405 may convert received audio data into an electrical signal and transmit the electrical signal to the speaker, which converts it into a sound signal for output.
The power supply 406 may be used to power the various components of the electronic device 400. In some embodiments, the power supply 406 may be logically coupled to the processor 401 via a power management system, such that functions of managing charging, discharging, and power consumption are performed via the power management system. The power supply 406 may also include one or more of a DC or AC power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and other such components.
Although not shown in fig. 8, the electronic device 400 may further include a camera, a bluetooth module, and the like, which are not described in detail herein.
In the embodiment of the present application, the storage medium may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a Random Access Memory (RAM), or the like.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It should be noted that, for the image processing method in the embodiment of the present application, it can be understood by a person skilled in the art that all or part of the process of implementing the image processing method can be completed by controlling the relevant hardware through a computer program. The computer program can be stored in a computer-readable storage medium, such as a memory of an electronic device, and executed by at least one processor in the electronic device, and the execution process may include, for example, the process of the embodiment of the image processing method. The storage medium may be a magnetic disk, an optical disk, a read-only memory, a random access memory, or the like.
In the image processing apparatus according to the embodiment of the present application, each functional module may be integrated into one processing chip, each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium, such as a read-only memory, a magnetic or optical disk, or the like.
The foregoing detailed description has provided an image processing method, an image processing apparatus, a storage medium, and an electronic device according to embodiments of the present application, and specific examples are applied herein to explain the principles and implementations of the present application, and the descriptions of the foregoing embodiments are only used to help understand the method and the core ideas of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.