
CN113487497B - Image processing method, device and electronic device

Info

Publication number
CN113487497B
Authority
CN
China
Prior art keywords
image
sub
skin
correction
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110680600.7A
Other languages
Chinese (zh)
Other versions
CN113487497A (en)
Inventor
杨丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202110680600.7A priority Critical patent/CN113487497B/en
Publication of CN113487497A publication Critical patent/CN113487497A/en
Priority to PCT/CN2022/099439 priority patent/WO2022262848A1/en
Application granted granted Critical
Publication of CN113487497B publication Critical patent/CN113487497B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T5/94 Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30088 Skin; Dermal

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The present application discloses an image processing method, device and electronic device, which belongs to the technical field of image processing. The image processing method includes: identifying whether a skin region sub-image in a target image has color cast; if the skin region sub-image has color cast, generating a corrected target image according to the skin region sub-image, and performing hue mixing processing on the corrected target image and the skin region sub-image to obtain a first corrected image; and performing fusion processing on the first corrected image and the skin region sub-image to obtain a second corrected image.

Description

Image processing method, device and electronic device
Technical Field
The application belongs to the technical field of image processing, and particularly relates to an image processing method, an image processing device and electronic equipment.
Background
With the development of image acquisition and processing technologies, electronic devices with image acquisition functions such as smart phones and cameras are widely used.
In the process of a user triggering an electronic device to acquire an image, when the user is in harsh ambient light, the ambient light has a great influence on the user's face, and the transition of the face color (namely skin color) in the photo is unnatural. In such cases, the user generally has to look for a suitable light source before taking a picture. Another solution is for the electronic device to take an additional fill-light photograph. Both schemes give users a poor photographing experience.
In order to improve the photographing experience of a user in harsh ambient light, in the related art a plurality of color temperature and tone parameters and skin color effect parameters are usually preset for the electronic device, so as to adjust the image acquisition result under the ambient light when the user is in harsh ambient light, thereby correcting distortion of the user or a poor skin color effect in the photo. However, since ambient light is complex and variable, it is difficult for the above processing methods in the related art to adapt the preset color temperature and tone parameters and skin color effect parameters to the complex and changing ambient light, and thus difficult to accurately correct the skin color of the user.
In summary, how to accurately and reasonably correct the skin color of the user in the image acquisition process becomes a technical problem to be solved urgently by those skilled in the art.
Disclosure of Invention
The embodiment of the application aims to provide an image processing method, an image processing device and electronic equipment, which can solve the problem of accurately and reasonably correcting the skin color of a user during image acquisition.
In a first aspect, an embodiment of the present application provides an image processing method, including:
identifying whether the sub-image of the skin area in the target image is color cast;
Under the condition that the skin area sub-image is color cast, generating a correction target image according to the skin area sub-image, and carrying out hue mixing treatment on the correction target image and the skin area sub-image to obtain a first correction image;
And carrying out fusion processing on the first correction image and the skin region sub-image to obtain a second correction image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
The identifying module is used for identifying whether the sub-image of the skin area in the target image is color cast;
the correction module is used for generating a correction target image according to the skin region sub-image under the condition that the recognition module recognizes that the skin region sub-image is out of color, and carrying out hue mixing processing on the correction target image and the skin region sub-image to obtain a first correction image;
And carrying out fusion processing on the first correction image and the skin region sub-image to obtain a second correction image.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a program or instruction stored on the memory and executable on the processor, the program or instruction implementing the steps of the method according to the first aspect when executed by the processor.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In the embodiment of the application, by identifying whether the sub-image of the skin area in the target image is color cast, the color cast condition in the target image and the area needing color cast correction (namely the area occupied by the skin of the user in the image) can be obtained. Further, when the skin region sub-image is colored, a correction target image is generated from the skin region sub-image, and the correction target image and the skin region sub-image are subjected to hue mixing processing, so that a first correction image is obtained. And carrying out fusion processing on the first correction image and the skin region sub-image to obtain a second correction image. Therefore, the color cast correction which is suitable for the skin region sub-image can be carried out according to the color cast recognition result aiming at the skin region sub-image in the target image, and the color cast correction operation is carried out aiming at the skin region sub-image, so that the color change of the images of other regions in the target image can not be caused, the influence of the ambient light on the face of the user in the image acquisition process can be avoided, and the true and natural skin color of the user in the image acquisition result is ensured.
Drawings
FIG. 1 is a flow chart of steps of an image processing method of an embodiment of the present application;
Fig. 2 is a schematic structural view of an image processing apparatus according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an electronic device according to an embodiment of the present application;
fig. 4 is a second schematic structural diagram of an electronic device according to an embodiment of the application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms first, second and the like in the description and in the claims, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged, as appropriate, such that embodiments of the present application may be implemented in sequences other than those illustrated or described herein, and that the objects identified by "first," "second," etc. are generally of a type, and are not limited to the number of objects, such as the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/", generally means that the associated object is an "or" relationship.
The image processing method, the image processing device and the electronic equipment provided by the embodiment of the application are described in detail through specific embodiments and application scenes thereof with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present application provides an image processing method, including the following S101 to S103:
S101, the image processing device identifies whether the skin area sub-image in the target image is color cast.
In the embodiment of the application, the target image is an image acquired by an image acquisition device or an electronic device provided with the image acquisition device, or is obtained by receiving images sent by other electronic devices through the electronic device provided with the image acquisition device. The target image may be an image acquired only for a person or a face, or may be an image including both a person or a face and other backgrounds (e.g., objects or scenery).
It will be appreciated that the target image is an image that needs or may need to be corrected. In other words, the target image is an image that needs to be subjected to color cast correction processing due to the presence or possibility of color cast problem.
In the embodiment of the application, the number of the target images can be one or more. For example, when continuous image acquisition (i.e., continuous shooting) is performed on a target object, the number of target images is plural. In the above-described case, the image processing method provided by the embodiment of the present application may be executed separately for each target image.
It will be appreciated that the skin region sub-image is the region of the target image where the person's skin is located.
Optionally, in the embodiment of the present application, the skin area sub-image may be an area where the face of the person in the target image is located, an area where the face and the neck of the person in the target image are located, or an area where all the skin (for example, face, neck, hand, arm, etc.) of the person in the target image is located.
Illustratively, in the case where a person is included in an image, the face area is generally relatively large in proportion to the image and is concentrated. Therefore, an image of the region where the face of the person is located in the target image can be selected as the skin region sub-image.
It will be appreciated that in the target image, the skin region sub-image has different attribute characteristics in terms of color attributes than other regions. Therefore, it is possible to distinguish the skin region sub-image from the non-skin region sub-image, and thereby identify the skin region sub-image in the target image and identify whether or not color cast occurs.
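The embodiment does not specify how the skin region sub-image is segmented. As one common heuristic from the literature (not part of this disclosure), a rough skin mask can be obtained by thresholding the Cr/Cb channels of the YCrCb color space; the threshold values below are typical published values, not values taken from this application:

```python
import cv2
import numpy as np

def skin_region_mask(image_bgr):
    """Rough (non-patent) skin segmentation: threshold Cr/Cb in YCrCb space.
    The bounds are common literature values, stated here as assumptions."""
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    return cv2.inRange(ycrcb, lower, upper)   # 255 where the pixel looks like skin

# Usage sketch: keep only the skin-region sub-image of a target image.
# target = cv2.imread("portrait.jpg")        # hypothetical file name
# skin_only = cv2.bitwise_and(target, target, mask=skin_region_mask(target))
```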
It will be appreciated that color cast refers to the situation where the image acquisition deviates from the normal human skin color. This may be generally affected by ambient light. For example, color cast problems may occur in situations where the ambient light is too bright, too dark, the illumination source is colored abnormally, and the like.
It should be noted that, the above steps only determine whether the sub-image of the skin area has color cast. Specifically, in the case that the target image includes a skin region sub-image and a non-skin region sub-image (i.e., a background), the image processing method according to the embodiment of the present application first identifies the skin region sub-image, and further determines whether the identified skin region sub-image is color-shifted.
It can be understood that, when the recognition result indicates that the sub-image of the skin area has color cast, step S102 is performed; if the identification and judgment result indicates that the color cast does not occur in the skin area sub-image, step S102 is not required, that is, color cast correction is not required for the skin area sub-image. In this case, the target image may be directly output (e.g., displayed or transmitted) or stored.
It will be appreciated that the skin region sub-image has color attribute characteristics different from those of other regions. It is thus possible to identify and determine whether a color cast occurs based on the logical relationship between the optical three-primary-color pixel values (hereinafter referred to as RGB pixel values) of the skin region sub-image.
Illustratively, taking the skin tone attribute of the yellow race as an example: given that in a normal portrait the skin-tone RGB pixel values satisfy R value > G value > B value, it can be determined that the skin region sub-image has a color cast when its RGB pixel values do not satisfy this relationship.
Optionally, in the embodiment of the present application, S101 includes the following S101a and S101b:
S101a, the image processing device respectively acquires the average value of red, green and blue pixels of red, green and blue color channels in the skin region sub-image.
It can be understood that the above-mentioned red, green and blue pixel average values are respectively the pixel average value of the red color channel, the pixel average value of the green color channel and the pixel average value of the blue color channel in the skin region sub-image in sequence.
S101b, the image processing device identifies whether the sub-image of the skin area is color cast according to the logic relation among the red, green and blue pixel mean values.
It will be appreciated that the logical relationship followed by the red, green and blue pixel means may differ for skin region sub-images of different ethnicities (e.g., yellow, black, and white ethnic groups).
Illustratively, taking the skin tone attribute of the yellow race as an example, S101b includes the following S101b1 to S101b4:
S101b1, the image processing device judges that the skin region sub-image has a color cast and is yellowish when the first difference is greater than or equal to a first threshold.
The first difference is the difference between the red pixel mean (Rm for short) and the blue pixel mean (Bm for short).
It may be understood that the value of the first threshold may be specifically determined according to actual use requirements, which is not limited by the embodiment of the present application.
In Example 1, the first threshold may have a value of 90. Accordingly, when Rm - Bm >= 90, it can be judged that the skin region sub-image has a color cast and is yellowish.
S101b2, the image processing device judges that the skin region sub-image has a color cast and is greenish when the second difference is less than or equal to a second threshold.
The second difference is the difference between the red pixel mean (Rm) and the green pixel mean (Gm for short).
It can be understood that the value of the second threshold may be specifically determined according to actual use requirements, which is not limited by the embodiment of the present application.
In Example 2, the second threshold may have a value of 25. Accordingly, when Rm - Gm <= 25, it can be judged that the skin region sub-image has a color cast and is greenish.
S101b3, the image processing device judges that the skin region sub-image has a color cast and is reddish when the sum of the first difference and the second difference is greater than or equal to a third threshold.
As described above, the first difference is the difference between the red pixel mean (Rm) and the blue pixel mean (Bm), and the second difference is the difference between the red pixel mean (Rm) and the green pixel mean (Gm).
It can be understood that the value of the third threshold may be specifically determined according to actual use requirements, which is not limited by the embodiment of the present application.
In Example 3, the third threshold may have a value of 150. Accordingly, when 2×Rm - Gm - Bm >= 150, i.e., (Rm - Bm) + (Rm - Gm) >= 150, it can be judged that the skin region sub-image has a color cast and is reddish.
S101b4, when the sum of the first minimum term and the second minimum term is smaller than a fourth threshold, the image processing device judges that the skin region sub-image has a color cast and is bluish.
Here, the first minimum term is the smaller of the first difference and zero, i.e., min(Rm - Bm, 0); the second minimum term is the smaller of the difference between the green pixel mean and the blue pixel mean and zero, i.e., min(Gm - Bm, 0).
It can be understood that the value of the fourth threshold may be specifically determined according to actual use requirements, which is not limited by the embodiment of the present application.
In Example 4, the fourth threshold may have a value of 0. Accordingly, when min(Rm - Bm, 0) + min(Gm - Bm, 0) < 0, it can be judged that the skin region sub-image has a color cast and is bluish.
It will be appreciated that when none of the cases in S101b1 to S101b4 above occurs (i.e., the skin region sub-image is not yellowish, greenish, reddish, or bluish), it may be determined that the skin region sub-image has no color cast. Conversely, when the skin region sub-image is one or more of yellowish, greenish, reddish, and bluish, a color cast of the skin region sub-image is indicated.
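The detection described in S101a to S101b4 can be summarized roughly as follows, assuming the example thresholds 90, 25, 150 and 0 given above; the function name and array layout are illustrative, not part of the embodiment:

```python
import numpy as np

def detect_color_cast(skin_rgb, t1=90, t2=25, t3=150, t4=0):
    """Sketch of S101a/S101b: classify the color cast of a skin-region
    sub-image from its per-channel pixel means. `skin_rgb` is an (H, W, 3)
    array covering only the skin region; thresholds follow Examples 1-4."""
    r_m, g_m, b_m = [skin_rgb[..., c].astype(np.float64).mean() for c in range(3)]

    casts = []
    if r_m - b_m >= t1:                                 # S101b1: yellowish
        casts.append("yellow")
    if r_m - g_m <= t2:                                 # S101b2: greenish
        casts.append("green")
    if (r_m - b_m) + (r_m - g_m) >= t3:                 # S101b3: reddish
        casts.append("red")
    if min(r_m - b_m, 0) + min(g_m - b_m, 0) < t4:      # S101b4: bluish
        casts.append("blue")

    return casts  # an empty list means no color cast was detected
```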
S102, in the case where the skin region sub-image has a color cast, the image processing device generates a correction target image according to the skin region sub-image, and performs hue mixing processing on the correction target image and the skin region sub-image to obtain a first correction image.
It will be appreciated that the correction target image has color parameters that are those of the ideal skin image.
In this embodiment, by performing the hue mixture process on the correction target image and the skin region sub-image, the correction target image may be used to perform the first correction on the skin region sub-image (i.e., obtain the first correction image).
S103, fusing the first correction image with the skin region sub-image to obtain a second correction image.
It should be noted that S102 to S103 described above may perform color cast correction only for the skin region sub-image. In other words, in the case where the target image includes a skin region sub-image and a non-skin region sub-image, the image processing method according to the embodiment of the present application first identifies whether the skin region sub-image has a color cast through S101, then obtains a first correction image through S102, and finally turns the first correction image into a second correction image through S103. The image processing method according to the embodiment of the present application does not process the non-skin region sub-image, keeping its color attributes (such as hue and saturation) unchanged.
It will be appreciated that the skin region sub-image has color attribute characteristics different from those of other regions. Therefore, color cast correction can be performed on the skin region sub-image according to the logical relationship between its optical three-primary-color pixel values (RGB pixel values).
Taking the skin color attribute of the yellow race as an example, the RGB three-channel pixel values of a plurality of normal-skin-color samples can be collected under different brightness conditions, the mapping relationship between one color channel and the other two color channels in a skin image with normal skin color can be obtained by first-order polynomial (linear) fitting, and color cast correction can then be carried out on the skin region sub-image according to this mapping relationship.
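As a rough illustration of such a fit, a first-order polynomial fit over per-sample channel means might look like the sketch below; the sample values are placeholders, not measured data from this disclosure:

```python
import numpy as np

# Hypothetical per-sample channel means collected from normally exposed skin
# samples under different brightness levels (placeholder values).
r_means = np.array([120.0, 150.0, 180.0, 210.0])
g_means = np.array([ 62.0,  96.0, 130.0, 163.0])
b_means = np.array([ 50.0,  79.0, 108.0, 137.0])

# First-order polynomial (linear) fit: G = a1*R - b1 and B = a2*R - b2.
a1, c1 = np.polyfit(r_means, g_means, deg=1)   # G ≈ a1*R + c1, so b1 = -c1
a2, c2 = np.polyfit(r_means, b_means, deg=1)   # B ≈ a2*R + c2, so b2 = -c2
print(f"G1 = {a1:.3f} x R1 - {-c1:.2f}")
print(f"B1 = {a2:.3f} x R1 - {-c2:.2f}")
```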
Optionally, in the embodiment of the present application, the generating the correction target image according to the skin region sub-image in S102 includes the following S102a to S102b:
s102a, the image processing device obtains a standard pixel mean value of a second color channel and a standard pixel mean value of a third color channel in the skin region sub-image according to a first preset mapping relation through the pixel mean value of the first color channel in the skin region sub-image.
It can be understood that the first preset mapping relationship is a mapping relationship obtained by the sample collection and linear fitting manner described above. The first preset mapping relationship includes a mapping relationship (referred to as mapping relationship A) between the pixel mean of the first color channel and the standard pixel mean of the second color channel, and further includes a mapping relationship (referred to as mapping relationship B) between the pixel mean of the first color channel and the standard pixel mean of the third color channel.
It can be understood that mapping relationship A takes the pixel mean of the first color channel as the independent variable and the standard pixel mean of the second color channel as the dependent variable. Mapping relationship B takes the pixel mean of the first color channel as the independent variable and the standard pixel mean of the third color channel as the dependent variable.
It should be noted that, in the embodiment of the present application, the three color channels including the first color channel, the second color channel, and the third color channel refer to three color channels including a red color channel, a green color channel, and a blue color channel. The color channels of the first color channel, the second color channel and the third color channel can be specifically determined according to actual use requirements, and the embodiment of the application is not limited.
Optionally, in an embodiment of the present application, the first color channel is a red color channel, the second color channel is a green color channel, and the third color channel is a blue color channel.
It should be noted that, although the first color channel, the second color channel, and the third color channel may be any color channel, for the skin region sub-image, the pixel average value of the red color channel is greater than the pixel average value of the green color channel and the pixel average value of the blue color channel, so that if the first color channel is the red color channel, the standard pixel average values of the other two color channels can be obtained more accurately.
Optionally, in an embodiment of the present application, the first preset mapping is:
G1=a1×R1-b1
B1=a2×R1-b2
wherein R1 is the pixel mean of the first color channel, G1 is the standard pixel mean of the second color channel, B1 is the standard pixel mean of the third color channel, and a1, a2, b1 and b2 are constants.
It will be appreciated that the specific values of a1, a2, b1 and b2 may be determined according to actual requirements, and embodiments of the present application are not limited in this respect.
Illustratively, the first preset mapping relationship may be:
G1=1.118×R1-71.57;
B1=0.959×R1-64.38.
S102b, the image processing device randomly perturbs the standard pixel mean value of the second color channel and the standard pixel mean value of the third color channel to generate the correction target image.
It will be appreciated that the size (e.g., shape, resolution) of the corrected target image is consistent with the skin region sub-image.
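Putting S102a and S102b together, a minimal sketch of generating the correction target image might look as follows, assuming the example mapping G1=1.118×R1-71.57 and B1=0.959×R1-64.38; the perturbation amplitude and the choice of filling the target with a single flat color are assumptions, not statements of the embodiment:

```python
import numpy as np

def make_correction_target(skin_rgb, a1=1.118, b1=71.57, a2=0.959, b2=64.38,
                           jitter=2.0, rng=None):
    """Sketch of S102a/S102b. Returns an image the same size as skin_rgb."""
    if rng is None:
        rng = np.random.default_rng()

    # S102a: derive the standard means of the other two channels from the
    # pixel mean of the first (red) color channel via the preset mapping.
    r_mean = skin_rgb[..., 0].astype(np.float64).mean()
    g_std = a1 * r_mean - b1
    b_std = a2 * r_mean - b2

    # S102b: randomly perturb the two standard means.
    g_std += rng.uniform(-jitter, jitter)
    b_std += rng.uniform(-jitter, jitter)

    # Correction target image: same shape as the skin-region sub-image,
    # filled here with the resulting (R, G, B) color (a simplifying assumption).
    target = np.empty_like(skin_rgb, dtype=np.float64)
    target[..., 0] = r_mean
    target[..., 1] = g_std
    target[..., 2] = b_std
    return np.clip(target, 0, 255).astype(np.uint8)
```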
In the embodiment of the application, the first correction image is an image obtained by mixing the sub-image of the skin area and the correction target image in hue.
In the embodiment of the present application, the first corrected image is a hue-blended image obtained by combining the hue of the correction target image with the brightness and saturation values of the original image (i.e., the skin region sub-image).
Thus, after the correction target image is obtained, the mixing processing can be performed on the correction target image and the skin region sub-image, so that the skin region sub-image can be corrected appropriately according to the color cast condition of the skin region sub-image.
Optionally, in the embodiment of the present application, the specific manner of the hue mixing process is: the hue value of the correction target image and the brightness value of the skin region sub-image are cyclically iterated, so as to perform hue mixing processing on the correction target image and the skin region sub-image.
It will be appreciated that when the skin area sub-image and the correction target image are subjected to a blending process, the change in hue will result in a change in brightness, which will result in a change in hue and saturation.
Thus, by using the hue of the correction target image and maintaining the brightness of the skin region sub-image, through the loop iteration, the mixing process can be performed, and thus the purpose of skin color correction can be achieved.
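A rough sketch of this hue mixing step is given below. It assumes an HLS color space and a fixed small number of iterations in place of an explicit convergence test; both choices are assumptions that go beyond what is stated above:

```python
import cv2

def hue_blend(skin_rgb, target_rgb, iterations=3):
    """Sketch of the hue mixing step: take the hue of the correction target
    image and keep the brightness of the skin-region sub-image, iterating a
    fixed number of times (assumed) so the remaining channels settle."""
    target_hls = cv2.cvtColor(target_rgb, cv2.COLOR_RGB2HLS)
    orig_hls = cv2.cvtColor(skin_rgb, cv2.COLOR_RGB2HLS)
    blended = skin_rgb.copy()

    for _ in range(iterations):
        hls = cv2.cvtColor(blended, cv2.COLOR_RGB2HLS)
        hls[..., 0] = target_hls[..., 0]   # hue from the correction target image
        hls[..., 1] = orig_hls[..., 1]     # lightness kept from the original sub-image
        blended = cv2.cvtColor(hls, cv2.COLOR_HLS2RGB)
    return blended
```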
Optionally, in the embodiment of the present application, S103 includes the following S103a and S103b:
S103a, the image processing device acquires a first weight of the skin region sub-image and a second weight of the first correction image.
Wherein the sum of the first weight and the second weight is 1.
It will be appreciated that the purpose of assigning the first weight and the second weight is to fuse the skin region sub-image and the first correction image according to the first weight and the second weight.
S103b, the image processing device performs fusion processing on the skin region sub-image and the first correction image according to the first weight and the second weight.
It will be appreciated that after the fusion process, a second corrected image may be obtained, the second corrected image being the image with the color cast correction completed.
Therefore, the problem that the skin color image of the area with heavy color cast is overcorrected after the hue mixture treatment can be avoided.
Illustratively, the first corrected image and the skin region sub-image may be subjected to fusion processing as in Example 5 below. Here, for convenience of description, in Example 5 the skin region sub-image is abbreviated as Sorg, the first corrected image as Shue, and the second corrected image as Sfusion.
Example 5, the fusion process was performed using the formula:
Sfusion=α×Sorg+(1-α)×Shue
wherein α is the first weight and 1 − α is the second weight.
Optionally, in an embodiment of the present application, the first weight is determined according to a distance difference matrix between the skin region sub-image and the first correction image.
Illustratively, the distance difference matrix may be obtained as in Example 6 below; the first weight is determined according to the distance difference matrix, the second weight is determined according to the first weight, and the skin region sub-image and the first correction image are fused according to the first weight and the second weight. Here, for convenience of description, in Example 6 the skin region sub-image is again abbreviated as Sorg, the first corrected image as Shue, and the second corrected image as Sfusion.
In Example 6, to obtain the distance difference matrix, normalization processing may be performed on the pixel-wise difference between Sorg and Shue to obtain the distance difference matrix Wd, where the normalization is performed by dividing the pixel values by 255. The larger the pixel difference, the more serious the color cast of Sorg at that location. The fusion proportion for heavily color-cast areas can be reduced through Wd, so that over-correction of the skin color in those areas is avoided. The fusion process is performed using the formulas:
Sfusion=αd×Sorg+(1-αd)×Shue;
Wd=Normal(Sorg-Shue);
αd=α×Wd;
wherein αd is the first weight and 1 − αd is the second weight.
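A minimal sketch of the fusion in Example 6 follows; taking the absolute value of the pixel difference before normalization and the particular base weight α are assumptions:

```python
import numpy as np

def fuse(skin_rgb, hue_corrected_rgb, alpha=0.5):
    """Sketch of Example 6: weight the fusion by a per-pixel distance
    difference matrix so heavily color-cast areas are not over-corrected."""
    s_org = skin_rgb.astype(np.float64)
    s_hue = hue_corrected_rgb.astype(np.float64)

    # Distance difference matrix: normalized per-pixel difference (divide by 255).
    w_d = np.abs(s_org - s_hue) / 255.0
    alpha_d = alpha * w_d                                   # per-pixel first weight αd

    s_fusion = alpha_d * s_org + (1.0 - alpha_d) * s_hue    # Sfusion = αd*Sorg + (1-αd)*Shue
    return np.clip(s_fusion, 0, 255).astype(np.uint8)
```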
In the embodiment of the application, by identifying whether the sub-image of the skin area in the target image is color cast, the color cast condition in the target image and the area needing color cast correction (namely the area occupied by the skin of the user in the image) can be obtained. Further, when the skin region sub-image is colored, a correction target image is generated from the skin region sub-image, and the correction target image and the skin region sub-image are subjected to hue mixing processing, so that a first correction image is obtained. And carrying out fusion processing on the first correction image and the skin region sub-image to obtain a second correction image. Therefore, the color cast correction which is suitable for the skin region sub-image can be carried out according to the color cast recognition result aiming at the skin region sub-image in the target image, and the color cast correction operation is carried out aiming at the skin region sub-image, so that the color change of the images of other regions in the target image can not be caused, the influence of the ambient light on the face of the user in the image acquisition process can be avoided, and the true and natural skin color of the user in the image acquisition result is ensured.
Optionally, in an embodiment of the present application, in the case where the skin region sub-image has a color cast, the color cast scene includes at least one of the following: a yellowish scene, a greenish scene, a reddish scene, and a bluish scene.
Optionally, in the embodiment of the present application, the color cast scene may be numbered, and according to the number corresponding to the color cast scene, a color cast correction mode corresponding to the number is adopted to correct the color cast of the sub-image of the skin area.
Alternatively, in the embodiment of the present application, in the case where the skin region sub-image has a color cast, S102 to S103 described above may be performed one or more times.
For example, S102 to S103 may be performed a first time, high-exposure suppression and color cast pre-correction may then be performed for the color cast condition, and S102 to S103 may be performed a second time so that the image subjected to high-exposure suppression and color cast pre-correction is further corrected.
For example, in order to achieve high exposure suppression and color cast pre-correction, a local color cast region in the skin region sub-image may be identified, and in the case where the local color cast region is identified, the local color cast region is subjected to color cast pre-correction. In the case where S102 to S103 are performed a plurality of times, parameters such as a predetermined mapping relationship used each time the color cast correction steps of S102 to S103 are performed may be the same or different.
It should be noted that, in the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, or a control module for executing the image processing method in the image processing apparatus. In the embodiment of the present application, an image processing apparatus is described by taking an example of an image processing method performed by the image processing apparatus.
As shown in fig. 2, an embodiment of the present application further provides an image processing apparatus 200, including:
The identifying module 210 is configured to identify whether the sub-image of the skin area in the target image is color-shifted.
The correction module 220 is configured to generate a correction target image according to the skin region sub-image when the skin region sub-image identified by the identification module 210 has a color cast, and perform hue mixing processing on the correction target image and the skin region sub-image to obtain a first correction image; and perform fusion processing on the first correction image and the skin region sub-image to obtain a second correction image.
In the embodiment of the present application, the image processing apparatus 200 can learn the color cast condition in the target image and the area needing color cast correction (i.e. the area occupied by the skin of the user in the image) by identifying whether the sub-image of the skin area in the target image is color cast. Further, when the skin region sub-image is colored, a correction target image is generated from the skin region sub-image, and the correction target image and the skin region sub-image are subjected to hue mixing processing, so that a first correction image is obtained. And carrying out fusion processing on the first correction image and the skin region sub-image to obtain a second correction image. Therefore, the image processing apparatus 200 can perform color cast correction corresponding to the skin region sub-image according to the color cast recognition result of the skin region sub-image in the target image, and the color cast correction operation is performed on the skin region sub-image, which does not cause color change of the images of other regions in the target image, so that influence of ambient light on the face of the user during the image acquisition process can be avoided, and true and natural skin color of the user in the image acquisition result is ensured.
Optionally, in the embodiment of the present application, the correction module 220 is specifically configured to:
According to a first preset mapping relation, obtaining a standard pixel mean value of a second color channel and a standard pixel mean value of a third color channel in the skin region sub-image through the pixel mean value of the first color channel in the skin region sub-image;
and randomly disturbing the standard pixel mean value of the second color channel and the standard pixel mean value of the third color channel to generate a correction target image.
Optionally, in the embodiment of the present application, the correction module 220 is specifically configured to:
and carrying out loop iteration on the hue value of the correction target image and the brightness value of the skin region sub-image, and carrying out hue mixing treatment on the correction target image and the skin region sub-image.
Optionally, in the embodiment of the present application, the correction module 220 is specifically configured to:
acquiring a first weight of the skin region sub-image and a second weight of the first corrected image;
According to the first weight and the second weight, the first correction image and the skin region sub-image are fused;
wherein the sum of the first weight and the second weight is 1, and the first weight is determined according to a distance difference matrix between the skin region sub-image and the first correction image.
The image processing device in the embodiment of the application can be a device, and can also be a component, an integrated circuit or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), etc., and the non-mobile electronic device may be a server, a network attached storage (Network Attached Storage, NAS), a personal computer (personal computer, PC), a Television (TV), a teller machine, a self-service machine, etc., and the embodiments of the present application are not limited in particular.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an ios operating system, or other possible operating systems, and the embodiment of the present application is not limited specifically.
The image processing device provided in the embodiment of the present application can implement each process implemented by the method embodiment of fig. 1, and in order to avoid repetition, a description is omitted here.
Optionally, as shown in fig. 3, the embodiment of the present application further provides an electronic device 300, including a processor 301, a memory 302, and a program or an instruction stored in the memory 302 and capable of running on the processor 301, where the program or the instruction implements each process of the above-mentioned image processing method embodiment when executed by the processor 301, and the process may achieve the same technical effect, and for avoiding repetition, a description is omitted herein.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 4 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 400 includes, but is not limited to: radio frequency unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, and processor 410.
Those skilled in the art will appreciate that the electronic device 400 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 410 by a power management system to perform functions such as managing charge, discharge, and power consumption by the power management system. The electronic device structure shown in fig. 4 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
It should be appreciated that in embodiments of the present application, the input unit 404 may include a graphics processor (Graphics Processing Unit, GPU) 4041 and a microphone 4042, with the graphics processor 4041 processing image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 406 may include a display panel 4061, and the display panel 4061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 407 includes a touch panel 4071 and other input devices 4072. The touch panel 4071 is also referred to as a touch screen. The touch panel 4071 may include two parts, a touch detection device and a touch controller. Other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein. Memory 409 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 410 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 410.
The embodiment of the application also provides a readable storage medium, on which a program or an instruction is stored, which when executed by a processor, implements each process of the above image processing method embodiment, and can achieve the same technical effects, and in order to avoid repetition, a detailed description is omitted here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the application further provides a chip, which comprises a processor and a communication interface, wherein the communication interface is coupled with the processor, and the processor is used for running programs or instructions to realize the processes of the embodiment of the image processing method, and can achieve the same technical effects, so that repetition is avoided, and the description is omitted here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a computer software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.

Claims (8)

1. An image processing method, comprising:
identifying whether a skin region sub-image in a target image has a color cast;
in the case where the skin region sub-image has a color cast, generating a correction target image according to the skin region sub-image, and performing hue mixing processing on the correction target image and the skin region sub-image to obtain a first corrected image, wherein the color parameters of the correction target image are the color parameters of an ideal skin image; and
fusing the first corrected image with the skin region sub-image to obtain a second corrected image;
wherein generating the correction target image according to the skin region sub-image comprises:
according to a first preset mapping relationship, obtaining, from the pixel mean of a first color channel in the skin region sub-image, a standard pixel mean of a second color channel and a standard pixel mean of a third color channel in the skin region sub-image; and
randomly perturbing the standard pixel mean of the second color channel and the standard pixel mean of the third color channel to generate the correction target image.

2. The image processing method according to claim 1, wherein performing hue mixing processing on the correction target image and the skin region sub-image comprises:
performing hue mixing processing on the correction target image and the skin region sub-image by cyclically iterating the hue value of the correction target image and the brightness value of the skin region sub-image.

3. The image processing method according to claim 1, wherein fusing the first corrected image with the skin region sub-image comprises:
acquiring a first weight of the skin region sub-image and a second weight of the first corrected image; and
fusing the first corrected image with the skin region sub-image according to the first weight and the second weight;
wherein the sum of the first weight and the second weight is 1, and the first weight is determined according to a distance difference matrix between the skin region sub-image and the first corrected image.

4. An image processing apparatus, comprising:
a recognition module, configured to identify whether a skin region sub-image in a target image has a color cast; and
a correction module, configured to: in the case where the recognition module identifies that the skin region sub-image has a color cast, generate a correction target image according to the skin region sub-image, and perform hue mixing processing on the correction target image and the skin region sub-image to obtain a first corrected image, wherein the color parameters of the correction target image are the color parameters of an ideal skin image; and fuse the first corrected image with the skin region sub-image to obtain a second corrected image;
wherein the correction module is specifically configured to:
according to a first preset mapping relationship, obtain, from the pixel mean of a first color channel in the skin region sub-image, a standard pixel mean of a second color channel and a standard pixel mean of a third color channel in the skin region sub-image; and
randomly perturb the standard pixel mean of the second color channel and the standard pixel mean of the third color channel to generate the correction target image.

5. The image processing apparatus according to claim 4, wherein the correction module is specifically configured to:
perform hue mixing processing on the correction target image and the skin region sub-image by cyclically iterating the hue value of the correction target image and the brightness value of the skin region sub-image.

6. The image processing apparatus according to claim 4, wherein the correction module is specifically configured to:
acquire a first weight of the skin region sub-image and a second weight of the first corrected image; and
fuse the first corrected image with the skin region sub-image according to the first weight and the second weight;
wherein the sum of the first weight and the second weight is 1, and the first weight is determined according to a distance difference matrix between the skin region sub-image and the first corrected image.

7. An electronic device, comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the image processing method according to any one of claims 1 to 3.

8. A readable storage medium storing a program or instructions which, when executed by a processor, implement the steps of the image processing method according to any one of claims 1 to 3.
CN202110680600.7A 2021-06-18 2021-06-18 Image processing method, device and electronic device Active CN113487497B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110680600.7A CN113487497B (en) 2021-06-18 2021-06-18 Image processing method, device and electronic device
PCT/CN2022/099439 WO2022262848A1 (en) 2021-06-18 2022-06-17 Image processing method and apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110680600.7A CN113487497B (en) 2021-06-18 2021-06-18 Image processing method, device and electronic device

Publications (2)

Publication Number Publication Date
CN113487497A CN113487497A (en) 2021-10-08
CN113487497B true CN113487497B (en) 2024-11-15

Family

ID=77935603

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110680600.7A Active CN113487497B (en) 2021-06-18 2021-06-18 Image processing method, device and electronic device

Country Status (2)

Country Link
CN (1) CN113487497B (en)
WO (1) WO2022262848A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113487497B (en) * 2021-06-18 2024-11-15 维沃移动通信有限公司 Image processing method, device and electronic device

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110381303A (en) * 2019-05-31 2019-10-25 成都品果科技有限公司 Portrait automatic exposure white balance correction method and system based on skin color statistics

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009059009A (en) * 2007-08-30 2009-03-19 Dainippon Printing Co Ltd Color-corrected image creation method and color-corrected image creation device
CN103971344B (en) * 2014-05-27 2016-09-07 广州商景网络科技有限公司 The colour of skin color cast correction of a kind of certificate image and system
US9672414B2 (en) * 2015-11-11 2017-06-06 Adobe Systems Incorporated Enhancement of skin, including faces, in photographs
CN108038889A (en) * 2017-11-10 2018-05-15 维沃移动通信有限公司 The processing method and mobile terminal of a kind of image color cast
CN112532855B (en) * 2019-09-17 2022-04-29 华为技术有限公司 Image processing method and device
CN111063008A (en) * 2019-12-23 2020-04-24 北京达佳互联信息技术有限公司 Image processing method, device, equipment and storage medium
CN111524076B (en) * 2020-04-07 2023-07-21 咪咕文化科技有限公司 Image processing method, electronic device and computer-readable storage medium
CN113487497B (en) * 2021-06-18 2024-11-15 维沃移动通信有限公司 Image processing method, device and electronic device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110381303A (en) * 2019-05-31 2019-10-25 成都品果科技有限公司 Portrait automatic exposure white balance correction method and system based on skin color statistics

Also Published As

Publication number Publication date
WO2022262848A1 (en) 2022-12-22
CN113487497A (en) 2021-10-08

Similar Documents

Publication Publication Date Title
WO2018176925A1 (en) Hdr image generation method and apparatus
CN112449120A (en) High dynamic range video generation method and device
WO2023273111A1 (en) Image processing method and apparatus, and computer device and storage medium
CN104754239A (en) Photographing method and device
CN113010126B (en) Display control method, device, electronic equipment and medium
CN111901519B (en) Screen light supplement method and device and electronic equipment
CN113132696A (en) Image tone mapping method, device, electronic equipment and storage medium
CN116668838B (en) Image processing methods and electronic equipment
CN111968605A (en) Exposure adjusting method and device
CN113487497B (en) Image processing method, device and electronic device
CN112419218B (en) Image processing method and device and electronic equipment
KR20230041648A (en) Multi-frame depth-based multi-camera relighting of images
CN112437237B (en) Shooting method and device
CN113393391B (en) Image enhancement method, image enhancement device, electronic apparatus, and storage medium
CN117692788A (en) Image processing method and electronic equipment
CN113012085B (en) Image processing method and device
CN116309224A (en) Image fusion method, device, terminal and computer readable storage medium
CN114816619A (en) Information processing method and electronic equipment
CN113676674A (en) Image processing method and device, electronic equipment and readable storage medium
CN113962840A (en) Image processing method, image processing device, electronic equipment and storage medium
CN112669229B (en) Image processing method and device and electronic equipment
CN112446848A (en) Image processing method and device and electronic equipment
CN118474552B (en) Model training method, image processing method and electronic device
CN114071016B (en) Image processing method, device, electronic equipment and storage medium
CN112995539B (en) Mobile terminal and image processing method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant