Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application fall within the scope of protection of the present application.
The terms "first", "second" and the like in the description and in the claims are used to distinguish between similar elements and not necessarily to describe a particular sequence or chronological order. It is to be understood that the terms so used are interchangeable, where appropriate, such that the embodiments of the present application may be implemented in sequences other than those illustrated or described herein. The objects distinguished by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The image processing method, the image processing apparatus and the electronic device provided by the embodiments of the present application are described in detail below through specific embodiments and application scenarios thereof with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present application provides an image processing method, including the following S101 to S103:
S101, the image processing device identifies whether the skin area sub-image in the target image is color cast.
In the embodiment of the application, the target image is an image acquired by an image acquisition device or an electronic device provided with an image acquisition device, or an image received by such an electronic device from another electronic device. The target image may be an image acquired only of a person or a face, or may be an image including both a person or a face and other background content (e.g., objects or scenery).
It will be appreciated that the target image is an image that needs, or may need, to be corrected. In other words, the target image is an image that needs to be subjected to color cast correction processing because a color cast problem exists or may exist.
In the embodiment of the application, the number of the target images may be one or more. For example, when continuous image acquisition (i.e., continuous shooting) is performed on a target object, the number of target images is plural. In this case, the image processing method provided by the embodiment of the present application may be executed separately for each target image.
It will be appreciated that the skin region sub-image is the image of the region in the target image where the skin of the person is located.
Optionally, in the embodiment of the present application, the skin area sub-image may be an area where the face of the person in the target image is located, an area where the face and the neck of the person in the target image are located, or an area where all the skin (for example, face, neck, hand, arm, etc.) of the person in the target image is located.
Illustratively, in the case where a person is included in an image, the face region generally occupies a relatively large and concentrated proportion of the image. Therefore, the image of the region where the face of the person is located in the target image can be selected as the skin region sub-image, as illustrated by the sketch below.
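By way of illustration only (the embodiment does not prescribe a particular detection method), the following minimal Python sketch crops the largest detected face as the skin region sub-image. The use of OpenCV's bundled Haar cascade and the function name are assumptions introduced for this sketch.

```python
import cv2

def extract_skin_region(target_bgr):
    # Detect faces and use the largest detected face region as the skin region sub-image
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no skin region sub-image found
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face region
    return target_bgr[y:y + h, x:x + w]
```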
It will be appreciated that, in the target image, the skin region sub-image has color attribute characteristics different from those of other regions. Therefore, the skin region sub-image can be distinguished from the non-skin-region sub-image, so that the skin region sub-image in the target image can be identified and whether a color cast occurs can be determined.
It will be appreciated that color cast refers to the situation where the acquired skin color in the image deviates from normal human skin color. This is generally caused by ambient light. For example, color cast problems may occur when the ambient light is too bright or too dark, or when the illumination source has an abnormal color.
It should be noted that the above step only determines whether the skin area sub-image is color cast. Specifically, in the case where the target image includes a skin region sub-image and a non-skin-region sub-image (i.e., a background), the image processing method according to the embodiment of the present application first identifies the skin region sub-image, and then determines whether the identified skin region sub-image is color cast.
It can be understood that, when the identification result indicates that the skin area sub-image is color cast, step S102 is performed; if the identification result indicates that no color cast occurs in the skin area sub-image, step S102 is not required, that is, color cast correction need not be performed on the skin area sub-image. In this case, the target image may be directly output (e.g., displayed or transmitted) or stored.
It will be appreciated that, given that the skin region sub-image has color attribute characteristics different from those of other regions, whether a color cast occurs can be identified and determined based on the logical relationship between the optical three-primary-color pixel values (hereinafter referred to as RGB pixel values) of the skin region sub-image.
Illustratively, taking the skin tone attribute of the yellow race as an example: given that the logical relationship between the skin tone RGB pixel values in a normal portrait satisfies R value > G value > B value, it can be determined that the skin area sub-image is color cast when the RGB pixel values of the skin area sub-image do not satisfy this relationship.
Optionally, in the embodiment of the present application, S101 includes the following S101a and S101b:
S101a, the image processing device acquires the red, green and blue pixel mean values of the red, green and blue color channels in the skin region sub-image, respectively.
It can be understood that the red, green and blue pixel mean values are, in order, the pixel mean value of the red color channel, the pixel mean value of the green color channel and the pixel mean value of the blue color channel in the skin region sub-image.
S101b, the image processing device identifies whether the skin area sub-image is color cast according to the logical relationship among the red, green and blue pixel mean values.
It will be appreciated that the logical relationship followed by the red, green and blue pixel mean values may differ for skin area sub-images of different ethnicities (including the yellow, black and white races).
Illustratively, taking the skin tone attribute of the yellow race as an example, S101b includes the following S101b1 to S101b4:
S101b1, the image processing device determines that the skin area sub-image is color cast and is yellowish when a first difference value is greater than or equal to a first threshold value.
The first difference value is the difference between the red pixel mean value (Rm for short) and the blue pixel mean value (Bm for short).
It may be understood that the value of the first threshold may be specifically determined according to actual use requirements, which is not limited by the embodiment of the present application.
In example 1, the first threshold may have a value of 90. Accordingly, when Rm − Bm ≥ 90, it can be determined that the skin region sub-image is color cast and is yellowish.
S101b2, the image processing device determines that the skin area sub-image is color cast and is greenish when a second difference value is less than or equal to a second threshold value.
The second difference value is the difference between the red pixel mean value (Rm for short) and the green pixel mean value (Gm for short).
It can be understood that the value of the second threshold may be specifically determined according to actual use requirements, which is not limited by the embodiment of the present application.
In example 2, the second threshold may have a value of 25. Accordingly, when Rm − Gm ≤ 25, it can be determined that the skin region sub-image is color cast and is greenish.
S101b3, the image processing device determines that the skin area sub-image is color cast and is reddish when the sum of the first difference value and the second difference value is greater than or equal to a third threshold value.
Wherein, as described above, the first difference value is the difference between the red pixel mean value (Rm) and the blue pixel mean value (Bm), and the second difference value is the difference between the red pixel mean value (Rm) and the green pixel mean value (Gm).
It can be understood that the value of the third threshold may be specifically determined according to actual use requirements, which is not limited by the embodiment of the present application.
In example 3, the third threshold may have a value of 150. Accordingly, when 2 × Rm − Gm − Bm ≥ 150, i.e., (Rm − Bm) + (Rm − Gm) ≥ 150, it can be determined that the skin region sub-image is color cast and is reddish.
S101b4, when the sum of a first minimum value and a second minimum value is less than a fourth threshold value, the image processing device determines that the skin area sub-image is color cast and is bluish.
Wherein the first minimum value is the smaller of the first difference value and zero, i.e., min(Rm − Bm, 0); the second minimum value is the smaller of the difference between the green pixel mean value and the blue pixel mean value and zero, i.e., min(Gm − Bm, 0).
It can be understood that the value of the fourth threshold may be specifically determined according to actual use requirements, which is not limited by the embodiment of the present application.
In example 4, the fourth threshold may have a value of 0. Accordingly, when min(Rm − Bm, 0) + min(Gm − Bm, 0) < 0, it can be determined that the skin region sub-image is color cast and is bluish.
It will be appreciated that, in the case where none of the conditions of S101b1 to S101b4 described above is met (i.e., the skin area sub-image is not yellowish, greenish, reddish or bluish), it may be determined that the skin area sub-image is not color cast. Conversely, when the skin area sub-image is one or more of yellowish, greenish, reddish and bluish, the skin area sub-image is color cast.
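A minimal Python sketch of S101a to S101b4, assuming the example thresholds 90, 25, 150 and 0 from examples 1 to 4 and a BGR image array; the function name and threshold values are illustrative, not prescribed by the embodiment.

```python
import numpy as np

def detect_color_cast(skin_bgr):
    # S101a: per-channel pixel mean values (OpenCV stores images in BGR order)
    b_m, g_m, r_m = [float(skin_bgr[..., c].mean()) for c in range(3)]
    casts = []
    if r_m - b_m >= 90:                            # S101b1: yellowish
        casts.append("yellow")
    if r_m - g_m <= 25:                            # S101b2: greenish
        casts.append("green")
    if (r_m - b_m) + (r_m - g_m) >= 150:           # S101b3: reddish
        casts.append("red")
    if min(r_m - b_m, 0) + min(g_m - b_m, 0) < 0:  # S101b4: bluish
        casts.append("blue")
    return casts  # an empty list means no color cast, so no correction is needed
```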
S102, in the case where the skin area sub-image is color cast, the image processing device generates a correction target image according to the skin area sub-image, and performs hue mixing processing on the correction target image and the skin area sub-image to obtain a first correction image.
It will be appreciated that the color parameters of the correction target image are those of an ideal skin image.
In this embodiment, by performing the hue mixing process on the correction target image and the skin region sub-image, the correction target image can be used to perform a first correction on the skin region sub-image (i.e., to obtain the first correction image).
S103, the image processing device performs fusion processing on the first correction image and the skin region sub-image to obtain a second correction image.
It should be noted that S102 to S103 described above perform color cast correction only on the skin region sub-image. In other words, in the case where the target image includes a skin region sub-image and a non-skin-region sub-image, the image processing method according to the embodiment of the present application first identifies whether the skin region sub-image is color cast through S101, then obtains the first correction image through S102, and finally obtains the second correction image through S103. The image processing method according to the embodiment of the application does not process the non-skin-region sub-image, and keeps its color attributes (e.g., hue, brightness and saturation) unchanged.
It will be appreciated that, given that the skin region sub-image has color attribute characteristics different from those of other regions, color cast correction can be performed on the skin region sub-image according to the logical relationship between the RGB pixel values of the skin region sub-image.
Illustratively, taking the skin color attribute of the yellow race as an example, the pixel values of the RGB three color channels of a plurality of normal-skin-color samples can be collected under different brightness conditions, the mapping relationship between one color channel and the other two color channels in a skin image with normal skin color can be obtained by first-order (linear) polynomial fitting, and color cast correction can then be performed on the skin region sub-image according to the mapping relationship.
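A minimal sketch of such a fit, assuming rs, gs and bs are arrays of per-sample channel means collected from normal-skin-tone samples; the array names, function name and use of numpy's polyfit are assumptions for illustration.

```python
import numpy as np

def fit_skin_mapping(rs, gs, bs):
    # First-order polynomial (linear) fit: maps the red channel mean to the
    # expected green and blue channel means of a normal skin tone.
    a1, c1 = np.polyfit(rs, gs, deg=1)  # G = a1 * R + c1 (c1 may be negative)
    a2, c2 = np.polyfit(rs, bs, deg=1)  # B = a2 * R + c2
    return (a1, c1), (a2, c2)
```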
Optionally, in the embodiment of the present application, the generating of the correction target image according to the skin region sub-image in S102 includes the following S102a to S102b:
S102a, the image processing device obtains, according to a first preset mapping relationship, a standard pixel mean value of a second color channel and a standard pixel mean value of a third color channel in the skin region sub-image from the pixel mean value of a first color channel in the skin region sub-image.
It can be understood that the first preset mapping relationship is a mapping relationship obtained by the above sample collection and linear fitting. The first preset mapping relationship includes a mapping relationship (abbreviated as mapping relationship A) between the pixel mean value of the first color channel and the standard pixel mean value of the second color channel, and further includes a mapping relationship (abbreviated as mapping relationship B) between the pixel mean value of the first color channel and the standard pixel mean value of the third color channel.
It can be understood that the mapping relationship A is a mapping relationship that takes the pixel mean value of the first color channel as the independent variable and the standard pixel mean value of the second color channel as the dependent variable. The mapping relationship B is a mapping relationship that takes the pixel mean value of the first color channel as the independent variable and the standard pixel mean value of the third color channel as the dependent variable.
It should be noted that, in the embodiment of the present application, the first color channel, the second color channel and the third color channel are the red, green and blue color channels in some order. Which color channel each of them corresponds to may be determined according to actual use requirements, and the embodiment of the present application is not limited thereto.
Optionally, in an embodiment of the present application, the first color channel is a red color channel, the second color channel is a green color channel, and the third color channel is a blue color channel.
It should be noted that, although the first color channel, the second color channel and the third color channel may each be any color channel, for the skin region sub-image the pixel mean value of the red color channel is greater than the pixel mean values of the green and blue color channels; therefore, if the first color channel is the red color channel, the standard pixel mean values of the other two color channels can be obtained more accurately.
Optionally, in an embodiment of the present application, the first preset mapping relationship is:
G1 = a1 × R1 − b1;
B1 = a2 × R1 − b2;
wherein R1 is the pixel mean value of the first color channel, G1 is the standard pixel mean value of the second color channel, B1 is the standard pixel mean value of the third color channel, and a1, a2, b1 and b2 are constants.
It will be appreciated that the specific values of a1, a2, b1 and b2 may be determined according to actual requirements, and the embodiment of the present application is not limited thereto.
Illustratively, the first preset mapping relationship may be:
G1 = 1.118 × R1 − 71.57;
B1 = 0.959 × R1 − 64.38.
S102b, the image processing device randomly perturbs the standard pixel mean value of the second color channel and the standard pixel mean value of the third color channel to generate the correction target image.
It will be appreciated that the size (e.g., shape and resolution) of the correction target image is consistent with that of the skin region sub-image.
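A minimal sketch of S102a to S102b, assuming the example mapping coefficients above and a small uniform random perturbation; the perturbation amplitude of ±2 and the function name are illustrative assumptions not fixed by the embodiment.

```python
import numpy as np

def make_correction_target(skin_bgr, rng=None):
    rng = rng or np.random.default_rng()
    h, w = skin_bgr.shape[:2]
    r1 = float(skin_bgr[..., 2].mean())  # pixel mean of the red channel (BGR order)
    g1 = 1.118 * r1 - 71.57              # standard green mean (mapping relationship A)
    b1 = 0.959 * r1 - 64.38              # standard blue mean (mapping relationship B)
    target = np.empty((h, w, 3), dtype=np.float32)
    target[..., 2] = r1
    target[..., 1] = g1 + rng.uniform(-2, 2, size=(h, w))  # random perturbation (S102b)
    target[..., 0] = b1 + rng.uniform(-2, 2, size=(h, w))
    # Same size (shape, resolution) as the skin region sub-image
    return np.clip(target, 0, 255).astype(np.uint8)
```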
In the embodiment of the application, the first correction image is an image obtained by performing hue mixing on the skin region sub-image and the correction target image. Specifically, the first correction image is a hue-mixed image that combines the hue of the correction target image with the brightness and saturation values of the original image (i.e., the skin region sub-image).
Thus, after the correction target image is obtained, the hue mixing processing can be performed on the correction target image and the skin region sub-image, so that the skin region sub-image is corrected appropriately according to its color cast condition.
Optionally, in the embodiment of the present application, the specific manner of the hue mixing processing is: performing loop iteration on the hue value of the correction target image and the brightness value of the skin region sub-image, so as to perform the hue mixing processing on the correction target image and the skin region sub-image.
It will be appreciated that, when the skin area sub-image and the correction target image are subjected to the mixing processing, a change in hue causes a change in brightness, and a change in brightness in turn causes changes in hue and saturation.
Thus, by using the hue of the correction target image while maintaining the brightness of the skin region sub-image, the mixing processing can be performed through loop iteration, thereby achieving the purpose of skin color correction.
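A minimal sketch of this loop iteration, assuming an HLS working space via OpenCV; the choice of color space, the iteration count and the function name are illustrative assumptions.

```python
import cv2

def hue_blend(skin_bgr, target_bgr, iterations=3):
    orig_hls = cv2.cvtColor(skin_bgr, cv2.COLOR_BGR2HLS)
    tgt_hue = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2HLS)[..., 0]
    result = skin_bgr.copy()
    for _ in range(iterations):
        hls = cv2.cvtColor(result, cv2.COLOR_BGR2HLS)
        hls[..., 0] = tgt_hue            # take the hue of the correction target image
        hls[..., 1] = orig_hls[..., 1]   # restore the brightness of the skin region sub-image
        result = cv2.cvtColor(hls, cv2.COLOR_HLS2BGR)
    return result  # the first correction image
```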
Optionally, in the embodiment of the present application, S103 includes the following S103a and S103b:
S103a, the image processing device acquires a first weight of the skin region sub-image and a second weight of the first correction image.
Wherein the sum of the first weight and the second weight is 1.
It will be appreciated that the purpose of assigning the first weight and the second weight is that the skin region sub-image and the first correction image are fused according to the first weight and the second weight.
S103b, the image processing device performs fusion processing on the skin region sub-image and the first correction image according to the first weight and the second weight.
It will be appreciated that, after the fusion processing, the second correction image may be obtained; the second correction image is the image for which the color cast correction has been completed.
In this way, the problem that the skin color image of a region with a heavy color cast is overcorrected after the hue mixing processing can be avoided.
Illustratively, the first correction image and the skin region sub-image may be subjected to the fusion processing as in example 5 below. For convenience of description, in example 5 the skin region sub-image is abbreviated as Sorg, the first correction image as Shue, and the second correction image as Sfusion.
In example 5, the fusion processing is performed using the formula:
Sfusion = α × Sorg + (1 − α) × Shue;
wherein α is the first weight and 1 − α is the second weight.
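A minimal sketch of example 5, assuming a fixed first weight α; the value 0.3 and the function name are purely illustrative.

```python
import numpy as np

def fuse_fixed(s_org, s_hue, alpha=0.3):
    # Sfusion = α × Sorg + (1 − α) × Shue, computed in float to avoid uint8 overflow
    s_fusion = alpha * s_org.astype(np.float32) + (1 - alpha) * s_hue.astype(np.float32)
    return np.clip(s_fusion, 0, 255).astype(np.uint8)  # the second correction image
```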
Optionally, in an embodiment of the present application, the first weight is determined according to a distance difference matrix between the skin region sub-image and the first correction image.
Illustratively, the distance difference matrix may be obtained as in example 6 below; the first weight is determined according to the distance difference matrix, the second weight is determined according to the first weight, and the skin region sub-image and the first correction image are fused according to the first weight and the second weight. For convenience of description, in example 6 the skin region sub-image is likewise abbreviated as Sorg, the first correction image as Shue, and the second correction image as Sfusion.
In example 6, to obtain the distance difference matrix, normalization processing may be performed on the difference between the pixels of Sorg and Shue to obtain the distance difference matrix Wd, where the normalization divides the pixel value difference by 255. The larger the pixel value difference, the more serious the color cast problem of Sorg. Through Wd, the fusion proportion for regions with a heavy color cast is adjusted so that excessive correction of the skin color in such regions is avoided. The fusion processing is performed using the formulas:
Sfusion = αd × Sorg + (1 − αd) × Shue;
Wd = Normal(Sorg − Shue);
αd = α × Wd;
wherein αd is the first weight and 1 − αd is the second weight.
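A minimal sketch of example 6, assuming per-pixel weighting with the absolute normalized difference as Wd; the base weight α = 0.3 and the function name are illustrative assumptions.

```python
import numpy as np

def fuse_weighted(s_org, s_hue, alpha=0.3):
    s_org_f = s_org.astype(np.float32)
    s_hue_f = s_hue.astype(np.float32)
    w_d = np.abs(s_org_f - s_hue_f) / 255.0  # distance difference matrix Wd, in [0, 1]
    alpha_d = alpha * w_d                    # per-pixel first weight: a heavier cast keeps more of the original
    s_fusion = alpha_d * s_org_f + (1 - alpha_d) * s_hue_f
    return np.clip(s_fusion, 0, 255).astype(np.uint8)
```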
In the embodiment of the application, by identifying whether the skin area sub-image in the target image is color cast, the color cast condition of the target image and the region needing color cast correction (i.e., the region occupied by the user's skin in the image) can be determined. Further, when the skin region sub-image is color cast, a correction target image is generated from the skin region sub-image, and hue mixing processing is performed on the correction target image and the skin region sub-image to obtain the first correction image. Fusion processing is then performed on the first correction image and the skin region sub-image to obtain the second correction image. In this way, color cast correction suited to the skin region sub-image can be performed according to the color cast identification result for the skin region sub-image in the target image. Since the color cast correction operation is performed only on the skin region sub-image, it does not change the colors of the images of other regions in the target image; the influence of ambient light on the face of the user during image acquisition can thus be avoided, ensuring a true and natural skin color of the user in the image acquisition result.
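Tying the above together, a minimal end-to-end usage sketch; all helper names refer to the illustrative sketches above, not to an API prescribed by the embodiment, and the file path is hypothetical.

```python
import cv2

target = cv2.imread("target.jpg")                 # the target image
skin = extract_skin_region(target)                # skin region sub-image
if skin is not None and detect_color_cast(skin):  # S101: color cast identified
    corr = make_correction_target(skin)           # S102: correction target image
    first = hue_blend(skin, corr)                 # S102: hue mixing -> first correction image
    second = fuse_weighted(skin, first)           # S103: fusion -> second correction image
```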
Optionally, in an embodiment of the present application, in the case where the skin area sub-image is color cast, the color cast scene includes at least one of the following: a yellowish scene, a greenish scene, a reddish scene and a bluish scene.
Optionally, in the embodiment of the present application, the color cast scenes may be numbered, and, according to the number corresponding to the color cast scene, a color cast correction mode corresponding to that number is adopted to correct the color cast of the skin area sub-image.
Alternatively, in the embodiment of the present application, in the case where the skin area sub-image is color cast, S102 to S103 described above may be performed one or more times.
For example, the above S102 to S103 may be performed a first time, high-exposure suppression and color cast pre-correction may then be performed for the color cast condition, and the above S102 to S103 may be performed a second time, so that the image that has undergone high-exposure suppression and color cast pre-correction is further corrected.
For example, to achieve high-exposure suppression and color cast pre-correction, a local color cast region in the skin region sub-image may be identified, and, when a local color cast region is identified, color cast pre-correction is performed on it. When S102 to S103 are performed a plurality of times, the parameters used in each execution of the color cast correction steps of S102 to S103, such as the preset mapping relationship, may be the same or different.
It should be noted that the execution subject of the image processing method provided in the embodiment of the present application may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. In the embodiment of the present application, the image processing apparatus is described by taking, as an example, the case where the image processing method is performed by the image processing apparatus.
As shown in fig. 2, an embodiment of the present application further provides an image processing apparatus 200, including:
The identifying module 210 is configured to identify whether the skin area sub-image in the target image is color cast.
The correction module 220 is configured to: generate a correction target image according to the skin region sub-image when the skin region sub-image identified by the identifying module 210 is color cast, and perform hue mixing processing on the correction target image and the skin region sub-image to obtain a first correction image; and perform fusion processing on the first correction image and the skin region sub-image to obtain a second correction image.
In the embodiment of the present application, the image processing apparatus 200 can determine the color cast condition of the target image and the region needing color cast correction (i.e., the region occupied by the user's skin in the image) by identifying whether the skin area sub-image in the target image is color cast. Further, when the skin region sub-image is color cast, a correction target image is generated from the skin region sub-image, and hue mixing processing is performed on the correction target image and the skin region sub-image to obtain the first correction image. Fusion processing is then performed on the first correction image and the skin region sub-image to obtain the second correction image. In this way, the image processing apparatus 200 can perform color cast correction suited to the skin region sub-image according to the color cast identification result for the skin region sub-image in the target image. Since the color cast correction operation is performed only on the skin region sub-image, it does not change the colors of the images of other regions in the target image; the influence of ambient light on the face of the user during image acquisition can thus be avoided, ensuring a true and natural skin color of the user in the image acquisition result.
Optionally, in the embodiment of the present application, the correction module 220 is specifically configured to:
obtain, according to a first preset mapping relationship, a standard pixel mean value of a second color channel and a standard pixel mean value of a third color channel in the skin region sub-image from the pixel mean value of the first color channel in the skin region sub-image;
and randomly perturb the standard pixel mean value of the second color channel and the standard pixel mean value of the third color channel to generate the correction target image.
Optionally, in the embodiment of the present application, the correction module 220 is specifically configured to:
perform loop iteration on the hue value of the correction target image and the brightness value of the skin region sub-image, so as to perform the hue mixing processing on the correction target image and the skin region sub-image.
Optionally, in the embodiment of the present application, the correction module 220 is specifically configured to:
acquire a first weight of the skin region sub-image and a second weight of the first correction image;
perform fusion processing on the first correction image and the skin region sub-image according to the first weight and the second weight;
wherein the sum of the first weight and the second weight is 1, and the first weight is determined according to a distance difference matrix between the skin region sub-image and the first correction image.
The image processing apparatus in the embodiment of the application may be an apparatus, or may be a component, an integrated circuit or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, etc.; the embodiment of the present application is not specifically limited.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, and the embodiment of the present application is not specifically limited.
The image processing device provided in the embodiment of the present application can implement each process implemented by the method embodiment of fig. 1, and in order to avoid repetition, a description is omitted here.
Optionally, as shown in fig. 3, the embodiment of the present application further provides an electronic device 300, including a processor 301, a memory 302, and a program or instruction stored in the memory 302 and executable on the processor 301. The program or instruction, when executed by the processor 301, implements each process of the above image processing method embodiment and can achieve the same technical effect; to avoid repetition, a description is omitted here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 4 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 400 includes, but is not limited to: radio frequency unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, and processor 410.
Those skilled in the art will appreciate that the electronic device 400 may further include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 410 through a power management system so as to manage charging, discharging, power consumption and other functions through the power management system. The electronic device structure shown in fig. 4 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or have a different arrangement of components, which is not described in detail herein.
It should be appreciated that in embodiments of the present application, the input unit 404 may include a graphics processor (Graphics Processing Unit, GPU) 4041 and a microphone 4042, with the graphics processor 4041 processing image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 406 may include a display panel 4061, and the display panel 4061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 407 includes a touch panel 4071 and other input devices 4072. The touch panel 4071 is also referred to as a touch screen. The touch panel 4071 may include two parts, a touch detection device and a touch controller. Other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein. Memory 409 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 410 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 410.
The embodiment of the application also provides a readable storage medium, on which a program or an instruction is stored, which when executed by a processor, implements each process of the above image processing method embodiment, and can achieve the same technical effects, and in order to avoid repetition, a detailed description is omitted here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the application further provides a chip, which comprises a processor and a communication interface, wherein the communication interface is coupled with the processor, and the processor is used for running programs or instructions to realize the processes of the embodiment of the image processing method, and can achieve the same technical effects, so that repetition is avoided, and the description is omitted here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-level chips, chip systems, system-on-chip chips, etc.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises that element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in the reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by means of hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present application, or the part thereof contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the methods according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. In light of the present application, those of ordinary skill in the art may make many further forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.