Disclosure of Invention
The embodiments of the invention provide an emotion recognition method, an emotion recognition apparatus, a computer device and a storage medium, which are used to improve the accuracy and applicability of emotion recognition.
A method of emotion recognition, comprising:
acquiring a target image from a first image acquisition device, and identifying a target face image in the target image;
if an occlusion area exists in the target face image, extracting comparison information from the target face image;
determining, from other image acquisition devices, images associated with the target face image according to attribute information of the first image acquisition device and/or the target image; comparing feature information of the determined images with the comparison information, and extracting an associated image from the other image acquisition devices; the associated image is an image that includes the target face image without an occlusion area;
if no associated image exists in the other image acquisition devices, performing image segmentation on the non-occluded area in the target face image to obtain a plurality of image segmentation areas;
performing image segmentation on each sample image in a preset emotion sample image set according to the plurality of image segmentation areas, so that each sample image in the emotion sample image set is segmented into a plurality of sample segmentation areas, wherein the emotion sample image set comprises a plurality of sample images and emotion annotation data corresponding to each sample image;
performing cluster analysis on each image segmentation area in the target face image and the corresponding sample segmentation area in each sample image to determine a cluster corresponding to each image segmentation area;
counting the emotion annotation data within the cluster corresponding to each image segmentation area, and determining the most frequent emotion annotation data in that cluster as reference emotion data;
determining a weight value of each image segmentation area according to the area of that image segmentation area;
performing calculation according to the weight value of each image segmentation area and the corresponding reference emotion data, and determining an emotion recognition result of the target face image;
if the associated image exists in the other image acquisition devices, determining an associated face area from the associated image, and performing emotion recognition on the associated face area to obtain the emotion recognition result of the target face image, wherein the associated face area is a face image consistent with the target face image.
An emotion recognition apparatus comprising:
the target image acquisition module is used for acquiring a target image from the first image acquisition device and identifying a target face image in the target image;
the comparison information extraction module is used for extracting comparison information from the target face image when an occlusion area exists in the target face image;
the associated image extraction module is used for determining, from other image acquisition devices, images associated with the target face image according to the attribute information of the first image acquisition device and/or the target image, comparing feature information of the determined images with the comparison information, and extracting an associated image from the other image acquisition devices, wherein the associated image is an image that includes the target face image without an occlusion area;
the emotion recognition module is used for determining an associated face area from the associated image if the associated image exists in the other image acquisition devices, and performing emotion recognition on the associated face area to obtain an emotion recognition result of the target face image, wherein the associated face area is a face image consistent with the target face image;
the associated image extraction module comprises:
the first image segmentation module is used for carrying out image segmentation on the non-occluded area in the target face image to obtain a plurality of image segmentation areas when no associated image exists in the other image acquisition devices;
the second image segmentation module is used for carrying out image segmentation on each sample image in a preset emotion sample image set according to the plurality of image segmentation areas, so that each sample image in the emotion sample image set is segmented into a plurality of sample segmentation areas, wherein the emotion sample image set comprises a plurality of sample images and emotion annotation data corresponding to each sample image;
the cluster analysis module is used for carrying out cluster analysis on each image segmentation area in the target face image and the corresponding sample segmentation area in each sample image to determine a cluster corresponding to each image segmentation area;
the reference emotion data determination module is used for counting the emotion annotation data within the cluster corresponding to each image segmentation area and determining the most frequent emotion annotation data in that cluster as reference emotion data;
the emotion recognition result determining module is used for determining the weight value of each image segmentation area according to the area of that image segmentation area, performing calculation according to the weight value of each image segmentation area and the corresponding reference emotion data, and determining the emotion recognition result of the target face image.
A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the emotion recognition method when executing the computer program.
A computer-readable storage medium, in which a computer program is stored which, when executed by a processor, carries out the steps of the above emotion recognition method.
According to the emotion recognition method, the emotion recognition apparatus, the computer device and the storage medium, a target image of a first image acquisition device is acquired, and a target face image in the target image is identified; if an occlusion area exists in the target face image, comparison information is extracted from the target face image; an associated image is extracted from other image acquisition devices according to the attribute information of the first image acquisition device and/or the target image and the comparison information, wherein the associated image is an image that includes the target face image without an occlusion area; and an associated face area is determined from the associated image, and emotion recognition is performed on the associated face area to obtain an emotion recognition result of the target face image, wherein the associated face area is a face image consistent with the target face image. The method solves the problem that emotion recognition cannot be performed directly on the target image when the face image in the target image is partially occluded, improves the accuracy and applicability of emotion recognition on the target face image in the target scene, and improves the robustness of emotion recognition.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The emotion recognition method provided by the embodiments of the invention can be applied to a monitoring platform. The monitoring platform may include a plurality of image acquisition devices disposed in a target scene. The target scene may be a home, a nursing home, or another public service place. The positions and angles of the image acquisition devices can be set appropriately in the target scene so as to cover the target scene comprehensively, so that images of every position in the target scene can be acquired and better monitoring can be performed. The specific placement of the image acquisition devices can be arranged or adjusted according to different target scenes, and is not described herein again.
The emotion recognition method provided by the embodiment of the invention can be applied to the application environment shown in fig. 1. Specifically, the emotion recognition method is applied to an emotion recognition system, which comprises a client and a server as shown in fig. 1; the client and the server communicate through a network and are used to improve the accuracy and applicability of emotion recognition. The client may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, image capture devices, smart bands, portable wearable devices, and the like. The server can be implemented as an independent server or as a server cluster composed of a plurality of servers.
In an embodiment, as shown in fig. 2, an emotion recognition method is provided, which is described by taking the application of the method to the monitoring platform in fig. 1 as an example, and includes the following steps:
S11: acquiring a target image of the first image acquisition device, and identifying a target face image in the target image.
The first image acquisition device is a device for obtaining digitized image information, and may be, for example, a video camera, a scanner, or the like. The target image is an image corresponding to the monitoring requirements of the user. The target face image is the image region of the target image that contains a face.
Specifically, all images captured by the first image acquisition device are obtained, the target image is determined from these images, and the image corresponding to the target face in the target image, namely the target face image, is identified.
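For illustration only, and not as part of the claimed method, the face detection in step S11 could be sketched with an off-the-shelf detector; the use of OpenCV's Haar cascade and the function name below are assumptions made for this example.

```python
import cv2

def detect_target_faces(image_path: str):
    """Detect candidate target face images in a target image captured by the first device."""
    image = cv2.imread(image_path)                     # target image from the first acquisition device
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Pre-trained frontal-face Haar cascade shipped with OpenCV (an illustrative choice only).
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [tuple(box) for box in faces]               # each box is (x, y, w, h) of a face region
```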
S12: and if the shielding area exists in the target face image, extracting comparison information from the target face image.
The shielded area is an area which is shielded by an object and cannot perform emotion recognition on the image in the target face image. The essence of the comparison information is face feature information of the target face image, and the comparison information is used for comparing the target image with images on other image acquisition equipment so as to determine the image which is in an association relationship with the target image. And the comparison information is used for matching in other images subsequently, and searching a target face consistent with the target face image.
Specifically, after a target face image in a target image is recognized, integrity judgment is performed on the target face image, if a partial region exists in the target face image and is shielded by an object, and because the target face image has a shielded region, emotion recognition cannot be performed on the target face image, therefore, feature information of a target face, that is, comparison information needs to be extracted from the target face image for comparison with images on other image acquisition devices, so as to determine an image having an association relationship with the target image.
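The disclosure does not fix how the integrity judgment or the comparison information is computed; a minimal sketch, assuming a crude eye-count heuristic for occlusion and a gray-level histogram as the comparison feature (a production system would more likely use facial landmarks and a learned face embedding), could look as follows.

```python
import cv2
import numpy as np

EYE_CASCADE = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def has_occlusion(face_bgr: np.ndarray) -> bool:
    """Crude integrity judgment: if fewer than two eyes are detected, assume the face is partly covered."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    eyes = EYE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(eyes) < 2

def extract_comparison_info(face_bgr: np.ndarray) -> np.ndarray:
    """Toy comparison feature: a normalized gray-level histogram of the face crop."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [64], [0, 256]).flatten()
    return hist / (hist.sum() + 1e-8)
```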
S13: and extracting a related image from other image acquisition equipment according to the attribute information of the first image acquisition equipment and/or the target image and the comparison information, wherein the related image is an image including the target human image without the occlusion area.
The attribute information of the target image is information associated with the target image, and may include, for example, an acquisition time of the target image, pixel information of the target image, and the like. The other image capturing devices have the same function as the first image capturing device, but the images captured by the other image capturing devices may be different from the first image capturing device.
Specifically, after the comparison information is extracted from the target face image, an image associated with the target face image is determined from other devices according to the first image acquisition device and/or the attribute information of the target image, and further, the determined feature information of the image associated with the target face image is compared with the comparison information to extract an associated image from other devices.
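A minimal sketch of this comparison step, assuming the comparison information and the candidate features are vectors and that cosine similarity against a fixed threshold decides the association (the 0.9 threshold is an assumption, not a value given by the disclosure):

```python
import numpy as np

def find_associated_image(comparison_info: np.ndarray, candidate_features: dict,
                          threshold: float = 0.9):
    """Return the id of the candidate image from the other devices whose feature vector best
    matches the comparison information, or None if no candidate is similar enough."""
    best_id, best_score = None, threshold
    for image_id, feat in candidate_features.items():
        score = float(np.dot(comparison_info, feat) /
                      (np.linalg.norm(comparison_info) * np.linalg.norm(feat) + 1e-8))
        if score > best_score:
            best_id, best_score = image_id, score
    return best_id
```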
In one embodiment, the emotion recognition method is applied to a monitoring platform, and the monitoring platform comprises a plurality of image acquisition devices arranged in a target scene. Furthermore, all image acquisition devices in the monitoring platform capture the same target scene. It can be understood that all the image acquisition devices may be cameras or the like with uniform specifications, or image acquisition devices with different specifications, but the difference between the capture times of all the devices should be kept small, so that when a certain image acquisition device is blocked by a foreign object in the environment, at least one other image acquisition device can provide images captured within the same time range.
S14: and determining a related face area from the related image, and performing emotion recognition on the related face area to obtain an emotion recognition result of the target face image, wherein the related face area is the face image consistent with the target face image.
Herein, emotion recognition refers to a process of automatically distinguishing an emotional state of an individual by physiological or non-physiological signals of the individual. The emotion recognition result corresponds to the face emotion in the target face image.
Specifically, after the associated image is extracted from the other image acquisition devices according to the attribute information of the first image acquisition device and/or the target image and the comparison information, the region corresponding to a face image consistent with the target face image is determined from the associated image as the associated face region, and emotion recognition is performed on the associated face region to obtain the emotion recognition result of the target face image.
Furthermore, the associated face region and the target face image are acquired by two different image acquisition devices that capture images corresponding to the same face at the same time. However, because the target face image has an occluded area, an emotion recognition result cannot be obtained by performing emotion recognition directly on the target face image; the result of performing emotion recognition on the associated face region is therefore taken as the result of performing emotion recognition on the target face image. That is, in this embodiment, the emotion recognition result of the target face image is obtained by performing emotion recognition on the associated face region.
In this embodiment, when an occlusion area exists in the target face image, an associated image corresponding to the target face image is extracted from other image acquisition devices according to the comparison information of the target face image, the attribute information of the target image, and the attribute information of the first image acquisition device, where the associated image does not have an occlusion area; emotion recognition is then performed on the associated face region of the associated image to obtain the emotion recognition result of the target face image. The method solves the problem that emotion recognition cannot be performed directly on the target image when the face image in the target image is partially occluded, improves the accuracy and applicability of emotion recognition on the target face image in the target scene, and improves the robustness of emotion recognition.
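The branching described above (use the associated image when one exists, otherwise fall back to the segmentation-and-clustering route of steps S21 to S25 below) can be summarized by the following control-flow sketch; every helper passed in through `helpers` is a hypothetical placeholder, not a function defined by this disclosure.

```python
def emotion_recognition_flow(target_image, helpers):
    """Illustrative top-level flow only; `helpers` is a dict of callables supplied by the caller."""
    face = helpers["identify_target_face"](target_image)                  # S11
    if not helpers["has_occlusion"](face):
        return helpers["classify_emotion"](face)                          # no occlusion: classify directly
    info = helpers["extract_comparison_info"](face)                       # S12
    associated = helpers["extract_associated_image"](target_image, info)  # S13
    if associated is not None:
        region = helpers["locate_associated_face"](associated, info)      # S14
        return helpers["classify_emotion"](region)
    # No associated image on any other device: fall back to steps S21-S25.
    return helpers["recognize_from_visible_regions"](face)
```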
In an embodiment, as shown in fig. 3, step S13, that is, extracting an associated image from other image acquisition devices according to the attribute information of the first image acquisition device and/or the target image and the comparison information, specifically includes the following steps:
S131: determining the acquisition time of the target image from the attribute information of the target image, and determining the position information of the target image from the attribute information of the first image acquisition device.
The acquisition time is the time at which the first image acquisition device acquires the target image. The position information is the position, within the target scene, at which the first image acquisition device is placed when acquiring the target image.
Specifically, after the comparison information is extracted from the target face image, the acquisition time of the target image is determined from the attribute information of the target image, and the position information of the target image is determined from the attribute information of the first image acquisition device. That is, the specific time at which the target image was captured and the position in the target scene of the image acquisition device that captured it are determined.
S132: and determining the associated image acquisition equipment according to the position information, wherein the associated image acquisition equipment can acquire the target image.
Specifically, after the position information of the target image is determined from the attribute information of the first image capturing device, an image capturing device that can capture the target image is determined from other image capturing devices than the first image capturing device based on the position information, and the determined image capturing device is recorded as an associated image capturing device.
S133: and extracting the associated image from the associated image acquisition equipment according to the acquisition time and the comparison information.
Specifically, after the acquisition time of the target image is determined from the attribute information of the target image and the associated image acquisition device is determined from the position information, an image acquired at a time corresponding to the acquisition time is determined from the associated image acquisition device and an image associated with the comparison information is determined from the image as an associated image to extract the associated image from the associated image acquisition device.
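A minimal sketch of steps S131 to S133, assuming each image carries metadata with an acquisition timestamp and a list of scene positions its device covers; the field names and the two-second tolerance are assumptions made for this example.

```python
def candidate_associated_images(target_meta: dict, other_images: list, tolerance_s: float = 2.0):
    """Keep images from devices that cover the target position and were captured within a small
    window around the target image's acquisition time; feature matching (S133) then picks the
    associated image among these candidates."""
    t0 = target_meta["acquired_at"]                     # datetime of the target image
    candidates = []
    for img in other_images:                            # images from the other acquisition devices
        if target_meta["position"] not in img["covered_positions"]:
            continue                                    # this device cannot see the target position
        if abs((img["acquired_at"] - t0).total_seconds()) <= tolerance_s:
            candidates.append(img)
    return candidates
```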
In an embodiment, as shown in fig. 4, after step S13, that is, after extracting the associated image from the other image acquisition devices according to the attribute information of the first image acquisition device and/or the target image and the comparison information, the method specifically further includes the following steps:
S21: if no associated image exists in the other image acquisition devices, performing image segmentation on the non-occluded area in the target face image to obtain a plurality of image segmentation areas.
Image segmentation may use, for example, a threshold-based segmentation method, a region-based segmentation method, an edge-based segmentation method, a segmentation method based on a specific theory, or the like. Each image segmentation area represents part of the feature information of the non-occluded area in the target image.
Specifically, if no associated image exists in the other image acquisition devices after attempting to extract one according to the attribute information of the first image acquisition device and/or the target image and the comparison information, this indicates that the other image acquisition devices failed to capture an image associated with the target face image within the time interval corresponding to the acquisition of the target image. In this case, if emotion recognition is to be performed on the target face image, the emotion recognition result must be obtained from the non-occluded area of the target face image. Therefore, the non-occluded area of the target face image is first segmented to obtain a plurality of image segmentation areas.
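A toy sketch of step S21, assuming the visible part of the face is partitioned on a regular grid and that an occlusion mask (0 for unoccluded pixels) is available; the grid partition and the 0.8 visibility ratio are assumptions, since the disclosure equally allows threshold-, region- or edge-based segmentation.

```python
import numpy as np

def segment_visible_regions(face_gray: np.ndarray, occlusion_mask: np.ndarray,
                            grid=(4, 4), min_visible: float = 0.8):
    """Split the face crop into grid cells and keep only the cells that are mostly unoccluded."""
    h, w = face_gray.shape
    rows, cols = grid
    regions = []
    for r in range(rows):
        for c in range(cols):
            ys, ye = r * h // rows, (r + 1) * h // rows
            xs, xe = c * w // cols, (c + 1) * w // cols
            cell_mask = occlusion_mask[ys:ye, xs:xe]
            if (cell_mask == 0).mean() >= min_visible:   # 0 marks unoccluded pixels
                regions.append(((ys, ye, xs, xe), face_gray[ys:ye, xs:xe]))
    return regions                                       # (bounding box, pixel block) per visible region
```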
S22: and carrying out image segmentation on each sample image in a preset emotion sample image set according to the plurality of image segmentation areas, segmenting each sample image in the emotion sample image set into the plurality of sample segmentation areas, wherein the emotion sample image set comprises the plurality of sample images and emotion marking data corresponding to each sample image.
The preset emotion sample image set is a set of all sample images acquired under the same target scene with the target image. The sample segmentation area is an area corresponding to different characteristic information in each sample image after each sample image is subjected to image segmentation. And the emotion marking data is obtained by marking the emotion recognition result corresponding to each sample image.
Specifically, after image segmentation is performed on a non-occluded area in a target face image to obtain a plurality of image segmentation areas, the same image segmentation technology as that for obtaining the plurality of image segmentation areas is performed on each sample image in a preset emotion sample image set according to the plurality of image segmentation areas, and each sample image in the emotion sample image set is segmented into the plurality of sample segmentation areas. The emotion sample image set comprises a plurality of sample images and emotion marking data corresponding to each sample image. The emotion marking data are emotion category labels for marking the sample image, such as happy, calm, painful, sad and the like.
S23: and performing cluster analysis on each image segmentation area in the target face image and the corresponding sample segmentation area in each sample image to determine a cluster corresponding to each image segmentation area.
The cluster analysis refers to a method for classifying according to the characteristic information of the image segmentation region. The essence of the cluster is a classification unit corresponding to each image segmentation region, and each cluster comprises at least one image segmentation region.
Specifically, after each sample image in a preset emotion sample image set is subjected to image segmentation according to a plurality of image segmentation areas and each sample image in the emotion sample image set is segmented into a plurality of sample segmentation areas, each image segmentation area in a target face image and the corresponding sample segmentation area in each sample image are subjected to cluster analysis, that is, the feature information of each image segmentation area and the feature information of each sample segmentation area are subjected to cluster analysis, and a cluster corresponding to each image segmentation area is determined.
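A minimal sketch of step S23, assuming each segmentation area has already been reduced to a feature vector and that k-means clustering is used (the feature extraction and the choice of four clusters are assumptions; the disclosure does not fix a particular clustering algorithm).

```python
import numpy as np
from sklearn.cluster import KMeans

def labels_in_target_cluster(target_region_feat: np.ndarray,
                             sample_region_feats: np.ndarray,
                             sample_labels: list, n_clusters: int = 4):
    """Cluster the target region together with the corresponding region of every sample image,
    then return the emotion annotations of the samples that share the target region's cluster."""
    X = np.vstack([target_region_feat[None, :], sample_region_feats])
    assignments = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    target_cluster = assignments[0]
    return [lbl for lbl, a in zip(sample_labels, assignments[1:]) if a == target_cluster]
```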
S24: and counting emotion marking data corresponding to each image segmentation area in the clustering cluster corresponding to each image segmentation area, and determining the emotion marking data with the largest quantity in the clustering cluster corresponding to each image segmentation area as reference emotion data.
Specifically, after each image segmentation area in the target face image and the corresponding sample segmentation area in each sample image are subjected to cluster analysis, and a cluster corresponding to each image segmentation area is determined, emotion marking data corresponding to each image segmentation area in the cluster corresponding to each image segmentation area is counted (after counting, for example, a weight value corresponding to the emotion marking data corresponding to each image segmentation area can be displayed), and emotion marking data with the largest number of emotion marking data in the cluster corresponding to each image segmentation area (for example, data with the highest weight value in the display data of the cluster corresponding to each image segmentation area) is determined as basic emotion data.
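Step S24 then reduces to a majority vote inside each cluster; a minimal sketch:

```python
from collections import Counter

def reference_emotion(labels_in_cluster: list) -> str:
    """Most frequent emotion annotation within the cluster of one image segmentation area."""
    return Counter(labels_in_cluster).most_common(1)[0][0]

# For example, reference_emotion(["happy", "happy", "calm"]) returns "happy".
```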
S25: and determining the emotion recognition result of the target face image according to the reference emotion data in the cluster corresponding to each image segmentation area.
Specifically, after counting emotion annotation data corresponding to each image segmentation region in a cluster corresponding to each image segmentation region, determining emotion annotation data with the largest number in the cluster corresponding to each image segmentation region as reference emotion data, and determining an emotion recognition result of a target face image according to the reference emotion data in the cluster corresponding to each image segmentation region, so as to solve the problem that the emotion of the target face image cannot be recognized through a related image related to the target face image when the related image does not exist in other image acquisition devices.
In an embodiment, as shown in fig. 5, step S14, that is, performing emotion recognition on the associated face region to obtain the emotion recognition result of the target face image, specifically includes the following steps:
S141: carrying out identity recognition on the associated image, and determining the identity information of the associated image.
The identity recognition is a process of recognizing identity information of a human face in an image. The identity information is a specific identity corresponding to the face in the associated image.
Specifically, after the associated image is extracted from other image acquisition devices according to the attribute information of the first image acquisition device and/or the target image and the comparison information, the associated image is subjected to identity recognition, and the identity information of the associated image is determined.
The identity recognition mainly comprises image preprocessing, image feature extraction, feature information classification and feature matching recognition.
S142: and determining the voiceprint characteristics of the associated image according to the identity information.
The voiceprint features are feature information of sound corresponding to each individual, and can be obtained by carrying out voiceprint recognition on all individuals in a target scene in advance after voiceprint collection.
Specifically, after the identification of the associated image is performed and the identification information of the associated image is determined, the voiceprint feature of the associated image is determined according to the identification information.
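A minimal sketch of the identity-to-voiceprint lookup described above; the registry contents, key names and vector values are purely illustrative assumptions.

```python
# Hypothetical registry built in advance by collecting a voiceprint for every known
# individual in the target scene.
voiceprint_registry = {
    "person_001": [0.12, -0.53, 0.88],   # pre-enrolled voiceprint embedding (illustrative values)
    "person_002": [0.47, 0.05, -0.61],
}

def voiceprint_for_identity(identity_id: str):
    """Return the pre-collected voiceprint features for the recognized identity, if any."""
    return voiceprint_registry.get(identity_id)
```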
S143: and determining a relevant time interval according to the acquisition time, and extracting voice information from the relevant image acquisition equipment according to the relevant time interval.
The associated time interval refers to a time range corresponding to the other image devices for acquiring the associated images. The voice information is the voice information sent by all individuals when the image acquisition equipment is monitored.
Specifically, after determining the voiceprint feature of the associated image according to the identity information, determining the acquisition time of the target image according to the attribute information of the target image, determining an associated time interval corresponding to the acquisition time, and re-extracting the voice information in the associated time interval from the associated image acquisition device.
S144: and extracting the voice data of the associated image from the voice information according to the voiceprint characteristics of the associated image.
The voice data is a voice segment matched with the voiceprint feature of the associated image in the voice information.
Specifically, after the associated time interval is determined according to the acquisition time and the voice information is extracted from the associated image acquisition device according to that interval, the voice data matching the voiceprint features of the associated image is extracted from the voice information according to those voiceprint features.
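A minimal sketch of step S144, assuming the voice information has already been split into segments with precomputed speaker embeddings; the segment structure and the 0.75 cosine threshold are assumptions made for this example.

```python
import numpy as np

def select_matching_speech(voiceprint: np.ndarray, segments: list, threshold: float = 0.75):
    """Keep the speech segments from the associated time interval whose speaker embeddings
    are close to the voiceprint features of the associated image."""
    matched = []
    for seg in segments:                   # each seg: {"embedding": np.ndarray, "audio": ...}
        emb = seg["embedding"]
        sim = float(np.dot(voiceprint, emb) /
                    (np.linalg.norm(voiceprint) * np.linalg.norm(emb) + 1e-8))
        if sim >= threshold:
            matched.append(seg)
    return matched
```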
S145: and performing emotion recognition according to the voice data and the associated face area to obtain an emotion recognition result of the target face image.
Specifically, after extracting voice data of the associated image from the voice information according to the voiceprint feature of the associated image, performing emotion recognition according to the voice data and the associated face area to obtain an emotion recognition result of the target face image. The emotion recognition result obtained by combining the individual voice segments corresponding to the associated face region and the feature information of the associated face region acquired by the associated image acquisition equipment is higher in accuracy.
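The combination of the speech cue and the facial cue is not specified further by the disclosure; one plausible sketch is a late fusion of per-class scores, where the 0.6/0.4 weighting is an assumption.

```python
def fuse_emotion_scores(face_scores: dict, speech_scores: dict, face_weight: float = 0.6):
    """Late fusion of per-class probabilities from the face and speech recognizers;
    returns the emotion with the highest fused score."""
    emotions = set(face_scores) | set(speech_scores)
    fused = {e: face_weight * face_scores.get(e, 0.0)
                + (1.0 - face_weight) * speech_scores.get(e, 0.0) for e in emotions}
    return max(fused, key=fused.get)

# For example, fuse_emotion_scores({"happy": 0.7, "sad": 0.3}, {"happy": 0.4, "sad": 0.6})
# returns "happy".
```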
In an embodiment, step S25, that is, determining the emotion recognition result of the target face image according to the reference emotion data in the cluster corresponding to each image segmentation area, specifically includes the following steps:
S251: determining the weight value of each image segmentation area according to the area of that image segmentation area.
Specifically, after the most frequent emotion annotation data in the cluster corresponding to each image segmentation area has been determined as the reference emotion data, the weight value of each image segmentation area is determined according to the proportion of the total area that it occupies.
S252: performing calculation according to the weight value of each image segmentation area and the corresponding reference emotion data, and determining the emotion recognition result of the target face image.
Specifically, after the weight value of each image segmentation area is determined according to the area of each image segmentation area, calculation is performed according to the weight value of each image segmentation area and the reference emotion data corresponding to each image segmentation area to obtain the proportion of each reference emotion data in the target face image, and the reference emotion data with the highest proportion in the target face image is determined as the emotion recognition result of the target face image.
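Steps S251 and S252 amount to an area-weighted vote over the reference emotion data of the visible regions; a minimal sketch, assuming each region is given as its pixel area and its reference emotion:

```python
def weighted_emotion_result(regions: list) -> str:
    """Each entry is (area_in_pixels, reference_emotion); a region's weight is its share of the
    total visible area, and the emotion with the largest summed weight is the final result."""
    total = sum(area for area, _ in regions)
    scores = {}
    for area, emotion in regions:
        scores[emotion] = scores.get(emotion, 0.0) + area / total
    return max(scores, key=scores.get)

# For example, weighted_emotion_result([(1200, "happy"), (800, "happy"), (500, "calm")])
# returns "happy".
```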
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
In one embodiment, an emotion recognition apparatus is provided, which corresponds to the emotion recognition method in the above embodiments one to one. As shown in fig. 6, the emotion recognition apparatus includes a target image acquisition module 11, a comparison information extraction module 12, an associated image extraction module 13, and an emotion recognition module 14.
The functional modules are explained in detail as follows:
and the target image acquisition module 11 is configured to acquire a target image of the first image acquisition device and identify a target face image in the target image.
And the comparison information extraction module 12 is configured to extract comparison information from the target face image when the occlusion region exists in the target face image.
And the associated image extracting module 13 is configured to extract an associated image from other image capturing devices according to the attribute information and the comparison information of the first image capturing device and/or the target image, where the associated image is an image including a target human image without an occlusion region.
And the emotion recognition module 14 is configured to determine an associated face region from the associated image, perform emotion recognition on the associated face region, and obtain an emotion recognition result of the target face image, where the associated face region is a face image consistent with the target face image.
Preferably, as shown in fig. 7, the associated image extraction module 13 includes the following units:
an image information determining unit 131 for determining the acquisition time of the target image from the attribute information of the target image and determining the position information of the target image from the attribute information of the first image acquisition device.
An associated image acquisition device determining unit 132, configured to determine an associated image acquisition device according to the position information, where the associated image acquisition device is an image acquisition device capable of capturing the target image.
An associated image extracting unit 133, configured to extract the associated image from the associated image acquisition device according to the acquisition time and the comparison information.
Preferably, as shown in fig. 8, the emotion recognition apparatus further includes:
the first image segmentation module 21 is configured to, when the related image does not exist in other image acquisition devices, perform image segmentation on a non-occluded area in the target face image to obtain a plurality of image segmentation areas.
The second image segmentation module 22 is configured to perform image segmentation on each sample image in a preset emotion sample image set according to the plurality of image segmentation areas, and segment each sample image in the emotion sample image set into the plurality of sample segmentation areas, where the emotion sample image set includes the plurality of sample images and emotion annotation data corresponding to each sample image.
And the cluster analysis module 23 is configured to perform cluster analysis on each image segmentation area in the target face image and the corresponding sample segmentation area in each sample image, and determine a cluster corresponding to each image segmentation area.
And a reference emotion data determination module 24, configured to count emotion annotation data corresponding to each image partition area in the cluster corresponding to each image partition area, and determine, as reference emotion data, emotion annotation data with the largest quantity in the cluster corresponding to each image partition area.
And the emotion recognition result determining module 25 is configured to determine an emotion recognition result of the target face image according to the reference emotion data in the cluster corresponding to each image segmentation region.
Preferably, as shown in fig. 9, the emotion recognition module 14 includes the following units:
the identity information determining unit 141 performs identity recognition on the related image to determine the identity information of the related image.
And a voiceprint feature determining unit 142, configured to determine a voiceprint feature of the associated image according to the identity information.
And the voice information extraction unit 143 is configured to determine an associated time interval according to the acquisition time, and extract voice information from the associated image acquisition device according to the associated time interval.
And a voice data extracting unit 144, configured to extract voice data of the associated image from the voice information according to the voiceprint feature of the associated image.
And the emotion recognition unit 145 is used for performing emotion recognition according to the voice data and the associated face area to obtain an emotion recognition result of the target face image.
Preferably, the emotion recognition result determination module 25 includes the following units:
a segmentation region weight determination unit 251 for determining a weight value of each image segmentation region according to an area of each image segmentation region.
The emotion recognition result calculation unit 252 performs calculation according to the weight value of each image segmentation region and the corresponding reference emotion data, and determines an emotion recognition result of the target face image.
For the specific definition of the emotion recognition device, reference may be made to the above definition of the emotion recognition method, which is not described herein again. The modules in the emotion recognition device can be wholly or partially implemented by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 10. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing data used in the emotion recognition method. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of emotion recognition.
In one embodiment, a computer device is provided, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the emotion recognition method when executing the computer program.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the above-described emotion recognition method.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.