
CN111967311B - Emotion recognition method and device, computer equipment and storage medium - Google Patents

Emotion recognition method and device, computer equipment and storage medium

Info

Publication number
CN111967311B
Authority
CN
China
Prior art keywords
image
target
emotion
area
emotion recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010641010.9A
Other languages
Chinese (zh)
Other versions
CN111967311A (en)
Inventor
唐宇
骆少明
郭琪伟
侯超钧
庄家俊
苗爱敏
褚璇
钟震宇
吴亮生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongkai University of Agriculture and Engineering
Guangdong Polytechnic Normal University
Institute of Intelligent Manufacturing of Guangdong Academy of Sciences
Original Assignee
Zhongkai University of Agriculture and Engineering
Guangdong Polytechnic Normal University
Institute of Intelligent Manufacturing of Guangdong Academy of Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongkai University of Agriculture and Engineering, Guangdong Polytechnic Normal University, Institute of Intelligent Manufacturing of Guangdong Academy of Sciences filed Critical Zhongkai University of Agriculture and Engineering
Priority to CN202010641010.9A priority Critical patent/CN111967311B/en
Publication of CN111967311A publication Critical patent/CN111967311A/en
Application granted granted Critical
Publication of CN111967311B publication Critical patent/CN111967311B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/23 - Clustering techniques
    • G06F18/232 - Non-hierarchical techniques
    • G06F18/2321 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/136 - Segmentation; Edge detection involving thresholding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G06T2207/30201 - Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Child & Adolescent Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of intelligent recognition and discloses an emotion recognition method and apparatus, a computer device, and a storage medium. A target image is acquired from a first image acquisition device and a target face image is identified in the target image; if an occlusion area exists in the target face image, comparison information is extracted from the target face image; an associated image is extracted from other image acquisition devices according to the attribute information of the first image acquisition device and/or the target image together with the comparison information; an associated face area is determined from the associated image, and emotion recognition is performed on the associated face area to obtain the emotion recognition result of the target face image. The invention solves the problem that emotion recognition cannot be performed directly on the target image when part of its face image is occluded, improves the accuracy and applicability of emotion recognition on the target face image in the target scene, and improves the robustness of emotion recognition.

Description

Emotion recognition method and device, computer equipment and storage medium
Technical Field
The invention relates to the field of intelligent recognition, in particular to an emotion recognition method and device, computer equipment and a storage medium.
Background
With the continuous development of computer technology, and of artificial intelligence in particular, intelligent monitoring and early-warning technology is increasingly applied to daily life. Because of declining physical function and reduced mobility and reaction capacity, the elderly are vulnerable to sudden injury caused by their own illnesses or by external factors. In places such as homes and nursing homes, the behaviour of the elderly therefore needs to be monitored intelligently so that emergencies can be handled promptly and their safety better ensured.
Existing approaches monitor the behaviour of the elderly with devices such as cameras and issue a prompt or an alarm when abnormal behaviour is detected. However, the current monitoring schemes have a number of disadvantages.
Disclosure of Invention
The embodiment of the invention provides an emotion recognition method, an emotion recognition device, computer equipment and a storage medium, which are used for improving the accuracy and the applicability of emotion recognition.
A method of emotion recognition, comprising:
acquiring a target image of first image acquisition equipment, and identifying a target face image in the target image;
if an occlusion area exists in the target face image, extracting comparison information from the target face image;
determining an image associated with the target face image from other equipment according to the attribute information of the first image acquisition equipment and/or the target image; comparing the characteristic information of the image associated with the determined target face image with the comparison information, and extracting associated images from other image acquisition equipment; the related image is an image comprising a target face image without an occlusion area;
if the related image does not exist in other image acquisition equipment, performing image segmentation on the non-occluded area in the target face image to obtain a plurality of image segmentation areas;
performing image segmentation on each sample image in a preset emotion sample image set according to the plurality of image segmentation areas, segmenting each sample image in the emotion sample image set into the plurality of sample segmentation areas, wherein the emotion sample image set comprises the plurality of sample images and emotion marking data corresponding to each sample image;
performing cluster analysis on each image segmentation area in the target face image and the corresponding sample segmentation area in each sample image to determine a cluster corresponding to each image segmentation area;
counting emotion marking data corresponding to each image segmentation area in a cluster corresponding to each image segmentation area, and determining the emotion marking data with the largest quantity in the cluster corresponding to each image segmentation area as reference emotion data;
determining a weight value of each image segmentation region according to the area of each image segmentation region;
calculating according to the weight value of each image segmentation area and the corresponding reference emotion data, and determining an emotion recognition result of the target face image;
if the associated image exists in other image acquisition equipment, determining an associated face area from the associated image, and performing emotion recognition on the associated face area to obtain an emotion recognition result of the target face image, wherein the associated face area is a face image consistent with the target face image.
An emotion recognition apparatus comprising:
the target image acquisition module is used for acquiring a target image of the first image acquisition equipment and identifying a target face image in the target image;
the comparison information extraction module is used for extracting comparison information from the target face image when an occlusion area exists in the target face image;
the associated image extraction module is used for determining an image associated with the target face image from other equipment according to the attribute information of the first image acquisition equipment and/or the target image; comparing the characteristic information of the image associated with the determined target face image with the comparison information, and extracting an associated image from other image acquisition equipment, wherein the associated image is an image comprising the target face image without an occlusion area;
the emotion recognition module is used for determining a related face area from the related image if the related image exists in other image acquisition equipment, and performing emotion recognition on the related face area to obtain an emotion recognition result of the target face image, wherein the related face area is a face image consistent with the target face image;
the associated image extraction module comprises:
the first image segmentation module is used for carrying out image segmentation on the non-occluded area in the target face image to obtain a plurality of image segmentation areas when the associated image does not exist in other image acquisition equipment;
the second image segmentation module is used for carrying out image segmentation on each sample image in a preset emotion sample image set according to the plurality of image segmentation areas, segmenting each sample image in the emotion sample image set into the plurality of sample segmentation areas, wherein the emotion sample image set comprises the plurality of sample images and emotion marking data corresponding to each sample image;
the cluster analysis module is used for carrying out cluster analysis on each image segmentation area in the target face image and the corresponding sample segmentation area in each sample image to determine a cluster corresponding to each image segmentation area;
the reference emotion data determination module is used for counting emotion marking data corresponding to each image segmentation area in clustering clusters corresponding to each image segmentation area and determining the emotion marking data with the largest quantity in the clustering clusters corresponding to each image segmentation area as reference emotion data;
the emotion recognition result determining module is used for determining the weight value of each image segmentation area according to the area of each image segmentation area; and calculating according to the weight value of each image segmentation area and the corresponding reference emotion data, and determining the emotion recognition result of the target face image.
A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the emotion recognition method when executing the computer program.
A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned emotion recognition method.
In the emotion recognition method, apparatus, computer device and storage medium described above, a target image of the first image acquisition device is acquired and a target face image is recognized in the target image; if an occlusion area exists in the target face image, comparison information is extracted from the target face image; an associated image is extracted from other image acquisition devices according to the attribute information of the first image acquisition device and/or the target image and the comparison information, the associated image being an image that contains the target face image without an occlusion area; and an associated face area is determined from the associated image and emotion recognition is performed on it to obtain the emotion recognition result of the target face image, the associated face area being a face image consistent with the target face image. This solves the problem that emotion recognition cannot be carried out directly on the target image when its face image is partially occluded, improves the accuracy and applicability of emotion recognition on the target face image in the target scene, and improves the robustness of emotion recognition.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
FIG. 1 is a diagram illustrating an application environment of a method for emotion recognition according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of emotion recognition in an embodiment of the present invention;
FIG. 3 is a flowchart of step S13 of the emotion recognition method in an embodiment of the present invention;
FIG. 4 is another flow chart of a method of emotion recognition in an embodiment of the present invention;
FIG. 5 is a flowchart of step S14 of the emotion recognition method in an embodiment of the present invention;
FIG. 6 is a functional block diagram of an emotion recognition apparatus in an embodiment of the present invention;
FIG. 7 is a schematic block diagram of an associated image extraction module in the emotion recognition apparatus according to an embodiment of the present invention;
FIG. 8 is another functional block diagram of an emotion recognition apparatus in an embodiment of the present invention;
FIG. 9 is a schematic block diagram of an emotion recognition module in the emotion recognition apparatus according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a computer device according to an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The emotion recognition method provided by the embodiment of the invention can be applied to a monitoring platform. The monitoring platform may include a plurality of image capture devices disposed in a target scene. The target scene can be a family, an old home or other public service places. A plurality of image capturing devices are disposed in the target scene. The position and the angle of the image acquisition equipment can be reasonably set in the target scene so as to comprehensively cover the target scene, the image of each position in the target scene can be acquired, and better monitoring can be carried out. The position setting of the specific image acquisition device can be arranged or adjusted according to different target scenes, and is not described herein again.
The emotion recognition method provided by the embodiment of the invention can be applied to the application environment shown in fig. 1. Specifically, the emotion recognition method is applied to an emotion recognition system, which comprises a client and a server as shown in fig. 1, wherein the client and the server are in communication through a network and are used for improving the accuracy and applicability of emotion recognition. The client may be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers, image capture devices, smart bands, portable wearable devices, and the like. The server can be implemented by an independent server or a server cluster composed of a plurality of servers.
In an embodiment, as shown in fig. 2, an emotion recognition method is provided, which is described by taking the example that the method is applied to the monitoring platform in fig. 1, and includes the following steps:
S11: Acquire a target image of the first image acquisition device and identify a target face image in the target image.
The first image capturing device is configured to obtain digitized image information, and may be, for example, a video camera, a scanner, or the like. The target image is an image corresponding to the monitoring requirements of the user. The target face image is an image in which a face is included in the target image.
Specifically, all images acquired by the first image acquisition device are acquired, a target image is determined from all the images, and an image corresponding to a target face in the target image, namely a target face image, is identified.
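For illustration only (the patent does not tie S11 to any particular detector), a minimal sketch of loading a captured frame and identifying candidate face regions could look like the following; the file path, detector choice and parameters are assumptions introduced here.

```python
import cv2

def detect_target_faces(image_path: str):
    """Return cropped candidate face regions from one captured frame (illustrative sketch)."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # A generic pretrained Haar cascade stands in for the unspecified face recognizer.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Each (x, y, w, h) box crops one candidate target face image.
    return [image[y:y + h, x:x + w] for (x, y, w, h) in boxes]
```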
S12: and if the shielding area exists in the target face image, extracting comparison information from the target face image.
The shielded area is an area which is shielded by an object and cannot perform emotion recognition on the image in the target face image. The essence of the comparison information is face feature information of the target face image, and the comparison information is used for comparing the target image with images on other image acquisition equipment so as to determine the image which is in an association relationship with the target image. And the comparison information is used for matching in other images subsequently, and searching a target face consistent with the target face image.
Specifically, after a target face image in a target image is recognized, integrity judgment is performed on the target face image, if a partial region exists in the target face image and is shielded by an object, and because the target face image has a shielded region, emotion recognition cannot be performed on the target face image, therefore, feature information of a target face, that is, comparison information needs to be extracted from the target face image for comparison with images on other image acquisition devices, so as to determine an image having an association relationship with the target image.
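As a non-authoritative sketch of S12, the occlusion check and the comparison information could be approximated as below; the skin-ratio heuristic, histogram features and threshold are illustrative assumptions rather than the disclosed method.

```python
import cv2
import numpy as np

def has_occlusion(face_bgr: np.ndarray, skin_ratio_threshold: float = 0.6) -> bool:
    """Crude occlusion test: a low share of skin-toned pixels suggests the face is partly covered."""
    ycrcb = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)    # commonly used YCrCb skin range (assumed)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    skin = cv2.inRange(ycrcb, lower, upper)
    return (skin > 0).mean() < skin_ratio_threshold

def comparison_features(face_bgr: np.ndarray) -> np.ndarray:
    """Stand-in comparison information: an L2-normalised grayscale histogram of the face crop."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [64], [0, 256]).flatten()
    return hist / (np.linalg.norm(hist) + 1e-9)
```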
S13: and extracting a related image from other image acquisition equipment according to the attribute information of the first image acquisition equipment and/or the target image and the comparison information, wherein the related image is an image including the target human image without the occlusion area.
The attribute information of the target image is information associated with the target image, and may include, for example, an acquisition time of the target image, pixel information of the target image, and the like. The other image capturing devices have the same function as the first image capturing device, but the images captured by the other image capturing devices may be different from the first image capturing device.
Specifically, after the comparison information is extracted from the target face image, an image associated with the target face image is determined from other devices according to the first image acquisition device and/or the attribute information of the target image, and further, the determined feature information of the image associated with the target face image is compared with the comparison information to extract an associated image from other devices.
In one embodiment, the emotion recognition method is applied to a monitoring platform, and the monitoring platform comprises a plurality of image acquisition devices arranged in a target scene. Furthermore, all image acquisition devices in the monitoring platform are aimed at acquiring the same target scene. It can be understood that all the image capturing devices may be set as cameras, etc. with uniform specification attributes, or may be set as image capturing devices with different specifications, but it should be ensured that the difference between the capturing time intervals of all the image capturing devices is small, so as to ensure that when a certain image capturing device is affected by a foreign object environment, at least one other image capturing device can provide captured images within the same time range.
S14: and determining a related face area from the related image, and performing emotion recognition on the related face area to obtain an emotion recognition result of the target face image, wherein the related face area is the face image consistent with the target face image.
Herein, emotion recognition refers to a process of automatically distinguishing an emotional state of an individual by physiological or non-physiological signals of the individual. The emotion recognition result corresponds to the face emotion in the target face image.
Specifically, after extracting the associated image from other image acquisition devices according to the attribute information and the comparison information of the first image acquisition device and/or the target image, determining a region corresponding to a face image consistent with the target face image from the associated image as an associated face region, and performing emotion recognition on the associated face region to obtain an emotion recognition result of the target face image.
Furthermore, the associated face region and the target face image are acquired by two different image acquisition devices, but both devices capture images of the same face at the same time. Because the target face image contains an occlusion region, an emotion recognition result cannot be obtained by performing emotion recognition on it directly; the result of performing emotion recognition on the associated face region is therefore taken as the emotion recognition result of the target face image in this embodiment.
In this embodiment, when an occlusion region exists in the target face image, extracting an associated image corresponding to the target face image from other image acquisition devices according to the comparison information of the target face image, the attribute information of the target image, and the attribute information of the first image acquisition device, where the associated image does not have the occlusion region, and performing emotion recognition on the associated face region of the associated image to obtain an emotion recognition result of the target face image. According to the method, the problem that the emotion recognition cannot be directly carried out on the target image when the face image of the target image is partially shielded is solved, the accuracy and the applicability of the emotion recognition on the target face image based on the target scene are improved, and the robustness of the emotion recognition is improved.
In an embodiment, as shown in fig. 3, in step S13, extracting a related image from other image capturing devices according to the attribute information and the comparison information of the first image capturing device and/or the target image specifically includes the following steps:
S131: The acquisition time of the target image is determined from the attribute information of the target image, and the position information of the target image is determined from the attribute information of the first image acquisition device.
The acquisition time is the time when the first image acquisition equipment acquires the target image. The position information is a position corresponding to the target scene where the first image acquisition device is placed when acquiring the target image.
Specifically, after the comparison information is extracted from the target face image, the acquisition time of the target image is determined from the attribute information of the target image, and the position information of the target image is determined from the attribute information of the first image acquisition device. Namely, the specific time corresponding to the target image when being collected and the position of the corresponding image collection device in the target scene when the target image is collected are determined.
S132: and determining the associated image acquisition equipment according to the position information, wherein the associated image acquisition equipment can acquire the target image.
Specifically, after the position information of the target image is determined from the attribute information of the first image capturing device, an image capturing device that can capture the target image is determined from other image capturing devices than the first image capturing device based on the position information, and the determined image capturing device is recorded as an associated image capturing device.
S133: and extracting the associated image from the associated image acquisition equipment according to the acquisition time and the comparison information.
Specifically, after the acquisition time of the target image is determined from the attribute information of the target image and the associated image acquisition device is determined from the position information, an image acquired at a time corresponding to the acquisition time is determined from the associated image acquisition device and an image associated with the comparison information is determined from the image as an associated image to extract the associated image from the associated image acquisition device.
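Steps S131 to S133 can be sketched, purely as an illustration, as filtering frames from the co-located cameras by a time window around the acquisition time and ranking them against the comparison information; the CapturedFrame record, window size and similarity threshold below are assumptions introduced for the example.

```python
from dataclasses import dataclass
from typing import List, Optional
import numpy as np

@dataclass
class CapturedFrame:
    device_id: str
    timestamp: float      # capture time in seconds since the epoch
    features: np.ndarray  # comparison features of the face detected in this frame

def find_associated_frame(target_time: float,
                          target_features: np.ndarray,
                          candidate_frames: List[CapturedFrame],
                          time_window: float = 1.0,
                          similarity_threshold: float = 0.8) -> Optional[CapturedFrame]:
    """Pick the frame from associated devices that is close in time and best matches the face."""
    best, best_sim = None, similarity_threshold
    for frame in candidate_frames:
        if abs(frame.timestamp - target_time) > time_window:
            continue  # outside the acquisition-time window
        sim = float(np.dot(frame.features, target_features))  # cosine similarity on unit vectors
        if sim > best_sim:
            best, best_sim = frame, sim
    return best  # None means no associated image exists on the other devices
```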
In an embodiment, as shown in fig. 4, after step S13, that is, after extracting the associated image from the other image capturing devices according to the attribute information of the first image capturing device and/or the target image and the comparison information, the method specifically includes the following steps:
S21: If the associated image does not exist in the other image acquisition devices, perform image segmentation on the non-occluded area in the target face image to obtain a plurality of image segmentation areas.
The image segmentation includes, for example, a threshold-based segmentation method, a region-based segmentation method, an edge-based segmentation method, a segmentation method based on a specific theory, and the like. Each image segmentation area represents part of characteristic information of the non-occluded area in the target image.
Specifically, if no associated image exists in the other image acquisition devices after attempting to extract one according to the attribute information of the first image acquisition device and/or the target image and the comparison information, this indicates that the other devices failed to capture an image associated with the target face image within the time interval in which the target image was acquired. In that case, emotion recognition of the target face image has to rely on its non-occluded area, and the emotion recognition result is obtained from that area. Therefore, the non-occluded area in the target face image is first segmented into a plurality of image segmentation areas.
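The segmentation method itself is left open (threshold-, region- or edge-based, etc.). Purely as an example, the sketch below splits the visible face into grid blocks and keeps the blocks that are mostly unoccluded; the grid size and visibility threshold are assumed values.

```python
from typing import List, Tuple
import numpy as np

def segment_unoccluded(face_gray: np.ndarray,
                       occlusion_mask: np.ndarray,
                       grid: Tuple[int, int] = (4, 4),
                       min_visible: float = 0.5) -> List[np.ndarray]:
    """Split the face into grid blocks and return the blocks that are mostly visible.
    occlusion_mask is assumed to be non-zero where the face is covered."""
    h, w = face_gray.shape
    rows, cols = grid
    regions = []
    for r in range(rows):
        for c in range(cols):
            ys = slice(r * h // rows, (r + 1) * h // rows)
            xs = slice(c * w // cols, (c + 1) * w // cols)
            if (occlusion_mask[ys, xs] == 0).mean() >= min_visible:
                regions.append(face_gray[ys, xs])
    return regions
```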
S22: and carrying out image segmentation on each sample image in a preset emotion sample image set according to the plurality of image segmentation areas, segmenting each sample image in the emotion sample image set into the plurality of sample segmentation areas, wherein the emotion sample image set comprises the plurality of sample images and emotion marking data corresponding to each sample image.
The preset emotion sample image set is a set of all sample images acquired under the same target scene with the target image. The sample segmentation area is an area corresponding to different characteristic information in each sample image after each sample image is subjected to image segmentation. And the emotion marking data is obtained by marking the emotion recognition result corresponding to each sample image.
Specifically, after image segmentation is performed on a non-occluded area in a target face image to obtain a plurality of image segmentation areas, the same image segmentation technology as that for obtaining the plurality of image segmentation areas is performed on each sample image in a preset emotion sample image set according to the plurality of image segmentation areas, and each sample image in the emotion sample image set is segmented into the plurality of sample segmentation areas. The emotion sample image set comprises a plurality of sample images and emotion marking data corresponding to each sample image. The emotion marking data are emotion category labels for marking the sample image, such as happy, calm, painful, sad and the like.
S23: and performing cluster analysis on each image segmentation area in the target face image and the corresponding sample segmentation area in each sample image to determine a cluster corresponding to each image segmentation area.
The cluster analysis refers to a method for classifying according to the characteristic information of the image segmentation region. The essence of the cluster is a classification unit corresponding to each image segmentation region, and each cluster comprises at least one image segmentation region.
Specifically, after each sample image in a preset emotion sample image set is subjected to image segmentation according to a plurality of image segmentation areas and each sample image in the emotion sample image set is segmented into a plurality of sample segmentation areas, each image segmentation area in a target face image and the corresponding sample segmentation area in each sample image are subjected to cluster analysis, that is, the feature information of each image segmentation area and the feature information of each sample segmentation area are subjected to cluster analysis, and a cluster corresponding to each image segmentation area is determined.
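One possible, non-authoritative realisation of this cluster analysis is to cluster each target region's feature vector jointly with the corresponding sample regions, for example with k-means; the feature representation and the number of clusters are assumptions of this sketch.

```python
import numpy as np
from sklearn.cluster import KMeans

def assign_region_cluster(target_region_feat: np.ndarray,
                          sample_region_feats: np.ndarray,
                          n_clusters: int = 5):
    """Cluster one target region together with its corresponding sample regions
    and report which cluster the target region fell into (illustrative sketch)."""
    feats = np.vstack([target_region_feat[None, :], sample_region_feats])
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(feats)
    return labels[0], labels[1:]  # target region's cluster, per-sample cluster labels
```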
S24: and counting emotion marking data corresponding to each image segmentation area in the clustering cluster corresponding to each image segmentation area, and determining the emotion marking data with the largest quantity in the clustering cluster corresponding to each image segmentation area as reference emotion data.
Specifically, after cluster analysis has been performed on each image segmentation area in the target face image together with the corresponding sample segmentation areas in each sample image, and the cluster corresponding to each image segmentation area has been determined, the emotion annotation data corresponding to each image segmentation area within its cluster are counted (after counting, for example, a weight value corresponding to each kind of emotion annotation data can be displayed), and the most frequent emotion annotation data in the cluster corresponding to each image segmentation area (for example, the data with the highest weight value among the displayed data of that cluster) are determined as the reference emotion data.
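A short sketch of this majority count, continuing the assumptions of the clustering example above:

```python
from collections import Counter

def reference_emotion(sample_cluster_labels, sample_emotions, target_cluster):
    """Most frequent emotion label among the sample regions that share the target region's cluster."""
    in_cluster = [emo for lab, emo in zip(sample_cluster_labels, sample_emotions)
                  if lab == target_cluster]
    return Counter(in_cluster).most_common(1)[0][0] if in_cluster else None
```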
S25: and determining the emotion recognition result of the target face image according to the reference emotion data in the cluster corresponding to each image segmentation area.
Specifically, after counting emotion annotation data corresponding to each image segmentation region in a cluster corresponding to each image segmentation region, determining emotion annotation data with the largest number in the cluster corresponding to each image segmentation region as reference emotion data, and determining an emotion recognition result of a target face image according to the reference emotion data in the cluster corresponding to each image segmentation region, so as to solve the problem that the emotion of the target face image cannot be recognized through a related image related to the target face image when the related image does not exist in other image acquisition devices.
In an embodiment, as shown in fig. 5, in step S14, that is, performing emotion recognition on the associated face region to obtain an emotion recognition result of the target face image, the method specifically includes the following steps:
S141: Perform identity recognition on the associated image and determine the identity information of the associated image.
The identity recognition is a process of recognizing identity information of a human face in an image. The identity information is a specific identity corresponding to the face in the associated image.
Specifically, after the associated image is extracted from other image acquisition devices according to the attribute information of the first image acquisition device and/or the target image and the comparison information, the associated image is subjected to identity recognition, and the identity information of the associated image is determined.
The identity recognition mainly comprises image preprocessing, image feature extraction, feature information classification and feature matching recognition.
S142: and determining the voiceprint characteristics of the associated image according to the identity information.
The voiceprint features are feature information of sound corresponding to each individual, and can be obtained by carrying out voiceprint recognition on all individuals in a target scene in advance after voiceprint collection.
Specifically, after the identification of the associated image is performed and the identification information of the associated image is determined, the voiceprint feature of the associated image is determined according to the identification information.
S143: and determining a relevant time interval according to the acquisition time, and extracting voice information from the relevant image acquisition equipment according to the relevant time interval.
The associated time interval refers to a time range corresponding to the other image devices for acquiring the associated images. The voice information is the voice information sent by all individuals when the image acquisition equipment is monitored.
Specifically, after determining the voiceprint feature of the associated image according to the identity information, determining the acquisition time of the target image according to the attribute information of the target image, determining an associated time interval corresponding to the acquisition time, and re-extracting the voice information in the associated time interval from the associated image acquisition device.
S144: and extracting the voice data of the associated image from the voice information according to the voiceprint characteristics of the associated image.
The voice data is a voice segment matched with the voiceprint feature of the associated image in the voice information.
Specifically, after determining a correlation time interval according to the acquisition time and extracting voice information from the correlation image acquisition device according to the correlation time interval, voice data matching the voiceprint feature of the correlation image is extracted from the voice information according to the voiceprint feature of the correlation image.
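For illustration only, matching speech segments to the registered voiceprint could be sketched as a cosine-similarity comparison between speaker embeddings; how those embeddings are computed is not specified in the patent and is assumed here.

```python
import numpy as np

def match_voice_segments(segments, segment_embeddings, target_voiceprint, threshold=0.75):
    """Keep the speech segments whose speaker embedding is closest to the registered voiceprint.
    The embeddings and the similarity threshold are assumed inputs (illustrative sketch)."""
    matched = []
    for segment, embedding in zip(segments, segment_embeddings):
        sim = float(np.dot(embedding, target_voiceprint) /
                    (np.linalg.norm(embedding) * np.linalg.norm(target_voiceprint) + 1e-9))
        if sim >= threshold:
            matched.append(segment)
    return matched
```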
S145: and performing emotion recognition according to the voice data and the associated face area to obtain an emotion recognition result of the target face image.
Specifically, after extracting voice data of the associated image from the voice information according to the voiceprint feature of the associated image, performing emotion recognition according to the voice data and the associated face area to obtain an emotion recognition result of the target face image. The emotion recognition result obtained by combining the individual voice segments corresponding to the associated face region and the feature information of the associated face region acquired by the associated image acquisition equipment is higher in accuracy.
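A minimal sketch of combining the two modalities, assuming each recognizer already returns per-emotion scores; the 0.6/0.4 weighting is an illustrative choice, not taken from the patent.

```python
def fuse_emotion_scores(face_scores: dict, voice_scores: dict, face_weight: float = 0.6) -> str:
    """Late fusion of per-emotion scores from the associated face area and the matched voice data."""
    emotions = set(face_scores) | set(voice_scores)
    fused = {e: face_weight * face_scores.get(e, 0.0)
                + (1.0 - face_weight) * voice_scores.get(e, 0.0)
             for e in emotions}
    return max(fused, key=fused.get)  # emotion with the highest combined score
```

For example, fuse_emotion_scores({"happy": 0.7, "sad": 0.3}, {"happy": 0.4, "sad": 0.6}) returns "happy", because the face modality dominates under the assumed weighting.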
In an embodiment, in step S25, that is, determining the emotion recognition result of the target face image according to the reference emotion data in the cluster corresponding to each image segmentation region specifically includes the following steps:
S251: Determine the weight value of each image segmentation area according to the area of each image segmentation area.
Specifically, after the emotion annotation data corresponding to each image segmentation area in its cluster have been counted and the most frequent emotion annotation data in that cluster have been determined as the reference emotion data, the weight value of each image segmentation area is determined according to the area ratio of that image segmentation area.
S252: and calculating according to the weight value of each image segmentation area and the corresponding reference emotion data, and determining the emotion recognition result of the target face image.
Specifically, after the weight value of each image segmentation area is determined according to the area of each image segmentation area, calculation is performed according to the weight value of each image segmentation area and the reference emotion data corresponding to each image segmentation area to obtain the proportion of each reference emotion data in the target face image, and the reference emotion data with the highest proportion in the target face image is determined as the emotion recognition result of the target face image.
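Steps S251 to S252 can be illustrated with the sketch below, in which each region's pixel count serves as its area and the reference emotion data are aggregated by weight; this is an assumption-laden example rather than the claimed implementation.

```python
from collections import defaultdict
from typing import List
import numpy as np

def weighted_emotion(regions: List[np.ndarray], reference_emotions: List[str]) -> str:
    """Weight each region's reference emotion by the region's share of the visible face area
    and return the emotion with the largest total weight (illustrative sketch)."""
    areas = [float(region.size) for region in regions]  # pixel counts as region areas
    total = sum(areas)
    scores = defaultdict(float)
    for area, emotion in zip(areas, reference_emotions):
        scores[emotion] += area / total
    return max(scores, key=scores.get)
```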
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
In one embodiment, an emotion recognition apparatus is provided, which corresponds to the emotion recognition method in the above embodiments one to one. As shown in fig. 6, the emotion recognition apparatus includes a target image acquisition module 11, a comparison information extraction module 12, an associated image extraction module 13, and an emotion recognition module 14.
The functional modules are explained in detail as follows:
and the target image acquisition module 11 is configured to acquire a target image of the first image acquisition device and identify a target face image in the target image.
And the comparison information extraction module 12 is configured to extract comparison information from the target face image when the occlusion region exists in the target face image.
The associated image extraction module 13 is configured to extract an associated image from other image capture devices according to the attribute information of the first image capture device and/or the target image and the comparison information, where the associated image is an image including the target face image without an occlusion region.
And the emotion recognition module 14 is configured to determine an associated face region from the associated image, perform emotion recognition on the associated face region, and obtain an emotion recognition result of the target face image, where the associated face region is a face image consistent with the target face image.
Preferably, as shown in fig. 7, the associated image extraction module 13 includes the following units:
an image information determining unit 131 for determining the acquisition time of the target image from the attribute information of the target image and determining the position information of the target image from the attribute information of the first image acquisition device.
And an associated image capturing device determining unit 132, configured to determine an associated image capturing device according to the position information, where the associated image capturing device is an image capturing device capable of capturing the target image.
And an associated image extracting unit 133, configured to extract an associated image from the associated image capturing device according to the capturing time and the comparison information.
Preferably, as shown in fig. 8, the emotion recognition apparatus further includes:
the first image segmentation module 21 is configured to, when the related image does not exist in other image acquisition devices, perform image segmentation on a non-occluded area in the target face image to obtain a plurality of image segmentation areas.
The second image segmentation module 22 is configured to perform image segmentation on each sample image in a preset emotion sample image set according to the plurality of image segmentation areas, and segment each sample image in the emotion sample image set into the plurality of sample segmentation areas, where the emotion sample image set includes the plurality of sample images and emotion annotation data corresponding to each sample image.
And the cluster analysis module 23 is configured to perform cluster analysis on each image segmentation area in the target face image and the corresponding sample segmentation area in each sample image, and determine a cluster corresponding to each image segmentation area.
And a reference emotion data determination module 24, configured to count emotion annotation data corresponding to each image partition area in the cluster corresponding to each image partition area, and determine, as reference emotion data, emotion annotation data with the largest quantity in the cluster corresponding to each image partition area.
And the emotion recognition result determining module 25 is configured to determine an emotion recognition result of the target face image according to the reference emotion data in the cluster corresponding to each image segmentation region.
Preferably, as shown in fig. 9, the emotion recognition module 14 includes the following units:
the identity information determining unit 141 performs identity recognition on the related image to determine the identity information of the related image.
And a voiceprint feature determining unit 142, configured to determine a voiceprint feature of the associated image according to the identity information.
And the voice information extraction unit 143 is configured to determine an associated time interval according to the acquisition time, and extract voice information from the associated image acquisition device according to the associated time interval.
And a voice data extracting unit 144, configured to extract voice data of the associated image from the voice information according to the voiceprint feature of the associated image.
And the emotion recognition unit 145 is used for performing emotion recognition according to the voice data and the associated face area to obtain an emotion recognition result of the target face image.
Preferably, the emotion recognition result determination module 25 includes the following units:
a segmentation region weight determination unit 251 for determining a weight value of each image segmentation region according to an area of each image segmentation region.
The emotion recognition result calculation unit 252 performs calculation according to the weight value of each image segmentation region and the corresponding reference emotion data, and determines an emotion recognition result of the target face image.
For the specific definition of the emotion recognition device, reference may be made to the above definition of the emotion recognition method, which is not described herein again. The modules in the emotion recognition device can be wholly or partially implemented by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 10. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing data used in the emotion recognition method. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of emotion recognition.
In one embodiment, a computer device is provided, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the emotion recognition method when executing the computer program.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the above-described emotion recognition method.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (7)

1. The emotion recognition method is applied to a monitoring platform, the monitoring platform comprises a plurality of image acquisition devices arranged in a target scene, and the emotion recognition method comprises the following steps:
acquiring a target image of first image acquisition equipment, and identifying a target face image in the target image;
if an occlusion area exists in the target face image, extracting comparison information from the target face image;
determining an image associated with the target face image from other equipment according to the attribute information of the first image acquisition equipment and/or the target image; comparing the characteristic information of the image associated with the determined target face image with the comparison information, and extracting associated images from other image acquisition equipment; the related image is an image comprising a target face image without an occlusion area;
if the related image does not exist in other image acquisition equipment, performing image segmentation on the non-occluded area in the target face image to obtain a plurality of image segmentation areas;
performing image segmentation on each sample image in a preset emotion sample image set according to the plurality of image segmentation areas, segmenting each sample image in the emotion sample image set into the plurality of sample segmentation areas, wherein the emotion sample image set comprises the plurality of sample images and emotion marking data corresponding to each sample image;
performing cluster analysis on each image segmentation area in the target face image and the corresponding sample segmentation area in each sample image to determine a cluster corresponding to each image segmentation area;
counting emotion marking data corresponding to each image segmentation area in a cluster corresponding to each image segmentation area, and determining the emotion marking data with the largest quantity in the cluster corresponding to each image segmentation area as reference emotion data;
determining a weight value of each image segmentation region according to the area of each image segmentation region;
calculating according to the weight value of each image segmentation area and the corresponding reference emotion data, and determining an emotion recognition result of the target face image;
if the associated image exists in other image acquisition equipment, determining an associated face area from the associated image, and performing emotion recognition on the associated face area to obtain an emotion recognition result of the target face image, wherein the associated face area is a face image consistent with the target face image.
2. The emotion recognition method according to claim 1, wherein the image associated with the target face image is determined from other devices based on attribute information of the first image pickup device and/or the target image; comparing the characteristic information of the image associated with the determined target face image with the comparison information, and extracting the associated image from other image acquisition equipment, wherein the method comprises the following steps:
determining the acquisition time of the target image from the attribute information of the target image, and determining the position information of the target image from the attribute information of the first image acquisition device;
determining associated image acquisition equipment according to the position information, wherein the associated image acquisition equipment can acquire the target image;
and extracting the associated image from the associated image acquisition equipment according to the acquisition time and the comparison information.
3. The emotion recognition method of claim 1, wherein performing emotion recognition on the associated face area to obtain an emotion recognition result of the target face image comprises:
performing identity recognition on the associated image and determining identity information of the associated image;
determining voiceprint features according to the identity information;
determining an associated time interval according to the acquisition time, and extracting voice information from the associated image acquisition equipment according to the associated time interval;
extracting the voice data corresponding to the associated image from the voice information according to the voiceprint features; and
performing emotion recognition according to the voice data and the associated face area to obtain the emotion recognition result of the target face image.
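Claim 3 combines the extracted voice data with the associated face area. A hedged late-fusion sketch is shown below; the fixed emotion set, the upstream face and voice classifiers, and the 0.6/0.4 weighting are placeholder assumptions, since the claim only states that recognition uses both modalities.

```python
# Hedged sketch of the fusion step of claim 3: combine an emotion probability
# distribution predicted from the associated face area with one predicted from
# the speaker's voice data (both assumed to come from separate classifiers).
import numpy as np

EMOTIONS = ['happy', 'neutral', 'sad', 'angry']

def fuse_face_and_voice(face_probs, voice_probs, face_weight=0.6):
    face = np.asarray(face_probs, dtype=float)
    voice = np.asarray(voice_probs, dtype=float)
    fused = face_weight * face + (1.0 - face_weight) * voice  # weighted late fusion
    return EMOTIONS[int(np.argmax(fused))]

# Face and voice both lean towards 'sad', so the fused result is 'sad'
print(fuse_face_and_voice([0.10, 0.20, 0.60, 0.10], [0.05, 0.15, 0.70, 0.10]))
```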
4. An emotion recognition apparatus, comprising:
the target image acquisition module is used for acquiring a target image from the first image acquisition equipment and identifying a target face image in the target image;
the comparison information extraction module is used for extracting comparison information from the target face image when an occlusion area exists in the target face image;
the associated image extraction module is used for determining images associated with the target face image from other image acquisition equipment according to attribute information of the first image acquisition equipment and/or of the target image, comparing the feature information of the determined images with the comparison information, and extracting an associated image from the other image acquisition equipment, wherein the associated image is an image that contains the target face image without an occlusion area;
the emotion recognition module is used for determining an associated face area from the associated image if the associated image exists in the other image acquisition equipment, and performing emotion recognition on the associated face area to obtain an emotion recognition result of the target face image, wherein the associated face area is a face image consistent with the target face image;
the associated image extraction module comprises:
the first image segmentation module is used for performing image segmentation on the non-occluded area in the target face image to obtain a plurality of image segmentation areas when no associated image exists in the other image acquisition equipment;
the second image segmentation module is used for performing image segmentation on each sample image in a preset emotion sample image set according to the plurality of image segmentation areas, so that each sample image in the emotion sample image set is segmented into a corresponding plurality of sample segmentation areas, wherein the emotion sample image set comprises a plurality of sample images and emotion label data corresponding to each sample image;
the cluster analysis module is used for performing cluster analysis on each image segmentation area in the target face image together with the corresponding sample segmentation areas in the sample images to determine a cluster corresponding to each image segmentation area;
the reference emotion data determination module is used for counting the emotion label data within the cluster corresponding to each image segmentation area and taking the most frequent emotion label in that cluster as the reference emotion data of the image segmentation area;
the emotion recognition result determination module is used for determining a weight value for each image segmentation area according to the size of the area, and computing an emotion recognition result of the target face image from the weight value of each image segmentation area and its reference emotion data.
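For the cluster analysis module of claim 4, one possible realization clusters the sample segmentation areas of the same facial region and assigns the target area to its nearest cluster, whose sample labels then feed the reference emotion data module. KMeans and the feature choice (for example a flattened, resized grayscale patch) are illustrative assumptions; the patent does not name a specific clustering algorithm or feature representation.

```python
# Illustrative sketch (an assumption, not named in the patent) of the cluster
# analysis step for a single facial region.
import numpy as np
from sklearn.cluster import KMeans

def labels_in_target_cluster(target_feature, sample_features, sample_labels,
                             n_clusters=4):
    """target_feature: 1-D feature vector of one image segmentation area.
       sample_features: (n_samples, n_features) array for the matching sample
       segmentation areas; sample_labels: their emotion labels."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    assignments = km.fit_predict(np.asarray(sample_features, dtype=float))
    target_cluster = km.predict(
        np.asarray(target_feature, dtype=float).reshape(1, -1))[0]
    # labels of the sample areas that share the target area's cluster
    return [lbl for lbl, c in zip(sample_labels, assignments) if c == target_cluster]
```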
5. The emotion recognition apparatus according to claim 4, wherein:
the image information determining unit is used for determining the acquisition time of the target image from the attribute information of the target image and determining the position information of the target image from the attribute information of the first image acquisition equipment;
the associated image acquisition equipment determining unit is used for determining associated image acquisition equipment according to the position information, the associated image acquisition equipment being image acquisition equipment capable of capturing the target image;
the associated image extracting unit is used for extracting the associated image from the associated image acquisition equipment according to the acquisition time and the comparison information.
6. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the emotion recognition method as claimed in any of claims 1 to 3 when executing the computer program.
7. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the emotion recognition method as recited in any one of claims 1 to 3.
CN202010641010.9A 2020-07-06 2020-07-06 Emotion recognition method and device, computer equipment and storage medium Active CN111967311B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010641010.9A CN111967311B (en) 2020-07-06 2020-07-06 Emotion recognition method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111967311A (en) 2020-11-20
CN111967311B (en) 2021-09-10

Family

ID=73361450

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010641010.9A Active CN111967311B (en) 2020-07-06 2020-07-06 Emotion recognition method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111967311B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112597886A (en) * 2020-12-22 2021-04-02 Chengdu SenseTime Technology Co., Ltd. Ride fare evasion detection method and device, electronic equipment and storage medium
CN112949710B (en) * 2021-02-26 2023-06-13 Beijing Baidu Netcom Science and Technology Co., Ltd. Image clustering method and device
CN114554113B (en) * 2022-04-24 2022-08-16 Zhejiang Huayan Vision Technology Co., Ltd. Express item code recognition machine express item person drawing method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102254169A (en) * 2011-08-23 2011-11-23 Northeastern University at Qinhuangdao Multi-camera-based face recognition method and multi-camera-based face recognition system
CN105160299A (en) * 2015-07-31 2015-12-16 South China University of Technology Human face emotion identifying method based on Bayes fusion sparse representation classifier
CN109801096A (en) * 2018-12-14 2019-05-24 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences A kind of multi-modal customer satisfaction overall evaluation system, method
CN110110653A (en) * 2019-04-30 2019-08-09 Shanghai Jiongling Information Technology Co., Ltd. The Emotion identification method, apparatus and storage medium of multiple features fusion
CN110287776A (en) * 2019-05-15 2019-09-27 Beijing University of Posts and Telecommunications Method, device and computer-readable storage medium for face recognition
CN110532900A (en) * 2019-08-09 2019-12-03 Xidian University Facial expression recognizing method based on U-Net and LS-CNN
CN111915842A (en) * 2020-07-02 2020-11-10 Guangdong Polytechnic Normal University Abnormity monitoring method and device, computer equipment and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170098122A1 (en) * 2010-06-07 2017-04-06 Affectiva, Inc. Analysis of image content with associated manipulation of expression presentation
US10869626B2 (en) * 2010-06-07 2020-12-22 Affectiva, Inc. Image analysis for emotional metric evaluation
WO2013120851A1 (en) * 2012-02-13 2013-08-22 Mach-3D Sàrl Method for sharing emotions through the creation of three-dimensional avatars and their interaction through a cloud-based platform

Similar Documents

Publication Publication Date Title
CN111915842B (en) Abnormity monitoring method and device, computer equipment and storage medium
CN111967311B (en) Emotion recognition method and device, computer equipment and storage medium
CN111222423B (en) Target identification method and device based on operation area and computer equipment
US11210504B2 (en) Emotion detection enabled video redaction
CN106203458B (en) Crowd video analysis method and system
CN111914661A (en) Abnormal behavior recognition method, target abnormal recognition method, device, and medium
CN110647812A (en) Tumble behavior detection processing method and device, computer equipment and storage medium
CN110620905A (en) Video monitoring method and device, computer equipment and storage medium
CN111199200A (en) Wearing detection method and device based on electric protection equipment and computer equipment
CN111339813B (en) Face attribute recognition method and device, electronic equipment and storage medium
CN111126208B (en) Pedestrian archiving method and device, computer equipment and storage medium
WO2019033569A1 (en) Eyeball movement analysis method, device and storage medium
CN108170750A (en) A kind of face database update method, system and terminal device
CN113239874A (en) Behavior posture detection method, device, equipment and medium based on video image
CN113139403A (en) Violation behavior identification method and device, computer equipment and storage medium
CN111860152A (en) Personnel state detection method, system, device and computer device
CN114067431A (en) Image processing method, image processing device, computer equipment and storage medium
KR101957677B1 (en) System for learning based real time guidance through face recognition and the method thereof
CN111159476A (en) Target object searching method and device, computer equipment and storage medium
CN110795980A (en) Network video-based evasion identification method, equipment, storage medium and device
CN112257567B (en) Training of behavior recognition network, behavior recognition method and related equipment
CN112689120A (en) Monitoring method and device
EP4530978A1 (en) Spinning workshop inspection method and apparatus, electronic device and storage medium
CN113947795A (en) Mask wearing detection method, device, equipment and storage medium
CN112528261A (en) Method and device for identifying user identity of SIM card

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 510000 No. 293 Shipai Zhongshan Avenue, Tianhe District, Guangzhou City, Guangdong Province

Applicant after: GUANGDONG POLYTECHNIC NORMAL University

Applicant after: Zhongkai University of Agriculture and Engineering

Applicant after: Institute of intelligent manufacturing, Guangdong Academy of Sciences

Address before: 510000 No. 293 Shipai Zhongshan Avenue, Tianhe District, Guangzhou City, Guangdong Province

Applicant before: GUANGDONG POLYTECHNIC NORMAL University

Applicant before: Zhongkai University of Agriculture and Engineering

Applicant before: GUANGDONG INSTITUTE OF INTELLIGENT MANUFACTURING

GR01 Patent grant