CN118135642B - Facial expression analysis method and device, electronic equipment and readable storage medium

Info

Publication number
CN118135642B
CN118135642B
Authority
CN
China
Prior art keywords
data
feature
target
preprocessing
preset
Prior art date
Legal status
Active
Application number
CN202410553369.9A
Other languages
Chinese (zh)
Other versions
CN118135642A (en)
Inventor
吴晓涛
甘俊杰
陈一丰
龙萍
肖慈婉
陈丽美
甘伟发
胡礼春
查丽
Current Assignee
Zhuhai Gutin Technology Co ltd
Original Assignee
Zhuhai Gutin Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhuhai Gutin Technology Co ltd filed Critical Zhuhai Gutin Technology Co ltd
Priority to CN202410553369.9A
Publication of CN118135642A
Application granted
Publication of CN118135642B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G06N20/20 Ensemble learning
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/20 Education
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Tourism & Hospitality (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Economics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Databases & Information Systems (AREA)
  • Primary Health Care (AREA)
  • Marketing (AREA)
  • Human Resources & Organizations (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the technical fields of big data processing, image analysis, and education, and in particular to a facial expression analysis method and device, an electronic device, and a readable storage medium. The facial expression analysis method comprises the following steps: acquiring target data; preprocessing the target data to determine scaling data corresponding to the target data, which removes noise from the data and normalizes it; performing spatial domain enhancement processing on the scaling data to obtain preprocessed data, thereby improving data quality; performing a feature extraction operation on the preprocessed data to determine accurate feature data corresponding to the preprocessed data; associating the feature data with the feature states corresponding to the feature data and determining a target feature model corresponding to the feature data and the feature states, so that a more accurate and reliable target feature model is obtained; and analyzing facial expressions according to the target feature model, so that more accurate facial analysis results can be obtained.

Description

Facial expression analysis method and device, electronic equipment and readable storage medium
Technical Field
The present application relates to the technical field of big data processing, the technical field of image analysis, and the technical field of education, and in particular, to a facial expression analysis method, a facial expression analysis device, an electronic device, and a readable storage medium.
Background
With the development of technology, the fields of big data processing, image analysis, and education technology have advanced. In the field of big data processing, by collecting and analyzing large amounts of students' facial micro-expression data, students' emotional states can be understood in depth, providing a scientific basis for personalized teaching and student emotion management. In the field of image analysis, students' facial micro-expressions can be identified more accurately through advanced image processing techniques and feature extraction algorithms, providing reliable data support for model building. In the field of education, applying big data and image analysis technology enables real-time monitoring of and intervention in students' emotions, improving teaching effectiveness and students' emotional health. However, the prior art still has problems in these fields. First, the quality and quantity of acquired data directly affect the accuracy and reliability of the model, but current data acquisition devices and techniques may not guarantee the quality and consistency of the data. Second, the process of feature extraction and model building is complex, requiring specialized computer vision and machine learning knowledge, which can present certain difficulties for the average educator. Finally, verification and optimization of models is also a challenge, requiring large amounts of verification data and effective optimization strategies. Therefore, how to build a proper mathematical model from a large amount of high-quality data, improve its accuracy and reliability, and optimize and verify the built data model has become a problem to be solved urgently.
Disclosure of Invention
The embodiment of the invention mainly aims to provide a facial expression analysis method and a related device, and aims to solve the problems in the prior art that the acquired data are scarce and of low quality, so that a suitable mathematical model cannot be obtained and the reliability of the model is reduced.
In a first aspect, an embodiment of the present invention provides a facial expression analysis method, including:
acquiring target data, wherein the target data comprises at least one of first face data, second face data and third face data;
preprocessing the target data, and determining scaling data corresponding to the target data;
Performing spatial domain enhancement processing on the scaling data to obtain preprocessing data corresponding to the scaling data;
performing feature extraction operation on the preprocessing data, and determining feature data corresponding to the preprocessing data;
correlating the characteristic data with a characteristic state corresponding to the characteristic data, and determining a target characteristic model corresponding to the characteristic data and the characteristic state;
and analyzing the facial expression according to the target feature model.
In a second aspect, an embodiment of the present invention provides a facial expression analysis apparatus including:
the data acquisition module is used for acquiring target data, wherein the target data comprises at least one of first face data, second face data and third face data;
the first data determining module is used for preprocessing the target data and determining scaling data corresponding to the target data;
The preprocessing module is used for performing spatial domain enhancement processing on the scaled data to obtain preprocessed data corresponding to the scaled data;
The second data determining module is used for carrying out feature extraction operation on the preprocessing data and determining feature data corresponding to the preprocessing data;
The model determining module is used for associating the characteristic data with the characteristic state corresponding to the characteristic data and determining a target characteristic model corresponding to the characteristic data and the characteristic state;
And the analysis module is used for analyzing the facial expression according to the target feature model.
In a third aspect, embodiments of the present invention further provide an electronic device comprising a processor, a memory, a computer program stored on the memory and executable by the processor, and a data bus for enabling communication between the processor and the memory, wherein the computer program, when executed by the processor, implements the steps of any of the facial expression analysis methods provided in this specification.
In a fourth aspect, embodiments of the present invention further provide a readable storage medium for computer-readable storage, wherein the storage medium stores one or more programs executable by one or more processors to implement the steps of any of the facial expression analysis methods provided in this specification.
The embodiment of the invention provides a facial expression analysis method, a device, electronic equipment and a readable storage medium, wherein the facial expression analysis method comprises the following steps: acquiring target data, wherein the target data comprises at least one of first face data, second face data and third face data; preprocessing the target data, and determining scaling data corresponding to the target data; performing spatial domain enhancement processing on the scaling data to obtain preprocessing data corresponding to the scaling data; performing feature extraction operation on the preprocessing data, and determining feature data corresponding to the preprocessing data; correlating the characteristic data with a characteristic state corresponding to the characteristic data, and determining a target characteristic model corresponding to the characteristic data and the characteristic state; and analyzing the facial expression according to the target feature model. In the facial expression analysis method, feature data is obtained by performing preprocessing operation and feature extraction operation on target data of facial expression. In the process of obtaining the characteristic data, the quality of data acquisition is improved. And the feature data and the feature states corresponding to the feature data are associated to obtain a target feature model, so that the accuracy and stability of the target feature model are improved. And finally, analyzing the facial expression according to the target feature model.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of a facial expression analysis method according to an embodiment of the present invention;
Fig. 2 is a schematic block diagram of a facial expression analysis apparatus according to an embodiment of the present invention;
Fig. 3 is a schematic block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The flow diagrams depicted in the figures are merely illustrative and not necessarily all of the elements and operations/steps are included or performed in the order described. For example, some operations/steps may be further divided, combined, or partially combined, so that the order of actual execution may be changed according to actual situations.
It is to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
The embodiment of the invention provides a facial expression analysis method and a related device. The facial expression analysis method can be applied to terminal equipment, and the terminal equipment can be electronic equipment such as tablet computers, notebook computers, desktop computers, personal digital assistants, wearable equipment and the like. The terminal device may be a server or a server cluster.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a flowchart illustrating a facial expression analysis method according to an embodiment of the invention.
As shown in fig. 1, the facial expression analysis method includes steps S101 to S106.
Step S101, acquiring target data, where the target data includes at least one of first face data, second face data, and third face data.
The first facial data may be data related to the cheek and the corner of the mouth, the second facial data may be data related to the forehead, and the third facial data may be data related to the eyes and the periocular area, or the distance between the eyes and the cheekbones or the chin.
When the user receives good news and feels very happy, the micro-expressions of the user's face will change, specifically: the cheeks bulge slightly, the corners of the mouth rise involuntarily, the eyes narrow slightly, and smile lines appear around the eyes.
When the user is in a low mood and feels anxious, the corners of the user's mouth will not rise and may turn down in a pout, the eyebrows will knit tightly, and the eyes may even well up with tears.
Therefore, target data reflecting the user's different moods can be acquired in real time through a data acquisition device, where the data acquisition device includes an imaging system, a data sensor, or the like.
By installing a camera system in a specific area, the camera system can capture users' facial data whenever they pass through its imaging range. The large amount of facial data of different users collected by the camera system constitutes the target data, and the camera system sends the collected target data over a network to an electronic device connected to it, so that the data processing system of the electronic device can perform the related data processing on the obtained target data.
Besides acquisition of target data by the camera system, a specific sensor can also be used to collect facial data from a large number of different users; this facial data is the target data that the electronic device needs to acquire. The specific sensor transmits the collected facial data to the electronic device through the network established between the sensor and the electronic device, and the electronic device thus obtains the target data. The specific sensor may be any sensor suitable for acquiring such data, for example an infrared sensor or a 3D sensor, which the present application does not limit.
For example, when facial data of all students in a school is to be collected, the camera system may be installed near the school gate. When the students arrive each day, the camera system collects each student's facial data and sends the collected data to the school's electronic device processing system; after receiving the target data, the electronic device performs a series of subsequent data processing operations on it. The facial data of all students here is the target data.
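As a concrete illustration, the following is a minimal sketch, assuming an OpenCV-based camera system, of how such target data might be gathered; the device index, frame budget, and Haar-cascade face detector are illustrative assumptions, not part of the patent.

```python
# A minimal sketch (not the patent's implementation) of gathering target data
# with a camera system; device index, frame budget, and the Haar-cascade face
# detector are illustrative assumptions.
import cv2

def acquire_target_data(device_index: int = 0, max_samples: int = 100):
    """Capture face regions ("target data") from a live camera stream."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    capture = cv2.VideoCapture(device_index)
    samples = []
    try:
        while len(samples) < max_samples:
            ok, frame = capture.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Each detected face region is one sample of target data.
            for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
                samples.append(gray[y:y + h, x:x + w])
    finally:
        capture.release()
    return samples
```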
Step S102, preprocessing the target data, and determining scaling data corresponding to the target data.
Specifically, after the electronic device obtains a large amount of target data, a data preprocessing operation is performed on it, and the target data is further screened to obtain preprocessed data of higher quality and consistency.
In some embodiments, preprocessing the target data to determine the scaling data corresponding to the target data includes: adding the target data to a preset filter for noise removal processing and determining first intermediate data corresponding to the target data; performing standardization processing on the first intermediate data and determining the standardized second intermediate data; and determining a preset data interval and scaling the second intermediate data into the preset data interval to obtain the scaling data corresponding to the second intermediate data.
After obtaining the target data, the electronic device may first pass it to its own filter, where noise removal processing is performed on the target data; the first intermediate data is obtained once the filter has removed the noise.
Specifically, the target data may be subjected to noise-removing filtering by a Gaussian filter to obtain the first intermediate data. The expression for Gaussian filtering is:

G(x, y) = (1 / (2πσ²)) · exp(-(x² + y²) / (2σ²));

wherein G(x, y) represents the Gaussian weight, σ represents the standard deviation of the Gaussian function, and (x, y) represents the coordinates of the target data.
It should be noted that, in addition to the Gaussian-filter noise removal described in this embodiment, any suitable filter such as a low-pass filter or a band-stop filter may be employed. In practical applications, a suitable filter may be selected according to circumstances, and the present application does not limit this.
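For instance, the Gaussian filtering step could be realized as in the sketch below, assuming OpenCV; the 5x5 kernel size and the default sigma are illustrative assumptions rather than values fixed by the patent.

```python
# A sketch of the noise-removal step with a Gaussian filter; the 5x5 kernel
# size and the default sigma are illustrative assumptions.
import cv2
import numpy as np

def remove_noise(target_data: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Gaussian-smooth the target data to obtain the first intermediate data."""
    # GaussianBlur builds its kernel from the expression above:
    # G(x, y) = exp(-(x^2 + y^2) / (2*sigma^2)) / (2*pi*sigma^2).
    return cv2.GaussianBlur(target_data, (5, 5), sigmaX=sigma)
```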
After filtering the target data to obtain the first intermediate data corresponding to it, the standardization processing of the first intermediate data is performed next.
Specifically, the first intermediate data may be standardized based on a standardization formula to obtain the standardized second intermediate data. The expression corresponding to the standardization processing is:

z = (x - μ) / σ;

wherein z is the second intermediate data after the standardization processing, x is the first intermediate data, μ is the mean of the first intermediate data, and σ is the standard deviation of the first intermediate data.
After the first intermediate data is subjected to standardization processing and second intermediate data corresponding to the first intermediate data is obtained, the data quality and accuracy of the second intermediate data are greatly improved compared with those of the first intermediate data, and meanwhile, the consistency of the data is also improved.
Then, a normalization operation is performed on the second intermediate data, and the scaling data after the normalization operation is determined. Specifically: a preset interval corresponding to the second intermediate data is determined, and the second intermediate data is scaled into the preset interval to obtain the scaling data corresponding to the second intermediate data within that interval. For example, suppose the data set of the second intermediate data is [2, 4, 1, 3] and the preset interval corresponding to the second intermediate data is [0, 1]; after each value is scaled by the maximum of the set, the obtained scaling data is [0.5, 1, 0.25, 0.75].
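To make the two steps concrete, here is a small sketch; division by the maximum is chosen for the scaling because it reproduces the worked example above, while min-max scaling into the preset interval would be an equally valid reading.

```python
# A sketch of the standardization and interval-scaling steps. Division by the
# maximum reproduces the worked example [2, 4, 1, 3] -> [0.5, 1, 0.25, 0.75];
# min-max scaling into the preset interval is an equally valid reading.
import numpy as np

def standardize(first_intermediate: np.ndarray) -> np.ndarray:
    """z = (x - mu) / sigma, yielding the second intermediate data."""
    mu = first_intermediate.mean()
    sigma = first_intermediate.std()
    return (first_intermediate - mu) / sigma

def scale_to_unit_interval(second_intermediate: np.ndarray) -> np.ndarray:
    """Scale positive data into the preset interval [0, 1] by its maximum."""
    return second_intermediate / np.abs(second_intermediate).max()

print(scale_to_unit_interval(np.array([2.0, 4.0, 1.0, 3.0])))  # [0.5  1.  0.25 0.75]
```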
Step S103, performing spatial domain enhancement processing on the scaled data to obtain preprocessed data corresponding to the scaled data.
Specifically, spatial domain enhancement processing is performed on the scaling data after normalization operation is performed on the second intermediate data, so as to obtain preprocessing data corresponding to the scaling data. The image corresponding to the preprocessed data after the spatial domain enhancement processing will be clearer, so as to achieve the purpose of improving the image quality.
In some embodiments, the spatial domain enhancement processing corresponds to the expression:
g(x,y)=T[f(s,v)]
Wherein f (s, v) represents the pixel point of the image corresponding to the scaling data, g (x, y) represents the pixel point of the image corresponding to the preprocessing data, and T represents the spatial domain enhancement function.
Specifically, the pixel points of the image corresponding to the scaling data are denoted by f (s, v), the pixel points f (s, v) of the image corresponding to each scaling data are sequentially input into the spatial domain enhancement function T, and the spatial domain enhancement function T outputs the pixel points of the image corresponding to the corresponding preprocessing data, namely denoted by g (x, y).
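The patent leaves the enhancement operator T unspecified; in the sketch below, histogram equalization stands in for T as one common spatial-domain enhancement, which is an assumption rather than the patent's choice.

```python
# A sketch of the spatial-domain enhancement g(x, y) = T[f(s, v)]; histogram
# equalization stands in for T here (an assumption), since the patent does
# not fix a particular enhancement function.
import cv2
import numpy as np

def spatial_domain_enhance(scaling_data: np.ndarray) -> np.ndarray:
    """Map each pixel of the scaled image through an enhancement operator T."""
    # Convert the [0, 1] scaling data to 8-bit, as required by equalizeHist.
    img8 = np.clip(scaling_data * 255.0, 0, 255).astype(np.uint8)
    enhanced = cv2.equalizeHist(img8)            # T applied to f(s, v)
    return enhanced.astype(np.float32) / 255.0   # preprocessed data g(x, y)
```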
Step S104, performing feature extraction operation on the preprocessing data, and determining feature data corresponding to the preprocessing data.
Specifically, in the process of performing feature extraction operation on the preprocessed data, feature extraction operation may be performed on the preprocessed data through motion features, morphological features, texture features, and the like of the face, so as to extract feature data corresponding to the motion features, feature data corresponding to the morphological features, and feature data corresponding to the texture features, respectively.
In some embodiments, performing the feature extraction operation on the preprocessed data and determining the feature data corresponding to the preprocessed data includes: adding the preprocessed data to a preset target classifier, and performing the feature extraction operation on the preprocessed data based on the preset target classifier to determine the feature data corresponding to the preprocessed data. The expression corresponding to the preset target classifier is: G(t) = sign(f(x)); wherein f represents the function expression corresponding to the preprocessing function, x represents the preprocessed data, G represents the function expression corresponding to the preset target classifier, and t represents the feature data.
Specifically, the iterative operation can be performed on the preprocessed data through an iterative algorithm in a preset target classifier to obtain feature data corresponding to the preprocessed data, wherein the iterative algorithm can be an Adaboost algorithm.
For example, when feature extraction is performed on the preprocessed data, a weight distribution corresponding to the preprocessed data is first determined, and each sample in the preprocessed data is given the same weight. The expression for the initial weight distribution is:

D_1 = (w_11, ..., w_1i, ..., w_1N), w_1i = 1/N, i = 1, 2, ..., N;

then M rounds of iteration are performed on the preprocessed data according to the iterative algorithm.

In the first step, based on the weight distribution D_m, the training set of the preprocessed data is used to train a basic classifier; the threshold with the lowest error rate is selected to design the basic classifier, correspondingly expressed as G_m(x).

In the second step, the classification error rate of G_m(x) on the preprocessed data is calculated:

e_m = Σ_i w_mi · I(G_m(x_i) ≠ y_i);

from this it can be seen that the error rate e_m on the preprocessed data is the sum of the weights of the samples misclassified by G_m(x).

In the third step, the coefficient α_m of G_m(x) is calculated, indicating the importance of G_m(x) in the preset target classifier. The expression for the coefficient is:

α_m = (1/2) · ln((1 - e_m) / e_m).

In the fourth step, the weight distribution of the preprocessed data is updated for the next iteration. The specific expression is:

w_(m+1),i = (w_mi / Z_m) · exp(-α_m · y_i · G_m(x_i)), i = 1, 2, ..., N;

wherein Z_m = Σ_i w_mi · exp(-α_m · y_i · G_m(x_i)) is the normalization factor.

With this, the multi-round iteration over the preprocessed data based on the above expressions is completed.

Then, the M basic classifiers G_m(x) obtained over the iterations are combined by weighted summation:

f(x) = Σ_(m=1..M) α_m · G_m(x).

Finally, from the combined basic classifiers, the expression of the preset target classifier is obtained:

G(t) = sign(f(x)) = sign(Σ_(m=1..M) α_m · G_m(x));

wherein G represents the function expression corresponding to the preset target classifier and t represents the feature data.
According to the finally obtained expression of the preset target classifier, the feature extraction operation is performed on the preprocessed data to obtain the corresponding feature data. Compared with the preprocessed data, the extracted feature data has a more distinct feature expression, which makes it easier to establish a more accurate model later. For example, when the user is happy, the texture features of the face include laugh lines around the eyes and slightly raised corners of the mouth; so when the corresponding feature data is extracted from the preprocessed data, the preset target classifier extracts the preprocessed data around the eyes and the corners of the mouth, and this data is taken as the feature data for the user's happy state.
It should be noted that the specific data and assumptions given in the above embodiments are merely illustrative of the present embodiment, and the present application is not limited thereto.
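To tie the iteration together, the following sketch implements the four steps above from scratch; depth-1 decision trees as the basic classifiers G_m, binary labels in {-1, +1}, and the scikit-learn base learner are assumptions made for the illustration.

```python
# A from-scratch sketch of the AdaBoost iteration described above.
# Assumptions (not fixed by the patent): depth-1 decision trees as the basic
# classifiers G_m, and NumPy arrays of binary labels y in {-1, +1}.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, M=10):
    """Run M boosting rounds; returns the basic classifiers and coefficients."""
    N = len(y)
    w = np.full(N, 1.0 / N)                    # D_1: uniform weights w_1i = 1/N
    classifiers, alphas = [], []
    for _ in range(M):
        G_m = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = G_m.predict(X)
        e_m = w[pred != y].sum()               # e_m: weight of misclassified samples
        e_m = np.clip(e_m, 1e-10, 1 - 1e-10)   # guard against degenerate error rates
        alpha_m = 0.5 * np.log((1 - e_m) / e_m)  # coefficient (importance) of G_m
        w = w * np.exp(-alpha_m * y * pred)      # fourth-step weight update
        w /= w.sum()                             # divide by normalization factor Z_m
        classifiers.append(G_m)
        alphas.append(alpha_m)
    return classifiers, alphas

def adaboost_predict(X, classifiers, alphas):
    """G(t) = sign(f(x)), with f(x) the weighted sum of the basic classifiers."""
    f = sum(a * g.predict(X) for g, a in zip(classifiers, alphas))
    return np.sign(f)
```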
Step S105, associating the feature data with a feature state corresponding to the feature data, and determining a target feature model corresponding to the feature data and the feature state.
For example, when the feature data and the feature state corresponding to the feature data are associated, the method can be implemented based on a data association algorithm, wherein the data association algorithm can be any one of an Apriori algorithm, an FP-Growth algorithm, a ECAOA algorithm and an Eclat algorithm.
In some embodiments, associating the feature data with a feature state corresponding to the feature data, determining a target feature model corresponding to the feature data and the feature state, includes: establishing a preset feature model based on a preset deep learning model; and in the preset feature model, correlating the feature data with the feature state to obtain the target feature model corresponding to the feature data and the feature state.
In particular, after the feature data is obtained, a preset feature model based on machine learning techniques can be used: the feature data and the feature states corresponding to the feature data are respectively input into the preset feature model, and within the preset feature model, the association of the feature data with its corresponding feature states is realized through the model's data association algorithm.
Specifically, the feature state corresponding to the feature data may be happy, frustrated, anxious, or the like. When the feature state is happy, the feature data corresponding to the happy state is recorded as the first feature data; when the feature state is frustrated, the feature data corresponding to the frustrated state is recorded as the second feature data; and when the feature state is anxious, the feature data corresponding to the anxious state is recorded as the third feature data.
When the feature data and the feature states corresponding to the feature data are associated according to the data association algorithm, the happy feature state is associated with the first feature data, the frustrated feature state is associated with the second feature data, and the anxiety feature state is associated with the third feature data, namely, the happy, frustrated and anxiety labels are respectively established for the first feature data, the second feature data and the third feature data.
Specifically, after all the feature data are respectively associated with the corresponding feature states, the target feature model corresponding to the feature states is obtained. Thus, the establishment of the target feature model is completed.
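As a simple illustration of this association step, the sketch below attaches state labels to the three groups of feature data to form the labeled set from which the target feature model is trained; the numeric label encoding and group names are illustrative assumptions.

```python
# A minimal sketch of associating feature data with feature-state labels;
# the numeric label encoding is an illustrative assumption.
import numpy as np

FEATURE_STATES = {0: "happy", 1: "frustrated", 2: "anxious"}

def build_labeled_set(first_feats, second_feats, third_feats):
    """Associate each group of feature data with its feature state."""
    X = np.vstack([first_feats, second_feats, third_feats])
    y = np.concatenate([
        np.full(len(first_feats), 0),   # first feature data  -> happy
        np.full(len(second_feats), 1),  # second feature data -> frustrated
        np.full(len(third_feats), 2),   # third feature data  -> anxious
    ])
    return X, y
```

The labeled pairs (X, y) are then what the preset feature model is trained on, its trained form being the target feature model.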
In some embodiments, after the feature data is associated with the corresponding feature state and the target feature model is determined, the method further includes:
determining a preset verification threshold; verifying the target feature model based on a preset verification mode to obtain a verification error; if the verification error is not within the range of the preset verification threshold, repeating the step of establishing the target feature model and re-establishing the target feature model; and if the verification error is within the range of the preset verification threshold, completing the establishment of the target feature model.
Specifically, the preset verification mode may be a suitable verification method such as cross-validation, sensitivity analysis, or goodness-of-fit analysis.
For example, if cross-validation is adopted to verify the target feature model, multiple groups of input data need to be collected again through the camera system. The collected input data are input into the established target feature model in turn, and after the model's data analysis, the verification errors corresponding to the input data are output in turn. After the verification errors are obtained, the multiple groups of verification errors are compared with the preset verification threshold. If any verification error is not within the range of the preset verification threshold, that is, if the verification error is larger than the preset verification threshold, the target feature model does not meet the requirements of the facial expression analysis method and needs to be determined again. Otherwise, if the verification errors are all within the range of the preset verification threshold, the target feature model meets the requirements of the facial expression analysis method. The purpose of verifying the target feature model is to obtain a reliable target feature model and improve its accuracy.
The specific value of the preset verification threshold can be set reasonably according to the requirements of the actual facial expression analysis method, and the present application does not limit it.
Specifically, in order to make the verification error obtained from each input data more uniform, the verification errors may be expressed in the form of a sum of squares, which is convenient for observation.
It should be noted that the specific form of the verification error is not the only form, and may be reasonably set according to actual implementation in practical applications, which is not limited by the present application.
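A minimal sketch of the verification loop follows, assuming numeric state labels and the squared-error form suggested above; the threshold value and the retraining interface are illustrative assumptions.

```python
# A sketch of threshold-based model verification; the threshold value and the
# squared-error form are assumptions, as noted above.
import numpy as np

def verify_model(model, X_val, y_val, preset_threshold: float = 0.05) -> bool:
    """Return True if every verification error lies within the preset threshold."""
    predictions = model.predict(X_val)
    errors = (predictions - y_val) ** 2      # squared errors, as suggested above
    return bool(np.all(errors <= preset_threshold))

# Usage: re-establish the target feature model until verification passes.
# while not verify_model(target_model, X_val, y_val):
#     target_model = train_target_model(X_train, y_train)  # hypothetical trainer
```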
Step S106, analyzing the facial expression according to the target feature model.
Specifically, when the target feature model meets the verification requirement of the preset verification mode, the target feature model can be applied to an actual facial expression analysis scene for analyzing facial expressions of different users.
In some embodiments, analyzing the facial expression from the target feature model includes: acquiring current facial feature data, wherein the current facial feature data is acquired in real time through at least one of a camera system or a sensor; adding the current facial feature data to the target feature model to obtain a facial analysis result corresponding to the current facial feature data; and analyzing the facial expression according to the facial analysis result.
For example, the user's current facial data may be collected in real time by the camera system or the sensor and transmitted to the target feature model, where data analysis is performed on it. After the analysis, the data analysis system of the target feature model outputs a facial analysis result, and the facial expression corresponding to that result is then analyzed.
The output form of the facial analysis result can be voice, graphic display, or text display. For example, if the output form of the facial analysis result corresponding to the current facial feature data is voice, the voice corresponding to the facial analysis result may be "I am very happy" or "I am somewhat unhappy". After the facial analysis result corresponding to the current facial feature data is obtained, clicking the output button on the electronic device where the target feature model is located makes the device directly output the voice corresponding to the facial analysis result.
In addition, it should be noted that the output form of the above-mentioned facial analysis result may also be graphic display or text display, and the basic output principle thereof is basically the same as that of the above-mentioned facial analysis result in which the output form is voice, and will not be described herein.
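As an end-to-end illustration of this analysis step, the sketch below feeds current facial feature data to the target feature model and renders a text output; the label map and message strings are hypothetical, echoing the voice examples above.

```python
# A sketch of applying the target feature model to current facial feature data
# and rendering a text/voice message; the label map and message strings are
# hypothetical, echoing the speech examples above.
import numpy as np

STATE_LABELS = {0: "happy", 1: "frustrated", 2: "anxious"}
MESSAGES = {"happy": "I am very happy", "anxious": "I am somewhat unhappy"}

def analyze_expression(target_model, current_features: np.ndarray) -> str:
    """Feed current facial feature data to the model and verbalize the result."""
    state = int(target_model.predict(current_features.reshape(1, -1))[0])
    label = STATE_LABELS.get(state, "unknown")
    return MESSAGES.get(label, "detected state: " + label)
```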
The target feature model can be applied to the field of education, particularly in schools: by acquiring students' current facial feature data and transmitting it to the target feature model in time, the model can analyze the students' current facial feature data, and their emotional and psychological states can then be assessed from the resulting facial analysis results, so that students' psychological problems can be discovered in time and timely counseling provided.
In addition, the current facial feature data of different students can be obtained during a lesson. After the lesson is over, the current facial feature data of the different students in that lesson are collected and analyzed in the target feature model, and the resulting facial analysis results indicate whether the students are anxious, confused, happy, and so on. If the facial analysis results obtained from most students' current facial feature data are confused, the content of the lesson may be too deep or the explanation may be beyond the students' understanding, and the teacher can adjust the teaching method or review the lesson in time according to the facial analysis results. Conversely, if the facial analysis results obtained from most students' current facial feature data are happy, the content of the lesson has been understood by most students, and the teacher can start explaining the next lesson in time; the few students who did not understand the content can be tutored at other times. In this way, according to the different facial analysis results, it can be known whether the students have basically mastered the content of a lesson, making it convenient for teachers to start the next section in time or continue reviewing the current one, so as to improve the teaching effect and the students' emotional health.
It should be noted that the above-described target feature model may be applied not only to the education field but also to various companies, the medical field, and the like; the present application is not limited in this regard.
The embodiment provides a facial expression analysis method that removes data noise and improves data quality by acquiring target data and performing a series of data preprocessing operations on it. A feature extraction operation is then performed on the preprocessed data, selecting more accurate preprocessed data as feature data and providing more accurate data for the subsequent target feature model. The feature data are associated with their feature states to obtain a more accurate target feature model, which is verified and optimized to improve its reliability; finally, in practical application, the facial expression is analyzed according to the target feature model. This can meet practical demands in many aspects of the education field.
Referring to fig. 2, fig. 2 shows a facial expression analysis apparatus 200 according to an embodiment of the present application, where the facial expression analysis apparatus 200 includes a data acquisition module 201, a first data determination module 202, a preprocessing module 203, a second data determination module 204, a model determination module 205, and an analysis module 206, where the data acquisition module 201 is configured to acquire target data, where the target data includes at least one of first facial data, second facial data, and third facial data; a first data determining module 202, configured to perform a preprocessing operation on the target data, and determine scaling data corresponding to the target data; a preprocessing module 203, configured to perform spatial domain enhancement processing on the scaling data, so as to obtain preprocessed data corresponding to the scaling data;
A second data determining module 204, configured to perform a feature extraction operation on the preprocessed data, and determine feature data corresponding to the preprocessed data; the model determining module 205 is configured to associate the feature data with a feature state corresponding to the feature data, and determine a target feature model corresponding to the feature data and the feature state; an analysis module 206 for analyzing the facial expression according to the target feature model.
In some embodiments, in the process of performing the preprocessing operation on the target data and determining the scaling data corresponding to the target data, the first data determining module 202 performs the following:
Adding the target data to a preset filter to perform noise removal processing, and determining first intermediate data corresponding to the target data;
carrying out standardization processing on the first intermediate data, and determining second intermediate data after the standardization processing;
And determining a preset data interval, and scaling the second intermediate data into the preset data interval to obtain scaling data corresponding to the second intermediate data.
In some embodiments, the spatial domain enhancement processing performed by the preprocessing module 203 corresponds to the expression:
g(x,y)=T[f(s,v)]
Wherein f (s, v) represents the pixel point of the image corresponding to the scaling data, g (x, y) represents the pixel point of the image corresponding to the preprocessing data, and T represents the spatial domain enhancement function.
In some embodiments, in the process of performing the feature extraction operation on the preprocessed data and determining the corresponding feature data, the second data determining module 204 performs the following:
adding the preprocessing data into a preset target classifier, and determining the characteristic data corresponding to the preprocessing data based on the characteristic extraction operation of the preset target classifier on the preprocessing data;
the expression corresponding to the preset target classifier is: G(t) = sign(f(x));
wherein f represents the function expression corresponding to the preprocessing function, x represents the preprocessed data, G represents the function expression corresponding to the preset target classifier, and t represents the feature data.
In some implementations, in the process of analyzing facial expressions according to the target feature model, the analysis module 206 performs the following:
acquiring current facial feature data, wherein the current facial feature data is acquired in real time through at least one of a camera system or a sensor;
adding the current facial feature data to the target feature model to obtain a facial analysis result corresponding to the current facial feature data;
and analyzing the facial expression according to the facial analysis result.
In some embodiments, in the process of associating the feature data with the corresponding feature state and determining the target feature model, the model determining module 205 performs the following:
Establishing a preset feature model based on a preset deep learning model; and in the preset feature model, correlating the feature data with the feature state to obtain the target feature model corresponding to the feature data and the feature state.
In some embodiments, after the feature data has been associated with the corresponding feature state and the target feature model has been determined, the model determining module 205 further performs the following:
determining a preset verification threshold; verifying the target feature model based on a preset verification mode to obtain a verified verification error; if the verification error is not in the range of the preset verification threshold value, repeating the step of establishing the target feature model, and reestablishing the target feature model; and if the verification error is within the range of the preset verification threshold value, completing the establishment of the target feature model.
In some embodiments, the facial expression analysis apparatus 200 may be applied to an electronic device.
It should be noted that, for convenience and brevity of description, the specific working process of the facial expression analysis apparatus 200 described above may refer to the corresponding process in the foregoing facial expression analysis method embodiment, and will not be described herein.
Referring to fig. 3, fig. 3 is a schematic block diagram of an electronic device according to an embodiment of the present invention.
As shown in FIG. 3, the electronic device 300 includes a processor 301 and a memory 302, the processor 301 and the memory 302 being connected by a bus 303, such as an I2C (Inter-Integrated Circuit) bus.
In particular, the processor 301 is used to provide computing and control capabilities, supporting the operation of the entire terminal device. The processor 301 may be a central processing unit (Central Processing Unit, CPU); it may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
Specifically, the memory 302 may be a Flash chip, a read-only memory (Read-Only Memory, ROM), a magnetic disk, an optical disk, a U-disk, a removable hard disk, or the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 3 is merely a block diagram of a portion of the structure related to the embodiment of the present invention, and does not constitute a limitation of the terminal device to which the embodiment of the present invention is applied, and that a specific server may include more or less components than those shown in the drawings, or may combine some components, or have a different arrangement of components.
The processor is configured to run a computer program stored in the memory, and implement any one of the facial expression analysis methods provided by the embodiments of the present invention when the computer program is executed.
In an embodiment, the processor is configured to run a computer program stored in a memory and to implement the following steps when executing the computer program:
acquiring target data, wherein the target data comprises at least one of first face data, second face data and third face data;
preprocessing the target data, and determining scaling data corresponding to the target data;
Performing spatial domain enhancement processing on the scaling data to obtain the preprocessing data corresponding to the scaling data;
performing feature extraction operation on the preprocessing data, and determining feature data corresponding to the preprocessing data;
correlating the characteristic data with a characteristic state corresponding to the characteristic data, and determining a target characteristic model corresponding to the characteristic data and the characteristic state;
and analyzing the facial expression according to the target feature model.
In some embodiments, in the process of preprocessing the target data and determining the scaling data corresponding to the target data, the processor 301 performs:
Adding the target data to a preset filter to perform noise removal processing, and determining first intermediate data corresponding to the target data;
carrying out standardization processing on the first intermediate data, and determining second intermediate data after the standardization processing;
And determining a preset data interval, and scaling the second intermediate data into the preset data interval to obtain scaling data corresponding to the second intermediate data.
In some implementations, the processor 301 also implements the spatial domain enhancement processing, which corresponds to the expression:
g(x,y)=T[f(s,v)];
Wherein f (s, v) represents the pixel point of the image corresponding to the scaling data, g (x, y) represents the pixel point of the image corresponding to the preprocessing data, and T represents the spatial domain enhancement function.
In some embodiments, in the process of performing the feature extraction operation on the preprocessed data and determining the corresponding feature data, the processor 301 performs:
adding the preprocessing data into a preset target classifier, and determining the characteristic data corresponding to the preprocessing data based on the characteristic extraction operation of the preset target classifier on the preprocessing data;
the expression corresponding to the preset target classifier is: G(t) = sign(f(x));
wherein f represents the function expression corresponding to the preprocessing function, x represents the preprocessed data, G represents the function expression corresponding to the preset target classifier, and t represents the feature data.
In analyzing the facial expression according to the target feature model, the processor 301 performs:
acquiring current facial feature data, wherein the current facial feature data is acquired in real time through at least one of a camera system or a sensor;
adding the current facial feature data to the target feature model to obtain a facial analysis result corresponding to the current facial feature data;
and analyzing the facial expression according to the facial analysis result.
In some implementations, in the process of associating the feature data with the corresponding feature state and determining the target feature model, the processor 301 performs:
Establishing a preset feature model based on a preset deep learning model;
And in the preset feature model, correlating the feature data with the feature state to obtain the target feature model corresponding to the feature data and the feature state.
In some embodiments, after the feature data has been associated with the corresponding feature state and the target feature model has been determined, the processor 301 performs:
determining a preset verification threshold;
Verifying the target feature model based on a preset verification mode to obtain a verified verification error;
if the verification error is not in the range of the preset verification threshold value, repeating the step of establishing the target feature model, and reestablishing the target feature model;
And if the verification error is within the range of the preset verification threshold value, completing the establishment of the target feature model.
It should be noted that, for convenience and brevity of description, specific working processes of the terminal device described above may refer to corresponding processes in the foregoing facial expression analysis method embodiment, and are not described herein again.
Embodiments of the present invention also provide a computer-readable storage medium for computer-readable storage, the storage medium storing one or more programs executable by one or more processors to implement steps of any of the facial expression analysis methods as provided in the embodiments of the present invention.
The storage medium may be an internal storage unit of the terminal device of the foregoing embodiment, for example a hard disk or memory of the terminal device. The storage medium may also be an external storage device of the terminal device, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) provided on the terminal device.
Those of ordinary skill in the art will appreciate that all or some of the steps, systems, functional modules/units in the apparatus, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware embodiment, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed cooperatively by several physical components. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as known to those skilled in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
It should be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations. It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are for description only and do not represent the relative merits of the embodiments. While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and substitutions may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. A facial expression analysis method, comprising:
acquiring target data, wherein the target data comprises at least one of first face data, second face data and third face data;
preprocessing the target data, and determining scaling data corresponding to the target data;
performing spatial domain enhancement processing on the scaling data to obtain preprocessing data corresponding to the scaling data;
performing a feature extraction operation on the preprocessing data, and determining feature data corresponding to the preprocessing data;
associating the feature data with a feature state corresponding to the feature data, and determining a target feature model corresponding to the feature data and the feature state;
analyzing a facial expression according to the target feature model;
wherein the preprocessing the target data and determining the scaling data corresponding to the target data comprises:
adding the target data to a preset filter to perform noise removal processing, and determining first intermediate data corresponding to the target data;
carrying out standardization processing on the first intermediate data, and determining second intermediate data after the standardization processing;
determining a preset data interval, and scaling the second intermediate data into the preset data interval to obtain the scaling data corresponding to the second intermediate data;
wherein the expression corresponding to the spatial domain enhancement processing is as follows:
g(x,y) = T[f(s,v)]
wherein f(s,v) represents a pixel point of the image corresponding to the scaling data, g(x,y) represents a pixel point of the image corresponding to the preprocessing data, and T represents a spatial domain enhancement function.
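By way of illustration only, and not as part of the claims, the preprocessing and spatial domain enhancement recited in claim 1 admit a minimal sketch along the following lines. The Gaussian filter as the preset filter, the [0, 1] preset data interval, histogram equalization as the enhancement function T, and a grayscale input are all assumptions the claim does not fix:

    import cv2
    import numpy as np

    def preprocess(target: np.ndarray, interval=(0.0, 1.0)) -> np.ndarray:
        # Noise removal through a preset filter (a Gaussian filter is assumed).
        first = cv2.GaussianBlur(target.astype(np.float32), (5, 5), 0)
        # Standardization: zero mean, unit variance.
        second = (first - first.mean()) / (first.std() + 1e-8)
        # Scale the standardized data into the preset data interval.
        lo, hi = interval
        span = second.max() - second.min() + 1e-8
        return lo + (second - second.min()) * (hi - lo) / span

    def enhance(scaled: np.ndarray) -> np.ndarray:
        # Spatial domain enhancement g(x,y) = T[f(s,v)], with histogram
        # equalization assumed for T and a single-channel image for f.
        return cv2.equalizeHist((scaled * 255.0).astype(np.uint8))

Under these assumptions, enhance(preprocess(face_image)) would yield the preprocessing data consumed by the feature extraction of claim 2.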
2. The facial expression analysis method according to claim 1, wherein the performing a feature extraction operation on the preprocessing data and determining the feature data corresponding to the preprocessing data comprises:
adding the preprocessing data into a preset target classifier, and performing the feature extraction operation on the preprocessing data based on the preset target classifier to determine the feature data corresponding to the preprocessing data;
wherein the expression corresponding to the preset target classifier is as follows: G(t) = sign(f(x));
wherein f represents a function expression corresponding to a preprocessing function, x represents the preprocessing data, G represents a function expression corresponding to the preset target classifier, and t represents the feature data.
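Purely to illustrate the classifier form G(t) = sign(f(x)), the sketch below assumes a linear scoring function f(x) = w·x + b; the claim does not specify f, so the weights and bias here are placeholders:

    import numpy as np

    def preset_target_classifier(x: np.ndarray, w: np.ndarray, b: float) -> np.ndarray:
        # f is assumed linear; the claim fixes only the outer sign(.) form.
        f_x = x @ w + b
        # The sign of the score stands in for the extracted feature data.
        return np.sign(f_x)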
3. The facial expression analysis method according to claim 1, wherein the analyzing a facial expression according to the target feature model comprises:
acquiring current facial feature data, wherein the current facial feature data is acquired in real time through at least one of a camera system or a sensor;
adding the current facial feature data to the target feature model to obtain a facial analysis result corresponding to the current facial feature data;
and analyzing the facial expression according to the facial analysis result.
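A minimal sketch of the real-time acquisition and analysis of claim 3 might look as follows, assuming a webcam as the camera system; extract_features and the model's predict method are hypothetical stand-ins for the feature extraction of claim 2 and the target feature model of claim 1:

    import cv2

    def analyze_stream(model, extract_features, camera_index: int = 0):
        # Acquire current facial feature data in real time from a camera.
        cap = cv2.VideoCapture(camera_index)
        try:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                features = extract_features(frame)
                # Adding the features to the target feature model yields the
                # facial analysis result for the current frame.
                yield model.predict(features)
        finally:
            cap.release()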
4. The facial expression analysis method according to claim 1, wherein the associating the feature data with a feature state corresponding to the feature data and determining a target feature model corresponding to the feature data and the feature state comprises:
establishing a preset feature model based on a preset deep learning model;
and in the preset feature model, associating the feature data with the feature state to obtain the target feature model corresponding to the feature data and the feature state.
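As one hedged reading of claim 4, the preset deep learning model could be a small fully connected network; the architecture, optimizer, and epoch count below are assumptions, since the claim requires only that a preset feature model be established and the feature data be associated with the feature state:

    import numpy as np
    import tensorflow as tf

    def build_target_feature_model(feature_data: np.ndarray,
                                   feature_states: np.ndarray):
        n_states = int(np.unique(feature_states).size)
        # Preset feature model: a small dense network (assumed architecture).
        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(feature_data.shape[1],)),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(n_states, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
        # Associating the feature data with the feature states is read here
        # as supervised fitting.
        model.fit(feature_data, feature_states, epochs=5, verbose=0)
        return model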
5. The facial expression analysis method according to claim 1, wherein after the associating the feature data with a feature state corresponding to the feature data and determining a target feature model corresponding to the feature data and the feature state, the method further comprises:
determining a preset verification threshold;
verifying the target feature model based on a preset verification mode to obtain a verification error;
if the verification error is not within the range of the preset verification threshold, repeating the step of establishing the target feature model to reestablish the target feature model;
and if the verification error is within the range of the preset verification threshold, completing the establishment of the target feature model.
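The verification loop of claim 5 admits a sketch of the following shape, where train_model and compute_verification_error are hypothetical stand-ins for the model establishment and the preset verification mode:

    def establish_validated_model(train_model, compute_verification_error,
                                  preset_threshold: float, max_rounds: int = 10):
        for _ in range(max_rounds):
            model = train_model()
            error = compute_verification_error(model)
            # Within the preset verification threshold: establishment complete.
            if error <= preset_threshold:
                return model
            # Otherwise repeat the establishing step and rebuild the model.
        raise RuntimeError("verification error stayed outside the preset threshold")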
6. A facial expression analysis apparatus, comprising:
a data acquisition module, configured to acquire target data, wherein the target data comprises at least one of first face data, second face data and third face data;
a first data determining module, configured to perform a preprocessing operation on the target data and determine scaling data corresponding to the target data, wherein the preprocessing operation comprises:
adding the target data to a preset filter to perform noise removal processing, and determining first intermediate data corresponding to the target data;
carrying out standardization processing on the first intermediate data, and determining second intermediate data after the standardization processing;
determining a preset data interval, and scaling the second intermediate data into the preset data interval to obtain the scaling data corresponding to the second intermediate data;
a preprocessing module, configured to perform spatial domain enhancement processing on the scaling data to obtain preprocessing data corresponding to the scaling data, wherein the expression corresponding to the spatial domain enhancement processing is as follows:
g(x,y) = T[f(s,v)]
wherein f(s,v) represents a pixel point of the image corresponding to the scaling data, g(x,y) represents a pixel point of the image corresponding to the preprocessing data, and T represents a spatial domain enhancement function;
a second data determining module, configured to perform a feature extraction operation on the preprocessing data and determine feature data corresponding to the preprocessing data;
a model determining module, configured to associate the feature data with a feature state corresponding to the feature data and determine a target feature model corresponding to the feature data and the feature state;
and an analysis module, configured to analyze a facial expression according to the target feature model.
7. An electronic device comprising a memory and a processor;
wherein the memory is configured to store a computer program;
and the processor is configured to execute the computer program and, when executing the computer program, to implement the facial expression analysis method according to any one of claims 1 to 5.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to implement the facial expression analysis method according to any one of claims 1 to 5.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410553369.9A CN118135642B (en) 2024-05-07 2024-05-07 Facial expression analysis method and device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN118135642A (en) 2024-06-04
CN118135642B (en) 2024-08-23

Family

ID=91230566

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410553369.9A Active CN118135642B (en) 2024-05-07 2024-05-07 Facial expression analysis method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN118135642B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114743241A (en) * 2022-03-31 2022-07-12 网易(杭州)网络有限公司 Facial expression recognition method and device, electronic equipment and storage medium
CN117636436A (en) * 2023-12-08 2024-03-01 上海大学 Multi-person real-time facial expression recognition method and system based on attention mechanism

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6879709B2 (en) * 2002-01-17 2005-04-12 International Business Machines Corporation System and method for automatically detecting neutral expressionless faces in digital images
US9760767B1 (en) * 2016-09-27 2017-09-12 International Business Machines Corporation Rating applications based on emotional states
US10489690B2 (en) * 2017-10-24 2019-11-26 International Business Machines Corporation Emotion classification based on expression variations associated with same or similar emotions
US12112573B2 (en) * 2021-08-13 2024-10-08 Lemon Inc. Asymmetric facial expression recognition
CN115909455B (en) * 2022-11-16 2023-09-19 航天恒星科技有限公司 Expression recognition method integrating multi-scale feature extraction and attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant