
CN112487904A - Video image processing method and system based on big data analysis

Info

Publication number
CN112487904A
Authority
CN
China
Prior art keywords
image
face
analysis
database
video image
Prior art date
Legal status
Pending
Application number
CN202011317899.1A
Other languages
Chinese (zh)
Inventor
杨洋
Current Assignee
Chengdu Jinzhi Zhiyuan Technology Co ltd
Original Assignee
Chengdu Jinzhi Zhiyuan Technology Co ltd
Priority date: 2020-11-23
Filing date: 2020-11-23
Publication date: 2021-03-12
Application filed by Chengdu Jinzhi Zhiyuan Technology Co ltd filed Critical Chengdu Jinzhi Zhiyuan Technology Co ltd
Priority to CN202011317899.1A
Publication of CN112487904A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract


The invention relates to a video image processing method based on big data analysis. The method comprises: establishing a face recognition database and a face micro-expression database; collecting video images to obtain face images; preprocessing the collected video images to eliminate background color noise; and performing emotion analysis on the emotional features in the preprocessed video images to complete recognition of the video images. The advantage of the invention is that preprocessing the video image with edge extraction, histogram equalization, skin color segmentation and illumination compensation removes the redundant confusing colors from the image, making the subsequent analysis and recognition of details such as facial expressions more accurate and further shortening the recognition time.


Description

Video image processing method and system based on big data analysis
Technical Field
The invention relates to the technical field of big data analysis, in particular to a video image processing method and system based on big data analysis.
Background
Face recognition is a biometric technology that identifies a person based on facial feature information. It refers to a series of related technologies in which a camera collects an image or a video stream containing a face, automatically detects and tracks the face in the image, and then recognizes the detected face.
Most existing face recognition only identifies the general features of a face, so it can determine a person from a video image or judge whether the person can be successfully matched against a database. In some situations, however, the fine expressions of faces in a video image must be recognized, which requires analyzing the detailed portions of the image. Excessive background color in the video greatly hinders this detailed analysis, slowing down video image recognition and even reducing the overall recognition accuracy.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a video image processing method and a video image processing system based on big data analysis that remedy the shortcomings of existing face recognition methods.
The purpose of the invention is realized by the following technical scheme: a video image processing method based on big data analysis, the video image processing method comprising:
establishing a face recognition database and a face micro-expression database;
collecting video images to obtain face images;
preprocessing the collected video images to eliminate background color noise in the video images;
and performing emotion analysis on the emotional features in the preprocessed video images to complete recognition of the video images (a driver sketch for these four steps follows).
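Taken together, the four steps could be driven by a loop like the following sketch. The helper callables and their signatures are assumptions introduced for illustration only; implementations in the spirit of the later sketches could be passed in.

```python
# Hypothetical driver for the four method steps; cv2.VideoCapture handles the
# image-collection step, while preprocessing and emotion analysis are supplied
# as callables so the sketch stays self-contained.
import cv2

def run_pipeline(video_path, preprocess_frame, analyze_emotion, face_db, micro_expression_db):
    cap = cv2.VideoCapture(video_path)   # collect video images to obtain face images
    results = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        preprocessed = preprocess_frame(frame)        # eliminate background color noise
        if preprocessed is not None:
            results.append(analyze_emotion(preprocessed, face_db, micro_expression_db))
    cap.release()
    return results
```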
Further, preprocessing the collected video images to eliminate the background color noise in the video images comprises:
capturing the contour of the face image in the collected video image, separating the skin color region from the background image, cropping the captured face image out of the video background, and then performing binarization;
extracting the edges of the binarized image, and removing image regions with weak edges and flat background regions so as to equalize the distribution of pixel values in the image;
and removing the binarization effect from the equalized image and performing illumination compensation to overcome the interference of uneven brightness with the result (one possible OpenCV implementation is sketched below).
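As a concrete illustration of this preprocessing chain, the sketch below shows one possible OpenCV realisation. The skin-colour range, Canny thresholds and CLAHE parameters are illustrative assumptions rather than values taken from the patent.

```python
# Sketch of the preprocessing chain described above (assumed OpenCV/NumPy
# realisation; all numeric parameters are illustrative, not from the patent).
import cv2
import numpy as np

def preprocess_frame(frame_bgr):
    # 1. Skin-colour segmentation in YCrCb to separate the face from the background.
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # assumed skin range

    # 2. Crop the largest skin region (taken here as the face) out of the background.
    contours, _ = cv2.findContours(skin_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    face = frame_bgr[y:y + h, x:x + w]

    # 3. Binarise the cropped face (Otsu threshold on the grey image).
    gray = cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # 4. Edge extraction (weak edges and flat background are suppressed by the
    #    Canny hysteresis thresholds), then equalise the grey histogram.
    edges = cv2.Canny(binary, 50, 150)
    equalized = cv2.equalizeHist(gray)

    # 5. Illumination compensation: CLAHE evens out uneven brightness on the
    #    equalised (no longer binary) image.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    compensated = clahe.apply(equalized)
    return {"face": face, "edges": edges, "compensated": compensated}
```

In this reading, the binarised image is used only for edge extraction; equalisation and illumination compensation operate on the grey face crop, matching the step that removes the binarisation effect before compensation.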
Further, equalizing the distribution of pixel values in the image specifically includes the following:
performing histogram equalization on the input image, transforming it to the frequency domain with a 2D-FFT, and computing the correlation between the input image and an average face template with an optimal adaptive correlator;
dividing the output of the filter into three parts, a face region, a possible face region and a background region, according to thresholds; performing local gray-level equalization on the image to be detected within a 3×3 window; and finally outputting the background region with the OAC filter (one possible realization is sketched below).
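A minimal sketch of this frequency-domain matching step follows. The plain conjugate-product correlator and the two thresholds stand in for the patent's optimal adaptive correlator (OAC), whose exact form is not specified, and the 3×3 local grey-level equalisation step is omitted here.

```python
# Sketch of the template-correlation step, assuming a plain conjugate-product
# correlator in place of the patent's optimal adaptive correlator (OAC).
# Threshold values are illustrative assumptions.
import cv2
import numpy as np

def correlate_with_average_face(gray_image, avg_face_template, t_face=0.8, t_maybe=0.5):
    # gray_image and avg_face_template are expected to be 8-bit grayscale arrays.
    img = cv2.equalizeHist(gray_image).astype(np.float32)   # histogram equalization first
    tmpl = cv2.resize(avg_face_template, (img.shape[1], img.shape[0])).astype(np.float32)

    # 2D-FFT correlation: multiply the image spectrum by the conjugate
    # template spectrum and transform back to the spatial domain.
    response = np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(tmpl))))
    response = (response - response.min()) / (np.ptp(response) + 1e-9)

    # Split the correlator output into face / possible-face / background regions.
    face_region = response >= t_face
    possible_face = (response >= t_maybe) & (response < t_face)
    background = response < t_maybe
    return face_region, possible_face, background
```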
Further, performing emotion analysis on the emotional features in the preprocessed video image and completing recognition of the video image includes:
analyzing the color parameters of the preprocessed image, judging the range of the face from the color changes in the image, adjusting the cropped picture according to the feedback on the face position, pixelating the face-position image according to the face position information, and performing pixel-statistics face recognition on the image pixels in combination with the face feature data of the database;
analyzing and defining face feature points from the recognized face in combination with the micro-expression features in the database, generating a face model, calibrating the feature points on the model, analyzing and adjusting the positions of the feature points in real time, and recording the displacement of the feature point positions;
and calling the micro-expression features and psychological behavior features in the database to analyze the person's emotions and emotional changes according to the displacement of the feature points on the face model, and outputting the analysis result (a sketch of this displacement-based matching follows).
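The displacement-based step could be realised along the following lines. The landmark tracker is left abstract, and the nearest-match lookup against a dictionary of stored micro-expression displacement patterns is an assumed database layout, since the patent does not specify either.

```python
# Sketch of the feature-point displacement analysis; the micro-expression
# "database" is assumed to be a mapping from emotion label to a reference
# displacement vector, which is not a structure defined by the patent.
import numpy as np

def landmark_displacements(prev_points, curr_points):
    # Per-landmark Euclidean displacement between consecutive frames.
    return np.linalg.norm(np.asarray(curr_points, float) - np.asarray(prev_points, float), axis=1)

def classify_emotion(displacements, micro_expression_db):
    # Nearest-neighbour match of the displacement pattern against stored records.
    best_label, best_distance = None, float("inf")
    for label, reference in micro_expression_db.items():
        distance = float(np.linalg.norm(displacements - np.asarray(reference, float)))
        if distance < best_distance:
            best_label, best_distance = label, distance
    return best_label
```

A real system would likely accumulate these displacements over several frames before matching, since micro-expressions unfold over time rather than between two adjacent frames.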
A video image processing system based on big data analysis comprises a database construction module, an image preprocessing module and a face emotion analysis module;
the database construction module is used for establishing a face recognition database and a face micro-expression database;
the image preprocessing module is used for preprocessing the collected video images to eliminate background color noise in the video images;
the face emotion analysis module is used for performing emotion analysis on the emotional features in the preprocessed video images to complete recognition of the video images (a skeletal wiring of these modules is sketched below).
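A skeletal wiring of the three modules might look like the following. The class and method names are illustrative assumptions; the patent names the modules and their responsibilities but defines no interfaces.

```python
# Skeleton of the three modules named above; all identifiers are illustrative.
class DatabaseConstructionModule:
    """Builds the face recognition and face micro-expression databases."""
    def build(self):
        return {"faces": {}, "micro_expressions": {}}

class ImagePreprocessingModule:
    """Removes background color noise from a collected video frame."""
    def preprocess(self, frame):
        raise NotImplementedError  # e.g. along the lines of the preprocess_frame sketch above

class FaceEmotionAnalysisModule:
    """Runs emotion analysis on the preprocessed frame against the databases."""
    def analyze(self, preprocessed, databases):
        raise NotImplementedError

class VideoImageProcessingSystem:
    def __init__(self):
        self.database_module = DatabaseConstructionModule()
        self.preprocessing_module = ImagePreprocessingModule()
        self.emotion_module = FaceEmotionAnalysisModule()

    def process(self, frame):
        databases = self.database_module.build()
        preprocessed = self.preprocessing_module.preprocess(frame)
        return self.emotion_module.analyze(preprocessed, databases)
```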
Furthermore, the image preprocessing module comprises a face contour cropping and processing unit, an edge extraction and equalization unit, and an illumination compensation unit;
the face contour cropping and processing unit is used for capturing the contour of the face image in the collected video image, separating the skin color region from the background image, cropping the captured face image out of the video background, and then performing binarization;
the edge extraction and equalization unit is used for extracting the edges of the binarized image, and equalizing the distribution of pixel values in the image after removing image regions with weak edges and flat background regions;
the illumination compensation unit is used for removing the binarization effect from the equalized image and performing illumination compensation to overcome the interference of uneven brightness with the result.
Furthermore, the face emotion analysis module comprises a color analysis and pixel statistics unit, a feature point analysis and modeling unit, and an emotion analysis unit;
the color analysis and pixel statistics unit is used for analyzing the color parameters of the preprocessed image, judging the range of the face from the color changes in the image, adjusting the cropped picture according to the feedback on the face position, pixelating the face-position image according to the face position information, and performing pixel-statistics face recognition on the image pixels in combination with the face feature data of the database;
the feature point analysis and modeling unit is used for analyzing and defining face feature points from the recognized face in combination with the micro-expression features in the database, generating a face model, calibrating the feature points on the model, analyzing and adjusting the positions of the feature points in real time, and recording the displacement of the feature point positions;
the emotion analysis unit is used for calling the micro-expression features and psychological behavior features in the database to analyze the person's emotions and emotional changes according to the displacement of the feature points on the face model, and outputting the analysis result.
The invention has the following advantages: the video image processing method and system based on big data analysis remove redundant confusing colors from the image by preprocessing the video image with edge extraction, histogram equalization, skin color segmentation and illumination compensation, so that the subsequent analysis and recognition of details such as facial expressions is more accurate, and the recognition time is further shortened.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as presented in the figures, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application. The invention is further described below with reference to the accompanying drawings.
As shown in FIG. 1, the present invention relates to a video image processing method based on big data analysis, which includes:
S1, establishing a face recognition database and a face micro-expression database;
S2, collecting video images to obtain face images;
S3, preprocessing the collected video images to eliminate background color noise in the video images;
and S4, performing emotion analysis on the emotional features in the preprocessed video images, and completing recognition of the video images.
Further, preprocessing the collected video images to eliminate the background color noise in the video images comprises:
capturing the contour of the face image in the collected video image, separating the skin color region from the background image, cropping the captured face image out of the video background, and then performing binarization;
extracting the edges of the binarized image, and removing image regions with weak edges and flat background regions so as to equalize the distribution of pixel values in the image;
and removing the binarization effect from the equalized image and performing illumination compensation to overcome the interference of uneven brightness with the result.
Further, equalizing the distribution of pixel values in the image specifically includes the following:
performing histogram equalization on the input image, transforming it to the frequency domain with a 2D-FFT, and computing the correlation between the input image and an average face template with an optimal adaptive correlator;
dividing the output of the filter into three parts, a face region, a possible face region and a background region, according to thresholds; performing local gray-level equalization on the image to be detected within a 3×3 window; and finally outputting the background region with the OAC filter.
Further, performing emotion analysis on the emotional features in the preprocessed video image and completing recognition of the video image includes:
analyzing the color parameters of the preprocessed image, judging the range of the face from the color changes in the image, adjusting the cropped picture according to the feedback on the face position, pixelating the face-position image according to the face position information, and performing pixel-statistics face recognition on the image pixels in combination with the face feature data of the database;
analyzing and defining face feature points from the recognized face in combination with the micro-expression features in the database, generating a face model, calibrating the feature points on the model, analyzing and adjusting the positions of the feature points in real time, and recording the displacement of the feature point positions;
and calling the micro-expression features and psychological behavior features in the database to analyze the person's emotions and emotional changes according to the displacement of the feature points on the face model, and outputting the analysis result.
The invention further relates to a video image processing system based on big data analysis, which comprises a database construction module, an image preprocessing module and a face emotion analysis module;
the database construction module is used for establishing a face recognition database and a face micro-expression database;
the image preprocessing module is used for preprocessing the collected video images to eliminate background color noise in the video images;
the face emotion analysis module is used for performing emotion analysis on the emotional features in the preprocessed video images to complete recognition of the video images.
Furthermore, the image preprocessing module comprises a face contour cropping and processing unit, an edge extraction and equalization unit, and an illumination compensation unit;
the face contour cropping and processing unit is used for capturing the contour of the face image in the collected video image, separating the skin color region from the background image, cropping the captured face image out of the video background, and then performing binarization;
the edge extraction and equalization unit is used for extracting the edges of the binarized image, and equalizing the distribution of pixel values in the image after removing image regions with weak edges and flat background regions;
the illumination compensation unit is used for removing the binarization effect from the equalized image and performing illumination compensation to overcome the interference of uneven brightness with the result.
Furthermore, the face emotion analysis module comprises a color analysis and pixel statistics unit, a feature point analysis and modeling unit, and an emotion analysis unit;
the color analysis and pixel statistics unit is used for analyzing the color parameters of the preprocessed image, judging the range of the face from the color changes in the image, adjusting the cropped picture according to the feedback on the face position, pixelating the face-position image according to the face position information, and performing pixel-statistics face recognition on the image pixels in combination with the face feature data of the database;
the feature point analysis and modeling unit is used for analyzing and defining face feature points from the recognized face in combination with the micro-expression features in the database, generating a face model, calibrating the feature points on the model, analyzing and adjusting the positions of the feature points in real time, and recording the displacement of the feature point positions;
the emotion analysis unit is used for calling the micro-expression features and psychological behavior features in the database to analyze the person's emotions and emotional changes according to the displacement of the feature points on the face model, and outputting the analysis result.
The foregoing is illustrative of the preferred embodiments of this invention, and it is to be understood that the invention is not limited to the precise form disclosed herein; various other combinations, modifications, and environments may be resorted to within the scope of the concept disclosed herein, as described above or as apparent to those skilled in the relevant art. Modifications and variations may be effected by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (7)

1. A video image processing method based on big data analysis, characterized in that the video image processing method comprises:
establishing a face recognition database and a face micro-expression database;
collecting video images to obtain face images;
preprocessing the collected video images to eliminate background color noise in the video images;
performing emotion analysis on the emotional features in the preprocessed video images to complete recognition of the video images.

2. The video image processing method based on big data analysis according to claim 1, characterized in that preprocessing the collected video images to eliminate background color noise in the video images comprises:
capturing the contour of the face image in the collected video image, separating the skin color region from the background image, cropping the captured face image out of the video background, and then performing binarization;
extracting the edges of the binarized image, and removing image regions with weak edges and flat background regions so as to equalize the distribution of pixel values in the image;
removing the binarization effect from the equalized image and performing illumination compensation to overcome the interference of uneven brightness with the result.

3. The video image processing method based on big data analysis according to claim 2, characterized in that equalizing the distribution of pixel values in the image specifically comprises:
performing histogram equalization on the input image, transforming it to the frequency domain with a 2D-FFT, and computing the correlation between the input image and an average face template with an optimal adaptive correlator;
dividing the output of the filter into three parts, a face region, a possible face region and a background region, according to thresholds; performing local gray-level equalization on the image to be detected within a 3×3 window; and finally outputting the background region with the OAC filter.

4. The video image processing method based on big data analysis according to claim 2, characterized in that performing emotion analysis on the emotional features in the preprocessed video images and completing recognition of the video images comprises:
analyzing the color parameters of the preprocessed image, judging the range of the face from the color changes in the image, adjusting the cropped picture according to the feedback on the face position, pixelating the face-position image according to the face position information, and performing pixel-statistics face recognition on the image pixels in combination with the face feature data of the database;
analyzing and defining face feature points from the recognized face in combination with the micro-expression features in the database, generating a face model, calibrating the feature points on the model, analyzing and adjusting the positions of the feature points in real time, and recording the displacement of the feature point positions;
calling the micro-expression features and psychological behavior features in the database to analyze the person's emotions and emotional changes according to the displacement of the feature points on the face model, and outputting the analysis result.

5. A video image processing system based on big data analysis, characterized in that it comprises a database construction module, an image preprocessing module and a face emotion analysis module;
the database construction module is used for establishing a face recognition database and a face micro-expression database;
the image preprocessing module is used for preprocessing the collected video images to eliminate background color noise in the video images;
the face emotion analysis module is used for performing emotion analysis on the emotional features in the preprocessed video images to complete recognition of the video images.

6. The video image processing system based on big data analysis according to claim 5, characterized in that the image preprocessing module comprises a face contour cropping and processing unit, an edge extraction and equalization unit, and an illumination compensation unit;
the face contour cropping and processing unit is used for capturing the contour of the face image in the collected video image, separating the skin color region from the background image, cropping the captured face image out of the video background, and then performing binarization;
the edge extraction and equalization unit is used for extracting the edges of the binarized image, and equalizing the distribution of pixel values in the image after removing image regions with weak edges and flat background regions;
the illumination compensation unit is used for removing the binarization effect from the equalized image and performing illumination compensation to overcome the interference of uneven brightness with the result.

7. The video image processing system based on big data analysis according to claim 5, characterized in that the face emotion analysis module comprises a color analysis and pixel statistics unit, a feature point analysis and modeling unit, and an emotion analysis unit;
the color analysis and pixel statistics unit is used for analyzing the color parameters of the preprocessed image, judging the range of the face from the color changes in the image, adjusting the cropped picture according to the feedback on the face position, pixelating the face-position image according to the face position information, and performing pixel-statistics face recognition on the image pixels in combination with the face feature data of the database;
the feature point analysis and modeling unit is used for analyzing and defining face feature points from the recognized face in combination with the micro-expression features in the database, generating a face model, calibrating the feature points on the model, analyzing and adjusting the positions of the feature points in real time, and recording the displacement of the feature point positions;
the emotion analysis unit is used for calling the micro-expression features and psychological behavior features in the database to analyze the person's emotions and emotional changes according to the displacement of the feature points on the face model, and outputting the analysis result.
CN202011317899.1A 2020-11-23 2020-11-23 Video image processing method and system based on big data analysis Pending CN112487904A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011317899.1A CN112487904A (en) 2020-11-23 2020-11-23 Video image processing method and system based on big data analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011317899.1A CN112487904A (en) 2020-11-23 2020-11-23 Video image processing method and system based on big data analysis

Publications (1)

Publication Number Publication Date
CN112487904A true CN112487904A (en) 2021-03-12

Family

ID=74932718

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011317899.1A Pending CN112487904A (en) 2020-11-23 2020-11-23 Video image processing method and system based on big data analysis

Country Status (1)

Country Link
CN (1) CN112487904A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113850247A (en) * 2021-12-01 2021-12-28 环球数科集团有限公司 A travel video sentiment analysis system integrating text information

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102360421A (en) * 2011-10-19 2012-02-22 苏州大学 Face identification method and system based on video streaming
CN106886770A (en) * 2017-03-07 2017-06-23 佛山市融信通企业咨询服务有限公司 A kind of video communication sentiment analysis householder method
CN106909907A (en) * 2017-03-07 2017-06-30 佛山市融信通企业咨询服务有限公司 A kind of video communication sentiment analysis accessory system
CN106919924A (en) * 2017-03-07 2017-07-04 佛山市融信通企业咨询服务有限公司 A kind of mood analysis system based on the identification of people face
CN109255319A (en) * 2018-09-02 2019-01-22 珠海横琴现联盛科技发展有限公司 For the recognition of face payment information method for anti-counterfeit of still photo
WO2019184125A1 (en) * 2018-03-30 2019-10-03 平安科技(深圳)有限公司 Micro-expression-based risk identification method and device, equipment and medium
KR20200063292A (en) * 2018-11-16 2020-06-05 광운대학교 산학협력단 Emotional recognition system and method based on face images
CN111524080A (en) * 2020-04-22 2020-08-11 杭州夭灵夭智能科技有限公司 Face skin feature identification method, terminal and computer equipment
CN111611940A (en) * 2020-05-22 2020-09-01 西安佐尔电子技术有限公司 Rapid video face recognition method based on big data processing
CN111931671A (en) * 2020-08-17 2020-11-13 青岛北斗天地科技有限公司 Face recognition method for illumination compensation in underground coal mine adverse light environment

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102360421A (en) * 2011-10-19 2012-02-22 苏州大学 Face identification method and system based on video streaming
CN106886770A (en) * 2017-03-07 2017-06-23 佛山市融信通企业咨询服务有限公司 A kind of video communication sentiment analysis householder method
CN106909907A (en) * 2017-03-07 2017-06-30 佛山市融信通企业咨询服务有限公司 A kind of video communication sentiment analysis accessory system
CN106919924A (en) * 2017-03-07 2017-07-04 佛山市融信通企业咨询服务有限公司 A kind of mood analysis system based on the identification of people face
WO2019184125A1 (en) * 2018-03-30 2019-10-03 平安科技(深圳)有限公司 Micro-expression-based risk identification method and device, equipment and medium
CN109255319A (en) * 2018-09-02 2019-01-22 珠海横琴现联盛科技发展有限公司 For the recognition of face payment information method for anti-counterfeit of still photo
KR20200063292A (en) * 2018-11-16 2020-06-05 광운대학교 산학협력단 Emotional recognition system and method based on face images
CN111524080A (en) * 2020-04-22 2020-08-11 杭州夭灵夭智能科技有限公司 Face skin feature identification method, terminal and computer equipment
CN111611940A (en) * 2020-05-22 2020-09-01 西安佐尔电子技术有限公司 Rapid video face recognition method based on big data processing
CN111931671A (en) * 2020-08-17 2020-11-13 青岛北斗天地科技有限公司 Face recognition method for illumination compensation in underground coal mine adverse light environment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王金云; 周晖杰; 纪政: "Research on face recognition technology in complex backgrounds", 计算机工程 (Computer Engineering), no. 08 *
董静; 王万森: "Research on emotion recognition in E-learning systems", 计算机工程与设计 (Computer Engineering and Design), no. 17 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113850247A (en) * 2021-12-01 2021-12-28 环球数科集团有限公司 A travel video sentiment analysis system integrating text information
CN113850247B (en) * 2021-12-01 2022-02-08 环球数科集团有限公司 Tourism video emotion analysis system fused with text information

Similar Documents

Publication Publication Date Title
CN107798279B (en) Face living body detection method and device
CN110765838B (en) Real-time dynamic analysis method for facial feature region for emotional state monitoring
CN106446753A (en) Negative expression identifying and encouraging system
Monwar et al. Pain recognition using artificial neural network
CN105335691A (en) Smiling face identification and encouragement system
CN107563427A (en) Method for copyright identification of oil paintings and corresponding use
CN111967319A (en) Infrared and visible light based in-vivo detection method, device, equipment and storage medium
CN112861588B (en) Living body detection method and device
CN113076860B (en) Bird detection system under field scene
CN110322470A (en) Action recognition device, action recognition method and recording medium
CN112487904A (en) Video image processing method and system based on big data analysis
Priesnitz et al. MCLFIQ: mobile contactless fingerprint image quality
CN113610071B (en) Face living body detection method and device, electronic equipment and storage medium
CN107729828A (en) A kind of fingerprint image acquisition method, electronic equipment, storage medium and system
Dharavath et al. Impact of image preprocessing on face recognition: A comparative analysis
Qaisar et al. Scene to text conversion and pronunciation for visually impaired people
CN111325118A (en) Method for identity authentication based on video and video equipment
CN114998968B (en) A method for analyzing classroom interactive behavior based on audio and video
CN113887314A (en) Vehicle driving direction identification method and device, computer equipment and storage medium
CN114220161A (en) A kind of classroom behavior detection method
CN112861587B (en) Living body detection method and device
CN116959074B (en) Human skin detection method and device based on multispectral imaging
CN114680853B (en) A fast heart rate detection method under complex light sources based on face recognition
KR102458614B1 (en) System for diagnosing skin using non-contact type rgb camera and skin diagnosis method thereof
EP4468261A1 (en) Method for finger pose correction in 2d fingerprint images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2021-03-12