
CN107633205B - Lip motion analysis method, device and storage medium

Lip motion analysis method, device and storage medium

Info

Publication number
CN107633205B
Authority
CN
China
Prior art keywords
lip
real
image
region
lips
Prior art date
Legal status
Active
Application number
CN201710708364.9A
Other languages
Chinese (zh)
Other versions
CN107633205A (en)
Inventor
陈林
张国辉
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN201710708364.9A
Priority to PCT/CN2017/108749 (WO2019033570A1)
Publication of CN107633205A
Application granted
Publication of CN107633205B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a lip motion analysis method, device and storage medium. The method comprises: acquiring a real-time image captured by a camera device and extracting a real-time face image from the real-time image; inputting the real-time face image into a pre-trained lip average model and identifying t lip feature points representing the lip position in the real-time face image; determining a lip region according to the t lip feature points, inputting the lip region into a pre-trained lip classification model, and judging whether the lip region is a human lip region; and if so, calculating the motion direction and motion distance of the lips in the real-time face image according to the x and y coordinates of the t lip feature points. The invention calculates the motion information of the lips in the real-time face image according to the coordinates of the lip feature points, realizing analysis of the lip region and real-time capture of lip motion.

Description

Lip motion analysis method, device and storage medium
Technical Field
The invention relates to the technical field of computer vision processing, in particular to a lip motion analysis method and device and a computer readable storage medium.
Background
Lip motion capture is a biometric technique that recognizes a user's lip motion based on facial feature information. At present, lip motion capture has a very wide range of applications and plays an important role in many fields such as access-control attendance and identity recognition, bringing great convenience to people's lives. Lip motion capture is generally implemented with a deep learning method: a classification model of lip features is trained through deep learning, and the features of the lips are then judged with that classification model.
However, when lip features are trained with a deep learning method, the lip states that can be recognized depend entirely on the types of lip samples. For example, to judge an open mouth versus a closed mouth, a large number of open-mouth and closed-mouth samples must be collected; to additionally recognize a pout (mouth skew), a large number of pout samples must be collected and the model trained again. This is not only time-consuming, it also makes real-time capture impossible. In addition, when lip features are judged with a classification model of lip features, it cannot be analyzed whether the identified lip region is a human lip region.
Disclosure of Invention
The invention provides a lip motion analysis method, a device and a computer readable storage medium, and mainly aims to calculate motion information of lips in a real-time face image according to coordinates of lip feature points so as to realize analysis of lip regions and real-time capture of lip motions.
To achieve the above object, the present invention provides an electronic device comprising a memory, a processor and a camera device, wherein the memory includes a lip motion analysis program, and the lip motion analysis program implements the following steps when executed by the processor:
a real-time facial image acquisition step: acquiring a real-time image shot by a camera device, and extracting a real-time face image from the real-time image by using a face recognition algorithm;
a characteristic point identification step: inputting the real-time face image into a pre-trained lip average model, and identifying t lip feature points representing the position of lips in the real-time face image by using the lip average model;
a lip region identification step: determining a lip region according to the t lip feature points, inputting the lip region into a pre-trained lip classification model, and judging whether the lip region is a human lip region; and
lip movement judging step: and if the lip area is the human lip area, calculating the motion direction and the motion distance of the lips in the real-time face image according to the x and y coordinates of the t lip feature points in the real-time face image.
Optionally, when executed by the processor, the lip motion analysis program further implements the following steps:
a prompting step: and when the lip classification model judges that the lip region is not the lip region of the person, prompting that the lip region of the person is not detected from the current real-time image and lip movement cannot be judged, and returning to the real-time facial image acquisition step.
Optionally, the training step of the lip average model includes:
establishing a first sample library with n face images, and marking t feature points at the lip part of each face image in the first sample library, wherein the t feature points are uniformly distributed over the upper lip, the lower lip, and the left and right lip corners; and
and training a face feature recognition model by using the face image marked with the lip feature points to obtain a lip average model related to the face.
Optionally, the training step of the lip classification model includes:
collecting m lip positive sample images and k lip negative sample images to form a second sample library;
extracting local features of each lip positive sample image and each lip negative sample image; and
and training the support vector machine classifier by using the lip positive sample image, the lip negative sample image and the local features thereof to obtain a lip classification model of the face.
Optionally, the lip movement judging step includes:
calculating the distance between the central feature point of the inner side of the upper lip and the central feature point of the inner side of the lower lip in the real-time face image, and judging the opening degree of the lips;
connecting the left outer lip-corner feature point respectively with the feature points closest to it on the outer contour lines of the upper lip and the lower lip to form two vectors, and calculating the included angle between these two vectors to obtain the degree of leftward lip skew; and
connecting the right outer lip-corner feature point respectively with the feature points closest to it on the outer contour lines of the upper lip and the lower lip to form two vectors, and calculating the included angle between these two vectors to obtain the degree of rightward lip skew.
In addition, to achieve the above object, the present invention also provides a lip motion analysis method, including:
a real-time facial image acquisition step: acquiring a real-time image shot by a camera device, and extracting a real-time face image from the real-time image by using a face recognition algorithm;
a characteristic point identification step: inputting the real-time face image into a pre-trained lip average model, and identifying t lip feature points representing the position of lips in the real-time face image by using the lip average model;
a lip region identification step: determining a lip region according to the t lip feature points, inputting the lip region into a pre-trained lip classification model, and judging whether the lip region is a human lip region; and
lip movement judging step: and if the lip area is the human lip area, calculating the motion direction and the motion distance of the lips in the real-time face image according to the x and y coordinates of the t lip feature points in the real-time face image.
Optionally, the lip motion analysis method further comprises:
a prompting step: and when the lip classification model judges that the lip region is not the lip region of the person, prompting that the lip region of the person is not detected from the current real-time image and lip movement cannot be judged, and returning to the real-time facial image acquisition step.
Optionally, the training step of the lip average model includes:
establishing a first sample library with n face images, and marking t feature points at the lip part of each face image in the first sample library, wherein the t feature points are uniformly distributed over the upper lip, the lower lip, and the left and right lip corners; and
and training a face feature recognition model by using the face image marked with the lip feature points to obtain a lip average model related to the face.
Optionally, the training step of the lip classification model includes:
collecting m lip positive sample images and k lip negative sample images to form a second sample library;
extracting local features of each lip positive sample image and each lip negative sample image; and
and training the support vector machine classifier by using the lip positive sample image, the lip negative sample image and the local features thereof to obtain a lip classification model of the face.
Optionally, the lip movement judging step includes:
calculating the distance between the central feature point of the inner side of the upper lip and the central feature point of the inner side of the lower lip in the real-time face image, and judging the opening degree of the lips;
connecting the left outer lip-corner feature point respectively with the feature points closest to it on the outer contour lines of the upper lip and the lower lip to form two vectors, and calculating the included angle between these two vectors to obtain the degree of leftward lip skew; and
connecting the right outer lip-corner feature point respectively with the feature points closest to it on the outer contour lines of the upper lip and the lower lip to form two vectors, and calculating the included angle between these two vectors to obtain the degree of rightward lip skew.
In addition, to achieve the above object, the present invention further provides a computer-readable storage medium, where the computer-readable storage medium includes a lip motion analysis program, and when the lip motion analysis program is executed by a processor, the lip motion analysis program implements any step of the lip motion analysis method as described above.
According to the lip motion analysis method, device and computer readable storage medium, lip feature points are identified from the real-time face image, and it is judged whether the region formed by the lip feature points is a human lip region; if so, the motion information of the lips is calculated from the coordinates of the lip feature points. Analysis of the lip region and real-time capture of lip motion are thus realized without collecting samples of the various lip motions for deep learning.
Drawings
FIG. 1 is a diagram of an electronic device according to a preferred embodiment of the present invention;
FIG. 2 is a functional block diagram of a lip motion analysis program of FIG. 1;
FIG. 3 is a flowchart illustrating a lip movement analysis method according to a preferred embodiment of the present invention;
fig. 4 is a detailed flowchart of step S40 of the lip movement analysis method according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The invention provides an electronic device 1. Referring to fig. 1, a schematic diagram of an electronic device 1 according to a preferred embodiment of the invention is shown.
In the present embodiment, the electronic device 1 may be a terminal device having an arithmetic function, such as a server, a smart phone, a tablet computer, a portable computer, or a desktop computer.
The electronic device 1 includes: a processor 12, a memory 11, an imaging device 13, a network interface 14, and a communication bus 15. The camera device 13 is installed in a specific location, such as an office or a monitoring area, and captures a real-time image of a target entering the specific location in real time, and transmits the captured real-time image to the processor 12 through a network. The network interface 14 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). The communication bus 15 is used to realize connection communication between these components.
The memory 11 includes at least one type of readable storage medium. The at least one type of readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, a card-type memory, and the like. In some embodiments, the readable storage medium may be an internal storage unit of the electronic apparatus 1, such as a hard disk of the electronic apparatus 1. In other embodiments, the readable storage medium may also be an external memory of the electronic device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device 1.
In the present embodiment, the readable storage medium of the memory 11 is generally used for storing the lip motion analysis program 10 installed in the electronic device 1, a face image sample library, a human lip sample library, a constructed and trained lip average model and lip classification model, and the like. The memory 11 may also be used to temporarily store data that has been output or is to be output.
The processor 12 may be a Central Processing Unit (CPU), a microprocessor or another data processing chip in some embodiments, and is used for executing program code stored in the memory 11 or processing data, for example executing the lip motion analysis program 10.
Fig. 1 shows only the electronic device 1 with the components 11-15 and the lip motion analysis program 10, but it should be understood that not all of the shown components are required to be implemented, and that more or fewer components may be implemented instead.
Optionally, the electronic device 1 may further include a user interface. The user interface may include an input unit such as a keyboard, a voice input device such as a microphone or other equipment with a voice recognition function, and a voice output device such as a speaker or headset; optionally, the user interface may also include a standard wired interface and a wireless interface.
Optionally, the electronic device 1 may further comprise a display, which may also be appropriately referred to as a display screen or display unit. In some embodiments, the display device may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display is used for displaying information processed in the electronic apparatus 1 and for displaying a visualized user interface.
Optionally, the electronic device 1 further comprises a touch sensor. The area provided by the touch sensor for the user to perform touch operation is called a touch area. Further, the touch sensor described herein may be a resistive touch sensor, a capacitive touch sensor, or the like. The touch sensor may include not only a contact type touch sensor but also a proximity type touch sensor. Further, the touch sensor may be a single sensor, or may be a plurality of sensors arranged in an array, for example.
The area of the display of the electronic device 1 may be the same as or different from the area of the touch sensor. Optionally, a display is stacked with the touch sensor to form a touch display screen. The device detects touch operation triggered by a user based on the touch display screen.
Optionally, the electronic device 1 may further include a Radio Frequency (RF) circuit, a sensor, an audio circuit, and the like, which are not described herein again.
In the apparatus embodiment shown in fig. 1, a memory 11, which is a kind of computer storage medium, may include therein an operating system, and a lip motion analysis program 10; the processor 12, when executing the lip motion analysis program 10 stored in the memory 11, implements the following steps:
acquiring a real-time image captured by the camera device 13; extracting a real-time face image from the real-time image with a face recognition algorithm by the processor 12; calling the lip average model and the lip classification model from the memory 11; inputting the real-time face image into the lip average model and recognizing the lip feature points in the real-time face image; inputting the lip region determined by the lip feature points into the lip classification model and judging whether the lip region is a human lip region; if so, calculating the motion information of the lips in the real-time face image according to the coordinates of the lip feature points; otherwise, returning to the real-time face image acquisition step.
In other embodiments, the lip motion analysis program 10 may also be divided into one or more modules, and the one or more modules are stored in the memory 11 and executed by the processor 12 to implement the present invention. A module, as used herein, refers to a series of computer program instruction segments capable of performing a specified function.
Referring to fig. 2, a functional block diagram of the lip motion analysis program 10 of fig. 1 is shown.
The lip motion analysis program 10 may be divided into: the device comprises an acquisition module 110, an identification module 120, a judgment module 130, a calculation module 140 and a prompt module 150.
The acquiring module 110 is configured to acquire a real-time image captured by the camera 13, and extract a real-time face image from the real-time image by using a face recognition algorithm. When the camera device 13 captures a real-time image, the camera device 13 sends the real-time image to the processor 12, and when the processor 12 receives the real-time image, the acquiring module 110 extracts a real-time face image by using a face recognition algorithm.
Specifically, the face recognition algorithm for extracting the real-time facial image from the real-time image may be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, or the like.
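As an illustration only, the following is a minimal Python sketch of this face-extraction step using OpenCV's Haar-cascade detector; the detector choice, the camera index and the function name extract_realtime_face are assumptions, not elements prescribed by the patent.

```python
# Minimal sketch: extract a face region from a camera frame with OpenCV.
# The Haar cascade is one possible face recognition method, not the patent's prescribed one.
import cv2

def extract_realtime_face(frame):
    """Return the first detected face region of `frame`, or None."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return frame[y:y + h, x:x + w]

cap = cv2.VideoCapture(0)           # the camera device
ok, frame = cap.read()
face_img = extract_realtime_face(frame) if ok else None
```

Any other frontal-face detector (for example one of the methods listed above) could be substituted behind the same function boundary.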
The recognition module 120 is configured to input the real-time facial image into a pre-trained lip averaging model, and recognize t lip feature points representing the positions of lips in the real-time facial image by using the lip averaging model.
Assuming that there are 20 lip feature points in the lip average model, the 20 lip feature points are uniformly distributed. The recognition module 120 calls the trained lip mean model from the memory 11, aligns the real-time facial image with the lip mean model, and then searches the real-time facial image for 20 lip feature points matching with the 20 lip feature points of the lip mean model by using a feature extraction algorithm. The lip average model of the face is constructed and trained in advance, and a specific embodiment will be described in the following lip motion analysis method.
Assuming that the 20 lip feature points recognized from the real-time facial image by the recognition module 120 are still denoted P1-P20, the coordinates of the 20 lip feature points are (x1, y1), (x2, y2), (x3, y3), …, (x20, y20).
As shown in fig. 2, the upper and lower lips of the lip have 8 characteristic points (P1 to P8, P9 to P16), respectively, and the left and right lip corners have 2 characteristic points (P17 to P18, P19 to P20, respectively). Of the 8 characteristic points of the upper lip, 5 are positioned on the outer contour line (P1-P5) of the upper lip, and 3 are positioned on the inner contour line (P6-P8, and P7 is the inner central characteristic point of the upper lip); of the 8 characteristic points of the lower lip, 5 are located on the outer contour line of the lower lip (P9-P13), and 3 are located on the inner contour line of the lower lip (P14-P16, and P15 is the central characteristic point of the inner side of the lower lip). Of the 2 feature points of the left and right lip corners, 1 is located on the outer lip contour (for example, P18 and P20, hereinafter referred to as outer lip corner feature point) and 1 is located on the inner lip contour (for example, P17 and P19, hereinafter referred to as inner lip corner feature point).
In the present embodiment, the feature extraction algorithm is the SIFT (Scale-Invariant Feature Transform) algorithm. The SIFT algorithm extracts the local feature of each lip feature point from the lip average model of the face, selects one lip feature point as a reference feature point, and searches the real-time face image for feature points whose local features are the same as or similar to those of the reference feature point (for example, the difference between the local features of the two feature points is within a preset range); all the lip feature points in the real-time face image are found according to this principle. In other embodiments, the feature extraction algorithm may also be the SURF (Speeded-Up Robust Features) algorithm, the LBP (Local Binary Patterns) algorithm, the HOG (Histogram of Oriented Gradients) algorithm, etc.
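For illustration, the sketch below obtains 20 mouth landmarks with dlib's pre-trained 68-point shape predictor, whose indices 48-67 happen to cover the mouth; it stands in for the lip average model and SIFT-style matching described here and is an assumption, not the patent's own implementation.

```python
# Sketch: locate 20 lip feature points on a face image with dlib.
# dlib's 68-point model places 20 points on the mouth (indices 48-67),
# so it plays the role of the trained lip average model for this example.
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed file

def lip_feature_points(face_img):
    """Return [(x, y), ...] for the 20 mouth landmarks, or [] if no face is found."""
    rects = detector(face_img, 1)
    if not rects:
        return []
    shape = predictor(face_img, rects[0])
    return [(shape.part(i).x, shape.part(i).y) for i in range(48, 68)]
```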
The determining module 130 is configured to determine a lip region according to the t lip feature points, input the lip region into a pre-trained lip classification model, and determine whether the lip region is a human lip region. After the recognition module 120 recognizes 20 lip feature points from the real-time facial image, a lip region may be determined according to the 20 lip feature points, and then the determined lip region is input into the trained lip classification model, and whether the determined lip region is a lip region of a person is determined according to a result obtained by the model. The lip classification model is constructed and trained in advance, and a detailed embodiment will be described in the following lip motion analysis method.
The calculating module 140 is configured to calculate a moving direction and a moving distance of the lips in the real-time face image according to x and y coordinates of t lip feature points in the real-time face image if the lip region is a human lip region.
Specifically, the calculation module 140 is configured to:
calculating the distance between the central feature point of the inner side of the upper lip and the central feature point of the inner side of the lower lip in the real-time face image, and judging the opening degree of the lips;
connecting the left outer lip-corner feature point respectively with the feature points closest to it on the outer contour lines of the upper lip and the lower lip to form two vectors, and calculating the included angle between these two vectors to obtain the degree of leftward lip skew; and
connecting the right outer lip-corner feature point respectively with the feature points closest to it on the outer contour lines of the upper lip and the lower lip to form two vectors, and calculating the included angle between these two vectors to obtain the degree of rightward lip skew.
In the real-time face image, the coordinates of the inner center feature point P7 of the upper lip are (x7, y7) and the coordinates of the inner center feature point P15 of the lower lip are (x15, y15). When the determining module 130 determines that the lip region is a human lip region, the distance between the two points is:
d = sqrt((x7 - x15)^2 + (y7 - y15)^2)
if d is 0, it means that two points P7 and P15 are overlapped, that is, the lips are in a closed state; if d is larger than 0, the opening degree of the lips is judged according to the size of d, and the larger d is, the larger the opening degree of the lips is.
The coordinates of the left outer lip-corner feature point P18 are (x18, y18), and the coordinates of the feature points P1 and P9 closest to P18 on the outer contour lines of the upper and lower lips are (x1, y1) and (x9, y9). Connecting P18 with P1 and P9 forms the vectors V1 = (x1 - x18, y1 - y18) and V2 = (x9 - x18, y9 - y18), and the included angle α between them is calculated as:
cos α = (V1 · V2) / (|V1| |V2|)
where α denotes the angle between the vectors V1 and V2; the degree of leftward lip skew can be judged by calculating this angle, and the smaller the angle, the greater the leftward skew of the lips.
Similarly, the coordinates of the right outer lip-corner feature point P20 are (x20, y20), and the coordinates of the feature points P5 and P13 closest to P20 on the outer contour lines of the upper and lower lips are (x5, y5) and (x13, y13). Connecting P20 with P5 and P13 forms the vectors V3 = (x5 - x20, y5 - y20) and V4 = (x13 - x20, y13 - y20), and the included angle β between them is calculated as:
cos β = (V3 · V4) / (|V3| |V4|)
where β denotes the angle between the vectors V3 and V4; the degree of rightward lip skew can be judged by calculating this angle, and the smaller the angle, the greater the rightward skew of the lips.
The prompting module 150 is configured to prompt, when the lip classification model judges that the lip region is not a human lip region, that no human lip region was detected in the current real-time image and that lip motion cannot be judged, and to return the process to the real-time image capturing step to capture the next real-time image. That is, if the determination module 130 inputs the lip region determined by the 20 lip feature points into the lip classification model and the model result indicates that it is not a human lip region, the prompt module 150 prompts that no human lip region was recognized and that the next step of judging lip motion cannot be performed; at the same time, a real-time image captured by the camera device is acquired again and the subsequent steps are carried out.
The electronic device 1 provided in this embodiment extracts a real-time facial image from a real-time image, identifies lip feature points in the real-time facial image by using a lip average model, analyzes a lip region determined by using a lip classification model, and calculates motion information of lips in the real-time facial image according to coordinates of the lip feature points if the lip region is a human lip region, thereby realizing analysis of the lip region and real-time capture of lip motion.
In addition, the invention also provides a lip motion analysis method. Referring to fig. 3, a flowchart of a lip motion analysis method according to a preferred embodiment of the present invention is shown. The method may be performed by an apparatus, which may be implemented by software and/or hardware.
In this embodiment, the lip motion analysis method includes: step S10-step S50.
Step S10, a real-time image captured by the camera device is acquired, and a real-time facial image is extracted from the real-time image by using a face recognition algorithm. When the camera device shoots a real-time image, the camera device sends the real-time image to the processor, and after the processor receives the real-time image, the real-time face image is extracted by using a face recognition algorithm.
Specifically, the face recognition algorithm for extracting the real-time facial image from the real-time image may be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, or the like.
Step S20, inputting the real-time facial image into a pre-trained lip average model, and recognizing t lip feature points representing the positions of lips in the real-time facial image by using the lip average model.
A first sample library with n face images is established, and t feature points are manually marked at the lip part of each face image in the first sample library; the t feature points are uniformly distributed over the upper lip, the lower lip, and the left and right lip corners.
A face feature recognition model is trained with the face images marked with the lip feature points to obtain a lip average model for the face. The face feature recognition model is an Ensemble of Regression Trees (ERT) algorithm, formulated as follows:
Ŝ^(t+1) = Ŝ^(t) + τ_t(I, Ŝ^(t))
where t denotes the cascade level and τ_t(·, ·) represents the regressor at the current stage; each regressor is composed of many regression trees obtained by training. Ŝ^(t) is the shape estimate of the current model; each regressor τ_t predicts an increment from the input image I and the current shape estimate Ŝ^(t), and this increment is added to the current shape estimate to improve the current model. Each stage of the cascade performs its prediction according to the feature points. The training data set is (I1, S1), …, (In, Sn), where I is an input sample image and S is the shape feature vector consisting of the feature points in the sample image.
In the process of model training, the number of face images in the sample library is N. Assuming that t = 20, that is, each sample image has 20 lip feature points, a first regression tree is trained on a subset of the feature points of all sample images (for example, 15 points randomly taken from the 20 feature points of each sample image). The residual between the predicted values of the first regression tree and the true values of that subset (the weighted average of the 15 points taken from each sample image) is used to train the second tree, and so on, until the predicted values of the Nth tree are close to the true values of the subset. All regression trees of the ERT algorithm are thus obtained, the lip average model of the face is obtained from these regression trees, and the model file and the sample library are stored in the memory. Because 20 lip feature points are labeled in the sample images used to train the model, the trained lip average model of the face can be used to identify 20 lip feature points from a face image.
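As a hedged illustration, dlib exposes a trainer for the same ERT (cascaded regression trees) family of models; the sketch below shows how a 20-point lip predictor could be trained from an annotated sample library, where the XML file name, the output file name and the hyperparameter values are assumptions rather than the patent's settings.

```python
# Sketch: training an ERT-based landmark model with dlib's implementation
# of the cascaded-regression-trees algorithm described above.
import dlib

options = dlib.shape_predictor_training_options()
options.cascade_depth = 10                 # number of cascade levels t
options.num_trees_per_cascade_level = 500  # regression trees per level
options.tree_depth = 4
options.nu = 0.1                           # learning-rate / regularization
options.oversampling_amount = 20
options.be_verbose = True

# "lip_training.xml" is assumed to list the n face images and, for each,
# the 20 labeled lip feature points (upper lip, lower lip, left/right corners).
dlib.train_shape_predictor("lip_training.xml", "lip_mean_model.dat", options)

predictor = dlib.shape_predictor("lip_mean_model.dat")  # model used at run time
```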
After the trained lip mean model is called from the memory, the real-time facial image is aligned with the lip mean model, and 20 lip feature points matching the 20 lip feature points of the lip mean model are then searched for in the real-time facial image with a feature extraction algorithm. Assuming that the 20 lip feature points identified from the real-time face image are still denoted P1-P20, the coordinates of the 20 lip feature points are (x1, y1), (x2, y2), (x3, y3), …, (x20, y20).
As shown in fig. 2, the upper and lower lips of the lip have 8 characteristic points (P1 to P8, P9 to P16), respectively, and the left and right lip corners have 2 characteristic points (P17 to P18, P19 to P20, respectively). Of the 8 characteristic points of the upper lip, 5 are positioned on the outer contour line (P1-P5) of the upper lip, and 3 are positioned on the inner contour line (P6-P8, and P7 is the inner central characteristic point of the upper lip); of the 8 characteristic points of the lower lip, 5 are located on the outer contour line of the lower lip (P9-P13), and 3 are located on the inner contour line of the lower lip (P14-P16, and P15 is the central characteristic point of the inner side of the lower lip). Of the 2 feature points of the left and right lip corners, 1 is located on the outer lip contour (for example, P18 and P20, hereinafter referred to as outer lip corner feature point) and 1 is located on the inner lip contour (for example, P17 and P19, hereinafter referred to as inner lip corner feature point).
Specifically, the feature extraction algorithm may also be a SIFT algorithm, a SURF algorithm, an LBP algorithm, a HOG algorithm, or the like.
Step S30, determining a lip region according to the t lip feature points, inputting the lip region into a pre-trained lip classification model, and judging whether the lip region is a human lip region.
M lip positive sample images and k lip negative sample images are collected to form a second sample library. A lip positive sample image is an image containing human lips; lip parts can be extracted from the face image sample library and used as lip positive sample images. A lip negative sample image is an image of an incomplete human lip region, or an image whose lips are not human lips (e.g., an animal's).
Local features are extracted from each lip positive sample image and each lip negative sample image. Specifically, Histogram of Oriented Gradients (HOG) features of the lip sample images are extracted with a feature extraction algorithm. Since color information contributes little here, a lip sample image is generally converted to a gray-scale image and the whole image is normalized; the gradients along the horizontal and vertical coordinates of the image are computed, and the gradient direction value at each pixel position is computed from them, which captures contour, shading and some texture information and further weakens the influence of illumination. The whole image is then divided into individual cells, a histogram of gradient directions is built for each cell, and the local image gradient information is counted and quantized to obtain a feature description vector of the local image region. The cells are then grouped into larger blocks; because changes in local illumination and in foreground-background contrast make the gradient magnitude vary over a wide range, the gradient magnitude is normalized within each block, which further compresses illumination, shadows and edges. Finally, the HOG descriptors of all blocks are combined to form the final HOG feature description vector.
A Support Vector Machine (SVM) classifier is then trained with the lip positive sample images, the lip negative sample images and the extracted HOG features to obtain the lip classification model of the face.
After 20 lip feature points are identified from the real-time facial image, a lip region can be determined according to the 20 lip feature points, then the determined lip region is input into a trained lip classification model, and whether the determined lip region is a lip region of a person is judged according to a result obtained by the model.
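For illustration, a minimal HOG-plus-SVM lip classifier along the lines described above could look like the following sketch with scikit-image and scikit-learn; the patch size, the LinearSVC choice and the function names are assumptions, not values fixed by the patent.

```python
# Sketch of a HOG + SVM lip classifier: train on positive/negative lip patches,
# then decide whether a candidate lip region is a human lip region.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC

def hog_features(img):
    """Gray-scale, resize to a fixed patch, and compute the HOG descriptor."""
    patch = resize(rgb2gray(img), (32, 64))
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

def train_lip_classifier(positive_patches, negative_patches):
    """positive_patches: m cropped human-lip images; negative_patches: k non-lip images."""
    X = np.array([hog_features(p) for p in list(positive_patches) + list(negative_patches)])
    y = np.array([1] * len(positive_patches) + [0] * len(negative_patches))
    clf = LinearSVC(C=1.0)
    clf.fit(X, y)
    return clf

def is_human_lip_region(clf, lip_region):
    """Apply the trained model to the lip region cropped around the 20 feature points."""
    return clf.predict([hog_features(lip_region)])[0] == 1
```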
Step S40, if the lip region is a human lip region, calculating the motion direction and the motion distance of the lips in the real-time facial image according to the x and y coordinates of the t lip feature points in the real-time facial image.
Fig. 4 is a detailed flowchart of step S40 in the lip motion analysis method according to the present invention. Specifically, step S40 includes:
step S41, calculating the distance between the central characteristic point of the inner side of the upper lip and the central characteristic point of the inner side of the lower lip in the real-time face image, and judging the opening degree of the lips;
Step S42, connecting the left outer lip-corner feature point respectively with the feature points closest to it on the outer contour lines of the upper lip and the lower lip to form two vectors, and calculating the included angle between these two vectors to obtain the degree of leftward lip skew; and
Step S43, connecting the right outer lip-corner feature point respectively with the feature points closest to it on the outer contour lines of the upper lip and the lower lip to form two vectors, and calculating the included angle between these two vectors to obtain the degree of rightward lip skew.
In the real-time face image, the coordinates of the inner center feature point P7 of the upper lip are (x7, y7), the coordinates of the inner center feature point P15 of the lower lip are (x15, y15), and the lip region is a human lip region; the distance between the two points is then:
d = sqrt((x7 - x15)^2 + (y7 - y15)^2)
if d is 0, it means that two points P7 and P15 are overlapped, that is, the lips are in a closed state; if d is larger than 0, the opening degree of the lips is judged according to the size of d, and the larger d is, the larger the opening degree of the lips is.
The coordinates of the left outer lip-corner feature point P18 are (x18, y18), and the coordinates of the feature points P1 and P9 closest to P18 on the outer contour lines of the upper and lower lips are (x1, y1) and (x9, y9). Connecting P18 with P1 and P9 forms the vectors V1 = (x1 - x18, y1 - y18) and V2 = (x9 - x18, y9 - y18), and the included angle α between them is calculated as:
cos α = (V1 · V2) / (|V1| |V2|)
where α denotes the angle between the vectors V1 and V2; the degree of leftward lip skew can be judged by calculating this angle, and the smaller the angle, the greater the leftward skew of the lips.
Similarly, the coordinates of the right outer lip-corner feature point P20 are (x20, y20), and the coordinates of the feature points P5 and P13 closest to P20 on the outer contour lines of the upper and lower lips are (x5, y5) and (x13, y13). Connecting P20 with P5 and P13 forms the vectors V3 = (x5 - x20, y5 - y20) and V4 = (x13 - x20, y13 - y20), and the included angle β between them is calculated as:
cos β = (V3 · V4) / (|V3| |V4|)
where β denotes the angle between the vectors V3 and V4; the degree of rightward lip skew can be judged by calculating this angle, and the smaller the angle, the greater the rightward skew of the lips.
Step S50, when the lip classification model determines that the lip region is not a human lip region, it is prompted that no human lip region was detected in the current real-time image and that lip motion cannot be judged, and the flow returns to the real-time image capturing step to capture the next real-time image. That is, after the lip region determined by the 20 lip feature points is input into the lip classification model and the model result indicates that it is not a human lip region, it is prompted that no human lip region was recognized and that the next step of judging lip motion cannot be performed; at the same time, the real-time image captured by the camera device is acquired again and the subsequent steps are performed.
According to the lip motion analysis method provided by the embodiment, the lip feature points in the real-time facial image are identified by using the lip average model, the lip region determined by the lip feature points is analyzed by using the lip classification model, if the lip region is a human lip region, the motion information of the lips in the real-time facial image is calculated according to the coordinates of the lip feature points, and the analysis of the lip region and the real-time capture of the lip motion are realized.
In addition, an embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium includes a lip motion analysis program, and when executed by a processor, the lip motion analysis program implements the following operations:
a model construction step: constructing and training a face feature recognition model to obtain a lip average model about a face, and training an SVM (support vector machine) by using a lip sample image to obtain a lip classification model;
a real-time facial image acquisition step: acquiring a real-time image shot by a camera device, and extracting a real-time face image from the real-time image by using a face recognition algorithm;
a characteristic point identification step: inputting the real-time face image into a pre-trained lip average model, and identifying t lip feature points representing the position of lips in the real-time face image by using the lip average model;
a lip region identification step: determining a lip region according to the t lip feature points, inputting the lip region into a pre-trained lip classification model, and judging whether the lip region is a human lip region; and
lip movement judging step: and if the lip area is the human lip area, calculating the motion direction and the motion distance of the lips in the real-time face image according to the x and y coordinates of the t lip feature points in the real-time face image.
Optionally, when executed by the processor, the lip motion analysis program further implements the following operations:
a prompting step: and when the lip classification model judges that the lip region is not the lip region of the person, prompting that the lip region of the person is not detected from the current real-time image and lip movement cannot be judged, and returning to the real-time facial image acquisition step.
Optionally, the lip movement judging step includes:
calculating the distance between the central feature point of the inner side of the upper lip and the central feature point of the inner side of the lower lip in the real-time face image, and judging the opening degree of the lips;
connecting the left outer lip-corner feature point respectively with the feature points closest to it on the outer contour lines of the upper lip and the lower lip to form two vectors, and calculating the included angle between these two vectors to obtain the degree of leftward lip skew; and
connecting the right outer lip-corner feature point respectively with the feature points closest to it on the outer contour lines of the upper lip and the lower lip to form two vectors, and calculating the included angle between these two vectors to obtain the degree of rightward lip skew.
The embodiment of the computer readable storage medium of the present invention is substantially the same as the embodiment of the lip motion analysis method, and will not be described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, apparatus, article, or method that includes the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments. Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (7)

1. An electronic device, comprising: a memory, a processor and a camera device, wherein the memory comprises a lip motion analysis program, and the lip motion analysis program realizes the following steps when being executed by the processor:
a real-time facial image acquisition step: acquiring a real-time image shot by a camera device, and extracting a real-time face image from the real-time image by using a face recognition algorithm;
a characteristic point identification step: inputting the real-time face image into a pre-trained lip average model, and identifying t lip feature points representing the position of lips in the real-time face image by using the lip average model;
the training step of the lip average model comprises the following steps: establishing a first sample library with n face images, and marking t characteristic points at the lip part in each face image in the first sample library; training a face feature recognition model by using the face image marked with the lip feature points to obtain a lip average model about a face, wherein the face feature recognition model ERT algorithm is expressed by a formula as follows:
where t denotes the cascade number,. tau.t(-) represents the regressor at the current stage,a shape estimate for the current model; each regression τtAccording to the input images I andto predict an incrementIn the process of model training, training a first regression tree by taking part of feature points of all sample images, training a residual error between a predicted value of the first regression tree and the true value of the part of feature points to train a second tree … and the like in sequence until the predicted value of the Nth tree and the true value of the part of feature points are trained to be close to 0, obtaining all regression trees of an ERT algorithm, and obtaining a lip average model of a human face according to the regression trees;
a lip region identification step: determining a lip region according to the t lip feature points, inputting the lip region into a pre-trained lip classification model, and judging whether the lip region is a human lip region; and
lip movement judging step: if the lip region is a human lip region, calculating the motion direction and the motion distance of the lips in the real-time face image according to the x and y coordinates of t lip feature points in the real-time face image;
there are 20 lip feature points in the lip average model, wherein:
the upper lip and the lower lip of the lip are respectively provided with 8 characteristic points, and the left lip corner and the right lip corner are respectively provided with 2 characteristic points;
among the 8 feature points of the upper lip, 5 are located on the outer contour line of the upper lip, and 3 are located on the inner contour line of the upper lip, the one in the middle being the inner center feature point of the upper lip;
among the 8 feature points of the lower lip, 5 are located on the outer contour line of the lower lip, and 3 are located on the inner contour line of the lower lip, the one in the middle being the inner center feature point of the lower lip;
of the 2 feature points at each of the left and right lip corners, 1 is located on the outer contour line of the lips and is called the outer lip-corner feature point, and 1 is located on the inner contour line of the lips and is called the inner lip-corner feature point;
the lip movement judging step includes:
calculating the distance between the central feature point of the inner side of the upper lip and the central feature point of the inner side of the lower lip in the real-time face image, and judging the opening degree of the lips;
connecting the left outer lip-corner feature point respectively with the feature points closest to it on the outer contour lines of the upper lip and the lower lip to form two vectors, and calculating the included angle between these two vectors to obtain the degree of leftward lip skew; and
connecting the right outer lip-corner feature point respectively with the feature points closest to it on the outer contour lines of the upper lip and the lower lip to form two vectors, and calculating the included angle between these two vectors to obtain the degree of rightward lip skew.
2. The electronic device of claim 1, wherein the lip motion analysis program, when executed by the processor, further implements the steps of:
a prompting step: and when the lip classification model judges that the lip region is not the lip region of the person, prompting that the lip region of the person is not detected from the current real-time image and lip movement cannot be judged, and returning to the real-time facial image acquisition step.
3. The electronic device according to claim 1 or 2, wherein the training step of the lip classification model comprises:
collecting m lip positive sample images and k lip negative sample images to form a second sample library;
extracting local features of each lip positive sample image and each lip negative sample image; and
and training the support vector machine classifier by using the lip positive sample image, the lip negative sample image and the local features thereof to obtain a lip classification model of the face.
4. A lip motion analysis method, the method comprising:
a real-time facial image acquisition step: acquiring a real-time image shot by a camera device, and extracting a real-time face image from the real-time image by using a face recognition algorithm;
a characteristic point identification step: inputting the real-time face image into a pre-trained lip average model, and identifying t lip feature points representing the position of lips in the real-time face image by using the lip average model; the training step of the lip average model comprises the following steps: establishing a first sample library with n face images, and marking t feature points at the lip part of each face image in the first sample library; and training a face feature recognition model by using the face images marked with the lip feature points to obtain a lip average model about the face, wherein the face feature recognition model is an ERT algorithm expressed by the following formula:
Ŝ^(t+1) = Ŝ^(t) + τ_t(I, Ŝ^(t))
where t denotes the cascade level, τ_t(·, ·) represents the regressor at the current stage, and Ŝ^(t) is the shape estimate of the current model; each regressor τ_t predicts an increment from the input image I and the current shape estimate Ŝ^(t); in the process of model training, a first regression tree is trained on a subset of the feature points of all sample images, the residual between the predicted values of the first regression tree and the true values of that subset of feature points is used to train a second tree, and so on in sequence, until the residual between the predicted values of the Nth tree and the true values of that subset is close to 0; all regression trees of the ERT algorithm are thus obtained, and a lip average model of the human face is obtained according to the regression trees;
a lip region identification step: determining a lip region according to the t lip feature points, inputting the lip region into a pre-trained lip classification model, and judging whether the lip region is a human lip region; and
lip movement judging step: if the lip region is a human lip region, calculating the motion direction and the motion distance of the lips in the real-time face image according to the x and y coordinates of t lip feature points in the real-time face image;
there are 20 lip feature points in the lip average model, wherein:
the upper lip and the lower lip of the lip are respectively provided with 8 characteristic points, and the left lip corner and the right lip corner are respectively provided with 2 characteristic points;
among the 8 feature points of the upper lip, 5 are located on the outer contour line of the upper lip, and 3 are located on the inner contour line of the upper lip, the one in the middle being the inner center feature point of the upper lip;
among the 8 feature points of the lower lip, 5 are located on the outer contour line of the lower lip, and 3 are located on the inner contour line of the lower lip, the one in the middle being the inner center feature point of the lower lip;
of the 2 feature points at each of the left and right lip corners, 1 is located on the outer contour line of the lips and is called the outer lip-corner feature point, and 1 is located on the inner contour line of the lips and is called the inner lip-corner feature point;
the lip movement judging step includes:
calculating the distance between the central feature point of the inner side of the upper lip and the central feature point of the inner side of the lower lip in the real-time face image, and judging the opening degree of the lips;
connecting the left outer lip-corner feature point respectively with the feature points closest to it on the outer contour lines of the upper lip and the lower lip to form two vectors, and calculating the included angle between these two vectors to obtain the degree of leftward lip skew; and
connecting the right outer lip-corner feature point respectively with the feature points closest to it on the outer contour lines of the upper lip and the lower lip to form two vectors, and calculating the included angle between these two vectors to obtain the degree of rightward lip skew.
5. The lip motion analysis method according to claim 4, further comprising:
a prompting step: when the lip classification model judges that the lip region is not a human lip region, prompting that no human lip region has been detected in the current real-time image and that lip movement cannot be judged, and returning to the real-time face image acquisition step.
6. The lip motion analysis method according to claim 4 or 5, wherein the training step of the lip classification model comprises:
collecting m lip positive sample images and k lip negative sample images to form a second sample library;
extracting local features of each lip positive sample image and each lip negative sample image; and
training a support vector machine (SVM) classifier by using the lip positive sample images, the lip negative sample images and their local features to obtain the lip classification model for the face.
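One way to realize this training step is sketched below, under the assumptions that HOG descriptors serve as the local features, scikit-learn's LinearSVC serves as the support vector machine, and the positive/negative samples are lip and non-lip patches stored in hypothetical pos/ and neg/ folders; none of these choices are prescribed by the claim.

```python
import glob
import cv2
import numpy as np
from sklearn.svm import LinearSVC

def local_features(img: np.ndarray) -> np.ndarray:
    """Extract a HOG descriptor from a 64x48 grayscale patch (the 'local features')."""
    hog = cv2.HOGDescriptor(_winSize=(64, 48), _blockSize=(16, 16),
                            _blockStride=(8, 8), _cellSize=(8, 8), _nbins=9)
    return hog.compute(img).ravel()

def load_patches(pattern: str, label: int):
    feats, labels = [], []
    for path in glob.glob(pattern):
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        img = cv2.resize(img, (64, 48))
        feats.append(local_features(img))
        labels.append(label)
    return feats, labels

# m positive (lip) and k negative (non-lip) patches form the second sample library.
pos_x, pos_y = load_patches("pos/*.png", 1)
neg_x, neg_y = load_patches("neg/*.png", 0)

clf = LinearSVC(C=1.0)  # the lip classification model
clf.fit(np.array(pos_x + neg_x), np.array(pos_y + neg_y))

# At run time, the lip region determined from the t feature points is cropped,
# resized to 64x48, and classified: clf.predict([local_features(crop)]) == 1 means "lip".
```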
7. A computer-readable storage medium, characterized in that a lip motion analysis program is included in the computer-readable storage medium, and when the lip motion analysis program is executed by a processor, the steps of the lip motion analysis method according to any one of claims 4 to 6 are implemented.
CN201710708364.9A 2017-08-17 2017-08-17 lip motion analysis method, device and storage medium Active CN107633205B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710708364.9A CN107633205B (en) 2017-08-17 2017-08-17 lip motion analysis method, device and storage medium
PCT/CN2017/108749 WO2019033570A1 (en) 2017-08-17 2017-10-31 Lip movement analysis method, apparatus and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710708364.9A CN107633205B (en) 2017-08-17 2017-08-17 lip motion analysis method, device and storage medium

Publications (2)

Publication Number Publication Date
CN107633205A CN107633205A (en) 2018-01-26
CN107633205B true CN107633205B (en) 2019-01-18

Family

ID=61099627

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710708364.9A Active CN107633205B (en) 2017-08-17 2017-08-17 lip motion analysis method, device and storage medium

Country Status (2)

Country Link
CN (1) CN107633205B (en)
WO (1) WO2019033570A1 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108710836B (en) * 2018-05-04 2020-10-09 南京邮电大学 A lip detection and reading method based on cascade feature extraction
CN108763897A (en) * 2018-05-22 2018-11-06 平安科技(深圳)有限公司 Method of calibration, terminal device and the medium of identity legitimacy
CN108874145B (en) * 2018-07-04 2022-03-18 深圳美图创新科技有限公司 Image processing method, computing device and storage medium
CN110223322B (en) * 2019-05-31 2021-12-14 腾讯科技(深圳)有限公司 Image recognition method and device, computer equipment and storage medium
CN110738126A (en) * 2019-09-19 2020-01-31 平安科技(深圳)有限公司 Lip shearing method, device and equipment based on coordinate transformation and storage medium
CN111241922B (en) * 2019-12-28 2024-04-26 深圳市优必选科技股份有限公司 Robot, control method thereof and computer readable storage medium
CA3177529A1 (en) * 2020-05-05 2021-11-11 Ravindra Kumar Tarigoppula System and method for controlling viewing of multimedia based on behavioural aspects of a user
CN111259875B (en) * 2020-05-06 2020-07-31 中国人民解放军国防科技大学 Lip reading method based on self-adaptive semantic space-time diagram convolutional network
CN113095146A (en) * 2021-03-16 2021-07-09 深圳市雄帝科技股份有限公司 Mouth state classification method, device, equipment and medium based on deep learning
CN116021509A (en) * 2022-11-25 2023-04-28 上海非夕机器人科技有限公司 Method for planning motion trail of robot end-of-travel tool, robot, and storage medium
CN116405635A (en) * 2023-06-02 2023-07-07 山东正中信息技术股份有限公司 Multi-mode conference recording method and system based on edge calculation
CN119383471A (en) * 2024-12-25 2025-01-28 深圳市维海德技术股份有限公司 Target positioning method, device, video conferencing equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101702199A (en) * 2009-11-13 2010-05-05 深圳华为通信技术有限公司 Smiling face detection method and device and mobile terminal
CN104951730A (en) * 2014-03-26 2015-09-30 联想(北京)有限公司 A lip movement detection method, device and electronic equipment
CN105975935A (en) * 2016-05-04 2016-09-28 腾讯科技(深圳)有限公司 Face image processing method and apparatus
CN106529379A (en) * 2015-09-15 2017-03-22 阿里巴巴集团控股有限公司 Method and device for recognizing living body

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007094906A (en) * 2005-09-29 2007-04-12 Toshiba Corp Characteristic point detection device and method
CN104616438B (en) * 2015-03-02 2016-09-07 重庆市科学技术研究院 A kind of motion detection method of yawning for fatigue driving detection
CN105139503A (en) * 2015-10-12 2015-12-09 北京航空航天大学 Lip moving mouth shape recognition access control system and recognition method
CN106997451A (en) * 2016-01-26 2017-08-01 北方工业大学 Lip contour positioning method
CN106250815B (en) * 2016-07-05 2019-09-20 上海引波信息技术有限公司 A kind of quick expression recognition method based on mouth feature
CN106485214A (en) * 2016-09-28 2017-03-08 天津工业大学 A kind of eyes based on convolutional neural networks and mouth state identification method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101702199A (en) * 2009-11-13 2010-05-05 深圳华为通信技术有限公司 Smiling face detection method and device and mobile terminal
CN104951730A (en) * 2014-03-26 2015-09-30 联想(北京)有限公司 A lip movement detection method, device and electronic equipment
CN106529379A (en) * 2015-09-15 2017-03-22 阿里巴巴集团控股有限公司 Method and device for recognizing living body
CN105975935A (en) * 2016-05-04 2016-09-28 腾讯科技(深圳)有限公司 Face image processing method and apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on image-based lip feature extraction and mouth-shape classification; 杨恒翔 (Yang Hengxiao); China Master's Theses Full-text Database (中国优秀硕士学位论文全文数据库); 2015-05-15; I138-887

Also Published As

Publication number Publication date
CN107633205A (en) 2018-01-26
WO2019033570A1 (en) 2019-02-21

Similar Documents

Publication Publication Date Title
CN107633205B (en) lip motion analysis method, device and storage medium
CN107679449B (en) Lip motion method for catching, device and storage medium
US10534957B2 (en) Eyeball movement analysis method and device, and storage medium
US10445562B2 (en) AU feature recognition method and device, and storage medium
CN107633204B (en) Face occlusion detection method, apparatus and storage medium
US10650234B2 (en) Eyeball movement capturing method and device, and storage medium
CN107679447A (en) Facial characteristics point detecting method, device and storage medium
JP5366756B2 (en) Information processing apparatus and information processing method
JP6351240B2 (en) Image processing apparatus, image processing method, and program
WO2016149944A1 (en) Face recognition method and system, and computer program product
WO2019071664A1 (en) Human face recognition method and apparatus combined with depth information, and storage medium
JP6112801B2 (en) Image recognition apparatus and image recognition method
CN111160169A (en) Face detection method, device, equipment and computer readable storage medium
CN111263955A (en) Method and device for determining movement track of target object
Lahiani et al. Hand pose estimation system based on Viola-Jones algorithm for android devices
HK1246925A (en) Lip motion analysis method, device and storage medium
HK1246923A (en) Lip motion capturing method, device and storage medium
HK1246923B (en) Lip motion capturing method, device and storage medium
HK1246926A (en) Eyeball motion analysis method, device and storage medium
HK1246925B (en) Lip motion analysis method, device and storage medium
HK1246921A (en) Face shielding detection method, device and storage medium
HK1246922A1 (en) Face feature point detection method, device and storage medium
HK1246909B (en) Eye action capturing method, device and storage medium
HK1246921B (en) Face shielding detection method, device and storage medium
HK1246926B (en) Eyeball motion analysis method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 1246925
Country of ref document: HK

GR01 Patent grant