CN113947799A - Three-dimensional face data preprocessing method and equipment
- Publication number
- CN113947799A (application CN202111334942.XA)
- Authority
- CN
- China
- Prior art keywords
- face
- point
- nose tip
- dimensional
- cloud data
- Prior art date
- Legal status
- Granted
Landscapes
- Image Processing (AREA)
Abstract
The invention provides a three-dimensional face data preprocessing method and equipment, wherein the method comprises the steps of obtaining three-dimensional face point cloud data to be processed; primarily determining the position of a nose tip point in the three-dimensional face point cloud data according to a preset standard face, the three-dimensional face point cloud data, a three-dimensional face nose tip point detection algorithm and a closest point iteration algorithm; according to the position of the nose tip point determined for the first time, determining front face data in the three-dimensional face point cloud data for the first time; determining the position of the nose tip point in the three-dimensional face point cloud data again according to the closest point iterative algorithm and the preset standard face; and determining the front face data in the three-dimensional face point cloud data again according to the position of the nose tip point determined again, and taking the front face data as the preprocessed front three-dimensional face point cloud data. Because the position of the nose tip point is determined for the first time and then determined again, the nose tip point can still be accurately located even when the face pose exceeds 30 degrees, and the front three-dimensional face point cloud data are obtained.
Description
Technical Field
The application belongs to the technical field of face recognition, and particularly relates to a three-dimensional face data preprocessing method and device.
Background
Current three-dimensional face data acquisition and three-dimensional data generation technologies are easily affected by data disturbances, that is, by factors such as data loss, spikes, image distortion, noise and holes; the acquired data also often include regions such as hair, neck and ears. Mathematical methods that combine graphics with spatial geometry are therefore widely used to preprocess three-dimensional face point cloud data so that only the frontal three-dimensional face region remains, and the resulting three-dimensional face data are then used for face recognition and facial expression recognition.
Although existing three-dimensional face data preprocessing algorithms can detect the nose tip point accurately, this accuracy rests on the assumption that the recognized face pose is small, that is, only three-dimensional face data with a pose smaller than 30 degrees are preprocessed; once the face pose exceeds 30 degrees, the nose tip point cannot be accurately detected.
Disclosure of Invention
In view of this, the invention provides a three-dimensional face data preprocessing method and device, and aims to solve the problem that the nose tip cannot be accurately detected when the face posture exceeds 30 degrees.
A first aspect of an embodiment of the present invention provides a three-dimensional face data preprocessing method, including:
acquiring three-dimensional face point cloud data to be processed;
primarily determining the position of a nose tip point in the three-dimensional face point cloud data according to a preset standard face, the three-dimensional face point cloud data, a three-dimensional face nose tip point detection algorithm and a closest point iteration algorithm;
according to the position of the nose tip point determined for the first time, determining front face data in the three-dimensional face point cloud data for the first time;
registering the primarily determined front face data according to a closest point iterative algorithm and a preset standard face, and determining the position of a nose tip point in the three-dimensional face point cloud data again;
and according to the position of the nose tip point determined again, determining the front face data in the three-dimensional face point cloud data again, and taking the front face data as the preprocessed front three-dimensional face point cloud data.
A second aspect of the embodiments of the present invention provides a three-dimensional face data preprocessing apparatus, including:
the data acquisition module is used for acquiring three-dimensional face point cloud data to be processed;
the first positioning module is used for primarily determining the position of a nose tip in the three-dimensional face point cloud data according to a preset standard face, the three-dimensional face point cloud data, a three-dimensional face nose tip detection algorithm and a closest point iteration algorithm;
the first determining module is used for primarily determining the front face data in the three-dimensional face point cloud data according to the primarily determined position of the nose tip point;
the second positioning module is used for registering the primarily determined front face data according to a closest point iterative algorithm and a preset standard face, and determining the position of a nose tip point in the three-dimensional face point cloud data again;
and the second determining module is used for determining the front face data in the three-dimensional face point cloud data again according to the position of the nose tip point determined again and taking the front face data as the preprocessed front three-dimensional face point cloud data.
A third aspect of the embodiments of the present invention provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the three-dimensional face data preprocessing method according to the first aspect when executing the computer program.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of the three-dimensional face data preprocessing method according to the first aspect.
According to the three-dimensional face data preprocessing method and equipment provided by the embodiments of the invention, three-dimensional face point cloud data to be processed are obtained; the position of a nose tip point in the three-dimensional face point cloud data is primarily determined according to a preset standard face, the three-dimensional face point cloud data, a three-dimensional face nose tip point detection algorithm and a closest point iteration algorithm; front face data in the three-dimensional face point cloud data are determined for the first time according to the position of the nose tip point determined for the first time; the primarily determined front face data are registered according to the closest point iterative algorithm and the preset standard face, and the position of the nose tip point in the three-dimensional face point cloud data is determined again; and the front face data in the three-dimensional face point cloud data are determined again according to the position of the nose tip point determined again and taken as the preprocessed front three-dimensional face point cloud data. Because the position of the nose tip point is determined for the first time and then determined again, that is, corrected step by step, the nose tip point can still be accurately located even when the face pose exceeds 30 degrees, and the front three-dimensional face point cloud data are obtained.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings based on these drawings without inventive effort.
Fig. 1 is an application scene diagram of a three-dimensional face data preprocessing method provided by an embodiment of the present invention;
FIG. 2 is a flowchart illustrating an implementation of a three-dimensional face data preprocessing method according to an embodiment of the present invention;
FIG. 3 is a flow chart of an implementation of re-locating the nose tip provided by an embodiment of the present invention;
FIG. 4 is a flow chart of an implementation of the initial determination of the location of the nose tip provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of the center line of the target face and the center line of the standard face;
FIG. 6 is a schematic diagram of aligning the centerline of a target face with the centerline of a standard face;
fig. 7 is a flowchart of an implementation of a three-dimensional face data preprocessing method according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a three-dimensional face data preprocessing apparatus according to an embodiment of the present invention;
fig. 9 is a schematic diagram of an electronic device provided in an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
Existing three-dimensional facial expression preprocessing generally comprises operations such as face pose correction, face region-of-interest (ROI) extraction, scale normalization and denoising. Existing three-dimensional face data preprocessing algorithms are built on small poses, namely poses smaller than 30 degrees. Their process is as follows: the three-dimensional face pose is first corrected, that is, each three-dimensional face is aligned (also called registered) with a standard face; the nose tip point is then determined by finding the point with the maximum z value in the face data; finally, with the nose tip point as the sphere center, a sphere of fixed radius is drawn to extract the face ROI region, yielding the preprocessed three-dimensional face data. It can be seen from this process that the size of the three-dimensional face pose angle directly affects downstream applications, such as the performance of three-dimensional face recognition and three-dimensional facial expression recognition. However, there is currently no preprocessing of three-dimensional face point cloud data for large face poses (greater than 30 degrees and less than 90 degrees), which directly limits the extraction of face features or facial expression features from such data and the corresponding recognition performance.
Fig. 1 is an application scene diagram of a three-dimensional face data preprocessing method provided in an embodiment of the present invention. The three-dimensional face data preprocessing method provided by the embodiment of the invention can be applied to the application environment but is not limited to the application environment. As shown in fig. 1, the system includes: a point cloud data acquisition device 11 and an electronic device 12.
The point cloud data acquisition device 11 is used for acquiring three-dimensional face point cloud data and sending the three-dimensional face point cloud data to the electronic device 12, and the electronic device 12 is used for preprocessing the received three-dimensional face point cloud data.
The point cloud data collecting device 11 may be an image collecting device composed of one or more cameras. The electronic device 12 may be a server, a terminal, etc., and is not limited thereto. The server may be implemented as a stand-alone server or as a server cluster comprised of multiple servers. The terminal may include, but is not limited to, a face recognition terminal, a desktop computer, a notebook computer, a tablet computer, and the like.
The electronic device 12 may be connected to one point cloud data collecting device 11, or to a plurality of point cloud data collecting devices 11, which is not limited herein. For example, a plurality of point cloud data acquisition devices 11 may be installed in an area such as a company or a school to acquire face images, and an electronic device 12 with a face recognition function may be connected to all the point cloud data acquisition devices 11 in the area so as to monitor the identities of people in the area. Alternatively, one point cloud data collecting device 11 and one electronic device 12 may be installed in a personal terminal having a face recognition function.
Fig. 2 is a flowchart of an implementation of a three-dimensional face data preprocessing method according to an embodiment of the present invention. As shown in fig. 2, in this embodiment, the three-dimensional face data preprocessing method includes:
S201, three-dimensional face point cloud data to be processed are obtained.
S202, primarily determining the position of a nose tip point in the three-dimensional face point cloud data according to a preset standard face, the three-dimensional face point cloud data, a three-dimensional face nose tip point detection algorithm and a closest point iteration algorithm.
And S203, determining the front face data in the three-dimensional face point cloud data for the first time according to the position of the nose tip point determined for the first time.
And S204, registering the primarily determined front face data according to the closest point iterative algorithm and a preset standard face, and determining the position of the nose tip point in the three-dimensional face point cloud data again.
And S205, determining the front face data in the three-dimensional face point cloud data again according to the position of the determined nose tip point again, and taking the front face data as the preprocessed front three-dimensional face point cloud data.
In this embodiment, the three-dimensional face point cloud data to be processed are acquired by the point cloud data acquisition device 11 shown in fig. 1. The preset standard face may be stored in the electronic device 12 shown in fig. 1 in advance, or may be selected over a network from the Face Recognition Grand Challenge (FRGC) three-dimensional face database, which is not limited herein. The closest point iteration algorithm is used to gradually narrow the range of the nose tip point in an iterative manner. The three-dimensional face nose tip point detection algorithm is used to determine the nose tip point within that range.
The Iterative Closest Point (ICP) algorithm is currently the most widely used point cloud fine registration algorithm. It obtains the optimal transformation matrix between two sets of face point cloud data through continuous iteration. Although the ICP algorithm has high accuracy and a fast convergence rate, it depends heavily on the initial value; when the initial value is inaccurate, it easily falls into a locally optimal solution.
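For orientation, the following is a minimal point-to-point ICP sketch in Python (NumPy/SciPy). The patent does not disclose a specific implementation, so the nearest-neighbor search and stopping criterion shown here are illustrative choices, not the patented method itself:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst via SVD."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(source, target, max_iter=50, tol=1e-6):
    """Align source (N,3) onto target (M,3); return a 4x4 transform matrix."""
    tree = cKDTree(target)
    src = source.copy()
    T = np.eye(4)
    prev_err = np.inf
    for _ in range(max_iter):
        dist, idx = tree.query(src)              # closest-point correspondences
        R, t = best_fit_transform(src, target[idx])
        src = src @ R.T + t                      # apply the incremental step
        step = np.eye(4); step[:3, :3] = R; step[:3, 3] = t
        T = step @ T                             # accumulate the full transform
        err = dist.mean()
        if abs(prev_err - err) < tol:            # mean error stopped improving
            break
        prev_err = err
    return T
```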
In this embodiment, an approximate initial value of the nose tip position is first determined by the three-dimensional face nose tip point detection algorithm; the ICP algorithm is then iterated from this initial value to obtain an approximate position of the nose tip point; that approximate position is in turn used as the initial value for further ICP iteration, yielding an accurate position of the nose tip point, from which the frontal three-dimensional face point cloud data are determined.
In this embodiment, the position of the nose tip point is determined for the first time and then determined again, that is, corrected step by step, so that the nose tip point can still be accurately located even when the face pose exceeds 30 degrees, and the front three-dimensional face point cloud data are obtained.
Fig. 3 is a flow chart of an implementation of re-locating the nose tip provided by an embodiment of the present invention. As shown in fig. 3, in some embodiments, S204 may include:
S301, aligning the center line of the face corresponding to the primarily determined front face data with the center line of a preset standard face according to a closest point iterative algorithm, and recording a first conversion matrix in the alignment process.
S302, converting the positions of the nose tip points of the standard human face according to the first conversion matrix, and defining a first spherical area with the radius of a first preset value by taking the converted positions of the nose tip points as the spherical center.
And S303, establishing a three-dimensional coordinate system in the first spherical area by taking the center of sphere as an origin and taking the direction facing the front face of the aligned human face as the positive direction of the z axis.
S304, determining the maximum z value point of the center line of the face corresponding to the primarily determined front face data in the first spherical area, and taking the maximum z value point as the position of the secondarily determined nose tip point.
In this embodiment, the first preset value may be 80 mm.
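A minimal sketch of S301-S304 using the icp() helper above (S401-S404 follow the same pattern with a 37 mm search radius). Two points are assumptions here, since the patent only states that the recorded matrix converts the standard nose tip: the alignment direction (standard centerline onto face centerline), and that the cloud's z axis already points toward the viewer after alignment:

```python
def redetermine_nose_tip(face_centerline, std_centerline, std_nose_tip,
                         radius=80.0):
    T = icp(std_centerline, face_centerline)    # first conversion matrix
    k = T[:3, :3] @ std_nose_tip + T[:3, 3]     # converted nose tip position
    d = np.linalg.norm(face_centerline - k, axis=1)
    ball = face_centerline[d < radius]          # first spherical area (80 mm)
    return ball[np.argmax(ball[:, 2])]          # max-z point = nose tip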
In some embodiments, S205 may include:
defining a second spherical area with the radius of a second preset value by taking the position of the nose tip point determined again as the center of sphere;
and determining the front face data in the three-dimensional face point cloud data again according to the second spherical area.
In this embodiment, the second preset value may be 100 mm.
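The sphere crop of S205 is a one-liner; this hypothetical crop_sphere() helper (continuing the NumPy sketches above) is also reused in later sketches:

```python
def crop_sphere(points, center, radius=100.0):
    """Keep the points within a ball of the given radius around center."""
    return points[np.linalg.norm(points - center, axis=1) < radius]
```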
Fig. 4 is a flowchart of an implementation of initially determining the location of the nose tip according to an embodiment of the present invention. As shown in fig. 4, in some embodiments, S202 may include:
S401, aligning the center line of the face corresponding to the three-dimensional face point cloud data with the center line of a preset standard face according to a closest point iteration algorithm, and recording a second conversion matrix in the alignment process.
S402, converting the position of the nose tip point of the standard human face according to the second conversion matrix, and defining a third spherical area with the radius of a third preset value by taking the converted position of the nose tip point as the sphere center.
And S403, establishing a three-dimensional coordinate system in the third spherical area by taking the center of sphere as an origin and taking the direction facing the front face of the aligned human face as the positive direction of the z axis.
S404, determining the maximum z-value point of the center line of the face corresponding to the three-dimensional face point cloud data in the third spherical area, and taking the maximum z-value point as the position of the initial nose tip point.
S405, a fourth spherical area with the radius of a fourth preset value is defined by taking the position of the initial nose tip point as the sphere center, so as to determine initial front face data in the three-dimensional face point cloud data.
And S406, primarily determining the position of a nose tip point in the three-dimensional face point cloud data according to the initial front face data.
In this embodiment, the third preset value may be 37 mm. The fourth preset value may be 80 mm.
Fig. 5 is a schematic diagram of the center line of the target face and the center line of the standard face. Fig. 6 is a schematic diagram of aligning the center line of the target face with the center line of the standard face. As shown in fig. 5 and 6, the process of fig. 4 is as follows:
s401: the center lines of the standard face and the target face are respectively marked as A and B. First, align A and B by ICP algorithm and record its conversion matrix M2。
S402-S404: by M2Finding the nose tip point k of the standard human face, drawing a ball with the radius of 37mm by taking the point k as the center, and searching a maximum value point p of a z coordinate on the B in the ball.
S405: the human face of an adult comprises eyes, a forehead, a mouth, a nose, cheeks and the like in a part of a sphere with a radius of 80mm and taking a nose tip point as a center. Therefore, a ball with the radius of 80mm can be drawn by taking the point p as the center, and the data in the ball is the initial front face data.
In some embodiments, S406 may include:
aligning the center line of the face corresponding to the initial front face data with the center line of a preset standard face according to a closest point iterative algorithm, and recording a third conversion matrix in the alignment process;
converting the position of the nose tip point of the standard human face according to the third conversion matrix, and defining a fifth spherical area with the radius of a fifth preset value by taking the converted position of the nose tip point as the spherical center;
establishing a three-dimensional coordinate system in the fifth spherical area by taking the center of the sphere as an origin and taking the direction facing the front face of the aligned human face as the positive direction of the z axis;
determining a maximum z value point of a center line of a face corresponding to the initial front face data in the fifth spherical area, and taking the maximum z value point as the position of a corrected nose tip point;
judging whether the distance between the corrected position of the nose tip point and the initial position of the nose tip point is smaller than a preset distance or not;
if the distance between the corrected position of the nose tip point and the initial position of the nose tip point is not less than the preset distance, taking the corrected position of the nose tip point as the initial position of the nose tip point, and jumping to a step of defining a fourth spherical area with the radius of a fourth preset value by taking the initial position of the nose tip point as the spherical center;
and if the distance between the corrected nose tip point position and the initial nose tip point position is smaller than the preset distance, taking the corrected nose tip point as the initially determined nose tip point position.
In this embodiment, the fifth preset value may be 25 mm, and the preset distance may be 2 mm. As shown in fig. 5 and 6, in S406 of fig. 4, the converted nose tip point p' is calculated by the third conversion matrix M3; a sphere with a radius of 25 mm is drawn with point p' as the center, and the maximum z-coordinate point on B within the sphere is found. This process is then iterated until the distance between the maximum z-coordinate points obtained in two consecutive iterations is less than 2 mm, and the maximum z-coordinate point of the last iteration is taken as the initially determined position of the nose tip point.
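Putting S405-S406 together, a sketch of this refinement loop with the preset values named above; the helper functions are those from the earlier sketches, and the convergence test follows the 2 mm criterion:

```python
def refine_nose_tip(face_centerline, std_centerline, std_nose_tip, p0,
                    crop_r=80.0, search_r=25.0, eps=2.0):
    """p0 is the initial nose tip estimate from S404; distances are in mm."""
    p = p0
    while True:
        cl = crop_sphere(face_centerline, p, crop_r)   # fourth spherical area
        T = icp(std_centerline, cl)                    # third conversion matrix
        p_prime = T[:3, :3] @ std_nose_tip + T[:3, 3]  # converted nose tip p'
        ball = crop_sphere(cl, p_prime, search_r)      # fifth spherical area
        p_corr = ball[np.argmax(ball[:, 2])]           # corrected nose tip
        if np.linalg.norm(p_corr - p) < eps:           # moved < 2 mm: converged
            return p_corr
        p = p_corr                                     # else iterate again
```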
In some embodiments, after S201, the three-dimensional face data preprocessing method may further include:
determining a mirror image face of the three-dimensional face point cloud data;
aligning the face corresponding to the three-dimensional face point cloud data with the mirror face according to a closest point iteration algorithm to obtain an aligned face and recording a fourth conversion matrix in the alignment process;
and extracting the center line of the aligned face according to the fourth conversion matrix and the three-dimensional face center line extraction algorithm, and taking the center line as the center line of the face corresponding to the three-dimensional face point cloud data.
An important characteristic of the human face is its bilateral symmetry. The three-dimensional face center line extraction algorithm therefore aligns the face corresponding to the three-dimensional face point cloud data with its mirror face to determine the face symmetry plane, and then takes the intersection of the symmetry plane with the face corresponding to the three-dimensional face point cloud data as the face center line.
In the embodiment, the central line of the face is extracted, so that the detection range of the nose tip point is narrowed, and the detection precision is improved.
In some embodiments, S203 may include:
defining a sixth spherical area with the radius of a sixth preset value by taking the position of the nose tip point determined for the first time as the center of a sphere;
and according to the sixth spherical area, primarily determining the front face data in the three-dimensional face point cloud data.
In this embodiment, the sixth preset value may be 120 mm.
In some embodiments, after S202, the three-dimensional face data preprocessing method further includes:
calculating an Euler angle between a human face corresponding to the three-dimensional human face point cloud data and a standard human face to determine a human face posture of the human face corresponding to the three-dimensional human face point cloud data;
if the face pose is not larger than a first preset angle, the primarily determined front face data is used as the preprocessed front three-dimensional face point cloud data, and the subsequent steps are not executed;
if the face pose is larger than the first preset angle and not larger than the second preset angle, continuing to perform the step of registering the primarily determined front face data according to the closest point iterative algorithm and a preset standard face; wherein the first preset angle is smaller than the second preset angle.
In the present embodiment, the first preset angle may be 30 degrees, and the second preset angle may be 90 degrees. The pose of the face to be recognized is judged by calculating the Euler angle between that face and the standard face, and the corresponding processing steps are selected. For example, when the face pose is not greater than 30 degrees, executing only up to S203 already yields accurate frontal face data, and the subsequent steps need not be executed. However, when the face pose is between 30 and 90 degrees, the nose tip point detected up to step S203 is still inaccurate, and it is difficult to obtain the frontal three-dimensional face point cloud data; the subsequent steps are required to locate the nose tip point accurately.
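A sketch of this pose gate, continuing the NumPy-based sketches above. Both the Euler-angle convention ('xyz') and the use of the largest absolute angle as the pose measure are assumptions; the patent fixes neither:

```python
from scipy.spatial.transform import Rotation

def needs_second_pass(T_align, small=30.0, large=90.0):
    """Return True if the re-registration (S204-S205) is needed."""
    angles = Rotation.from_matrix(T_align[:3, :3]).as_euler('xyz',
                                                            degrees=True)
    pose = float(np.max(np.abs(angles)))
    if pose <= small:     # pose <= 30 degrees: first-pass result suffices
        return False
    if pose <= large:     # 30 < pose <= 90 degrees: refine further
        return True
    raise ValueError("face pose beyond the range handled by this method")
```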
The following describes the above three-dimensional face data preprocessing method by using an implementation example, but the method is not limited thereto. Fig. 7 is a flowchart of an implementation of a three-dimensional face data preprocessing method according to an embodiment of the present invention. As shown in fig. 7, in this embodiment, the three-dimensional face data preprocessing method specifically includes the following steps:
step 1, three-dimensional face point cloud data acquired by point cloud data acquisition equipment is acquired to form a plurality of test faces. Testing human facesWherein, the number of points of the three-dimensional face point cloud data is X ═ X1,x2,…,xN),Y=(y1,y2,…,yN),Z=(z1,z2,…,zN) X, y, z coordinate sets, each row (x) in the matrix Fi,yi,zi) Corresponding to a point p in three-dimensional spacei(0<i<N) coordinate values. The x axis points to the left direction of the face, the y axis points to the right upper side of the face, and the z axis points to the front facing direction of the face.
Step 2, the plane that passes through the x-coordinate center x̄ of the three-dimensional face and is perpendicular to the x axis is taken as the symmetry plane, and each point pi is converted to its mirror point pi'(xi', yi', zi') by the mirror transformation formula xi' = 2x̄ − xi, yi' = yi, zi' = zi; the mirrored points form the mirror face F'.
Step 3, F and F' are aligned by the ICP (iterative closest point) algorithm to obtain the aligned face F'', and the conversion matrix M1 is recorded.
Step 4, since a plane can be determined by three non-collinear points, three non-collinear points a, b and c on the face F are first found; their corresponding points a', b' and c' on the mirror face F' are then found by the mirror transformation formula; finally, the points a'', b'' and c'' corresponding to these three points on the aligned face F'' are determined.
Step 5, by the symmetry of the face, a and a'', b and b'', and c and c'' can approximately be regarded as three symmetric pairs of points, and the midpoints of the three pairs lie on the symmetry plane that divides the face into two symmetric halves. The midpoints of the three symmetric pairs can be found by ma = (a + a'')/2, mb = (b + b'')/2 and mc = (c + c'')/2.
step 6, the following plane equation can be established by using the three points:
and 7, finally, forming a set as the center line of the face by using the point on the three-dimensional face to the point with the closest distance from the plane, namely the point with the Euclidean distance from the point to the plane less than 1 mm. To ensure the accuracy of the registration in the subsequent steps, the width of the established center line should be made 2 mm.
Step 8, a standard face is selected from the FRGC (Face Recognition Grand Challenge) three-dimensional face database, and its center line and nose tip point are determined.
Step 9, the center lines of the standard face and the target face are denoted A and B, respectively. First, A and B are aligned by the ICP algorithm, and the conversion matrix M2 of the alignment is recorded.
Step 10, the nose tip point k of the standard face is converted by M2; a ball with a radius of 37 mm is drawn with the converted point k as the center, and the maximum z-coordinate point p on B within the ball is found.
Step 11, a ball with a radius of 80 mm is drawn with point p as the center; the data within the ball are the initial front face data.
Step 12, the center line of the face corresponding to the initial front face data is aligned with the center line of the standard face by the ICP algorithm, the conversion matrix M3 of the alignment is recorded, the converted nose tip point p' is calculated by M3, a sphere with a radius of 25 mm is drawn with p' as the center, and the maximum z-coordinate point on B within the sphere is found.
Step 13, whether the distance between p and p' is greater than 2 mm is judged; if so, the newly found maximum z-coordinate point is taken as the new p, and steps 11 and 12 are repeated. Once the distance between the maximum z-coordinate points obtained in two consecutive iterations is less than 2 mm, the maximum z-coordinate point of the last iteration is taken as the initially determined position of the nose tip point.
Step 14, a spherical area with a radius of 120 mm is established with the maximum z-coordinate point obtained in the last iteration as the center, so as to primarily determine the front face data in the three-dimensional face point cloud data.
Step 15, the primarily determined front face data are registered with the standard face by the ICP algorithm, and the position of the nose tip point is determined again.
Step 16, with the position of the nose tip point determined again as the center, a final spherical area of the tested face with a radius of 100 mm is established, that is, the front three-dimensional face point cloud data are determined again.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Fig. 8 is a schematic structural diagram of a three-dimensional face data preprocessing device according to an embodiment of the present invention. As shown in fig. 8, the three-dimensional face data preprocessing device 8 includes:
and the data acquisition module 810 is used for acquiring the three-dimensional face point cloud data to be processed.
The first positioning module 820 is configured to primarily determine the position of a nose tip point in the three-dimensional face point cloud data according to a preset standard face, the three-dimensional face point cloud data, a three-dimensional face nose tip point detection algorithm, and a closest point iteration algorithm.
The first determining module 830 is configured to determine the front face data in the three-dimensional face point cloud data for the first time according to the position of the nose tip point determined for the first time.
And the second positioning module 840 is used for registering the primarily determined front face data according to the closest point iterative algorithm and a preset standard face, and determining the position of the nose tip point in the three-dimensional face point cloud data again.
And a second determining module 850, configured to determine the front face data in the three-dimensional face point cloud data again according to the position of the nose tip point determined again, and use the front face data as the preprocessed front three-dimensional face point cloud data.
Optionally, a second positioning module 840 for
Aligning the center line of the face corresponding to the primarily determined front face data with the center line of a preset standard face according to a closest point iterative algorithm, and recording a first conversion matrix in the alignment process;
converting the position of the nose tip point of the standard human face according to the first conversion matrix, and defining a first spherical area with the radius of a first preset value by taking the converted position of the nose tip point as the spherical center;
establishing a three-dimensional coordinate system in the first spherical area by taking the center of the sphere as an origin and taking the direction facing the front face of the aligned human face as the positive direction of the z axis;
and determining the maximum z value point of the center line of the face corresponding to the primarily determined front face data in the first spherical area, and taking the maximum z value point as the position of the secondarily determined nose tip point.
Optionally, the second determining module 850 is configured to define a second spherical area with a radius of a second preset value by taking the position of the re-determined nose tip point as a spherical center;
and determining the front face data in the three-dimensional face point cloud data again according to the second spherical area.
Optionally, the first determining module 830 is configured to align a center line of a face corresponding to the three-dimensional face point cloud data with a preset center line of a standard face according to a closest point iteration algorithm, and record a second transformation matrix in the alignment process;
converting the position of the nose tip point of the standard human face according to the second conversion matrix, and defining a third spherical area with the radius of a third preset value by taking the converted position of the nose tip point as the center of a sphere;
establishing a three-dimensional coordinate system in a third spherical area by taking the center of the sphere as an origin and taking the direction facing the front face of the aligned human face as the positive direction of the z axis;
determining the maximum z value point of the center line of the face corresponding to the three-dimensional face point cloud data in the third spherical area, and taking the maximum z value point as the position of the initial nose tip point;
defining a fourth spherical area with the radius of a fourth preset value by taking the position of the initial nose tip point as the sphere center so as to determine initial front face data in the three-dimensional face point cloud data;
and according to the initial front face data, primarily determining the position of a nose tip point in the three-dimensional face point cloud data.
Optionally, the first determining module 830 is specifically configured to align a center line of a face corresponding to the initial front face data with a center line of a preset standard face according to a closest point iteration algorithm, and record a third transformation matrix in the alignment process;
converting the position of the nose tip point of the standard human face according to the third conversion matrix, and defining a fifth spherical area with the radius of a fifth preset value by taking the converted position of the nose tip point as the spherical center;
establishing a three-dimensional coordinate system in the fifth spherical area by taking the center of the sphere as an origin and taking the direction facing the front face of the aligned human face as the positive direction of the z axis;
determining a maximum z value point of a center line of a face corresponding to the initial front face data in the fifth spherical area, and taking the maximum z value point as the position of a corrected nose tip point;
judging whether the distance between the corrected position of the nose tip point and the initial position of the nose tip point is smaller than a preset distance or not;
if the distance between the corrected position of the nose tip point and the initial position of the nose tip point is not less than the preset distance, taking the corrected position of the nose tip point as the initial position of the nose tip point, and jumping to a step of defining a fourth spherical area with the radius of a fourth preset value by taking the initial position of the nose tip point as the spherical center;
and if the distance between the corrected nose tip point position and the initial nose tip point position is smaller than the preset distance, taking the corrected nose tip point as the initially determined nose tip point position.
Optionally, the three-dimensional face data preprocessing device 8 further includes: a centerline extraction module 860.
A center line extraction module 860 for determining a mirror face of the three-dimensional face point cloud data;
aligning the face corresponding to the three-dimensional face point cloud data with the mirror face according to a closest point iteration algorithm to obtain an aligned face and recording a fourth conversion matrix in the alignment process;
and extracting the center line of the aligned face according to the fourth conversion matrix and the three-dimensional face center line extraction algorithm, and taking the center line as the center line of the face corresponding to the three-dimensional face point cloud data.
Optionally, the first determining module 830 is configured to determine the frontal face data in the three-dimensional face point cloud data for the first time according to the position of the nose tip point determined for the first time, and includes:
defining a sixth spherical area with the radius of a sixth preset value by taking the position of the nose tip point determined for the first time as the center of a sphere;
and according to the sixth spherical area, primarily determining the front face data in the three-dimensional face point cloud data.
Optionally, the three-dimensional face data preprocessing device 8 further includes: pose determination module 870.
The pose determining module 870 is configured to calculate an euler angle between the human face corresponding to the three-dimensional human face point cloud data and the standard human face to determine a human face pose of the human face corresponding to the three-dimensional human face point cloud data;
if the face pose is not larger than a first preset angle, the primarily determined front face data is used as the preprocessed front three-dimensional face point cloud data, and the subsequent steps are not executed;
if the face pose is larger than the first preset angle and not larger than the second preset angle, continuing to perform the step of registering the primarily determined front face data according to the closest point iterative algorithm and a preset standard face; wherein the first preset angle is smaller than the second preset angle.
The three-dimensional face data preprocessing device provided in this embodiment may be used to implement the above method embodiments, and the implementation principle and technical effect are similar, which are not described herein again.
Fig. 9 is a schematic diagram of an electronic device provided in an embodiment of the present invention. As shown in fig. 9, an embodiment of the present invention provides an electronic device 9, where the electronic device 9 of the embodiment includes: a processor 90, a memory 91, and a computer program 92 stored in the memory 91 and executable on the processor 90. The processor 90 executes the computer program 92 to implement the steps in the above-mentioned embodiments of the three-dimensional face data preprocessing method, such as the steps 201 to 205 shown in fig. 2. Alternatively, the processor 90, when executing the computer program 92, implements the functions of the various modules/units in the above-described apparatus embodiments, such as the functions of the modules 810 to 850 shown in fig. 8.
Illustratively, the computer program 92 may be partitioned into one or more modules/units, which are stored in the memory 91 and executed by the processor 90 to implement the present invention. One or more of the modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 92 in the electronic device 9.
The electronic device 9 may be a computing device such as an independent physical server, a server cluster, and a cloud server. Those skilled in the art will appreciate that fig. 9 is merely an example of the electronic device 9, does not constitute a limitation of the electronic device 9, and may include more or fewer components than illustrated, or some components in combination, or different components.
The Processor 90 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The storage 91 may be an internal storage unit of the electronic device 9, such as a hard disk or a memory of the electronic device 9. The memory 91 may also be an external storage device of the electronic device 9, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the electronic device 9. Further, the memory 91 may also include both an internal storage unit of the electronic device 9 and an external storage device. The memory 91 is used for storing computer programs and other programs and data required by the terminal. The memory 91 may also be used to temporarily store data that has been output or is to be output.
The embodiment of the invention provides a computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the steps in the embodiment of the three-dimensional face data preprocessing method are realized.
The computer-readable storage medium stores a computer program 92, and the computer program 92 includes program instructions. All or part of the processes in the methods of the above embodiments may be implemented by the computer program 92 instructing related hardware; the computer program 92 may be stored in a computer-readable storage medium, and when executed by the processor 90, implements the steps of the above method embodiments. The computer program 92 comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals according to legislation and patent practice.
The computer readable storage medium may be an internal storage unit of the terminal of any of the foregoing embodiments, for example, a hard disk or a memory of the terminal. The computer readable storage medium may also be an external storage device of the terminal, such as a plug-in hard disk provided on the terminal, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. Further, the computer-readable storage medium may also include both an internal storage unit and an external storage device of the terminal. The computer-readable storage medium is used for storing a computer program and other programs and data required by the terminal. The computer-readable storage medium may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal and method may be implemented in other ways. For example, the above-described apparatus/terminal embodiments are merely illustrative, and for example, a module or a unit may be divided into only one logical function, and may be implemented in other ways, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the above embodiments of the present invention may also be implemented by a computer program instructing related hardware. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals according to legislation and patent practice.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.
Claims (10)
1. A three-dimensional face data preprocessing method is characterized by comprising the following steps:
acquiring three-dimensional face point cloud data to be processed;
primarily determining the position of a nose tip point in the three-dimensional face point cloud data according to a preset standard face, the three-dimensional face point cloud data, a three-dimensional face nose tip point detection algorithm and a closest point iteration algorithm;
according to the position of the nose tip point determined for the first time, determining front face data in the three-dimensional face point cloud data for the first time;
registering the primarily determined front face data according to a closest point iterative algorithm and a preset standard face, and determining the position of a nose tip point in the three-dimensional face point cloud data again;
and according to the position of the nose tip point determined again, determining the front face data in the three-dimensional face point cloud data again, and taking the front face data as the preprocessed front three-dimensional face point cloud data.
2. The method for preprocessing three-dimensional face data according to claim 1, wherein the registration of the primarily determined front face data is performed according to a closest point iterative algorithm and a preset standard face, and the position of the nose tip point in the three-dimensional face point cloud data is determined again, including:
aligning the center line of the face corresponding to the primarily determined front face data with the center line of a preset standard face according to a closest point iterative algorithm, and recording a first conversion matrix in the alignment process;
converting the position of the nose tip point of the standard human face according to the first conversion matrix, and defining a first spherical area with the radius of a first preset value by taking the converted position of the nose tip point as the spherical center;
establishing a three-dimensional coordinate system in the first spherical area by taking the center of sphere as an origin and taking the direction facing the front face of the aligned human face as the positive direction of the z axis;
and determining the maximum z value point of the center line of the face corresponding to the primarily determined front face data in the first spherical area, and taking the maximum z value point as the position of the secondarily determined nose tip point.
3. The method for preprocessing three-dimensional face data according to claim 1, wherein the re-determining the front face data in the three-dimensional face point cloud data according to the re-determined position of the nose tip point comprises:
defining a second spherical area with the radius of a second preset value by taking the position of the nose tip point determined again as the center of sphere;
and determining the front face data in the three-dimensional face point cloud data again according to the second spherical area.
4. The method for preprocessing three-dimensional face data according to claim 1, wherein the primarily determining of the position of the nose tip point in the three-dimensional face point cloud data according to the preset standard face, the three-dimensional face point cloud data, a three-dimensional face nose tip point detection algorithm and the closest point iterative algorithm comprises:
aligning the center line of the face corresponding to the three-dimensional face point cloud data with the center line of the preset standard face according to the closest point iterative algorithm, and recording a second conversion matrix of the alignment process;
converting the position of the nose tip point of the standard face according to the second conversion matrix, and defining a third spherical area with the converted position of the nose tip point as the sphere center and a third preset value as the radius;
establishing a three-dimensional coordinate system in the third spherical area with the sphere center as the origin and the direction facing the front of the aligned face as the positive direction of the z axis;
determining the point with the maximum z value on the center line of the face corresponding to the three-dimensional face point cloud data within the third spherical area, and taking this point as the position of an initial nose tip point;
defining a fourth spherical area with the position of the initial nose tip point as the sphere center and a fourth preset value as the radius, so as to determine initial front face data in the three-dimensional face point cloud data;
and primarily determining the position of the nose tip point in the three-dimensional face point cloud data according to the initial front face data.
5. The method according to claim 4, wherein the primarily determining of the position of the nose tip point in the three-dimensional face point cloud data according to the initial front face data comprises:
aligning the center line of the face corresponding to the initial front face data with the center line of the preset standard face according to the closest point iterative algorithm, and recording a third conversion matrix of the alignment process;
converting the position of the nose tip point of the standard face according to the third conversion matrix, and defining a fifth spherical area with the converted position of the nose tip point as the sphere center and a fifth preset value as the radius;
establishing a three-dimensional coordinate system in the fifth spherical area with the sphere center as the origin and the direction facing the front of the aligned face as the positive direction of the z axis;
determining the point with the maximum z value on the center line of the face corresponding to the initial front face data within the fifth spherical area, and taking this point as the position of a corrected nose tip point;
judging whether the distance between the position of the corrected nose tip point and the position of the initial nose tip point is smaller than a preset distance;
if the distance is not smaller than the preset distance, taking the position of the corrected nose tip point as the position of the initial nose tip point, and returning to the step of defining the fourth spherical area with the position of the initial nose tip point as the sphere center and the fourth preset value as the radius;
and if the distance is smaller than the preset distance, taking the position of the corrected nose tip point as the primarily determined position of the nose tip point.
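Claims 4 and 5 together describe a fixed-point iteration: detect a provisional tip, crop a sphere around it, re-detect on the crop, and stop once the tip moves less than the preset distance. A sketch, assuming the `redetect_nose_tip` and `crop_sphere` helpers sketched after claims 2 and 3 are in scope; the 1 mm tolerance and 90 mm radius are placeholders for the unspecified presets, and the iteration cap is a safety guard of mine, not part of the claims.

```python
import numpy as np

def initial_nose_tip(face_pts: np.ndarray, std_pts: np.ndarray,
                     std_nose_tip: np.ndarray, crop_radius: float = 90.0,
                     tol: float = 1.0, max_iter: int = 10) -> np.ndarray:
    """Iterate detect -> crop -> re-detect until the nose tip estimate
    moves by less than `tol` (the claims' "preset distance")."""
    tip = redetect_nose_tip(face_pts, std_pts, std_nose_tip)  # initial tip, claim 4
    for _ in range(max_iter):
        frontal = crop_sphere(face_pts, tip, crop_radius)     # "fourth spherical area"
        new_tip = redetect_nose_tip(frontal, std_pts, std_nose_tip)  # corrected tip
        if np.linalg.norm(new_tip - tip) < tol:               # converged, claim 5
            return new_tip
        tip = new_tip                                         # otherwise iterate again
    return tip
```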
6. The method for preprocessing three-dimensional face data according to claim 1, further comprising, after acquiring the three-dimensional face point cloud data to be processed:
determining a mirror image face of the three-dimensional face point cloud data;
aligning the face corresponding to the three-dimensional face point cloud data with the mirror image face according to the closest point iterative algorithm to obtain an aligned face, and recording a fourth conversion matrix of the alignment process;
and extracting the center line of the aligned face according to the fourth conversion matrix and a three-dimensional face center line extraction algorithm, and taking the extracted center line as the center line of the face corresponding to the three-dimensional face point cloud data.
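Claim 6 recovers the face center line from bilateral symmetry: register the cloud against its own mirror image and keep the points that the symmetry map leaves (almost) fixed. A self-contained sketch using Open3D's ICP; mirroring across the x = 0 plane and the 2 mm tolerance are assumptions, and the patent's actual center line extraction algorithm may differ.

```python
import numpy as np
import open3d as o3d

def face_center_line(face_pts: np.ndarray, eps: float = 2.0) -> np.ndarray:
    """Approximate the center line as the points lying on the face's
    symmetry plane, found by mirror-image ICP registration."""
    mirror = face_pts * np.array([-1.0, 1.0, 1.0])  # mirror image face (reflect x)
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(mirror))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(face_pts))
    reg = o3d.pipelines.registration.registration_icp(
        src, tgt, 30.0, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    # The 4x4 transform is the claim's "fourth conversion matrix": it maps
    # each mirrored point onto its symmetric counterpart on the face.
    hom = np.c_[mirror, np.ones(len(mirror))]
    counterpart = (reg.transformation @ hom.T).T[:, :3]
    # Points near the symmetry plane are nearly fixed by the mirror-and-
    # register map, so their displacement is small: keep those points.
    displacement = np.linalg.norm(face_pts - counterpart, axis=1)
    return face_pts[displacement < eps]
```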
7. The method for preprocessing three-dimensional face data according to claim 1, wherein the primarily determining of the front face data in the three-dimensional face point cloud data according to the primarily determined position of the nose tip point comprises:
defining a sixth spherical area with the primarily determined position of the nose tip point as the sphere center and a sixth preset value as the radius;
and primarily determining the front face data in the three-dimensional face point cloud data according to the sixth spherical area.
8. The method according to any one of claims 1 to 7, wherein after the primarily determining of the front face data in the three-dimensional face point cloud data according to the primarily determined position of the nose tip point, the method further comprises:
calculating the Euler angles between the face corresponding to the three-dimensional face point cloud data and the standard face, so as to determine the face pose of the face corresponding to the three-dimensional face point cloud data;
if the face pose is not larger than a first preset angle, taking the primarily determined front face data as the preprocessed front three-dimensional face point cloud data and skipping the subsequent steps;
and if the face pose is larger than the first preset angle and not larger than a second preset angle, continuing with the step of registering the primarily determined front face data according to the closest point iterative algorithm and the preset standard face; wherein the first preset angle is smaller than the second preset angle.
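A sketch of the pose gate of claim 8: take the rotation part of the face-to-standard-face conversion matrix, convert it to Euler angles, and branch on the preset thresholds. The 30°/60° values and the use of the largest absolute angle as the scalar "face pose" are assumptions; the claims leave both the presets and the behaviour beyond the second angle open.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_branch(T: np.ndarray, first_angle: float = 30.0,
                second_angle: float = 60.0) -> str:
    """Decide which branch of claim 8 applies, given the 4x4 transform
    aligning the face with the standard face."""
    yaw, pitch, roll = Rotation.from_matrix(T[:3, :3]).as_euler("yxz", degrees=True)
    pose = max(abs(yaw), abs(pitch), abs(roll))  # one simple scalar pose measure
    if pose <= first_angle:
        return "accept the first-pass frontal data; skip the second pass"
    if pose <= second_angle:
        return "register the first-pass data against the standard face again"
    return "pose out of range (behaviour not specified by the claims)"
```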
9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method for preprocessing three-dimensional face data according to any one of claims 1 to 8 when executing the computer program.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the three-dimensional face data preprocessing method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111334942.XA CN113947799B (en) | 2021-11-11 | 2021-11-11 | Three-dimensional face data preprocessing method and equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113947799A (en) | 2022-01-18
CN113947799B CN113947799B (en) | 2023-03-14 |
Family
ID=79337876
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111334942.XA Active CN113947799B (en) | 2021-11-11 | 2021-11-11 | Three-dimensional face data preprocessing method and equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113947799B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US12260673B1 (en) * | 2024-01-25 | 2025-03-25 | Jianghan University | Facial acupoint locating method, acupuncture method, acupuncture robot and storage medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104408399A (en) * | 2014-10-28 | 2015-03-11 | 小米科技有限责任公司 | Face image processing method and apparatus |
CN104978549A (en) * | 2014-04-03 | 2015-10-14 | 北京邮电大学 | Three-dimensional face image feature extraction method and system |
JP2016099759A (en) * | 2014-11-20 | 2016-05-30 | 国立大学法人静岡大学 | Face detection method, face detection device, and face detection program |
CN107609465A (en) * | 2017-07-25 | 2018-01-19 | 北京联合大学 | A kind of multi-dimension testing method for Face datection |
CN107729806A (en) * | 2017-09-05 | 2018-02-23 | 西安理工大学 | Single-view Pose-varied face recognition method based on three-dimensional facial reconstruction |
CN108615016A (en) * | 2018-04-28 | 2018-10-02 | 北京华捷艾米科技有限公司 | Face critical point detection method and face critical point detection device |
WO2019128932A1 (en) * | 2017-12-25 | 2019-07-04 | 北京市商汤科技开发有限公司 | Face pose analysis method and apparatus, device, storage medium, and program |
CN113158892A (en) * | 2021-04-20 | 2021-07-23 | 南京大学 | Face recognition method irrelevant to textures and expressions |
Also Published As
Publication number | Publication date |
---|---|
CN113947799B (en) | 2023-03-14 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Effective date of registration: 20230719
Address after: 050051 Floor 5, No. 2, Xiangyi Road, Xinhua District, Shijiazhuang City, Hebei Province
Patentee after: Hebei Linghe Computer Information Technology Co.,Ltd.
Address before: 050035 Shijiazhuang University, No. 288, Zhufeng street, high tech Zone, Shijiazhuang City, Hebei Province
Patentee before: SHIJIAZHUANG University