CN112241933B - Face image processing method and device, storage medium and electronic equipment - Google Patents
- Publication number
- CN112241933B (application CN202010681269.6A)
- Authority
- CN
- China
- Prior art keywords
- face
- image
- data
- pixel point
- face image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/18—Image warping, e.g. rearranging pixels individually
Abstract
The embodiment of the application provides a face image processing method and apparatus, a storage medium and an electronic device. Original face image data is acquired and a face mesh image is generated from it; an offset vector of each pixel point on the face mesh image is determined according to a normal texture map; each pixel coordinate on the face mesh image is offset according to the offset vector of the corresponding pixel point to obtain deformed face image data; and the deformed face image data is rendered to obtain the deformed face image. Because an offset vector is computed for every pixel point when the deformation data is acquired, without complex operations such as square roots, the accuracy and fluency of face image processing are improved, and so is the user experience.
Description
Technical Field
The embodiment of the application relates to the technical field of image processing, in particular to a face image processing method, a face image processing device, a storage medium and electronic equipment.
Background
With the popularization of mobile terminals and the rapid development of image processing technology, recognizing and processing the face images in photos or videos to achieve effects such as beautification, deformation and face swapping has become widespread. By installing corresponding applications on a mobile terminal, users can process their photos or videos so that they show different effects, which greatly enriches daily life. In addition, in recent years, combining face image processing with technologies such as virtual reality and augmented reality has also driven the rise, transformation and development of related industries.
Face image deformation is a face image processing technique that deforms key parts or the contour of a face. In the prior art, achieving a desired deformation effect requires four steps: image template screening, image preprocessing, deformation data acquisition and image processing. When the deformation data is acquired to calculate pixel offsets, circular or elliptical attenuated offsets are applied with the key points determined during image template screening as centers, and the offsets of the pixel points in the area near each key point are calculated.
In the prior art, the mobile terminal is prone to stuttering during the deformation processing of a face image, and the accuracy of the processed face image is low, so deformation processing of face images with the prior art suffers from poor user experience.
Disclosure of Invention
The embodiment of the application provides a face image processing method, an apparatus, a storage medium and an electronic device, so as to solve the prior-art problem of poor user experience during the deformation processing of face images.
In a first aspect, an embodiment of the present application provides a face image processing method, including:
acquiring original face image data;
generating a face grid image according to the original face image data;
determining an offset vector of each pixel point on the face grid image according to a normal texture map, wherein the normal texture map represents the deformation trend and deformation strength of the face image through two-dimensional color data;
performing offset processing on each pixel coordinate on the face grid image according to the offset vector of each pixel point on the face grid image to obtain deformed face image data;
rendering the deformed face image data to obtain a deformed face image.
Optionally, the determining an offset vector of each pixel point on the face mesh image according to the normal texture map includes:
establishing a mapping relation between the face grid image and the normal texture map;
sequentially sampling the two-dimensional color data of each pixel point on the normal texture map according to the mapping relation;
and determining an offset vector of each pixel point on the face grid image according to the two-dimensional color data.
Optionally, the establishing a mapping relationship between the face mesh image and the normal texture map includes:
determining texture coordinates of the face grid image;
Establishing a coordinate mapping relation between texture coordinates of the face grid image and texture coordinates of the normal texture map;
And establishing a pixel mapping relation between the pixel points of the face grid image and the pixel points of the normal texture mapping according to the coordinate mapping relation.
Optionally, the sequentially sampling the two-dimensional color data of each pixel point on the normal texture map according to the mapping relationship includes:
and sequentially sampling the two-dimensional color data of each pixel point on the normal texture map according to a preset sampling sequence according to the coordinate mapping relation between the texture coordinates of the face grid image and the texture coordinates of the normal texture map.
Optionally, the determining an offset vector of each pixel point on the face mesh image according to the two-dimensional color data includes:
converting the two-dimensional color data into coordinate data in a two-dimensional coordinate system, wherein the two-dimensional coordinate system is a plane rectangular coordinate system whose x-axis and y-axis values range over [-1, 1];
and determining an offset vector of each pixel point on the face grid image according to the x-axis coordinate value and the y-axis coordinate value of the coordinate data.
Optionally, the converting the two-dimensional color data into coordinate data in a two-dimensional coordinate system includes:
and converting the two-dimensional color data into coordinate data in the two-dimensional coordinate system according to the coordinate reference value of the two-dimensional coordinate system.
Optionally, the performing offset processing on each pixel coordinate on the face mesh image according to the offset vector of each pixel point on the face mesh image to obtain deformed face image data includes:
transforming the offset vector of each pixel point on the face grid image into a target offset vector corresponding to the screen resolution of the terminal equipment;
and obtaining the deformed face image data according to the target offset vector of each pixel point on the face grid image and the pixel coordinates of the corresponding pixel point on the face grid image.
Optionally, the rendering the deformed face image data to obtain a deformed face image includes:
and rendering the deformed face image data through a two-dimensional texture processing function to obtain a deformed face image.
Optionally, before determining the offset vector of each pixel point on the face mesh image according to the normal texture map, the method further includes:
determining the normal texture map to be used.
Optionally, the determining the normal texture map to be used includes:
selecting a normal texture map corresponding to the deformation effect from a plurality of preset normal texture maps according to a face processing instruction input by a user, wherein the face processing instruction is used for indicating the deformation effect required by the user;
or
Determining the type of an application currently used by a user;
According to the application type, determining the normal texture mapping required to be used.
Optionally, the generating a face mesh image according to the original face image data includes:
according to a face key point detection algorithm, determining face key points from the original face image data;
determining auxiliary sites according to the distribution of the face key points;
Constructing triangle mesh data by using nearest face key points and auxiliary sites according to a triangle stabilization principle;
Rendering the original face image data to obtain a base map, and marking the base map through the triangle mesh data to obtain the face mesh image.
Optionally, the acquiring the original face image data includes:
acquiring original face image data acquired by a camera in real time;
or
Raw face image data stored in an image application is acquired.
Optionally, the two-dimensional color data in the normal texture map is composed of color values of R channels and color values of G channels.
In a second aspect, an embodiment of the present application provides a face image processing apparatus, including:
The acquisition module is used for acquiring the original face image data;
The processing module is used for generating a face grid image according to the original face image data; determining an offset vector of each pixel point on the face grid image according to a normal texture map, wherein the normal texture map is a deformation trend and deformation strength of the face image represented by two-dimensional color data; performing offset processing on each pixel coordinate on the face grid image according to the offset vector of each pixel point on the face grid image to obtain deformed face image data; rendering the deformed face image data to obtain a deformed face image.
In a third aspect, an embodiment of the present application provides a storage medium storing a computer program for implementing the face image processing method as described above.
In a fourth aspect, an embodiment of the present application provides an electronic device, including: a memory and a processor; the memory is used for storing a computer program, and the processor executes the computer program to realize the face image processing method.
According to the face image processing method and apparatus, the storage medium and the electronic device provided by the application, a face grid image is generated on the basis of the original face image data, and a normal texture map is used to store the normal of each pixel point. The offset vector of each pixel point on the face grid image is determined according to the normal texture map, each pixel coordinate on the face grid image is offset according to the offset vector of the corresponding pixel point to obtain deformed face image data, and the deformed face image data is rendered to obtain the deformed face image. The normal of each pixel point, that is, its deformation trend and deformation strength, can be determined simply by reading the two-dimensional color data of the normal texture map. In this process an offset vector is calculated for every pixel point, and no complex operations such as square roots are involved, so the accuracy and fluency of face image processing are improved, which benefits the user experience.
Drawings
In order to more clearly illustrate the technical solutions of the application or of the prior art, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. It is obvious that the drawings described below show some embodiments of the application, and that a person skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of a face image processing method according to an embodiment of the present application;
fig. 3 is a schematic distribution diagram of key points and auxiliary points of a face according to an embodiment of the present application;
fig. 4 is a schematic diagram of a face mesh image according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a normal texture map according to an embodiment of the present application;
fig. 6 is a schematic diagram of a processing procedure of a face image processing method according to an embodiment of the present application;
fig. 7 is a schematic flow chart of a face image processing method according to a second embodiment of the present application;
FIG. 8 is a schematic diagram of a mapping relationship between a face mesh image and a normal texture map according to an embodiment of the present application;
Fig. 9 is a schematic flow chart of a face image processing method according to a third embodiment of the present application;
FIG. 10 is a schematic diagram illustrating an operation of determining a normal texture map to be used according to an embodiment of the present application;
fig. 11 is an operation schematic diagram of a terminal device according to an embodiment of the present application;
fig. 12 is a schematic diagram of an operation interface for face image processing according to an embodiment of the present application;
Fig. 13 is a schematic diagram of an operation interface for processing a face image according to another embodiment of the present application;
Fig. 14 is a schematic structural diagram of an embodiment of a face image processing apparatus according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of an embodiment of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the accompanying claims.
The application provides a technical solution for face image processing, which processes the corresponding video or photo according to the user's needs so as to achieve an expected or specific face deformation effect, adapting to different application scenarios or enriching the user's shooting experience. By way of example, virtual makeup try-on is an online sales-assistance mode that has emerged recently. While the user tries makeup on virtually, face deformation effects such as enlarged eyes and a slimmer face are achieved through the face image deformation technique, and the made-up face is presented to the user; even without visiting a store, the user can experience the effects of different makeup products, which makes shopping more convenient.
The embodiments of the application describe the technical solution by taking a terminal device as an example. It can be understood that the technical solution of the application can also be used on electronic devices with an image processing function, such as a computer, a server or a camera.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application, taking the large-eye and thin-face deformation effect as an example. As shown in Fig. 1, the left image is the original face image obtained by the terminal device; the terminal device processes the original face image to obtain the deformed face image shown in the right image.
The main idea of the technical solution of the application is as follows: the face image deformation scheme in the prior art comprises four steps, namely image template screening, image preprocessing, deformation data acquisition and image processing, where deformation data acquisition means calculating the offsets of pixels. In the prior art, pixel offsets are calculated by applying circular or elliptical attenuated offsets centered on each key point, computing the offsets of the pixel points in the area near the corresponding key point. On the one hand, since the offset of each individual pixel point cannot be calculated accurately, the accuracy of face image processing is not high; on the other hand, the circular or elliptical attenuated offset centered on a key point involves complex operations such as square roots, so the overhead on the graphics processing unit (GPU) is large, and delay and stutter easily occur on the mobile terminal during the deformation processing of the face image. Therefore, face image deformation processing in the prior art suffers from poor user experience.
To address these problems in the prior art, the technical solution of the application calculates the pixel offsets according to a pre-constructed normal texture map corresponding to a specific deformation effect: a mapping relationship is established between the normal texture map and the face image to be processed, and the color value of the corresponding pixel in the normal texture map is sampled, thereby determining the offset of each pixel on the face image to be processed.
Fig. 2 is a schematic flow chart of the first embodiment of the face image processing method provided by the embodiment of the present application. The execution body of this embodiment is a terminal device, for example a mobile phone or a tablet computer. As shown in Fig. 2, the face image processing method of this embodiment includes:
s101, acquiring original face image data.
In this step, in order to obtain the deformed face image, the original face image data corresponding to the face image before deformation processing needs to be acquired first, as the basis for the image processing in the subsequent steps. Specifically, the terminal device may recognize a face from a corresponding picture or video and acquire the corresponding face image data; the pictures or videos may be captured in real time by a camera or be stored on the terminal device.
Illustratively, when a camera of the terminal device performs picture taking with respect to a face, original face image data is acquired.
Illustratively, when the terminal device captures one frame of image in the video recording process, original face image data is obtained.
Illustratively, the terminal device reads a corresponding photo or video stored in an image application such as an album to acquire the original face image data.
S102, generating a face grid image according to the original face image data.
In this step, the face mesh image is obtained by processing the original face image data acquired in S101. Specifically, points representing the outline and key parts of the face in the original face image are first identified through a face key point detection algorithm, and auxiliary points are determined according to the needs of building the face mesh and the distribution of the face key points. Fig. 3 is a schematic distribution diagram of face key points and auxiliary points provided in an embodiment of the present application. As shown in Fig. 3, after the 99 face key points P1-P99 are identified by the face key point detection algorithm, the 13 auxiliary points S1-S13 are determined according to the distribution of these 99 key points.
The face key point detection algorithm is an algorithm for automatically identifying a specific representative area of a face, and key points of areas such as eyes, nose, mouth, eyebrows, facial contours and the like of the face can be identified through the face key point detection algorithm.
Auxiliary points play an auxiliary positioning role; they may be points on the original face image or points outside it. They can be determined from the distribution of the face key points by a preset algorithm, with the aim of constructing more stable triangles.
The triangle stabilization principle means ensuring that the triangles are acute triangles as far as possible, so as to avoid forming obtuse triangles. In the embodiment of the application, when the triangle mesh data is constructed, the nearest face key points and auxiliary points are used as vertices wherever possible, and the constructed triangles are kept as stable as possible, thereby improving the quality of the image.
Then, triangle mesh data is constructed by taking the nearest face key points and auxiliary points as vertices according to the triangle stabilization principle; the original face image data acquired in step S101 is rendered in an ordinary rendering manner as a base map, and the triangle mesh data is marked on the base map, finally obtaining the face mesh image. For example, Fig. 4 is a schematic diagram of a face mesh image provided by an embodiment of the present application. As shown in Fig. 4, the face mesh image contains a series of triangles. The vertices of a triangle may all be auxiliary points, such as triangle A formed by auxiliary points S5, S7 and S11; or all face key points, such as triangle B formed by face key points P1, P59 and P68; or a mixture of face key points and auxiliary points, such as triangle C formed by face key points P5 and P69 and auxiliary point S9. This is not strictly limited here, as long as the triangle stabilization principle and the nearest-point principle are satisfied.
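By way of illustration, the following Python sketch shows one possible construction of the triangle mesh data described above, assuming Delaunay triangulation as a concrete way of satisfying the triangle stabilization and nearest-point principles (Delaunay triangulation maximizes the minimum angle of the triangles). The auxiliary-point placement on the image border and the function names are illustrative assumptions, not taken from the patent.

```python
import numpy as np
from scipy.spatial import Delaunay

def build_face_mesh(keypoints, image_size):
    """Build triangle mesh data from detected face key points.

    keypoints: (N, 2) array of face key points (e.g. P1-P99 from a key
    point detector). Hypothetical auxiliary points are added on the
    image border to help form stable, near-acute triangles.
    """
    w, h = image_size
    aux = np.array([[0, 0], [w / 2, 0], [w, 0],
                    [0, h / 2], [w, h / 2],
                    [0, h], [w / 2, h], [w, h]], dtype=np.float64)
    points = np.vstack([np.asarray(keypoints, dtype=np.float64), aux])
    tri = Delaunay(points)           # connects nearby points, maximizes minimum angle
    return points, tri.simplices     # vertices and triangle index triples
```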
S103, determining an offset vector of each pixel point on the face grid image according to the normal texture mapping.
In this step, in order to achieve the final face deformation goal, the embodiment of the application adopts a normal texture mapping approach. After the face mesh image is obtained, the terminal device obtains the corresponding normal texture map and determines the offset vector of each pixel point on the face mesh image according to it, that is, by reading the two-dimensional color data on the normal texture map. No complex multiplication, division or square-root operations are involved in this process, so the GPU overhead is small and the user gets a smooth experience.
The offset vector includes an offset direction and an offset size, and is used for determining which direction the corresponding pixel point should be offset to, and the size of the offset amount that is offset to the direction.
Because the technical solution of the embodiment of the application deforms a two-dimensional face image, each normal is stored in the normal texture map as two-dimensional color data, and each two-dimensional color value represents the deformation trend and deformation strength of the corresponding pixel point. Correspondingly, the two-dimensional color data in the normal texture map may consist of the color data of any two of the RGB (red, green, blue) color channels, which is not limited here.
Fig. 5 is a schematic diagram of a normal texture map according to an embodiment of the present application. As shown in Fig. 5, the two-dimensional color data in the normal texture map consists of the color values of the R channel and the color values of the G channel.
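By way of illustration, the following Python sketch builds such an R/G normal texture map for a hypothetical thin-face effect, following the semantics described in the second embodiment below (0.5 means no offset; R above/below 0.5 shifts right/left). The cheek regions and the values 0.4/0.6 are illustrative assumptions, not the patent's actual map.

```python
import numpy as np

def make_thin_face_normal_map(width, height):
    """Build a hypothetical R/G normal texture map for a thin-face effect.

    (0.5, 0.5) encodes "no offset"; R > 0.5 shifts a pixel to the right
    and R < 0.5 shifts it to the left, so both cheeks are pulled inward.
    """
    rg = np.full((height, width, 2), 0.5, dtype=np.float32)
    rows = slice(int(0.45 * height), int(0.75 * height))     # rough cheek height
    rg[rows, int(0.10 * width):int(0.30 * width), 0] = 0.6   # left cheek: shift right
    rg[rows, int(0.70 * width):int(0.90 * width), 0] = 0.4   # right cheek: shift left
    return rg
```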
S104, carrying out offset processing on each pixel coordinate on the face grid image according to the offset vector of each pixel point on the face grid image to obtain deformed face image data.
In this step, in order to finally obtain the deformed face image, the deformed face image data used for image rendering needs to be obtained first. Specifically, each pixel coordinate on the face mesh image is offset according to the offset vector of the corresponding pixel point obtained in S103, giving the deformed face image data.
In one possible implementation, the screen resolution of the terminal device is obtained, and deformed face image data matched to the actual display screen is computed based on it. Specifically, the offset vector of each pixel point on the face mesh image is converted into a target offset vector corresponding to the screen resolution of the terminal device, and the deformed face image data is obtained from the target offset vector of each pixel point together with the pixel coordinates of the corresponding pixel point on the face mesh image.
The target offset vector is a vector obtained by transforming the offset vector of each pixel on the face grid image according to the screen resolution of the terminal equipment.
Because the deformed face image is finally presented to the user through the terminal device, the deformed face image data must match the resolution of that device; otherwise the final image may be displayed with distorted proportions. Since different users' terminal devices may have different resolutions, converting to a target offset vector adapts the result to each device.
And S105, rendering the deformed face image data to obtain a deformed face image.
In this step, the deformed face image is obtained by rendering the deformed face image data obtained in S104. It can be understood that, in order to improve the rendering effect, when the deformed face image data is rendered, the deformed face image data may be rendered according to the face mesh data obtained in S102, so as to obtain a deformed face image.
The implementation procedure of the present embodiment will be described below with a specific example:
Fig. 6 is a schematic diagram of the processing procedure of a face image processing method according to an embodiment of the present application, taking the large-eye and thin-face deformation effect as an example. As shown in Fig. 6, the face key points and auxiliary points in diagram (a) are determined from the original face image data, and a triangle network is constructed with these points as vertices, giving the face mesh image in (b). The offset vector of each pixel point in the face mesh image of (b) is determined through the normal texture map in (c), which corresponds to the large-eye and thin-face deformation effect. Finally, each pixel point of the face mesh image is offset according to the obtained offset vectors to produce the deformed face image data, and that data is rendered according to the constructed triangle mesh data, obtaining a face image with the large-eye and thin-face effect.
In this embodiment, a face mesh image is generated on the basis of the original face image data, and a normal texture map is used to store the normal of each pixel point. The offset vector of each pixel point on the face mesh image is determined according to the normal texture map, each pixel coordinate on the face mesh image is offset according to the offset vector of the corresponding pixel point to obtain the deformed face image data, and the deformed face image data is rendered to obtain the deformed face image.
Fig. 7 is a schematic flow chart of the second embodiment of the face image processing method according to the present application. On the basis of the first embodiment, determining the offset vector of each pixel point on the face mesh image according to the normal texture map includes:
S201, establishing a mapping relation between the face grid image and the normal texture mapping.
In this step, since the normal texture map and the face mesh image are independent of each other, in order to determine an offset vector of each pixel point on the face mesh image according to the normal texture map, a mapping relationship between the face mesh image and the normal texture map needs to be established first, so that the normal texture map and the face mesh image are "attached" to each other.
Fig. 8 is a schematic diagram of the mapping relationship between a face mesh image and a normal texture map according to an embodiment of the present application. As shown in Fig. 8, the mapping relationship between the two may be established by the following procedure:
(1) Define the texture coordinates of the lower-left corner vertex of the face mesh image as (0, 0), the upper-left as (0, 1), the lower-right as (1, 0), and the upper-right as (1, 1); then determine the texture coordinates of the other pixel points of the face mesh image in turn from these four corner coordinates.
(2) Establish the coordinate mapping relationship between the texture coordinates of the four corner vertices of the face mesh image and the texture coordinates of the four corners of the normal texture map, and further determine the coordinate mapping relationship between the texture coordinates of the other vertices of the face mesh image and the texture coordinates of the normal texture map.
(3) Determine the pixel mapping relationship between each pixel point of the face mesh image and each pixel point of the normal texture map according to the coordinate mapping relationship between the texture coordinates of each vertex on the face mesh image and the texture coordinates of the normal texture map.
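A minimal Python sketch of step (1) follows, assuming the face mesh image is stored with pixel row 0 at the top, so the v coordinate is flipped to place (0, 0) at the lower-left corner; the function name is an illustrative assumption.

```python
import numpy as np

def to_texture_coords(pixel_xy, image_size):
    """Map a pixel coordinate on the face mesh image to texture
    coordinates (u, v) in [0, 1] x [0, 1], with (0, 0) at the
    lower-left corner and (1, 1) at the upper-right corner."""
    w, h = image_size
    x, y = pixel_xy
    return np.array([x / (w - 1), 1.0 - y / (h - 1)])  # flip y: image row 0 is the top
```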
S202, sequentially sampling two-dimensional color data of each pixel point on the normal texture map according to the mapping relation.
In this step, on the premise that the normal texture map is "attached" to the face mesh image, two-dimensional color data of each pixel point on the normal texture map is sampled sequentially.
In one possible implementation, according to a coordinate mapping relationship between texture coordinates of the face mesh image and texture coordinates of the normal texture map, two-dimensional color data of each pixel point on the normal texture map is sampled sequentially according to a preset sampling sequence.
The preset sampling order starts from a certain pixel point and visits all pixel points in turn; for example, it may start from the vertex (0, 0) and proceed from left to right and from bottom to top.
For example, with continued reference to Fig. 8, Q(u, v) in the normal texture map is the two-dimensional color data corresponding to P(0.5, 0.69), where u and v are the color values of the corresponding color channels at that pixel point.
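The sampling step can be sketched in Python as follows; nearest-neighbor lookup is assumed for simplicity, whereas a GPU texture unit would typically filter bilinearly. The array layout, an (H, W, 2) float array of R/G values, is an illustrative assumption.

```python
import numpy as np

def sample_normal_map(normal_map, u, v):
    """Sample the two-dimensional color data (R, G) of the normal
    texture map at texture coordinates (u, v) in [0, 1]."""
    h, w = normal_map.shape[:2]
    col = min(int(round(u * (w - 1))), w - 1)
    row = min(int(round((1.0 - v) * (h - 1))), h - 1)  # v = 0 is the bottom row
    return normal_map[row, col]                        # (R, G) color values
```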
S203, determining an offset vector of each pixel point on the face grid image according to the two-dimensional color data.
In this step, since the two-dimensional color data in the normal texture map represents the deformation trend and deformation strength of the corresponding pixel point of the face image, the offset vector of the corresponding pixel point in the face mesh image can be determined from the sampled two-dimensional color data of a pixel point on the normal texture map; by sampling the two-dimensional color data of all pixel points in turn, the offset vector of every pixel point on the face mesh image is determined.
In one possible implementation, the two-dimensional color data in the normal texture map consists of the color values of the R channel and the G channel: the R value encodes the offset of the corresponding pixel point of the face image along the x axis, and the G value encodes its offset along the y axis, with RG = (0.5, 0.5) as the reference value. When RG is (0.5, 0.5), the corresponding pixel point is not offset at all. When the R value is less than 0.5, the pixel coordinates of the corresponding pixel point on the face mesh image are offset to the left; when the R value is greater than 0.5, they are offset to the right. Similarly, when the G value is less than 0.5 the pixel coordinates are offset downward, and when the G value is greater than 0.5 they are offset upward. To achieve a thin-face deformation effect, the normal texture map only needs to set the R value of the two-dimensional color data of the left cheek to be greater than 0.5 and the R value of the two-dimensional color data of the right cheek to be less than 0.5, with the two-dimensional color data of all other positions set to (0.5, 0.5). To achieve a large-eye deformation effect, the normal texture map only needs to set the R value of the two-dimensional color data at the left corners of the two eyes to be less than 0.5 and the R value at the right corners to be greater than 0.5, with the two-dimensional color data of all other positions set to (0.5, 0.5).
In this implementation, since color data in an image cannot take values less than 0, the sampled two-dimensional color data needs to be processed further for convenience of use, so that the offset direction and offset amount can be reflected more intuitively as coordinate data. Optionally, determining the offset vector of each pixel point on the face mesh image according to the two-dimensional color data includes:
converting the two-dimensional color data into coordinate data in a two-dimensional coordinate system, wherein the two-dimensional coordinate system is a plane rectangular coordinate system whose x-axis and y-axis values range over [-1, 1], and determining the offset vector of each pixel point on the face mesh image according to the x-axis and y-axis coordinate values of the coordinate data.
Alternatively, the two-dimensional color data is converted into coordinate data in the two-dimensional coordinate system according to a coordinate reference value of the two-dimensional coordinate system.
Specifically, the two-dimensional color data is converted into coordinate data in the two-dimensional coordinate system according to the formula V_offset = C_offset × 2.0 - vec(1.0, 1.0);
wherein C_offset represents the two-dimensional color data on the normal texture map, V_offset represents the converted coordinate data in the two-dimensional coordinate system, and vec(1.0, 1.0) represents the coordinate reference value of the two-dimensional coordinate system.
In this implementation, the above formula converts the two-dimensional color data into coordinate data in a plane rectangular coordinate system whose x-axis and y-axis values range from -1 to 1. For example, when the sampled C_offset is (0.5, 0.5), V_offset is calculated to be (0, 0), indicating no offset in either the x or the y direction; when the sampled C_offset is (0.4, 0.8), V_offset is (-0.2, 0.6), indicating a shift of 0.2 to the left along the x axis and 0.6 upward along the y axis. Therefore, in this implementation, by obtaining the coordinate data corresponding to each two-dimensional color value in turn, the offset vector of each pixel point on the face mesh image can be determined conveniently and intuitively from the x-axis and y-axis coordinate values.
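A direct Python transcription of this conversion, as a sketch of the formula above:

```python
import numpy as np

def color_to_offset(c_offset):
    """Convert sampled color data C_offset in [0, 1]^2 into an offset
    vector V_offset in [-1, 1]^2 via V_offset = C_offset * 2.0 - (1, 1)."""
    return np.asarray(c_offset, dtype=np.float32) * 2.0 - 1.0

# Examples from the text: color (0.5, 0.5) -> offset (0.0, 0.0), no shift;
# color (0.4, 0.8) -> offset (-0.2, 0.6), i.e. 0.2 left and 0.6 up.
```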
In this embodiment, the mapping relationship between the face mesh image and the normal texture map is established, the two-dimensional color data of each pixel point on the normal texture map is sampled in turn according to that mapping, and the offset vector of each pixel point on the face mesh image is determined from the two-dimensional color data. Establishing the mapping "attaches" the face mesh image to the normal texture map, which improves the accuracy of the determined offset vectors. Converting the two-dimensional color data into signed coordinate data in a two-dimensional coordinate system and determining the offset vectors from that coordinate data simplifies the calculation, reduces GPU consumption, helps keep the image processing fluent, and improves the user experience.
Optionally, obtaining deformed face image data according to the target offset vector of each pixel point on the face mesh image and the pixel coordinates of the corresponding pixel point on the face mesh image, including:
obtaining the deformed face image data according to the formula T_Target = V_pos + V_offset / V_pixSize;
wherein V_pos represents the coordinate data of the corresponding pixel point on the face mesh image, V_pixSize represents the screen resolution, T_Target represents the deformed face image data of the corresponding pixel point, and V_offset / V_pixSize in the formula represents the target offset vector.
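A direct Python transcription of this formula follows, as a sketch; the argument layout is an illustrative assumption.

```python
import numpy as np

def target_coords(v_pos, v_offset, screen_size):
    """T_Target = V_pos + V_offset / V_pixSize: dividing the decoded
    offset by the screen resolution turns it into a resolution-matched
    step, so proportions stay correct on any device."""
    v_pix_size = np.asarray(screen_size, dtype=np.float32)
    return np.asarray(v_pos, dtype=np.float32) + np.asarray(v_offset, dtype=np.float32) / v_pix_size
```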
In one possible implementation manner, in this embodiment, rendering the deformed face image data to obtain a deformed face image includes: rendering the deformed face image data through a two-dimensional texture processing function to obtain a deformed face image.
Specifically, the deformed face image data is rendered according to the formula C_fragColor = Texture2D(T_Camera, T_Target) to obtain the deformed face image;
wherein Texture2D represents a two-dimensional texture processing function, T_Camera represents the original face image data of the corresponding pixel, and C_fragColor represents the color output by the fragment.
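Putting the pieces together, the following self-contained Python sketch emulates this render pass on the CPU: for each output pixel the offset is decoded from the normal map, the sampling coordinate is displaced by the resolution-scaled offset, and the original image is sampled there. Nearest-neighbor lookup stands in for GPU texture filtering, and the per-pixel loop stands in for the fragment shader; this is an illustrative emulation, not the patent's shader code.

```python
import numpy as np

def render_deformed(camera_image, normal_map, screen_size):
    """Emulate C_fragColor = Texture2D(T_Camera, T_Target) per pixel.

    camera_image: (H, W, C) original face image; normal_map: (H, W, 2)
    R/G map aligned with the image; screen_size: (width, height).
    """
    h, w = camera_image.shape[:2]
    sw, sh = screen_size
    out = np.empty_like(camera_image)
    for row in range(h):
        for col in range(w):
            v_offset = normal_map[row, col] * 2.0 - 1.0   # decode color to offset
            u = col / (w - 1) + v_offset[0] / sw          # T_Target = V_pos + V_offset / V_pixSize
            v = row / (h - 1) - v_offset[1] / sh          # G > 0.5 moves the pixel up
            src_col = int(np.clip(round(u * (w - 1)), 0, w - 1))
            src_row = int(np.clip(round(v * (h - 1)), 0, h - 1))
            out[row, col] = camera_image[src_row, src_col]
    return out
```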
Fig. 9 is a schematic flow chart of the third embodiment of the face image processing method according to the present application, for a terminal device that can provide multiple normal texture maps. As shown in Fig. 9, on the basis of the first and second embodiments, the specific implementation of this embodiment is as follows:
On the one hand, the terminal device acquires the face image data, detects the face key points and auxiliary points, constructs the face mesh data from them, and generates the face mesh image.
On the other hand, the terminal device determines the normal texture map to be used.
It will be appreciated that the determination of the normal texture map to be used in this embodiment may be performed before determining the offset vector for each pixel point on the face mesh image based on the normal texture map.
In one possible implementation, the type of application currently used by the user is determined, and from that application type, the normal texture map to be used is determined.
On the basis of the above two aspects, that is, from the generated face mesh image and the normal texture map to be used, the terminal device establishes the mapping relationship between the face mesh image and the normal texture map, determines the offset vector of each pixel point on the face mesh image, converts it into the on-screen offset of each pixel, and finally processes the image to obtain the deformed face image.
The specific implementation of the embodiment of the application is explained as follows:
Different applications may be installed on the terminal device, and different applications may provide face image deformation functions with their own characteristics. Fig. 10 is an operation schematic diagram of determining the normal texture map to be used according to an embodiment of the present application. As shown in Fig. 10, when the user selects an application for face image processing through the relevant operation, for example one of a beauty camera, a beauty show, or virtual makeup, the terminal device identifies the application type currently used by the user and determines the normal texture map to be used according to the identified type. For example, when the terminal device determines from the user's operation that the selected application type is virtual makeup, it determines that the normal texture maps to be used are those related to virtual makeup.
In another possible implementation manner, a face processing instruction input by a user is received, and a normal texture map corresponding to a deformation effect is selected from a plurality of preset normal texture maps according to the face processing instruction, wherein the face processing instruction is used for indicating the deformation effect required by the user.
As an example, Fig. 11 is an operation schematic diagram of a terminal device provided in an embodiment of the present application. After the user enters a specific application through a click operation and follows the corresponding operation guide, as shown in the left diagram of Fig. 11, the terminal device can obtain a source image according to the user's operation (for example, taking a photo or video, or selecting a photo or video from an album); referring to the right diagram of Fig. 11, the terminal device then receives the face image processing instruction input by the user and selects the normal texture map corresponding to the desired deformation effect from a plurality of preset normal texture maps. For different applications, the way the face image processing instruction is obtained may also differ.
As shown in Fig. 12, if the face image processing application used by the user is virtual makeup, a list of makeup products, such as eyeliner, eyebrow pencil, blush and foundation, is displayed on the operation interface. When the user selects one of the makeup products, the terminal device obtains the face processing instruction corresponding to that product; for example, when the user selects the eyeliner, the terminal device receives the face processing instruction corresponding to the large-eye effect.
As shown in Fig. 13, if the face image processing application used by the user is a beauty application, a list of deformation names, such as large eyes, thin face, expansion and enlargement, is displayed on the operation interface. When the user selects one of the deformations, the terminal device obtains the corresponding face processing instruction; for example, when the user selects large eyes, the terminal device receives the face processing instruction corresponding to the large-eye effect.
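Both selection modes can be sketched as a simple lookup; the application names, effect keys and file paths below are illustrative placeholders, not values from the patent.

```python
# Hypothetical preset normal texture maps, keyed by deformation effect
# and by application type.
NORMAL_MAPS_BY_EFFECT = {
    "large_eyes": "presets/large_eyes_normal.png",
    "thin_face": "presets/thin_face_normal.png",
}
NORMAL_MAPS_BY_APP = {
    "virtual_makeup": "presets/virtual_makeup_normal.png",
    "beauty_camera": "presets/thin_face_normal.png",
}

def pick_normal_map(instruction=None, app_type=None):
    """Choose the normal texture map from an explicit face processing
    instruction if one was input, otherwise from the application type."""
    if instruction is not None:
        return NORMAL_MAPS_BY_EFFECT[instruction]
    return NORMAL_MAPS_BY_APP[app_type]
```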
Fig. 12 and Fig. 13 are only intended to illustrate the implementation of this embodiment and do not limit it; details are not repeated here.
In this embodiment, the normal texture map to be used is determined before the offset vector of each pixel point on the face mesh image is determined according to it; the offset vectors are then obtained through the normal texture map, and the deformed face image is finally obtained by processing. In this way, different face image processing can be performed according to the scenario and the user's needs to obtain face images with different deformation effects, which broadens the application scenarios of the face image processing method, meets diverse user needs, and improves the user experience.
Fig. 14 is a schematic structural diagram of an embodiment of a face image processing apparatus according to an embodiment of the present application, as shown in fig. 14, a face image processing apparatus 10 in this embodiment includes:
An acquisition module 11 and a processing module 12.
The acquiring module 11 is configured to acquire original face image data;
A processing module 12, configured to generate a face mesh image according to the original face image data; determine an offset vector of each pixel point on the face mesh image according to a normal texture map, wherein the normal texture map represents the deformation trend and deformation strength of the face image through two-dimensional color data; perform offset processing on each pixel coordinate on the face mesh image according to the offset vector of each pixel point on the face mesh image to obtain deformed face image data; and render the deformed face image data to obtain a deformed face image.
Optionally, the processing module 12 is specifically configured to:
establishing a mapping relation between the face grid image and the normal texture map;
sequentially sampling the two-dimensional color data of each pixel point on the normal texture map according to the mapping relation;
and determining an offset vector of each pixel point on the face grid image according to the two-dimensional color data.
Optionally, the processing module 12 is specifically configured to:
Determining texture coordinates of the face grid image;
Establishing a coordinate mapping relation between texture coordinates of the face grid image and texture coordinates of the normal texture map;
And establishing a pixel mapping relation between the pixel points of the face grid image and the pixel points of the normal texture mapping according to the coordinate mapping relation.
Optionally, the processing module 12 is specifically configured to:
And sequentially sampling the two-dimensional color data of each pixel point on the normal texture map according to a preset sampling sequence according to the coordinate mapping relation between the texture coordinates of the face grid image and the texture coordinates of the normal texture map.
Optionally, the processing module 12 is specifically configured to:
converting the two-dimensional color data into coordinate data in a two-dimensional coordinate system, wherein the two-dimensional coordinate system is a plane rectangular coordinate system whose x-axis and y-axis values range over [-1, 1];
And determining an offset vector of each pixel point on the face grid image according to the x-axis coordinate value and the y-axis coordinate value of the coordinate data.
Optionally, the processing module 12 is specifically configured to:
And converting the two-dimensional color data into coordinate data in the two-dimensional coordinate system according to the coordinate reference value of the two-dimensional coordinate system.
Optionally, the processing module 12 is specifically configured to:
Converting the offset vector of each pixel point on the face grid image into a target offset vector corresponding to the screen resolution of the terminal equipment;
And obtaining deformed face image data according to the target offset vector of each pixel point on the face grid image and the pixel coordinates of the corresponding pixel point on the face grid image.
Optionally, the processing module 12 is specifically configured to:
and rendering the deformed face image data through a two-dimensional texture processing function to obtain a deformed face image.
Optionally, the processing module 12 is further configured to:
determine the normal texture map to be used.
Optionally, the processing module 12 is specifically configured to:
Selecting a normal texture map corresponding to the deformation effect from a plurality of preset normal texture maps according to a face processing instruction input by a user, wherein the face processing instruction is used for indicating the deformation effect required by the user;
or
And determining the type of the application currently used by the user, and determining the normal texture mapping required to be used according to the type of the application.
Optionally, the processing module 12 is specifically configured to:
according to a face key point detection algorithm, determining face key points from original face image data;
Determining auxiliary sites according to the distribution of the key points of the face;
Constructing triangle mesh data by using nearest face key points and auxiliary sites according to a triangle stabilization principle;
Rendering the original face image data to obtain a base map, and marking the base map through triangle mesh data to obtain a face mesh image.
Optionally, the obtaining module 11 is specifically configured to:
acquiring original face image data acquired by a camera in real time;
or
Raw face image data stored in an image application is acquired.
Optionally, the two-dimensional color data in the normal texture map is composed of color values of the R channel and color values of the G channel.
The implementation principle and technical effect of the present embodiment are similar to those of the method embodiment, and specific reference may be made to the method embodiment, which is not described herein.
It should be noted that the division of the modules of the above apparatus is merely a division of logical functions; in actual implementation, they may be fully or partially integrated into one physical entity or be physically separate. These modules may all be implemented in the form of software invoked by a processing element, or all in hardware; or some modules may be implemented in the form of software invoked by a processing element and the others in hardware. In addition, all or part of the modules may be integrated together or implemented independently. The processing element here may be an integrated circuit with signal processing capability. In implementation, each step of the above method, or each of the above modules, may be completed by an integrated logic circuit in the hardware of the processor element or by instructions in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above methods, such as one or more application-specific integrated circuits (ASIC), one or more microprocessors (digital signal processors, DSP), or one or more field-programmable gate arrays (FPGA), etc. For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor that can invoke program code. For another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SOC).
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave, etc.). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center containing an integration of one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a solid-state disk (SSD)), etc.
Fig. 15 is a schematic structural diagram of an embodiment of an electronic device according to an embodiment of the present application. As shown in Fig. 15, the electronic device 20 of this embodiment may include: a processor 21, a memory 22, a communication interface 23 and a system bus 24. The memory 22 and the communication interface 23 are connected to the processor 21 through the system bus 24 to complete communication among one another; the memory 22 is used to store computer-executable instructions, the communication interface 23 is used to communicate with other devices, and the processor 21, when executing the computer program, implements the scheme of any of the above method embodiments.
In fig. 15, the processor 21 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The memory 22 may include random access memory (RAM) and read-only memory (ROM), and may also include non-volatile memory, such as at least one disk memory.
The communication interface 23 is used to enable communication between the electronic device and other devices, such as clients, read-write libraries, and read-only libraries.
The system bus 24 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The system bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, the bus is drawn in the figure as a single bold line, but this does not mean that there is only one bus or only one type of bus.
Optionally, an embodiment of the present application further provides a computer-readable storage medium storing computer instructions that, when run on a computer, cause the computer to perform the method of any of the foregoing method embodiments.
Optionally, an embodiment of the present application further provides a chip for executing instructions, the chip being configured to perform the method of any of the foregoing method embodiments.
An embodiment of the present application further provides a program product comprising a computer program stored in a computer-readable storage medium. At least one processor can read the computer program from the computer-readable storage medium, and when the at least one processor executes the computer program, the method of any of the foregoing method embodiments is implemented.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solution of the present application, not to limit it. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents, and that such modifications and substitutions do not depart from the spirit of the application.
Claims (14)
1. A face image processing method, comprising:
acquiring original face image data;
generating a face grid image according to the original face image data;
determining an offset vector of each pixel point on the face grid image according to a normal texture map, wherein the normal texture map represents a deformation trend and deformation strength of the face image by two-dimensional color data;
performing offset processing on each pixel coordinate on the face grid image according to the offset vector of each pixel point on the face grid image to obtain deformed face image data;
rendering the deformed face image data to obtain a deformed face image;
wherein the determining an offset vector of each pixel point on the face grid image according to the normal texture map comprises:
establishing a mapping relation between the face grid image and the normal texture map;
sequentially sampling the two-dimensional color data of each pixel point on the normal texture map according to the mapping relation;
determining an offset vector of each pixel point on the face grid image according to the two-dimensional color data;
wherein the performing offset processing on each pixel coordinate on the face grid image according to the offset vector of each pixel point on the face grid image to obtain deformed face image data comprises:
transforming the offset vector of each pixel point on the face grid image into a target offset vector corresponding to the screen resolution of the terminal equipment;
and obtaining the deformed face image data according to the target offset vector of each pixel point on the face grid image and the pixel coordinates of the corresponding pixel point on the face grid image.
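Read as an algorithm, claim 1 describes a warp driven by a color-coded displacement field. The following numpy sketch illustrates the pipeline under simplifying assumptions: the face grid image and the normal texture map share normalized texture coordinates, sampling is nearest-neighbour, a gather-style inverse warp stands in for forward offsetting of pixel coordinates, and a single `strength` factor stands in for the conversion to a target offset vector at the terminal screen resolution. All function and parameter names are illustrative, not taken from the patent.

```python
import numpy as np

def deform_face_image(face_img, normal_tex, strength=0.05):
    """Sketch of the claim-1 pipeline: sample an RG-encoded normal
    texture map, decode each texel into an offset vector, and shift
    the corresponding pixel of the face image accordingly."""
    H, W = face_img.shape[:2]

    # 1. Mapping relation: identical normalized texture coordinates
    #    for the face image and the normal texture map (assumption).
    ys, xs = np.mgrid[0:H, 0:W]
    u, v = xs / (W - 1), ys / (H - 1)

    # 2. Sample the two-dimensional color data (nearest-neighbour).
    th, tw = normal_tex.shape[:2]
    texel = normal_tex[(v * (th - 1)).astype(int),
                       (u * (tw - 1)).astype(int)]
    r, g = texel[..., 0] / 255.0, texel[..., 1] / 255.0

    # 3. Decode color into an offset vector on [-1, 1] per axis.
    dx = (2.0 * r - 1.0) * strength
    dy = (2.0 * g - 1.0) * strength

    # 4. Offset each pixel coordinate and resample (inverse warp),
    #    clamping lookups to the image border.
    src_x = np.clip(xs + dx * W, 0, W - 1).astype(int)
    src_y = np.clip(ys + dy * H, 0, H - 1).astype(int)
    return face_img[src_y, src_x]
```

Under this encoding, a texel with R and G near 128 decodes to a near-zero offset, so regions of the normal texture map painted mid-gray leave the face untouched.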
2. The method of claim 1, wherein the establishing a mapping relation between the face grid image and the normal texture map comprises:
determining texture coordinates of the face grid image;
establishing a coordinate mapping relation between texture coordinates of the face grid image and texture coordinates of the normal texture map;
and establishing a pixel mapping relation between the pixel points of the face grid image and the pixel points of the normal texture map according to the coordinate mapping relation.
3. The method according to claim 2, wherein sequentially sampling the two-dimensional color data of each pixel point on the normal texture map according to the mapping relation comprises:
sequentially sampling, in a preset sampling order, the two-dimensional color data of each pixel point on the normal texture map according to the coordinate mapping relation between the texture coordinates of the face grid image and the texture coordinates of the normal texture map.
4. The method of claim 1, wherein the determining an offset vector of each pixel point on the face grid image according to the two-dimensional color data comprises:
converting the two-dimensional color data into coordinate data in a two-dimensional coordinate system, wherein the two-dimensional coordinate system is a planar rectangular coordinate system whose x-axis and y-axis values lie in [-1, 1];
and determining an offset vector of each pixel point on the face grid image according to the x-axis coordinate value and the y-axis coordinate value of the coordinate data.
5. The method of claim 4, wherein the converting the two-dimensional color data into coordinate data in a two-dimensional coordinate system comprises:
and converting the two-dimensional color data into coordinate data in the two-dimensional coordinate system according to the coordinate reference value of the two-dimensional coordinate system.
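Claims 4 and 5 amount to a fixed affine remapping of channel values onto [-1, 1]. As one plausible instantiation (an assumption, since the claims do not fix the bit depth): for 8-bit channels with a coordinate reference value of 0.5, so that R and G values near 128 encode a zero offset, the conversion reads

$$x = 2\cdot\frac{R}{255} - 1, \qquad y = 2\cdot\frac{G}{255} - 1, \qquad x, y \in [-1, 1].$$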
6. The method according to claim 1, wherein the rendering the deformed face image data to obtain a deformed face image comprises:
and rendering the deformed face image data through a two-dimensional texture processing function to obtain a deformed face image.
7. The method according to any one of claims 1-5, wherein before the determining an offset vector of each pixel point on the face grid image according to the normal texture map, the method further comprises:
determining the normal texture map to be used.
8. The method of claim 7, wherein determining the normal texture map to be used comprises:
selecting a normal texture map corresponding to a deformation effect from a plurality of preset normal texture maps according to a face processing instruction input by a user, wherein the face processing instruction is used for indicating the deformation effect required by the user;
or
determining the type of the application currently used by the user, and determining the normal texture map to be used according to the type of the application.
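Claim 8 leaves the selection mechanism open; a hypothetical dispatch along its two branches might look as follows (all preset names and application types are invented for illustration):

```python
# Preset normal texture maps keyed by deformation effect (illustrative).
PRESET_NORMAL_MAPS = {
    "slim_face": "normal_slim.png",
    "big_eyes": "normal_eyes.png",
    "funny_mirror": "normal_funny.png",
}

# Per-application defaults for the second branch of claim 8 (illustrative).
APP_DEFAULTS = {
    "beauty_camera": "normal_slim.png",
    "entertainment": "normal_funny.png",
}

def choose_normal_map(user_effect=None, app_type=None):
    """Pick a preset normal texture map from an explicit user
    instruction, falling back to the type of the current application."""
    if user_effect is not None:
        return PRESET_NORMAL_MAPS[user_effect]
    return APP_DEFAULTS.get(app_type, "normal_slim.png")
```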
9. The method according to any one of claims 1-5, wherein the generating a face grid image according to the original face image data comprises:
according to a face key point detection algorithm, determining face key points from the original face image data;
determining auxiliary points according to the distribution of the face key points;
constructing triangle mesh data from the nearest face key points and auxiliary points according to a triangle stabilization principle;
rendering the original face image data to obtain a base map, and marking the base map with the triangle mesh data to obtain the face grid image.
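A hedged sketch of claim 9's mesh construction, using Delaunay triangulation from scipy as a stand-in for the claimed "triangle stabilization principle" and image corners plus edge midpoints as the auxiliary points (both choices are assumptions, not taken from the patent):

```python
import numpy as np
from scipy.spatial import Delaunay

def build_face_mesh(keypoints, image_size):
    """Combine detected face key points with auxiliary points and
    triangulate them into triangle mesh data.

    keypoints:  Nx2 array of face key points as (x, y)
    image_size: (width, height) of the rendered base map
    """
    w, h = image_size
    # Auxiliary points keep the border triangles well-shaped.
    aux = np.array([
        [0, 0],     [w / 2, 0],     [w - 1, 0],
        [0, h / 2],                 [w - 1, h / 2],
        [0, h - 1], [w / 2, h - 1], [w - 1, h - 1],
    ])
    points = np.vstack([keypoints, aux])
    tri = Delaunay(points)          # triangle mesh over all points
    return points, tri.simplices    # vertices and triangle index triples
```

The returned triangle indices can then be drawn over the rendered base map to obtain the face grid image.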
10. The method according to any one of claims 1-5, wherein the acquiring original face image data comprises:
acquiring original face image data acquired by a camera in real time;
or
acquiring original face image data stored in an image application.
11. The method of any of claims 1-5, wherein the two-dimensional color data in the normal texture map consists of color values of the R channel and color values of the G channel.
12. A face image processing apparatus, comprising:
The acquisition module is used for acquiring the original face image data;
The processing module is used for generating a face grid image according to the original face image data; determining an offset vector of each pixel point on the face grid image according to a normal texture map, wherein the normal texture map represents a deformation trend and deformation strength of the face image by two-dimensional color data; performing offset processing on each pixel coordinate on the face grid image according to the offset vector of each pixel point on the face grid image to obtain deformed face image data; and rendering the deformed face image data to obtain a deformed face image;
The processing module is further used for establishing a mapping relation between the face grid image and the normal texture map; sequentially sampling the two-dimensional color data of each pixel point on the normal texture map according to the mapping relation; and determining an offset vector of each pixel point on the face grid image according to the two-dimensional color data;
The processing module is further used for converting the offset vector of each pixel point on the face grid image into a target offset vector corresponding to the screen resolution of the terminal equipment; and obtaining the deformed face image data according to the target offset vector of each pixel point on the face grid image and the pixel coordinates of the corresponding pixel point on the face grid image.
13. A storage medium storing a computer program for implementing the face image processing method according to any one of claims 1 to 11.
14. An electronic device, comprising: a memory and a processor; the memory is configured to store a computer program, and the processor executes the computer program to implement the face image processing method of any one of claims 1 to 11.
Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202010681269.6A (CN112241933B) | 2020-07-15 | 2020-07-15 | Face image processing method and device, storage medium and electronic equipment
PCT/CN2021/083651 (WO2022012085A1) | 2020-07-15 | 2021-03-29 | Face image processing method and apparatus, storage medium, and electronic device
Publications (2)

Publication Number | Publication Date
---|---
CN112241933A | 2021-01-19
CN112241933B | 2024-07-19
Family

ID=74170739

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202010681269.6A (CN112241933B, Active) | Face image processing method and device, storage medium and electronic equipment | 2020-07-15 | 2020-07-15
Country Status (2)

Country | Link
---|---
CN | CN112241933B
WO | WO2022012085A1
Patent Citations (2)

Publication number | Priority date | Publication date | Title
---|---|---|---
CN109035388A | 2018-06-28 | 2018-12-18 | Three-dimensional face model reconstruction method and device
CN111292423A | 2018-12-07 | 2020-06-16 | Coloring method and device based on augmented reality, electronic equipment and storage medium
Also Published As

Publication number | Publication date
---|---
CN112241933A | 2021-01-19
WO2022012085A1 | 2022-01-20
Legal Events

Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant