
CN108447043A - Image synthesis method, device and computer-readable medium

Image synthesis method, device and computer-readable medium

Info

Publication number
CN108447043A
Authority
CN
China
Prior art keywords
three-dimensional model
target
character
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810296289.4A
Other languages
Chinese (zh)
Other versions
CN108447043B (en)
Inventor
程培
傅斌
曾毅榕
沈珂轶
赵艳丹
罗爽
覃华峥
钱梦仁
周景锦
黎静波
李晓懿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201810296289.4A priority Critical patent/CN108447043B/en
Publication of CN108447043A publication Critical patent/CN108447043A/en
Application granted granted Critical
Publication of CN108447043B publication Critical patent/CN108447043B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 15/00: 3D [three-dimensional] image rendering
    • G06T 15/005: General purpose rendering architectures
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20212: Image combination
    • G06T 2207/20221: Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present application provide an image synthesis method, device and computer-readable medium. The image synthesis method comprises: performing feature recognition on a character image and establishing a character three-dimensional model according to the feature recognition result; obtaining a target material and the material type corresponding to the target material, and determining the material three-dimensional model corresponding to the target material and the fitting position on the character three-dimensional model corresponding to that material type; superimposing the material three-dimensional model onto the fitting position of the character three-dimensional model to obtain a target three-dimensional model, and rendering the target three-dimensional model to obtain a display result. Implementing the embodiments of the present application enables superposition on a character three-dimensional model and improves the fidelity of the character-image superposition effect.

Description

Image synthesis method, equipment and computer readable medium
Technical Field
The present application relates to the field of image processing, and in particular, to an image synthesis method, an image synthesis apparatus, and a computer-readable medium.
Background
With the continuous development of image technology and the emergence of various new image applications, users place ever higher demands on camera functions and often need to superimpose content on images of people. For example, a user may want to add the latest hat, glasses or other accessory elements to a captured picture to see whether the accessories suit them.
Many current camera applications can already superimpose two-dimensional content on character images, for example through a two-dimensional sticker function: a user can superimpose a two-dimensional decoration sticker on the captured picture so that it resembles an ornament such as a hat, mask or glasses, achieving a certain decorative effect.
However, the fidelity of such two-dimensional superposition is not high: a two-dimensional sticker only decorates the flat image and cannot simulate the real try-on effect of ornaments such as hats, masks and glasses. For example, when the user turns their head, a two-dimensional hat sticker immediately gives the illusion away. The fidelity of character-image superposition therefore needs to be improved.
Disclosure of Invention
The embodiment of the application provides an image synthesis method, image synthesis equipment and a computer readable medium, which can realize superposition of a three-dimensional model of a person and improve the fidelity of the superposition effect of the person image.
In a first aspect, an embodiment of the present application provides an image synthesis method, including:
performing feature recognition on a character image, and establishing a character three-dimensional model according to the feature recognition result;
obtaining a target material and the material type corresponding to the target material, and determining the material three-dimensional model corresponding to the target material and the fitting position on the character three-dimensional model corresponding to the material type;
and superimposing the material three-dimensional model onto the fitting position of the character three-dimensional model to obtain a target three-dimensional model, and rendering the target three-dimensional model to obtain a display result.
In a second aspect, embodiments of the present application provide an image synthesis apparatus comprising means for performing the method of the first aspect.
In a third aspect, an embodiment of the present application provides another image synthesis apparatus, including a processor, an input apparatus, an output apparatus, and a memory that are connected to one another, where the memory is used to store a computer program supporting the image synthesis apparatus in executing the method, the computer program includes program instructions, and the processor is configured to call the program instructions to execute the method of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the method of the first aspect.
The embodiment of the application has the following beneficial effects:
in the embodiment of the application, the material three-dimensional model can be superimposed on the character three-dimensional model to obtain the target three-dimensional model, for example, the three-dimensional model of the glasses is superimposed on the three-dimensional model of the character to obtain the character three-dimensional model with the glasses, and the target three-dimensional model is rendered to obtain the display result. The method and the device can construct different character three-dimensional models and material three-dimensional models, and superimpose the material three-dimensional models on the positions corresponding to the character three-dimensional models, so that the fidelity of character image superimposition effects is improved compared with a two-dimensional character image superimposition technology.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the description of the embodiments will be briefly introduced below.
Fig. 1A is a schematic flow chart of an image synthesis method provided in an embodiment of the present application;
FIG. 1B is a schematic diagram illustrating extraction of facial feature points according to an embodiment of the present disclosure;
fig. 1C is a schematic diagram of a three-dimensional mesh of a human face according to an embodiment of the present application;
FIG. 1D is a diagram illustrating a result of an image overlay provided by an embodiment of the present application;
FIG. 2A is a schematic flow chart diagram of another image synthesis method provided in the embodiments of the present application;
fig. 2B is a schematic flowchart of an image synthesis method for an accessory material and a human face according to an embodiment of the present application;
fig. 2C is a schematic view of a tree structure of an accessory material provided in an embodiment of the present application;
fig. 3 is a schematic block diagram of an image synthesizing apparatus provided in an embodiment of the present application;
fig. 4 is a schematic block diagram of another image synthesizing apparatus provided in an embodiment of the present application;
fig. 5 is a schematic block diagram of an image synthesizing apparatus according to another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order.
Referring to fig. 1A, fig. 1A is a schematic flowchart of an image synthesis method provided in an embodiment of the present application, where the image synthesis method includes:
101. Perform feature recognition on the character image, and establish a character three-dimensional model according to the feature recognition result.
The execution subject of this embodiment is a device that performs image synthesis, which may be a terminal device or a server. If the method is applied to a terminal device, it can be used in scenarios such as self-shooting and video calls; if the method is applied to a server, it can be used in any scenario in which the character image needs to be sent through the server, such as video calls and live video streaming.
In this embodiment, the person image may be obtained by taking a picture with a camera, or may be obtained by downloading or obtaining the person image from a local database.
In this embodiment, images of persons at different angles may be acquired, feature point recognition may be performed on the images, and a three-dimensional model of the person may be constructed according to the feature points.
In this embodiment, the above "feature recognition on a person image" may be understood as extracting feature points from a two-dimensional person image of a person, for example, extracting face feature points from the two-dimensional person image by a face feature point positioning technology (Facial Landmark Localization).
In this embodiment, the "building a three-dimensional model of a person according to a feature recognition result" may be understood as obtaining three-dimensional feature points of a face according to two-dimensional feature points of the face, and then building a corresponding three-dimensional model of a head.
In this embodiment, feature recognition is performed on the character image to obtain facial-feature points. An open Software Development Kit (SDK) in the related art can be used to obtain a plurality of feature points on a human face, including feature points at positions such as the eyebrows, eyes, nose, mouth, and face contour. As shown in fig. 1B, fig. 1B is a schematic diagram of extracting facial feature points provided by the present application, in which the small dots on the face represent the facial feature points.
In this embodiment, after the face feature points are obtained, additional face three-dimensional feature points may be fitted from them using a correlation algorithm, and a face three-dimensional mesh is constructed to obtain the face three-dimensional model, as shown in fig. 1C; fig. 1C is a schematic diagram of a face three-dimensional mesh provided in an embodiment of the present application.
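As a concrete illustration of the two-dimensional landmark step, the following is a minimal sketch assuming dlib's 68-point shape predictor as the SDK; the library choice and the model file name are assumptions for the sketch, since this embodiment does not name a specific SDK.

```python
# Minimal sketch of 2D facial feature-point extraction, assuming dlib's
# 68-point shape predictor (an assumption; the embodiment names no SDK).
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model file

def extract_face_landmarks(image_path):
    """Return (x, y) feature points covering eyebrows, eyes, nose, mouth, contour."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        return []
    shape = predictor(gray, faces[0])
    return [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
```

A fitting algorithm would then lift these two-dimensional points to three dimensions and triangulate them into the face mesh of fig. 1C.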
102. Obtain a target material and the material type corresponding to the target material, and determine the material three-dimensional model corresponding to the target material and the fitting position on the character three-dimensional model corresponding to the material type.
In this embodiment, the method of obtaining the target material may be directly downloading or obtaining data of the material from a local database, for example, downloading data of accessories such as a hat, a mask, glasses, etc.; and determining and constructing a three-dimensional model of the target material after the target material is obtained.
In this embodiment, the material type corresponds to the obtained target material; for example, if the target material is a straw hat, the corresponding material type is headwear or hat, and if the target material is an earring, the corresponding material type is earring. The fitting position of the material type on the character three-dimensional model can be the position of the facial feature points corresponding to that material type; for example, if the accessory material is a hat, the corresponding feature points are those of the head, and if the accessory material is glasses, the corresponding feature points are those of the eyes.
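A minimal sketch of this type-to-position lookup follows; the type names and feature-point labels are illustrative assumptions, since the embodiment only requires that each material type map to the feature points of the corresponding facial part.

```python
# Illustrative mapping from material type to the facial feature group it
# attaches to; all labels below are assumptions for the sketch.
MATERIAL_TYPE_TO_FEATURE = {
    "hat": "head_top",
    "glasses": "eyes",
    "earring": "earlobe",
    "mask": "mouth_and_nose",
}

def fitting_position(material_type, feature_points):
    """Return the 3D anchor on the character model for the given material type."""
    key = MATERIAL_TYPE_TO_FEATURE.get(material_type)
    if key is None:
        raise ValueError("no fitting rule for material type: " + material_type)
    return feature_points[key]
```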
103. Superimpose the material three-dimensional model onto the fitting position of the character three-dimensional model to obtain a target three-dimensional model, and render the target three-dimensional model to obtain a display result.
In this embodiment, the material three-dimensional model is placed at the corresponding fitting position in the face three-dimensional model, completing the superposition. As shown in fig. 1D, fig. 1D is a schematic diagram of an image superposition result according to an embodiment of the present application, in which the three-dimensional models of two accessory materials, glasses and a hat, are superimposed on the face three-dimensional model.
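The sketch below shows the superposition under one simple anchoring convention (the material mesh's centroid is moved onto the fitting position); the convention is an assumption, not something the embodiment prescribes.

```python
import numpy as np

def superimpose(material_vertices, fitting_position):
    """Translate the material mesh so its anchor lands on the fitting position."""
    anchor = material_vertices.mean(axis=0)          # assumed anchor: mesh centroid
    offset = np.asarray(fitting_position) - anchor
    return material_vertices + offset                # becomes part of the target 3D model
```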
In this embodiment, rendering the target three-dimensional model may include rendering the target three-dimensional model according to a three-dimensional scene, illumination, music, a filter, and other materials to obtain a display result, so that a final display result has a more vivid three-dimensional stereoscopic impression and a stronger sense of reality.
It can be understood that, in the embodiment of the present application, the material three-dimensional model may be superimposed on the person three-dimensional model to obtain the target three-dimensional model, for example, the three-dimensional model of the glasses may be superimposed on the three-dimensional model of the person to obtain the three-dimensional model of the person wearing the glasses, and the target three-dimensional model may be rendered to obtain the display result. The method and the device can construct different character three-dimensional models and material three-dimensional models, and superimpose the material three-dimensional models on the positions corresponding to the character three-dimensional models, so that the fidelity of character image superimposition effects is improved compared with a two-dimensional character image superimposition technology.
As an optional implementation manner, the obtaining the target material and the material type corresponding to the target material, and determining a material three-dimensional model corresponding to the target material includes: downloading a material picture as a target material, identifying the material picture, determining the material type of the material in the material picture, and constructing a three-dimensional model of the material contained in the material picture as the three-dimensional model of the material.
In this embodiment, the material pictures may include pictures of the latest accessory materials such as hats, glasses and earrings, so that a user can download a picture of any new accessory from the Internet, construct the material three-dimensional model, and experience the wearing effect of the latest accessory without leaving home.
As an optional implementation, in order to superimpose the material three-dimensional model on the character three-dimensional model with a better fit, determining the material three-dimensional model corresponding to the target material includes: obtaining a generated three-dimensional model of the target material from a material library, and adjusting the generated three-dimensional model to a three-dimensional model whose size matches the character three-dimensional model, to serve as the material three-dimensional model.
In this embodiment, the material library may be a local database containing various materials for which three-dimensional models have already been generated, such as three-dimensional models of glasses, earrings, and the like. Adjusting the generated three-dimensional model to a size matching the character three-dimensional model may mean, for example, adjusting the hat three-dimensional model according to the head size of the character three-dimensional model, so that the hat three-dimensional model can be worn on the head of the character three-dimensional model.
It will be appreciated that resizing the material three-dimensional model to a size that matches the character three-dimensional model allows the material three-dimensional model to be superimposed on the character three-dimensional model with a more matching fit.
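One way to realize this size matching is a uniform scale chosen from bounding-box extents, as in the sketch below; the width-matching heuristic is an assumption, since the embodiment only states that the sizes are matched.

```python
import numpy as np

def fit_material_to_character(material_vertices, head_vertices):
    """Uniformly scale the material mesh so its width matches the head width."""
    material_width = np.ptp(material_vertices[:, 0])  # extent along the x axis
    head_width = np.ptp(head_vertices[:, 0])
    return material_vertices * (head_width / material_width)
```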
As an optional implementation manner, in order to enhance the three-dimensional stereoscopic effect of the target three-dimensional model, the rendering the target three-dimensional model to obtain a display result includes: and rendering the target three-dimensional model according to at least one of a three-dimensional scene, illumination, music and a filter to obtain a display result.
It can be understood that, in the embodiment, the target three-dimensional model is rendered by using three-dimensional scenes, illumination, music, filters and other manners, for example, a light source is added to the three-dimensional scenes, and the reflection type of the surface of each three-dimensional model is defined, so that the stereoscopic impression of the target three-dimensional model is stronger, and the superposition effect of the three-dimensional models is more vivid.
Specific rendering operations may include, but are not limited to: obtaining a scene type corresponding to a three-dimensional scene or music, obtaining a color and a texture matched with the scene type, and rendering the target three-dimensional model by using the obtained color and texture to obtain a display result; or obtaining a shadow type and/or a light attenuation parameter corresponding to the illumination, and rendering the three-dimensional model by the shadow type and/or the light attenuation parameter to obtain a display result; or the filter comprises at least one of a cloud pattern, a refraction pattern and a simulation light reflection, and the at least one of the cloud pattern, the refraction pattern and the simulation light reflection is superposed on the target three-dimensional model to obtain a display result.
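The sketch below shows one way these three alternative rendering inputs could be folded into a single configuration before the render pass; the style table and parameter names are illustrative assumptions.

```python
# Illustrative style table: scene or music type -> matched color and texture.
SCENE_STYLES = {
    "beach": {"color": (0.9, 0.85, 0.6), "texture": "sand"},
    "night": {"color": (0.1, 0.1, 0.3), "texture": "matte"},
}

def build_render_config(scene_type=None, lighting=None, filters=()):
    """Assemble a render configuration from whichever optional inputs are given."""
    config = {}
    if scene_type in SCENE_STYLES:
        config.update(SCENE_STYLES[scene_type])              # scene/music-matched color, texture
    if lighting is not None:
        config["shadow_type"] = lighting.get("shadow_type")  # shadow type for the light source
        config["attenuation"] = lighting.get("attenuation")  # light attenuation parameter
    if filters:
        config["overlays"] = list(filters)  # e.g. cloud pattern, refraction, simulated reflection
    return config
```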
As an optional implementation manner, in order to adjust the target three-dimensional model, before the rendering the target three-dimensional model to obtain a display result, the method further includes: and calculating the target three-dimensional model after at least one of rotation, displacement and scaling according to the calculation method of the Euler angle.
In this embodiment, the Euler angles are a set of three independent angular parameters that uniquely determine the orientation of an object rotating about a fixed point, consisting of the nutation angle θ, the precession angle ψ, and the rotation angle φ. After the two three-dimensional models are superimposed, a user may want to view the target three-dimensional model from different angles, or view the superposition of the material three-dimensional model and the character three-dimensional model at different angles and sizes, so the nodes of the target three-dimensional model often need to be adjusted through rotation, displacement, scaling, and the like. The adjusted target three-dimensional model is computed according to the Euler-angle calculation method, so that its data can be obtained in time.
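A minimal sketch of this adjustment using SciPy's rotation utilities follows; the intrinsic Z-X-Z sequence matches the precession/nutation/rotation convention named above, while the function name and the use of degrees are assumptions for the sketch.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def adjust_model(vertices, precession, nutation, rotation,
                 translation=(0.0, 0.0, 0.0), scale=1.0):
    """Rotate (intrinsic Z-X-Z Euler angles, in degrees), scale, then translate."""
    r = Rotation.from_euler("ZXZ", [precession, nutation, rotation], degrees=True)
    return scale * r.apply(vertices) + np.asarray(translation)
```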
Referring to fig. 2A, fig. 2A is a schematic flowchart of another image synthesis method provided in an embodiment of the present application, where the image synthesis method includes:
201. Perform feature recognition on the character image, and establish a character three-dimensional model according to the feature recognition result.
This step is the same as the embodiment of step 101 in the image synthesis method shown in fig. 1A, and is not described herein again.
202. Determine material nodes in the character three-dimensional model, where each material node corresponds to one material type.
In this embodiment, a human body part included in the three-dimensional model of the person may be determined, and if the human body part corresponds to a material type, the human body part is determined to be a material node in the three-dimensional model.
For example, feature recognition may be performed on the character image in step 201 above, that is, the positions of the facial features and other parts of the human face may be recognized, and the material nodes of the character three-dimensional model may be determined according to the feature recognition results. For example, the top of the head may correspond to the material node of a hat, the eye sockets to the material node of glasses, the earlobes to the material node of earrings, and so on.
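A minimal sketch of this node determination follows; the body-part labels and the part-to-type table are illustrative assumptions.

```python
# Illustrative table: body part -> material type it can carry. A recognized
# part that appears here becomes a material node of the character 3D model.
PART_TO_MATERIAL_TYPE = {
    "head_top": "hat",
    "eye_socket": "glasses",
    "earlobe": "earring",
}

def material_nodes(recognized_parts):
    """Return {body part: material type} for parts that correspond to a type."""
    return {part: PART_TO_MATERIAL_TYPE[part]
            for part in recognized_parts
            if part in PART_TO_MATERIAL_TYPE}
```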
203. Obtain a target material and the material type corresponding to the target material, determine the material three-dimensional model corresponding to the target material, and determine the material node corresponding to the material type as the fitting position on the character three-dimensional model.
In this embodiment, the manner of obtaining the target material may be directly downloading or obtaining data of the material from a local database, for example, downloading data of accessories such as hats, masks, glasses, and the like.
In this embodiment, determining the material node corresponding to the material type as the fitting position of the character three-dimensional model means, for example, that for a hat-type material the material node at the head-top position is determined as the fitting position, and for an earring-type material the material node at the earlobe position is determined as the fitting position.
204. Superimpose the material three-dimensional model onto the fitting position of the character three-dimensional model to obtain a target three-dimensional model, and render the target three-dimensional model to obtain a display result.
In this embodiment, the material three-dimensional model is superimposed on the fitting position of the character three-dimensional model, so as to complete the superimposing effect. For example, the three-dimensional model of the hat is superposed on the head top position of the three-dimensional model of the head, and the three-dimensional model of the head with the hat is obtained and used as a target three-dimensional model.
In this embodiment, rendering the target three-dimensional model may include rendering the target three-dimensional model according to a three-dimensional scene, illumination, music, a filter, and other materials to obtain a display result, so that a final display result has a more vivid three-dimensional stereoscopic impression and a stronger sense of reality.
In this embodiment, as shown in fig. 2B, fig. 2B is a schematic flowchart of an image synthesis method for an accessory material and a human face according to an embodiment of the present application, showing the process of superimposing an accessory three-dimensional model on a face three-dimensional model. First, a face image is captured by a camera; two-dimensional face feature points are then extracted, additional three-dimensional face feature points are fitted from the two-dimensional ones, and a face three-dimensional mesh is constructed from the three-dimensional feature points to obtain the face three-dimensional model. An accessory material is downloaded, which includes obtaining the three-dimensional model data of the accessory material, and the corresponding material node is taken as the fitting position according to the accessory material type. Finally, the material three-dimensional model is superimposed onto the corresponding fitting position of the face three-dimensional model, and the result is output.
It can be understood that, in the embodiment of the present application, the material three-dimensional model may be superimposed on the character three-dimensional model to obtain the target three-dimensional model; for example, the three-dimensional model of glasses may be superimposed on the three-dimensional model of the face to obtain a face three-dimensional model wearing the glasses, and the target three-dimensional model may be rendered to obtain the display result. Compared with the two-dimensional sticker function widely used in cameras at present, the three-dimensional effect and the fidelity of the image superposition result are improved, and a user can, without leaving home, experience whether the latest accessories suit them through the image synthesis method of this embodiment.
As an optional implementation manner, the obtaining the target material and the material type corresponding to the target material, and determining a material three-dimensional model corresponding to the target material includes: downloading a material picture as a target material, identifying the material picture, determining the material type of the material in the material picture, and constructing a three-dimensional model of the material contained in the material picture as the three-dimensional model of the material.
In this embodiment, the material pictures may include pictures of the latest accessory materials such as hats, glasses and earrings, so that a user can download a picture of any new accessory from the Internet, construct the material three-dimensional model, and experience the wearing effect of the latest accessory without leaving home.
As an optional implementation, in order to superimpose the material three-dimensional model on the character three-dimensional model with a better fit, determining the material three-dimensional model corresponding to the target material includes: obtaining a generated three-dimensional model of the target material from a material library, and adjusting the generated three-dimensional model to a three-dimensional model whose size matches the character three-dimensional model, to serve as the material three-dimensional model.
In this embodiment, the material library may be a local database containing various materials for which three-dimensional models have already been generated, such as three-dimensional models of glasses, earrings, and the like. Adjusting the generated three-dimensional model to a size matching the character three-dimensional model may mean, for example, adjusting the hat three-dimensional model according to the head size of the character three-dimensional model, so that the hat three-dimensional model can be worn on the head of the character three-dimensional model.
It will be appreciated that resizing the material three-dimensional model to a size that matches the character three-dimensional model allows the material three-dimensional model to be superimposed on the character three-dimensional model with a more matching fit.
As an optional implementation manner, in order to enhance the three-dimensional stereoscopic effect of the target three-dimensional model, the rendering the target three-dimensional model to obtain a display result includes: and rendering the target three-dimensional model according to at least one of a three-dimensional scene, illumination, music and a filter to obtain a display result.
It can be understood that, in the embodiment, the target three-dimensional model is rendered by using three-dimensional scenes, illumination, music, filters and other manners, for example, a light source is added to the three-dimensional scenes, and the reflection type of the surface of each three-dimensional model is defined, so that the stereoscopic impression of the target three-dimensional model is stronger, and the superposition effect of the three-dimensional models is more vivid.
For example, a scene type corresponding to a three-dimensional scene or music is obtained, a color and a texture matched with the scene type are obtained, and the obtained color and texture are used for rendering the target three-dimensional model to obtain a display result; or obtaining a shadow type and/or a light attenuation parameter corresponding to the illumination, and rendering the three-dimensional model by the shadow type and/or the light attenuation parameter to obtain a display result; or the filter comprises at least one of a cloud pattern, a refraction pattern and a simulation light reflection, and the at least one of the cloud pattern, the refraction pattern and the simulation light reflection is superposed on the target three-dimensional model to obtain a display result.
As an optional implementation manner, in order to adjust the target three-dimensional model, before the rendering the target three-dimensional model to obtain a display result, the method further includes: and calculating the target three-dimensional model after at least one of rotation, displacement and scaling according to the calculation method of the Euler angle.
In this embodiment, the Euler angles are a set of three independent angular parameters that uniquely determine the orientation of an object rotating about a fixed point, consisting of the nutation angle θ, the precession angle ψ, and the rotation angle φ. After the two three-dimensional models are superimposed, a user may want to view the target three-dimensional model from different angles, or view the superposition of the material three-dimensional model and the character three-dimensional model at different angles and sizes, so the nodes of the target three-dimensional model often need to be adjusted through rotation, displacement, scaling, and the like. The adjusted target three-dimensional model is computed according to the Euler-angle calculation method, so that its data can be obtained in time.
The embodiment of the present application also provides an image synthesis apparatus for executing the unit of the image synthesis method shown in fig. 1A. Specifically, referring to fig. 3, fig. 3 is a schematic block diagram of an image synthesis apparatus provided in an embodiment of the present application. The image synthesizing apparatus 300 of the present embodiment includes: a character three-dimensional model building module 301, a material three-dimensional model building module 302, a fitting position determining module 303, a target three-dimensional model building module 304 and a rendering module 305; wherein,
the character three-dimensional model building module 301 is configured to perform feature recognition on a character image, and build a character three-dimensional model according to a feature recognition result;
the material three-dimensional model building module 302 is configured to obtain a target material and determine a material three-dimensional model corresponding to the target material;
the fitting position determining module 303 is configured to obtain the material type corresponding to the target material, and determine the fitting position on the character three-dimensional model corresponding to the material type;
the target three-dimensional model building module 304 is configured to superimpose the material three-dimensional model onto the fitting position of the character three-dimensional model to obtain a target three-dimensional model;
the rendering module 305 is configured to render the target three-dimensional model to obtain a display result.
The specific implementation method is the same as the image synthesis method shown in fig. 1A, and will not be described in detail here.
As an optional implementation manner, the material three-dimensional model building module 302 is specifically configured to obtain a generated three-dimensional model of the target material in a material library, and adjust the generated three-dimensional model to a three-dimensional model with a size matching the character three-dimensional model to serve as the material three-dimensional model. The specific implementation method is the same as the embodiment corresponding to the image synthesis method shown in fig. 1A, and will not be described in detail here.
As an optional implementation manner, the material three-dimensional model building module 302 is specifically configured to download a material picture as a target material, identify a material type of the material picture, and build a three-dimensional model of the material contained in the material picture as the material three-dimensional model. The specific implementation method is the same as the embodiment corresponding to the image synthesis method shown in fig. 1A, and will not be described in detail here.
As an optional implementation manner, the rendering module 305 is specifically configured to render the target three-dimensional model according to at least one of a three-dimensional scene, lighting, music, and a filter to obtain a display result. The specific implementation method is the same as the embodiment corresponding to the image synthesis method shown in fig. 1A, and will not be described in detail here.
As an alternative embodiment, the image synthesizing apparatus 300 further includes: an euler angle calculation module 307, configured to calculate the target three-dimensional model after at least one of rotation, displacement, and scaling according to a euler angle calculation method. The specific implementation method is the same as the embodiment corresponding to the image synthesis method shown in fig. 1A, and will not be described in detail here.
The embodiment of the present application also provides another image synthesis apparatus for executing a unit of the image synthesis method shown in fig. 2A. Specifically, referring to fig. 4, fig. 4 is a schematic block diagram of an image synthesis apparatus provided in an embodiment of the present application. The image synthesizing apparatus 400 of the present embodiment includes: a character three-dimensional model building module 401, a material three-dimensional model building module 402, a material node determining module 403, a fitting position determining module 404, a target three-dimensional model building module 405 and a rendering module 406; wherein,
a character three-dimensional model building module 401, configured to perform feature recognition on a character image, and build a character three-dimensional model according to a feature recognition result;
a material three-dimensional model building module 402, configured to obtain a target material and determine a material three-dimensional model corresponding to the target material;
a material node determining module 403, configured to determine material nodes in the character three-dimensional model, where each material node corresponds to one material type;
a fitting position determining module 404, configured to determine material nodes corresponding to the material types as fitting positions of the character three-dimensional model;
a target three-dimensional model building module 405, configured to superimpose the material three-dimensional model onto the fitting position of the character three-dimensional model to obtain a target three-dimensional model;
and a rendering module 406, configured to render the target three-dimensional model to obtain a display result.
The specific implementation method is the same as the image synthesis method shown in fig. 2A, and will not be described in detail here.
As an optional implementation manner, the material node determining module 403 is specifically configured to determine a human body part included in the three-dimensional character model, and if the human body part corresponds to a material type, determine that the human body part is a material node in the three-dimensional character model. The specific implementation method is the same as the embodiment corresponding to the image synthesis method shown in fig. 2A, and is not described in detail here.
As an optional implementation manner, the material three-dimensional model building module 402 is specifically configured to obtain a generated three-dimensional model of a target material from a material library, and adjust the generated three-dimensional model to a three-dimensional model with a size matching the character three-dimensional model to serve as the material three-dimensional model. The specific implementation method is the same as the embodiment corresponding to the image synthesis method shown in fig. 2A, and is not described in detail here.
As an optional implementation manner, the material three-dimensional model building module 402 is specifically configured to download a material picture as a target material, identify a material type of the material picture, and build a three-dimensional model of the material contained in the material picture as the material three-dimensional model. The specific implementation method is the same as the embodiment corresponding to the image synthesis method shown in fig. 2A, and is not described in detail here.
As an optional implementation manner, the rendering module 406 is specifically configured to render the target three-dimensional model according to at least one of a three-dimensional scene, illumination, music, and a filter to obtain a display result. The specific implementation method is the same as the embodiment corresponding to the image synthesis method shown in fig. 2A, and is not described in detail here.
As an alternative embodiment, the image synthesizing apparatus 400 further includes: an euler angle calculation module 407, configured to calculate the target three-dimensional model after at least one of rotation, displacement, and scaling according to a euler angle calculation method. The specific implementation method is the same as the embodiment corresponding to the image synthesis method shown in fig. 2A, and is not described in detail here.
Referring to fig. 5, fig. 5 is a schematic block diagram of an image synthesizing apparatus according to another embodiment of the present application. The image synthesizing apparatus in this embodiment, as shown in the figure, may include: one or more processors 501; one or more input devices 502, one or more output devices 503, and memory 504. The processor 501, the input device 502, the output device 503, and the memory 504 are connected by a bus 505. The memory 504 is used to store a computer program comprising program instructions, and the processor 501 is used to execute the program instructions stored in the memory 504. The processor 501 is configured to call the program instructions to perform the following operations:
performing feature recognition on a character image, and establishing a character three-dimensional model according to the feature recognition result;
obtaining a target material and the material type corresponding to the target material, and determining the material three-dimensional model corresponding to the target material and the fitting position on the character three-dimensional model corresponding to the material type;
and superimposing the material three-dimensional model onto the fitting position of the character three-dimensional model to obtain a target three-dimensional model, and rendering the target three-dimensional model to obtain a display result.
It should be understood that, in the embodiments of the present application, the processor 501 may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor.
The input device 502 may include a touch pad, a fingerprint sensor (for collecting fingerprint information of a user and direction information of the fingerprint), a microphone, etc., and the output device 503 may include a display (LCD, etc.), a speaker, etc.
The memory 504 may include a read-only memory and a random access memory, and provides instructions and data to the processor 501. A portion of the memory 504 may also include non-volatile random access memory. For example, the memory 504 may also store device type information.
In a specific implementation, the processor 501, the input device 502, and the output device 503 described in this embodiment of the present application may execute the implementation manners described in the first and second embodiments of the image synthesis method provided in the embodiments of the present application, and may also execute the implementation manner of the image synthesis device described in the embodiments of the present application, which is not described herein again.
The processor 501 in this embodiment may execute all functions of the character three-dimensional model building module, the material three-dimensional model building module, the fitting position determining module, the target three-dimensional model building module, and the rendering module in the aforementioned image synthesis apparatus. The input device 502 may be a camera for capturing a person image, or may be a receiving device for receiving a person image and a material image; the output device 503 may be a display for outputting the display result, or may be a transmission device for transmitting the display result to other devices.
In another embodiment of the present application, a computer-readable storage medium is provided, the computer-readable storage medium storing a computer program comprising program instructions that when executed by a processor implement:
performing feature recognition on a character image, and establishing a character three-dimensional model according to the feature recognition result;
obtaining a target material and the material type corresponding to the target material, and determining the material three-dimensional model corresponding to the target material and the fitting position on the character three-dimensional model corresponding to the material type;
and superimposing the material three-dimensional model onto the fitting position of the character three-dimensional model to obtain a target three-dimensional model, and rendering the target three-dimensional model to obtain a display result.
The computer-readable storage medium may be an internal storage unit of the image synthesis device of any of the foregoing embodiments, for example, a hard disk or a memory of the image synthesis device. The computer-readable storage medium may also be an external storage device of the image synthesis device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the image synthesis device. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the image synthesis device. The computer-readable storage medium stores the computer program and other programs and data required by the image synthesis device, and may also be used to temporarily store data that has been output or is to be output.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two; to illustrate the interchangeability of hardware and software clearly, the composition and steps of the examples have been described above generally in terms of their functions. Whether such functions are implemented as hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementations should not be considered beyond the scope of the present application.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the image synthesis apparatus and unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed image synthesizing apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the above-described division of units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiments of the present application.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.

Claims (15)

1. An image synthesis method, comprising:
carrying out feature recognition on a character image, and establishing a character three-dimensional model according to a feature recognition result;
obtaining a target material and a material type corresponding to the target material, and determining a material three-dimensional model corresponding to the target material and a fitting position of the material type corresponding to the character three-dimensional model;
and superposing the material three-dimensional model to the fitting position of the character three-dimensional model to obtain a target three-dimensional model, and rendering the target three-dimensional model to obtain a display result.
2. The method of claim 1, wherein determining a three-dimensional model of the target material comprises:
and obtaining the generated three-dimensional model of the target material in a material library, and adjusting the generated three-dimensional model into a three-dimensional model with the size matched with the character three-dimensional model to serve as the material three-dimensional model.
3. The method of claim 1, further comprising:
determining material nodes in the character three-dimensional model, wherein each material node corresponds to one material type;
wherein the determining the fitting position of the material type corresponding to the character three-dimensional model comprises: determining the material node corresponding to the material type as the fitting position of the character three-dimensional model.
4. The method of claim 3, wherein determining material nodes in the three-dimensional model of the person comprises:
and determining a human body part contained in the character three-dimensional model, and if the human body part corresponds to a material type, determining the human body part as a material node in the three-dimensional model.
5. The method of claim 1, wherein obtaining the target material and the material type corresponding to the target material, and determining the material three-dimensional model corresponding to the target material comprises:
downloading a material picture as a target material, identifying the material picture, determining the material type of the material in the material picture, and constructing a three-dimensional model of the material contained in the material picture as the material three-dimensional model.
6. The method according to any one of claims 1 to 5, wherein the rendering the target three-dimensional model to obtain a display result comprises:
and rendering the target three-dimensional model according to at least one of a three-dimensional scene, illumination, music and a filter to obtain a display result.
7. The method of claim 6, wherein rendering the target three-dimensional model according to at least one of a three-dimensional scene, lighting, music, and a filter to obtain a display result comprises:
obtaining a scene type corresponding to a three-dimensional scene or music, obtaining a color and a texture matched with the scene type, and rendering the target three-dimensional model by using the obtained color and texture to obtain a display result;
or obtaining a shadow type and/or a light attenuation parameter corresponding to the illumination, and rendering the three-dimensional model by the shadow type and/or the light attenuation parameter to obtain a display result;
or the filter comprises at least one of a cloud pattern, a refraction pattern and a simulation light reflection, and the at least one of the cloud pattern, the refraction pattern and the simulation light reflection is superposed on the target three-dimensional model to obtain a display result.
8. The method of any one of claims 1 to 5, wherein before rendering the target three-dimensional model to display results, the method further comprises:
and calculating the target three-dimensional model after at least one of rotation, displacement and scaling according to the calculation method of the Euler angle.
9. An image synthesizing apparatus characterized by comprising: the system comprises a figure three-dimensional model building module, a material three-dimensional model building module, a fitting position determining module, a target three-dimensional model building module and a rendering module;
the character three-dimensional model building module is used for carrying out feature recognition on the character image and building a character three-dimensional model according to a feature recognition result;
the material three-dimensional model building module is used for obtaining a target material and determining a material three-dimensional model corresponding to the target material;
the fitting position determining module is used for acquiring a material type corresponding to the target material and determining a fitting position of the material type corresponding to the character three-dimensional model;
the target three-dimensional model building module is used for superimposing the material three-dimensional model at the fitting position of the character three-dimensional model to obtain a target three-dimensional model;
the rendering module is used for rendering the target three-dimensional model to obtain a display result.
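The apparatus of claim 9 is the method pipeline split into five modules. A minimal sketch of how they compose, with every callable a stand-in for the corresponding module (all signatures invented for illustration):

    def synthesize_image(character_image, target_material,
                         build_character_model, build_material_model,
                         find_fitting_position, attach, render):
        # Feature recognition -> character three-dimensional model.
        character_model = build_character_model(character_image)
        material_model, material_type = build_material_model(target_material)
        position = find_fitting_position(character_model, material_type)
        # Superimpose the material model at the fitting position.
        target_model = attach(character_model, material_model, position)
        return render(target_model)  # display result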
10. The image synthesizing apparatus according to claim 9,
the material three-dimensional model building module is specifically configured to obtain a pre-generated three-dimensional model of the target material from a material library, and to scale the pre-generated three-dimensional model to a size matched with the character three-dimensional model to serve as the material three-dimensional model.
11. The image synthesizing apparatus according to claim 9, characterized by further comprising: a material node determining module;
the material node determining module is used for determining material nodes in the character three-dimensional model, and each material node corresponds to one material type;
the fitting position determining module is specifically configured to determine the material node corresponding to the material type as the fitting position of the character three-dimensional model.
12. The image synthesis apparatus of claim 11, wherein the material node determining module is specifically configured to determine a human body part contained in the character three-dimensional model, and, if the human body part corresponds to a material type, to determine the human body part as a material node in the character three-dimensional model.
13. The image synthesis apparatus according to claim 9, wherein the material three-dimensional model building module is specifically configured to download a material picture as the target material, identify the material type of the material in the material picture, and construct a three-dimensional model of the material contained in the material picture as the material three-dimensional model.
14. An image synthesis device comprising a processor, an input device, an output device and a memory, the processor, the input device, the output device and the memory being interconnected, wherein the memory is configured to store a computer program comprising program instructions, the processor being configured to invoke the program instructions to perform the method of any of claims 1 to 8.
15. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the method according to any one of claims 1 to 8.
CN201810296289.4A 2018-03-30 2018-03-30 Image synthesis method, equipment and computer readable medium Active CN108447043B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810296289.4A CN108447043B (en) 2018-03-30 2018-03-30 Image synthesis method, equipment and computer readable medium


Publications (2)

Publication Number Publication Date
CN108447043A true CN108447043A (en) 2018-08-24
CN108447043B CN108447043B (en) 2022-09-20

Family

ID=63199183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810296289.4A Active CN108447043B (en) 2018-03-30 2018-03-30 Image synthesis method, equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN108447043B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20030005681A (en) * 2001-07-10 2003-01-23 (주)콤위버정보통신 Method for servicing the 3 dimensional feature information of object on the internet map
US20150040073A1 (en) * 2012-09-24 2015-02-05 Google Inc. Zoom, Rotate, and Translate or Pan In A Single Gesture
CN103456008A (en) * 2013-08-26 2013-12-18 刘晓英 Method for matching face and glasses
CN104809638A (en) * 2015-05-20 2015-07-29 成都通甲优博科技有限责任公司 Virtual glasses trying method and system based on mobile terminal
CN105842875A (en) * 2016-06-07 2016-08-10 杭州美戴科技有限公司 Method for designing glasses frame based on facial three-dimensional measurement

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wang Cheng et al.: "Vega Real-Time 3D Visual Simulation Technology", 31 December 2005, Huazhong University of Science and Technology Press *
Wang Zhengyou et al.: "Digital Media Design and Production (2nd Edition)", 31 May 2016, Chongqing University Press *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109300191A (en) * 2018-08-28 2019-02-01 百度在线网络技术(北京)有限公司 AR model processing method and apparatus, electronic device and readable storage medium
CN109767485A (en) * 2019-01-15 2019-05-17 三星电子(中国)研发中心 Image processing method and device
CN110298786A (en) * 2019-07-02 2019-10-01 北京字节跳动网络技术有限公司 Image rendering method, device, equipment and storage medium
CN110991325A (en) * 2019-11-29 2020-04-10 腾讯科技(深圳)有限公司 Model training method, image recognition method and related device
CN112927343B (en) * 2019-12-05 2023-09-05 杭州海康威视数字技术股份有限公司 Image generation method and device
CN112927343A (en) * 2019-12-05 2021-06-08 杭州海康威视数字技术股份有限公司 Image generation method and device
CN111178167B (en) * 2019-12-12 2023-07-25 咪咕文化科技有限公司 Method and device for auditing through lens, electronic equipment and storage medium
CN111178167A (en) * 2019-12-12 2020-05-19 咪咕文化科技有限公司 Method and device for auditing through lens, electronic equipment and storage medium
CN111625297A (en) * 2020-05-28 2020-09-04 Oppo广东移动通信有限公司 Application program display method, terminal and computer readable storage medium
WO2022083213A1 (en) * 2020-10-21 2022-04-28 北京字跳网络技术有限公司 Image generation method and apparatus, and device and computer-readable medium
CN112257797A (en) * 2020-10-29 2021-01-22 瓴盛科技有限公司 Sample Image Generation Method and Corresponding Training Method for Pedestrian Head Image Classifier
CN115486088A (en) * 2021-03-30 2022-12-16 京东方科技集团股份有限公司 Information interaction method, computer readable storage medium, communication terminal
US12229893B2 (en) 2021-03-30 2025-02-18 Beijing Boe Technology Development Co., Ltd. Information interaction method, computer-readable storage medium and communication terminal
CN114332327A (en) * 2021-12-31 2022-04-12 北京有竹居网络技术有限公司 Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and server
CN114119849A (en) * 2022-01-24 2022-03-01 阿里巴巴(中国)有限公司 Three-dimensional scene rendering method, device and storage medium
CN114430466A (en) * 2022-01-25 2022-05-03 北京字跳网络技术有限公司 Material display method, device, electronic device, storage medium and program product
WO2023143120A1 (en) * 2022-01-25 2023-08-03 北京字跳网络技术有限公司 Material display method and apparatus, electronic device, storage medium, and program product
WO2023211364A3 (en) * 2022-04-24 2023-12-28 脸萌有限公司 Image processing method and apparatus, electronic device, and storage medium
CN117195360A (en) * 2023-09-07 2023-12-08 广东南华工商职业学院 3D scanning-based landscape model design method, system, equipment and medium
CN117195360B (en) * 2023-09-07 2024-04-09 广东南华工商职业学院 3D scanning-based landscape model design method, system, equipment and medium

Also Published As

Publication number Publication date
CN108447043B (en) 2022-09-20

Similar Documents

Publication Publication Date Title
CN108447043B (en) Image synthesis method, equipment and computer readable medium
KR101190686B1 (en) Image processing apparatus, image processing method, and computer readable recording medium
US20190082211A1 (en) Producing realistic body movement using body images
CN105279795B (en) Augmented reality system based on 3D marker
KR20180108709A (en) Method of virtually dressing a user's realistic body model
CN110111418A (en) Create the method, apparatus and electronic equipment of facial model
CN108369653A (en) Eye pose identification using eye features
CN106200918B (en) Information display method and device based on AR and mobile terminal
CN106200914B (en) Augmented reality triggering method and device and photographing equipment
CN106447604B (en) A method and device for transforming facial images in video
CN116250014A (en) Cross-domain neural network for synthesizing images with fake hair combined with real images
JP6563580B1 (en) Communication system and program
JP7624065B2 (en) 3D mesh generator based on 2D images
CN107483892A (en) Video data real-time processing method and device, computing device
CN110045817A (en) Using the interactive camera chain of virtual reality technology
CN105894571B (en) Method and device for processing multimedia information
CN107613360A (en) Video data real-time processing method and device, computing equipment
CN111639613A (en) Augmented reality AR special effect generation method and device and electronic equipment
CN107680166A (en) Method and apparatus for intelligent creation
WO2023142650A1 (en) Special effect rendering
CN111383313B (en) Virtual model rendering method, device, equipment and readable storage medium
WO2012158801A2 (en) Augmented reality visualization system and method for cosmetic surgery
CN109685911B (en) AR glasses capable of realizing virtual fitting and realization method thereof
TW201629907A (en) System and method for generating three-dimensional facial image and device thereof
US11127212B1 (en) Method of projecting virtual reality imagery for augmenting real world objects and surfaces

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant