CN109448093B - Method and device for generating style image - Google Patents
- Publication number: CN109448093B (application CN201811248987.3A)
- Authority
- CN
- China
- Legal status: Active (the legal status is an assumption by Google, not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/02—Non-photorealistic rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/40—Filling a planar surface by adding surface attributes, e.g. colour or texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
Abstract
The invention discloses a method and a device for generating style images, wherein the method comprises the following steps: S1, establishing face recognition models for humans and for various birds and animals, and object recognition models for various objects; S2, acquiring an image to be processed, and identifying the faces of people, birds and animals and the main objects contained in the image by using the established face recognition models and object recognition models; S3, for the extracted face key points or object feature points, finding the nearest neighboring points and establishing connecting lines between them; and S4, filling each region of the connected image with color, thereby automatically processing the image into a low-polygon (Low Poly) style image.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a device for generating a Low polygon (Low Poly) style image.
Background
Low Poly (also called low polygon, or low-polygon modeling style) was originally a term in 3D modeling, referring to a low-precision model built from relatively few points, lines and surfaces; most models in ordinary online games are low-poly models. Low Poly has since entered the field of graphic design: following flat design (Flat Design) and long shadow (Long Shadow), the low-polygon style has rapidly set off a new design trend. As shown in fig. 1, a Low Poly style design image is an abstract style image composed of many small geometric shapes.
Currently there are many ways to produce such style images in the design field, for example making them with flat-design software or by means of 3D modeling. Whichever method is used, however, the demands on the designer's basic drawing skills are extremely high: the designer needs good hand-drawing ability and a firm overall grasp of structure, light and shade, and must draw nodes and pick fill colors according to the structure and tones of the image; the modeling approach additionally requires the designer to be highly skilled at operating the software.
Disclosure of Invention
To overcome the above defects in the prior art, an object of the present invention is to provide a method and an apparatus for generating style images, in which polygons are created by connecting adjacent feature points and each polygon is filled with a color derived from the color values of the pixels it contains, so as to automatically process an image into a low-polygon (Low Poly) style image.
To achieve the above and other objects, the present invention provides a method for generating a style image, comprising the steps of:
S1, establishing face recognition models of people, various birds and animals and object recognition models of various objects;
S2, acquiring an image to be processed, and identifying the faces of people, various birds and animals and various main objects contained in the image by using the established face recognition model and the object recognition model;
S3, searching out the nearest neighboring points to establish connecting lines according to the extracted key points of the face or the feature points of the object;
and S4, filling colors in each area of the image with the established connection line.
Preferably, in step S1, the face recognition model is established by establishing key points based on human faces and key points of faces of various birds and animals.
Preferably, the face recognition model building process is as follows:
acquiring a sample image, marking face key points of people, various birds and animals in the acquired sample image, and recording the coordinate positions of the face key points;
processing the input sample image according to the coordinate information of the marked face key points, and sending the processed sample image into a pre-trained key point prediction network to obtain a key point prediction result;
adjusting the key point prediction result to obtain the labeling points of the key points of the faces of people, various birds and animals;
and establishing the face recognition model according to the acquired face key points.
Preferably, in step S1, the process of establishing the article identification model is as follows:
capturing mass pictures of various articles through a search engine, and preprocessing the pictures;
extracting, from the preprocessed pictures of the various articles, feature vectors of a given dimension according to the type of each article, using features such as the color, texture, pattern, shape and transparency distribution of the spatial point density of each article;
through repeated recognition training, standard templates are extracted from various classified articles in a training set and stored in a file, and accordingly article recognition models corresponding to various articles are established.
Preferably, the step of extracting the feature vector of a certain dimension is to divide the article image into M × N grid regions, and calculate a ratio of the number of points in each grid to the total number of points of the article to obtain the feature vector of M × N dimension.
Preferably, in step S2, the image to be processed is acquired and compared against the pre-established face recognition model and article recognition model to recognize the faces of people, various birds and animals and the various main articles contained in the image; if face regions of people, birds or animals are recognized in the image, each face key point contained in the face region is extracted according to the face recognition model; and if various articles contained in the image are recognized, feature points of each article, such as its contour, edges and corners, are extracted according to the article recognition model.
Preferably, in step S3, each connecting line is established so that it does not cross any other connecting line.
Preferably, in step S4, the color of each pixel point in each connected region is determined, the HSB values of all pixel points contained in each region are obtained, and each region is filled with the HSB value shared by the largest number of pixels.
Preferably, in step S4, the color of each pixel point in each connected region is determined, the HSB values of all pixel points contained in each region are obtained, and each region is filled with the average HSB value of its dominant hue.
In order to achieve the above object, the present invention further provides a style image generating apparatus, comprising:
the identification model establishing unit is used for establishing face identification models of people, various birds and animals and article identification models of various articles;
the identification unit is used for acquiring an image to be processed and identifying the faces of people, various birds and animals and various main objects contained in the image by using the established face identification model and the object identification model;
the feature point connecting unit is used for finding out adjacent nearest points to establish a connecting line according to the extracted face key points or feature points of the object;
and the color filling unit is used for filling colors into each area of the image with the established connection line.
Compared with the prior art, the style image generation method and device of the present invention establish polygons by connecting adjacent feature points and fill each polygon with a color taken from the color values of the pixels it contains, thereby automatically processing an image into a low-polygon (Low Poly) style image. This removes the long and tedious manual work in which a designer draws nodes, picks colors and fills colors by hand, saves design time, and achieves a good result without requiring the designer to master advanced drawing or software-operation skills.
Drawings
FIG. 1 is a diagram of an image of a Low polygon (Low Poly) style design;
FIG. 2 is a flowchart illustrating steps of a method for generating a stylistic image in accordance with the present invention;
FIG. 3 is a system architecture diagram of a stylized image generation apparatus of the present invention.
Detailed Description
Other advantages and effects of the present invention will be readily apparent to those skilled in the art from the disclosure herein, which describes the invention through specific embodiments in conjunction with the accompanying drawings. The invention is capable of other and different embodiments, and its several details may be modified in various respects, all without departing from the spirit and scope of the present invention.
FIG. 2 is a flowchart illustrating steps of a method for generating a stylized image of the present invention. As shown in fig. 2, the method for generating a stylized image of the present invention includes the following steps:
s1, establishing face recognition models of people, various birds and animals and article recognition models of various articles.
According to the facial features of humans and of various birds and animals, namely the eyebrows, eyes, eye corners, nose, nostrils, mouth, lips and cheekbones and the structural and contour characteristics of each of these parts, the invention identifies stable facial key points that reflect the combined characteristics of the face and remain usable when the face turns to different angles or is lit by different external light. Therefore, in the embodiment of the invention, the face recognition model is established from 72 key points defined on human faces together with the facial key points of various birds and animals. Specifically, the face recognition model is established as follows:
acquiring a sample image, manually marking key points of the face of people, various birds and animals in the acquired sample image, and recording the coordinate positions of the key points;
processing the input sample image according to the coordinate information of the marked face key points, including but not limited to transformations such as cropping, scaling and rotation, and sending the processed sample image into a pre-trained key point prediction network to obtain a key point prediction result;
and adjusting the key point prediction result to obtain the labeled points of the face key points of humans and of various birds and animals. The adjustment may be made, for example, by manually dragging the predicted points with a mouse so that the prediction becomes more accurate, although the invention is not limited to this;
and establishing a face recognition model of people, various birds and animals according to the obtained face key points.
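The cropping, scaling and rotation mentioned above must be applied consistently to the annotated key-point coordinates so that the labels stay aligned with the transformed pixels. A minimal Python sketch of that bookkeeping follows; the function name, argument order and the order of the transforms are illustrative assumptions, not part of the disclosure:

```python
import math

def transform_keypoints(points, scale=1.0, angle_deg=0.0, crop_offset=(0.0, 0.0)):
    """Apply the same crop / scale / rotate transform to annotated key-point
    coordinates that was applied to the sample image, so that the labels
    stay aligned with the transformed pixels."""
    theta = math.radians(angle_deg)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    out = []
    for x, y in points:
        x, y = x - crop_offset[0], y - crop_offset[1]  # crop: shift the origin
        x, y = x * scale, y * scale                    # scale about the origin
        out.append((x * cos_t - y * sin_t,             # rotate about the origin
                    x * sin_t + y * cos_t))
    return out

# A key point at (10, 0), scaled 2x and rotated 90 degrees, lands near (0, 20).
rotated = transform_keypoints([(10.0, 0.0)], scale=2.0, angle_deg=90.0)
```

Any fixed convention works here; what matters is that the image and its key-point labels always pass through the identical transform.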
In the embodiment of the present invention, for the article identification model, the establishment process is as follows:
capturing mass pictures of various articles through a search engine, and preprocessing the pictures;
extracting, from the preprocessed pictures of the various objects, a feature vector of a given dimension according to the type of each object, using features such as the color, texture, pattern, shape and transparency distribution of the spatial point density of each object; specifically, the object image is divided into M×N grid regions, and the ratio of the number of points in each grid cell to the total number of points of the object is calculated to obtain an M×N-dimensional feature vector;
through repeated recognition training, standard templates are extracted from various classified articles in a training set and stored in a file, and therefore recognition models corresponding to various articles are established.
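The M×N grid feature described above can be sketched in a few lines of Python; the function name and the toy point set are illustrative assumptions:

```python
def grid_feature_vector(points, width, height, m, n):
    """Divide a width x height object image into an m x n grid and return, for
    each cell, the ratio of the object's points falling in that cell to the
    object's total number of points (an m*n-dimensional feature vector)."""
    counts = [0] * (m * n)
    for x, y in points:
        col = min(int(x * n / width), n - 1)   # clamp points on the far edge
        row = min(int(y * m / height), m - 1)
        counts[row * n + col] += 1
    total = len(points)
    return [c / total for c in counts] if total else counts

# Four object points spread evenly over a 4x4 image, with a 2x2 grid:
vec = grid_feature_vector([(1, 1), (2, 1), (1, 2), (3, 3)], 4, 4, 2, 2)
```

Because each component is a ratio of counts, the vector always sums to 1 for a non-empty object, which makes vectors from differently sized images comparable.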
And S2, acquiring the image to be processed, and identifying the faces of people, various birds and animals and the various main objects contained in the image by using the established face recognition model and object recognition model. Specifically, in step S2, the image to be processed is acquired and compared against the pre-established face recognition model and object recognition model: if face regions of people, birds or animals are recognized in the image, each face key point contained in the face region is extracted according to the face recognition model; if various objects contained in the image are recognized, feature points of each object, such as its contour, edges and corners, are extracted according to the object recognition model.
And S3, for the extracted face key points or object feature points, finding the nearest neighboring points and establishing connecting lines between them, ensuring that each connecting line, when established, crosses no other connecting line; that is, a connecting line between two points cannot be established across an existing connecting line.
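Step S3 can be sketched as a greedy procedure: candidate links are examined shortest-first, and a link is kept only if it crosses none of the links already established. This is one possible reading of the step (a Delaunay triangulation of the points would be another standard way to get a non-crossing mesh); the helper names are assumptions:

```python
import itertools
import math

def _cross(p1, p2, p3, p4):
    """True if segments p1-p2 and p3-p4 properly intersect (no shared endpoint)."""
    def orient(a, b, c):
        v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (v > 0) - (v < 0)
    if len({p1, p2, p3, p4}) < 4:  # links that share an endpoint do not "cross"
        return False
    return (orient(p1, p2, p3) != orient(p1, p2, p4)
            and orient(p3, p4, p1) != orient(p3, p4, p2))

def build_links(points):
    """Connect nearest points first, and never let a new link cross an old one."""
    candidates = sorted(itertools.combinations(range(len(points)), 2),
                        key=lambda e: math.dist(points[e[0]], points[e[1]]))
    links = []
    for i, j in candidates:
        if not any(_cross(points[i], points[j], points[a], points[b])
                   for a, b in links):
            links.append((i, j))
    return links

# Four corner points: all four sides plus exactly one diagonal survive,
# because the second diagonal would cross the first.
links = build_links([(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)])
```

Processing candidates shortest-first makes the result a fully triangulated, planar set of links, which is exactly what the color-filling step S4 needs as input.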
And S4, filling colors in each area of the image with the established connection line.
In an embodiment of the present invention, in step S4, the color of each pixel point in each connected region is determined, the H (hue), S (saturation) and B (brightness) values of all pixel points contained in each region are obtained, and the region is filled with the HSB value shared by the largest number of pixels, i.e. a curve of color counts is drawn and the HSB color value at its peak is taken.
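This first S4 variant can be sketched with Python's standard colorsys module, treating HSV as HSB; the toy region below is an assumption for illustration:

```python
import colorsys
from collections import Counter

def dominant_hsb(rgb_pixels):
    """Convert every RGB pixel of a region to HSB (HSV) and return the HSB
    value that occurs most often -- the peak of the color-count curve."""
    counts = Counter(
        colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        for r, g, b in rgb_pixels
    )
    return counts.most_common(1)[0][0]

# A region that is mostly pure red with some pure blue: red wins.
fill = dominant_hsb([(255, 0, 0)] * 3 + [(0, 0, 255)] * 2)
```

On real photographs, nearby pixels rarely share an exact HSB value, so a practical version would quantize the HSB values into bins before counting.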
In another embodiment of the present invention, in step S4, the color of each pixel point in each connected region is determined, the H (hue), S (saturation) and B (brightness) values of all pixel points contained in each region are obtained, and the region is filled with the average HSB value of its dominant hue; that is, the HSB color value of each point in the region is extracted, and the averages of H, S and B are calculated separately to obtain the average color of the region.
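The second S4 variant averages H, S and B separately. A minimal sketch follows; note that it averages hue naively, ignoring hue's circularity (0 and 1 are the same hue), a simplification the surrounding text does not address:

```python
import colorsys

def mean_hsb(rgb_pixels):
    """Average the H, S and B components separately over all pixels of a
    region and return the mean as the region's fill color."""
    hsb = [colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
           for r, g, b in rgb_pixels]
    n = len(hsb)
    return tuple(sum(px[i] for px in hsb) / n for i in range(3))

# Pure red (H=0) and pure blue (H=2/3) average to H=1/3 at full S and B.
avg = mean_hsb([(255, 0, 0), (0, 0, 255)])
```

Averaging gives smoother transitions between adjacent triangles than the dominant-value variant, at the cost of occasionally producing a hue present in neither source color, as the red/blue example shows.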
FIG. 3 is a system architecture diagram of a stylized image generation apparatus of the present invention. As shown in fig. 3, a genre image generation device of the present invention includes:
the recognition model establishing unit 301 is used for establishing a face recognition model and an article recognition model of various objects for people, various birds and animals.
According to the facial features of humans and of various birds and animals, namely the eyebrows, eyes, eye corners, nose, nostrils, mouth, lips and cheekbones and the structural and contour characteristics of each of these parts, the invention identifies stable facial key points that reflect the combined characteristics of the face and remain usable when the face turns to different angles or is lit by different external light. Therefore, in the embodiment of the present invention, the recognition model establishing unit 301 establishes the face recognition model from 72 key points defined on human faces together with the facial key points of various birds and animals. Specifically, the face recognition model establishment process of the recognition model establishing unit 301 is as follows:
acquiring a sample image, manually marking key points of the face of people, various birds and animals in the acquired sample image, and recording the coordinate positions of the key points;
processing the input sample image according to the coordinate information of the marked face key points, including but not limited to transformations such as cropping, scaling and rotation, and sending the processed sample image into a pre-trained key point prediction network to obtain a key point prediction result;
and adjusting the key point prediction result to obtain the labeled points of the face key points of humans and of various birds and animals. The adjustment may be made, for example, by manually dragging the predicted points with a mouse so that the prediction becomes more accurate, although the invention is not limited to this.
In an embodiment of the present invention, the process of establishing the article identification model by the identification model establishing unit 301 is as follows:
capturing mass pictures of various articles through a search engine, and preprocessing the pictures;
extracting, from the preprocessed pictures of the various objects, a feature vector of a given dimension according to the type of each object, using features such as the color, texture, pattern, shape and transparency distribution of the spatial point density of each object; specifically, the object image is divided into M×N grid regions, and the ratio of the number of points in each grid cell to the total number of points of the object is calculated to obtain an M×N-dimensional feature vector;
through repeated recognition training, standard templates are extracted from various classified articles in a training set and stored in a file, and therefore recognition models corresponding to various articles are established.
The identifying unit 302 is configured to acquire an image to be processed and to identify the faces of people, various birds and animals and the various main objects contained in the image by using the established face recognition model and object recognition model. Specifically, the identifying unit 302 first acquires the image to be processed and compares it against the pre-established face recognition model and object recognition model: if face regions of people, birds or animals are recognized in the image, each face key point contained in the face region is extracted according to the face recognition model; if various objects contained in the image are recognized, feature points of each object, such as its contour, edges and corners, are extracted according to the object recognition model.
The feature point connection unit 303 is configured to find, for the extracted face key points or object feature points, the nearest neighboring points and establish connecting lines between them, ensuring that each connecting line, when established, crosses no other connecting line; that is, a connecting line between two points cannot be established across an existing connecting line.
And a color filling unit 304, configured to perform color filling on each area of the image with the established connection line.
In an embodiment of the present invention, for the color filling of each region, the color filling unit 304 first determines the color of each pixel point in each connected region, obtains the H (hue), S (saturation) and B (brightness) values of all pixel points contained in each region, and fills the region with the HSB value shared by the largest number of pixels, i.e. a curve of color counts is drawn and the HSB color value at its peak is taken.
In another embodiment of the present invention, for the color filling of each region, the color filling unit 304 first determines the color of each pixel point in each connected region, obtains the H (hue), S (saturation) and B (brightness) values of all pixel points contained in each region, and fills the region with the average HSB value of its dominant hue; that is, the HSB color value of each point in the region is extracted, and the averages of H, S and B are calculated separately to obtain the average color of the region.
In summary, the method and device for generating style images of the present invention establish polygons by connecting adjacent feature points and fill each polygon with a color taken from the color values of the pixels it contains, thereby automatically processing an image into a low-polygon (Low Poly) style image. This removes the long and tedious manual work in which a designer draws nodes, picks colors and fills colors by hand, saves design time, and achieves a good result without requiring the designer to master advanced drawing or software-operation skills.
The foregoing embodiments are merely illustrative of the principles and utilities of the present invention and are not intended to limit the invention. Modifications and variations can be made to the above-described embodiments by those skilled in the art without departing from the spirit and scope of the present invention. Therefore, the protection scope of the present invention should be as set forth in the claims.
Claims (9)
1. A method of generating a stylized image, comprising the steps of:
S1, establishing face recognition models of people, various birds and animals and object recognition models of various objects;
S2, acquiring an image to be processed, and identifying the faces of people, various birds and animals and various main objects contained in the image by using the established face recognition model and the object recognition model;
S3, for the extracted face key points or object feature points, finding the nearest neighboring points and establishing connecting lines between them; wherein establishing the connecting lines specifically comprises: ensuring that each connecting line, when established, does not cross any other connecting line;
and S4, filling colors into each area of the image with the established connection line.
2. A method of generating stylized imagery according to claim 1, wherein: in step S1, the face recognition model is established by establishing key points based on human faces and key points of faces of various birds and animals.
3. A method of stylistic image generation as claimed in claim 1, wherein said facial recognition model is built by:
acquiring a sample image, marking face key points of people, various birds and animals in the acquired sample image, and recording the coordinate positions of the face key points;
processing the input sample image according to the coordinate information of the marked face key points, and sending the processed sample image into a pre-trained key point prediction network to obtain a key point prediction result;
adjusting the key point prediction result to obtain the labeling points of the key points of the faces of people, various birds and animals;
and establishing the face recognition model according to the acquired face key points.
4. A method of generating a stylized image, as claimed in claim 1, characterized by: in step S1, the process of establishing the article identification model is as follows:
capturing mass pictures of various articles through a search engine, and preprocessing the pictures;
extracting, from the preprocessed pictures of the various articles, feature vectors of a given dimension according to the type of each article, using features such as the color, texture, pattern, shape and transparency distribution of the spatial point density of each article;
through repeated recognition training, standard templates are extracted from various classified articles in a training set and stored in a file, and accordingly article recognition models corresponding to various articles are established.
5. A method of generating stylized imagery according to claim 4, wherein: the step of extracting the feature vector of a given dimension divides the article image into M×N grid regions and calculates the ratio of the number of points in each grid cell to the total number of points of the article, obtaining an M×N-dimensional feature vector.
6. A method of generating stylized imagery according to claim 1, wherein: in step S2, acquiring an image to be processed, comparing a pre-established face recognition model with an object recognition model, and recognizing faces of people, various birds and animals and various main objects contained in the image; if the face areas of people, various birds and animals are identified in the image, extracting each face key point contained in the face area according to a face identification model; and if various articles contained in the image are identified, extracting the contour, edge and corner feature points of each article according to the article identification model.
7. A method of generating stylized imagery according to claim 1, wherein: in step S4, the color of each pixel point in each connected region is determined, the HSB values of all pixel points contained in each region are obtained, and each region is filled with the HSB value shared by the largest number of pixels.
8. A method of generating stylized imagery according to claim 1, wherein: in step S4, the color of each pixel point in each connected region is determined, the HSB values of all pixel points contained in each region are obtained, and each region is filled with the average HSB value of the region.
9. A stylized image generating apparatus comprising:
the identification model establishing unit is used for establishing face identification models of people, various birds and animals and article identification models of various articles;
the identification unit is used for acquiring an image to be processed and identifying the faces of people, various birds and animals and various main objects contained in the image by using the established face identification model and the object identification model;
the feature point connecting unit is used for finding, for the extracted face key points or object feature points, the nearest neighboring points and establishing connecting lines between them; wherein establishing the connecting lines specifically comprises: ensuring that each connecting line, when established, does not cross any other connecting line;
and the color filling unit is used for filling colors in all areas of the image with the established connecting line.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811248987.3A CN109448093B (en) | 2018-10-25 | 2018-10-25 | Method and device for generating style image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109448093A CN109448093A (en) | 2019-03-08 |
CN109448093B true CN109448093B (en) | 2023-01-06 |
Family
ID=65547856
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811248987.3A Active CN109448093B (en) | 2018-10-25 | 2018-10-25 | Method and device for generating style image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109448093B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110414470B (en) * | 2019-08-05 | 2023-05-09 | 深圳市矽赫科技有限公司 | Inspection method based on terahertz and visible light |
CN111222519B (en) * | 2020-01-16 | 2023-03-24 | 西北大学 | Construction method, method and device of hierarchical colored drawing manuscript line extraction model |
CN112818146B (en) * | 2021-01-26 | 2022-12-02 | 山西三友和智慧信息技术股份有限公司 | Recommendation method based on product image style |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006323877A (en) * | 1995-06-16 | 2006-11-30 | Seiko Epson Corp | Face image processing method and face image processing apparatus |
CN103914699A (en) * | 2014-04-17 | 2014-07-09 | 厦门美图网科技有限公司 | Automatic lip gloss image enhancement method based on color space |
CN107730573A (en) * | 2017-09-22 | 2018-02-23 | 西安交通大学 | A kind of personal portrait cartoon style generation method of feature based extraction |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101360246B (en) * | 2008-09-09 | 2010-06-02 | 西南交通大学 | Video error concealment method combined with 3D face model |
KR101653812B1 (en) * | 2014-12-05 | 2016-09-05 | 연세대학교 산학협력단 | Apparatus and Method of portrait image processing for style synthesis |
CN106297492B (en) * | 2016-08-19 | 2019-06-25 | 上海葡萄纬度科技有限公司 | A kind of Educational toy external member and the method using color and outline identification programming module |
- 2018-10-25 CN CN201811248987.3A patent/CN109448093B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006323877A (en) * | 1995-06-16 | 2006-11-30 | Seiko Epson Corp | Face image processing method and face image processing apparatus |
CN103914699A (en) * | 2014-04-17 | 2014-07-09 | 厦门美图网科技有限公司 | Automatic lip gloss image enhancement method based on color space |
CN107730573A (en) * | 2017-09-22 | 2018-02-23 | 西安交通大学 | A kind of personal portrait cartoon style generation method of feature based extraction |
Also Published As
Publication number | Publication date |
---|---|
CN109448093A (en) | 2019-03-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7526412B2 (en) | Method for training a parameter estimation model, apparatus for training a parameter estimation model, device and storage medium | |
CN109697688B (en) | Method and device for image processing | |
CN112784621B (en) | Image display method and device | |
CN107993216B (en) | Image fusion method and equipment, storage medium and terminal thereof | |
US10726599B2 (en) | Realistic augmentation of images and videos with graphics | |
CN106548516B (en) | Three-dimensional roaming method and device | |
CN110268442B (en) | Computer-implemented method of detecting foreign objects on background objects in an image, apparatus for detecting foreign objects on background objects in an image, and computer program product | |
US10347052B2 (en) | Color-based geometric feature enhancement for 3D models | |
CN113628327A (en) | Head three-dimensional reconstruction method and equipment | |
JP7282216B2 (en) | Representation and Extraction of Layered Motion in Monocular Still Camera Video | |
US10832095B2 (en) | Classification of 2D images according to types of 3D arrangement | |
KR101759188B1 (en) | the automatic 3D modeliing method using 2D facial image | |
KR101829733B1 (en) | Conversion Method For A 2D Image to 3D Graphic Models | |
CN109448093B (en) | Method and device for generating style image | |
CN110544300B (en) | Method for automatically generating three-dimensional model based on two-dimensional hand-drawn image characteristics | |
CN109410316A (en) | Method, tracking, relevant apparatus and the storage medium of the three-dimensional reconstruction of object | |
CN103337072A (en) | Texture and geometric attribute combined model based indoor target analytic method | |
CN112102480B (en) | Image data processing method, apparatus, device and medium | |
CN114862716A (en) | Image enhancement method, device and equipment for face image and storage medium | |
CN111523494A (en) | Human body image detection method | |
US20240029358A1 (en) | System and method for reconstructing 3d garment model from an image | |
CN114820907A (en) | Human face image cartoon processing method and device, computer equipment and storage medium | |
KR100815209B1 (en) | Apparatus and method for feature extraction of 2D image for generating 3D image and apparatus and method for generating 3D image using same | |
KR20010084996A (en) | Method for generating 3 dimension avatar using one face image and vending machine with the same | |
Cushen et al. | Markerless real-time garment retexturing from monocular 3d reconstruction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||