CN108573526A - Face snap device and image generating method - Google Patents
- Publication number
- CN108573526A (application CN201810275713.7A)
- Authority
- CN
- China
- Prior art keywords
- images
- models
- video camera
- snap device
- characteristic point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/08—Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/32—Indexing scheme for image data processing or generation, in general involving image mosaicing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a face snap device and an image generating method. The face snap device includes an acquisition module, a stitching module, at least two 3D cameras and a support frame. The at least two 3D cameras include a first camera and a second camera, and the angle between the shooting direction of the first camera and the shooting direction of the second camera is greater than zero. The acquisition module is configured to obtain the 3D images shot by the 3D cameras at the same moment; the stitching module is configured to stitch the 3D images shot at the same moment into a 3D model through a stitching algorithm. The face snap device and image generating method of the present invention can quickly generate a 3D model of a user, capture high-definition 3D faces in real time, and accelerate the generation of 3D models.
Description
Technical field
The present invention relates to a face snap device and an image generating method.
Background art
A 3D camera is a camera built with 3D lenses. It usually has two or more lenses whose spacing is close to the spacing of the human eyes, so it can capture different images of the same scene similar to what the two eyes see. Holographic 3D cameras have five or more lenses arranged in a disc; through dot-grating imaging or rhombic-grating holographic imaging, the same image can be viewed from all directions, as if one were present in person.
Since the first 3D camera, the 3D revolution has unfolded around Hollywood blockbusters and major sporting events. With the appearance of 3D cameras, this technology has moved a step closer to ordinary consumers. After the release of such cameras, people can use 3D lenses to capture every unforgettable moment in life, such as a child's first steps or a university graduation.
A 3D camera usually has two or more lenses. Like the human brain, the camera itself can merge the two lens images into a single 3D image. These images can be played on a 3D television and viewed with so-called active shutter glasses, or viewed directly on a naked-eye 3D display. 3D shutter glasses switch the left and right lenses at 60 times per second, so that each eye sees a slightly different picture of the same scene and the brain perceives the pair as a single 3D picture.
Existing 3D cameras are single-function and expensive, and their image generation speed is slow.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the defects of the prior art that 3D image-capturing terminals are single-function and expensive and that their image generation is slow, and to provide a face snap device and an image generating method that can quickly generate a 3D model of a user, capture high-definition 3D faces in real time, and accelerate the generation of 3D models.
The present invention solves the above technical problem through the following technical solutions:
A face snap device, characterized in that the face snap device includes an acquisition module, a stitching module, at least two 3D cameras and a support frame; the at least two 3D cameras include a first camera and a second camera, and the angle between the shooting direction of the first camera and the shooting direction of the second camera is greater than zero;
the acquisition module is configured to obtain the 3D images shot by the 3D cameras at the same moment;
the stitching module is configured to stitch the 3D images shot at the same moment into a 3D model through a stitching algorithm.
Preferably, there are three 3D cameras, the three 3D cameras further including a third camera; the support frame is a door frame; the first camera and the second camera are respectively arranged on the side frames of the door frame, the third camera is arranged on the top frame of the door frame, and the shooting directions of the three 3D cameras are aimed at a target area.
Preferably, the face snap device includes a processing chip; the stitching algorithm is stored in the processing chip, and the processing chip stores the latest stitching algorithm after the stitching algorithm is updated.
Preferably, the stitching algorithm is that the stitching module identifies the feature points in two 3D images shot at the same moment and stitches the two 3D images together by making their feature points coincide; wherein the stitching module is trained on a database of preset sample 3D models to identify the feature points used when stitching 3D images.
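As an illustration of "stitching by making feature points coincide", the following is a minimal Python/OpenCV sketch, not the patented algorithm itself. It assumes each synchronized 3D capture provides a color image plus an aligned depth map, and that the pinhole intrinsics FX, FY, CX, CY are known; all names and values are placeholders.

```python
import cv2
import numpy as np

# Assumed pinhole intrinsics of the 3D cameras (illustrative values only).
FX, FY, CX, CY = 600.0, 600.0, 320.0, 240.0

def lift_to_3d(pts2d, depth):
    """Back-project 2D feature point coordinates to 3D using the depth map."""
    pts3d = []
    for x, y in pts2d:
        z = float(depth[int(y), int(x)])
        pts3d.append([(x - CX) * z / FX, (y - CY) * z / FY, z])
    return np.float32(pts3d)

def stitch_pair(color_a, depth_a, color_b, depth_b):
    """Register two captures shot at the same moment by making matched feature points coincide."""
    gray_a = cv2.cvtColor(color_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(color_b, cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create(nfeatures=2000)               # feature point detector
    kp_a, des_a = orb.detectAndCompute(gray_a, None)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)

    # Match feature descriptors and keep the most reliable correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:200]

    pts_a = lift_to_3d([kp_a[m.queryIdx].pt for m in matches], depth_a)
    pts_b = lift_to_3d([kp_b[m.trainIdx].pt for m in matches], depth_b)

    # Estimate the transform that overlays camera B's feature points onto camera A's.
    _, transform, inliers = cv2.estimateAffine3D(pts_b, pts_a, ransacThreshold=0.01)
    return transform, inliers
```

In practice the estimated transform would then be applied to one capture's point cloud so that the matched feature points of the two captures coincide in a single coordinate frame, from which the 3D model is assembled.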
Preferably, the face snap device includes an identification module and a training module;
for a given 3D model, the identification module is configured to identify feature points in the 3D model and in the target 3D images from which that 3D model was generated, and to identify, in the target 3D images, the feature points that correspond to feature points in the 3D model as training feature points;
the training module is configured to use the training feature points in the 3D images as training data to obtain target feature points;
the stitching module is configured to stitch the 3D images shot at the same moment into a 3D model by making the same target feature points coincide.
The present invention also provides an image generating method, characterized in that the image generating method is used with a face snap device; the face snap device includes at least two 3D cameras and a support frame; the at least two 3D cameras include a first camera and a second camera, and the angle between the shooting direction of the first camera and the shooting direction of the second camera is greater than zero. The image generating method includes:
obtaining the 3D images shot by the 3D cameras at the same moment;
stitching the 3D images shot at the same moment into a 3D model through a stitching algorithm.
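These two steps map naturally onto an acquisition module and a stitching module. The sketch below shows one possible organization, reusing the hypothetical `stitch_pair` helper from the earlier example; the camera interface and class names are assumptions rather than part of the disclosure.

```python
import time

class AcquisitionModule:
    """Collects the 3D images shot by all cameras at the same moment."""
    def __init__(self, cameras):
        self.cameras = cameras      # assumed objects with capture() -> (color, depth)

    def grab_synchronized(self):
        timestamp = time.time()
        frames = [cam.capture() for cam in self.cameras]   # triggered together
        return timestamp, frames

class StitchingModule:
    """Stitches synchronized 3D images into a single 3D model."""
    def stitch(self, frames):
        # Use the first camera as the reference and register every other capture
        # onto it via matched feature points (stitch_pair, sketched earlier).
        ref_color, ref_depth = frames[0]
        transforms = []
        for color, depth in frames[1:]:
            transform, _ = stitch_pair(ref_color, ref_depth, color, depth)
            transforms.append(transform)
        # A full implementation would now fuse the aligned point clouds into one
        # 3D model; this sketch only returns the estimated camera-to-camera transforms.
        return transforms
```

A caller would obtain `timestamp, frames = AcquisitionModule(cameras).grab_synchronized()` and then pass `frames` to `StitchingModule().stitch(frames)`.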
Preferably, there are three 3D cameras, the three 3D cameras further including a third camera; the support frame is a door frame; the first camera and the second camera are respectively arranged on the side frames of the door frame, the third camera is arranged on the top frame of the door frame, and the shooting directions of the three 3D cameras are aimed at a target area.
Preferably, the face snap device includes a processing chip; the stitching algorithm is stored in the processing chip, and the processing chip stores the latest stitching algorithm after the stitching algorithm is updated.
Preferably, the stitching algorithm includes: identifying the feature points in two 3D images shot at the same moment, and stitching the two 3D images shot at the same moment together by making their feature points coincide; wherein the face snap device is trained on a database of preset sample 3D models to identify the feature points used when stitching 3D images.
Preferably, for a given 3D model, the image generating method includes:
identifying feature points in the 3D model and in the target 3D images from which the 3D model was generated;
identifying, in the target 3D images, the feature points that correspond to feature points in the 3D model as training feature points;
using the training feature points in the 3D images as training data to obtain target feature points;
stitching the 3D images shot at the same moment into a 3D model by making the same target feature points coincide.
On the basis of common knowledge in the art, the above preferred conditions can be combined arbitrarily to obtain the preferred embodiments of the present invention.
The positive effect of the present invention is that the face snap device and image generating method of the present invention can quickly generate a 3D model of a user, capture high-definition 3D faces in real time, and accelerate the generation of 3D models.
Description of the drawings
Fig. 1 is a structural schematic diagram of the face snap device of Embodiment 1 of the present invention.
Fig. 2 is another structural schematic diagram of the face snap device of Embodiment 1 of the present invention.
Fig. 3 is a flow chart of the image generating method of Embodiment 1 of the present invention.
Detailed description of the embodiments
The present invention is further illustrated below by way of embodiments, but the present invention is not thereby limited to the scope of the described embodiments.
Embodiment 1
Referring to Fig. 1 and Fig. 2, the present embodiment provides a face snap device. The face snap device includes an acquisition module, a stitching module, three 3D cameras 11, a support frame 12, an identification module and a training module.
The three 3D cameras are respectively a first camera, a second camera and a third camera.
The support frame is a door frame. The first camera and the second camera are respectively arranged on the side frames of the door frame, the third camera is arranged on the top frame of the door frame, and the shooting directions of the three 3D cameras are aimed at a target area.
The angle between the shooting direction of the first camera and the shooting direction of the second camera is 30 degrees.
The shooting directions 13 of the three 3D cameras are aimed at a target area; in the present embodiment, the three 3D cameras are aimed at a single target point.
When a person passes in front of the door frame, the person's face is shot by the three 3D cameras simultaneously, and the captured photos are associated by time.
The acquisition module is configured to obtain the 3D images shot by the 3D cameras at the same moment.
The stitching module is configured to stitch the 3D images shot at the same moment into a 3D model through a stitching algorithm.
The stitching algorithm is that the stitching module identifies the feature points in two 3D images shot at the same moment and stitches the two 3D images together by making their feature points coincide; wherein the identification module is trained on a database of preset sample 3D models to identify the feature points used when stitching 3D images.
The face snap device includes a processing chip; the stitching algorithm is stored in the processing chip, and the processing chip stores the latest stitching algorithm after the stitching algorithm is updated.
By means of the door frame, the present embodiment can obtain a 3D model quickly, so that a 3D model is obtained as soon as the user walks through the door frame.
In addition, training on the database of sample 3D models allows the face snap device to obtain an algorithm for stitching 3D images. By analyzing the sample models and the 3D images that make up each sample model, the device can determine which feature points in those 3D images are effective; stitching the 3D images using the effective feature points while removing the invalid feature points produces a 3D model.
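One way to read this training step is as a filter: feature points that match consistently across the sample-model database are treated as effective, and unstable ones are discarded before stitching. The sketch below is only an illustrative interpretation under that assumption; the database format, thresholds and function name are invented for the example.

```python
import cv2
import numpy as np

def select_effective_feature_points(candidate_descriptors, sample_database,
                                    min_hit_rate=0.6, match_threshold=40):
    """Keep only the candidate feature points that also appear in most sample 3D models.

    candidate_descriptors : N x 32 uint8 array of ORB descriptors from new 3D images.
    sample_database       : list of M x 32 uint8 descriptor arrays, one per sample model.
    Returns the indices of the "effective" feature points; the rest are treated as
    invalid (background, low quality) and removed before stitching.
    """
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    hits = np.zeros(len(candidate_descriptors), dtype=int)

    for sample_descriptors in sample_database:
        for m in matcher.match(candidate_descriptors, sample_descriptors):
            if m.distance < match_threshold:
                hits[m.queryIdx] += 1     # this candidate matched this sample model

    hit_rate = hits / max(len(sample_database), 1)
    return np.where(hit_rate >= min_hit_rate)[0]
```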
For a given 3D model, the identification module is configured to identify feature points in the 3D model and in the target 3D images from which that 3D model was generated, and to identify, in the target 3D images, the feature points that correspond to feature points in the 3D model as training feature points.
Some of the 3D images contain a face image, while others contain only background or are of low quality; in this embodiment, the stitched 3D model can be used to distinguish the useful feature points from the invalid ones.
The training module is configured to use the training feature points in the 3D images as training data to obtain target feature points.
The stitching module is configured to stitch the 3D images shot at the same moment into a 3D model by making the same target feature points coincide.
Through the above training, the number of effective feature points is reduced, which makes face recognition faster; the training also optimizes the stitching algorithm, which makes stitching faster.
Referring to Fig. 3, using the above face snap device, the present embodiment also provides an image generating method, including:
Step 100: obtain the 3D images shot by the 3D cameras at the same moment.
Step 101: identify the feature points in two 3D images shot at the same moment.
Step 102: stitch the two 3D images shot at the same moment together by making their feature points coincide, thereby generating a 3D model.
The face snap device is trained on a database of preset sample 3D models to identify the feature points used when stitching 3D images.
Training on the database of sample 3D models allows the face snap device to obtain an algorithm for stitching 3D images. By analyzing the sample models and the 3D images that make up each sample model, the device can determine which feature points in those 3D images are effective; stitching the 3D images using the effective feature points while removing the invalid feature points produces a 3D model.
Step 103: for a given 3D model, identify feature points in the 3D model and in the target 3D images from which that 3D model was generated.
Step 104: identify, in the target 3D images, the feature points that correspond to feature points in the 3D model as training feature points.
In step 104, the 3D model refers to the 3D model of step 103.
Step 105: use the training feature points in the 3D images as training data to obtain target feature points.
Training with the training feature points filters out which feature points play a key role, which simplifies the set of feature points used to generate the 3D model.
Step 106: stitch the 3D images shot at the same moment into a 3D model by making the same target feature points coincide.
The 3D model in step 106 is a new 3D model generated using the target feature points.
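Steps 103 to 106 can be summarized as: keep the image feature points that are traceable back to an existing model (the training feature points), reduce them to the most useful ones (the target feature points), and stitch again with that smaller set. The sketch below is a hedged, illustrative reduction of steps 103 to 105; the descriptor format, the keep parameter and the function name are assumptions.

```python
import cv2

def refine_feature_points(model_descriptors, image_descriptors, keep=100):
    """Steps 103-105: derive target feature points from an existing 3D model.

    model_descriptors : ORB descriptors (uint8 array) of feature points found in a
                        previously generated 3D model.
    image_descriptors : ORB descriptors of feature points found in the target 3D
                        images that were used to generate that model.
    Returns the indices (into image_descriptors) of the target feature points.
    """
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    # Step 104: image feature points that correspond to feature points in the
    # model are the training feature points.
    training_matches = matcher.match(image_descriptors, model_descriptors)

    # Step 105: "training" is reduced here to keeping the most discriminative
    # correspondences; these survivors play the role of the target feature points.
    strongest = sorted(training_matches, key=lambda m: m.distance)[:keep]
    return [m.queryIdx for m in strongest]

# Step 106 would then stitch the synchronized 3D images again while matching only
# the target feature points, which reduces the work needed to generate the 3D model.
```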
Although specific embodiments of the present invention have been described above, those skilled in the art will appreciate that these embodiments are merely illustrative, and that the protection scope of the present invention is defined by the appended claims. Those skilled in the art may make various changes and modifications to these embodiments without departing from the principle and substance of the present invention, and all such changes and modifications fall within the protection scope of the present invention.
Claims (10)
1. A face snap device, characterized in that the face snap device includes an acquisition module, a stitching module, at least two 3D cameras and a support frame; the at least two 3D cameras include a first camera and a second camera, and the angle between the shooting direction of the first camera and the shooting direction of the second camera is greater than zero;
the acquisition module is configured to obtain the 3D images shot by the 3D cameras at the same moment;
the stitching module is configured to stitch the 3D images shot at the same moment into a 3D model through a stitching algorithm.
2. The face snap device as claimed in claim 1, characterized in that there are three 3D cameras, the three 3D cameras further including a third camera; the support frame is a door frame; the first camera and the second camera are respectively arranged on the side frames of the door frame, the third camera is arranged on the top frame of the door frame, and the shooting directions of the three 3D cameras are aimed at a target area.
3. The face snap device as claimed in claim 1, characterized in that the face snap device includes a processing chip; the stitching algorithm is stored in the processing chip, and the processing chip stores the latest stitching algorithm after the stitching algorithm is updated.
4. The face snap device as claimed in claim 3, characterized in that the stitching algorithm is that the stitching module identifies the feature points in two 3D images shot at the same moment and stitches the two 3D images together by making their feature points coincide; wherein the stitching module is trained on a database of preset sample 3D models to identify the feature points used when stitching 3D images.
5. The face snap device as claimed in claim 3, characterized in that the face snap device includes an identification module and a training module;
for a given 3D model, the identification module is configured to identify feature points in the 3D model and in the target 3D images from which that 3D model was generated, and to identify, in the target 3D images, the feature points that correspond to feature points in the 3D model as training feature points;
the training module is configured to use the training feature points in the 3D images as training data to obtain target feature points;
the stitching module is configured to stitch the 3D images shot at the same moment into a 3D model by making the same target feature points coincide.
6. An image generating method, characterized in that the image generating method is used with a face snap device; the face snap device includes at least two 3D cameras and a support frame; the at least two 3D cameras include a first camera and a second camera, and the angle between the shooting direction of the first camera and the shooting direction of the second camera is greater than zero; the image generating method includes:
obtaining the 3D images shot by the 3D cameras at the same moment;
stitching the 3D images shot at the same moment into a 3D model through a stitching algorithm.
7. The image generating method as claimed in claim 6, characterized in that there are three 3D cameras, the three 3D cameras further including a third camera; the support frame is a door frame; the first camera and the second camera are respectively arranged on the side frames of the door frame, the third camera is arranged on the top frame of the door frame, and the shooting directions of the three 3D cameras are aimed at a target area.
8. The image generating method as claimed in claim 6, characterized in that the face snap device includes a processing chip; the stitching algorithm is stored in the processing chip, and the processing chip stores the latest stitching algorithm after the stitching algorithm is updated.
9. The image generating method as claimed in claim 8, characterized in that the stitching algorithm includes: identifying the feature points in two 3D images shot at the same moment, and stitching the two 3D images shot at the same moment together by making their feature points coincide; wherein the face snap device is trained on a database of preset sample 3D models to identify the feature points used when stitching 3D images.
10. The image generating method as claimed in claim 8, characterized in that, for a given 3D model, the image generating method includes:
identifying feature points in the 3D model and in the target 3D images from which the 3D model was generated;
identifying, in the target 3D images, the feature points that correspond to feature points in the 3D model as training feature points;
using the training feature points in the 3D images as training data to obtain target feature points;
stitching the 3D images shot at the same moment into a 3D model by making the same target feature points coincide.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810275713.7A CN108573526A (en) | 2018-03-30 | 2018-03-30 | Face snap device and image generating method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810275713.7A CN108573526A (en) | 2018-03-30 | 2018-03-30 | Face snap device and image generating method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108573526A true CN108573526A (en) | 2018-09-25 |
Family
ID=63574042
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810275713.7A Pending CN108573526A (en) | 2018-03-30 | 2018-03-30 | Face snap device and image generating method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108573526A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109472860A (en) * | 2018-11-13 | 2019-03-15 | 盎锐(上海)信息科技有限公司 | Depth map balance based on artificial intelligence selects excellent algorithm and device |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105933695A (en) * | 2016-06-29 | 2016-09-07 | 深圳市优象计算技术有限公司 | Panoramic camera imaging device and method based on high-speed interconnection of multiple GPUs |
CN105938627A (en) * | 2016-04-12 | 2016-09-14 | 湖南拓视觉信息技术有限公司 | Processing method and system for virtual plastic processing on face |
CN106778628A (en) * | 2016-12-21 | 2017-05-31 | 张维忠 | A kind of facial expression method for catching based on TOF depth cameras |
CN107263449A (en) * | 2017-07-05 | 2017-10-20 | 中国科学院自动化研究所 | Robot Remote Teaching System Based on Virtual Reality |
CN107292921A (en) * | 2017-06-19 | 2017-10-24 | 电子科技大学 | A kind of quick three-dimensional reconstructing method based on kinect cameras |
US20180007347A1 (en) * | 2014-12-22 | 2018-01-04 | Google Inc. | Integrated Camera System Having Two Dimensional Image Capture and Three Dimensional Time-of-Flight Capture With A Partitioned Field of View |
- 2018-03-30 CN CN201810275713.7A patent/CN108573526A/en active Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180007347A1 (en) * | 2014-12-22 | 2018-01-04 | Google Inc. | Integrated Camera System Having Two Dimensional Image Capture and Three Dimensional Time-of-Flight Capture With A Partitioned Field of View |
CN105938627A (en) * | 2016-04-12 | 2016-09-14 | 湖南拓视觉信息技术有限公司 | Processing method and system for virtual plastic processing on face |
CN105933695A (en) * | 2016-06-29 | 2016-09-07 | 深圳市优象计算技术有限公司 | Panoramic camera imaging device and method based on high-speed interconnection of multiple GPUs |
CN106778628A (en) * | 2016-12-21 | 2017-05-31 | 张维忠 | A kind of facial expression method for catching based on TOF depth cameras |
CN107292921A (en) * | 2017-06-19 | 2017-10-24 | 电子科技大学 | A kind of quick three-dimensional reconstructing method based on kinect cameras |
CN107263449A (en) * | 2017-07-05 | 2017-10-20 | 中国科学院自动化研究所 | Robot Remote Teaching System Based on Virtual Reality |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109472860A (en) * | 2018-11-13 | 2019-03-15 | 盎锐(上海)信息科技有限公司 | Depth map balance based on artificial intelligence selects excellent algorithm and device |
CN109472860B (en) * | 2018-11-13 | 2023-02-10 | 上海盎维信息技术有限公司 | Depth map balance optimization method and device based on artificial intelligence |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9094675B2 (en) | Processing image data from multiple cameras for motion pictures | |
JP2022166078A (en) | Composing and realizing viewer's interaction with digital media | |
CN105939481A (en) | Interactive three-dimensional virtual reality video program recorded broadcast and live broadcast method | |
JP2005529559A (en) | Method for generating a stereoscopic image from a monoscope image | |
CN108600729A (en) | Dynamic 3D models generating means and image generating method | |
JP7479729B2 (en) | Three-dimensional representation method and device | |
CN102929091A (en) | Method for manufacturing digital spherical curtain three-dimensional film | |
CN107862718A (en) | 4D holographic video method for catching | |
Devernay et al. | Stereoscopic cinema | |
CN108391116A (en) | Total body scan unit based on 3D imaging technique and scan method | |
CN108111835A (en) | Filming apparatus, system and method for 3D video imagings | |
CN108389253A (en) | Mobile terminal with modeling function and model generating method | |
CN108573526A (en) | Face snap device and image generating method | |
CN111161399B (en) | Data processing method and assembly for generating three-dimensional model based on two-dimensional image | |
CN108513122B (en) | Model adjusting method and model generating device based on 3D imaging technology | |
CN101292516A (en) | System and method for capturing visual data | |
CN108737808A (en) | 3D models generating means and method | |
WO2016202073A1 (en) | Image processing method and apparatus | |
CN108550183A (en) | 3D model production methods and model generating means | |
CN105204284A (en) | Three-dimensional stereo playback system based on panoramic circular shooting technology | |
CN108881842A (en) | Monitoring system and information processing method based on 3D video camera | |
CN108093242A (en) | Method for imaging and camera | |
CN108564651A (en) | Body scan data device and data creation method with data systematic function | |
CN109657702B (en) | 3D depth semantic perception method and device | |
CN108810517A (en) | Image processor with monitoring function and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20180925 |