CN119229011A - Method and system for generating digital human skin model - Google Patents
- Publication number
- CN119229011A (application CN202411325611.3A)
- Authority
- CN
- China
- Prior art keywords
- model
- point cloud
- dimensional
- human body
- cloud data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/30—Polynomial surface description
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/08—Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/41—Medical
Abstract
The invention relates to the technical field of human body model reconstruction, and in particular to a method and a system for generating a digitized human skin model. The method comprises: step 1, obtaining three-dimensional point cloud data of a scanned object and establishing three-dimensional human body scan model point cloud data; and step 2, importing the point cloud data obtained in step 1 into a model data processing module, which provides an operating environment for modifying the data. With high-precision three-dimensional scanning equipment and professional three-dimensional software, the invention generates a more complete and lifelike digital human skin model. It repairs the defects caused by incomplete scanning in prior methods, with a marked improvement in capturing the detailed structure of key parts such as the eyeballs and the oral cavity, and the face-count optimization step improves both the performance and the fidelity of the model.
Description
Technical Field
The invention relates to the technical field of human body model reconstruction, in particular to a generation method and a generation system of a digital human body skin model.
Background
Several methods for generating a digital human skin model already exist, including skin surface reconstruction, skin tomography and three-dimensional reconstruction. Skin surface reconstruction acquires the morphology and texture of the skin surface by structured-light or laser scanning and provides the data for skin reconstruction. Because the data are obtained directly from a real human body, the true body shape and its details can be restored fairly accurately, but some fine parts, such as the eyeballs and the oral cavity, may not be captured owing to the limitations of the scanning device. Skin tomography acquires tomographic images of the different skin layers and is used to analyse pathological features and changes within the skin. Three-dimensional reconstruction uses the acquired skin morphology and texture information to build a three-dimensional skin model that simulates the behaviour and deformation of the skin under external forces and allows finer, personalised adjustment of the model. These algorithms, however, suffer from a large data-processing load and slow rendering, rely on manual intervention, and are not fully automated.
The object of the present application is therefore to further improve and refine a highly automated method so as to increase the completeness and fidelity of the generated digitized human skin model, making the model conform better to the morphological characteristics of an actual human body and therefore more accurate and reliable in applications such as medicine and simulation training.
Disclosure of Invention
In view of the technical problems in reconstructing a digitized human skin model in the prior art, a first aspect of the invention provides a method for generating a digitized human skin model, comprising the following steps:
Step 1, three-dimensional point cloud data of a scanned object are obtained, and three-dimensional human body scanning model point cloud data are established;
Step 2, importing the three-dimensional human body scanning model point cloud data obtained in the step 1 into a model data processing module, and providing an operating environment for modifying the three-dimensional human body scanning model point cloud data;
step 3, optimizing the imported three-dimensional human body scanning model point cloud data;
Step 4, carrying out face reduction treatment on the optimized three-dimensional human body scanning model;
step 5, adding skin texture to the three-dimensional human body scanning model subjected to the face reduction treatment to obtain a human body skin model which can be exported to be in a preset format;
In step 1, the scanned object is scanned twice, three-dimensional point cloud data of the surface of the scanned object is obtained by first scanning, three-dimensional point cloud data of a detail part of the scanned object is obtained by second scanning, and the point cloud data of the detail part is added into the three-dimensional human body scanning model point cloud data before the three-dimensional human body scanning model point cloud data is optimized.
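Conceptually, the two-pass scan in step 1 reduces to concatenating the detail point cloud (oral cavity, eyeballs) into the body point cloud before optimization begins. A minimal sketch under the assumption that both scans are already registered in one coordinate frame and stored as N×3 arrays; the voxel size used to drop near-duplicate points is illustrative, not taken from the patent:

```python
import numpy as np

def merge_scans(body_points, detail_points, voxel=1e-3):
    """Merge the full-body scan with a detail scan (e.g. oral cavity,
    eyeballs), dropping near-duplicate points on a voxel grid.
    `voxel` (grid cell size) is an illustrative parameter."""
    merged = np.vstack([body_points, detail_points])
    # Quantise coordinates to a voxel grid and keep one point per cell.
    keys = np.round(merged / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return merged[np.sort(idx)]
```

Registration itself (aligning the detail scan to the body scan) is a separate problem not addressed by this sketch.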
Preferably, the high-precision scanning device comprises 12 depth cameras: the first camera is positioned directly in front of the target object at 0°, the second at 30° to the front left, the third at 90° on the left side, the fourth at 150° to the rear left, the fifth directly behind at 180°, the sixth at 210° to the rear right, the seventh at 270° on the right side, the eighth at 330° to the front right, the ninth directly above, the tenth directly below, the eleventh to the upper left, and the twelfth to the lower right of the target object.
Preferably, in step 2, the model data processing module includes reverse engineering software, which is used to automatically generate a high-precision digital model from the three-dimensional scan data.
Preferably, in step 1, the scanned object detail portion includes an oral cavity and an eyeball portion.
Preferably, in step 3, optimizing the three-dimensional human body scan model point cloud data comprises noise removal, hole filling and point cloud wrapping;
noise removal deletes data points lying outside the contour of the human body model by manual deletion;
hole filling checks whether the body-surface point cloud data are continuous and fills in any hole regions;
point cloud wrapping converts the three-dimensional human body scan model point cloud data into a three-dimensional curved-surface model.
Preferably, in step 4, polygon reduction is achieved by adjusting the reduction strength percentage.
Preferably, in step 5, skin texture is added to the surface of the three-dimensional body scan model by means of texture mapping or geometric texture addition.
The second aspect of the present invention proposes a technical solution, a system for generating a digitized human skin model, comprising:
The high-precision scanning equipment is used for acquiring three-dimensional point cloud data of a scanned object and establishing three-dimensional human body scanning model point cloud data;
the model data processing module is used for storing and modifying the three-dimensional human body scanning model data imported by the high-precision scanning equipment;
The detail scanning device is used for acquiring point cloud data of the oral cavity and the eyeball;
The model data processing module comprises a data optimization unit, a detail addition unit, a face-count optimization unit, a skin texture addition unit and a model export unit. The detail addition unit adds the point cloud data of the oral cavity and the eyeballs acquired by the detail scanning device into the three-dimensional human body scan model point cloud data, enriching its detail; the data optimization unit removes noise from and wraps the point cloud data, converting the model from three-dimensional point cloud data into a three-dimensional curved-surface form; the face-count optimization unit reduces the face count of the curved-surface model; the skin texture addition unit forms skin textures on the surface of the curved-surface model; and the model export unit exports the optimized model in a preset format.
Preferably, the detail scanning device comprises an intraoral scanner and an RGBD depth camera.
Compared with the prior art, the invention has the following advantages:
With high-precision three-dimensional scanning equipment and professional three-dimensional software, the invention generates a more complete and realistic digital human skin model. The defects caused by incomplete scanning in prior methods are repaired, with a marked improvement in capturing the detailed structure of key parts such as the eyeballs and the oral cavity.
Through the face-count optimization step, the invention improves both the performance and the fidelity of the model: redundant faces are reduced, adjacent faces are merged, and holes are repaired, so that the generated digital human skin model is both more efficient and more lifelike.
Key fine parts such as the eyeballs and the oral cavity receive dedicated treatment, with targeted reinforcement and detail addition, so that their structures and features are fully represented; this improves the physiological accuracy of the model.
Through the skin texture addition step, the invention offers more flexible control over the appearance of the model, including selecting and adding textures and adjusting their position and size, so that the generated human skin model is more visually realistic.
The resulting model has high fidelity in medical simulation, surgical planning and rehabilitation training, and has broad application potential in fields such as medicine and virtual reality.
The invention also reduces, to a certain extent, the dependence on manual intervention, improving the efficiency and consistency of digital human skin model generation.
Drawings
The drawings are not intended to be drawn to scale. In the drawings, each identical or nearly identical component that is illustrated in various figures may be represented by a like numeral. For purposes of clarity, not every component may be labeled in every drawing. Embodiments of various aspects of the invention will now be described, by way of example, with reference to the accompanying drawings, in which:
FIG. 1 is a schematic diagram of the positions of 12 cameras according to an embodiment of the present invention;
FIG. 2a is a three-dimensional manikin prior to filling holes in accordance with an embodiment of the present invention;
FIG. 2b is a three-dimensional manikin after filling holes according to an embodiment of the invention;
FIG. 3 is a schematic illustration of the introduction of a three-dimensional manikin to a face-reduction tool, in accordance with an embodiment of the present invention;
FIG. 4 is a schematic illustration of a three-dimensional manikin shown in an embodiment of the invention after a partial surface subtraction process;
FIG. 5a is a schematic representation of the face prior to skin texture addition in accordance with an embodiment of the present invention;
FIG. 5b is a schematic representation of the face after skin texture addition, according to an embodiment of the present invention.
Detailed Description
For a better understanding of the technical content of the present invention, specific examples are set forth below, along with the accompanying drawings.
The first aspect of the present invention proposes a technical solution, a method for generating a digitized human skin model, comprising the steps of:
and step 1, obtaining three-dimensional point cloud data of a scanned object, and establishing three-dimensional human body scanning model point cloud data.
In step 1, three-dimensional scan data of a human body is obtained using a high-precision scanning device.
Optionally, the scanning device is a Kinect camera. Its main principle is infrared depth measurement: an ordinary colour camera captures the colour information of the scanned object, and combining this with the infrared depth measurement yields the depth of each point in the image, from which the three-dimensional point cloud data of the scanned object are obtained.
Specifically, 12 Kinect cameras with fixed positions are adopted to shoot a target object at 12 angles, and position information of different angles is fused and spliced to obtain comprehensive three-dimensional human body scanning data.
Optionally, as shown in fig. 1, the first camera is located directly in front of the target object at 0°, the second at 30° to the front left, the third at 90° on the left side, the fourth at 150° to the rear left, the fifth directly behind at 180°, the sixth at 210° to the rear right, the seventh at 270° on the right side, the eighth at 330° to the front right, the ninth directly above, the tenth directly below, the eleventh to the upper left, and the twelfth to the lower right of the target object.
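The eight ring cameras above sit on a horizontal circle around the subject, so their coordinates follow directly from the listed angles. A sketch assuming an illustrative ring radius and camera height, neither of which is specified in the patent:

```python
import numpy as np

def ring_camera_positions(angles_deg, radius=2.0, height=1.2):
    """Positions of the horizontal-ring cameras (0°, 30°, 90°, 150°,
    180°, 210°, 270°, 330°), measuring the angle from the subject's
    front, counter-clockwise seen from above. radius (m) and height (m)
    are illustrative values, not taken from the patent."""
    a = np.radians(np.asarray(angles_deg, dtype=float))
    # x axis points to the subject's front, y axis to the subject's left.
    return np.stack([radius * np.cos(a),
                     radius * np.sin(a),
                     np.full_like(a, height)], axis=1)
```

The four remaining cameras (above, below, upper left, lower right) fall outside this ring and would be placed separately.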
The model data obtained by this scan will contain the basic shape and structure of the various body parts, but may not include complete details of the eyeballs, the oral cavity and the like.
Further partial scans of these details are therefore required.
For example, the eyeballs and the oral cavity are scanned again. The oral cavity consists of soft tissue in a relatively enclosed environment, so accurate, fine-grained data are difficult to obtain with an ordinary RGBD camera or a lidar without auxiliary lighting; such data are instead obtained mainly with a professional intraoral scanner and a facial scanner. Point cloud data for the eyeballs and other details may be acquired with RGBD cameras or SAR systems.
And 2, importing the three-dimensional human body scanning model point cloud data obtained in the step 1 into a model data processing module, and providing an operating environment for modifying the three-dimensional human body scanning model point cloud data.
Specifically, the preliminary scanned model data may be imported into Geomagic Studio, which provides the operating environment for the subsequent repair, optimization and detail-addition steps.
It is easy to understand that when a multi-camera scanning device scans the human body, the resulting scan data contain point cloud noise caused by the scanning environment and cannot meet the practical requirements directly, so the acquired point cloud data need further optimization. Accordingly, in step 3, the imported three-dimensional human body scan model point cloud data are optimized;
Optionally, optimizing the three-dimensional human body scan model point cloud data comprises noise removal, hole filling and point cloud wrapping.
Specifically, for noise removal: the scanned human body point cloud data contain more than just the body surface. Objects attached to the body and clutter in the scanning environment degrade the scan, and such abnormal point cloud data adversely affect subsequent processing, so data points that clearly do not belong to the body surface must be deleted manually.
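The patent specifies manual deletion; as a hypothetical automated pre-filter, a statistical outlier test of the kind common in point cloud pipelines could flag candidate noise points before the manual pass. A brute-force O(n²) sketch suitable only for small clouds:

```python
import numpy as np

def flag_outliers(points, k=8, std_ratio=2.0):
    """Flag likely noise points: a point is flagged when its mean
    distance to its k nearest neighbours exceeds the cloud-wide mean
    by more than `std_ratio` standard deviations. This is a common
    heuristic, not the patent's own (manual) procedure; k and
    std_ratio are illustrative."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)   # column 0 is self-distance 0
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return mean_knn > thresh                # True = candidate noise point
```

For realistic scan sizes a k-d tree (e.g. `scipy.spatial.cKDTree`) would replace the dense distance matrix.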
For hole filling, the scanned three-dimensional human body data are imported into Geomagic Studio and the body-surface point cloud is inspected for continuity. The scanned body may exhibit hole artefacts, and the relevant Geomagic commands can be used to repair and optimize the mesh.
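Before holes can be filled, they must be located. One standard approach (an assumption here, not a quoted Geomagic algorithm) is to find boundary edges of the wrapped mesh, i.e. edges used by exactly one triangle; closed loops of such edges outline the holes:

```python
from collections import Counter

def boundary_edges(triangles):
    """Return the open edges of a triangle mesh. An edge shared by
    exactly one triangle lies on a hole or on the mesh border, which is
    what a hole-filling pass must close. `triangles` is a list of
    (i, j, k) vertex-index triples."""
    count = Counter()
    for i, j, k in triangles:
        for e in ((i, j), (j, k), (k, i)):
            count[tuple(sorted(e))] += 1   # orientation-independent key
    return [e for e, c in count.items() if c == 1]
```

A watertight mesh returns an empty list; any returned edges mark regions needing repair.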
For point cloud wrapping: the scanned human body point clouds are often so dense that their continuity is difficult to judge by eye. Point cloud wrapping converts the three-dimensional point cloud data into a three-dimensional curved-surface form.
With reference to fig. 2a and 2b, a more complete three-dimensional curved surface model can be obtained by optimizing the point cloud data.
In order to improve the performance and fidelity of the model, the number of faces of the model needs to be optimized.
And step 4, performing face reduction treatment on the optimized three-dimensional human body scanning model.
In an alternative embodiment, the three-dimensional body scan model may be processed with Cinema 4D, a three-dimensional design package developed by Maxon Computer of Germany and known for its fast computation and powerful third-party plug-ins. The software has a built-in polygon-reduction tool, and polygon reduction is achieved by adjusting the reduction strength percentage.
As shown in figs. 3-4, the balance between the performance and the fidelity of the model can be tuned by adjusting the reduction strength percentage.
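The reduction strength percentage maps straightforwardly to a target triangle count. A sketch of that mapping; the exact slider semantics of Cinema 4D's polygon-reduction tool are assumed here, not quoted from the patent:

```python
def target_face_count(n_faces, reduction_percent):
    """Target triangle count for a given reduction strength:
    e.g. 60% reduction of a 1,000,000-face scan keeps 400,000 faces.
    A floor of 4 faces is kept, since a closed mesh needs at least a
    tetrahedron."""
    if not 0 <= reduction_percent <= 100:
        raise ValueError("reduction_percent must be in [0, 100]")
    return max(4, round(n_faces * (1 - reduction_percent / 100)))
```

The decimation algorithm itself (e.g. edge-collapse with a quadric error metric) then removes faces until this target is reached.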
Finally, in step 5, skin texture is added to the face-reduced three-dimensional human body scan model, yielding a human skin model that can be exported in a preset format.
In an alternative embodiment, skin texture is added to the surface of the three-dimensional body scan model by means of texture mapping or geometric texture addition.
As shown in FIGS. 5a-5b, the facial lines of FIG. 5b are more rounded, and in addition, the detailed structures of wrinkles, hair, eyebrows, etc. can be seen.
Texture mapping is an effective way to represent object surface detail and increase realism. Using image textures for surface detail is the traditional texture-mapping approach; compared with geometric textures it is more economical, costing less in both time and space, but image textures do not support important effects such as occlusion, shadowing and silhouettes, so some rendering accuracy is lost.
A geometric texture is self-similar like an image texture but more complex: rather than a discrete set of two-dimensional pixels, it is a continuous representation formed by meshes connected in an irregular topology, and it can represent surface detail well and improve rendering accuracy.
Geometric texture synthesis generally synthesizes directly on the surface of the target object, which removes much of the manual work of interactively adding geometric textures and can produce high-quality results; however, the synthesis is specific to one target, its result can only be used on that object, the synthesis is therefore slower, and substantial storage is needed for the per-object results.
Therefore, in a specific embodiment, skin is added to the model surface by texture mapping or by geometric texture addition depending on the part of the model, making the skin addition more flexible and controllable.
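The texture-mapping route assigns each vertex a (u, v) coordinate into a skin-texture image. A minimal nearest-neighbour lookup sketch of that idea (real renderers typically use bilinear or mipmapped filtering instead):

```python
import numpy as np

def sample_texture(texture, uv):
    """Nearest-neighbour texture lookup: each (u, v) pair in [0, 1]^2
    indexes into a texture image of shape (h, w[, channels]). A sketch
    of the texture-mapping route only; filtering and UV unwrapping of
    the mesh are out of scope."""
    h, w = texture.shape[:2]
    uv = np.clip(np.asarray(uv, dtype=float), 0.0, 1.0)
    cols = np.minimum((uv[:, 0] * w).astype(int), w - 1)
    rows = np.minimum((uv[:, 1] * h).astype(int), h - 1)
    return texture[rows, cols]
```

Per-vertex colours produced this way are interpolated across each triangle at render time.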
Digital human skin model generating system
The second aspect of the present invention proposes a technical solution, a system for generating a digitized human skin model, comprising:
The high-precision scanning equipment is used for acquiring three-dimensional point cloud data of a scanned object and establishing three-dimensional human body scanning model point cloud data;
The model data processing module is used for storing and modifying three-dimensional human body scanning model data imported by the high-precision scanning equipment;
The detail scanning device is used for acquiring point cloud data of the oral cavity and the eyeball;
The model data processing module comprises a data optimization unit, a detail addition unit, a face-count optimization unit, a skin texture addition unit and a model export unit. The detail addition unit adds the point cloud data of the oral cavity and the eyeballs acquired by the detail scanning device into the three-dimensional human body scan model point cloud data, enriching its detail; the data optimization unit removes noise from and wraps the point cloud data, converting the model from three-dimensional point cloud data into a three-dimensional curved-surface form; the face-count optimization unit reduces the face count of the curved-surface model; the skin texture addition unit forms skin textures on the surface of the curved-surface model; and the model export unit exports the optimized model in a preset format.
In an alternative embodiment, the detail scanning device comprises an intraoral scanner and an RGBD depth camera, used for fine parts such as the oral cavity and the eyeballs.
While the invention has been described with reference to preferred embodiments, it is not intended to be limiting. Those skilled in the art will appreciate that various modifications and adaptations can be made without departing from the spirit and scope of the present invention. Accordingly, the scope of the invention is defined by the appended claims.
Claims (9)
1. A method for generating a digitized human skin model, comprising the steps of:
Step 1, three-dimensional point cloud data of a scanned object are obtained, and three-dimensional human body scanning model point cloud data are established;
Step 2, importing the three-dimensional human body scanning model point cloud data obtained in the step 1 into a model data processing module, and providing an operating environment for modifying the three-dimensional human body scanning model point cloud data;
step 3, optimizing the imported three-dimensional human body scanning model point cloud data;
Step 4, carrying out face reduction treatment on the optimized three-dimensional human body scanning model;
step 5, adding skin texture to the three-dimensional human body scanning model subjected to the face reduction treatment to obtain a human body skin model which can be exported to be in a preset format;
In step 1, the scanned object is scanned twice, three-dimensional point cloud data of the surface of the scanned object is obtained by first scanning, three-dimensional point cloud data of a detail part of the scanned object is obtained by second scanning, and the point cloud data of the detail part is added into the three-dimensional human body scanning model point cloud data before the three-dimensional human body scanning model point cloud data is optimized.
2. The method of generating a digitized human skin model of claim 1, wherein the high-precision scanning device comprises 12 depth cameras: the first camera is positioned directly in front of the target object at 0°, the second at 30° to the front left, the third at 90° on the left side, the fourth at 150° to the rear left, the fifth directly behind at 180°, the sixth at 210° to the rear right, the seventh at 270° on the right side, the eighth at 330° to the front right, the ninth directly above, the tenth directly below, the eleventh to the upper left, and the twelfth to the lower right of the target object.
3. The method for generating a digitized human skin model of claim 1, wherein in step 2 the model data processing module comprises reverse-engineering software that automatically generates a high-precision digital model from the three-dimensional scan data.
4. The method for generating a digitized human skin model of claim 1, wherein in step 1 the detail regions of the scanned object comprise the oral cavity and the eyeballs.
5. The method for generating a digitized human skin model of claim 1, wherein in step 3, optimizing the three-dimensional human body scan model point cloud data comprises noise removal, hole filling, and point cloud wrapping;
the noise removal deletes data points lying outside the outline of the human body model by manual deletion;
the hole filling fills empty hole regions based on the continuity of the surrounding human body surface point cloud data;
the point cloud wrapping converts the three-dimensional human body scan model point cloud data into a three-dimensional curved surface model.
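The patent's noise-removal step is manual, but the same cleanup is often automated with a statistical outlier filter. The sketch below is such an automated stand-in, offered as an assumption rather than the patented method: points whose mean distance to their k nearest neighbours is unusually large are dropped.

```python
import math
import statistics

# Minimal statistical-outlier filter in the spirit of the claim-5
# "removing noise" step. The patent describes manual deletion; this
# automated variant is an illustrative assumption. O(n^2), so only
# suitable for small clouds.
def remove_noise(points, k=3, std_ratio=2.0):
    mean_knn = []
    for p in points:
        # Mean distance from p to its k nearest neighbours.
        ds = sorted(math.dist(p, q) for q in points if q is not p)
        mean_knn.append(sum(ds[:k]) / k)
    mu = statistics.mean(mean_knn)
    sigma = statistics.pstdev(mean_knn)
    # Keep points whose neighbourhood distance is within the threshold.
    return [p for p, d in zip(points, mean_knn) if d <= mu + std_ratio * sigma]
```

A point far from the body surface has a large mean neighbour distance and falls above the threshold, mimicking the manual deletion of points outside the model outline.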
6. The method for generating a digitized human skin model of claim 1, wherein in step 4, polygon face reduction is achieved by adjusting a face-reduction intensity percentage.
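Claim 6 drives the face count down via an intensity percentage. Production tools typically use edge-collapse decimation; as a compact illustrative stand-in (an assumption, not the patented algorithm), the sketch below uses vertex clustering, where the grid cell size grows with the intensity and faces that collapse become degenerate and are dropped:

```python
# Vertex-clustering decimation sketch for the claim-6 face-reduction
# step. `intensity_pct` scales the clustering cell size; `base_cell`
# is an assumed parameter, not from the patent.
def decimate(vertices, faces, intensity_pct, base_cell=0.05):
    cell = base_cell * (1.0 + intensity_pct / 10.0)
    cluster_of = {}   # grid cell -> representative vertex index
    remap = []        # old vertex index -> new vertex index
    new_vertices = []
    for v in vertices:
        key = tuple(int(c // cell) for c in v)
        if key not in cluster_of:
            cluster_of[key] = len(new_vertices)
            new_vertices.append(v)
        remap.append(cluster_of[key])
    new_faces = []
    for a, b, c in faces:
        ra, rb, rc = remap[a], remap[b], remap[c]
        if ra != rb and rb != rc and ra != rc:  # drop collapsed faces
            new_faces.append((ra, rb, rc))
    return new_vertices, new_faces
```

Raising the intensity enlarges the cells, merges more vertices, and so removes more faces, which matches the claimed percentage-driven control.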
7. The method for generating a digitized human skin model of claim 1, wherein in step 5, skin texture is added to the surface of the three-dimensional human body scan model by texture mapping or geometric texture addition.
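Texture mapping assigns each surface vertex a (u, v) coordinate into a 2-D skin image. The patent does not specify the projection, so the sketch below assumes a simple spherical unwrap about the model centroid, purely for illustration:

```python
import math

# Spherical UV unwrap sketch for the claim-7 texture-mapping step.
# The projection choice is an assumption; the patent names the
# technique but not the mapping.
def spherical_uv(vertex, centroid):
    """Map a 3-D vertex to (u, v) in [0, 1] by its direction from the centroid."""
    x, y, z = (a - b for a, b in zip(vertex, centroid))
    r = math.sqrt(x * x + y * y + z * z) or 1.0
    u = 0.5 + math.atan2(z, x) / (2 * math.pi)
    v = 0.5 - math.asin(max(-1.0, min(1.0, y / r))) / math.pi
    return u, v
```

Sampling the skin image at each vertex's (u, v) then paints the texture onto the curved surface model.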
8. A system for generating a digitized human skin model, comprising:
a high-precision scanning device for acquiring three-dimensional point cloud data of a scanned object and building the three-dimensional human body scan model point cloud data;
a model data processing module for storing and modifying the three-dimensional human body scan model data imported from the high-precision scanning device; and
a detail scanning device for acquiring point cloud data of the oral cavity and the eyeballs;
The model data processing module comprises a data optimization unit, a detail addition unit, a face count optimization unit, a skin texture addition unit, and a model export unit; the detail addition unit adds the oral cavity and eyeball point cloud data acquired by the detail scanning device into the three-dimensional human body scan model point cloud data to supplement its detail; the data optimization unit performs noise removal and point cloud wrapping on the three-dimensional human body scan model point cloud data so that the model data are converted from three-dimensional point cloud data into a three-dimensional curved surface form; the face count optimization unit reduces the face count of the model in the curved surface form; the skin texture addition unit forms skin texture on the surface of the model in the curved surface form; and the model export unit exports the optimized model in a preset format.
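The claim-8 module layout is a fixed chain of units applied in order. A minimal sketch, assuming each unit is a plain function (the class and function names are illustrative, not from the patent):

```python
# Hypothetical sketch of the claim-8 processing module: the five units
# run in the order the claim describes
# (details -> optimize -> decimate -> texture -> export).
class ModelDataProcessingModule:
    def __init__(self, add_details, optimize, decimate, add_texture, export):
        self.steps = [add_details, optimize, decimate, add_texture, export]

    def run(self, scan):
        """Feed the scan through every unit and return the exported model."""
        for step in self.steps:
            scan = step(scan)
        return scan
```

Each unit takes the output of the previous one, mirroring how the detail addition unit feeds the data optimization unit, and so on through export.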
9. The system for generating a digitized human skin model of claim 8 wherein the detail scanning device comprises an intraoral scanner and an RGBD depth camera.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411325611.3A | 2024-09-23 | 2024-09-23 | Method and system for generating digital human skin model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN119229011A (en) | 2024-12-31 |
Family ID: 93942637
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination