Disclosure of Invention
The main object of the present invention is to provide a face three-dimensional imaging method, device, apparatus and storage medium, so as to solve the technical problem in the prior art that complete face information of a body to be measured cannot be acquired automatically.
In order to achieve the above object, the present invention provides a method for three-dimensional imaging of a face, the method comprising the steps of:
acquiring original three-dimensional volume data of a tissue to be detected;
determining the face position and the amnion position of the tissue to be detected according to the original three-dimensional volume data and a preset threshold;
determining a target positioning point profile according to the face position and the amnion position;
carrying out transparency zero clearing on the original three-dimensional volume data according to the target positioning point profile to obtain the face three-dimensional data of the tissue to be detected;
and finishing the face three-dimensional imaging display of the tissue to be detected according to the face three-dimensional data.
Optionally, the determining the face position and the amnion position of the tissue to be detected according to the original three-dimensional volume data and a preset threshold includes:
acquiring a preset face threshold, a preset amnion threshold and a preset skeleton threshold;
determining a maximum value of target volume data according to the original three-dimensional volume data;
determining a target face threshold, a target amniotic membrane threshold and a target skeletal threshold according to the target volume data maximum, the preset face threshold, the preset amniotic membrane threshold and the preset skeletal threshold;
and determining the face position and the amniotic membrane position of the tissue to be detected according to the original three-dimensional volume data, the target face threshold, the target amniotic membrane threshold and the target bone threshold.
Optionally, the determining the face position and the amnion position of the tissue to be detected according to the original three-dimensional volume data, the target face threshold, the target amnion threshold and the target skeleton threshold includes:
determining the bone position of the tissue to be detected according to the original three-dimensional volume data and the target bone threshold value;
determining the face position of the tissue to be detected in the direction opposite to the ultrasonic sound beam according to the bone position and the target face threshold;
and determining the amniotic membrane position of the tissue to be detected in the direction opposite to the ultrasonic sound beam according to the face position and the target amniotic membrane threshold value.
Optionally, the determining a target positioning point profile according to the face position and the amnion position includes:
acquiring preset positioning parameters;
determining an initial positioning point and a target marking value corresponding to the initial positioning point according to the preset positioning parameters, the amnion position and the face position;
acquiring the target length and the target width of original three-dimensional volume data;
and determining a target positioning point profile according to the target length, the target width, the initial positioning point and a target mark value corresponding to the initial positioning point.
Optionally, the determining a target positioning point profile according to the target length, the target width, the initial positioning point, and the target mark value corresponding to the initial positioning point includes:
determining a first positioning point profile according to the target length, the target width and the initial positioning point;
determining an invalid positioning point in the first positioning point profile according to the initial positioning point;
and filling the invalid positioning points in the first positioning point profile to obtain a target positioning point profile.
Optionally, the filling invalid positioning points in the first positioning point profile to obtain a target positioning point profile includes:
acquiring a first beam line corresponding to the invalid positioning point;
determining a filling module of a preset size according to the first beam line;
carrying out profile filling on the first positioning point profile according to a preset interpolation template, the first beam line and the filling module to obtain a second positioning point profile;
and smoothing the second positioning point profile according to a preset smoothing module to obtain a target positioning point profile.
Optionally, the performing transparency zero clearing on the original three-dimensional volume data according to the target positioning point profile to obtain the three-dimensional face data of the tissue to be detected includes:
determining the depth of a target ultrasonic sound beam according to the profile of the target positioning point;
searching a zero clearing area corresponding to the ultrasonic sound beam depth smaller than the target ultrasonic sound beam depth in the original three-dimensional volume data;
and carrying out transparency zero clearing on the zero clearing area to obtain the three-dimensional face data of the tissue to be detected.
Furthermore, in order to achieve the above object, the present invention also proposes a face three-dimensional imaging apparatus comprising:
the acquisition module is used for acquiring original three-dimensional volume data of a tissue to be detected;
the determining module is used for determining the face position and the amnion position of the tissue to be detected according to the original three-dimensional volume data and a preset threshold;
the determining module is further used for determining a target positioning point profile according to the face position and the amnion position;
the zero clearing module is used for carrying out transparency zero clearing on the original three-dimensional volume data according to the target positioning point profile to obtain the face three-dimensional data of the tissue to be detected;
and the display module is used for finishing the face three-dimensional imaging display of the tissue to be detected according to the face three-dimensional data.
Furthermore, to achieve the above object, the present invention also proposes a face three-dimensional imaging apparatus comprising: a memory, a processor and a three-dimensional imaging program of a face stored on the memory and executable on the processor, the three-dimensional imaging program of a face being configured to implement the three-dimensional imaging method of a face as described above.
Furthermore, to achieve the above object, the present invention also proposes a storage medium having stored thereon a three-dimensional imaging program of a face, which when executed by a processor implements the three-dimensional imaging method of a face as described above.
The method comprises the steps of obtaining original three-dimensional volume data of a tissue to be detected; determining the face position and the amnion position of the tissue to be detected according to the original three-dimensional volume data and a preset threshold; determining a target positioning point profile according to the face position and the amnion position; carrying out transparency zero clearing on the original three-dimensional volume data according to the target positioning point profile to obtain the face three-dimensional data of the tissue to be detected; and finishing the face three-dimensional imaging display of the tissue to be detected according to the face three-dimensional data. That is, the original three-dimensional volume data is analyzed to determine the face position and the amnion position of the tissue to be detected in the original three-dimensional volume data, a target positioning point profile is determined based on the amnion position and the face position so that the tissue information in front of the face of the tissue to be detected is identified, and finally transparency zero clearing is performed on the original three-dimensional volume data according to the target positioning point profile to finish the three-dimensional imaging display of the face of the tissue to be detected, so that a clear, comprehensive and complete face image of the body to be detected is automatically obtained and the image rendering effect is improved.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a three-dimensional facial imaging device in a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the face three-dimensional imaging apparatus may include: a processor 1001, such as a Central Processing Unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and may optionally further include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless-Fidelity (Wi-Fi) interface). The memory 1005 may be a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as a disk memory. The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration shown in fig. 1 does not constitute a limitation of the facial three-dimensional imaging apparatus, and may include more or fewer components than those shown, or some components in combination, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a storage medium, may include therein an operating system, a network communication module, a user interface module, and a face three-dimensional imaging program.
In the face three-dimensional imaging device shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server, and the user interface 1003 is mainly used for data interaction with a user. The processor 1001 and the memory 1005 of the face three-dimensional imaging device of the present invention may be provided in the face three-dimensional imaging device, which calls the face three-dimensional imaging program stored in the memory 1005 through the processor 1001 and executes the face three-dimensional imaging method provided by the embodiments of the present invention.
An embodiment of the present invention provides a face three-dimensional imaging method, and referring to fig. 2, fig. 2 is a schematic flow diagram of a first embodiment of a face three-dimensional imaging method according to the present invention.
In this embodiment, the three-dimensional imaging method for a face includes the following steps:
step S10: and acquiring original three-dimensional volume data of the tissue to be detected.
It should be noted that the execution subject of this embodiment is an ultrasonic detection device. The ultrasonic detection device emits ultrasonic sound beams to obtain original three-dimensional volume data of a tissue to be detected; the face position and the amnion position of the tissue to be detected are determined according to the original three-dimensional volume data and a preset threshold; a target positioning point profile is determined based on the face position and the amnion position; transparency zero clearing is performed on the original three-dimensional volume data based on the target positioning point profile, so as to obtain clear face three-dimensional data of the tissue to be detected that does not contain other tissue information; and the face three-dimensional imaging display of the tissue to be detected is completed according to the face three-dimensional data.
It can be understood that the ultrasonic detection device transmits ultrasonic sound beams to acquire a plurality of two-dimensional images of the tissue to be detected and performs interpolation reconstruction on the two-dimensional images, so as to obtain original three-dimensional volume data of the tissue to be detected. The original three-dimensional volume data is consistent with the spatial structure of the tissue to be detected and is composed of voxels ag[i][j][k] and transparency parameters aw[i][j][k]; each voxel corresponds to one transparency parameter, and aw is determined by an empirical function faw(ag). For example, the ultrasonic detection device transmits ultrasonic sound beams to acquire 188 frames of 255 × 399 two-dimensional images, where 255 is the transverse resolution and 399 is the axial resolution. The two-dimensional images are interpolated and reconstructed, and converted into 600 × 400 volume data ag through three-dimensional reconstruction in the GPU of the ultrasonic detection device; each voxel value ag[i][j][k] in the volume data has a corresponding transparency value aw[i][j][k], and ag and aw are normalized to values between 0 and 1.
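A minimal sketch of this data layout follows, assuming a simple linear form for the empirical function faw(ag) (the embodiment does not specify its shape) and example dimensions; none of these choices are prescribed by the method itself.

```python
import numpy as np

# Hypothetical empirical opacity function faw(ag): the embodiment only states that
# each voxel's transparency is derived from its gray value, so a linear ramp is
# assumed here purely for illustration.
def faw(ag):
    return np.clip(ag, 0.0, 1.0)

# Stand-in for the gray data reconstructed from the acquired 2-D frames.
depth, lateral, axial = 188, 255, 399
raw = np.random.rand(depth, lateral, axial)

ag = raw / raw.max()   # voxel values normalized to [0, 1]
aw = faw(ag)           # one transparency value aw[i][j][k] per voxel ag[i][j][k]
```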
Step S20: and determining the face position and the amnion position of the tissue to be detected according to the original three-dimensional volume data and a preset threshold value.
It should be noted that, after the original three-dimensional volume data is obtained, the original three-dimensional volume data is traversed, and the face position and the amniotic membrane position of the tissue to be detected are determined based on the result obtained by the traversal and the preset threshold. The preset threshold refers to a preset threshold coefficient for positioning the face position, the amnion position and the skeleton position in the ultrasonic sound beam direction.
It can be understood that, in order to accurately locate the face position and the amnion position of the tissue to be detected, further, the determining the face position and the amnion position of the tissue to be detected according to the original three-dimensional volume data and a preset threshold includes: acquiring a preset face threshold, a preset amnion threshold and a preset skeleton threshold; determining a maximum value of target volume data according to the original three-dimensional volume data; determining a target face threshold, a target amniotic membrane threshold and a target skeletal threshold according to the target volume data maximum, the preset face threshold, the preset amniotic membrane threshold and the preset skeletal threshold; and determining the face position and the amniotic membrane position of the tissue to be detected according to the original three-dimensional volume data, the target face threshold, the target amniotic membrane threshold and the target bone threshold.
In a specific implementation, the preset face threshold, the preset amnion threshold and the preset skeleton threshold refer to preset threshold coefficients used for respectively positioning the face position, the amnion position and the skeleton position in the ultrasonic sound beam direction, and correspond to TFa, TFb and TBf respectively. In this embodiment TBf is set to 0.75; the embodiment is not limited thereto, and TBf = 0.75 is used only as an example for description.
It should be noted that the original three-dimensional volume data is traversed to obtain the maximum value VOIMaxGray of the volume data in the original three-dimensional volume data, and this maximum value is the target volume data maximum value. The target face threshold, the target amniotic membrane threshold and the target skeletal threshold are then determined according to the target volume data maximum value, the preset face threshold, the preset amniotic membrane threshold and the preset skeletal threshold, and are used for respectively positioning the face position, the amniotic membrane position and the skeletal position in the ultrasonic sound beam direction. Specifically, the target face threshold, used when searching from the skeleton of the tissue to be detected toward its face, is determined as VOIMaxGray × TFa according to the target volume data maximum value and the preset face threshold; the target amniotic membrane threshold, used when searching from the face of the tissue to be detected through the amniotic fluid to the amnion, is determined as VOIMaxGray × TFb according to the target volume data maximum value and the preset amnion threshold; and the target skeletal threshold is determined as VOIMaxGray × TBf according to the target volume data maximum value and the preset skeleton threshold.
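A brief sketch of this threshold derivation; TFa and TFb below are assumed example coefficients, since only TBf = 0.75 is given in the embodiment.

```python
import numpy as np

ag = np.random.rand(188, 255, 399)        # stand-in for the original three-dimensional volume data

VOIMaxGray = ag.max()                     # target volume data maximum value

TFa, TFb, TBf = 0.5, 0.3, 0.75            # TFa and TFb are assumed example values
target_face_threshold   = VOIMaxGray * TFa
target_amnion_threshold = VOIMaxGray * TFb
target_bone_threshold   = VOIMaxGray * TBf
```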
It is understood that after obtaining the target face threshold, the target amniotic membrane threshold, and the target skeletal threshold, the face position PF2 and the amniotic membrane position PF1 of the tissue to be tested may be determined in the original three-dimensional volume data according to the target face threshold, the target amniotic membrane threshold, and the target skeletal threshold.
In a specific implementation, in order to accurately locate the face position and the amniotic membrane position, further, the determining the face position and the amniotic membrane position of the tissue to be detected according to the original three-dimensional volume data, the target face threshold, the target amniotic membrane threshold and the target bone threshold includes: determining the bone position of the tissue to be detected according to the original three-dimensional volume data and the target bone threshold value; determining the face position of the tissue to be detected in the direction opposite to the ultrasonic sound beam according to the bone position and the target face threshold; and determining the amniotic membrane position of the tissue to be detected in the direction opposite to the ultrasonic sound beam according to the face position and the target amniotic membrane threshold value.
It should be noted that, in the original three-dimensional volume data, traversal is performed along the ultrasonic sound beam direction using the target bone threshold VOIMaxGray × TBf, and the bone position of the tissue to be detected is located as PF. Traversal retrieval is then performed from the bone position PF in the direction opposite to the ultrasonic sound beam using the target face threshold VOIMaxGray × TFa; the retrieval rule is decreasing, a point smaller than the target face threshold VOIMaxGray × TFa is found, and this point is taken as the face position PF2 of the tissue to be detected. Traversal retrieval is then performed from the face position PF2 in the direction opposite to the ultrasonic sound beam using the target amnion threshold VOIMaxGray × TFb; the retrieval rule is increasing, a point larger than the target amnion threshold VOIMaxGray × TFb is found, and this point is taken as the amnion position PF1. As shown in fig. 3, the original three-dimensional volume data includes 600 × 400 volume data ag, each voxel of which has a voxel value ag[i][j][k] and a transparency value aw[i][j][k]; k, m and n mark the third dimension of the original three-dimensional volume data, and mm, nn and ss mark the second dimension (the ultrasonic sound beam direction). The bone position PF of the tissue to be detected is determined in the ultrasonic sound beam direction according to the target bone threshold, the face position is obtained by performing a decreasing search based on the bone position and the target face threshold VOIMaxGray × TFa, and the amnion position is obtained by performing an increasing search based on the face position and the target amnion threshold VOIMaxGray × TFb.
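The per-beam search just described can be sketched as follows. The beam is assumed to run along the last array axis from shallow to deep; this orientation and the helper name locate_positions are illustrative assumptions rather than part of the embodiment.

```python
import numpy as np

def locate_positions(beam, t_bone, t_face, t_amnion):
    """Return (PF, PF2, PF1) sample indices along one beam, or None if not found."""
    # 1. Traverse along the beam direction and locate the bone position PF.
    over_bone = np.nonzero(beam >= t_bone)[0]
    if over_bone.size == 0:
        return None
    pf = int(over_bone[0])

    # 2. Search back toward the probe from PF for the first sample below the
    #    target face threshold: the face position PF2 (decreasing rule).
    pf2 = next((d for d in range(pf - 1, -1, -1) if beam[d] < t_face), None)
    if pf2 is None:
        return None

    # 3. Keep searching toward the probe from PF2 for the first sample above the
    #    target amnion threshold: the amnion position PF1 (increasing rule).
    pf1 = next((d for d in range(pf2 - 1, -1, -1) if beam[d] >= t_amnion), None)
    if pf1 is None:
        return None
    return pf, pf2, pf1

# Applying locate_positions to every (i, j) beam of the volume yields the per-beam
# face position PF2 and amnion position PF1 used in step S30.
```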
Step S30: and determining a target positioning point profile according to the face position and the amnion position.
It should be noted that, after the face position and the amniotic membrane position of the tissue to be detected are obtained, the target positioning point profile between the face position and the amniotic membrane position on all the ultrasonic beams in the original three-dimensional volume data can be determined. The target positioning point profile AutoFace_D is the critical surface used for carrying out transparency zero clearing on the original three-dimensional volume data.
Step S40: and carrying out transparency zero clearing on the original three-dimensional volume data according to the target positioning point profile to obtain the face three-dimensional data of the tissue to be detected.
It should be noted that after the target positioning point profile is obtained, transparency zero clearing is performed on the original three-dimensional volume data based on the target positioning point profile, so that the zero-cleared original three-dimensional volume data is obtained, and the zero-cleared original three-dimensional volume data is the facial three-dimensional data of the tissue to be detected.
It can be understood that, in order to obtain clear three-dimensional face data, so as to improve the definition and comprehensiveness of face imaging, further, performing transparency zero clearing on the original three-dimensional volume data according to the target positioning point profile to obtain the three-dimensional face data of the tissue to be detected includes: determining the depth of a target ultrasonic sound beam according to the profile of the target positioning point; searching a zero clearing area corresponding to the ultrasonic sound beam depth smaller than the target ultrasonic sound beam depth in the original three-dimensional volume data; and carrying out transparency zero clearing on the zero clearing area to obtain the three-dimensional face data of the tissue to be detected.
In the specific implementation, after the target positioning point profile is obtained, the target ultrasonic sound beam depth AutoFace_D[i][j] corresponding to the target positioning point profile is determined; the region of the original three-dimensional volume data whose ultrasonic sound beam depth is smaller than the target ultrasonic sound beam depth AutoFace_D[i][j] is taken as the zero clearing area, and the transparency corresponding to the volume data in the zero clearing area is cleared to zero. Other irregular tissue information in front of the face of the tissue to be detected is thereby identified and removed, and the cleared original three-dimensional volume data is obtained.
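A minimal sketch of this zero clearing step, assuming that the beam depth runs along the last axis of the transparency volume aw and that AutoFace_D[i][j] holds the target depth for beam (i, j):

```python
import numpy as np

def clear_above_profile(aw, autoface_d):
    """Clear the transparency of every voxel shallower than the target profile depth."""
    n_i, n_j, n_depth = aw.shape
    depth_index = np.arange(n_depth)                                 # sample index along each beam
    shallow = depth_index[None, None, :] < autoface_d[:, :, None]    # zero clearing area
    cleared = aw.copy()
    cleared[shallow] = 0.0                                           # transparency cleared to zero
    return cleared
```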
Step S50: and finishing the face three-dimensional imaging display of the tissue to be detected according to the face three-dimensional data.
After the face three-dimensional data is obtained, rendering the face three-dimensional data through a ray casting algorithm to obtain a final face three-dimensional image, and displaying the face three-dimensional image.
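Ray casting itself is a standard volume-rendering technique; purely as an illustrative stand-in (not the renderer of the embodiment), the sketch below composites the gray values front to back along each beam using the cleared transparency volume.

```python
import numpy as np

def ray_cast_front_to_back(ag, aw):
    """Composite each beam front to back into a 2-D image (illustrative sketch only)."""
    n_i, n_j, n_depth = ag.shape
    image = np.zeros((n_i, n_j))
    remaining = np.ones((n_i, n_j))          # light not yet absorbed along each ray
    for d in range(n_depth):
        alpha = aw[:, :, d]
        image += remaining * alpha * ag[:, :, d]
        remaining *= 1.0 - alpha
    return image
```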
The method comprises the steps of obtaining original three-dimensional volume data of a tissue to be detected; determining the face position and the amnion position of the tissue to be detected according to the original three-dimensional volume data and a preset threshold; determining a target positioning point profile according to the face position and the amnion position; carrying out transparency zero clearing on the original three-dimensional volume data according to the target positioning point profile to obtain the face three-dimensional data of the tissue to be detected; and finishing the face three-dimensional imaging display of the tissue to be detected according to the face three-dimensional data. That is, the original three-dimensional volume data is analyzed to determine the face position and the amnion position of the tissue to be detected in the original three-dimensional volume data, a target positioning point profile is determined based on the amnion position and the face position so that the tissue information in front of the face of the tissue to be detected is identified, and finally transparency zero clearing is performed on the original three-dimensional volume data according to the target positioning point profile to finish the three-dimensional imaging display of the face of the tissue to be detected, so that a clear, comprehensive and complete face image of the body to be detected is automatically obtained and the image rendering effect is improved.
Referring to fig. 4, fig. 4 is a flowchart illustrating a three-dimensional imaging method of a face according to a second embodiment of the present invention.
Based on the first embodiment, the step S30 in the three-dimensional imaging method for a face according to this embodiment includes:
step S31: and acquiring preset positioning parameters.
It should be noted that the preset positioning parameter refers to a preset coefficient used for obtaining the target positioning point profile. The preset positioning parameter a_TF1TF0 is an arbitrary value between 0 and 1; in this embodiment, a_TF1TF0 = 0.9 is used as an example for description.
Step S32: and determining an initial positioning point and a target marking value corresponding to the initial positioning point according to the preset positioning parameters, the amnion position and the face position.
It should be noted that, after the preset positioning parameters, the amnion position and the face position are obtained, the initial positioning points and the target mark values corresponding to the initial positioning points can be determined according to the preset positioning parameters, the amnion position and the face position. There are a plurality of initial positioning points, each initial positioning point corresponds to one target mark value, namely AutoFace_Bflag, and the target mark values are used for judging whether the initial positioning points are valid.
It can be understood that the amnion position and the face position are input into a function F to obtain a plurality of initial positioning points, namely AutoFace_D = F(PF2, PF1) = PF1 + (PF2 - PF1) × a_TF1TF0, and the corresponding target mark value AutoFace_Bflag is obtained according to the positions of the initial positioning points.
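A short sketch of this anchor computation across all beams, assuming the per-beam face depths PF2 and amnion depths PF1 from step S20 are already available as 2-D maps; the map names and the validity-mask convention are assumptions for illustration.

```python
import numpy as np

def build_anchor_maps(pf2_map, pf1_map, valid_mask, a_tf1tf0=0.9):
    """AutoFace_D = PF1 + (PF2 - PF1) * a_TF1TF0 per beam; AutoFace_Bflag marks validity."""
    autoface_d = np.where(valid_mask, pf1_map + (pf2_map - pf1_map) * a_tf1tf0, 0.0)
    autoface_bflag = valid_mask.astype(np.uint8)    # 1 = anchor valid, 0 = anchor invalid
    return autoface_d, autoface_bflag

# Example with stand-in per-beam face and amnion depths.
pf2 = np.full((600, 600), 120.0)
pf1 = np.full((600, 600), 80.0)
valid = np.ones((600, 600), dtype=bool)
autoface_d, autoface_bflag = build_anchor_maps(pf2, pf1, valid)
```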
Step S33: and acquiring the target length and the target width of the original three-dimensional volume data.
It should be noted that, since the original three-dimensional volume data is constructed from a plurality of two-dimensional images, the target length and the target width in the original three-dimensional volume data are acquired.
Step S34: and determining a target positioning point profile according to the target length, the target width, the initial positioning point and a target mark value corresponding to the initial positioning point.
It should be noted that, a target positioning point profile can be determined according to the target length, the target width, the initial positioning point, and the target mark value corresponding to the initial positioning point, and both the length and the width of the target positioning point profile are the target length and the target width.
It can be understood that, in order to obtain an accurate target positioning point profile, further, the determining a target positioning point profile according to the target length, the target width, the initial positioning point, and the target mark value corresponding to the initial positioning point includes: determining a first positioning point profile according to the target length, the target width and the initial positioning point; determining an invalid positioning point in the first positioning point profile according to the initial positioning point; and filling the invalid positioning points in the first positioning point profile to obtain a target positioning point profile.
In the specific implementation, a first positioning point profile is determined according to the target length, the target width and the initial positioning points. Since the initial positioning points correspond to the positioning points under each ultrasonic beam, and some beams may fail to satisfy the positioning point retrieval, the invalid positioning points in the first positioning point profile can be determined according to the initial positioning points: when the target mark value AutoFace_Bflag of the corresponding beam is 1, the beam is valid, and when the target mark value AutoFace_Bflag of the corresponding beam is 0, the beam is invalid.
It should be noted that, after the invalid positioning points are obtained, the invalid positioning points need to be filled using the values of the area near each invalid positioning point, so as to obtain the target positioning point profile.
It can be understood that, in order to fill the invalid positioning points accurately and thereby obtain a target positioning point profile meeting the requirement, further, the filling the invalid positioning points in the first positioning point profile to obtain a target positioning point profile includes: acquiring a first beam line corresponding to the invalid positioning point; determining a filling module of a preset size according to the first beam line; carrying out profile filling on the first positioning point profile according to a preset interpolation template, the first beam line and the filling module to obtain a second positioning point profile; and smoothing the second positioning point profile according to a preset smoothing module to obtain a target positioning point profile.
In a specific implementation, the beam line on which an invalid positioning point lies is obtained and taken as the first beam line, and a filling module of a preset size is determined with the first beam line as its core, where the preset size means that the filling module has a length and a width of Kernel_width1; in this embodiment the value of Kernel_width1 is 7, which is used only as an example for description. The preset interpolation template refers to a 7 × 7 matrix AutoFaceKernel1[49], all 49 elements of which are 1.0. The position of the invalid positioning point is interpolated by taking the first beam line as the center, AutoFaceKernel1 as the template, and the surrounding beam lines within the Kernel_width1 × Kernel_width1 area as the filling module; the weighted average of the surrounding beam lines and the template fills in the value corresponding to the invalid positioning point, and the specific formula is as follows.
In the formula, AutoFace_D[i][j] is the value of the invalid positioning point, Kernel_width1 is the length and width of the filling module, and AutoFaceKernel1 is the preset interpolation template used for interpolation; in this embodiment the elements of the preset interpolation template are all 1.
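Since the formula itself is not reproduced in this text, the sketch below assumes the natural reading of the description: each invalid anchor is replaced by the template-weighted average of the valid anchors in its Kernel_width1 × Kernel_width1 neighborhood. Restricting the average to valid beams and normalizing by the weight sum are assumptions.

```python
import numpy as np

def fill_invalid_anchors(autoface_d, autoface_bflag, kernel_width1=7):
    """Fill each invalid anchor with a weighted average of nearby valid anchors."""
    kernel = np.ones((kernel_width1, kernel_width1))   # AutoFaceKernel1: all elements 1.0
    half = kernel_width1 // 2
    n_i, n_j = autoface_d.shape
    filled = autoface_d.copy()
    for i, j in zip(*np.nonzero(autoface_bflag == 0)):           # invalid beam lines
        i0, i1 = max(0, i - half), min(n_i, i + half + 1)
        j0, j1 = max(0, j - half), min(n_j, j + half + 1)
        patch = autoface_d[i0:i1, j0:j1]
        flags = autoface_bflag[i0:i1, j0:j1]
        weights = kernel[i0 - i + half:i1 - i + half, j0 - j + half:j1 - j + half] * flags
        if weights.sum() > 0:
            filled[i, j] = (patch * weights).sum() / weights.sum()   # weighted-average fill
    return filled
```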
It should be noted that, after filling the invalid positioning points in the first positioning point profile, a second positioning point profile that is not subjected to smoothing processing is obtained, and after smoothing processing is performed on the second positioning point profile according to a preset smoothing module, a target positioning point profile is obtained.
It can be understood that the preset smoothing module refers to the module AutoFaceKernel2 with a length and a width of Kernel_width2, and the smoothing formula is specifically as follows.
In the formula, AutoFace_D refers to the 600 × 600 positioning points in the filled second positioning point profile. The preset smoothing module AutoFaceKernel2[49] is a center-weighted 7 × 7 matrix whose outermost ring of elements is 0.25, whose middle ring is 0.75, and whose central 3 × 3 block is 1.0. Finally, the smoothing of the second positioning point profile is completed, and the target positioning point profile is obtained.
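The smoothing formula is likewise not reproduced here, so the sketch below assumes a normalized weighted average over the Kernel_width2 × Kernel_width2 neighborhood with the center-weighted template described above.

```python
import numpy as np

def build_autoface_kernel2(kernel_width2=7):
    """Center-weighted template: 0.25 outer ring, 0.75 middle ring, 1.0 central 3 x 3 block."""
    k = np.full((kernel_width2, kernel_width2), 0.25)
    k[1:-1, 1:-1] = 0.75
    k[2:-2, 2:-2] = 1.0
    return k

def smooth_anchor_profile(autoface_d, kernel_width2=7):
    """Smooth the filled anchor profile with a normalized weighted average (assumed form)."""
    kernel = build_autoface_kernel2(kernel_width2)
    half = kernel_width2 // 2
    padded = np.pad(autoface_d, half, mode="edge")
    smoothed = np.empty_like(autoface_d, dtype=float)
    n_i, n_j = autoface_d.shape
    for i in range(n_i):
        for j in range(n_j):
            patch = padded[i:i + kernel_width2, j:j + kernel_width2]
            smoothed[i, j] = (patch * kernel).sum() / kernel.sum()
    return smoothed
```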
In the specific implementation, as shown in fig. 5, a plurality of two-dimensional images of the tissue to be detected are acquired through ultrasonic three-dimensional volume data acquisition, and interpolation reconstruction is performed on the two-dimensional images to obtain the original three-dimensional volume data of the tissue to be detected. In the original three-dimensional volume data, the face position of the tissue to be detected under each ultrasonic sound beam and the amnion position in front of the face are obtained; the positioning point profile of the tissue to be detected on all ultrasonic sound beams in the original three-dimensional volume data is obtained according to the face position and the amnion position in front of the face; the positioning point profile is filled, and the filled positioning point profile is smoothed; and finally the obtained face three-dimensional data is imaged in three dimensions according to the ray casting algorithm and displayed.
In this embodiment, the preset positioning parameters are obtained; the initial positioning points and the target mark values corresponding to the initial positioning points are determined according to the preset positioning parameters, the amnion position and the face position; the target length and the target width of the original three-dimensional volume data are acquired; and the target positioning point profile is determined according to the target length, the target width, the initial positioning points and the target mark values corresponding to the initial positioning points. The initial positioning points and their corresponding target mark values are determined from the preset positioning parameters, the amnion position and the face position, so that an accurate target positioning point profile is obtained based on the target length, the target width, the initial positioning points and the corresponding target mark values, and the accuracy of the subsequent transparency zero clearing is improved.
Further, referring to fig. 6, the present embodiment also proposes a three-dimensional imaging apparatus for a face, including:
the acquiring module 10 is configured to acquire original three-dimensional volume data of a tissue to be detected.
And a determining module 20, configured to determine a face position and an amniotic membrane position of the tissue to be detected according to the original three-dimensional volume data and a preset threshold.
The determining module 20 is further configured to determine a target positioning point profile according to the face position and the amnion position.
And the zero clearing module 30 is used for carrying out transparency zero clearing on the original three-dimensional volume data according to the target positioning point profile to obtain the face three-dimensional data of the tissue to be detected.
And the display module 40 is used for finishing the face three-dimensional imaging display of the tissue to be detected according to the face three-dimensional data.
The method comprises the steps of obtaining original three-dimensional volume data of a tissue to be detected; determining the face position and the amnion position of the tissue to be detected according to the original three-dimensional volume data and a preset threshold; determining a target positioning point profile according to the face position and the amnion position; carrying out transparency zero clearing on the original three-dimensional volume data according to the target positioning point profile to obtain the face three-dimensional data of the tissue to be detected; and finishing the face three-dimensional imaging display of the tissue to be detected according to the face three-dimensional data. That is, the original three-dimensional volume data is analyzed to determine the face position and the amnion position of the tissue to be detected in the original three-dimensional volume data, a target positioning point profile is determined based on the amnion position and the face position so that the tissue information in front of the face of the tissue to be detected is identified, and finally transparency zero clearing is performed on the original three-dimensional volume data according to the target positioning point profile to complete the three-dimensional imaging display of the face of the tissue to be detected, so that a clear, comprehensive and complete face image of the body to be detected is automatically obtained and the image rendering effect is improved.
In an embodiment, the determining module 20 is further configured to obtain a preset face threshold, a preset amnion threshold, and a preset skeleton threshold;
determining a maximum value of target volume data according to the original three-dimensional volume data;
determining a target face threshold, a target amniotic membrane threshold and a target skeletal threshold according to the target volume data maximum, the preset face threshold, the preset amniotic membrane threshold and the preset skeletal threshold;
and determining the face position and the amniotic membrane position of the tissue to be detected according to the original three-dimensional volume data, the target face threshold, the target amniotic membrane threshold and the target bone threshold.
In an embodiment, the determining module 20 is further configured to determine a bone position of the tissue to be tested according to the original three-dimensional volume data and the target bone threshold;
determining the face position of the tissue to be detected in the direction opposite to the ultrasonic sound beam according to the bone position and the target face threshold;
and determining the amniotic membrane position of the tissue to be detected in the direction opposite to the ultrasonic sound beam according to the face position and the target amniotic membrane threshold value.
In an embodiment, the determining module 20 is further configured to obtain a preset positioning parameter;
determining an initial positioning point and a target marking value corresponding to the initial positioning point according to the preset positioning parameters, the amnion position and the face position;
acquiring the target length and the target width of original three-dimensional volume data;
and determining a target positioning point profile according to the target length, the target width, the initial positioning point and a target mark value corresponding to the initial positioning point.
In an embodiment, the determining module 20 is further configured to determine a first positioning point profile according to the target length, the target width, and the initial positioning point;
determining an invalid positioning point in the first positioning point profile according to the initial positioning point;
and filling the invalid positioning points in the first positioning point profile to obtain a target positioning point profile.
In an embodiment, the determining module 20 is further configured to obtain a first beam line corresponding to the invalid positioning point;
determining a filling module of a preset size according to the first beam line;
carrying out profile filling on the first positioning point profile according to a preset interpolation template, the first beam line and the filling module to obtain a second positioning point profile;
and smoothing the second positioning point profile according to a preset smoothing module to obtain a target positioning point profile.
In an embodiment, the zero clearing module 30 is further configured to determine a target ultrasonic beam depth according to the target anchor point profile;
searching a zero clearing area corresponding to the ultrasonic sound beam depth smaller than the target ultrasonic sound beam depth in the original three-dimensional volume data;
and carrying out transparency zero clearing on the zero clearing area to obtain the three-dimensional face data of the tissue to be detected.
Since the present apparatus employs all technical solutions of all the above embodiments, at least all the beneficial effects brought by the technical solutions of the above embodiments are achieved, and are not described in detail herein.
Furthermore, an embodiment of the present invention further provides a storage medium, on which a three-dimensional imaging program of a face is stored, and the three-dimensional imaging program of the face, when executed by a processor, implements the steps of the three-dimensional imaging method of the face as described above.
Since the storage medium adopts all technical solutions of all the embodiments, at least all the beneficial effects brought by the technical solutions of the embodiments are achieved, and no further description is given here.
It should be noted that the above-described work flows are only exemplary, and do not limit the scope of the present invention, and in practical applications, a person skilled in the art may select some or all of them to achieve the purpose of the solution of the embodiment according to actual needs, and the present invention is not limited herein.
In addition, the technical details that are not described in detail in this embodiment may refer to the three-dimensional face imaging method provided in any embodiment of the present invention, and are not described herein again.
Further, it is to be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention or portions thereof that contribute to the prior art may be embodied in the form of a software product, where the computer software product is stored in a storage medium (e.g. Read Only Memory (ROM)/RAM, magnetic disk, optical disk), and includes several instructions for enabling a terminal device (e.g. a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.