CN117132596B - Deep-learning-based mandibular third molar impaction type identification method and system - Google Patents
- Publication number
- CN117132596B CN117132596B CN202311394030.0A CN202311394030A CN117132596B CN 117132596 B CN117132596 B CN 117132596B CN 202311394030 A CN202311394030 A CN 202311394030A CN 117132596 B CN117132596 B CN 117132596B
- Authority
- CN
- China
- Prior art keywords
- molar
- mandibular
- cusp
- jaw
- target angle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T7/0012—Biomedical image inspection
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30036—Dental; Teeth
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention relates to the technical field of image analysis and discloses a deep-learning-based method and system for identifying the impaction type of the mandibular third molar. The method comprises the following steps: acquiring a full-jaw digital panoramic tomographic image and preprocessing it; inputting the preprocessed image into a deep learning model to segment the images of the mandibular first molar, mandibular second molar and mandibular third molar; determining the long axis of the mandibular third molar, the first molar buccal cusp line, the second molar buccal cusp line and a target angle from the segmented images; and judging the impaction type of the mandibular third molar according to the target angle. Image features of the teeth are extracted by the deep learning model from the full-jaw digital panoramic tomogram, so that the impaction type of the mandibular third molar can be identified accurately and efficiently from the image features of the first, second and third molars.
Description
Technical Field
The invention relates to the technical field of image analysis, and in particular to a deep-learning-based mandibular third molar impaction type identification method and system.
Background
Extraction of the mandibular third molar is one of the important operations in oral and maxillofacial surgery. Because the position of the mandibular third molar and its contact relation with adjacent teeth are highly variable, a young doctor may find it difficult, with the patient on the comprehensive treatment chair, to judge the intraoral situation accurately; misjudgment can easily turn the extraction into a more complex alveolar operation and places high technical demands on the surgeon. Although panoramic tomograms are widely used to assist physicians in choosing the extraction approach and assessing risk preoperatively, making a reasonably accurate judgment still requires considerable clinical experience, which is no small challenge for young physicians who partly lack such experience.
If a doctor misjudges the impaction type of the mandibular third molar, serious consequences may follow: prolonged operation time, greater intraoperative trauma, and even retained roots, root displacement, neurovascular injury and other postoperative complications. The prior art discloses technical schemes for identifying the third molar type through a deep learning model, for example "Panoramic X-ray dental film wisdom tooth diagnosis research based on machine learning" (Wu Zhen), which uses a CNN model to classify the wisdom tooth type: features are extracted from the wisdom tooth image to obtain the morphological features of the wisdom tooth and thereby identify its impaction type. However, this identification process requires many morphological features of the wisdom tooth to be obtained from the image, and different identification features may lead to different conclusions, so the identification accuracy of this scheme is not high, and the need to obtain many morphological features also makes it inefficient.
Therefore, there is a need for a deep-learning-based method and system that can identify the impaction type of the mandibular third molar accurately and efficiently from a full-jaw digital panoramic tomographic image.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a deep-learning-based method and system for identifying the impaction type of the mandibular third molar, which can identify the impaction type accurately and efficiently from a full-jaw digital panoramic tomographic image.
The invention provides a deep-learning-based mandibular third molar impaction type identification method, comprising the following steps:
S1, acquiring a full-jaw digital panoramic tomographic image and preprocessing it;
S2, inputting the preprocessed full-jaw digital panoramic tomographic image into a deep learning model and segmenting the images of the mandibular first molar, mandibular second molar and mandibular third molar;
S3, determining the long axis of the mandibular third molar, the first molar buccal cusp line, the second molar buccal cusp line and a target angle from the images of the mandibular first molar, mandibular second molar and mandibular third molar;
and S4, judging the impaction type of the mandibular third molar according to the target angle.
Further, in S1, preprocessing the full-jaw digital panoramic tomographic image comprises:
S11, performing image cropping on the full-jaw digital panoramic tomographic image;
S12, performing HU value normalization on the cropped full-jaw digital panoramic tomographic image.
Further, in S2, the training method of the deep learning model comprises:
acquiring a dataset of full-jaw digital panoramic tomographic images;
preprocessing the dataset, the preprocessing comprising image cropping and HU value normalization;
marking the contours of the mandibular first molar, mandibular second molar and mandibular third molar in the dataset;
and inputting the marked dataset into the deep learning model and training the model until it converges.
Further, acquiring the dataset of full-jaw digital panoramic tomographic images comprises:
performing data enhancement on the full-jaw digital panoramic tomographic images to obtain the dataset; the data enhancement comprises spatial flipping, spatial scaling, angular rotation and translation.
Further, in S3, determining the long axis of the mandibular third molar, the first molar buccal cusp line, the second molar buccal cusp line and the target angle from the images of the mandibular first molar, mandibular second molar and mandibular third molar comprises:
S31, determining the long axis of the mandibular third molar from its image, the long axis passing through the center of the tooth in the crown-to-root direction;
S32, determining two buccal cusp points of the mandibular first molar and two buccal cusp points of the mandibular second molar from their images, determining the first molar buccal cusp line from the two buccal cusp points of the mandibular first molar, and determining the second molar buccal cusp line from the two buccal cusp points of the mandibular second molar;
S33, determining a target angle reference line according to the positional relation between the first molar buccal cusp line and the second molar buccal cusp line;
S34, taking the angle formed counterclockwise between the target angle reference line and the long axis of the mandibular third molar as the target angle.
Further, in S33, determining the target angle reference line according to the positional relation between the first molar buccal cusp line and the second molar buccal cusp line comprises:
when the first molar buccal cusp line and the second molar buccal cusp line coincide, taking the coincident line as the target angle reference line;
when the two lines are parallel but not coincident, taking the line bisecting the distance between them as the target angle reference line;
when the two lines intersect, taking the bisector of the angle between them as the target angle reference line.
Further, in S4, judging the impaction type of the mandibular third molar according to the target angle comprises:
S41, when the target angle is greater than or equal to 0° and less than 45°, determining that the impaction type of the mandibular third molar is horizontal impaction;
S42, when the target angle is greater than or equal to 45° and less than 90°, determining that the impaction type is mesial impaction;
S43, when the target angle equals 90°, determining that the impaction type is vertical impaction;
and S44, when the target angle is greater than 90°, determining that the impaction type is distal impaction.
The invention also provides a deep-learning-based mandibular third molar impaction type identification system, comprising:
an acquisition module for acquiring the full-jaw digital panoramic tomographic image and preprocessing it;
a segmentation module for inputting the preprocessed full-jaw digital panoramic tomographic image into a deep learning model and segmenting the images of the mandibular first molar, mandibular second molar and mandibular third molar;
an identification module for determining the long axis of the mandibular third molar, the first molar buccal cusp line, the second molar buccal cusp line and the target angle from the segmented images;
and a judging module for judging the impaction type of the mandibular third molar according to the target angle.
Further, the acquisition module comprises:
a preprocessing module for performing image cropping on the full-jaw digital panoramic tomographic image and performing HU value normalization on the cropped image.
Further, the identification module is further configured to determine the target angle reference line:
when the first molar buccal cusp line and the second molar buccal cusp line coincide, taking the coincident line as the target angle reference line;
when the two lines are parallel but not coincident, taking the line bisecting the distance between them as the target angle reference line;
when the two lines intersect, taking the bisector of the angle between them as the target angle reference line.
The embodiment of the invention has the following technical effects:
the method comprises the steps of dividing images of first lower teeth grinding, second lower teeth grinding and third lower teeth grinding based on a deep learning model and a full-jaw digital curved surface fault slice, judging the type of the second lower teeth grinding according to angles formed by long axes of the third lower teeth grinding bodies and connecting lines of buccal side teeth tips of the first lower teeth grinding bodies and connecting lines of buccal side teeth tips of the second lower teeth grinding bodies, and improving identification accuracy and identification efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of the deep-learning-based mandibular third molar impaction type identification method provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of the framework of the U-NET model provided by an embodiment of the present invention;
FIG. 3 is a schematic view of the angle formed between the long axis of the mandibular third molar and the target angle reference line provided by an embodiment of the present invention;
FIG. 4 is a schematic illustration of a mandibular third molar whose impaction type is horizontal impaction provided by an embodiment of the present invention;
FIG. 5 is a schematic illustration of a mandibular third molar whose impaction type is mesial impaction provided by an embodiment of the present invention;
FIG. 6 is a schematic illustration of a mandibular third molar whose impaction type is vertical impaction provided by an embodiment of the present invention;
FIG. 7 is a schematic illustration of a mandibular third molar whose impaction type is distal impaction provided by an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of the deep-learning-based mandibular third molar impaction type identification system provided by an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below. It will be apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the invention, are within the scope of the invention.
Fig. 1 is a flowchart of the deep-learning-based mandibular third molar impaction type identification method according to an embodiment of the present invention. Referring to fig. 1, the method specifically includes:
s1, acquiring a full-jaw digital curved surface broken sheet image, and preprocessing the full-jaw digital curved surface broken sheet image.
Specifically, the full jaw digital curved surface broken layer tablet is the most commonly used imaging examination method in clinic, and provides more visual basis for the scheme formulation of the tooth extraction, the resistance analysis, the evaluation of the preoperative operation risk and the communication between doctors and patients. Through checking the digital curved surface broken layer sheet of the whole jaw before the third molar extraction operation of the lower jaw, doctors can be helped to make a more reasonable operation scheme, the preparation before the operation is perfected, the operation wound is reduced, the operation time is shortened, and the smooth operation is ensured to the greatest extent.
S11, performing image cropping on the full-jaw digital panoramic tomographic image.
Specifically, image cropping extracts the region of interest required by the physician. For example, after cropping, the full-jaw digital panoramic tomographic image is reduced from its original size of 1935×2400 to a local region of 1600×1600, which effectively removes useless information from the image and reduces the resources occupied by subsequent model computation.
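A minimal sketch of the cropping step, assuming NumPy arrays and a centered window (the patent does not fix where the 1600×1600 region is taken; the helper name `center_crop` and the center placement are illustrative assumptions):

```python
import numpy as np

def center_crop(image: np.ndarray, out_h: int = 1600, out_w: int = 1600) -> np.ndarray:
    """Cut a centered out_h x out_w window from a 2-D image array."""
    h, w = image.shape[:2]
    top = (h - out_h) // 2
    left = (w - out_w) // 2
    return image[top:top + out_h, left:left + out_w]

# Placeholder standing in for a 1935x2400 panoramic tomogram.
full = np.zeros((1935, 2400), dtype=np.uint16)
roi = center_crop(full)
print(roi.shape)  # (1600, 1600)
```

In practice the window would be placed over the clinician's region of interest (the molar region) rather than the geometric center.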
S12, performing HU value normalization on the cropped full-jaw digital panoramic tomographic image.
Specifically, HU value normalization converts the pixel values of the full-jaw digital panoramic tomographic image to a standardized scale, which facilitates analysis and comparison of images from different imaging modes. The cropped image is read, its pixel values are converted to HU values, and the HU values are normalized. The formula for converting the pixel values of the cropped image to HU values is:

HU = pixel_value × slope + intercept (1)

where pixel_value is a pixel value of the cropped full-jaw digital panoramic tomographic image, slope is a first conversion factor (the rescale slope) and intercept is a second conversion factor (the rescale intercept).

The formula for HU value normalization is:

HU_norm = (HU − HU_min) / (HU_max − HU_min) (2)

where HU_norm is the normalized HU value, and HU_min and HU_max are the minimum and maximum HU values in the cropped full-jaw digital panoramic tomographic image.
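Formulas (1) and (2) can be sketched as follows (a NumPy illustration; the DICOM-style slope and intercept values are examples, not values fixed by the patent):

```python
import numpy as np

def to_hu(pixel_values: np.ndarray, slope: float, intercept: float) -> np.ndarray:
    # Formula (1): HU = pixel_value * slope + intercept
    return pixel_values * slope + intercept

def normalize_hu(hu: np.ndarray) -> np.ndarray:
    # Formula (2): HU_norm = (HU - HU_min) / (HU_max - HU_min)
    hu_min, hu_max = hu.min(), hu.max()
    return (hu - hu_min) / (hu_max - hu_min)

pixels = np.array([0.0, 500.0, 1000.0])
hu = to_hu(pixels, slope=1.0, intercept=-1024.0)  # example rescale values
print(normalize_hu(hu))  # normalizes to 0, 0.5, 1
```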
S2, inputting the preprocessed full-jaw digital panoramic tomographic image into the deep learning model and segmenting the images of the mandibular first molar, mandibular second molar and mandibular third molar.
For example, the deep learning model may be a U-NET model. Fig. 2 is a schematic diagram of the framework of the U-NET model provided by an embodiment of the present invention. Referring to fig. 2, the U-NET model consists of an encoding sub-network, a decoding sub-network and skip connections; the network structure resembles the letter U, hence the name. The U-NET network contains 23 convolutional layers and performs 4 encoding operations and 4 decoding operations. Each encoding step involves two 3×3 convolutions and one 2×2 pooling operation, which halves the size of the preprocessed panoramic tomographic image and doubles the number of feature channels. Each decoding step involves two 3×3 convolutions and upsamples the feature map reduced during encoding by a factor of two, so that the final image has the same size as the original while the number of feature channels is halved. Finally, a 1×1 convolutional layer is added to the U-NET network to map the features extracted during encoding and decoding to their corresponding classes. The encoding path extracts high-dimensional features of the input image through successive convolution and pooling operations. The mutual mapping between the encoding and decoding networks is the distinguishing characteristic of U-NET: during decoding, the features of the corresponding encoding layer are fused in to repair missing edge information, improving the accuracy of edge prediction.
The long (skip) connections between the encoder and decoder of the U-NET network copy the information of the input feature maps across the downsampling path, helping the network recover the information lost by downsampling. The convolution operation can be expressed mathematically as:

y_ij = Σ_{u=1}^{U} Σ_{v=1}^{V} w_uv · x_{i+u−1, j+v−1} (3)

where y_ij is the convolution value at row i, column j of the feature map, U and V are the height and width of the convolution kernel matrix, u and v are coordinates within the kernel matrix, w_uv is the weight at row u, column v of the kernel, and x_{i+u−1, j+v−1} is the pixel value at row i+u−1, column j+v−1 of the feature image.
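Formula (3) corresponds to the following naive "valid" convolution (a NumPy sketch with 0-based indexing; as in most deep-learning frameworks the kernel is applied without flipping, i.e. as cross-correlation):

```python
import numpy as np

def conv2d_valid(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """y[i, j] = sum_u sum_v w[u, v] * x[i+u, j+v]  (formula (3), 0-based)."""
    U, V = w.shape
    H, W = x.shape
    y = np.zeros((H - U + 1, W - V + 1))
    for i in range(y.shape[0]):
        for j in range(y.shape[1]):
            # Weighted sum of the UxV window anchored at (i, j).
            y[i, j] = np.sum(w * x[i:i + U, j:j + V])
    return y

x = np.arange(16, dtype=float).reshape(4, 4)
w = np.ones((3, 3))
print(conv2d_valid(x, w))  # each entry is the sum of a 3x3 window of x
```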
Specifically, the training method of the deep learning model comprises the following steps:
Acquiring a dataset of full-jaw digital panoramic tomographic images.
Specifically, a deep learning network model generally needs a large amount of training data, but the amount of available full-jaw digital panoramic tomographic image data is relatively small. To enhance the robustness of the trained model and reduce overfitting, the dataset is obtained by applying data enhancement to the panoramic tomographic images. The data enhancement includes spatial flipping, spatial scaling, angular rotation, translation and the like. For example, after horizontal and/or vertical flipping, rotation, scaling and shifting, the deep learning network treats the enhanced images as pictures different from the originals, so that a large amount of training data is obtained to form the dataset.
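The augmentation operations named above can be sketched as follows (NumPy only; the `augment` helper is hypothetical, uses 90° rotation steps and wrap-around shifts for simplicity, and a real pipeline would add interpolated arbitrary-angle rotation and scaling and apply the identical transform to the annotation mask):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def augment(image: np.ndarray) -> np.ndarray:
    """Randomly flip, rotate and shift a square image (simplified sketch)."""
    out = image
    if rng.random() < 0.5:
        out = np.fliplr(out)                           # horizontal flip
    if rng.random() < 0.5:
        out = np.flipud(out)                           # vertical flip
    out = np.rot90(out, k=int(rng.integers(0, 4)))     # rotation (90-degree steps)
    shift = rng.integers(-10, 11, size=2)
    out = np.roll(out, tuple(shift), axis=(0, 1))      # translation (wraps around)
    return out

sample = np.zeros((64, 64))
print(augment(sample).shape)  # (64, 64): shape preserved for square inputs
```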
Preprocessing the dataset; the preprocessing comprises image cropping and HU value normalization.
Specifically, the method of preprocessing the images in the dataset is the same as in step S1 and is not repeated here.
Marking the contours of the mandibular first molar, mandibular second molar and mandibular third molar in the dataset.
Inputting the marked dataset into the deep learning model and training the model until it converges.
Specifically, the marked dataset is input into the deep learning model, and the model is trained until it converges, i.e., until the contours of the mandibular first, second and third molars predicted by the model are close to the contours actually marked in the dataset. For example, convergence can be judged through a loss function: if the loss value falls below a preset value, the model is judged to have converged.
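The convergence criterion just described (stop once the loss falls below a preset value) can be sketched framework-agnostically; `step_fn` stands for one epoch of training that returns the current loss, and the names and threshold are illustrative assumptions:

```python
def train_until_converged(step_fn, threshold=1e-3, max_epochs=1000):
    """Run training epochs until the loss drops below `threshold`."""
    loss = float("inf")
    for epoch in range(max_epochs):
        loss = step_fn()           # one pass over the marked dataset
        if loss < threshold:       # convergence: loss below the preset value
            return epoch, loss
    return max_epochs, loss

# Toy usage with a fake, steadily decreasing loss curve:
losses = iter([1.0, 0.1, 0.0005])
print(train_until_converged(lambda: next(losses)))  # (2, 0.0005)
```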
And S3, determining a long axis of the mandibular third molar body, a first molar facial cuspid connecting line, a second molar facial cuspid connecting line and a target angle according to the images of the mandibular first molar, the mandibular second molar and the mandibular third molar.
S31, determining a tooth long axis of the third molar of the lower jaw according to the image of the third molar of the lower jaw; the long axis of the tooth passes through the center of the tooth along the direction from the crown to the root.
S32, determining two cusp points on the cheek side of the first molar of the lower jaw and two cusp points on the cheek side of the second molar of the lower jaw according to the images of the first molar of the lower jaw and the second molar of the lower jaw, determining a cusp line on the cheek side of the first molar according to the two cusp points on the cheek side of the first molar of the lower jaw, and determining a cusp line on the cheek side of the second molar according to the two cusp points on the cheek side of the second molar of the lower jaw.
S33, determining a target angle reference line according to the position relation between the first molar facial cusp connecting line and the second molar facial cusp connecting line.
Specifically, when the first molar buccal cusp line and the second molar buccal cusp line coincide, the coincident line is taken as the target angle reference line. When the two lines are parallel but not coincident, the midline halfway between them is taken as the target angle reference line. When the two lines intersect, their angle bisector is taken as the target angle reference line.
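The three cases above can be made concrete with a small geometric sketch; the point-plus-unit-direction line representation and the tolerance value are assumptions for illustration:

```python
import numpy as np

def target_reference_line(p1, d1, p2, d2, tol=1e-9):
    """Determine the target angle reference line from two buccal cusp
    lines, each given as (point, direction). Returns (point, unit
    direction). Hypothetical helper, not taken from the patent."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    if d2 @ d1 < 0:                           # orient directions consistently
        d2 = -d2
    cross = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(cross) < tol:                      # parallel directions
        n = np.array([-d1[1], d1[0]])         # unit normal of line 1
        offset = n @ (p2 - p1)                # signed distance of line 2
        if abs(offset) < tol:                 # coincident: use the line itself
            return p1, d1
        return p1 + 0.5 * offset * n, d1      # midline halfway between them
    # intersecting: angle bisector through the intersection point
    t = ((p2 - p1)[0] * d2[1] - (p2 - p1)[1] * d2[0]) / cross
    x = p1 + t * d1
    bis = d1 + d2
    return x, bis / np.linalg.norm(bis)
```

The function covers the coincident, parallel-but-distinct, and intersecting cases in that order, mirroring the three branches of S33.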
S34, the angle formed in the counterclockwise direction between the target angle reference line and the long axis of the mandibular third molar is taken as the target angle.
Specifically, fig. 3 is a schematic diagram of the angle formed between the long axis of the mandibular third molar and the target angle reference line according to an embodiment of the present invention. Referring to fig. 3, which illustrates the target angle reference line obtained when the first molar buccal cusp line and the second molar buccal cusp line coincide, the angle formed in the counterclockwise direction between the target angle reference line and the long axis of the mandibular third molar is taken as the target angle.
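With direction vectors for the reference line and the long axis in hand, the counterclockwise target angle can be computed with atan2; folding the result into [0°, 180°) is one reasonable convention for undirected lines, since the embodiment does not spell out the arithmetic:

```python
import math

def ccw_target_angle(ref_dir, axis_dir):
    """Counterclockwise angle in degrees from the target angle reference
    line to the long axis of the mandibular third molar, folded into
    [0, 180) because neither line has an inherent direction."""
    a = (math.atan2(axis_dir[1], axis_dir[0])
         - math.atan2(ref_dir[1], ref_dir[0]))
    return math.degrees(a) % 180.0
```

Under this convention an angle of exactly 90° corresponds to the long axis being perpendicular to the reference line, matching the vertical-impaction case below.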
S4, the impaction type of the mandibular third molar is judged according to the target angle.
S41, when the target angle is greater than or equal to 0° and less than 45°, the impaction type of the mandibular third molar is determined to be horizontal impaction.
Specifically, fig. 4 is a schematic diagram of horizontal impaction of the mandibular third molar according to an embodiment of the present invention. Referring to fig. 4, taking as an example the target angle reference line obtained when the first molar buccal cusp line and the second molar buccal cusp line coincide, when the angle formed in the counterclockwise direction between the target angle reference line and the long axis of the mandibular third molar is greater than or equal to 0° and less than 45°, the impaction type of the mandibular third molar is determined to be horizontal impaction.
S42, when the target angle is greater than or equal to 45° and less than 90°, the impaction type of the mandibular third molar is determined to be mesioangular impaction.
Specifically, fig. 5 is a schematic diagram of mesioangular impaction of the mandibular third molar according to an embodiment of the present invention. Referring to fig. 5, taking as an example the target angle reference line obtained when the first molar buccal cusp line and the second molar buccal cusp line coincide, when the angle formed in the counterclockwise direction between the target angle reference line and the long axis of the mandibular third molar is greater than or equal to 45° and less than 90°, the impaction type of the mandibular third molar is determined to be mesioangular impaction.
S43, when the target angle is equal to 90°, the impaction type of the mandibular third molar is determined to be vertical impaction.
Specifically, fig. 6 is a schematic diagram of vertical impaction of the mandibular third molar according to an embodiment of the present invention. Referring to fig. 6, taking as an example the target angle reference line obtained when the first molar buccal cusp line and the second molar buccal cusp line coincide, when the angle formed in the counterclockwise direction between the target angle reference line and the long axis of the mandibular third molar is equal to 90°, the impaction type of the mandibular third molar is determined to be vertical impaction.
S44, when the target angle is greater than 90°, the impaction type of the mandibular third molar is determined to be distoangular impaction.
Specifically, fig. 7 is a schematic diagram of distoangular impaction of the mandibular third molar according to an embodiment of the present invention. Referring to fig. 7, taking as an example the target angle reference line obtained when the first molar buccal cusp line and the second molar buccal cusp line coincide, when the angle formed in the counterclockwise direction between the target angle reference line and the long axis of the mandibular third molar is greater than 90°, the impaction type of the mandibular third molar is determined to be distoangular impaction.
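Steps S41 to S44 amount to a direct threshold test on the target angle; a literal sketch follows (in floating-point practice the exact-90° vertical case would need a small tolerance, which the embodiment does not discuss):

```python
def classify_impaction(angle_deg):
    """Map the target angle (degrees) to the mandibular third molar
    impaction type per steps S41-S44."""
    if 0 <= angle_deg < 45:
        return "horizontal"
    if 45 <= angle_deg < 90:
        return "mesioangular"
    if angle_deg == 90:
        return "vertical"
    if angle_deg > 90:
        return "distoangular"
    raise ValueError("target angle must be non-negative")
```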
In the embodiment of the invention, the images of the mandibular first molar, the mandibular second molar and the mandibular third molar are segmented from the full-jaw digital panoramic tomogram by the deep learning model, and the impaction type of the mandibular third molar is then judged from the angle formed between the long axis of the mandibular third molar and the reference line derived from the first molar and second molar buccal cusp lines, which improves the accuracy and efficiency of recognition.
Fig. 8 is a schematic structural diagram of a deep-learning-based mandibular third molar impaction type recognition system according to an embodiment of the present invention. Referring to fig. 8, the system specifically includes:
an acquisition module 1, configured to acquire a full-jaw digital panoramic tomogram and preprocess the full-jaw digital panoramic tomogram;
a segmentation module 2, configured to input the preprocessed full-jaw digital panoramic tomogram into a deep learning model and segment out images of the mandibular first molar, the mandibular second molar and the mandibular third molar;
an identification module 3, configured to determine the long axis of the mandibular third molar, the first molar buccal cusp line, the second molar buccal cusp line and a target angle from the images of the mandibular first molar, the mandibular second molar and the mandibular third molar; and
a judging module 4, configured to judge the impaction type of the mandibular third molar according to the target angle.
Further, the acquisition module 1 includes a preprocessing module 11 configured to crop the full-jaw digital panoramic tomogram and to normalize the HU values of the cropped image. The identification module 3 is further configured to determine the target angle reference line as follows: when the first molar buccal cusp line and the second molar buccal cusp line coincide, the coincident line is taken as the target angle reference line; when the two lines are parallel but not coincident, the midline halfway between them is taken as the target angle reference line; and when the two lines intersect, their angle bisector is taken as the target angle reference line.
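The cropping and HU value normalization performed by the preprocessing module 11 might look as follows; the crop box and the HU window bounds are illustrative assumptions, since the embodiment gives no concrete values:

```python
import numpy as np

def preprocess(tomogram, crop_box, hu_min=-1000.0, hu_max=3000.0):
    """Crop the panoramic tomogram to a region of interest and normalize
    its HU values to [0, 1]. crop_box = (top, bottom, left, right);
    the HU window is an assumed example, not specified by the patent."""
    top, bottom, left, right = crop_box
    roi = tomogram[top:bottom, left:right].astype(np.float32)
    roi = np.clip(roi, hu_min, hu_max)        # clamp to the HU window
    return (roi - hu_min) / (hu_max - hu_min)
```

Normalizing to a fixed range keeps inputs comparable across scans before they are fed to the deep learning model.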
In the embodiment of the invention, the images of the mandibular first molar, the mandibular second molar and the mandibular third molar are segmented from the full-jaw digital panoramic tomogram by the deep learning model, and the impaction type of the mandibular third molar is then judged from the angle formed between the long axis of the mandibular third molar and the reference line derived from the first molar and second molar buccal cusp lines, which improves the accuracy and efficiency of recognition.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present application. As used in this specification, the terms "a," "an," and "the" are intended to cover the plural as well as the singular, unless the context clearly dictates otherwise. The terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, or apparatus. Without further limitation, an element introduced by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, or apparatus comprising that element.
It should also be noted that the positional or positional relationship indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the positional or positional relationship shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the apparatus or element in question must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention. Unless specifically stated or limited otherwise, the terms "mounted," "connected," and the like are to be construed broadly and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the essence of the corresponding technical solutions from the technical solutions of the embodiments of the present invention.
Claims (8)
1. A deep-learning-based mandibular third molar impaction type recognition method, comprising the following steps:
S1, acquiring a full-jaw digital panoramic tomogram and preprocessing the full-jaw digital panoramic tomogram;
S2, inputting the preprocessed full-jaw digital panoramic tomogram into a deep learning model, and segmenting out images of the mandibular first molar, the mandibular second molar and the mandibular third molar;
S3, determining a long axis of the mandibular third molar, a first molar buccal cusp line, a second molar buccal cusp line and a target angle from the images of the mandibular first molar, the mandibular second molar and the mandibular third molar;
wherein S3 specifically comprises the following steps:
S31, determining the long axis of the mandibular third molar from the image of the mandibular third molar, the long axis passing through the center of the tooth in the crown-to-root direction;
S32, determining two buccal cusp points of the mandibular first molar and two buccal cusp points of the mandibular second molar from the images of the mandibular first molar and the mandibular second molar, determining the first molar buccal cusp line from the two buccal cusp points of the mandibular first molar, and determining the second molar buccal cusp line from the two buccal cusp points of the mandibular second molar;
S33, determining a target angle reference line according to the positional relationship between the first molar buccal cusp line and the second molar buccal cusp line;
S34, taking the angle formed in the counterclockwise direction between the target angle reference line and the long axis of the mandibular third molar as the target angle;
S4, judging the impaction type of the mandibular third molar according to the target angle;
wherein S4 specifically comprises the following steps:
S41, when the target angle is greater than or equal to 0° and less than 45°, determining the impaction type of the mandibular third molar to be horizontal impaction;
S42, when the target angle is greater than or equal to 45° and less than 90°, determining the impaction type of the mandibular third molar to be mesioangular impaction;
S43, when the target angle is equal to 90°, determining the impaction type of the mandibular third molar to be vertical impaction;
S44, when the target angle is greater than 90°, determining the impaction type of the mandibular third molar to be distoangular impaction.
2. The deep-learning-based mandibular third molar impaction type recognition method according to claim 1, wherein the preprocessing of the full-jaw digital panoramic tomogram in step S1 comprises:
S11, cropping the full-jaw digital panoramic tomogram;
S12, normalizing the HU values of the cropped full-jaw digital panoramic tomogram.
3. The deep-learning-based mandibular third molar impaction type recognition method according to claim 2, wherein the training method of the deep learning model comprises:
acquiring a dataset of full-jaw digital panoramic tomograms;
preprocessing the dataset, the preprocessing comprising cropping and HU value normalization;
annotating the contours of the mandibular first molar, the mandibular second molar and the mandibular third molar in the dataset;
inputting the annotated dataset into the deep learning model, and training the deep learning model until it converges.
4. The deep-learning-based mandibular third molar impaction type recognition method according to claim 3, wherein acquiring the dataset of full-jaw digital panoramic tomograms comprises:
performing data enhancement on the full-jaw digital panoramic tomograms to obtain the dataset, the data enhancement comprising spatial flipping, spatial scaling, angular rotation and translation.
5. The deep-learning-based mandibular third molar impaction type recognition method according to claim 1, wherein S33, determining the target angle reference line according to the positional relationship between the first molar buccal cusp line and the second molar buccal cusp line, comprises:
when the first molar buccal cusp line and the second molar buccal cusp line coincide, taking the coincident line as the target angle reference line;
when the first molar buccal cusp line and the second molar buccal cusp line are parallel but not coincident, taking the midline halfway between the two lines as the target angle reference line;
when the first molar buccal cusp line and the second molar buccal cusp line intersect, taking the angle bisector between the two lines as the target angle reference line.
6. A deep-learning-based mandibular third molar impaction type recognition system, comprising:
an acquisition module, configured to acquire a full-jaw digital panoramic tomogram and preprocess the full-jaw digital panoramic tomogram;
a segmentation module, configured to input the preprocessed full-jaw digital panoramic tomogram into a deep learning model and segment out images of a mandibular first molar, a mandibular second molar and a mandibular third molar;
an identification module, configured to determine a long axis of the mandibular third molar, a first molar buccal cusp line, a second molar buccal cusp line and a target angle from the images of the mandibular first molar, the mandibular second molar and the mandibular third molar; specifically, to determine the long axis of the mandibular third molar from the image of the mandibular third molar, the long axis passing through the center of the tooth in the crown-to-root direction; to determine two buccal cusp points of the mandibular first molar and two buccal cusp points of the mandibular second molar from the images of the mandibular first molar and the mandibular second molar, determine the first molar buccal cusp line from the two buccal cusp points of the mandibular first molar, and determine the second molar buccal cusp line from the two buccal cusp points of the mandibular second molar; to determine a target angle reference line according to the positional relationship between the first molar buccal cusp line and the second molar buccal cusp line; and to take the angle formed in the counterclockwise direction between the target angle reference line and the long axis of the mandibular third molar as the target angle;
a judging module, configured to judge the impaction type of the mandibular third molar according to the target angle; specifically, to determine the impaction type of the mandibular third molar to be horizontal impaction when the target angle is greater than or equal to 0° and less than 45°; mesioangular impaction when the target angle is greater than or equal to 45° and less than 90°; vertical impaction when the target angle is equal to 90°; and distoangular impaction when the target angle is greater than 90°.
7. The deep-learning-based mandibular third molar impaction type recognition system according to claim 6, wherein the acquisition module comprises:
a preprocessing module, configured to crop the full-jaw digital panoramic tomogram and to normalize the HU values of the cropped image.
8. The deep-learning-based mandibular third molar impaction type recognition system according to claim 6, wherein the identification module is further configured to determine the target angle reference line as follows:
when the first molar buccal cusp line and the second molar buccal cusp line coincide, taking the coincident line as the target angle reference line;
when the first molar buccal cusp line and the second molar buccal cusp line are parallel but not coincident, taking the midline halfway between the two lines as the target angle reference line;
when the first molar buccal cusp line and the second molar buccal cusp line intersect, taking the angle bisector between the two lines as the target angle reference line.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311394030.0A CN117132596B (en) | 2023-10-26 | 2023-10-26 | Mandibular third molar generation-retarding type identification method and system based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117132596A (en) | 2023-11-28 |
CN117132596B (en) | 2024-01-12 |
Family
ID=88854938
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN119184895A * | 2024-11-22 | 2024-12-27 | Jilin University | Second mesial root canal measurement method, device and storage medium |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107239649A (en) * | 2016-11-28 | 2017-10-10 | 可丽尔医疗科技(常州)有限公司 | A kind of method of oral cavity parametrization measurement |
CN110503652A (en) * | 2019-08-23 | 2019-11-26 | 北京大学口腔医学院 | Mandibular kinesiography and adjacent teeth and mandibular canal relationship determine method, apparatus, storage medium and terminal |
CN209790043U (en) * | 2018-12-04 | 2019-12-17 | 阳江市人民医院 | Be applied to guiding mechanism that lower jaw third molar divides crown to pull out |
CN110895816A (en) * | 2019-10-14 | 2020-03-20 | 广州医科大学附属口腔医院(广州医科大学羊城医院) | Method for measuring alveolar bone grinding amount before mandibular bone planting plan operation |
CN112927225A (en) * | 2021-04-01 | 2021-06-08 | 潘俞欢 | Wisdom tooth growth state auxiliary detection system based on artificial intelligence |
CN113449426A (en) * | 2021-07-01 | 2021-09-28 | 正雅齿科科技(上海)有限公司 | Digital tooth arrangement method, system, apparatus and medium |
CN114429070A (en) * | 2022-01-26 | 2022-05-03 | 华侨大学 | A structural optimization design method for molar prosthesis implants |
CN115137505A (en) * | 2022-07-01 | 2022-10-04 | 四川大学 | Manufacturing process of standard dental retention guide plate based on digitization technology |
CN218676304U (en) * | 2022-08-03 | 2023-03-21 | 同济大学附属口腔医院 | Three-dimensional visual impacted tooth model of extracing tooth |
CN116503389A (en) * | 2023-06-25 | 2023-07-28 | 南京邮电大学 | Automatic detection method of root external resorption |
Non-Patent Citations (2)
Title |
---|
"Fault Feature Extraction of Gearbox Based on Kurtosis-Weighted Singular Values";Xin Huang etal;《2018 Prognostics and System Health Management Conference》;第1274-1279页 * |
"三维解剖软件在口腔颌面外科课程整合中的应用评价";黄昕 等;《中国高等医学教育》;第117-119页 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Jader et al. | Deep instance segmentation of teeth in panoramic X-ray images | |
US11734825B2 (en) | Segmentation device and method of generating learning model | |
US11443423B2 (en) | System and method for constructing elements of interest (EoI)-focused panoramas of an oral complex | |
US20210118132A1 (en) | Artificial Intelligence System For Orthodontic Measurement, Treatment Planning, And Risk Assessment | |
US9710907B2 (en) | Diagnosis support system using panoramic radiograph and diagnosis support program using panoramic radiograph | |
US20220361992A1 (en) | System and Method for Predicting a Crown and Implant Feature for Dental Implant Planning | |
US20210357688A1 (en) | Artificial Intelligence System For Automated Extraction And Processing Of Dental Claim Forms | |
JP6830082B2 (en) | Dental analysis system and dental analysis X-ray system | |
JP2008520344A (en) | Method for detecting and correcting the orientation of radiographic images | |
CN117132596B (en) | Mandibular third molar generation-retarding type identification method and system based on deep learning | |
US12062170B2 (en) | System and method for classifying a tooth condition based on landmarked anthropomorphic measurements | |
Chen et al. | Detection of various dental conditions on dental panoramic radiography using faster R-CNN | |
Bodhe et al. | Design and development of deep learning approach for dental implant planning | |
CN116797731A (en) | Artificial intelligence-based oral cavity CBCT image section generation method | |
CN116468848A (en) | Three-dimensional dental model reconstruction method, device, electronic equipment and storage medium | |
Zhou et al. | NKUT: dataset and benchmark for pediatric mandibular wisdom teeth segmentation | |
CN117372376A (en) | Root canal morphology analysis method and system based on root canal long axis curve | |
CN118334043A (en) | Intraoral scanning image segmentation method based on deep learning | |
US20230013902A1 (en) | System and Method for Correcting for Distortions of a Diagnostic Image | |
CN119110961A (en) | Tooth position determination and 2D re-slice image generation using artificial neural networks | |
CN116823729A (en) | Alveolar bone absorption judging method based on SegFormer and oral cavity curved surface broken sheet | |
CN114913091A (en) | Oral cavity anti-jaw detection method based on machine learning and X-ray detector special for dentistry | |
Ahn et al. | Using artificial intelligence methods for dental image analysis: state-of-the-art reviews | |
US20250009483A1 (en) | System and Method for Aligning 3-D Imagery of a Patient's Oral Cavity in an Extended Reality (XR) Environment | |
US20220351813A1 (en) | Method and apparatus for training automatic tooth charting systems |
Legal Events
Code | Description |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |