Learning U-Net Based Multi-Scale Features in Encoding-Decoding for MR Image Brain Tissue Segmentation
Figure 1. Illustration of complex anatomical structures and variable shapes in the MRBrainS13 dataset. The first row lists brain MR images of different areas. The second row shows the corresponding ground-truth labels, where the colors denote different regions of the brain: red represents cerebrospinal fluid (CSF), green the gray matter (GM), and blue the white matter (WM). Other tissues are shown in gray.
Figure 2. The architecture of the proposed MSCD-UNet for brain segmentation, consisting of MP, MDP, and the multi-branch output module. Three input samples of size 32 × 32 × 32 were randomly cropped around the same center point from three modalities (T1, T2-FLAIR, T1_IR), so they share the corresponding position information. The solid red box marks the subnetwork for the T1 MR image.
Figure 3. Example of the T1_IR modality from the MRI of patient no. 5. In this example, the intensities of different brain tissues in different local brain regions are close to each other, as in the regions marked by the red boxes.
Figure 4. Encoding path with multi-branch max pooling.
Figure 5. The components of multi-branch dense prediction (MDP).
Figure 6. Example of MR images of different modalities and the labels manually marked by experts. The first three images from left to right are FLAIR, T1, and T1_IR. The fourth image shows the ground-truth labels, where the colors denote different brain tissues: red represents cerebrospinal fluid (CSF), green the gray matter (GM), and blue the white matter (WM). Gray denotes the other tissues.
Figure 7. Segmentation results of UNet and MSCD-UNet on the MRBrainS13 dataset. The rows show the segmentation results of different slices. From first to last column: FLAIR, manual segmentation, segmentation result of UNet, segmentation result of MSCD-UNet. The center patch in the solid yellow box of each segmentation result is highlighted. Each color denotes a different brain tissue class, i.e., gray matter (blue), white matter (green), cerebrospinal fluid (red), and other tissues (gray).
Figure 8. Segmentation results of UNet and MSCD-UNet on the IBSR dataset. The rows show the segmentation results of different slices. From first to last column: T1, manual segmentation, segmentation result of UNet, segmentation result of MSCD-UNet. The center patch in the solid yellow box of each segmentation result is highlighted. Each color denotes a different brain tissue class, i.e., gray matter (green), white matter (blue), cerebrospinal fluid (red), and other tissues (gray).
Abstract
1. Introduction
- We propose a novel network by introducing spatial-dimension and channel-dimension multi-scale CNN feature extractors into its encoding-decoding framework. In the encoding part, we propose a multi-branch pooling information extractor, called MP, to capture multi-scale spatial information for information compensation. As pooling tends to lose useful spatial information when the feature-map resolution is reduced, MP applies multiple max-pooling operations with different kernel sizes in parallel, reducing the information loss and collecting neighborhood information with a suitable receptive field;
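The parallel-pooling idea behind MP can be sketched in plain Python. This is a minimal illustration, not the paper's exact configuration: the kernel sizes, stride, and border clipping below are assumptions, and the learned fusion of the branches (e.g., channel concatenation followed by convolution) is left out.

```python
def max_pool2d(x, k, stride=2):
    """Max pooling over a 2D list `x` with kernel size k and the given
    stride. Windows are clipped at the borders (a simplification)."""
    h, w = len(x), len(x[0])
    out = []
    for i in range(0, h - stride + 1, stride):
        row = []
        for j in range(0, w - stride + 1, stride):
            # Take the maximum over the (clipped) k x k neighborhood.
            vals = [x[a][b]
                    for a in range(i, min(i + k, h))
                    for b in range(j, min(j + k, w))]
            row.append(max(vals))
        out.append(row)
    return out

def multi_branch_pool(x, kernels=(2, 3, 5)):
    """MP sketch: pool the same input with several kernel sizes in
    parallel. All branches share stride 2, so their outputs have one
    spatial size and could be fused downstream (fusion details are an
    assumption here)."""
    return [max_pool2d(x, k) for k in kernels]

# 4 x 4 toy feature map: larger kernels see larger neighborhoods.
fmap = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
branches = multi_branch_pool(fmap)
```

Because every branch uses the same stride, all branch outputs align spatially; the only difference is the receptive field each output value summarizes.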
- In the decoding part, we propose the multi-branch dense prediction (MDP), an information extractor that captures multi-scale channel information for information compensation. During the decoding phase, after the feature-map resolution is upsized, the spatial information in these decompressed feature maps is fixed and the detailed information is represented mainly in the channel dimension, so we consider the prediction results at adjacent positions to be related to the result at the center position. We divide the prediction result into multiple channel groups, and the multi-scale channel information of the center position is created by averaging these groups for the purpose of information compensation. In addition, we design a multi-branch output structure with MDP in the decoding part to form more accurate edge-preserving prediction maps by integrating the dense adjacent prediction features at different scales.
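The channel-grouping-and-averaging step of MDP can be sketched as follows. This is an illustrative assumption of the mechanism described above: the group sizes and the per-voxel prediction vector are hypothetical, and how the resulting group means are re-injected into the network (concatenation, weighting, etc.) is not shown.

```python
def multi_branch_dense_prediction(pred, group_sizes=(2, 4, 8)):
    """MDP sketch: `pred` is the per-voxel prediction vector of C
    channels. Split it into consecutive groups of several sizes and
    average within each group; the group means act as multi-scale
    channel summaries that compensate the center-position prediction.
    The group sizes here are illustrative, not the paper's values."""
    scales = []
    for g in group_sizes:
        # Partition the channel vector into groups of g channels each.
        groups = [pred[i:i + g] for i in range(0, len(pred), g)]
        # One averaged value per group: a coarser channel-wise view.
        scales.append([sum(grp) / len(grp) for grp in groups])
    return scales

# Toy 8-channel prediction vector for one voxel.
pred = [1, 2, 3, 4, 5, 6, 7, 8]
scales = multi_branch_dense_prediction(pred)
```

Each entry of `scales` is the same prediction vector seen at a coarser channel resolution, which is the multi-scale channel information the decoder integrates.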
2. Related Works
3. Materials and Methods
3.1. Model Overview
3.2. Multi-Branch Pooling and Multi-Branch Dense Prediction
3.3. Multi-Branch Output Modules and Loss Functions
3.4. Network Architecture
3.5. Dataset Introduction
3.6. Evaluation Metrics
3.7. Implementation Details
4. Results
4.1. Ablation for Multi-Branch Pooling (MP)
4.2. Ablation for Multi-Branch Output with Multi-Branch Dense Prediction (MDP)
4.3. Comparison with Existing State-of-the-Art Methods
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Wright, R.; Kyriakopoulou, V.; Ledig, C.; Rutherford, M.; Hajnal, J.; Rueckert, D.; Aljabar, P. Automatic quantification of normal cortical folding patterns from fetal brain MRI. Neuroimage 2014, 91, 21–32. [Google Scholar] [CrossRef] [Green Version]
- Wells, W.; Grimson, W.; Kikinis, R.; Jolesz, F. Adaptive segmentation of MRI data. IEEE Trans. Med. Imaging 1996, 15, 429–442. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Assefa, D.; Keller, H.; Ménard, C.; Laperriere, N.; Ferrari, R.J.; Yeung, I. Robust texture features for response monitoring of glioblastoma multiforme on T2-weighted and T2-FLAIR MR images: A preliminary investigation in terms of identification and segmentation. Med. Phys. 2010, 37, 1722–1736. [Google Scholar] [CrossRef] [PubMed]
- Qin, C.; Guerrero, R.; Bowles, C.; Chen, L.; Dickie, D.A.; del Valdes-Hernandez, M.; Wardlaw, J.; Rueckert, D. A large margin algorithm for automated segmentation of white matter hyperintensity. Pattern Recognit. 2018, 77, 150–159. [Google Scholar] [CrossRef] [Green Version]
- Moeskops, P.; Benders, M.J.; Chiţǎ, S.M.; Kersbergen, K.J.; Groenendaal, F.; de Vries, L.S.; Viergever, M.A.; Išgum, I. Automatic segmentation of MR brain images of preterm infants using supervised classification. Neuroimage 2015, 118, 628–641. [Google Scholar] [CrossRef]
- Maier, O.; Menze, B.H.; der Gablentz, J.; Häni, L.; Heinrich, M.P.; Liebrand, M.; Winzeck, S.; Basit, A.; Bentley, P.; Chen, L.; et al. ISLES 2015—A public evaluation benchmark for ischemic stroke lesion segmentation from multispectral MRI. Med. Image Anal. 2017, 35, 250–269. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Janakasudha, G.; Jayashree, P. Early Detection of Alzheimer’s Disease Using Multi-feature Fusion and an Ensemble of Classifiers. In Advanced Computing and Intelligent Engineering; Springer: Singapore, 2020; pp. 113–123. [Google Scholar]
- Ashburner, J.; Friston, K.J. Unified segmentation. Neuroimage 2005, 26, 839–851. [Google Scholar] [CrossRef]
- Rajchl, M.; Baxter, J.S.H.; McLeod, A.J.; Yuan, J.; Qiu, W.; Peters, T.M.; Khan, A.R. ASeTs: MAP-based Brain Tissue Segmentation using Manifold Learning and Hierarchical Max-Flow regularization. In Proceedings of the MICCAI Grand Challenge on MR Brain Image Segmentation (MRBrainS’13), Nagoya, Japan, 26 September 2013. [Google Scholar]
- Duchesne, S.; Pruessner, J.C.; Collins, D.L. Appearance-Based Segmentation of Medial Temporal Lobe Structures. Neuroimage 2002, 17, 515–531. [Google Scholar] [CrossRef] [PubMed]
- Ballanger, B.; Tremblay, L.; Sgambato-Faure, V.; Beaudoin-Gobert, M.; Lavenne, F.; Le Bars, D.; Costes, N. A multi-atlas based method for automated anatomical Macaca fascicularis brain MRI segmentation and PET kinetic extraction. Neuroimage 2013, 77, 26–43. [Google Scholar] [CrossRef] [PubMed]
- Heckemann, R.A.; Hajnal, J.V.; Aljabar, P.; Rueckert, D.; Hammers, A. Automatic anatomical brain MRI segmentation combining label propagation and decision fusion. Neuroimage 2006, 33, 115–126. [Google Scholar] [CrossRef] [PubMed]
- Scherrer, B.F.F.; Garbay, C.; Dojat, M. Distributed local MRF models for tissue and structure brain segmentation. IEEE Trans. Med. Imaging 2009, 28, 1278–1295. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Long, J.; Ma, G.; Liu, H.; Song, E.; Hung, C.-C.; Xu, X.; Jin, R.; Zhuang, Y.; Liu, D. Cascaded hybrid residual U-Net for glioma segmentation. Multimed. Tools Appl. 2020, 79, 24929–24947. [Google Scholar] [CrossRef]
- Chen, H.; Qin, Z.; Ding, Y.; Tian, L.; Qin, Z. Brain tumor segmentation with deep convolutional symmetric neural network. Neurocomputing 2020, 392, 305–313. [Google Scholar] [CrossRef]
- Havaei, M.; Davy, A.; Warde-Farley, D.; Biard, A.; Courville, A.; Bengio, Y.; Pal, C.; Jodoin, P.-M.; Larochelle, H. Brain tumor segmentation with Deep Neural Networks. Med. Image Anal. 2017, 35, 18–31. [Google Scholar] [CrossRef] [Green Version]
- Coupé, P.; Mansencal, B.; Clément, M.; Giraud, R.; de Senneville, B.D.; Ta, V.; Lepetit, V.; Manjon, J.V. AssemblyNet: A large ensemble of CNNs for 3D whole brain MRI segmentation. Neuroimage 2020, 219, 117026. [Google Scholar] [CrossRef] [PubMed]
- Qamar, S.; Jin, H.; Zheng, R.; Ahmad, P.; Usama, M. A variant form of 3D-UNet for infant brain segmentation. Future Gener. Comput. Syst. 2020, 108, 613–623. [Google Scholar] [CrossRef]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2015. [Google Scholar] [CrossRef] [Green Version]
- Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid Scene Parsing Network. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
- Chen, L.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking Atrous Convolution for Semantic Image Segmentation. arXiv 2017, arXiv:1706.05587. [Google Scholar]
- Chen, L.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018. [Google Scholar]
- Rajawat, A.S.; Jain, S. Fusion Deep Learning Based on Back Propagation Neural Network for Personalization. In Proceedings of the 2nd International Conference on Data, Engineering and Applications, Bhopal, India, 28–29 February 2020. [Google Scholar]
- Zhang, Z.; Zhang, X.; Peng, C.; Xue, X.; Sun, J. ExFuse: Enhancing Feature Fusion for Semantic Segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 273–288. [Google Scholar]
- Ma, W.; Gong, C.; Xu, S.; Zhang, X. Multi-scale spatial context-based semantic edge detection. Inf. Fusion 2020, 64, 238–251. [Google Scholar] [CrossRef]
- González-Villà, S.; Oliver, A.; Valverde, S.; Wang, L.; Zwiggelaar, R.; Lladó, X. A review on brain structures segmentation in magnetic resonance imaging. Artif. Intell. Med. 2016, 73, 45–69. [Google Scholar] [CrossRef] [Green Version]
- Makropoulos, A.; Counsell, S.J.; Rueckert, D. A review on automatic fetal and neonatal brain MRI segmentation. Neuroimage 2018, 170, 231–248. [Google Scholar] [CrossRef] [Green Version]
- Weisenfeld, N.I.; Warfield, S.K. Automatic segmentation of newborn brain MRI. Neuroimage 2009, 47, 564–572. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Anbeek, P.; Išgum, I.; Van Kooij, B.J.M.; Mol, C.P.; Kersbergen, K.J.; Groenendaal, F.; Viergever, M.A.; De Vries, L.S.; Benders, M.J.N.L. Automatic Segmentation of Eight Tissue Classes in Neonatal Brain MRI. PLoS ONE 2013, 8, e81895. [Google Scholar] [CrossRef]
- Artaechevarria, X.; Munoz-Barrutia, A.; Ortiz-De-Solorzano, C. Combination Strategies in Multi-Atlas Image Segmentation: Application to Brain MR Data. IEEE Trans. Med. Imaging 2009, 28, 1266–1277. [Google Scholar] [CrossRef] [PubMed]
- Wang, H.; Suh, J.W.; Das, S.R.; Pluta, J.B.; Craige, C.; Yushkevich, P.A. Multi-Atlas Segmentation with Joint Label Fusion. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 611–623. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Wang, L.; Gao, Y.; Shi, F.; Li, G.; Gilmore, J.H.; Lin, W.; Shen, D. LINKS: Learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images. Neuroimage 2015, 108, 160–172. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Zhang, Y.; Brady, M.; Smith, S. Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. IEEE Trans. Med. Imaging 2001, 20, 45–57. [Google Scholar] [CrossRef]
- Mishro, P.K.; Agrawal, S.; Panda, R.; Abraham, A. A Novel Type-2 Fuzzy C-Means Clustering for Brain MR Image Segmentation. IEEE Trans. Cybern. 2020, 1–12. [Google Scholar] [CrossRef]
- Zhang, W.; Li, R.; Deng, H.; Wang, L.; Lin, W.; Ji, S.; Shen, D. Deep convolutional neural networks for multi-modality isointense infant brain image segmentation. Neuroimage 2015, 108, 214–224. [Google Scholar] [CrossRef] [Green Version]
- Moeskops, P.; Viergever, M.A.; Mendrik, A.M.; De Vries, L.S.; Benders, M.J.N.L.; Isgum, I. Automatic Segmentation of MR Brain Images With a Convolutional Neural Network. IEEE Trans. Med. Imaging 2016, 35, 1252–1261. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Bao, S.; Chung, A.C. Multi-scale structured CNN with label consistency for brain MR image segmentation. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2018, 6, 113–117. [Google Scholar] [CrossRef]
- Nie, D.; Wang, L.; Gao, Y.; Shen, D. Fully convolutional networks for multi-modality isointense infant brain image segmentation. In Proceedings of the 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), Prague, Czech Republic, 13–16 April 2016; pp. 1342–1345. [Google Scholar]
- Xu, Y.; Géraud, T.; Bloch, I. From neonatal to adult brain MR image segmentation in a few seconds using 3D-like fully convolutional network and transfer learning. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 4417–4421. [Google Scholar]
- Chen, H.; Dou, Q.; Yu, L.; Qin, J.; Heng, P.A. VoxResNet: Deep voxelwise residual networks for brain segmentation from 3D MR images. Neuroimage 2017, 170, 446–455. [Google Scholar] [CrossRef] [PubMed]
- Dolz, J.; Gopinath, K.; Yuan, J.; Lombaert, H.; Desrosiers, C.; Ayed, I.B. HyperDense-Net: A Hyper-Densely Connected CNN for Multi-Modal Image Segmentation. IEEE Trans. Med. Imaging 2019, 38, 1116–1126. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Li, J.; Yu, Z.L.; Gu, Z.; Liu, H.; Li, Y. MMAN: Multi-modality aggregation network for brain segmentation from MR images. Neurocomputing 2019, 358, 10–19. [Google Scholar] [CrossRef]
- Chen, L.; Bentley, P.; Mori, K.; Misawa, K.; Fujiwara, M.; Rueckert, D. DRINet for Medical Image Segmentation. IEEE Trans. Med. Imaging 2018, 37, 2453–2462. [Google Scholar] [CrossRef]
- Lei, Z.; Qi, L.; Wei, Y.; Zhou, Y.; Zhang, Y. Infant Brain MRI Segmentation with Dilated Convolution Pyramid Downsampling and Self-Attention. arXiv 2019, arXiv:1912.12570. [Google Scholar]
- Yu, L.; Cheng, J.-Z.; Dou, Q.; Yang, X.; Chen, H.; Qin, J.; Heng, P.-A. Automatic 3D Cardiovascular MR Segmentation with Densely-Connected Volumetric ConvNets. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Quebec City, QC, Canada, 10–14 September 2017; Lecture Notes in Computer Science. Springer: Cham, Switzerland, 2017; Volume 2017, pp. 287–295. [Google Scholar]
- Bui, T.D.; Shin, J.; Moon, T. 3D Densely Convolutional Networks for Volumetric Segmentation. arXiv 2017, arXiv:1709.03199v2. [Google Scholar]
- Dolz, J.; Desrosiers, C.; Wang, L.; Yuan, J.; Shen, D.; Ben Ayed, I. Deep CNN ensembles and suggestive annotations for infant brain MRI segmentation. Comput. Med. Imaging Graph. 2020, 79, 101660. [Google Scholar] [CrossRef]
- Sun, L.; Ma, W.; Ding, X.; Huang, Y.; Liang, D.; Paisley, J. A 3D Spatially Weighted Network for Segmentation of Brain Tissue from MRI. IEEE Trans. Med. Imaging 2019, 39, 898–909. [Google Scholar] [CrossRef] [PubMed]
- Maas, A.L.; Hannun, A.Y.; Ng, A.Y. Rectifier nonlinearities improve neural network acoustic models. In Proceedings of the 30th International Conference on Machine Learning, Atlanta, GA, USA, 16–21 June 2013. [Google Scholar]
- Çiçek, Ö.; Abdulkadir, A.; Lienkamp, S.S.; Brox, T.; Ronneberger, O. 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2016, Athens, Greece, 17–21 October 2016; Lecture Notes in Computer Science. Ourselin, S., Joskowicz, L., Sabuncu, M., Unal, G., Wells, W., Eds.; Springer: Cham, Switzerland, 2016; Volume 9901. [Google Scholar] [CrossRef] [Green Version]
- Mendrik, A.M.; Vincken, K.L.; Kuijf, H.J.; Breeuwer, M.; Bouvy, W.H.; de Bresser, J.; Alansary, A.; de Bruijne, M.; Carass, A.; El-Baz, A.; et al. MRBrainS challenge: Online Evaluation Framework for Brain Image Segmentation in 3T MRI Scans. Comput. Intell. Neurosci. 2015, 2015, 813696. [Google Scholar] [CrossRef] [Green Version]
- Rohlfing, T.; Brandt, R.; Menzel, R.; Maurer, C.R. Evaluation of atlas selection strategies for atlas-based image segmentation with application to confocal microscopy images of bee brains. Neuroimage 2004, 21, 1428–1442. [Google Scholar] [CrossRef] [PubMed]
- Wang, L.; Nie, D.; Li, G.; Puybareau, É.; Dolz, J.; Zhang, Q.; Wang, F.; Xia, J.; Wu, Z.; Chen, J.; et al. Benchmark on Automatic Six-Month-Old Infant Brain Segmentation Algorithms: The iSeg-2017 Challenge. IEEE Trans. Med. Imaging 2019, 38, 2219–2230. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Rohlfing, T. Image Similarity and Tissue Overlaps as Surrogates for Image Registration Accuracy: Widely Used but Unreliable. IEEE Trans. Med. Imaging 2012, 31, 153–163. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Andermatt, S.; Pezold, S.; Cattin, P. Multi-Dimensional Gated Recurrent Units for the Segmentation of Biomedical 3D-Data. In Deep Learning and Data Labeling for Medical Applications; Springer: Berlin, Germany, 2016; pp. 142–151. [Google Scholar]
- Stollenga, M.F.; Byeon, W.; Liwicki, M.; Schmidhuber, J. Parallel Multi-Dimensional LSTM, with Application to Fast Biomedical Volumetric Image Segmentation. arXiv 2015, arXiv:1506.07452. [Google Scholar]
| Paper | Technique | Advantage | Limitation |
|---|---|---|---|
| [27,28,29,30,31] | Atlas-based registration | Robust to weak edges; strong adaptability. | Limited by fuzzy brain tissue edges, multi-source noise, and inhomogeneous intensity. |
| [31] | SVM | Preserves information in the training images; easy to implement. | Response time increases dramatically with dataset size; slow training; memory intensive; performance depends on patient-specific learning. |
| [17] | Discriminative dictionary learning | | |
| [32] | Hidden Markov random field | | |
| [33] | Clustering algorithm | | |
| [35,36,37] | Patch-wise CNN | Fast, easy to implement, and low resource consumption; captures discriminative features from a large input patch. | Sensitive to the patch size; lacks global information; difficult to converge on small datasets. |
| [18,41,43,44,45,46,47] | FCNN with dense connections | Extracts more reasonable and contextual information. | Long training time and large storage space; high computational complexity. |
| [48] | FCNN with richer spatial information | Learns the required weights for spatial feature extraction. | |
| FP Kernel Sizes (K) | SP Kernel Sizes (K) | GM DC | GM HD | GM AVD | WM DC | WM HD | WM AVD | CSF DC | CSF HD | CSF AVD |
|---|---|---|---|---|---|---|---|---|---|---|
| 7,5,3,2 | 7,5,3,2 | 82.47 | 2.10 | 7.99 | 83.59 | 3.61 | 7.71 | 78.22 | 3.30 | 8.61 |
| 5,3,2 | 5,3,2 | 83.12 | 1.94 | 7.79 | 85.40 | 2.89 | 7.52 | 81.45 | 2.58 | 8.59 |
| 5,3,2 | 3,2 | 86.08 | 1.71 | 6.76 | 89.02 | 1.76 | 6.71 | 84.15 | 2.24 | 7.82 |
| 5,3,2 | 2 | 84.50 | 1.75 | 7.01 | 86.04 | 2.75 | 7.17 | 83.23 | 2.44 | 8.02 |
| 3,2 | 3,2 | 85.98 | 1.90 | 7.22 | 88.90 | 2.32 | 6.59 | 84.63 | 2.18 | 8.13 |
| 3,2 | 2 | 82.25 | 2.03 | 8.07 | 84.34 | 3.48 | 7.42 | 83.01 | 2.98 | 8.66 |
| 2 | 2 | 85.94 | 1.85 | 7.09 | 88.83 | 2.39 | 6.82 | 83.79 | 2.31 | 8.30 |
| Method | GM DC | GM HD | GM AVD | WM DC | WM HD | WM AVD | CSF DC | CSF HD | CSF AVD |
|---|---|---|---|---|---|---|---|---|---|
| UNet | 85.94 | 1.85 | 7.09 | 88.83 | 2.39 | 6.82 | 83.79 | 2.31 | 8.30 |
| UNet_MP_Max | 86.08 | 1.71 | 6.76 | 89.02 | 1.76 | 6.71 | 84.15 | 2.24 | 7.82 |
| UNet_MP_Aver | 85.08 | 1.98 | 8.05 | 88.27 | 2.23 | 7.47 | 82.71 | 2.70 | 8.84 |
| Method | GM DC | GM HD | GM AVD | WM DC | WM HD | WM AVD | CSF DC | CSF HD | CSF AVD |
|---|---|---|---|---|---|---|---|---|---|
| B1 | 72.35 | 3.12 | 8.37 | 75.97 | 2.79 | 9.31 | 70.59 | 3.78 | 11.47 |
| B2 | 75.42 | 3.06 | 7.92 | 79.49 | 2.37 | 8.92 | 77.51 | 3.69 | 10.15 |
| B3 (UNet) | 85.94 | 1.85 | 7.09 | 88.83 | 2.39 | 6.82 | 83.79 | 2.31 | 8.30 |
| B1-MDP | 73.04 | 2.05 | 8.14 | 74.33 | 2.93 | 9.56 | 71.06 | 3.02 | 9.75 |
| B2-MDP | 76.08 | 2.19 | 7.66 | 76.02 | 2.53 | 8.74 | 77.15 | 3.24 | 9.82 |
| B3-MDP | 85.88 | 2.01 | 8.05 | 88.87 | 2.23 | 7.47 | 83.81 | 2.70 | 8.84 |
| B4 | 86.12 | 1.91 | 6.81 | 88.30 | 2.06 | 7.17 | 83.98 | 2.23 | 8.43 |
| B5 | 85.96 | 1.93 | 7.05 | 89.03 | 1.88 | 7.03 | 83.62 | 2.40 | 8.11 |
| B6 | 85.97 | 1.99 | 6.81 | 89.30 | 2.09 | 7.12 | 83.86 | 2.31 | 8.55 |
| MSCD-UNet | 86.41 | 1.52 | 5.76 | 89.18 | 2.13 | 7.21 | 84.29 | 2.16 | 7.73 |
| Method | GM DC | WM DC | CSF DC |
|---|---|---|---|
| UNet | 85.39 | 89.08 | 88.14 |
| MSCD-UNet | 88.42 | 90.31 | 90.57 |
| Method | GM DSC | GM ASD | WM DSC | WM ASD | CSF DSC | CSF ASD | Average DSC |
|---|---|---|---|---|---|---|---|
| UNet | 0.9136 | 0.354 | 0.8991 | 0.385 | 0.9470 | 0.135 | 0.9136 |
| Ours | 0.9217 | 0.322 | 0.9047 | 0.362 | 0.956 | 0.110 | 0.9274 |
| Method | GM DC | GM HD | GM AVD | WM DC | WM HD | WM AVD | CSF DC | CSF HD | CSF AVD |
|---|---|---|---|---|---|---|---|---|---|
| MSCD-UNet | 86.69 | 1.23 | 5.65 | 89.73 | 1.75 | 6.21 | 85.15 | 1.66 | 5.70 |
| Sun [48] | 86.58 | 1.29 | 5.75 | 89.87 | 1.73 | 5.47 | 84.81 | 1.84 | 6.84 |
| Li [42] | 86.40 | 1.38 | 5.72 | 89.70 | 1.88 | 6.28 | 84.86 | 2.03 | 6.75 |
| Dolz [41] | 86.33 | 1.34 | 6.19 | 89.46 | 1.78 | 6.03 | 83.42 | 2.26 | 7.31 |
| Chen [40] | 86.15 | 1.44 | 6.60 | 89.46 | 1.93 | 6.05 | 87.25 | 2.19 | 7.68 |
| Bui [46] | 86.06 | 1.52 | 6.60 | 89.00 | 2.11 | 5.54 | 83.76 | 2.32 | 6.77 |
| Geraud [39] | 86.03 | 1.44 | 6.05 | 89.29 | 1.86 | 5.83 | 82.44 | 2.28 | 9.03 |
| Andermatt [55] | 85.40 | 1.54 | 6.09 | 88.98 | 2.02 | 7.69 | 84.13 | 2.17 | 7.44 |
| Stollenga [56] | 84.89 | 1.67 | 6.35 | 88.53 | 2.07 | 5.93 | 83.47 | 2.22 | 8.63 |
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Long, J.-S.; Ma, G.-Z.; Song, E.-M.; Jin, R.-C. Learning U-Net Based Multi-Scale Features in Encoding-Decoding for MR Image Brain Tissue Segmentation. Sensors 2021, 21, 3232. https://doi.org/10.3390/s21093232