

Korean J Radiol. 2021 Dec;22(12):2073-2081. English.
Published online Oct 26, 2021.
Copyright © 2021 The Korean Society of Radiology
Review

An Open Medical Platform to Share Source Code and Various Pre-Trained Weights for Models to Use in Deep Learning Research

Sungchul Kim,1,* Sungman Cho,2,* Kyungjin Cho,1 Jiyeon Seo,1 Yujin Nam,1 Jooyoung Park,1 Kyuri Kim,2 Daeun Kim,2 Jeongeun Hwang,3 Jihye Yun,4 Miso Jang,1,3 Hyunna Lee,5 and Namkug Kim4,6
    • 1Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea.
    • 2Asan Institute for Life Sciences, Asan Medical Center, Seoul, Korea.
    • 3Department of Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea.
    • 4Department of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea.
    • 5Bigdata Research Center, Asan Institute for Life Science, Asan Medical Center, Seoul, Korea.
    • 6Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea.
Received February 27, 2021; Revised July 26, 2021; Accepted August 01, 2021.

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Deep learning-based applications have great potential to enhance the quality of medical services. The power of deep learning depends on open databases and innovation. Radiologists can act as important mediators between deep learning and medicine by simultaneously playing pioneering and gatekeeping roles. The application of deep learning technology in medicine is sometimes restricted by ethical or legal issues, including patient privacy and confidentiality, data ownership, and limitations in patient agreement. In this paper, we present an open platform, MI2RLNet, for sharing source code and various pre-trained weights for models to use in downstream tasks, including education, application, and transfer learning, to encourage deep learning research in radiology. In addition, we describe how to use this open platform in the GitHub environment. Our source code and models may contribute to further deep learning research in radiology, which may facilitate applications in medicine and healthcare, especially in medical imaging, in the near future. All code is available at https://github.com/mi2rl/MI2RLNet.

Keywords
Deep learning; Medical imaging; Open platform; Pre-trained model; Downstream task

INTRODUCTION

Deep learning is one of the most sought-after artificial intelligence (AI) research areas in the medical field [1]. Studies across multiple medical specialties have employed AI to perform diagnostic tasks at or above the level of human experts [2, 3, 4, 5, 6]. Developing AI algorithms that perform tasks typically associated with human intelligence requires one dataset of labeled cases with which the algorithms are trained and another dataset with which they are tested or validated. However, thus far, such datasets have not been easily accessible to AI developers because they often contain sensitive patient information whose release would compromise patient confidentiality and privacy [7].

Prior to its expansion into the medical field, deep learning was revolutionized by open databases and open innovation. Image databases such as MNIST [8], ImageNet [9], COCO [10], and CelebFaces [11] have served not only as datasets for developing deep learning models but also as sources of open code, networks, and pre-trained weights. This open innovation approach made it easy for research groups around the world to share their experiences, which increased the popularity of deep learning.

Although it is well known that transfer learning from a model pre-trained on ImageNet can be helpful when researchers have only a small dataset in radiology [12], radiology-related open-source code and openly shared trained models are still limited. In addition, constructing medical big data is difficult for local hospitals. To tackle this problem, in this study, we released pre-trained weights for various models for downstream tasks through an open platform, MI2RLNet. We hope that the released models will facilitate individual transfer learning. Figure 1 shows the overall architecture of MI2RLNet. We provide detailed examples of classification, detection, and segmentation tasks and summarize how to use the open platform in the GitHub environment.

Fig. 1
Overall architecture of MI2RLNet.
LR = left/right

Description of MI2RLNet

In this paper, we introduce MI2RLNet, which consists of nine models, each of which performs a specific task on medical images of various body parts acquired with different modalities. To train the MI2RLNet models, most data were obtained from open datasets, and some anonymized data were obtained from Asan Medical Center (AMC). The use of all AMC datasets was approved by the Institutional Review Board of AMC. The imaging data were anonymized in accordance with the Health Insurance Portability and Accountability Act Privacy Rule. Detailed information regarding our data is presented in Tables 1 and 2.

Table 1
Task Description of MI2RLNet with Models’ Performance

In this paper, we briefly introduce the five representative models that may be most widely used; the other four models not specifically addressed in this article are view classification on chest X-ray, liver segmentation on abdominal CT, polyp detection in endoscopy, and black-blood segmentation on MRI. To confirm the soundness of our implementation, we used network architectures that are commonly used in the computer vision field, such as U-Net [13], three-dimensional (3D) U-Net [14], ResNet [15], EfficientDet [16], and EfficientNet [17]. The performance of each model is listed in Table 1, and the details of each task, such as hyperparameters and training environments, are summarized in the Supplement.

Classification of Enhanced and Non-Enhanced CT Images

In actual clinical settings, the Digital Imaging and Communications in Medicine (DICOM) header can contain many errors, especially when images are collected from multiple centers; in particular, enhanced and non-enhanced CT acquisitions may be recorded incorrectly, which can strongly affect the accuracy of deep learning models. Therefore, we developed a classifier with a ResNet-50 [15] backbone to distinguish enhanced from non-enhanced chest CT images. The model achieved an accuracy of 0.99 on the test set.
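
For reference, the following is a minimal sketch of a binary classifier with a ResNet-50 backbone in PyTorch. This is not the released MI2RLNet implementation; the two-class head and the input shape are illustrative assumptions.

    import torch
    import torch.nn as nn
    from torchvision import models

    # ResNet-50 backbone with a two-class head (enhanced vs. non-enhanced)
    model = models.resnet50()
    model.fc = nn.Linear(model.fc.in_features, 2)

    # A CT slice replicated to three channels (illustrative input shape)
    x = torch.randn(1, 3, 224, 224)
    predicted_class = model(x).argmax(dim=1)  # 0 = non-enhanced, 1 = enhanced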

Detection of Left/Right (L/R) Mark on Chest Radiographs

A left/right (L/R) mark, typically a bright white letter, appears on every X-ray image and is unrelated to the status of the patient. However, the training of neural networks is usually affected substantially by high-intensity pixels; if the L/R mark is erased, the network can learn more representative features. Therefore, we developed an L/R mark detection model, which consists of EfficientDet [16] with an EfficientNet-B0 [17] backbone, to detect and erase the mark. The model achieved a mean average precision of 0.99 on the test set. Figure 2 shows the results of the L/R mark detection model.

Fig. 2
Results of the left/right mark detection model (A-C).
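
The erasing step can be as simple as overwriting the detected bounding box. Below is a minimal sketch, assuming the detector returns a box as (x_min, y_min, x_max, y_max) pixel coordinates; the fill strategy (median image intensity) is an illustrative choice, not necessarily the one used in MI2RLNet.

    import numpy as np

    def erase_mark(image: np.ndarray, box: tuple) -> np.ndarray:
        # Overwrite the detected L/R mark region with the median background intensity
        x_min, y_min, x_max, y_max = box
        erased = image.copy()
        erased[y_min:y_max, x_min:x_max] = np.median(image)
        return erased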

Lung Segmentation in Chest Radiographs

The aim of chest radiography is to identify abnormalities in the lung region; therefore, obtaining a lung mask from chest X-ray images is very important. This procedure can also serve as a pre-processing step for registration, focusing, or reducing false positives in irrelevant regions. In particular, registering the fields of view of chest X-ray images based on the lung region can compensate for differences in pose and breath-hold level between patients. Therefore, we developed a lung segmentation model based on U-Net [13] to obtain lung masks. The model achieved a dice similarity coefficient (DSC) of 0.98 on the test set. Figure 3 shows the results of the lung segmentation model.

Fig. 3
Results of the lung segmentation model.
A. Input. B. Ground truth. C. Predicted result.
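
For reference, the DSC reported for the segmentation models can be computed from two binary masks as follows; this is the standard definition, not code from the repository.

    import numpy as np

    def dice_similarity_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
        # DSC = 2 * |A intersect B| / (|A| + |B|) for binary masks of equal shape
        pred, truth = pred.astype(bool), truth.astype(bool)
        denominator = pred.sum() + truth.sum()
        if denominator == 0:
            return 1.0  # both masks empty: perfect agreement by convention
        return 2.0 * np.logical_and(pred, truth).sum() / denominator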

Kidney and Tumor Segmentation

Obtaining kidney and tumor masks is very important because of the wide variety in their morphology and the relationship between morphology and surgical planning and outcomes. Therefore, we developed a kidney and tumor segmentation model, which consists of a cascaded 3D U-Net [14], to obtain kidney and tumor masks. Figure 4 shows the cascaded 3D U-Net architecture. The model placed 33rd in the KiTS19 challenge [19] with a total DSC of 0.83 (0.96 for the kidney and 0.70 for the tumor) on the test set. Figure 5 shows the results of the kidney and tumor segmentation model.
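
The cascade works in two stages: a coarse network localizes the kidneys in the whole volume, and a fine network segments the kidney and tumor within the cropped region. A minimal sketch of this inference flow is shown below; coarse_net and fine_net are hypothetical stand-ins for the two trained networks, not the released API.

    import numpy as np

    def cascaded_segmentation(volume, coarse_net, fine_net):
        # Stage 1: rough kidney mask over the whole volume (assumed non-empty)
        coarse_mask = coarse_net(volume)
        z, y, x = np.nonzero(coarse_mask)
        # Crop the ROI suggested by stage 1
        roi = volume[z.min():z.max() + 1, y.min():y.max() + 1, x.min():x.max() + 1]
        # Stage 2: detailed kidney and tumor labels within the ROI
        return fine_net(roi)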

Fig. 4
Overall architecture of the kidney and tumor segmentation model.
A. Cascaded 3D U-Net with SE-blocks for segmentation of the kidney and tumor on abdominal CT. The first network detects ROIs in the entire image, and the second network segments detailed labels within the ROIs. B. 3D U-Net architecture with SE-block. ReLU = rectified linear unit, ROIs = regions of interest, SE-block = Squeeze-and-Excitation block, 3D = three-dimensional
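
For the SE-block in Figure 4B, a squeeze-and-excitation module can be sketched in PyTorch as follows; the reduction ratio of 16 follows the common default and may differ from the released implementation.

    import torch.nn as nn

    class SEBlock3D(nn.Module):
        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            self.squeeze = nn.AdaptiveAvgPool3d(1)  # global average pooling
            self.excite = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x):
            b, c = x.shape[:2]
            # Channel-wise attention weights, broadcast back over the volume
            w = self.excite(self.squeeze(x).view(b, c)).view(b, c, 1, 1, 1)
            return x * w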

Fig. 5
Results of the kidney and tumor segmentation model.
A. Ground truth. B. Predicted result.

Brain Extraction Tool for MRI

Brain extraction is a fundamental step in brain analysis. Skull stripping is usually performed to enable effective analysis, but manual skull stripping is time-consuming. Therefore, we developed a fully automated segmentation model based on U-Net [13] to extract the brain area on T1-weighted MRI. The model achieved a DSC of 0.95 on the test set. Figure 6 shows the results of the brain extraction model.

Fig. 6
Results of the brain extraction model for different diseases.
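
Once the brain mask is predicted, skull stripping reduces to masking the input volume. A minimal sketch, assuming NIfTI files and illustrative file names:

    import nibabel as nib

    t1 = nib.load("t1.nii.gz")
    mask = nib.load("brain_mask.nii.gz").get_fdata() > 0.5  # binarize the prediction
    stripped = t1.get_fdata() * mask  # zero out non-brain voxels
    nib.save(nib.Nifti1Image(stripped, t1.affine), "t1_brain.nii.gz")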

How To Use MI2RLNet

In this section, we briefly describe how to use MI2RLNet. One module, liver segmentation, is presented with example code; specific instructions for the other modules can be found in the README of the GitHub repository. We also describe how MI2RLNet can be fine-tuned to fit custom datasets. Although the MI2RLNet modules perform well on their original test sets, performance on new data may be lower than expected; in this case, a module can be fine-tuned on the new dataset to achieve robust performance on extended tasks. For more detailed information on each model, please refer to the Supplement, which includes typical models for classification, detection, and segmentation on chest X-ray, CT, and MRI, respectively. This paper can be cited as a reference when using the models in future research.

Inference

MI2RLNet is implemented so that users can apply it easily. Each module is well modularized; thus, the desired results can be obtained with only a few lines of code. Using this concise library takes only three steps: initialize the module, set the model with pre-trained weights, and run. A “Checker” object is also provided, which checks the data type for each module and sets up resources such as graphics processing units. Although the modules in MI2RLNet are implemented in different deep learning frameworks, such as TensorFlow [21] and PyTorch [22], we provide an integrated environment that is independent of these frameworks.

A guideline for this simple inference is presented in Algorithm 1, with liver segmentation as an example. Detailed instructions on how to use the other modules can be found in the README of each module in the GitHub repository.

Algorithm 1. Guideline for the inference of MI2RLNet

    # 1. Initialize
    # Import the module you want to use
    from medimodule.Liver import LiverSegmentation

    # 2. Set the model with weight
    model = LiverSegmentation("path/of/weight")

    # 3. Run
    # Get a result of the image
    # 'save_path' must be set to save the result in the module
    image, mask = model.predict("path/of/image", save_path="path/for/save")

Fine-Tuning

Although we trained our models on large datasets for robustness, their accuracy may be affected by the specific CT, X-ray, and MR devices used. Different standards between hospitals can also affect model accuracy. Fine-tuning from the released model weights stabilizes training from the beginning and can increase the accuracy of the existing models on new data. Detailed instructions on how to fine-tune each model can be found in the README of each module in the GitHub repository. Algorithm 2 shows an example of fine-tuning the liver segmentation model.

Algorithm 2. Guideline for the fine-tuning of MI2RLNet

    # 1. Initialize
    # Import the module you want to fine-tune
    from medimodule.Liver import LiverSegmentation

    # 2. Set the model with its weight for fine-tuning
    # If you want to train a randomly initialized model,
    # you do not have to set the weight
    model = LiverSegmentation("path/of/weight")

    # 3. Run
    # Construct your custom training code
    ...
    model.train()
    ...
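
For users who prefer to work at the framework level rather than through medimodule, the fine-tuning idea can also be sketched in plain PyTorch: load existing weights, freeze the early layers, and train the rest at a small learning rate. The model choice, layer names, and learning rate below are illustrative assumptions, not the MI2RLNet training code.

    import torch
    from torchvision import models

    model = models.resnet50()
    # Start from previously trained weights (path placeholder as in Algorithm 2)
    model.load_state_dict(torch.load("path/of/weight"), strict=False)
    # Freeze everything except the last residual block and the classifier head
    for name, p in model.named_parameters():
        if not name.startswith(("layer4", "fc")):
            p.requires_grad = False
    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4)
    # ... then run a standard training loop on the custom dataset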

DISCUSSION

Deep learning is becoming increasingly important in the medical field and may enable accurate predictions from the analysis of complex medical data. Deep learning requires sufficient training data and algorithms that optimize performance on the training dataset before testing. Large datasets are required to achieve sufficient accuracy, and the data must be appropriately labeled. The difficulties in collecting data from medical records are as follows: access to relevant medical records may be difficult, particularly if treatment is spread across multiple providers; the administrative process for acquiring access to medical records is time-consuming; filtering the data to meet research goals is a heavy burden; and the accuracy of some data collection systems may be questionable. Owing to these difficulties, deep learning-based studies have mainly been conducted in large hospitals.

Deep learning technologies related to medical data organization should be widely shared to enable deep learning-based research to scale in the medical field. We released pre-trained deep learning models that can be applied in various medical deep learning studies, which could help researchers start their own deep learning research with fewer obstacles. For example, chest radiographs may have multiple views, such as posterior-anterior and lateral views, which increases the difficulty of training a deep learning model; one of the MI2RLNet models shared here could be useful in other hospitals after fine-tuning.

To develop a deep learning solution that generalizes well, medical images should be processed with the characteristics of each modality in mind. We trained models for various tasks and modalities through the classification, detection, and segmentation of X-ray, CT, and MR images. The pre-processing code we introduced could help colleagues start new tasks, and transfer learning from the trained models allows them to start with smaller datasets or train their models faster.

Because we trained the models using public datasets and non-public datasets from AMC, the trained models may not cover all possible sources of variability. In the future, we will keep our open platform sustainable by expanding it to new tasks, covering more datasets, and incorporating feedback from colleagues worldwide.

Supplement

The Supplement is available with this article at https://doi.org/10.3348/kjr.2021.0170.


Notes

Conflicts of Interest: Namkug Kim, who is on the editorial board of the Korean Journal of Radiology, was not involved in the editorial evaluation or decision to publish this article. All remaining authors have declared no conflicts of interest.

Author Contributions:

  • Conceptualization: Sungman Cho, Sungchul Kim.

  • Data curation: Sungchul Kim, Sungman Cho, Kyungjin Cho, Jiyeon Seo, Yujin Nam, Jooyoung Park, Kyuri Kim, Daeun Kim.

  • Formal analysis: Jeongeun Hwang, Jihye Yun, Miso Jang, Hyunna Lee.

  • Funding acquisition: Namkug Kim.

  • Methodology: Sungchul Kim, Sungman Cho, Kyungjin Cho, Jiyeon Seo, Yujin Nam, Jooyoung Park, Kyuri Kim, Daeun Kim, Jihye Yun, Miso Jang, Hyunna Lee, Namkug Kim.

  • Supervision: Namkug Kim.

  • Writing—original draft: Sungman Cho, Sungchul Kim, Namkug Kim.

  • Writing—review & editing: Sungman Cho, Sungchul Kim, Namkug Kim.

Funding Statement: This study was supported by a grant from the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare, Republic of Korea (HI18C2383, HI18C0022).

Acknowledgments

All authors belong to the Medical Imaging and Intelligent Reality Lab (MI2RL). We thank Dr. Yongsik Sim (Department of Radiology, Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea) for developing two of the models: the chest radiograph view classification model and the enhancement classification model.

Availability of Data and Material

Data sharing does not apply to this article as no datasets were generated or analyzed during the current study.

References

    1. Kennedy AD, Leigh-Brown AP, Torgerson DJ, Campbell J, Grant A. Resource use data by patient report or hospital records: do they agree? BMC Health Serv Res 2002;2:2.
    2. Jiang F, Jiang Y, Zhi H, Dong Y, Li H, Ma S, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol 2017;2:230–243.
    3. Shen D, Wu G, Suk HI. Deep learning in medical image analysis. Annu Rev Biomed Eng 2017;19:221–248.
    4. Hesamian MH, Jia W, He X, Kennedy P. Deep learning techniques for medical image segmentation: achievements and challenges. J Digit Imaging 2019;32:582–596.
    5. Hwang EJ, Nam JG, Lim WH, Park SJ, Jeong YS, Kang JH, et al. Deep learning for chest radiograph diagnosis in the emergency department. Radiology 2019;293:573–580.
    6. Weikert T, Rapaka S, Grbic S, Re T, Chaganti S, Winkel DJ, et al. Prediction of patient management in COVID-19 using deep learning-based fully automated extraction of cardiothoracic CT metrics and laboratory findings. Korean J Radiol 2021;22:994–1004.
    7. Abouelmehdi K, Beni-Hessane A, Khaloufi H. Big healthcare data: preserving security and privacy. J Big Data 2018;5:1.
    8. Lecun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE 1998;86:2278–2324.
    9. Deng J, Dong W, Socher R, Li L, Kai L, Fei-Fei L. ImageNet: a large-scale hierarchical image database. Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition; 2009 Jun 20-25; Miami, FL, USA. IEEE; 2009. pp. 248-255.
    10. Lin TY, Maire M, Belongie S, Hays J, Perona P, Ramanan D, et al. Microsoft COCO: common objects in context. In: Fleet D, Pajdla T, Schiele B, Tuytelaars T, editors. Computer vision--ECCV 2014. Cham: Springer International Publishing; 2014. pp. 740-755.
    11. Liu Z, Luo P, Wang X, Tang X. Deep learning face attributes in the wild. Proceedings of the IEEE International Conference on Computer Vision (ICCV); 2015 Dec 7-13; Washington, DC, USA. IEEE; 2015. pp. 3730-3738.
    12. Ke A, Ellsworth W, Banerjee O, Ng AY, Rajpurkar P. CheXtransfer: performance and parameter efficiency of ImageNet models for chest X-ray interpretation. Proceedings of the Conference on Health, Inference, and Learning; 2021 Apr 8-10; Virtual. CHIL; 2021. pp. 116-124.
    13. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, Wells W, Frangi A, editors. Medical image computing and computer-assisted intervention – MICCAI 2015. Lecture notes in computer science, vol 9351. Cham: Springer; 2015. pp. 234-241.
    14. Çiçek Ö, Abdulkadir A, Lienkamp SS, Brox T, Ronneberger O. 3D U-Net: learning dense volumetric segmentation from sparse annotation. In: Ourselin S, Joskowicz L, Sabuncu M, Unal G, Wells W, editors. Medical image computing and computer-assisted intervention – MICCAI 2016. Lecture notes in computer science, vol 9901. Cham: Springer; 2016. pp. 424-432.
    15. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2016 Jun 27-30; Las Vegas, NV, USA. IEEE; 2016. pp. 770-778.
    16. Tan M, Pang R, Le QV. EfficientDet: scalable and efficient object detection. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2020 Jun 13-19; Seattle, WA, USA. IEEE; 2020. pp. 10778-10787.
    17. Tan M, Le Q. EfficientNet: rethinking model scaling for convolutional neural networks. Proceedings of the 36th International Conference on Machine Learning; 2019 Jun 9-15; Long Beach, CA, USA. PMLR; 2019. pp. 6105-6114.
    18. Shiraishi J, Katsuragawa S, Ikezoe J, Matsumoto T, Kobayashi T, Komatsu K, et al. Development of a digital image database for chest radiographs with and without a lung nodule: receiver operating characteristic analysis of radiologists’ detection of pulmonary nodules. AJR Am J Roentgenol 2000;174:71–74.
    19. Heller N, Sathianathen N, Kalapara A, Walczak E, Moore K, Kaluzniak H, et al. The KiTS19 challenge data: 300 kidney tumor cases with clinical context, CT semantic segmentations, and surgical outcomes. arXiv preprint 2019:arXiv:1904.00445.
    20. Marcus DS, Fotenos AF, Csernansky JG, Morris JC, Buckner RL. Open access series of imaging studies: longitudinal MRI data in nondemented and demented older adults. J Cogn Neurosci 2010;22:2677–2684.
    21. Abadi M, Agarwal A, Barham P, Brevdo E, Chen Z, Citro C, et al. TensorFlow: large-scale machine learning on heterogeneous distributed systems. arXiv preprint 2016:arXiv:1603.04467.
    22. Paszke A, Gross S, Massa F, Lerer A, Bradbury J, Chanan G, et al. PyTorch: an imperative style, high-performance deep learning library. In: Wallach H, Larochelle H, Beygelzimer A, d'Alché-Buc F, Fox E, Garnett R, editors. Advances in neural information processing systems 32 (NeurIPS 2019). Vancouver: NeurIPS; 2019. pp. 8026-8037.
