Abstract
Resective surgery may be curative for drug-resistant focal epilepsy, but only 40% to 70% of patients achieve seizure freedom after surgery. Retrospective quantitative analysis could elucidate patterns in resected structures and patient outcomes to improve resective surgery. However, the resection cavity must first be segmented on the postoperative MR image. Convolutional neural networks (CNNs) are the state-of-the-art image segmentation technique, but they require large amounts of annotated data for training. Annotation of medical images is a time-consuming process that requires highly trained raters and often suffers from high inter-rater variability. Self-supervised learning can be used to generate training instances from unlabeled data. We developed an algorithm to simulate resections on preoperative MR images. We curated a new dataset, EPISURG, comprising 431 postoperative and 269 preoperative MR images from 431 patients who underwent resective surgery. In addition to EPISURG, we used three public datasets comprising 1813 preoperative MR images for training. We trained a 3D CNN on artificially resected images created on the fly during training, using images from 1) EPISURG, 2) the public datasets and 3) both. To evaluate the trained models, we calculated the Dice score (DSC) between model segmentations and 200 manual annotations performed by three human raters. The model trained on data with manual annotations obtained a median (interquartile range) DSC of 65.3 (30.6). The DSC of our best-performing model, trained with no manual annotations, was 81.7 (14.2). For comparison, inter-rater agreement between human annotators was 84.0 (9.9). We demonstrate that CNNs trained using simulated resection cavities can accurately segment real resection cavities without manual annotations.
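As a rough illustration of the evaluation metric and the self-supervised training setup described above, the sketch below computes a Dice score between two binary masks and assembles one training pair on the fly. The function names and the `simulate_resection` callable are hypothetical placeholders of our own, not the authors' released implementation.

```python
import numpy as np


def dice_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice score (in percent) between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    denominator = pred.sum() + target.sum()
    if denominator == 0:
        return 100.0  # both masks empty: treat as perfect agreement
    intersection = np.logical_and(pred, target).sum()
    return float(100.0 * 2.0 * intersection / denominator)


def make_training_pair(preop_image: np.ndarray, simulate_resection):
    """Build one self-supervised training instance on the fly.

    `simulate_resection` is a stand-in for a resection-simulation transform:
    it carves a synthetic cavity into a preoperative image and returns the
    modified image together with the cavity mask used as the training label.
    """
    resected_image, cavity_mask = simulate_resection(preop_image)
    return resected_image, cavity_mask
```

Calling `dice_score(model_mask, rater_mask)` on binary volumes would yield percentage values comparable to those reported above (e.g. 81.7 for the best self-supervised model).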
Acknowledgments
The authors wish to thank Luis García-Peraza Herrera and Reuben Dorent for the fruitful discussions.
This work is supported by the UCL EPSRC Centre for Doctoral Training in Medical Imaging (EP/L016478/1). This publication represents in part independent research commissioned by the Wellcome Trust Health Innovation Challenge Fund (WT106882). The views expressed in this publication are those of the authors and not necessarily those of the Wellcome Trust.
This work uses data provided by patients and collected by the National Health Service (NHS) as part of their care and support.
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Pérez-García, F., Rodionov, R., Alim-Marvasti, A., Sparks, R., Duncan, J.S., Ourselin, S. (2020). Simulation of Brain Resection for Cavity Segmentation Using Self-supervised and Semi-supervised Learning. In: Martel, A.L., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. MICCAI 2020. Lecture Notes in Computer Science, vol. 12263. Springer, Cham. https://doi.org/10.1007/978-3-030-59716-0_12
DOI: https://doi.org/10.1007/978-3-030-59716-0_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-59715-3
Online ISBN: 978-3-030-59716-0
eBook Packages: Computer Science, Computer Science (R0)