Abstract
Although automatic prediction of Alzheimer’s disease (AD) from Magnetic Resonance Imaging (MRI) has shown excellent performance, Machine Learning (ML) algorithms often achieve high accuracy at the expense of interpretability of findings. Building ML models that are understandable is of fundamental importance in a clinical context, especially for the early diagnosis of neurodegenerative diseases. Recently, a novel interpretable algorithm has been proposed, the Explainable Boosting Machine (EBM), a glass-box model based on Generalized Additive Models plus Interactions (GA2Ms) and designed to reach optimal accuracy while remaining intelligible. The aim of the present study was therefore to assess, for the first time, the reliability of EBM in predicting conversion to AD and its ability to explain its predictions. In particular, two hundred brain MRIs from ADNI of Mild Cognitive Impairment (MCI) patients, equally divided into stable (sMCI) and progressive (pMCI), were processed with FreeSurfer to extract twelve hippocampal subfield volumes, which have already shown good AD prediction power. EBM models with and without pairwise interactions were built on a training set (80%) comprising these volumes, and their global explanations were investigated. Classifier performance was evaluated with AUC-ROC on the test set (20%), and local explanations were given for four randomly selected test patients (sMCIs and pMCIs, correctly classified and misclassified). EBMs without and with pairwise interactions reached accuracies of 80.5% and 84.2%, respectively, demonstrating high prediction accuracy. Moreover, EBM provided practical clinical knowledge on why a patient was correctly or incorrectly predicted as converting to AD and which hippocampal subfields drove the prediction.
Alzheimer’s Disease Neuroimaging Initiative—Data used in preparation of this article were obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this report. A complete listing of ADNI investigators can be found at: http://adni.loni.usc.edu/wp-content/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf.
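The core idea behind the EBM described above can be illustrated with a minimal sketch: an additive model whose per-feature shape functions are learned by cyclic gradient boosting of single-feature stumps, so each feature’s contribution to the prediction can be read off directly. This is a toy reconstruction of the technique for intuition only, not the InterpretML implementation used in the paper; the synthetic data, class `TinyEBM`, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_stump(x, r):
    """Best single-threshold split of feature x minimizing squared error to residuals r."""
    best_sse, best = np.inf, (None, 0.0, 0.0)
    for t in np.unique(x)[:-1]:            # candidate thresholds
        left, right = r[x <= t], r[x > t]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_sse, best = sse, (t, left.mean(), right.mean())
    return best

class TinyEBM:
    """Toy glass-box additive classifier: one boosted shape function per feature."""

    def __init__(self, n_rounds=50, lr=0.1):
        self.n_rounds, self.lr = n_rounds, lr

    def fit(self, X, y):
        n, d = X.shape
        self.stumps = [[] for _ in range(d)]   # per-feature shape functions
        F = np.zeros(n)                        # additive log-odds score
        for _ in range(self.n_rounds):
            for j in range(d):                 # cyclic: one feature at a time
                r = y - sigmoid(F)             # negative gradient of the log-loss
                t, lv, rv = fit_stump(X[:, j], r)
                if t is None:                  # constant feature, nothing to split
                    continue
                F += np.where(X[:, j] <= t, lv, rv) * self.lr
                self.stumps[j].append((t, lv * self.lr, rv * self.lr))
        return self

    def decision(self, X):
        # The score is a sum of per-feature contributions: this additivity is
        # what makes global and local explanations directly readable.
        F = np.zeros(len(X))
        for j, stumps in enumerate(self.stumps):
            for t, lv, rv in stumps:
                F += np.where(X[:, j] <= t, lv, rv)
        return F

    def predict(self, X):
        return (self.decision(X) > 0).astype(int)

# Illustrative synthetic "subfield volumes": the class depends additively on features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = TinyEBM().fit(X, y)
```

Because the learned score decomposes into one stump list per feature, summing a single feature’s stump contributions over its value range recovers that feature’s shape function, which is the mechanism the paper exploits to show which hippocampal subfields drove each prediction; the pairwise-interaction variant (GA2M) additionally boosts stumps over selected feature pairs.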
© 2021 Springer Nature Switzerland AG
Sarica, A., Quattrone, A., Quattrone, A. (2021). Explainable Boosting Machine for Predicting Alzheimer’s Disease from MRI Hippocampal Subfields. In: Mahmud, M., Kaiser, M.S., Vassanelli, S., Dai, Q., Zhong, N. (eds) Brain Informatics. BI 2021. Lecture Notes in Computer Science(), vol 12960. Springer, Cham. https://doi.org/10.1007/978-3-030-86993-9_31
DOI: https://doi.org/10.1007/978-3-030-86993-9_31
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-86992-2
Online ISBN: 978-3-030-86993-9
eBook Packages: Computer Science (R0)