Med-tex: Transferring and explaining knowledge with less data from pretrained medical imaging models
arXiv preprint arXiv:2008.02593, 2020•arxiv.org
Deep learning methods usually require a large amount of training data and lack interpretability. In this paper, we propose a novel knowledge distillation and model interpretation framework for medical image classification that jointly solves these two issues. Specifically, to address the data-hungry issue, a small student model is learned with less data by distilling knowledge from a cumbersome pretrained teacher model. To interpret the teacher model and assist the learning of the student, an explainer module is introduced to highlight the regions of an input that are important for the predictions of the teacher model. Furthermore, the joint framework is trained in a principled manner derived from an information-theoretic perspective. Our framework outperforms state-of-the-art methods on both the knowledge distillation and model interpretation tasks on a fundus dataset.
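The core distillation step described above (a small student learning from a cumbersome teacher's soft predictions, combined with hard-label supervision) can be sketched with the standard temperature-scaled distillation objective. Note this is an illustrative sketch, not the paper's information-theoretic formulation; the temperature `T` and weight `alpha` are assumed hyperparameters.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Weighted sum of soft-target KL term and hard-label cross-entropy.

    student_logits, teacher_logits: (batch, classes) arrays
    labels: (batch,) integer class indices
    """
    # Soft-target term: KL(teacher || student) at temperature T,
    # rescaled by T^2 as in Hinton-style distillation.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kd = (p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))).sum(-1).mean() * T * T
    # Hard-label cross-entropy on the student at T = 1.
    p = softmax(student_logits)
    ce = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * kd + (1 - alpha) * ce
```

When the student already matches the teacher, the KL term vanishes and only the hard-label term remains; a mismatched student is penalized more, which is what drives knowledge transfer with fewer labeled examples.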