Abstract
This paper presents ArgEML, a general framework and methodology for explainable machine learning based on argumentation. Argumentation's flexible form of reasoning under unknown and incomplete information, together with its direct link to justification and explanation, enables a natural form of explainable machine learning. In this form of learning, explanations not only support the final predictions but also play a significant role in the learning process itself. The paper defines the basic theoretical notions of ArgEML together with its main machine learning operators and method of application. It describes how an argumentation-based approach yields a flexible form of learning: one that recognizes cases that are difficult with respect to the currently available training data and separates them out, not as definite predictive cases, but as cases where it is more appropriate to explainably analyze the alternative predictions. Using the argumentation-based explanations, we can partition the problem space into groups characterized by the basic argumentative tension between arguments for and against the alternatives. The paper presents a first evaluation of the approach by applying the ArgEML learning methodology to both artificial and real-life datasets.
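To make the mechanism described above concrete, the following is a minimal, hypothetical sketch, not the paper's actual implementation or API: arguments are rules that fire on an example and support a class, conflicts are resolved by a priority relation (an exception rule defeats the rules it attacks), and unresolved conflicts between equally strong arguments are flagged as "dilemma" cases rather than forced into a single prediction. All identifiers (`Argument`, `classify`, the toy rules) are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional, Tuple

@dataclass
class Argument:
    name: str
    condition: Callable[[Dict[str, float]], bool]  # when the rule fires on an example
    prediction: str                                # the class the argument supports
    priority: int                                  # higher-priority arguments defeat lower ones

def classify(example: Dict[str, float], arguments: List[Argument]) -> Tuple[Optional[str], str]:
    """Return a (prediction, explanation) pair, or flag a dilemma case."""
    applicable = [a for a in arguments if a.condition(example)]
    if not applicable:
        return None, "no argument applies"
    top = max(a.priority for a in applicable)
    strongest = [a for a in applicable if a.priority == top]
    classes = {a.prediction for a in strongest}
    if len(classes) == 1:
        cls = classes.pop()
        return cls, "supported by " + ", ".join(a.name for a in strongest)
    # Equally strong arguments support different classes: report the
    # argumentative tension instead of forcing a prediction.
    return "dilemma", "conflict among " + ", ".join(a.name for a in strongest)

# Toy rules loosely inspired by the carotid-plaque features in the notes;
# r3 is an exception that defeats r1 and r2 when all three fire.
rules = [
    Argument("r1", lambda e: e["gsm"] > 30, "asympt", 1),
    Argument("r2", lambda e: e["stenosis"] > 70, "stroke", 1),
    Argument("r3", lambda e: e["stenosis"] > 70 and e["gsm"] > 30, "asympt", 2),
]
print(classify({"stenosis": 80, "gsm": 40}, rules))  # ('asympt', 'supported by r3')
print(classify({"stenosis": 80, "gsm": 20}, rules))  # ('stroke', 'supported by r2')
```

A priority relation is only one way to realize attacks between arguments; the point of the sketch is the output contract: every prediction carries the arguments supporting it, and ties between opposing arguments are surfaced as dilemma regions rather than guessed away.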
Notes
1. We will use a Logic Programming rule notation for the arguments to facilitate the exposition of the realization of the ArgEML framework in the next sections of the paper.
6. All datasets presented here are available upon request from the authors.
8. A measure of each rule's relative importance in predicting the correct class.
9. st: Stenosis (%ECST). ECST: European Carotid Surgery Trial. Lngsm40: log(GSM + 40). GSM: Grey Scale Median. Cubrar: (Plaque Area)^(1/3), with plaque area in mm². Dwa1: DWAs (# of Yes cases). DWA: Discrete White Areas. Ctiastr1: History of contr. TIAs and/or Stroke (# of Yes cases). Target: {asympt, stroke}. The derived features Lngsm40 and Cubrar are illustrated in the sketch after this list.
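The derived features in note 9 are simple transforms of the raw measurements. A minimal sketch follows; the raw column names (`gsm`, `plaque_area_mm2`) are assumptions, and since the note does not state the base of the logarithm, the natural log is assumed here.

```python
import math

def lngsm40(gsm: float) -> float:
    """Lngsm40 = log(GSM + 40); natural log assumed, base not stated in the note."""
    return math.log(gsm + 40)

def cubrar(plaque_area_mm2: float) -> float:
    """Cubrar = cube root of the plaque area (area measured in mm^2)."""
    return plaque_area_mm2 ** (1.0 / 3.0)

print(round(lngsm40(25.0), 3))  # log(65) ~= 4.174
print(round(cubrar(64.0), 3))   # 64 ** (1/3) ~= 4.0
```

Both transforms compress the range of skewed measurements (the +40 offset keeps the log argument positive for dark plaques with low GSM), which is a common preprocessing choice before rule learning.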
Acknowledgements
Part of this work was undertaken under the University of Cyprus internal project Integrated Explainable AI (IXAI) for Medical Decision Support, ARGEML 8037P-22046. This study was also partly funded by the project 'Atherorisk: Identification of unstable carotid plaques associated with symptoms using ultrasonic image analysis and plaque motion analysis' (code: Excellence/0421/0292), funded by the Research and Innovation Foundation of the Republic of Cyprus.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Prentzas, N., Pattichis, C., Kakas, A. (2023). Explainable Machine Learning via Argumentation. In: Longo, L. (ed.) Explainable Artificial Intelligence. xAI 2023. Communications in Computer and Information Science, vol 1903. Springer, Cham. https://doi.org/10.1007/978-3-031-44070-0_19
DOI: https://doi.org/10.1007/978-3-031-44070-0_19
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-44069-4
Online ISBN: 978-3-031-44070-0