Abstract
In this paper we describe the use of two mathematical constructs in developing objective measures of explainability. The first is measure theory, which has a long and interesting history and which establishes abstract principles for comparing the sizes of general sets. At least some of the underpinnings of this theory can equally well be applied to evaluate the degree of explainability of given explanations. However, we suggest that it is meaningless, or at least undesirable, to construct objective measures that allow the comparison of any two given explanations. Explanations might be incompatible, in the sense that integrating them decreases rather than increases explainability. In other words, explainability is best considered a partial order relation, which is the second construct we employ. Notwithstanding the usefulness of partial order relations and measure theory, it is unwise to apply these mathematical concepts unconditionally to the field of explainability. It is demonstrated that the law of diminishing returns from economics offers a neat way to make these concepts applicable to the domain of explainability. The legal field is used to illustrate the presented ideas.
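As a minimal sketch of how these two constructs might interact, using notation of our own rather than definitions taken from the paper, consider a set of explanations \(E\) equipped with a partial order \(\preceq\), where \(e_1 \preceq e_2\) indicates that \(e_2\) is at least as explanatory as \(e_1\), and incompatible explanations are simply left incomparable. An explainability measure \(\mu : E \to [0,\infty)\) would then be required to respect the order,
\[
e_1 \preceq e_2 \;\Longrightarrow\; \mu(e_1) \le \mu(e_2),
\]
while the law of diminishing returns can be read as a constraint on how much integrating additional explanatory content \(c\) into an explanation \(e\) (written \(e \oplus c\), an assumed operation) may raise the measure, for instance
\[
\mu(e \oplus c) - \mu(e) \;\le\; \frac{\delta(c)}{1 + \mu(e)},
\]
where \(\delta(c)\) denotes the stand-alone contribution of \(c\): the larger \(\mu(e)\) already is, the smaller the marginal gain. The symbols \(E\), \(\preceq\), \(\mu\), \(\oplus\), and \(\delta\) are illustrative assumptions only, not the paper's own formalism.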