Abstract
Fairness in Artificial Intelligence is a major requirement for trust in ML-supported decision making. Until now, fairness analysis has depended on human interaction – for example, the specification of relevant attributes to consider. In this paper we propose a clustering-based subgroup detection method to automate this process. We analyse ten (sub-)clustering approaches with three fairness metrics on three datasets and identify SLINK as an optimal candidate for subgroup detection.
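The pipeline described in the abstract – cluster the data into subgroups automatically, then score each subgroup with a fairness metric – can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic data, the choice of five clusters, and the statistical-parity-style gap are all assumptions; SLINK is realised here via SciPy's single-linkage clustering.

```python
# Sketch: detect subgroups with SLINK (single-linkage clustering), then
# compare each subgroup's favourable-outcome rate to the population rate.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))            # synthetic feature matrix (assumption)
y_pred = rng.integers(0, 2, size=200)    # model decisions (1 = favourable)

# SLINK corresponds to single-linkage agglomerative clustering
Z = linkage(X, method="single")
labels = fcluster(Z, t=5, criterion="maxclust")  # cut dendrogram into 5 subgroups

overall_rate = y_pred.mean()
for c in np.unique(labels):
    mask = labels == c
    gap = y_pred[mask].mean() - overall_rate  # statistical-parity-style gap
    print(f"cluster {c}: n={mask.sum():3d}, parity gap={gap:+.3f}")
```

Subgroups with a large absolute parity gap would then be flagged for closer fairness inspection, replacing the manual specification of sensitive attributes.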
Copyright information
© 2022 Springer Nature Switzerland AG
Cite this paper
Schäfer, J., Wiese, L. (2022). Clustering-Based Subgroup Detection for Automated Fairness Analysis. In: Chiusano, S., et al. New Trends in Database and Information Systems. ADBIS 2022. Communications in Computer and Information Science, vol 1652. Springer, Cham. https://doi.org/10.1007/978-3-031-15743-1_5
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-15742-4
Online ISBN: 978-3-031-15743-1