Kacper Sokol
Person information
- affiliation: ETH Zurich, Switzerland
- affiliation: University of Bristol, Intelligent Systems Laboratory, UK
- affiliation (former): RMIT University, ARC Centre of Excellence for Automated Decision-Making and Society, Australia
2020 – today
- 2025
  - [j6] Yueqing Xuan, Edward Small, Kacper Sokol, Danula Hettiachchi, Mark Sanderson: Comprehension is a double-edged sword: Over-interpreting unspecified information in intelligible machine learning explanations. Int. J. Hum. Comput. Stud. 193: 103376 (2025)
- 2024
  - [j5] Kacper Sokol, Peter A. Flach: Interpretable representations in explainable AI: from theory to practice. Data Min. Knowl. Discov. 38(5): 3102-3140 (2024)
  - [j4] Andreas Züfle, Flora D. Salim, Taylor Anderson, Matthew Scotch, Li Xiong, Kacper Sokol, Hao Xue, Ruochen Kong, David J. Heslop, Hye-Young Paik, C. Raina MacIntyre: Leveraging Simulation Data to Understand Bias in Predictive Models of Infectious Disease Spread. ACM Trans. Spatial Algorithms Syst. 10(2): 17 (2024)
  - [c13] Kacper Sokol, Julia E. Vogt: What Does Evaluation of Explainable Artificial Intelligence Actually Tell Us? A Case for Compositional and Contextual Validation of XAI Building Blocks. CHI Extended Abstracts 2024: 370:1-370:8
  - [c12] Edward Small, Kacper Sokol, Daniel Manning, Flora D. Salim, Jeffrey Chan: Equalised Odds is not Equal Individual Odds: Post-processing for Group and Individual Fairness. FAccT 2024: 1559-1578
  - [e1] Jose M. Alonso-Moral, Zach Anthis, Rafael Berlanga, Alejandro Catalá, Philipp Cimiano, Peter Flach, Eyke Hüllermeier, Tim Miller, Oana Mitrut, Dimitry Mindlin, Gabriela Moise, Alin Moldoveanu, Florica Moldoveanu, Kacper Sokol, Aitor Soroa: Proceedings of the First Multimodal, Affective and Interactive eXplainable AI Workshop (MAI-XAI24 2024), co-located with the 27th European Conference on Artificial Intelligence (ECAI 2024), Santiago de Compostela, Spain, October 19, 2024. CEUR Workshop Proceedings 3803, CEUR-WS.org 2024
  - [i27] Kacper Sokol, Julia E. Vogt: What Does Evaluation of Explainable Artificial Intelligence Actually Tell Us? A Case for Compositional and Contextual Validation of XAI Building Blocks. CoRR abs/2403.12730 (2024)
  - [i26] Aurora Spagnol, Kacper Sokol, Pietro Barbiero, Marc Langheinrich, Martin Gjoreski: Counterfactual Explanations for Clustering Models. CoRR abs/2409.12632 (2024)
  - [i25] Yueqing Xuan, Kacper Sokol, Mark Sanderson, Jeffrey Chan: Perfect Counterfactuals in Imperfect Worlds: Modelling Noisy Implementation of Actions in Sequential Algorithmic Recourse. CoRR abs/2410.02273 (2024)
- 2023
  - [p1] Peter A. Flach, Kacper Sokol, Jan Wielemaker: Simply Logical - The First Three Decades. Prolog: The Next 50 Years 2023: 184-193
  - [i24] Bernard Keenan, Kacper Sokol: Mind the Gap! Bridging Explainable Artificial Intelligence and Human Understanding with Luhmann's Functional Theory of Communication. CoRR abs/2302.03460 (2023)
  - [i23] Edward Small, Yueqing Xuan, Danula Hettiachchi, Kacper Sokol: Helpful, Misleading or Confusing: How Humans Perceive Fundamental Building Blocks of Artificial Intelligence Explanations. CoRR abs/2303.00934 (2023)
  - [i22] Yueqing Xuan, Kacper Sokol, Jeffrey Chan, Mark Sanderson: More Is Less: When Do Recommenders Underperform for Data-rich Users? CoRR abs/2304.07487 (2023)
  - [i21] Edward A. Small, Kacper Sokol, Daniel Manning, Flora D. Salim, Jeffrey Chan: Equalised Odds is not Equal Individual Odds: Post-processing for Group and Individual Fairness. CoRR abs/2304.09779 (2023)
  - [i20] Kacper Sokol, Julia E. Vogt: (Un)reasonable Allure of Ante-hoc Interpretability for High-stakes Domains: Transparency Is Necessary but Insufficient for Explainability. CoRR abs/2306.02312 (2023)
  - [i19] Kacper Sokol, Edward Small, Yueqing Xuan: Navigating Explanatory Multiverse Through Counterfactual Path Geometry. CoRR abs/2306.02786 (2023)
  - [i18] Edward A. Small, Jeffrey N. Clark, Christopher J. McWilliams, Kacper Sokol, Jeffrey Chan, Flora D. Salim, Raúl Santos-Rodríguez: Counterfactual Explanations via Locally-guided Sequential Algorithmic Recourse. CoRR abs/2309.04211 (2023)
  - [i17] Yueqing Xuan, Edward Small, Kacper Sokol, Danula Hettiachchi, Mark Sanderson: Can Users Correctly Interpret Machine Learning Explanations and Simultaneously Identify Their Limitations? CoRR abs/2309.08438 (2023)
- 2022
  - [j3] Kacper Sokol, Raúl Santos-Rodríguez, Peter A. Flach: FAT Forensics: A Python toolbox for algorithmic fairness, accountability and transparency. Softw. Impacts 14: 100406 (2022)
  - [c11] Piotr Romashov, Martin Gjoreski, Kacper Sokol, Maria Vanina Martinez, Marc Langheinrich: BayCon: Model-agnostic Bayesian Counterfactual Generator. IJCAI 2022: 740-746
  - [d3] Peter A. Flach, Kacper Sokol: Simply Logical - Intelligent Reasoning by Example (Fully Interactive Online Edition). Zenodo, 2022
  - [d2] Kacper Sokol, Alexander Hepburn, Raúl Santos-Rodriguez, Peter A. Flach: What and How of Machine Learning Transparency: Building Bespoke Explainability Tools with Interoperable Algorithmic Components. Zenodo, 2022
  - [i16] Kacper Sokol, Meelis Kull, Jeffrey Chan, Flora Dilys Salim: Ethical and Fairness Implications of Model Multiplicity. CoRR abs/2203.07139 (2022)
  - [i15] Edward Small, Wei Shao, Zeliang Zhang, Peihan Liu, Jeffrey Chan, Kacper Sokol, Flora D. Salim: How Robust is your Fair Model? Exploring the Robustness of Diverse Fairness Strategies. CoRR abs/2207.04581 (2022)
  - [i14] Peter A. Flach, Kacper Sokol: Simply Logical - Intelligent Reasoning by Example (Fully Interactive Online Edition). CoRR abs/2208.06823 (2022)
  - [i13] Kacper Sokol, Alexander Hepburn, Rafael Poyiadzi, Matthew Clifford, Raúl Santos-Rodríguez, Peter A. Flach: FAT Forensics: A Python Toolbox for Implementing and Deploying Fairness, Accountability and Transparency Algorithms in Predictive Systems. CoRR abs/2209.03805 (2022)
  - [i12] Kacper Sokol, Alexander Hepburn, Raúl Santos-Rodríguez, Peter A. Flach: What and How of Machine Learning Transparency: Building Bespoke Explainability Tools with Interoperable Algorithmic Components. CoRR abs/2209.03813 (2022)
  - [i11] Dilini Rajapaksha, Kacper Sokol, Jeffrey Chan, Flora D. Salim, Mukesh Prasad, Mahendra Samarawickrama: Analysing Donors' Behaviour in Non-profit Organisations for Disaster Resilience: The 2019-2020 Australian Bushfires Case Study. CoRR abs/2210.09034 (2022)
- 2021
  - [i10] Kacper Sokol, Peter A. Flach: You Only Write Thrice: Creating Documents, Computational Notebooks and Presentations From a Single Source. CoRR abs/2107.06639 (2021)
  - [i9] Kacper Sokol, Peter A. Flach: Explainability Is in the Mind of the Beholder: Establishing the Foundations of Explainable Artificial Intelligence. CoRR abs/2112.14466 (2021)
- 2020
  - [j2] Kacper Sokol, Alexander Hepburn, Rafael Poyiadzi, Matthew Clifford, Raúl Santos-Rodríguez, Peter A. Flach: FAT Forensics: A Python Toolbox for Implementing and Deploying Fairness, Accountability and Transparency Algorithms in Predictive Systems. J. Open Source Softw. 5(49): 1904 (2020)
  - [j1] Kacper Sokol, Peter A. Flach: One Explanation Does Not Fit All. Künstliche Intell. 34(2): 235-250 (2020)
  - [c10] Rafael Poyiadzi, Kacper Sokol, Raúl Santos-Rodríguez, Tijl De Bie, Peter A. Flach: FACE: Feasible and Actionable Counterfactual Explanations. AIES 2020: 344-350
  - [c9] Kacper Sokol, Peter A. Flach: Explainability fact sheets: a framework for systematic assessment of explainable approaches. FAT* 2020: 56-67
  - [d1] Kacper Sokol, Alexander Hepburn, Rafael Poyiadzi, Matthew Clifford, Raúl Santos-Rodríguez, Peter A. Flach: FAT Forensics: A Python Toolbox for Implementing and Deploying Fairness, Accountability and Transparency Algorithms in Predictive Systems. Zenodo, 2020
  - [i8] Kacper Sokol, Peter A. Flach: One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency. CoRR abs/2001.09734 (2020)
  - [i7] Kacper Sokol, Peter A. Flach: LIMEtree: Interactively Customisable Explanations Based on Local Surrogate Multi-output Regression Trees. CoRR abs/2005.01427 (2020)
  - [i6] Kacper Sokol, Peter A. Flach: Towards Faithful and Meaningful Interpretable Representations. CoRR abs/2008.07007 (2020)
2010 – 2019
- 2019
  - [c8] Kacper Sokol, Peter A. Flach: Counterfactual Explanations of Machine Learning Predictions: Opportunities and Challenges for AI Safety. SafeAI@AAAI 2019
  - [c7] Kacper Sokol, Peter A. Flach: Desiderata for Interpretability: Explaining Decision Tree Predictions with Counterfactuals. AAAI 2019: 10035-10036
  - [c6] Kacper Sokol: Fairness, Accountability and Transparency in Artificial Intelligence: A Case Study of Logical Predictive Models. AIES 2019: 541-542
  - [i5] Tom Diethe, Meelis Kull, Niall Twomey, Kacper Sokol, Hao Song, Miquel Perelló-Nieto, Emma Tonkin, Peter A. Flach: HyperStream: a Workflow Engine for Streaming Data. CoRR abs/1908.02858 (2019)
  - [i4] Kacper Sokol, Raúl Santos-Rodríguez, Peter A. Flach: FAT Forensics: A Python Toolbox for Algorithmic Fairness, Accountability and Transparency. CoRR abs/1909.05167 (2019)
  - [i3] Rafael Poyiadzi, Kacper Sokol, Raúl Santos-Rodriguez, Tijl De Bie, Peter A. Flach: FACE: Feasible and Actionable Counterfactual Explanations. CoRR abs/1909.09369 (2019)
  - [i2] Kacper Sokol, Alexander Hepburn, Raúl Santos-Rodríguez, Peter A. Flach: bLIMEy: Surrogate Prediction Explanations Beyond LIME. CoRR abs/1910.13016 (2019)
  - [i1] Kacper Sokol, Peter A. Flach: Explainability Fact Sheets: A Framework for Systematic Assessment of Explainable Approaches. CoRR abs/1912.05100 (2019)
- 2018
  - [c5] Kacper Sokol, Peter A. Flach: Conversational Explanations of Machine Learning Predictions Through Class-contrastive Counterfactual Statements. IJCAI 2018: 5785-5786
  - [c4] Kacper Sokol, Peter A. Flach: Glass-Box: Explaining AI Decisions With Counterfactual Statements Through Conversation With a Voice-enabled Virtual Assistant. IJCAI 2018: 5868-5870
  - [c3] Tom Diethe, Mike Holmes, Meelis Kull, Miquel Perelló-Nieto, Kacper Sokol, Hao Song, Emma Tonkin, Niall Twomey, Peter A. Flach: Releasing eHealth Analytics into the Wild: Lessons Learnt from the SPHERE Project. KDD 2018: 243-252
- 2017
  - [c2] Kacper Sokol, Peter A. Flach: The Role of Textualisation and Argumentation in Understanding the Machine Learning Process. IJCAI 2017: 5211-5212
- 2016
  - [c1] Kacper Sokol, Peter A. Flach: Activity Recognition in Multiple Contexts for Smart-House Data. ILP (Short Papers) 2016: 66-72
Coauthor Index
- Raúl Santos-Rodríguez (aka: Raúl Santos-Rodriguez)
last updated on 2024-11-11 21:29 CET by the dblp team
all metadata released as open data under CC0 1.0 license