Search Results (223)

Search Parameters:
Keywords = explainable AI (XAI)

15 pages, 3766 KiB  
Article
Smart Vision Transparency: Efficient Ocular Disease Prediction Model Using Explainable Artificial Intelligence
by Sagheer Abbas, Adnan Qaisar, Muhammad Sajid Farooq, Muhammad Saleem, Munir Ahmad and Muhammad Adnan Khan
Sensors 2024, 24(20), 6618; https://doi.org/10.3390/s24206618 - 14 Oct 2024
Abstract
Early prediction of ocular disease is a pressing concern in ophthalmic medicine. Although recent advances have shown the potential of artificial intelligence (AI) and machine learning to help detect and treat eye diseases, explainability remains a crucial challenge in this area of research. Traditional methods, despite considerable effort, cannot reliably identify the correct ocular disease. Moreover, incorporating AI into eye-disease diagnosis complicates matters, because the decision-making process of AI models is opaque, a significant concern in high-stakes settings such as ocular disease prediction. This lack of transparency can undermine the confidence and trust of doctors and patients, as well as their perception of AI and its abilities. Explainable AI is therefore important for ensuring trust in the technology, enhancing clinical decision-making, and deploying ocular disease detection in practice. This research proposes an efficient transfer learning model for eye disease prediction that integrates explainable artificial intelligence (XAI) to address the challenges of conventional approaches. The integration of XAI ensures transparency of the decision-making process by providing a rationale for each prediction. The proposed model achieves promising results with 95.74% accuracy, accurately distinguishing various types of ocular disease, outperforming previously published methods, and illustrating the transformative potential of XAI in advancing ocular healthcare.
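As a rough illustration of the pipeline the abstract describes (a frozen EfficientNet backbone with a small classification head, explained with LIME, cf. Figures 10 and 11), a minimal Python sketch might look like the following; the class count, input size, and data are assumptions, not the authors' implementation.

```python
# Hypothetical sketch: EfficientNet transfer learning + LIME image explanations.
# The class count, input size, and random stand-in image are assumptions.
import numpy as np
import tensorflow as tf
from lime import lime_image

base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # transfer learning: freeze the pretrained backbone
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(8, activation="softmax"),  # e.g. 8 ocular disease classes (assumed)
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # fundus image dataset, assumed

def predict_fn(images):
    """LIME passes batches of RGB arrays; return class probabilities."""
    x = tf.keras.applications.efficientnet.preprocess_input(np.array(images, dtype=np.float32))
    return model.predict(x, verbose=0)

explainer = lime_image.LimeImageExplainer()
sample = np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8)  # stand-in fundus image
explanation = explainer.explain_instance(sample, predict_fn, top_labels=1, num_samples=1000)
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5)  # superpixels behind the prediction
```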
Show Figures

Figure 1: Fundus of the eyeball [2].
Figure 2: Proposed model for ocular disease prediction using XAI.
Figure 3: Distribution of classes in ocular disease prediction dataset.
Figure 4: Visualization of dimensionality reduction using t-SNE.
Figure 5: Visualization of dimensionality reduction results for classes ‘N’ and ‘G’ using t-SNE.
Figure 6: Visualization of dimensionality reduction results using UMAP.
Figure 7: Visualization of dimensionality reduction results for selected samples using UMAP.
Figure 8: Class distribution before and after minority class augmentations.
Figure 9: Sample images from training dataset organized by class.
Figure 10: Training history of EfficientNet model for ocular disease prediction with dropout, L1, gamma, and batch parameters.
Figure 11: LIME explanations for model predictions.
21 pages, 3914 KiB  
Article
Asset Returns: Reimagining Generative ESG Indexes and Market Interconnectedness
by Gordon Dash, Nina Kajiji and Bruno G. Kamdem
J. Risk Financial Manag. 2024, 17(10), 463; https://doi.org/10.3390/jrfm17100463 - 13 Oct 2024
Abstract
Financial economists have long studied factors related to risk premiums, pricing biases, and diversification impediments. This study examines the relationship between a firm’s commitment to environmental, social, and governance principles (ESGs) and asset market returns. We incorporate an algorithmic protocol to identify three nonobservable but pervasive E, S, and G time-series factors to meet the study’s objectives. The novel factors were tested for information content by constructing a six-factor Fama and French model following the imposition of the isolation and disentanglement algorithm. Realizing that nonlinear relationships characterize models incorporating both observable and nonobservable factors, the Fama and French model statement was estimated using an enhanced shallow-learning neural network. Finally, as a post hoc measure, we integrated explainable AI (XAI) to simplify the machine learning outputs. Our study extends the literature on the disentanglement of investment factors across two dimensions. We first identify new time-series-based E, S, and G factors. Second, we demonstrate how machine learning can be used to model asset returns, considering the complex interconnectedness of sustainability factors. Our approach is further supported by comparing neural-network-estimated E, S, and G weights with London Stock Exchange ESG ratings.
(This article belongs to the Special Issue Business, Finance, and Economic Development)
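The six-factor formulation (three Fama and French factors plus the latent E, S, and G series) lends itself to a small illustration: fit a shallow network that maps the six factors to returns, then attribute each prediction to the factors with SHAP. The sketch below uses synthetic data and an MLP stand-in, not the authors' K4-RBFN estimator or their isolation and disentanglement algorithm.

```python
# Hypothetical sketch: shallow network over six factors (MKT, SMB, HML + E, S, G)
# explained with SHAP; data, ordering, and model are placeholders, not the paper's.
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                     # columns: MKT, SMB, HML, E, S, G (assumed)
beta = np.array([0.9, 0.2, -0.1, 0.3, 0.1, -0.2])
y = X @ beta + 0.05 * rng.normal(size=500)        # synthetic excess returns

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=0).fit(X, y)

background = shap.sample(X, 50)                   # background set for the kernel explainer
explainer = shap.KernelExplainer(net.predict, background)
shap_values = explainer.shap_values(X[:20])       # per-factor contributions for 20 observations
print(np.abs(shap_values).mean(axis=0))           # mean |SHAP| per factor, cf. the Figure 9 heatmaps
```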
Show Figures

Figure 1: Flowchart of the protocol.
Figure 2: Percent of industry-wide after-market variation explained by the E, S, and G domains from the extant portfolio.
Figure 3: Diversification of investor portfolio. The number in parentheses indicates the number of companies in the stated sector.
Figure 4: A Fruchterman–Reingold network of company return correlations. The nodes (circles) represent the companies of the investor portfolio. The edges (lines) show the correlation between the two companies. The stronger the correlation, the closer the nodes collect to form a cluster in the middle.
Figure 5: Loadings and cross-loadings of the first three principal components. Each line indicates a company. The shorter the line from the centroid, the smaller the loading. The overlapping sections indicate cross-loadings. Legend: PC-1 = red; PC-2 = blue; PC-3 = lime.
Figure 6: K4-RBFN for mapping ExxonMobil market returns. The six input variables (features) consisting of three Fama and French variables and three E, S, and G factors are presented as icon equivalents. The intersecting gray lines show the interaction among the features. The circles with the letter H represent the hidden nodes of the RBFN network. The blue lines from the hidden nodes indicate negative weights, whereas the gray lines from the hidden nodes show positive weights. Output is expressed as returns for ExxonMobil.
Figure 7: The interconnectedness of E, S, and G factor elasticities. The red lines (edges) represent the asset elasticity on the E domain. Similarly, the blue edges represent the S domain, and the green edges represent the G domain. The length of the edges determines the size of the contribution of the domain to the assets’ return structure. The centroids are the origin of the red, blue, and green edges. Each asset’s return to scale (RtS) is shown as the black dot (node).
Figure 8: Gravitational impact of E, S, and G. The yellow centroid represents the E domain, the pink centroid represents the S domain, and the blue centroid represents the G domain. The small green circles represent the 65 companies in the investor portfolio. The dotted lines indicate negative marginal weight. The non-dotted lines indicate positive marginal weight. The heavier the line, the further the deviation from zero or the greater the gravitational pull towards the centroids.
Figure 9: Shapley heatmaps of selected companies. The features are ranked for each model based on their absolute contribution to the mean SHAP value. The bar chart on the right axis indicates the absolute contribution of the feature to the mean SHAP value. The function value is shown as the black line above each figure. For each instance on the x-axis, the positive impact of the feature value is shown in red. The negative effect of the feature is shown in blue. The number of instances corresponds to the observations in the K4-RBFN training set.
45 pages, 8086 KiB  
Article
Helping CNAs Generate CVSS Scores Faster and More Confidently Using XAI
by Elyes Manai, Mohamed Mejri and Jaouhar Fattahi
Appl. Sci. 2024, 14(20), 9231; https://doi.org/10.3390/app14209231 - 11 Oct 2024
Abstract
The number of cybersecurity vulnerabilities keeps growing every year. Each vulnerability must be reported to the MITRE Corporation and assessed by a CVE Numbering Authority (CNA), which generates a metrics vector that determines its severity score. This process can take up to several weeks, with higher-severity vulnerabilities taking more time. Several authors have successfully used deep learning to automate score generation and used explainable AI to build trust with users. However, the explanations shown were limited to surface-level input saliency on binary classification. This is a limitation, as several metrics are multi-class and there is much more XAI can achieve than visualizing saliency. In this work, we look for actionable steps CNAs can take using XAI. We achieve state-of-the-art results using an interpretable XGBoost model, generate explanations for multi-class labels using SHAP, and use the raw Shapley values to calculate cumulative word importance and generate IF rules that give a more transparent view of how the model classified vulnerabilities. Finally, we made the code and dataset open source for reproducibility.
(This article belongs to the Special Issue Recent Applications of Explainable AI (XAI))
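A minimal sketch of the kind of workflow the abstract outlines (an XGBoost classifier over vulnerability-description features whose raw Shapley values are aggregated into cumulative word importance) is shown below; the toy corpus, labels, and feature set are placeholders, not the paper's CVE data or exact pipeline.

```python
# Hypothetical sketch: TF-IDF + XGBoost multi-class metric prediction with SHAP-based
# cumulative word importance. The corpus and labels below are toy placeholders.
import numpy as np
import shap
import xgboost as xgb
from sklearn.feature_extraction.text import TfidfVectorizer

descriptions = ["buffer overflow allows remote code execution",
                "cross site scripting in login form",
                "sql injection in search parameter"] * 50
labels = [2, 1, 0] * 50                            # e.g. three Attack Vector classes (assumed)
n_classes = 3

vec = TfidfVectorizer()
X = vec.fit_transform(descriptions)
model = xgb.XGBClassifier(n_estimators=100, max_depth=4, objective="multi:softprob")
model.fit(X, labels)

explainer = shap.TreeExplainer(model)
sv = np.array(explainer.shap_values(X.toarray()))
if sv.ndim == 3 and sv.shape[0] != n_classes:      # some SHAP versions: (samples, features, classes)
    sv = np.moveaxis(sv, -1, 0)                    # normalize to (classes, samples, features)

words = vec.get_feature_names_out()
cumulative = np.abs(sv[2]).sum(axis=0)             # cumulative word importance for one class
print(words[np.argsort(cumulative)[::-1][:10]])    # candidate words for simple IF rules
```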
Show Figures

Figure 1: Number of reported vulnerabilities per year.
Figure 2: Vulnerability severities of the past 7 years.
Figure 3: Example CVE (from the NVD website [18]).
Figure 4: An example of a CVE’s metrics, taken from the official NVD website [18].
Figure 5: Costa et al. [2]’s explanation.
Figure 6: Kuehn et al. [11]’s explanation; green is for positive impact and red is for negative impact. The intensity of the color reflects the degree of importance to the classification.
Figure 7: Islam et al. [10]’s explanation.
Figure 8: The severity distribution of the CVE data.
Figure 9: Length distribution of vulnerability descriptions.
Figure 10: Label distribution of the various metrics.
Figure 11: Visual representation of the XGBoost algorithm, taken from the official website.
Figure 12: Model parameters ablation studies.
Figure 13: Global word affinity for the Scope class.
Figure 14: Global word affinity for the Attack Vector class.
Figure 15: SHAP Force plot example.
Figure 16: Detailed classification explanation of our SHAP-based method.
Figure 17: CVE classification pipeline.
Figure 18: Example of local explanation using our method. Words highlighted in red are detected exclusive words.
21 pages, 3313 KiB  
Article
Understanding Public Opinion towards ESG and Green Finance with the Use of Explainable Artificial Intelligence
by Wihan van der Heever, Ranjan Satapathy, Ji Min Park and Erik Cambria
Mathematics 2024, 12(19), 3119; https://doi.org/10.3390/math12193119 - 5 Oct 2024
Abstract
This study leverages explainable artificial intelligence (XAI) techniques to analyze public sentiment towards Environmental, Social, and Governance (ESG) factors, climate change, and green finance. It does so by developing a novel multi-task learning framework combining aspect-based sentiment analysis, co-reference resolution, and contrastive learning to extract nuanced insights from a large corpus of social media data. Our approach integrates state-of-the-art models, including the SenticNet API, for sentiment analysis and implements multiple XAI methods such as LIME, SHAP, and Permutation Importance to enhance interpretability. Results reveal predominantly positive sentiment towards environmental topics, with notable variations across ESG categories. The contrastive learning visualization demonstrates clear sentiment clustering while highlighting areas of uncertainty. This research contributes to the field by providing an interpretable, trustworthy AI system for ESG sentiment analysis, offering valuable insights for policymakers and business stakeholders navigating the complex landscape of sustainable finance and climate action. The methodology proposed in this paper advances the current state of AI in ESG and green finance in several ways. By combining aspect-based sentiment analysis, co-reference resolution, and contrastive learning, our approach provides a more comprehensive understanding of public sentiment towards ESG factors than traditional methods. The integration of multiple XAI techniques (LIME, SHAP, and Permutation Importance) offers a transparent view of the subtlety of the model’s decision-making process, which is crucial for building trust in AI-driven ESG assessments. Our approach enables a more accurate representation of public opinion, essential for informed decision-making in sustainable finance. This paper paves the way for more transparent and explainable AI applications in critical domains like ESG.
(This article belongs to the Special Issue Explainable and Trustworthy AI Models for Data Analytics)
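Of the three XAI methods named, Permutation Importance is the simplest to reproduce; the hedged sketch below applies it to a toy sentiment classifier to show the shape of the output behind Figure 9. The data and the logistic-regression stand-in are assumptions, not the paper's multi-task framework or SenticNet-based models.

```python
# Hypothetical sketch: permutation importance for a toy ESG sentiment classifier.
# Posts, labels, and the classifier are placeholders, not the paper's corpus or model.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

posts = ["green bonds help the climate", "greenwashing erodes trust",
         "board governance improved", "pollution fines hit profits"] * 40
sentiment = [1, 0, 1, 0] * 40                      # 1 = positive, 0 = negative (toy labels)

vec = TfidfVectorizer()
X = vec.fit_transform(posts).toarray()
clf = LogisticRegression(max_iter=1000).fit(X, sentiment)

result = permutation_importance(clf, X, sentiment, n_repeats=10, random_state=0)
order = np.argsort(result.importances_mean)[::-1][:20]
for idx in order:                                  # cf. Figure 9: top-20 features
    print(f"{vec.get_feature_names_out()[idx]:>15s}  {result.importances_mean[idx]:.4f}")
```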
Show Figures

Figure 1: Research framework for ESG sentiment analysis using explainable AI.
Figure 2: Visual depiction of the inner workings of the SenticGCN model [17].
Figure 3: Illustration of the sentiment distribution when the sentiments of the aspects are aggregated.
Figure 4: Sentiment distribution depicted by the categorization of Environment, Social, Governance and Other.
Figure 5: Visualizations of the LIME analysis on the ESG dataset. (a) Top 20 features according to the LIME analysis. (b) Word cloud of the most important terms according to the LIME analysis.
Figure 6: Visualizations of the SHAP analysis on the ESG dataset. (a) Top 20 features according to the SHAP analysis. (b) Word cloud of the most important terms according to the SHAP analysis.
Figure 7: Visualizations of the analysis on the ESG dataset when the LIME and SHAP analyses are compared to each other. (a) Feature importance comparison of the LIME (blue bars) versus the SHAP (green bars) analysis. (b) Correlation of the feature importance of the LIME versus SHAP analysis.
Figure 8: Visualizations of the analysis on the ESG dataset when the LIME and SHAP analyses are combined. (a) Top 20 features according to the combined LIME and SHAP analysis. (b) Word cloud of the most important terms according to the combined LIME and SHAP analysis.
Figure 9: Portrayal of the top 20 features in the dataset according to Permutation Importance.
Figure 10: t-SNE plot of the contrastive learning analysis on the ESG dataset. The embeddings were obtained from the aspect terms and the overall sentiment calculated by the SenticGCN model.
Figure 11: Centroid embedding t-SNE plot from the contrastive learning approach with some important aspect terms superimposed.
111 pages, 1410 KiB  
Systematic Review
Recent Applications of Explainable AI (XAI): A Systematic Literature Review
by Mirka Saarela and Vili Podgorelec
Appl. Sci. 2024, 14(19), 8884; https://doi.org/10.3390/app14198884 - 2 Oct 2024
Abstract
This systematic literature review employs the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology to investigate recent applications of explainable AI (XAI) over the past three years. From an initial pool of 664 articles identified through the Web of Science database, 512 peer-reviewed journal articles met the inclusion criteria—namely, being recent, high-quality XAI application articles published in English—and were analyzed in detail. Both qualitative and quantitative statistical techniques were used to analyze the identified articles: qualitatively by summarizing the characteristics of the included studies based on predefined codes, and quantitatively through statistical analysis of the data. These articles were categorized according to their application domains, techniques, and evaluation methods. Health-related applications were particularly prevalent, with a strong focus on cancer diagnosis, COVID-19 management, and medical imaging. Other significant areas of application included environmental and agricultural management, industrial optimization, cybersecurity, finance, transportation, and entertainment. Additionally, emerging applications in law, education, and social care highlight XAI’s expanding impact. The review reveals a predominant use of local explanation methods, particularly SHAP and LIME, with SHAP being favored for its stability and mathematical guarantees. However, a critical gap in the evaluation of XAI results is identified, as most studies rely on anecdotal evidence or expert opinion rather than robust quantitative metrics. This underscores the urgent need for standardized evaluation frameworks to ensure the reliability and effectiveness of XAI applications. Future research should focus on developing comprehensive evaluation standards and improving the interpretability and stability of explanations. These advancements are essential for addressing the diverse demands of various application domains while ensuring trust and transparency in AI systems.
(This article belongs to the Special Issue Recent Applications of Explainable AI (XAI))
Show Figures

Figure 1: Overview of different XAI approaches and evaluation methods. These categories were used to classify the XAI application papers reviewed in this study.
Figure 2: PRISMA flow chart of the study selection process.
Figure 3: Main XAI application domain of the studies in our corpus (including all the main domains mentioned in at least three papers).
Figure 4: Saliency maps of eight diverse recent XAI applications from various domains: brain tumor classification [116], grape leaf disease identification [144], emotion detection [235], ripe status recognition [141], volcanic localizations [236], traffic sign classification [237], cell segmentation [238], and glaucoma diagnosis [77] (from top to bottom and left to right).
Figure 5: Number of papers in our corpus that used global versus local explanations.
Figure 6: Most common explanation techniques used in the papers in our corpus (only XAI techniques used in at least five papers are shown).
Figure 7: Mostly used ML models in the papers in our corpus (only ML models used at least five times are shown).
Figure 8: The main ML tasks in the papers in our corpus (all other ML tasks are used in only one or at most two papers).
Figure 9: Number of papers in our corpus that used a post-hoc approach versus intrinsically explainable ML model.
Figure 10: Number of papers that used a specific ML model, which is presented as intrinsically explainable.
Figure 11: Evaluation of the explanations in recent XAI application papers.
27 pages, 2051 KiB  
Article
A Transparent Pipeline for Identifying Sexism in Social Media: Combining Explainability with Model Prediction
by Hadi Mohammadi, Anastasia Giachanou and Ayoub Bagheri
Appl. Sci. 2024, 14(19), 8620; https://doi.org/10.3390/app14198620 - 24 Sep 2024
Abstract
In this study, we present a new approach that combines multiple Bidirectional Encoder Representations from Transformers (BERT) architectures with a Convolutional Neural Network (CNN) framework designed for sexism detection in text at a granular level. Our method relies on the analysis and identification of the most important terms contributing to sexist content using Shapley Additive Explanations (SHAP) values. This approach involves defining a range of Sexism Scores based on both model predictions and explainability, moving beyond binary classification to provide a deeper understanding of the sexism-detection process. Additionally, it enables us to identify specific parts of a sentence and their respective contributions to this range, which can be valuable for decision makers and future research. In conclusion, this study introduces an innovative method for enhancing the clarity of large language models (LLMs), which is particularly relevant in sensitive domains such as sexism detection. The incorporation of explainability into the model represents a significant advancement in this field. The objective of our study is to bridge the gap between advanced technology and human comprehension by providing a framework for creating AI models that are both efficient and transparent. This approach could serve as a pipeline for future studies to incorporate explainability into language models.
(This article belongs to the Special Issue Data and Text Mining: New Approaches, Achievements and Applications)
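A rough sketch of the core idea (token-level SHAP attributions aggregated into a sentence-level score) is given below, using an off-the-shelf sentiment pipeline as a stand-in for the paper's CustomBERT-CNN model; the model choice, label index, and aggregation rule are all assumptions rather than the authors' Sexism Score definition.

```python
# Hypothetical sketch: token-level SHAP attributions for a text classifier, summed into
# a crude sentence-level score. The pretrained sentiment model is only a stand-in.
import numpy as np
import shap
import transformers

pipe = transformers.pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    top_k=None)                                    # return scores for all labels
explainer = shap.Explainer(pipe)                   # SHAP infers a text masker for pipelines
sv = explainer(["example sentence to score"])

tokens = sv.data[0]
contrib = sv.values[0][:, 1]                       # contribution of each token to label index 1
for tok, c in zip(tokens, contrib):
    print(f"{tok!r:>15}  {c:+.3f}")                # per-token attribution
print("aggregate score:", float(np.clip(contrib.sum(), 0, None)))
```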
Show Figures

Figure 1: Back translation method for data augmentation (English ↔ Dutch).
Figure 2: Research methodology.
Figure 3: Architecture of our CustomBERT model.
Figure 4: Density distribution of text lengths by sexist label.
Figure 5: Top 20 unique words in sexist texts.
Figure 6: Top 20 bigrams in sexist texts.
Figure 7: Proportions of different categories of sexist texts.
Figure 8: Top unique words for each category of sexist content.
Figure 9: Sentiment polarity scores.
Figure 10: Cumulative importance of top 20 tokens.
Figure 11: Distribution of Sexism Scores for texts labeled as sexist.
Figure 12: Threshold vs. number of selected tokens.
23 pages, 5336 KiB  
Article
Enhancing the Interpretability of Malaria and Typhoid Diagnosis with Explainable AI and Large Language Models
by Kingsley Attai, Moses Ekpenyong, Constance Amannah, Daniel Asuquo, Peterben Ajuga, Okure Obot, Ekemini Johnson, Anietie John, Omosivie Maduka, Christie Akwaowo and Faith-Michael Uzoka
Trop. Med. Infect. Dis. 2024, 9(9), 216; https://doi.org/10.3390/tropicalmed9090216 - 16 Sep 2024
Abstract
Malaria and Typhoid fever are prevalent diseases in tropical regions, and both are exacerbated by unclear protocols, drug resistance, and environmental factors. Prompt and accurate diagnosis is crucial to improve accessibility and reduce mortality rates. Traditional diagnosis methods cannot effectively capture the complexities of these diseases due to the presence of similar symptoms. Although machine learning (ML) models offer accurate predictions, they operate as “black boxes” with non-interpretable decision-making processes, making it challenging for healthcare providers to comprehend how the conclusions are reached. This study employs explainable AI (XAI) models such as Local Interpretable Model-agnostic Explanations (LIME) and Large Language Models (LLMs) like GPT to clarify diagnostic results for healthcare workers, building trust and transparency in medical diagnostics by describing which symptoms had the greatest impact on the model’s decisions and providing clear, understandable explanations. The models were implemented on Google Colab and Visual Studio Code because of their rich libraries and extensions. Results showed that the Random Forest model outperformed the other tested models; in addition, important features were identified with the LIME plots, while ChatGPT 3.5 had a comparative advantage over other LLMs. The study integrates RF, LIME, and GPT in building a mobile app to enhance the interpretability and transparency of the malaria and typhoid diagnosis system. Despite its promising results, the system’s performance is constrained by the quality of the dataset. Additionally, while LIME and GPT improve transparency, they may introduce complexities in real-time deployment due to computational demands and the need for internet service to maintain relevance and accuracy. The findings suggest that AI-driven diagnostic systems can significantly enhance healthcare delivery in environments with limited resources, and future work can explore the applicability of this framework to other medical conditions and datasets.
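The RF-plus-LIME portion of the framework can be sketched in a few lines; the symptom features, toy labels, and model settings below are illustrative placeholders, not the locally collected dataset or the deployed mobile app.

```python
# Hypothetical sketch: Random Forest on tabular symptom data with per-patient LIME
# explanations. Feature names, severities, and the toy label rule are made up.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

features = ["fever", "headache", "abdominal_pain", "vomiting", "body_weakness"]
rng = np.random.default_rng(1)
X = rng.integers(0, 4, size=(300, len(features))).astype(float)   # symptom severity 0-3
y = (X[:, 0] + X[:, 2] > 4).astype(int)                           # toy "typhoid" label

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=features,
                                 class_names=["negative", "positive"],
                                 mode="classification")
exp = explainer.explain_instance(X[0], rf.predict_proba, num_features=5)
print(exp.as_list())     # (symptom condition, weight) pairs, as in the LIME diagrams below
```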
Show Figures

Figure 1: Malaria and Typhoid Fever Diagnosis Framework.
Figure 2: Pre-processed dataset.
Figure 3: Oversampled dataset with SMOTE.
Figure 4: Random Forest schematic diagram.
Figure 5: Extreme gradient boosting schematic diagram.
Figure 6: Support Vector Machine diagram.
Figure 7: XGBoost Algorithm Confusion Matrix.
Figure 8: RF Algorithm Confusion Matrix.
Figure 9: SVM Algorithm Confusion Matrix.
Figure 10: Performance Evaluation of the Machine Learning Models.
Figure 11: XGBoost Algorithm LIME diagram.
Figure 12: RF Algorithm LIME diagram.
Figure 13: SVM Algorithm LIME diagram.
Figure 14: User Login.
Figure 15: User Main Dashboard.
Figure 16: Patient Registration.
Figure 17: Patient Account Dashboard.
Figure 18: History Taking and Examination.
Figure 19: XAI Diagnosis Results.
16 pages, 1777 KiB  
Article
Metabolomics Biomarker Discovery to Optimize Hepatocellular Carcinoma Diagnosis: Methodology Integrating AutoML and Explainable Artificial Intelligence
by Fatma Hilal Yagin, Radwa El Shawi, Abdulmohsen Algarni, Cemil Colak, Fahaid Al-Hashem and Luca Paolo Ardigò
Diagnostics 2024, 14(18), 2049; https://doi.org/10.3390/diagnostics14182049 - 15 Sep 2024
Abstract
Background: This study aims to assess the efficacy of combining automated machine learning (AutoML) and explainable artificial intelligence (XAI) in identifying metabolomic biomarkers that can differentiate between hepatocellular carcinoma (HCC) and liver cirrhosis in patients with hepatitis C virus (HCV) infection. Methods: We investigated publicly accessible data encompassing HCC patients and cirrhotic controls. The TPOT tool, which is an AutoML tool, was used to optimize the preparation of features and data, as well as to select the most suitable machine learning model. The TreeSHAP approach, which is a type of XAI, was used to interpret the model by assessing each metabolite’s individual contribution to the categorization process. Results: TPOT had superior performance in distinguishing between HCC and cirrhosis compared to other AutoML approaches, AutoSKlearn and H2O AutoML, as well as traditional machine learning models such as random forest, support vector machine, and k-nearest neighbor. The TPOT technique attained an AUC value of 0.81, showcasing superior accuracy, sensitivity, and specificity in comparison to the other models. Key metabolites, including L-valine, glycine, and DL-isoleucine, were identified as essential by TPOT and subsequently verified by TreeSHAP analysis. TreeSHAP provided a comprehensive explanation of the contribution of these metabolites to the model’s predictions, thereby increasing the interpretability and dependability of the results. This thorough assessment highlights the strength and reliability of the AutoML framework in the development of clinical biomarkers. Conclusions: This study shows that AutoML and XAI can be used together to create metabolomic biomarkers that are specific to HCC. The exceptional performance of TPOT in comparison to traditional models highlights its capacity to identify biomarkers. Furthermore, TreeSHAP boosted model transparency by highlighting the relevance of certain metabolites. This comprehensive method has the potential to enhance the identification of biomarkers and generate precise, easily understandable, AI-driven solutions for diagnosing HCC.
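A hedged sketch of the AutoML-plus-TreeSHAP combination is shown below, assuming the classic TPOT API and that the winning pipeline ends in a tree ensemble (as reported for this study); the synthetic metabolite matrix and all parameter choices are placeholders.

```python
# Hypothetical sketch: TPOT pipeline search followed by TreeSHAP on the final estimator.
# Synthetic data; assumes the classic TPOT API and a tree-based winning pipeline.
import numpy as np
import shap
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from tpot import TPOTClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))                     # e.g. 20 metabolite intensities (assumed)
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)      # toy HCC vs. cirrhosis label
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tpot = TPOTClassifier(generations=3, population_size=20, cv=5, random_state=0, verbosity=0)
tpot.fit(X_tr, y_tr)
print("test accuracy:", tpot.score(X_te, y_te))

steps = tpot.fitted_pipeline_.steps
final_step = steps[-1][1]                          # last estimator of the winning pipeline
X_for_shap = Pipeline(steps[:-1]).transform(X_te) if len(steps) > 1 else X_te
if hasattr(final_step, "estimators_") or hasattr(final_step, "tree_"):
    sv = np.array(shap.TreeExplainer(final_step).shap_values(X_for_shap))
    if sv.ndim == 3 and sv.shape[-1] == 2:         # some SHAP versions: (samples, features, classes)
        sv = np.moveaxis(sv, -1, 0)
    mean_abs = np.abs(sv).mean(axis=(0, 1)) if sv.ndim == 3 else np.abs(sv).mean(axis=0)
    print(np.round(mean_abs[:5], 4))               # mean |SHAP| for the first few features
```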
Show Figures

Figure 1: A diagram of the proposed method in the current research.
Figure 2: Nemenyi Test (α = 0.05) comparing the AUC of testing data for AutoML techniques and traditional machine learning techniques.
Figure 3: Feature importance ranking based on SHAP values.
Figure 4: SHAP waterfall plot for a representative true positive sample.
Figure 5: SHAP waterfall plot for a representative true negative sample.
Figure 6: Partial dependence plot of L-valine 1 showing its SHAP value and interaction with 2,3-butanediol 2.
15 pages, 1465 KiB  
Article
Alzheimer’s Multiclassification Using Explainable AI Techniques
by Kamese Jordan Junior, Kouayep Sonia Carole, Tagne Poupi Theodore Armand, Hee-Cheol Kim and The Alzheimer’s Disease Neuroimaging Initiative
Appl. Sci. 2024, 14(18), 8287; https://doi.org/10.3390/app14188287 - 14 Sep 2024
Abstract
In this study, we address the early detection challenges of Alzheimer’s disease (AD) using explainable artificial intelligence (XAI) techniques. AD, characterized by amyloid plaques and tau tangles, leads to cognitive decline and remains hard to diagnose due to genetic and environmental factors. Utilizing deep learning models, we analyzed brain MRI scans from the ADNI database, categorizing them into normal cognition (NC), mild cognitive impairment (MCI), and AD. The ResNet-50 architecture was employed, enhanced by a channel-wise attention mechanism to improve feature extraction. To ensure model transparency, we integrated local interpretable model-agnostic explanations (LIMEs) and gradient-weighted class activation mapping (Grad-CAM), highlighting significant image regions contributing to predictions. Our model achieved 85% accuracy, effectively distinguishing between the classes. The LIME and Grad-CAM visualizations provided insights into the model’s decision-making process, particularly emphasizing changes near the hippocampus for MCI. These XAI methods enhance the interpretability of AI-driven AD diagnosis, fostering trust and aiding clinical decision-making. Our approach demonstrates the potential of combining deep learning with XAI for reliable and transparent medical applications.
(This article belongs to the Special Issue Future Information & Communication Engineering 2024)
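Grad-CAM, one of the two explanation methods used, can be reproduced in a few lines; the sketch below uses an off-the-shelf ResNet-50 with ImageNet weights and a random input as stand-ins for the fine-tuned, attention-augmented model and the ADNI scans.

```python
# Hypothetical sketch: minimal Grad-CAM for a ResNet-50 classifier, the kind of heatmap
# shown in Figures 8 and 9. Weights, layer choice, and input are stand-ins.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.ResNet50(weights="imagenet")
last_conv = model.get_layer("conv5_block3_out")
grad_model = tf.keras.Model(model.inputs, [last_conv.output, model.output])

def grad_cam(img_batch, class_idx=None):
    """Return a coarse heatmap in [0, 1] for the predicted (or given) class."""
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(img_batch)
        if class_idx is None:
            class_idx = int(tf.argmax(preds[0]))
        score = preds[:, class_idx]
    grads = tape.gradient(score, conv_out)               # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))         # global-average-pool the gradients
    cam = tf.nn.relu(tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1))[0]
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

img = tf.keras.applications.resnet50.preprocess_input(
    np.random.uniform(0, 255, (1, 224, 224, 3)).astype("float32"))  # stand-in image
heatmap = grad_cam(img)
print(heatmap.shape)   # e.g. (7, 7); upsample and overlay on the scan for visualization
```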
Show Figures

Figure 1: Showing the system model and workflow.
Figure 2: Raw MRI samples for normal cognition (NC) in the first row, mild cognitive impairment (MCI) in the middle row, and Alzheimer’s disease (AD) in the bottom row.
Figure 3: Model accuracy against the count of epochs during training and validation.
Figure 4: Confusion matrix for the pre-trained model.
Figure 5: Output prediction from the ResNet-50 model; positive for mild cognitive impairment with 58.94% confidence.
Figure 6: Perturbed instances from the predicted image in Figure 5 showing deactivated pixels.
Figure 7: Activated pixels displaying relevant features for the positive MCI prediction.
Figure 8: Jet heatmap of positive values using self-attention for class-specific interpretability with gradient-weighted class activation mapping. The highest level of intensity in the heatmap is observed in close proximity to the hippocampus.
Figure 9: Comparison between Grad-CAM without channel-wise attention (left), which highlights a generalized region, and Grad-CAM with the attention mechanism (right), which is more localized close to the hippocampal region.
25 pages, 8181 KiB  
Article
A Novel Integration of Data-Driven Rule Generation and Computational Argumentation for Enhanced Explainable AI
by Lucas Rizzo, Damiano Verda, Serena Berretta and Luca Longo
Mach. Learn. Knowl. Extr. 2024, 6(3), 2049-2073; https://doi.org/10.3390/make6030101 - 12 Sep 2024
Abstract
Explainable Artificial Intelligence (XAI) is a research area that clarifies AI decision-making processes to build user trust and promote responsible AI. Hence, a key scientific challenge in XAI is the development of methods that generate transparent and interpretable explanations while maintaining scalability and effectiveness in complex scenarios. Rule-based methods in XAI generate rules that can potentially explain AI inferences, yet they can also become convoluted in large scenarios, hindering their readability and scalability. Moreover, they often lack contrastive explanations, leaving users uncertain why specific predictions are preferred. To address this scientific problem, we explore the integration of computational argumentation—a sub-field of AI that models reasoning processes through defeasibility—into rule-based XAI systems. Computational argumentation enables arguments modelled from rules to be retracted based on new evidence. This makes it a promising approach to enhancing rule-based methods for creating more explainable AI systems. Nonetheless, research on their integration remains limited despite the appealing properties of rule-based systems and computational argumentation. Therefore, this study also addresses the applied challenge of implementing such an integration within practical AI tools. The study employs the Logic Learning Machine (LLM), a specific rule-extraction technique, and presents a modular design that integrates input rules into a structured argumentation framework using state-of-the-art computational argumentation methods. Experiments conducted on binary classification problems using various datasets from the UCI Machine Learning Repository demonstrate the effectiveness of this integration. The LLM technique excelled in producing a manageable number of if-then rules with a small number of premises while maintaining high inferential capacity for all datasets. In turn, argument-based models achieved comparable results to those derived directly from if-then rules, leveraging a concise set of rules and excelling in explainability. In summary, this paper introduces a novel approach for efficiently and automatically generating arguments and their interactions from data, addressing both scientific and applied challenges in advancing the application and deployment of argumentation systems in XAI.
(This article belongs to the Section Data)
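The argumentation side of the method rests on standard acceptability semantics; as a small, self-contained illustration, the sketch below computes the grounded extension of an abstract argumentation framework (a set of arguments plus an attack relation). The example graph is illustrative, not one of the paper's LLM-generated rule sets.

```python
# Hypothetical sketch: grounded semantics for an abstract argumentation framework.
# The four-argument attack chain is a toy example, not data from the paper.
def grounded_extension(arguments, attacks):
    """Iteratively accept arguments whose attackers are all defeated (grounded semantics)."""
    attackers = {a: {x for x, y in attacks if y == a} for a in arguments}
    accepted, rejected = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in rejected:
                continue
            if attackers[a] <= rejected:           # every attacker is already defeated
                accepted.add(a)
                changed = True
            elif attackers[a] & accepted:          # attacked by an accepted argument
                rejected.add(a)
                changed = True
    return accepted

args = {"a", "b", "c", "d"}
atts = {("a", "b"), ("b", "c"), ("c", "d")}        # a -> b -> c -> d
print(sorted(grounded_extension(args, atts)))      # ['a', 'c']
```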
Show Figures

Figure 1: Illustration of the integration of a data-driven rule-generator (Logic Learning Machine) and a rule-aggregator with non-monotonic logic (structured argumentation).
Figure 2: An illustration of a multipartite argumentation graph. A node represents each argument and has an if-then internal structure following Equation (1) (premises are omitted for the sake of simplicity). Arguments a–c share a common output class, whereas arguments d–f share a different one. Each argument in a partite attacks all the other arguments in the other partite.
Figure 3: An illustrative example of elicitation of arguments and the definition of their dialectical status. Node labels contain the argument label and its weight. The premise of argument a does not hold true with the input data, so it is discarded along with its incoming/outgoing attacks. For graphs 1, 2, 3, and 4, the following can be observed, respectively: attacks {∅}, {d → c}, {d → c, c → b}, and {d → c, c → d, c → b} are removed to respect the inconsistency budget defined; the grounded extensions are {∅}, {∅}, {c}, and {c, d}; the preferred extensions are {{c}, {b, d}}, {{c}, {b, d}}, {c}, and {c, d}; the top ranked arguments for the categoriser are {b, d}, {b, c, d}, {c}, and {c, d}.
Figure 4: Design of a comparative experiment with four main steps: (a) Selection and pre-processing of four datasets for binary classification tasks; (b) Automatic formation of if-then rules from the selected dataset using the Logic Learning Machine (LLM) technique; (c) Generation of final inferences using two aggregator logics rules: the Standard Applied Procedure and computational argumentation; (d) Comparative analysis via standard binary classification metrics and percentage of undecided cases (NAs: when a model cannot lead to a final inference).
Figure 5: Overall results for inferences produced using the CARS dataset grouped by the error threshold per rule parameter (10% for the first four blocks (a–d), 25% for the second four blocks (e–h)), and divided by inconsistency budget variation (25%, 50%, 90%, 100%).
Figure 6: Overall results for inferences produced using the CENSUS dataset grouped by the error threshold per rule parameter (10% for the first four blocks (a–d), 25% for the second four blocks (e–h)) and divided by inconsistency budget variation (25%, 50%, 90%, 100%).
Figure 7: Overall results for inferences produced using the BANK dataset grouped by the error threshold per rule parameter (10% for the first four blocks (a–d), 25% for the second four blocks (e–h)), and divided by inconsistency budget variation (25%, 50%, 90%, 100%).
Figure 8: Overall results for inferences produced using the MYOCARDIAL dataset grouped by the error threshold per rule parameter (10% for the first four blocks (a–d), 25% for the second four blocks (e–h)), and divided by inconsistency budget variation (25%, 50%, 90%, 100%).
Figure A1: Example of argumentation graph generated from the if-then rules extracted for the CENSUS dataset using the LLM technique with 10% error threshold per rule. (a) All arguments and attacks with no input data; (b,c) two examples of accepted (green) and rejected (red) arguments from some input data using the preferred semantics.
Figure A2: Example of argumentation graph generated from the if-then rules extracted for the CENSUS dataset using the LLM technique with 10% error threshold per rule. (a) All arguments and attacks with no input data; (b,c) two examples of accepted (green) and rejected (red) arguments from some input data using the preferred semantics.
Figure A3: Examples of the open-source ArgFrame framework [38] instantiated with argumentation graphs generated for the CENSUS dataset. It is possible to hover over nodes to analyze their internal structure. Data can also be imported, allowing the visualization of case-by-case inferences. Its use is recommended for a better understanding of the available functionalities.
27 pages, 7268 KiB  
Article
Integrating Fuzzy C-Means Clustering and Explainable AI for Robust Galaxy Classification
by Gabriel Marín Díaz, Raquel Gómez Medina and José Alberto Aijón Jiménez
Mathematics 2024, 12(18), 2797; https://doi.org/10.3390/math12182797 - 10 Sep 2024
Abstract
The classification of galaxies has significantly advanced using machine learning techniques, offering deeper insights into the universe. This study focuses on the typology of galaxies using data from the Galaxy Zoo project, where classifications are based on the opinions of non-expert volunteers, introducing a degree of uncertainty. The objective of this study is to integrate Fuzzy C-Means (FCM) clustering with explainability methods to achieve a precise and interpretable model for galaxy classification. We applied FCM to manage this uncertainty and group galaxies based on their morphological characteristics. Additionally, we used explainability methods, specifically SHAP (SHapley Additive exPlanations) values and LIME (Local Interpretable Model-Agnostic Explanations), to interpret and explain the key factors influencing the classification. The results show that using FCM allows for accurate classification while managing data uncertainty, with high precision values that meet the expectations of the study. Additionally, SHAP values and LIME provide a clear understanding of the most influential features in each cluster. This method enhances our classification and understanding of galaxies and is extendable to environmental studies on Earth, offering tools for environmental management and protection. The presented methodology highlights the importance of integrating FCM and XAI techniques to address complex problems with uncertain data.
(This article belongs to the Special Issue Advances in Fuzzy Logic and Artificial Neural Networks)
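A minimal sketch of the FCM-plus-XAI combination is given below: skfuzzy's C-means clusters synthetic morphological features, and a surrogate classifier fitted to the hardened cluster labels is explained with SHAP. The feature set, cluster count, and surrogate model are assumptions, not the Galaxy Zoo pipeline used in the paper.

```python
# Hypothetical sketch: Fuzzy C-Means clustering + SHAP on a surrogate classifier.
# Features and data are placeholders, not Galaxy Zoo vote fractions.
import numpy as np
import shap
import skfuzzy as fuzz
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 4))                        # e.g. smoothness, disk, bar, spiral fractions

# skfuzzy expects features on rows and samples on columns, hence the transpose.
cntr, u, _, _, _, _, fpc = fuzz.cluster.cmeans(X.T, c=3, m=2.0, error=1e-5, maxiter=1000)
labels = np.argmax(u, axis=0)                   # hardened cluster assignment per galaxy
print("fuzzy partition coefficient:", round(float(fpc), 3))

# Surrogate model: learn to reproduce the cluster labels, then explain it with SHAP.
surrogate = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
sv = np.array(shap.TreeExplainer(surrogate).shap_values(X))
if sv.ndim == 3 and sv.shape[-1] == 3:          # some SHAP versions: (samples, features, clusters)
    sv = np.moveaxis(sv, -1, 0)
print("per-feature importance:", np.round(np.abs(sv).mean(axis=(0, 1)), 3))   # cf. Figures 13-15
```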
Show Figures

Figure 1: Publications (1282) and citations. TS = (FUZZY C-MEANS CLUSTERING).
Figure 2: Publications (3178) and citations. TS = (“XAI” OR “EXPLAINABLE ARTIFICIAL INTELLIGENCE”).
Figure 3: Publications (8) and citations. TS = (“FUZZY C-MEANS”) AND TS = (“XAI” OR “EXPLAINABLE ARTIFICIAL INTELLIGENCE”).
Figure 4: Publications (53) and citations. TS = (“GALAXY CLASSIFICATION”) AND TS = (“MACHINE LEARNING” OR “DEEP LEARNING”).
Figure 5: Methodology.
Figure 6: Correlation matrix.
Figure 7: Optimal number of clusters.
Figure 8: Centroids by cluster.
Figure 9: Cluster 0 galaxies.
Figure 10: Cluster 1 galaxies.
Figure 11: Cluster 2 galaxies.
Figure 12: Confusion matrix.
Figure 13: Feature importance for Cluster 0.
Figure 14: Feature importance for Cluster 1.
Figure 15: Feature importance for Cluster 2.
Figure 16: Local cluster prediction (cluster = 2).
Figure 17: Galaxy ID 553402, Cluster 2.
Figure 18: Local cluster prediction (cluster = 0).
Figure 19: Galaxy ID 236126, Cluster 0.
Figure 20: Local cluster prediction (cluster = 1).
Figure 21: Galaxy ID 113992, Cluster 1.
30 pages, 3060 KiB  
Review
Explainable Artificial Intelligence (XAI) for Oncological Ultrasound Image Analysis: A Systematic Review
by Lucie S. Wyatt, Lennard M. van Karnenbeek, Mark Wijkhuizen, Freija Geldof and Behdad Dashtbozorg
Appl. Sci. 2024, 14(18), 8108; https://doi.org/10.3390/app14188108 - 10 Sep 2024
Abstract
This review provides an overview of explainable AI (XAI) methods for oncological ultrasound image analysis and compares their performance evaluations. A systematic search of Medline, Embase, and Scopus between 25 March and 14 April 2024 identified 17 studies describing 14 XAI methods, including visualization, semantics, example-based, and hybrid functions. These methods primarily provided specific, local, and post hoc explanations. Performance evaluations focused on AI model performance, with limited assessment of explainability impact. Standardized evaluations incorporating clinical end-users are generally lacking. Enhanced XAI transparency may facilitate AI integration into clinical workflows. Future research should develop real-time methodologies and standardized quantitative evaluative metrics.
(This article belongs to the Section Applied Biosciences and Bioengineering)
Show Figures

Figure 1

Figure 1
<p>Schematic representation of XAI methods with model-specific (<b>left</b>) or model-agnostic (<b>right</b>) dependencies.</p>
Full article ">Figure 2
<p>Schematic representation of XAI methods with global (<b>left</b>) or local (<b>right</b>) scopes.</p>
Full article ">Figure 3
Schematic representation of XAI methods with intrinsic (bottom) or post hoc (top) applications.
Figure 4: Flowchart visualizing the results of the PRISMA-based article selection process.
Figure 5: Division of AI-based image analysis tasks in the included studies.
Figure 6: Frequency of various compositions of XAI methods’ scope and application in the included studies, categorized by function. Note that some XAI methods served multiple functions and were used in multiple studies; hence, the total counts in this figure exceed the number of studies and XAI methods listed previously.
Figure 7: Division of XAI method functions.
Figure 8: Frequency of identified XAI functions in the included studies, categorized by image analysis task. Note that some XAI methods served multiple functions and were used in multiple studies; hence, the total counts in this figure exceed the number of studies and XAI methods listed previously.
Figure 9: Visualization examples for GIST, leiomyoma, and pancreatic rest tumors, with Grad-CAM plots generated by different methods reflecting the decision basis of different models. The first column presents the original US image, the second column the expert annotations, and the third to sixth columns the generated Grad-CAM saliency maps using a baseline model, a multiattribute guided network (MAG), a contextual attention network (CA), and a combined MAG–CA network. Adapted from Zheng et al. (2024) [53], with permission from Elsevier.
Figure 10: Input of the (a) original image and (b) radiologist-highlighted region of a hypoechoic lesion with mixed echogenicity in the prostate for a malignant case, compared to (c) the simulated image by LIME, which initially locates the regions worth investigating given the input image, and (d) the final image generated by LIME explaining why the case was classified as malignant. Adapted from Hassan et al. (2022) [42], with permission from Elsevier.
Figure 11: Generation of the ROI and the local patches from the images using the global features. (a) The original image used as the input for the global branch. (b) The generated activation heat map of the features. (c) The binarized heat map and the bounding box spanning it. (d) The cropped local patch used as the input to the local branch. Adapted from Basu et al. (2023) [38], with permission from Elsevier.
Figure 12: SHAP explanation example: results for a malignant case in breast ultrasound images, in which the trained ensemble model can be analyzed to provide explainable decision paths within a series of decision trees. In each tree classifier, orange arrows indicate the decision path; the model compares the texture features from the input image (orange numbers at the bottom of each dashed box) with the learned thresholds (black triangles on each histogram) at each node of the decision tree. Adapted from Rezazadeh et al. (2022) [48]; Licensee MDPI, Basel, Switzerland; open-access article under the Creative Commons Attribution (CC BY) license.
Figure 13: US images presenting benign (left) and malignant (right) breast masses and the corresponding CAM-generated saliency maps pointing out the three predetermined regions in the US images. The white cross indicates the extreme activation value of CAM responsible for the particular pointing-game result. Adapted from Byra et al. (2022) [39]; open-access article under the Creative Commons Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND) license.
24 pages, 7001 KiB  
Article
Appendicitis Diagnosis: Ensemble Machine Learning and Explainable Artificial Intelligence-Based Comprehensive Approach
by Mohammed Gollapalli, Atta Rahman, Sheriff A. Kudos, Mohammed S. Foula, Abdullah Mahmoud Alkhalifa, Hassan Mohammed Albisher, Mohammed Taha Al-Hariri and Nazeeruddin Mohammad
Big Data Cogn. Comput. 2024, 8(9), 108; https://doi.org/10.3390/bdcc8090108 - 4 Sep 2024
Viewed by 753
Abstract
Appendicitis is a condition wherein the appendix becomes inflamed, and it can be difficult to diagnose accurately. The type of appendicitis can also be hard to determine, leading to misdiagnosis and difficulty in managing the condition. To avoid complications and reduce mortality, early diagnosis and treatment are crucial. Alvarado’s clinical scoring system alone is not sufficient, and while ultrasound and computed tomography (CT) imaging are effective, they have downsides such as operator dependency and radiation exposure. This study proposes the use of machine learning methods and a locally collected, reliable dataset to enhance the identification of acute appendicitis while distinguishing complicated from non-complicated appendicitis. Machine learning can help reduce diagnostic errors and improve treatment decisions. This study conducted four different experiments using various ML algorithms, including K-nearest neighbors (KNN), decision trees (DT), bagging, and stacking. The experimental results showed that the stacking model had the highest training accuracy, test set accuracy, precision, and F1 score, which were 97.51%, 92.63%, 95.29%, and 92.04%, respectively. Feature importance and explainable AI (XAI) identified neutrophils, WBC_Count, Total_LOS, P_O_LOS, and Symptoms_Days as the principal features that most strongly affected the performance of the model. Based on the outcomes and feedback from healthcare professionals, the scheme is promising for the diagnosis of acute appendicitis. Full article
(This article belongs to the Special Issue Machine Learning Applications and Big Data Challenges)
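The experiments described in this abstract combine KNN, decision-tree, bagging, and stacking classifiers. The snippet below is a hedged sketch of such a stacking setup using scikit-learn on synthetic data; the hospital dataset, the hyperparameters, and the meta-learner (logistic regression here) are assumptions for illustration, not the authors' configuration.

```python
# Hedged sketch of a stacking ensemble with KNN, decision-tree, and bagging
# base learners, assuming scikit-learn; synthetic data replaces the private
# appendicitis dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=12, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base_learners = [
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("dt", DecisionTreeClassifier(max_depth=5, random_state=0)),
    ("bag", BaggingClassifier(n_estimators=50, random_state=0)),
]
# Meta-learner choice (logistic regression) is an assumption for illustration.
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_train, y_train)
print(f"held-out accuracy: {stack.score(X_test, y_test):.3f}")
```

The stacked model's predictions can then be inspected with permutation importance, SHAP, or LIME, as the figures below illustrate.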
Figures:
Figure 1: Variable correlation heatmap.
Figure 2: Experimental setup.
Figure 3: KNN with different metrics and N_neighbors.
Figure 4: (a) Best splitter and (b) random splitter for DT with different criterion and maximum depth levels.
Figure 5: KNN with different metrics and number of neighbors.
Figure 6: (a) Best splitter and (b) random splitter for DT with different criterion and Max_depth level.
Figure 7: Proposed models’ permutation importance (testing phase).
Figure 8: Proposed models’ permutation importance (training phase).
Figure 9: SHAP summary plot for stacking model.
Figure 10: Non-complicated sample’s LIME plot.
Figure 11: Complicated sample’s LIME plot.
28 pages, 462 KiB  
Review
Explainable AI in Manufacturing and Industrial Cyber–Physical Systems: A Survey
by Sajad Moosavi, Maryam Farajzadeh-Zanjani, Roozbeh Razavi-Far, Vasile Palade and Mehrdad Saif
Electronics 2024, 13(17), 3497; https://doi.org/10.3390/electronics13173497 - 3 Sep 2024
Viewed by 1319
Abstract
This survey explores applications of explainable artificial intelligence in manufacturing and industrial cyber–physical systems. As technological advancements continue to integrate artificial intelligence into critical infrastructure and industrial processes, the need for clear and understandable intelligent models becomes critical. Explainable artificial intelligence techniques play a pivotal role in enhancing the trustworthiness and reliability of intelligent systems deployed in industrial settings, ensuring that human operators can comprehend and validate the decisions these systems make. The review first highlights the need for explainable artificial intelligence and then systematically classifies its techniques. It then investigates explainable-artificial-intelligence-related work across a wide range of industrial applications, such as predictive maintenance, cyber-security, fault detection and diagnosis, process control, product development, inventory management, and product quality. The study contributes to a comprehensive understanding of the diverse strategies and methodologies employed in integrating explainable artificial intelligence within industrial contexts. Full article
(This article belongs to the Special Issue Advances in Artificial Intelligence Engineering)
Figures:
Figure 1: Popularity of the “Explainable AI” term in the Google search engine, taken from Google Trends [5].
Figure 2: Taxonomy of XAI methods [8,13–25].
Figure 3: Categorization of XAI methods by data type [27–41].
Figure 4: Major advancements in XAI techniques since 2011 [8,14,36,42–74].
Figure 5: Use cases of AI in CPS and the manufacturing industry.
Figure 6: Cost relation between the different maintenance strategies [150].
19 pages, 26310 KiB  
Article
Concrete Crack Detection and Segregation: A Feature Fusion, Crack Isolation, and Explainable AI-Based Approach
by Reshma Ahmed Swarna, Muhammad Minoar Hossain, Mst. Rokeya Khatun, Mohammad Motiur Rahman and Arslan Munir
J. Imaging 2024, 10(9), 215; https://doi.org/10.3390/jimaging10090215 - 31 Aug 2024
Viewed by 906
Abstract
Scientific understanding of image-based crack detection methods is limited with respect to their performance across diverse crack sizes, types, and environmental conditions. Builders and engineers often face difficulties with image resolution, detecting fine cracks, and differentiating between structural and non-structural issues. Enhanced algorithms and analysis techniques are needed for more accurate assessments. Hence, this research aims to produce an intelligent scheme that can recognize the presence of cracks and visualize the percentage of cracks in an image, along with an explanation. The proposed method fuses features extracted from concrete surface images by a ResNet-50 convolutional neural network (CNN) and a handcrafted (HC) curvelet-transform method, optimizes them with linear discriminant analysis (LDA), and then recognizes cracks with an eXtreme gradient boosting (XGB) classifier. This study evaluates several CNN models, including VGG-16, VGG-19, Inception-V3, and ResNet-50, and various HC techniques, such as the wavelet transform, contourlet transform, and curvelet transform, for feature extraction. Principal component analysis (PCA) and LDA are assessed for feature optimization. For classification, XGB, random forest (RF), adaptive boosting (AdaBoost), and category boosting (CatBoost) are tested. To isolate and quantify the crack region, this research combines image thresholding, morphological operations, and contour detection with a convex-hull method to form a novel algorithm. Two explainable AI (XAI) tools, local interpretable model-agnostic explanations (LIME) and gradient-weighted class activation mapping++ (Grad-CAM++), are integrated with the proposed method to enhance result clarity. This research introduces a novel feature fusion approach that enhances crack detection accuracy and interpretability. The method demonstrates superior performance by achieving 99.93% and 99.69% accuracy on two existing datasets, outperforming state-of-the-art methods. Additionally, the development of an algorithm for isolating and quantifying crack regions represents a significant advancement in image processing for structural analysis. The proposed approach provides a robust and reliable tool for real-time crack detection and assessment in concrete structures, facilitating timely maintenance and improving structural safety. By offering detailed explanations of the model’s decisions, the research addresses the critical need for transparency in AI applications, thus increasing trust and adoption in engineering practice. Full article
(This article belongs to the Special Issue Image Processing and Computer Vision: Algorithms and Applications)
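The abstract outlines a crack-isolation step that combines thresholding, morphological operations, contour detection, and convex hulls. The function below is a rough OpenCV sketch of that idea on a grayscale image; the Otsu thresholding, kernel size, and area cutoff are illustrative guesses rather than the paper's tuned algorithm.

```python
# Hedged sketch of crack isolation and quantification, assuming OpenCV and NumPy.
import cv2
import numpy as np

def isolate_cracks(gray):
    """Return a crack mask and the crack area as a percentage of the image."""
    # Dark, thin cracks on lighter concrete: inverted Otsu threshold.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Opening removes speckle noise; closing bridges small gaps along the crack.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, kernel)
    # Keep contours above a small area and fill each with its convex hull.
    contours, _ = cv2.findContours(cleaned, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros_like(gray)
    for c in contours:
        if cv2.contourArea(c) > 50:                 # noise cutoff (assumed)
            cv2.drawContours(mask, [cv2.convexHull(c)], -1, 255, -1)
    crack_pct = 100.0 * np.count_nonzero(mask) / mask.size
    return mask, crack_pct

# Usage (hypothetical file name):
# gray = cv2.imread("concrete.jpg", cv2.IMREAD_GRAYSCALE)
# mask, pct = isolate_cracks(gray)
# print(f"crack coverage: {pct:.2f}%")
```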
Figures:
Figure 1: The workflow of this research.
Figure 2: Architecture of the feature extraction technique of this research.
Figure 3: A residual block contains a connection that bypasses two layers.
Figure 4: Performance analysis of different classifiers.
Figure 5: Normalized confusion matrix of the proposed method.
Figure 6: ROC curve of the proposed method.
Figure 7: Comparison of the performance of the proposed method on the surface crack dataset and the bridge crack dataset.