Advancements in Human-Centered Artificial Intelligence (HCAI) Applying Natural Language Processing Techniques

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 30 April 2025 | Viewed by 3989

Special Issue Editors


Guest Editor
Information Technologies Group, atlanTTic, University of Vigo, 36310 Vigo, Spain
Interests: artificial intelligence; natural language processing; computing systems design; real-time systems; machine learning

Guest Editor
School of Cyber Science and Technology, Beihang University, Beijing 100191, China
Interests: social network analysis; social media data mining; network events detection and influence analysis and prediction; network multimodal data deep fusion; text data information extraction; multimodal deep learning

Guest Editor
Information Technologies Group, atlanTTic, University of Vigo, 36310 Vigo, Spain
Interests: artificial intelligence; computational linguistics; machine learning; natural language processing

Special Issue Information

Dear Colleagues,

We welcome submissions for the Special Issue of Applied Sciences entitled “Advancements in Human-Centered Artificial Intelligence (HCAI) Applying Natural Language Processing Techniques”. This Special Issue focuses on human-centered artificial intelligence: AI in which people are the main actors and in which powerful new AI techniques are applied to enhance human capabilities and quality of life.

Much current HCAI research focuses on conversational assistants that help people. These assistants facilitate access to digital content and services, and provide increasingly personalized monitoring in healthcare environments. At the industrial level, HCAI techniques combined with optimized production processes have made human-centered interaction more effective.

Articles submitted to this Special Issue may present new optimization techniques and processes for embedded and real-time systems in HCAI environments. The fine-tuning of existing models and their application in industrial, social, educational, and healthcare environments will also be considered valuable scientific contributions. These new approaches should improve decision-making, reliability, scalability, and sustainability.

Dr. Francisco De Arriba-Pérez
Dr. Xiaoming Zhang
Dr. Silvia García-Méndez
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • deep learning
  • artificial intelligence
  • human-centered applications
  • sustainability

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

26 pages, 546 KiB  
Article
Human-Centered AI for Migrant Integration Through LLM and RAG Optimization
by Dagoberto Castellanos-Nieves and Luis García-Forte
Appl. Sci. 2025, 15(1), 325; https://doi.org/10.3390/app15010325 - 31 Dec 2024
Viewed by 927
Abstract
The enhancement of mechanisms to protect the rights of migrants and refugees within the European Union represents a critical area for human-centered artificial intelligence (HCAI). Traditionally, the focus on algorithms alone has shifted toward a more comprehensive understanding of AI’s potential to shape technology in ways which better serve human needs, particularly for disadvantaged groups. Large language models (LLMs) and retrieval-augmented generation (RAG) offer significant potential for bridging gaps for vulnerable populations, including immigrants, refugees, and individuals with disabilities. Implementing solutions based on these technologies involves critical factors which influence the pursuit of approaches aligning with humanitarian interests. This study presents a proof of concept utilizing the open LLM model LLAMA 3 and a linguistic corpus comprising legislative, regulatory, and assistance information from various European Union agencies concerning migrants. We evaluate generative metrics, energy efficiency metrics, and metrics for assessing contextually appropriate and non-discriminatory responses. Our proposal involves the optimal tuning of key hyperparameters for LLMs and RAG through multi-criteria decision-making (MCDM) methods to ensure the solutions are fair, equitable, and non-discriminatory. The optimal configurations resulted in a 20.1% reduction in carbon emissions, along with an 11.3% decrease in the metrics associated with bias. The findings suggest that by employing the appropriate methodologies and techniques, it is feasible to implement HCAI systems based on LLMs and RAG without undermining the social integration of vulnerable populations.
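The abstract describes selecting LLM and RAG hyperparameter configurations with multi-criteria decision-making methods (the paper's figures mention RIM, TOPSIS, and VIKOR). As a rough illustration of how one such method ranks configurations on conflicting criteria, here is a minimal TOPSIS sketch; the configurations, criteria, and weights below are invented for the example and are not taken from the paper:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix  : one row per alternative (e.g. per hyperparameter
              configuration), one column per criterion
              (e.g. ROUGE-L, CO2e emissions, bias score).
    weights : relative importance of each criterion (sums to 1).
    benefit : True where higher is better; False for cost criteria
              such as emissions or bias.
    Returns the closeness coefficient per alternative (higher = better).
    """
    n_crit = len(weights)
    # Vector-normalise each column, then apply the weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    # Ideal (best) and anti-ideal (worst) points per criterion.
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_best = math.dist(row, ideal)    # distance to the ideal point
        d_worst = math.dist(row, worst)   # distance to the anti-ideal point
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Toy example: three configurations scored on (quality, CO2e emissions).
configs = [[0.42, 1.8], [0.40, 1.1], [0.35, 0.9]]
scores = topsis(configs, weights=[0.5, 0.5], benefit=[True, False])
best = max(range(len(scores)), key=scores.__getitem__)
```

Each configuration is scored by its relative closeness to the ideal point, so quality criteria are maximized while cost criteria such as emissions are minimized within the same ranking.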
Figures
Figure 1. Class diagram of the PoC for the response generation system based on an LLM. The diagram organizes the process stages, from question initiation and corpus selection to evaluation of the responses generated by the LLM using various evaluation metrics. The relationships between classes and key methods are indicated by the arrows and associated descriptions.
Figure 2. Comparative evaluation of primary assessment metrics (ROUGE-1, ROUGE-2, ROUGE-L, BLEU, and similarity) in relation to their corresponding CO2e emissions as a function of the chunk size. The yellow line representing ROUGE-2 overlaps with the line for BLEU due to the similarity of their results.
Figure 3. Heat map illustrating the impact of the temperature and K value on text quality evaluation metrics and energy efficiency.
Figure 4. Comparison of average scores for the stereotype, anti-stereotype, neutral, Non_Hate, and Hate categories as a function of the temperature.
Figure 5. Trends in stereotype, anti-stereotype, neutral, Non_Hate, and Hate scores as the chunk size increases.
Figure 6. Results of the RIM, TOPSIS, and VIKOR methods for the 125 alternatives obtained, showing the index values for all three methods.
Figure 7. Comparison of scores for the stereotype, anti-stereotype, neutral, No_Hate, and Hate categories based on the presence or absence of process integration within LLM systems utilizing RAG.
Figure 8. Comparison of CO2e emissions between models that do not utilize integration processes and those that incorporate them. The values represent the median CO2e emissions calculated for each combination of the K and temperature parameters.
17 pages, 3282 KiB  
Article
A Class-Incremental Learning Method for Interactive Event Detection via Interaction, Contrast and Distillation
by Jiashun Duan and Xin Zhang
Appl. Sci. 2024, 14(19), 8788; https://doi.org/10.3390/app14198788 - 29 Sep 2024
Viewed by 967
Abstract
Event detection is a crucial task in information extraction. Existing research primarily focuses on machine automatic detection tasks, which often perform poorly in certain practical applications. To address this, an interactive event-detection mode of “machine recommendation–human review–machine incremental learning” was proposed. In this mode, we study a few-shot continual class-incremental learning scenario, where the challenge is to learn new-class events with limited samples while preserving memory of old-class events. To tackle these challenges, we propose a class-incremental learning method for interactive event detection via Interaction, Contrast and Distillation (ICD). We design a replay strategy based on representative and confusable samples to retain the most valuable samples under limited conditions; we introduce semantic-boundary-smoothness contrastive learning for effective learning of new-class events with few samples; and we employ hierarchical distillation to mitigate catastrophic forgetting. These methods complement each other and show strong performance. Experimental results demonstrate that, in the 5-shot, 5-round class-incremental learning settings on two Chinese event-detection datasets, ACE and DuEE, our method achieves final recall rates of 71.48% and 90.39%, respectively, improving by 6.86% and 3.90% over the best baseline methods.
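The ICD method combines experience replay, contrastive learning, and hierarchical distillation to mitigate catastrophic forgetting. As a generic, hedged sketch of the distillation ingredient only (not the paper's hierarchical variant), a temperature-scaled knowledge-distillation loss keeps the new model's outputs on old event classes close to the previous model's:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    The old (teacher) model's outputs anchor the new (student) model
    on previously learned event classes; the T^2 factor is the usual
    rescaling so gradients stay comparable across temperatures.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q)) * temperature ** 2
```

In incremental training, a term like this would be added to the standard classification loss on new-class and replayed samples, so that learning the new task does not overwrite the decision boundaries of old event types.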
Figures
Figure 1. The concept of the interactive event-detection mode and class-incremental learning.
Figure 2. The structure of ICD, comprising a recommendation model, experience replay, contrastive learning, and hierarchical-distillation modules. When learning a new task, ICD uses the previous model to select replay samples, applies hierarchical knowledge distillation, and then employs semantic-boundary-smoothing contrastive learning for the new model.
Figure 3. The distribution of event counts in the ACE2005-Chinese and DuEE datasets.
Figure 4. Recall@3 performance of every sub-task on ACE2005-Chinese and DuEE for the comparative experiment.
Figure 5. Recall@3 performance on old and new tasks in each sub-task on ACE2005-Chinese and DuEE.
Figure 6. Recall@3 performance of every sub-task on ACE2005-Chinese and DuEE for extreme scenarios.
Figure 7. Average time consumption per epoch of ICD and baseline methods under different settings.
25 pages, 7199 KiB  
Article
Multimodal Sentiment Classifier Framework for Different Scene Contexts
by Nelson Silva, Pedro J. S. Cardoso and João M. F. Rodrigues
Appl. Sci. 2024, 14(16), 7065; https://doi.org/10.3390/app14167065 - 12 Aug 2024
Cited by 1 | Viewed by 1186
Abstract
Sentiment analysis (SA) is an effective method for determining public opinion. Social media posts have been the subject of much research, due to the platforms’ enormous and diversified user bases that regularly share thoughts on nearly any subject. However, on posts composed by a text–image pair, the written description may or may not convey the same sentiment as the image. The present study uses machine learning models for the automatic sentiment evaluation of pairs of text and image(s). The sentiments derived from the image and text are evaluated independently and merged (or not) to form the overall sentiment, returning the sentiment of the post and the discrepancy between the sentiments represented by the text–image pair. The image sentiment classification is divided into four categories—“indoor” (IND), “man-made outdoors” (OMM), “non-man-made outdoors” (ONMM), and “indoor/outdoor with persons in the background” (IOwPB)—and then ensembled into an image sentiment classification model (ISC), which can be compared with a holistic image sentiment classifier (HISC), showing that the ISC achieves better results than the HISC. For the Flickr sub-data set, the sentiment classification of images achieved an accuracy of 68.50% for IND, 83.20% for OMM, 84.50% for ONMM, 84.80% for IOwPB, and 76.45% for ISC, compared to 65.97% for the HISC. For the text sentiment classification, in a sub-data set of B-T4SA, an accuracy of 92.10% was achieved. Finally, the text–image combination, in the authors’ private data set, achieved an accuracy of 78.84%.
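The framework classifies text and image sentiment independently, then merges the two and reports the discrepancy between modalities. A minimal late-fusion sketch along those lines (the weighted-average rule, modality weight, and label set are illustrative assumptions, not the authors' exact combination method):

```python
def fuse_sentiments(text_probs, image_probs, w_text=0.6):
    """Late fusion of independent text and image sentiment classifiers.

    text_probs / image_probs : probabilities over
                               (negative, neutral, positive).
    w_text                   : assumed weight given to the text modality.
    Returns the fused sentiment label and a discrepancy flag that is
    True when the two modalities disagree on the label.
    """
    labels = ("negative", "neutral", "positive")
    # Weighted average of the two probability vectors.
    fused = [w_text * t + (1 - w_text) * i
             for t, i in zip(text_probs, image_probs)]
    text_label = labels[max(range(3), key=text_probs.__getitem__)]
    image_label = labels[max(range(3), key=image_probs.__getitem__)]
    fused_label = labels[max(range(3), key=fused.__getitem__)]
    return fused_label, text_label != image_label

# A positive caption attached to a negative-looking image: the fused
# label follows the (more heavily weighted) text, with the discrepancy
# between modalities flagged for downstream inspection.
label, disagrees = fuse_sentiments([0.1, 0.2, 0.7], [0.6, 0.3, 0.1])
```

Returning the disagreement flag alongside the fused label mirrors the study's observation that a post's text may or may not convey the same sentiment as its image.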
Figures
Figure 1. Left to right: examples of images for the four categories extracted from the Flickr data set, i.e., ONMM, OMM, IND, and IOwPB. Top to bottom: examples of images with positive, neutral, and negative sentiments.
Figure 2. Left to right: examples of images for the four categories (ONMM, OMM, IND, and IOwPB); top to bottom: positive, neutral, and negative sentiments for the SIS & ISP data sets.
Figure 3. Multimodal Sentiment Classification Framework.
Figure 4. (Top) the ISC model; (bottom) the ISC sub-block specification, with class ∈ {NMMO, MMO, IND, IOPB}.
Figure 5. Block diagram of the Text Sentiment Classifier.
Figure 6. Block diagram of the Multimodal Sentiment Classifier.
Figure 7. ISC_OMM models' confusion matrices: on the left, the model uses three inputs (the direct sentiments of the individual models); on the right, nine inputs (the probabilities of the predicted sentiment).
Figure 8. ISC_NMMO confusion matrices for models DL#1, DL#2, and DL#3 (top line, left to right), and ensembles RFa and NNa (bottom line, left to right).
Figure 9. Examples of images where the human classification presented more doubts.