Information, Volume 14, Issue 9 (September 2023) – 48 articles

Cover Story: In this study, we compared four widely used open-source SDWN controllers (ONOS, Ryu, POX, and ODL) based on a multi-criteria scheme. Using Mininet-WiFi, the performance of each controller is evaluated in terms of throughput, latency, jitter, and packet loss. Because each performance factor exhibits its own behavior and trends, and there is no direct correlation among them, it is difficult to conclude which controller is best by comparing each metric separately; a comprehensive consideration of all metrics (universality) is needed. Thus, we propose a methodology that identifies the controller with the best overall behavior using a single indicator (GPI). The results reveal that the Ryu and POX controllers are far superior to the others in terms of scalability.
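The cover story does not spell out how the GPI is computed, so the following is only a plausible sketch of a single-indicator scheme: min-max normalize each metric across controllers, invert the lower-is-better metrics (latency, jitter, loss), and average with equal weights. All function names, numbers, and weights below are hypothetical, not taken from the paper.

```python
# Illustrative sketch only: the GPI formula is not given in the summary, so
# this assumes min-max normalization, inversion of lower-is-better metrics,
# and equal weights. All values are made up.

def gpi(metrics, higher_is_better):
    """Combine per-controller metrics into one global performance indicator."""
    controllers = list(metrics)
    scores = {c: 0.0 for c in controllers}
    for name, better_high in higher_is_better.items():
        values = [metrics[c][name] for c in controllers]
        lo, hi = min(values), max(values)
        for c in controllers:
            norm = (metrics[c][name] - lo) / (hi - lo) if hi > lo else 1.0
            if not better_high:
                norm = 1.0 - norm  # invert latency/jitter/loss
            scores[c] += norm / len(higher_is_better)
    return scores

metrics = {
    "Ryu":  {"throughput": 95, "latency": 10, "jitter": 2, "loss": 0.5},
    "POX":  {"throughput": 90, "latency": 12, "jitter": 3, "loss": 0.7},
    "ONOS": {"throughput": 70, "latency": 30, "jitter": 9, "loss": 2.0},
}
flags = {"throughput": True, "latency": False, "jitter": False, "loss": False}
print(gpi(metrics, flags))
```

A controller that is best on every metric scores 1.0 and one that is worst on every metric scores 0.0, which is what makes a single-number ranking possible when the individual metrics disagree.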
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for published papers, which are available in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
25 pages, 1679 KiB  
Article
SUCCEED: Sharing Upcycling Cases with Context and Evaluation for Efficient Software Development
by Takuya Nakata, Sinan Chen, Sachio Saiki and Masahide Nakamura
Information 2023, 14(9), 518; https://doi.org/10.3390/info14090518 - 21 Sep 2023
Cited by 2 | Viewed by 1489
Abstract
Software upcycling, a form of software reuse, is a concept that efficiently generates novel, innovative, and value-added development projects by utilizing knowledge extracted from past projects. However, how to integrate the materials derived from these projects for upcycling remains uncertain. This study defines a systematic model for upcycling cases and develops the Sharing Upcycling Cases with Context and Evaluation for Efficient Software Development (SUCCEED) system to support the implementation of new upcycling initiatives by effectively sharing cases within the organization. To ascertain the efficacy of upcycling within our proposed model and system, we formulated three research questions and conducted two distinct experiments. Through surveys, we identified motivations and characteristics of shared upcycling-relevant development cases. Development tasks were divided into groups, those that employed the SUCCEED system and those that did not, in order to discern the enhancements brought about by upcycling. As a result of this research, we accomplished a comprehensive structuring of both technical and experiential knowledge beneficial for development, a feat previously unrealizable through conventional software reuse, and successfully realized reuse in a proactive and closed environment through construction of the wisdom of crowds for upcycling cases. Consequently, it becomes possible to systematically perform software upcycling by leveraging knowledge from existing projects for streamlining of software development. Full article
(This article belongs to the Topic Software Engineering and Applications)
Figures:
Figure 1: Software upcycling flow.
Figure 2: The SUCCEED system architecture.
Figure 3: Different potential project materials.
Figure 4: Upcycling case data model.
Figure 5: Authentication function upcycle example.
Figure 6: Multiple types of test cases upcycling example.
Figure 7: Upcycling flow.
Figure 8: System implementation architecture.
Figure 9: Web UI for PC.
Figure 10: Web UI for smartphone.
Figure 11: Classification of each material by type (allowing for duplication).
Figure 12: Results of time measurements: (a) time to detect fingers in Task 1; (b) time to search for agent in Task 1; (c) time to search for finger detection in Task 1; (d) time to search for shift algorithm in Task 2; (e) time to register the case of Task 1; (f) time to register the case of Task 2.
Figure 13: Box-and-whisker diagram of case registration times for all tasks.
18 pages, 3095 KiB  
Article
Machine Translation of Electrical Terminology Constraints
by Zepeng Wang, Yuan Chen and Juwei Zhang
Information 2023, 14(9), 517; https://doi.org/10.3390/info14090517 - 20 Sep 2023
Cited by 1 | Viewed by 1362
Abstract
In practical applications, the accuracy of domain terminology translation is an important criterion for the performance evaluation of domain machine translation models. Aiming at the problem of phrase mismatch and improper translation caused by word-by-word translation of English terminology phrases, this paper constructs a dictionary of terminology phrases in the field of electrical engineering and proposes three schemes to integrate the dictionary knowledge into the translation model. Scheme 1 replaces the terminology phrases of the source language. Scheme 2 uses the residual connection at the encoder end after the terminology phrase is replaced. Scheme 3 uses a segmentation method of combining character segmentation and terminology segmentation for the target language and uses an additional loss module in the training process. The results show that all three schemes are superior to the baseline model in two aspects: BLEU value and correct translation rate of terminology words. In the test set, the highest accuracy of terminology words was 48.3% higher than that of the baseline model. The BLEU value is up to 3.6 higher than the baseline model. The phenomenon is also analyzed and discussed in this paper. Full article
(This article belongs to the Section Artificial Intelligence)
Figures:
Figure 1: Transformer structure.
Figure 2: Residual connection encoder.
Figure 3: Terminology word additional loss module.
Figure 4: Training process.
Figure 5: BLEU value comparison [13,21,22].
Figure 6: Comparison of correct translation rate of terminological words [13,21,22].
Figure 7: The influence of hyper-parameter λ on BLEU.
Figure 8: The effect of hyper-parameter λ on the accuracy of term words.
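The abstract for this article describes Scheme 1 as replacing source-language terminology phrases using a constructed dictionary. As a rough illustration of that idea (the dictionary entries, matching rule, and output convention below are hypothetical, not the authors' exact method), a longest-match phrase replacement might look like this:

```python
# Hedged sketch of dictionary-driven terminology replacement ("Scheme 1" is
# only summarized in the abstract; entries and conventions here are
# illustrative, not from the paper).

term_dict = {
    "circuit breaker": "断路器",
    "transformer winding": "变压器绕组",
}

def replace_terms(sentence, dictionary):
    """Replace terminology phrases with their target-language translations,
    preferring the longest matching phrase at each position."""
    words = sentence.lower().split()
    phrases = sorted(dictionary, key=lambda p: -len(p.split()))  # longest first
    out, i = [], 0
    while i < len(words):
        for phrase in phrases:
            n = len(phrase.split())
            if words[i:i + n] == phrase.split():
                out.append(dictionary[phrase])
                i += n
                break
        else:
            out.append(words[i])
            i += 1
    return " ".join(out)

print(replace_terms("Check the circuit breaker first", term_dict))
```

Replacing whole phrases rather than individual words is exactly what avoids the word-by-word mistranslation of multi-word terms that the abstract identifies as the core problem.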
17 pages, 3708 KiB  
Article
Attacking Deep Learning AI Hardware with Universal Adversarial Perturbation
by Mehdi Sadi, Bashir Mohammad Sabquat Bahar Talukder, Kaniz Mishty and Md Tauhidur Rahman
Information 2023, 14(9), 516; https://doi.org/10.3390/info14090516 - 19 Sep 2023
Viewed by 2057
Abstract
Universal adversarial perturbations are image-agnostic and model-independent noise that, when added to any image, can mislead the trained deep convolutional neural networks into the wrong prediction. Since these universal adversarial perturbations can seriously jeopardize the security and integrity of practical deep learning applications, the existing techniques use additional neural networks to detect the existence of these noises at the input image source. In this paper, we demonstrate an attack strategy that, when activated by rogue means (e.g., malware, trojan), can bypass these existing countermeasures by augmenting the adversarial noise at the AI hardware accelerator stage. We demonstrate the accelerator-level universal adversarial noise attack on several deep learning models using co-simulation of the software kernel of the Conv2D function and the Verilog RTL model of the hardware under the FuseSoC environment. Full article
(This article belongs to the Special Issue Hardware Security and Trust)
Figures:
Figure 1: (a) Deep NN; (b) deep CNN; (c) convolution in CNN.
Figure 2: AI accelerator with PE/MAC arrays. (a) Systolic architecture; (b) SIMD architecture.
Figure 3: Universal adversarial perturbation added with a clean image to deceive the trained deep learning model.
Figure 4: AI hardware accelerator architecture.
Figure 5: Using the additive property of convolution, the same output feature map can be generated by noise interleaving without directly adding noise to the input. (a) Convolution with direct adversarial noise addition with input; (b) noise interleaved with input, filter duplicated, and vertical stride doubled to accomplish the same task.
Figure 6: (a) Clean input image; (b) universal adversarial perturbation; (c) adversarial perturbation interleaved with the input image.
Figure 7: Input image data positioning in memory: (a) regular scenario; (b) under adversarial attack, noise data are interleaved.
Figure 8: Normalized prediction accuracy changes for (i) clean, (ii) adversarially perturbed, (iii) low-magnitude random-noise-augmented, and (iv) high-magnitude random-noise-added images for the ImageNet benchmark. (a) Top-1 accuracy; (b) Top-5 accuracy.
Figure 9: An example of Conv2D code. Under attack, the machine code can be compromised to inject universal adversarial perturbation in the images. (a) Regular code. (b) Merging adversarial perturbation by image interleaving and altering the stride.
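The attack described above hinges on the linearity of convolution (the paper's Figure 5): the perturbation's contribution to the output feature map equals the convolution of the perturbation alone, so it can be computed separately, via interleaving, and folded into the result without ever adding noise to the input image. A minimal NumPy check of that identity (array sizes are arbitrary, not the paper's setup):

```python
# Linearity of convolution: conv(x + n, w) == conv(x, w) + conv(n, w), so the
# perturbation's contribution can be computed separately at the accelerator
# and summed into the output feature map. Minimal NumPy check; sizes arbitrary.
import numpy as np

def conv2d(img, kernel):
    """Plain 2D 'valid' cross-correlation, stride 1."""
    H, W = img.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8))        # clean input
n = rng.normal(size=(8, 8))        # universal adversarial perturbation
w = rng.normal(size=(3, 3))        # convolution filter

lhs = conv2d(x + n, w)             # noise added directly at the input
rhs = conv2d(x, w) + conv2d(n, w)  # noise contribution computed separately
print(np.allclose(lhs, rhs))       # → True
```

This is why input-side noise detectors can be bypassed: the image that arrives at the accelerator is clean, and the perturbation only appears in the computed feature maps.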
27 pages, 2187 KiB  
Article
A Conceptual Consent Request Framework for Mobile Devices
by Olha Drozd and Sabrina Kirrane
Information 2023, 14(9), 515; https://doi.org/10.3390/info14090515 - 19 Sep 2023
Viewed by 1596
Abstract
The General Data Protection Regulation (GDPR) identifies consent as one of the legal bases for personal data processing and requires that it should be freely given, specific, informed, unambiguous, understandable, and easily revocable. Unfortunately, current technical mechanisms for obtaining consent often do not comply with these requirements. The conceptual consent request framework for mobile devices that is presented in this paper, addresses this issue by following the GDPR requirements on consent and offering a unified user interface for mobile apps. The proposed conceptual framework is evaluated via the development of a City Explorer app with four consent request approaches (custom, functionality-based, app-based, and usage-based) integrated into it. The evaluation shows that the functionality-based consent, which was integrated into the City Explorer app, achieved the best evaluation results and the highest average system usability scale (SUS) score. The functionality-based consent also scored the highest number of SUS points among the four consent templates when evaluated separately from the app. Additionally, we discuss the framework’s reusability and its integration into other mobile apps of different contexts. Full article
(This article belongs to the Special Issue Addressing Privacy and Data Protection in New Technological Trends)
Figures:
Figure 1: Schematic depiction of the CURLED concept for the info map module of the City Explorer app.
Figure 2: Screenshots of the custom consent request integrated into the City Explorer app: (a) list of purposes; (b) detailed overview of the data processing for a specific purpose.
Figure 3: Screenshots of the functionality-based consent request integrated into the City Explorer app: (a) list of purpose groupings; (b) list of purposes of a specific purpose grouping; (c) detailed overview of the data processing for a specific purpose.
Figure 4: Screenshots of the app-based consent request integrated into the City Explorer app: (a) list of purpose groupings; (b) list of purposes of a specific purpose grouping; (c) detailed overview of the data processing for a specific purpose.
Figure 5: Screenshots of the usage-based consent request integrated into the City Explorer app: (a) list of purpose groupings; (b) list of purposes of a specific purpose grouping; (c) detailed overview of the data processing for a specific purpose.
Figure 6: (a) Convenience vs. privacy. (b) Satisfaction with the City Explorer app prototype. (c) Satisfaction with the way the consent was requested.
Figure 7: (a) Consent memorability. (b) Assessment of the time needed for task completion. (c) Assessment of the time needed to give consent.
Figure 8: Adjectives that describe the City Explorer app.
Figure 9: Adjectives that describe consent request options.
Figure 10: Average SUS scores of the City Explorer app and of its consent request options.
33 pages, 6659 KiB  
Article
Development of a Virtual Reality Escape Room Game for Emotion Elicitation
by Inês Oliveira, Vítor Carvalho, Filomena Soares, Paulo Novais, Eva Oliveira and Lisa Gomes
Information 2023, 14(9), 514; https://doi.org/10.3390/info14090514 - 19 Sep 2023
Cited by 3 | Viewed by 2764
Abstract
In recent years, the role of emotions in digital games has gained prominence. Studies confirm emotions’ substantial impact on gaming, influencing interactions, effectiveness, efficiency, and satisfaction. Combining gaming dynamics, Virtual Reality (VR) and the immersive Escape Room genre offers a potent avenue through which to evoke emotions and create a captivating player experience. The primary objective of this study is to explore VR game design specifically for the elicitation of emotions, in combination with the Escape Room genre. We also seek to understand how players perceive and respond to emotional stimuli within the game. Our study involved two distinct groups of participants: Nursing and Games. We employed a questionnaire to collect data on emotions experienced by participants, the game elements triggering these emotions, and their overall user experience. This study demonstrates the potential of VR technology and the Escape Room genre as a powerful means of eliciting emotions in players. “Escape VR: The Guilt” serves as a successful example of how immersive VR gaming can evoke emotions and captivate players. Full article
Figures:
Figure 1: VR Escape Room game puzzle examples.
Figure 2: Relationship between time and VR sickness level felt by non-teleportation and teleportation groups [7].
Figure 3: Example of pages of the diary.
Figure 4: Example of post-its.
Figure 5: Example of a grabbable.
Figure 6: Example of an objective.
Figure 7: Trash opened with grabbable key and cockroaches.
Figure 8: The chest puzzle.
Figure 9: The chest puzzle with flashlight.
Figure 10: The diary pages puzzle.
Figure 11: TV with clues.
Figure 12: Wooden plank.
Figure 13: (a) Locker; (b) player in the mirror.
Figure 14: Game controls (1: thumbsticks (joysticks); 2: menu button; 3: Oculus button; 4: grip buttons; 5: triggers/grip (back buttons)).
Figure 15: Section 1 questionnaire results.
Figure 16: Average intensity values for emotions.
Figure 17: Average ratio for the questionnaire's last section of evaluated questions.
28 pages, 1304 KiB  
Review
Exploring the State of Machine Learning and Deep Learning in Medicine: A Survey of the Italian Research Community
by Alessio Bottrighi and Marzio Pennisi
Information 2023, 14(9), 513; https://doi.org/10.3390/info14090513 - 18 Sep 2023
Cited by 1 | Viewed by 2625
Abstract
Artificial intelligence (AI) is becoming increasingly important, especially in the medical field. While AI has been used in medicine for some time, its growth in the last decade is remarkable. Specifically, machine learning (ML) and deep learning (DL) techniques in medicine have been increasingly adopted due to the growing abundance of health-related data, the improved suitability of such techniques for managing large datasets, and more computational power. ML and DL methodologies are fostering the development of new “intelligent” tools and expert systems to process data, to automatize human–machine interactions, and to deliver advanced predictive systems that are changing every aspect of the scientific research, industry, and society. The Italian scientific community was instrumental in advancing this research area. This article aims to conduct a comprehensive investigation of the ML and DL methodologies and applications used in medicine by the Italian research community in the last five years. To this end, we selected all the papers published in the last five years with at least one of the authors affiliated to an Italian institution that in the title, in the abstract, or in the keywords present the terms “machine learning” or “deep learning” and reference a medical area. We focused our research on journal papers under the hypothesis that Italian researchers prefer to present novel but well-established research in scientific journals. We then analyzed the selected papers considering different dimensions, including the medical topic, the type of data, the pre-processing methods, the learning methods, and the evaluation methods. As a final outcome, a comprehensive overview of the Italian research landscape is given, highlighting how the community has increasingly worked on a very heterogeneous range of medical problems. Full article
(This article belongs to the Special Issue Computer Vision, Pattern Recognition and Machine Learning in Italy)
Figures:
Figure 1: Number of papers indexed by SCOPUS per year on machine/deep learning for the medical field (see Section 2.1 for details about the query).
Figure 2: Papers by country on machine learning/deep learning for the medical field, indexed by SCOPUS.
Figure 3: Graphical view of the framework applied in this work.
Figure 4: Graphical representation of the data type taxonomy in medicine for ML/DL.
Figure 5: Documents published by the Italian community in machine learning/deep learning for the medical field in the years 2018–2022, indexed by SCOPUS.
Figure 6: The top 10 funding sponsors acknowledged in the papers.
Figure 7: The top 10 coauthoring countries with Italian researchers.
Figure 8: Source journals of papers considered in the systematic review.
Figure 9: Medical topics considered in the papers. Percentages are rounded to the first decimal place.
Figure 10: Distribution of data types used. Percentages are rounded to the first decimal place.
Figure 11: Distribution of pre-processing types used. Percentages are rounded to the first decimal place.
Figure 12: Distribution of the ML/DL approaches used. Percentages are rounded to the first decimal place.
22 pages, 17315 KiB  
Article
Progressive-Augmented-Based DeepFill for High-Resolution Image Inpainting
by Muzi Cui, Hao Jiang and Chaozhuo Li
Information 2023, 14(9), 512; https://doi.org/10.3390/info14090512 - 18 Sep 2023
Cited by 2 | Viewed by 1809
Abstract
Image inpainting aims to synthesize missing regions in images that are coherent with the existing visual content. Generative adversarial networks have made significant strides in the development of image inpainting. However, existing approaches heavily rely on the surrounding pixels while ignoring that the boundaries might be uninformative or noisy, leading to blurred images. As complementary, global visual features from the remote image contexts depict the overall structure and texture of the vanilla images, contributing to generating pixels that blend seamlessly with the existing visual elements. In this paper, we propose a novel model, PA-DeepFill, to repair high-resolution images. The generator network follows a novel progressive learning paradigm, starting with low-resolution images and gradually improving the resolutions by stacking more layers. A novel attention-based module, the gathered attention block, is further integrated into the generator to learn the importance of different distant visual components adaptively. In addition, we have designed a local discriminator that is more suitable for image inpainting tasks, multi-task guided mask-level local discriminator based PatchGAN, which can guide the model to distinguish between regions from the original image and regions completed by the model at a finer granularity. This local discriminator can capture more detailed local information, thereby enhancing the model’s discriminative ability and resulting in more realistic and natural inpainted images. Our proposal is extensively evaluated over popular datasets, and the experimental results demonstrate the superiority of our proposal. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)
Figures:
Figure 1: An illustration of different inpainting models. Given an image (a) and its masked version (b), CA [19] (c), PConv [22] (d), and Deepfillv2 [20] (e) generate distorted visual features.
Figure 2: The overview of the proposed PA-DeepFill model.
Figure 3: The framework of the pixel attention block.
Figure 4: The framework of the channel attention block.
Figure 5: The framework of the smooth transition (ST) module, where (a) is 32×32, (b) is the transition layer from 32×32 to 64×64, and (c) is 64×64.
Figure 6: Qualitative comparisons of PA-DeepFill with CA [19], PConv [22], Deepfillv2 [20], CTSDG [38], and AOT-GAN [18] on COCO [26]. Each column shows the overall repair effect and local repair details. All the images are center-cropped and resized to 512×512.
Figure 7: The qualitative analysis of the pixel attention block.
Figure 8: The qualitative analysis of the channel attention block.
Figure 9: The qualitative analysis of the image enhancement module.
Figure 10: The qualitative analysis of the image enhancement module.
Figure 11: Qualitative analysis of receptive field size.
Figure 12: Visual results of PA-DeepFill in logo removal.
Figure 13: Visual results of PA-DeepFill in object editing.
Figure 14: Visual results of PA-DeepFill in face editing.
14 pages, 259 KiB  
Article
Analysis of the Current Situation of Big Data MOOCs in the Intelligent Era Based on the Perspective of Improving the Mental Health of College Students
by Hongfeng Sang, Liyi Ma and Nan Ma
Information 2023, 14(9), 511; https://doi.org/10.3390/info14090511 - 18 Sep 2023
Viewed by 1762
Abstract
A three-dimensional MOOC analysis framework was developed, focusing on platform design, organizational mechanisms, and course construction. This framework aims to investigate the current situation of big data MOOCs in the intelligent era, particularly from the perspective of improving the mental health of college students; moreover, the framework summarizes the construction experience and areas for improvement. The construction of 525 big data courses on 16 MOOC platforms is compared and analyzed from three aspects: the platform (including platform construction, resource quantity, and resource quality), organizational mechanism (including the course opening unit, teacher team, and learning norms), and course construction (including course objectives, teaching design, course content, teaching organization, implementation, teaching management, and evaluation). Drawing from the successful practices of international big data MOOCs and excellent Chinese big data MOOCs, and considering the requirements of authoritative government documents, such as the no. 8 document (J.G. [2019]), no. 3 document (J.G. [2015]), no. 1 document (J.G. [2022]), as well as the “Educational Information Technology Standard CELTS-22—Online Course Evaluation Standard”, recommendations about the platform, organizational mechanism, and course construction are provided for the future development of big data MOOCs in China. Full article
Figures:
Figure 1: Three-dimensional MOOC analysis framework.
13 pages, 397 KiB  
Article
Knowledge Graph Based Recommender for Automatic Playlist Continuation
by Aleksandar Ivanovski, Milos Jovanovik, Riste Stojanov and Dimitar Trajanov
Information 2023, 14(9), 510; https://doi.org/10.3390/info14090510 - 16 Sep 2023
Viewed by 2058
Abstract
In this work, we present a state-of-the-art solution for automatic playlist continuation through a knowledge graph-based recommender system. By integrating representational learning with graph neural networks and fusing multiple data streams, the system effectively models user behavior, leading to accurate and personalized recommendations. We provide a systematic and thorough comparison of our results with existing solutions and approaches, demonstrating the remarkable potential of graph-based representation in improving recommender systems. Our experiments reveal substantial enhancements over existing approaches, further validating the efficacy of this novel approach. Additionally, through comprehensive evaluation, we highlight the robustness of our solution in handling dynamic user interactions and streaming data scenarios, showcasing its practical viability and promising prospects for next-generation recommender systems. Full article
(This article belongs to the Special Issue Advances in Machine Learning and Intelligent Information Systems)
Figures:
Figure 1: Overview of methodology.
Figure 2: Sub-graph of the knowledge graph.
Figure 3: Training and validation cross-entropy loss for node classification embeddings.
Figure 4: Training and validation MSE loss for link prediction embeddings.
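The abstract for this article describes playlist continuation as link prediction over learned graph embeddings. As a toy illustration of that last scoring step only (the paper learns node embeddings with graph neural networks; here random vectors stand in for learned embeddings, and all names are hypothetical):

```python
# Illustrative only: random vectors stand in for GNN-learned node embeddings,
# and playlist continuation is reduced to dot-product link-prediction scoring
# against the playlist's mean embedding.
import numpy as np

rng = np.random.default_rng(42)
tracks = ["t1", "t2", "t3", "t4", "t5"]
emb = {t: rng.normal(size=16) for t in tracks}  # stand-in embeddings

def continue_playlist(playlist, candidates, k=2):
    """Rank candidate tracks by similarity to the playlist's mean embedding."""
    profile = np.mean([emb[t] for t in playlist], axis=0)
    scored = sorted(candidates, key=lambda t: -float(emb[t] @ profile))
    return scored[:k]

print(continue_playlist(["t1", "t2"], ["t3", "t4", "t5"]))
```

In the actual system, the embeddings would encode graph structure (tracks, artists, playlists, and their relations), which is what lets the dot product act as a meaningful link-prediction score.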
19 pages, 1831 KiB  
Article
An Empirical Study of Deep Learning-Based SS7 Attack Detection
by Yuejun Guo, Orhan Ermis, Qiang Tang, Hoang Trang and Alexandre De Oliveira
Information 2023, 14(9), 509; https://doi.org/10.3390/info14090509 - 16 Sep 2023
Viewed by 2472
Abstract
Signalling protocols are responsible for fundamental tasks such as initiating and terminating communication and identifying the state of the communication in telecommunication core networks. Signalling System No. 7 (SS7), Diameter, and GPRS Tunneling Protocol (GTP) are the main protocols used in 2G to 4G, while 5G uses standard Internet protocols for its signalling. Despite their distinct features, and especially their security guarantees, these protocols are most vulnerable to attacks in roaming scenarios, i.e., attacks that target the location update function call for subscribers located in a visiting network. The literature tells us that rule-based detection mechanisms are ineffective against such attacks, while the hope lies in deep learning (DL)-based solutions. In this paper, we provide a large-scale empirical study of state-of-the-art DL models, eight supervised and five semi-supervised, for detecting attacks in the roaming scenario. Our experiments use a real-world dataset and a simulated dataset for SS7, and they can be straightforwardly repeated for other signalling protocols once corresponding datasets are available. The results show that semi-supervised DL models generally outperform supervised ones, since they leverage both labeled and unlabeled data for training. Nevertheless, the ensemble-based supervised model NODE outperforms the others in its category and some in the semi-supervised category. Among all models, the semi-supervised model PReNet performs best on the Recall and F1 metrics when all unlabeled data are used for training, and it is also the most stable. Our experiments also show that the performance of different semi-supervised models can differ considerably depending on the amount of unlabeled data used for training. Full article
(This article belongs to the Section Information Security and Privacy)
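The Recall and F1 comparisons reported above can be illustrated with a minimal metric computation. This is a sketch only: the labels and predictions below are invented for the example, not drawn from the paper's datasets.

```python
def precision_recall_f1(y_true, y_pred):
    """Binary-classification metrics; label 1 marks an attack event."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical ground truth and detector output over 8 signalling events.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 1, 0, 1, 0]
p, r, f1 = precision_recall_f1(y_true, y_pred)
```

With one missed attack (a false negative) and one false alarm (a false positive), precision and recall are both 0.75 here, which is why F1, their harmonic mean, is a natural single summary for this kind of comparison.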
Figures:
Figure 1: Mobile subscribers (in billions) by technology 2016–2027 (expected). Figure source: https://tinyurl.com/4wze6un5 (accessed on 11 September 2023).
Figure 2: Holistic view of 2G/3G core networks.
Figure 3: Percentage of successful attacks against SS7 with respect to message categories, based on the presence of a filtering-based defense mechanism (Credit: [11]).
Figure 4: The update location call flow in the SS7 protocol and SMS interception attack using update location (Credit: [8]).
Figure 5: Overview of the empirical study.
Figure 6: Mutual information and mean absolute difference results on a labeled real-world dataset.
Figure 7: Example of positive and negative instances.
Figure 8: Comparison results of 13 models on the real-world (top) and simulated (bottom) datasets visualized using a box plot. For all metrics, the higher, the better.
19 pages, 2438 KiB  
Article
A Deep Learning Approach for Predictive Healthcare Process Monitoring
by Ulises Manuel Ramirez-Alcocer, Edgar Tello-Leal, Gerardo Romero and Bárbara A. Macías-Hernández
Information 2023, 14(9), 508; https://doi.org/10.3390/info14090508 - 16 Sep 2023
Cited by 5 | Viewed by 1859
Abstract
In this paper, we propose a deep learning-based approach to predict the next event in hospital organizational process models following the guidance of predictive process mining. This method provides value for the planning and allocating of resources since each trace linked to a case shows the consecutive execution of events in a healthcare process. The predictive model is based on a long short-term memory (LSTM) neural network that achieves high accuracy in the training and testing stages. In addition, a framework to implement the LSTM neural network is proposed, comprising stages from the preprocessing of the raw data to selecting the best LSTM model. The effectiveness of the prediction method is evaluated through four real-life event logs that contain historical information on the execution of the processes of patient transfer orders between hospitals, sepsis care cases, billing of medical services, and patient care management. In the test stage, the LSTM model reached values of 0.98, 0.91, 0.85, and 0.81 in the accuracy metric, and in the evaluation of the prediction of the next event using the 10-fold cross-validation technique, values of 0.94, 0.88, 0.84, and 0.81 were obtained for the four previously mentioned event logs. In addition, the performance of the LSTM prediction model was evaluated with the precision, recall, F1-score, and area under the receiver operating characteristic (ROC) curve (AUC) metrics, obtaining high scores very close to 1. The experimental results suggest that the proposed method achieves acceptable measures in predicting the next event regardless of whether an input event or a set of input events is used. Full article
(This article belongs to the Special Issue Information Systems in Healthcare)
Figures:
Figure 1: Next event prediction framework overview.
Figure 2: Operation schema of the segmentation task.
Figure 3: One-hot vector representation of the output activities.
Figure 4: Architecture of a single LSTM cell.
Figure 5: Receiver operating characteristic (ROC) curves and their area under the curve (AUC) for the worst-performing classes in the validation stage of the experiment.
Figure 6: Loss function in the training and validation phases by epoch.
Figure 7: Accuracy in the training and validation phases by epoch.
Figure 8: ROC curves using all classes of the sepsis event log.
Figure 9: ROC curves using all classes of the hospital billing event log.
Figure 10: ROC curves using all classes of the healthcare collaboration event log.
17 pages, 5105 KiB  
Article
Investigation of a Hybrid LSTM + 1DCNN Approach to Predict In-Cylinder Pressure of Internal Combustion Engines
by Federico Ricci, Luca Petrucci, Francesco Mariani and Carlo Nazareno Grimaldi
Information 2023, 14(9), 507; https://doi.org/10.3390/info14090507 - 15 Sep 2023
Cited by 3 | Viewed by 1428
Abstract
The control of internal combustion engines is becoming increasingly challenging due to customers' requirements for growing performance and ever-stringent emission regulations. Therefore, significant computational efforts are required to manage the large amount of data coming from the field for engine optimization, leading to increased operating times and costs. Machine-learning techniques are being increasingly used in the automotive field as virtual sensors, fault detection systems, and performance-optimization applications for their real-time and low-cost implementation. Among them, the combination of long short-term memory (LSTM) together with one-dimensional convolutional neural networks (1DCNN), i.e., LSTM + 1DCNN, has proved to be a promising tool for signal analysis. The architecture exploits the CNN's ability to combine feature extraction and classification, creating a single adaptive learning body, together with the ability of the LSTM to follow the sequential nature of sensor measurements over time. The current research focus is on evaluating the possibility of integrating virtual sensors into the on-board control system. Specifically, the primary objective is to assess and harness the potential of advanced machine-learning technologies to replace physical sensors. In realizing this goal, the present work establishes the first step by evaluating the forecasting performance of an LSTM + 1DCNN architecture. Experimental data coming from a three-cylinder spark-ignition engine under different operating conditions are used to predict the engine's in-cylinder pressure traces. Since using in-cylinder pressure transducers in road cars is not economically viable, adopting advanced machine-learning technologies becomes crucial to avoid structural modifications while preserving engine integrity. The results show that LSTM + 1DCNN is particularly suited for the prediction of signals characterized by a higher variability. 
In particular, it consistently outperforms other architectures utilized for comparative purposes, achieving average error percentages below 2%. As cycle-to-cycle variability increases, LSTM + 1DCNN reaches average error percentages below 1.5%, demonstrating the architecture’s potential for replacing physical sensors. Full article
(This article belongs to the Special Issue Computer Vision, Pattern Recognition and Machine Learning in Italy)
Figures:
Figure 1: (a) Layout of the test bench; (b) picture of the test bench.
Figure 2: Example of the acquired quantities as a function of the crank angle degree (CAD): (a) in-cylinder pressure P_cyl, (b) pressure at the intake port in bar P_int, (c) pressure at the exhaust port in bar P_exh, (d) rotational speed of the engine in rpm (EngineSpeed), and (e) in-cylinder volume V_cyl.
Figure 3: The Shapley analysis: a comprehensive understanding of the significance of each feature in predicting in-cylinder pressure on a global scale.
Figure 4: (a) Description of the complete dataset used in terms of the analyzed cases and the considered number of variables and samples; (b) breakdown of the input and output parameters for each analyzed case according to the preliminary sensitivity analysis; (c) segmentation of the dataset for the training and testing sessions. In total, 80% of the data were used for the training session and the remaining 20% during the test session to predict the output, i.e., P_cyl.
Figure 5: (a) Predictive scheme and (b) the internal structure of the LSTM and its division into gates.
Figure 6: Trend of the loss value of the LSTM + 1DCNN architecture that performed best during the training session.
Figure 7: Training loss for the tested architecture during training sessions at 1500 rpm and low-load conditions.
Figure 8: Test performance of the compared architectures, plotting the observed and forecasted trends; the percentage error is also shown to highlight the quality of the predictions.
Figure 9: Regression prediction chart of the different models tested: (a) BP; (b) LSTM; (c) LSTM + 1DCNN; and (d) NARX.
Figure 10: Comparison between NARX (upper) and LSTM + 1DCNN (bottom) performance at 3250 rpm.
Figure 11: Comparison between NARX (upper) and LSTM + 1DCNN (bottom) performance at 2500 rpm, with corresponding percentage errors (right).
24 pages, 2318 KiB  
Article
Evaluation of 60 GHz Wireless Connectivity for an Automated Warehouse Suitable for Industry 4.0
by Rahul Gulia, Abhishek Vashist, Amlan Ganguly, Clark Hochgraf and Michael E. Kuhl
Information 2023, 14(9), 506; https://doi.org/10.3390/info14090506 - 15 Sep 2023
Cited by 1 | Viewed by 1337
Abstract
The fourth industrial revolution focuses on the digitization and automation of supply chains, resulting in a significant transformation of methods for goods production and delivery systems. To enable this, automated warehousing demands unprecedented vehicle-to-vehicle and vehicle-to-infrastructure communication rates and reliability. The 60 GHz frequency band can deliver multi-gigabit/second data rates to satisfy the increasing demands of network connectivity by smart warehouses. In this paper, we aim to investigate the network connectivity in the 60 GHz millimeter-wave band inside an automated warehouse. A key hindrance to robust and high-speed network connectivity, especially at mmWave frequencies, stems from numerous non-line-of-sight (nLOS) paths in the transmission medium due to various interacting objects such as metal shelves and storage boxes. The continual change in the warehouse storage configuration significantly affects the multipath reflected components and shadow fading effects, thus adding complexity to establishing stable, yet fast, network coverage. In this study, network connectivity in an automated warehouse is analyzed at 60 GHz using Network Simulator-3 (NS-3) channel simulations. We examine a simple warehouse model with several metallic shelves and storage materials of standard proportions. Our investigation indicates that the indoor warehouse network performance relies on the line-of-sight and nLOS propagation paths, the existence of reflective materials, and the autonomous material handling agents present around the access point (AP). In addition, we discuss the network performance under varied conditions, including the AP height and the storage materials on the warehouse shelves. We also analyze the network performance in each aisle of the warehouse, together with its SINR heatmap, to understand the 60 GHz network connectivity. Full article
(This article belongs to the Special Issue Wireless IoT Network Protocols II)
Figures:
Figure 1: Automated warehouse model used to analyze LOS and nLOS propagation characteristics, showing the location of the AP and AMHA-1 and -2's initial positions, A and C, respectively. B and D denote AMHA final positions.
Figure 2: Simulation flow adapted for this paper. Colored blocks represent this paper's contributions.
Figure 3: Path loss (PL) heatmap simulated through the proposed NS-3 platform.
Figure 4: An automated warehouse model used to analyze the network capacity, showing the location of the AP and AMHAs (1–10) with their initial positions and directions of movement.
Figure 5: Mean throughput vs. offered load in a stationary AMHA environment.
Figure 6: Mean delay vs. offered load in a stationary AMHA environment.
Figure 7: Mean throughput vs. offered load in a dynamic AMHA environment.
Figure 8: Mean delay vs. offered load in a dynamic AMHA environment.
Figure 9: SINR distribution for various AP heights in the LOS case.
Figure 10: SINR distribution for various AP heights in the nLOS case.
Figure 11: SINR distribution for various AP heights in the LOS case.
Figure 12: SINR distribution for various AP heights in the nLOS case.
Figure 13: SINR heatmap for warehouse model-1.
Figure 14: SINR heatmap for warehouse model-2.
Figure 15: SINR heatmap for warehouse model-1: (a) mean SINR (dB) in each aisle; (b) SINR standard deviation (dB) in each aisle; (c) Δ SINR (dB) moving from the main LOS aisle (east–west) to the nLOS aisles (north–south).
Figure 16: SINR heatmap for warehouse model-2: (a) mean SINR (dB) in each aisle; (b) SINR standard deviation (dB) in each aisle; (c) Δ SINR (dB) moving from the main LOS aisle to the nLOS aisles.
14 pages, 904 KiB  
Article
Enhancing Personalized Educational Content Recommendation through Cosine Similarity-Based Knowledge Graphs and Contextual Signals
by Christos Troussas, Akrivi Krouska, Panagiota Tselenti, Dimitrios K. Kardaras and Stavroula Barbounaki
Information 2023, 14(9), 505; https://doi.org/10.3390/info14090505 - 14 Sep 2023
Cited by 3 | Viewed by 2243
Abstract
The extensive pool of content within educational software platforms can often overwhelm learners, leaving them uncertain about what materials to engage with. In this context, recommender systems offer significant support by customizing the content delivered to learners, alleviating the confusion and enhancing the learning experience. To this end, this paper presents a novel approach for recommending adequate educational content to learners via the use of knowledge graphs. In our approach, the knowledge graph encompasses learners, educational entities, and relationships among them, creating an interconnected framework that drives personalized e-learning content recommendations. Moreover, the presented knowledge graph has been enriched with contextual signals referring to various learners’ characteristics, such as prior knowledge level, learning style, and current learning goals. To refine the recommendation process, the cosine similarity technique was employed to quantify the likeness between a learner’s preferences and the attributes of educational entities within the knowledge graph. The above methodology was incorporated in an intelligent tutoring system for learning the programming language Java to recommend content to learners. The software was evaluated with highly promising results. Full article
(This article belongs to the Collection Knowledge Graphs for Search and Recommendation)
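The cosine similarity step described above can be sketched in a few lines. This is an illustrative toy only: the learner profile, feature dimensions, and content names are invented for the example and do not come from the paper's system.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical learner profile and content attribute vectors, with
# dimensions standing for (prior knowledge, visual style, goal match).
learner = [0.8, 0.2, 1.0]
contents = {
    "java_loops_video": [0.9, 0.1, 1.0],
    "java_oop_text":    [0.2, 0.9, 0.3],
}

# Rank educational entities by similarity to the learner profile.
ranked = sorted(contents,
                key=lambda c: cosine_similarity(learner, contents[c]),
                reverse=True)
```

Because cosine similarity compares vector directions rather than magnitudes, a learner's preference profile and a content item's attribute vector can be matched even when they are on different scales.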
Figures:
Figure 1: Knowledge graph.
Figure 2: Questionnaire results.
35 pages, 9764 KiB  
Article
Using ChatGPT and Persuasive Technology for Personalized Recommendation Messages in Hotel Upselling
by Manolis Remountakis, Konstantinos Kotis, Babis Kourtzis and George E. Tsekouras
Information 2023, 14(9), 504; https://doi.org/10.3390/info14090504 - 13 Sep 2023
Cited by 8 | Viewed by 4448
Abstract
Recommender systems have become indispensable tools in the hotel hospitality industry, enabling personalized and tailored experiences for guests. Recent advancements in large language models (LLMs), such as ChatGPT, and persuasive technologies have opened new avenues for enhancing the effectiveness of those systems. This paper explores the potential of integrating ChatGPT and persuasive technologies for automating and improving hotel hospitality recommender systems. First, we delve into the capabilities of ChatGPT, which can understand and generate human-like text, enabling more accurate and context-aware recommendations. We discuss the integration of ChatGPT into recommender systems, highlighting the ability to analyze user preferences, extract valuable insights from online reviews, and generate personalized recommendations based on guest profiles. Second, we investigate the role of persuasive technology in influencing user behavior and enhancing the persuasive impact of hotel recommendations. By incorporating persuasive techniques, such as social proof, scarcity, and personalization, recommender systems can effectively influence user decision making and encourage desired actions, such as booking a specific hotel or upgrading their room. To investigate the efficacy of ChatGPT and persuasive technologies, we present pilot experiments with a case study involving a hotel recommender system. Our inhouse commercial hotel marketing platform, eXclusivi, was extended with a new software module working with ChatGPT prompts and persuasive ads created for its recommendations. In particular, we developed an intelligent advertisement (ad) copy generation tool for the hotel marketing platform. The proposed approach allows for the hotel team to target all guests in their language, leveraging the integration with the hotel’s reservation system. 
Overall, this paper contributes to the field of hotel hospitality by exploring the synergistic relationship between ChatGPT and persuasive technology in recommender systems, ultimately influencing guest satisfaction and hotel revenue. Full article
(This article belongs to the Special Issue Systems Engineering and Knowledge Management)
Figures:
Figure 1: The overall architecture of eXclusivi's enterprise-level platform.
Figure 2: The structure of eXclusivi's recommendation system.
Figure 3: Input–output information flow in eXclusivi's recommendation technology.
Figure 4: The structure of the knowledge-based recommender system for the wine case scenario.
Figure 5: The structure of the content-based recommender system for the wine case scenario.
Figure 6: The structure of the collaborative filtering recommender system for the wine case scenario.
Figure 7: The user–item feedback matrix, where the dark cells indicate purchases between users and items.
Figure 8: The basic structure of the PROMOTE system.
Figure 9: Persado's Wheel of Emotions.
Figure 10: Combination between Persado's emotion categories and Cialdini's principles in the influential model.
Figure 11: Examples of ChatGPT prompts engineered in Google Sheets (ChatGPT extension) to create ads with emotions.
Figure 12: Examples of ad messages created (in English) for couples massage based on luck.
Figure 13: Upsell's eXclusivi-platform-integrated examples of ad messages created (in German) for couples massage based on excitement emotion (Persado) and liking principle (Cialdini).
Figure 14: Upsell's eXclusivi-platform-integrated examples of ad messages created for couples spa based on encouragement emotion (Persado) and commitment principle (Cialdini).
Figure 15: Manually generated example messages for spa services for all of Persado's emotions and various Cialdini's principles.
Figure 16: ChatGPT-generated example messages for spa services for all of Persado's emotions and various Cialdini's principles.
Figure 17: Messages generated by ChatGPT and used for the fourth experiment, where emotions belonging to the same emotion category are combined with the principle that appears to have the highest rank with respect to that emotion category.
30 pages, 2345 KiB  
Systematic Review
Designing a Chatbot for Contemporary Education: A Systematic Literature Review
by Dimitrios Ramandanis and Stelios Xinogalos
Information 2023, 14(9), 503; https://doi.org/10.3390/info14090503 - 13 Sep 2023
Cited by 6 | Viewed by 3952
Abstract
A chatbot is a technological tool that can simulate a discussion between a human and a software application. This technology has developed rapidly over recent years, and its usage is growing quickly in many sectors, especially education. For this purpose, a systematic literature review was conducted using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) framework to analyze the developments and evolution of this technology in the educational sector during the last 5 years. More precisely, the development methods, practices, and guidelines for building a conversational tutor are examined. The results of this study summarize the gathered knowledge to provide useful information both to educators who would like to develop a conversational assistant for their course and to developers who would like to build chatbot systems in the educational domain. Full article
(This article belongs to the Section Information Applications)
Figures:
Figure 1: PRISMA framework flowchart.
Figure 2: Subject fields of the collected documents.
Figure 3: Distribution of the 73 analyzed documents over time.
Figure 4: The development cycle of an educational agent.
Figure 5: Usage roles of the examined chatbots.
Figure 6: Design suggestions of the examined chatbots.
21 pages, 1433 KiB  
Article
PDD-ET: Parkinson’s Disease Detection Using ML Ensemble Techniques and Customized Big Dataset
by Kalyan Chatterjee, Ramagiri Praveen Kumar, Anjan Bandyopadhyay, Sujata Swain, Saurav Mallik, Aimin Li and Kanad Ray
Information 2023, 14(9), 502; https://doi.org/10.3390/info14090502 - 13 Sep 2023
Cited by 6 | Viewed by 2960
Abstract
Parkinson’s disease (PD) is a neurological disorder affecting the nerve cells. PD gives rise to various neurological conditions, including gradual reduction in movement speed, tremors, limb stiffness, and alterations in walking patterns. Identifying Parkinson’s disease in its initial phases is crucial to preserving the well-being of those afflicted. However, accurately identifying PD in its early phases is intricate due to the aging population. Therefore, in this paper, we harnessed machine learning-based ensemble methodologies and focused on the premotor stage of PD to create a precise and reliable early-stage PD detection model named PDD-ET. We compiled a tailored, extensive dataset encompassing patient mobility, medication habits, prior medical history, rigidity, gender, and age group. The PDD-ET model amalgamates the outcomes of various ML techniques, resulting in an impressive 97.52% accuracy in early-stage PD detection. Furthermore, the PDD-ET model effectively distinguishes between multiple stages of PD and accurately categorizes the severity levels of patients affected by PD. The evaluation findings demonstrate that the PDD-ET model outperforms the SVR, CNN, Stacked LSTM, LSTM, GRU, Alex Net, [Decision Tree, RF, and SVR], Deep Neural Network, HOG, Quantum ReLU Activator, Improved KNN, Adaptive Boosting, RF, and Deep Learning Model techniques by the approximate margins of 37%, 30%, 20%, 27%, 25%, 18%, 19%, 27%, 25%, 23%, 45%, 40%, 42%, and 16%, respectively. Full article
(This article belongs to the Special Issue Trends in Electronics and Health Informatics)
Figures:
Figure 1: General flow diagram of the PDD-ET model.
Figure 2: Construction of the customized big dataset for the PDD-ET model.
Figure 3: System architecture of the PDD-ET model. Here, I_i and H_i denote the input and hidden layers of the proposed PDD-ET model.
Figure 4: Construction of the PDD-ET model.
Figure 5: PD detection of the proposed PDD-ET model.
Figure 6: Network loss of the proposed PDD-ET model.
Figure 7: Detection of healthy and PD-affected patients based on PD density of the PDD-ET model.
Figure 8: Detection of healthy and PD-affected patients based on spiral images of the PDD-ET model.
Figure 9: Status of PD detection of the PDD-ET model.
Figure 10: Spread status of Parkinson's disease using the PDD-ET model.
Figure 11: AUC curve of the proposed PDD-ET model.
19 pages, 3807 KiB  
Article
BGP Dataset-Based Malicious User Activity Detection Using Machine Learning
by Hansol Park, Kookjin Kim, Dongil Shin and Dongkyoo Shin
Information 2023, 14(9), 501; https://doi.org/10.3390/info14090501 - 13 Sep 2023
Cited by 1 | Viewed by 1969
Abstract
Recent advances in the Internet and digital technology have brought a wide variety of activities into cyberspace, but they have also brought a surge in cyberattacks, making it more important than ever to detect and prevent them. In this study, a method is proposed to detect anomalies in cyberspace by consolidating BGP (Border Gateway Protocol) data into numerical data that machine learning (ML) models can be trained on, using a tokenizer. BGP data comprise a mix of numeric and textual data, making them challenging for ML models to learn. To convert the data into a numerical format, a tokenizer, a preprocessing technique from Natural Language Processing (NLP), was employed. This process goes beyond merely replacing letters with numbers; its objective is to preserve the patterns and characteristics of the data. The Synthetic Minority Over-sampling Technique (SMOTE) was subsequently applied to address the issue of imbalanced data. Anomaly detection experiments were conducted using various ML algorithms, such as One-Class Support Vector Machine (One-SVM), Convolutional Neural Network–Long Short-Term Memory (CNN–LSTM), Random Forest (RF), and Autoencoder (AE), all of which demonstrated excellent detection performance. The AE model performed best, with an F1-score of 0.99, and in terms of the Area Under the Receiver Operating Characteristic (AUROC) curve, all ML models achieved good performance, averaging over 90%. This research is expected to contribute to improved cybersecurity, as it enables the detection and monitoring of cyber anomalies caused by malicious users through BGP data. Full article
(This article belongs to the Special Issue Intelligent Information Processing for Sensors and IoT Communications)
Show Figures

Figure 1. Random Forest Model (green as normal data and red as abnormal data).
Figure 2. CNN–LSTM structure.
Figure 3. Autoencoder structure.
Figure 4. Data collection process.
Figure 5. The process of classifying normal and abnormal data based on case analysis.
Figure 6. Reconstruction error value.
Figure 7. Preprocessing of BGP data.
Figure 8. Dataset ratio pie graphs.
Figure 9. Visualize dataset distributions with t-SNE.
Figure 10. AUROC curve results.
Figure 11. Confusion matrix results using AE.
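The tokenization step the abstract describes can be pictured with a minimal sketch (our illustration, not the paper's actual pipeline): textual BGP fields are mapped to integer IDs from a learned vocabulary while numeric fields pass through, so each record becomes a purely numeric vector. The field names and example records below are hypothetical.

```python
def build_vocab(records, text_fields):
    vocab = {}
    for rec in records:
        for f in text_fields:
            if rec[f] not in vocab:
                vocab[rec[f]] = len(vocab) + 1   # 0 is reserved for unseen tokens
    return vocab

def tokenize(rec, vocab, text_fields, num_fields):
    # Text fields become integer IDs; numeric fields are kept as numbers.
    return ([vocab.get(rec[f], 0) for f in text_fields]
            + [float(rec[f]) for f in num_fields])

records = [  # hypothetical BGP update records with mixed text/numeric fields
    {"origin": "IGP", "as_path": "65001 65002", "prefix_len": 24, "med": 0},
    {"origin": "EGP", "as_path": "65001 65003", "prefix_len": 22, "med": 10},
]
vocab = build_vocab(records, ["origin", "as_path"])
X = [tokenize(r, vocab, ["origin", "as_path"], ["prefix_len", "med"]) for r in records]
```

Unlike naively "replacing letters with numbers", identical textual values always map to the same ID, so repeated patterns in the data survive the conversion.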
18 pages, 741 KiB  
Article
Time-Series Neural Network: A High-Accuracy Time-Series Forecasting Method Based on Kernel Filter and Time Attention
by Lexin Zhang, Ruihan Wang, Zhuoyuan Li, Jiaxun Li, Yichen Ge, Shiyun Wa, Sirui Huang and Chunli Lv
Information 2023, 14(9), 500; https://doi.org/10.3390/info14090500 - 13 Sep 2023
Cited by 10 | Viewed by 13558
Abstract
This research introduces a novel high-accuracy time-series forecasting method, namely the Time Neural Network (TNN), which is based on a kernel filter and time attention mechanism. Taking into account the complex characteristics of time-series data, such as non-linearity, high dimensionality, and long-term dependence, the TNN model is designed and implemented. The key innovations of the TNN model lie in the incorporation of the time attention mechanism and kernel filter, allowing the model to allocate different weights to features at each time point, and extract high-level features from the time-series data, thereby improving the model’s predictive accuracy. Additionally, an adaptive weight generator is integrated into the model, enabling the model to automatically adjust weights based on input features. Mainstream time-series forecasting models such as Recurrent Neural Networks (RNNs) and Long Short-Term Memory Networks (LSTM) are employed as baseline models and comprehensive comparative experiments are conducted. The results indicate that the TNN model significantly outperforms the baseline models in both long-term and short-term prediction tasks. Specifically, the RMSE, MAE, and R2 reach 0.05, 0.23, and 0.95, respectively. Remarkably, even for complex time-series data that contain a large amount of noise, the TNN model still maintains a high prediction accuracy. Full article
(This article belongs to the Special Issue New Deep Learning Approach for Time Series Forecasting)
Show Figures

Figure 1. Illustration of the time-series neural network structure.
Figure 2. Illustration of time attention structure.
Figure 3. Ground truth and the predicted values by TNN.
Figure 4. Comparison of different filters on RNN, LSTM, Transformer [37], and ours. The orange line denotes the performance for different models with Kernel Filter, while the blue one is that without Kernel Filter.
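The time attention mechanism named in the abstract can be sketched in its generic form (a minimal reading, not the paper's exact TNN layer): score each time step, softmax the scores into weights, and form a weighted sum of the per-step feature vectors, so different time points receive different weights.

```python
import numpy as np

def time_attention(x, w):
    """x: (T, d) per-time-step features; w: (d,) learnable scoring vector."""
    scores = x @ w                          # one relevance score per time step
    weights = np.exp(scores - scores.max())
    weights = weights / weights.sum()       # softmax over the time axis
    return weights @ x, weights             # context (d,), attention weights (T,)

x = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
ctx, wts = time_attention(x, np.array([0.0, 0.0]))  # zero scores -> uniform weights
```

With a trained scoring vector, steps the model deems informative dominate the context vector; with zero scores, as here, every step is weighted equally.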
26 pages, 1592 KiB  
Article
FinChain-BERT: A High-Accuracy Automatic Fraud Detection Model Based on NLP Methods for Financial Scenarios
by Xinze Yang, Chunkai Zhang, Yizhi Sun, Kairui Pang, Luru Jing, Shiyun Wa and Chunli Lv
Information 2023, 14(9), 499; https://doi.org/10.3390/info14090499 - 12 Sep 2023
Cited by 4 | Viewed by 4939
Abstract
This research primarily explores the application of Natural Language Processing (NLP) technology in precision financial fraud detection, with a particular focus on the implementation and optimization of the FinChain-BERT model. Firstly, the FinChain-BERT model has been successfully employed for financial fraud detection tasks, improving the capability of handling complex financial text information through deep learning techniques. Secondly, novel attempts have been made in the selection of loss functions, with a comparison conducted between the negative log-likelihood function and the Keywords Loss Function. The results indicated that the Keywords Loss Function outperforms the negative log-likelihood function when applied to the FinChain-BERT model. Experimental results validated the efficacy of the FinChain-BERT model and its optimization measures. Whether in the selection of loss functions or the application of lightweight technology, the FinChain-BERT model demonstrated superior performance. The utilization of the Keywords Loss Function resulted in a model achieving 0.97 in terms of accuracy, recall, and precision. Simultaneously, the model size was successfully reduced to 43 MB through the application of integer distillation technology, which holds significant importance for environments with limited computational resources. In conclusion, this research makes a crucial contribution to the application of NLP in financial fraud detection and provides a useful reference for future studies. Full article
(This article belongs to the Special Issue Information Extraction and Language Discourse Processing)
Show Figures

Figure 1. Samples of the dataset used in this paper. The black parts are normal information, while the red parts contain primary fraud elements.
Figure 2. The dissection of input text into a chain structure.
Figure 3. Visualization of gradients in training. (a) Stable-Momentum Adam optimizer. (b) Adam optimizer. The red color means a dense gradient distribution, while the yellow parts represent an even and sparse gradient distribution.
Figure 4. Illustration of fraud focus filter. (a) The filter structure. (b,c) Visualizations of focus on keywords mode.
Figure 5. Illustration of the keywords loss function used in this paper.
Figure 6. Illustration of the knowledge distillation module.
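The knowledge distillation module (Figure 6) generally builds on the standard temperature-softened distillation loss; a textbook sketch follows (not necessarily the paper's exact variant): soften both teacher and student logits with a temperature T and penalize the student's cross-entropy against the softened teacher distribution.

```python
import math

def softmax(logits, T=1.0):
    exps = [math.exp(l / T) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(student_logits, teacher_logits, T=2.0):
    p_t = softmax(teacher_logits, T)         # softened teacher distribution
    p_s = softmax(student_logits, T)
    # Cross-entropy of the student against the softened teacher, scaled by T^2
    # so gradient magnitudes stay comparable across temperatures.
    return -sum(pt * math.log(ps) for pt, ps in zip(p_t, p_s)) * T * T

loss_same = distill_loss([2.0, 0.0], [2.0, 0.0])   # student matches teacher
loss_diff = distill_loss([0.0, 2.0], [2.0, 0.0])   # student disagrees
```

The loss is minimized when the student reproduces the teacher's softened output distribution, which is what lets a small student approximate a large BERT-style teacher.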
15 pages, 3160 KiB  
Article
A Novel Gamification Application for High School Student Examination and Assessment to Assist Student Engagement and to Stimulate Interest
by Anna Maria Gianni and Nikolaos Antoniadis
Information 2023, 14(9), 498; https://doi.org/10.3390/info14090498 - 10 Sep 2023
Cited by 1 | Viewed by 2017
Abstract
Formal education in high school focuses primarily on knowledge acquisition via traditional classroom teaching. Younger generations of students tend to lose interest and to disengage from the process. Gamification, the use of gaming elements in the training process to stimulate interest, has been used lately to battle this phenomenon. The use of an interactive environment and the employment of tools familiar to today’s students aim to bring the student closer to the learning process. Even though there have been several attempts to integrate gaming elements in the teaching process, few applications in the student assessment procedure have been reported so far. In this article, a new approach to student assessment is implemented using a gamified quiz as opposed to standard exam formats, where students are asked to answer questions on the material already taught, using various gaming elements (leaderboards, rewards at different levels, etc.). The results show that students are much more interested in this interactive process and would like to see this kind of performance assessment more often in their everyday activity in school. The participants are also motivated to learn more about the subject of the course and are generally satisfied with this novel approach compared to standard forms of exams. Full article
Show Figures

Figure 1. The functionality of the application in the form of a flow diagram.
Figure 2. Registration information for user and main menu of the application.
Figure 3. Sample screen of a typical quiz question.
Figure 4. Level of satisfaction with the quiz.
Figure 5. Factors of improvement for students.
Figure 6. Characterization of learning procedure in comparison to standard methods.
Figure 7. Degree of interest in similar exams in the future.
Figure 8. Extent of use of various help items.
15 pages, 393 KiB  
Review
An Analytical Review of the Source Code Models for Exploit Analysis
by Elena Fedorchenko, Evgenia Novikova, Andrey Fedorchenko and Sergei Verevkin
Information 2023, 14(9), 497; https://doi.org/10.3390/info14090497 - 8 Sep 2023
Cited by 1 | Viewed by 2004
Abstract
Currently, enhancing the efficiency of vulnerability detection and assessment remains relevant. We investigate a new approach for the detection of vulnerabilities that can be used in cyber attacks and assess their severity for further effective responses based on an analysis of exploit source codes and real-time detection of features of their implementation. The key element of this approach is an exploit source code model. In this paper, to specify the model, we systematically analyze existing source code models, approaches to source code analysis in general, and exploits in particular in order to examine their advantages, applications, and challenges. Finally, we provide an initial specification of the proposed source code model. Full article
(This article belongs to the Section Review)
Show Figures

Figure 1. Scheme of the selection process for the research papers, considering the source code analysis for intrusion detection and assessment. All numbers are as of June 2023.
Figure 2. ACID sub-tree for an assignment retrieved from [12].
Figure 3. Hilbert space-filling curve traversal (a) and mapping data through the space-filling curve (b), retrieved from [34].
Figure 4. Taxonomy of program code models.
Figure 5. An example of the proposed model for the exploit's source code.
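For readers unfamiliar with the source code models such surveys cover, one of the most common is the abstract syntax tree (AST); Python's standard library can build one for a small snippet (illustrative only, unrelated to the authors' tooling). An assignment statement like the one behind Figure 2 parses into an `Assign` node whose value is a `BinOp` sub-tree.

```python
import ast

# Parse one assignment statement into an AST and inspect its node kinds.
tree = ast.parse("x = a + 1")
assign = tree.body[0]                 # the assignment statement
node_kinds = [type(assign).__name__, type(assign.value).__name__]
```

Models like these give exploit analysis a structured, syntax-aware view of code that token sequences alone cannot provide.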
22 pages, 612 KiB  
Article
A Study on Influential Features for Predicting Best Answers in Community Question-Answering Forums
by Valeria Zoratto, Daniela Godoy and Gabriela N. Aranda
Information 2023, 14(9), 496; https://doi.org/10.3390/info14090496 - 7 Sep 2023
Viewed by 1235
Abstract
The knowledge provided by user communities in question-answering (QA) forums is a highly valuable source of information for satisfying user information needs. However, finding the best answer for a posted question can be challenging. User-generated content in forums can be of unequal quality given the free nature of natural language and the varied levels of user expertise. Answers to a question posted in a forum are compiled in a discussion thread, which also concentrates subsequent activity such as comments and votes. There are usually multiple reasons why an answer successfully fulfills a certain information need and gets accepted as the best answer among a (possibly) high number of answers. In this work, we study the influence that different aspects of answers have on the prediction of the best answers in a QA forum. We collected the discussion threads of a real-world forum concerning computer programming, and we evaluated different features for representing the answers and the context in which they appear in a thread. Multiple classification models were used to compare the performance of the different features, finding that readability is one of the most important factors for detecting the best answers. The goal of this study is to shed some light on the reasons why answers are more likely to receive more votes and be selected as the best answer for a posted question. Such knowledge enables users to enhance their answers, which leads, in turn, to an improvement in the overall quality of the content produced on a platform. Full article
Show Figures

Figure 1. An example of a question on Stack Overflow.
Figure 2. Learning performance for different combinations of feature sets using NB.
Figure 3. Learning performance for different combinations of feature sets using LR.
Figure 4. Learning performance for different combinations of feature sets using RF starting with Lng features.
Figure 5. Learning performance for different combinations of feature sets using RF starting with Rea features.
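Since readability emerges as one of the most influential factors, here is a sketch of the kind of surface readability features such studies compute from answer text (our illustrative choices; the paper's exact feature set may differ and real readability indices are more elaborate):

```python
def readability_features(text):
    # Crude sentence split on terminal punctuation; production feature
    # extractors would use a proper tokenizer and readability indices.
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    words = text.split()
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
    }

feats = readability_features("Use a list. It is simple.")
```

Features like these can then be fed, alongside content and context features, to classifiers such as the NB, LR, and RF models compared in the figures above.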
25 pages, 1910 KiB  
Review
A Literature Survey on Word Sense Disambiguation for the Hindi Language
by Vinto Gujjar, Neeru Mago, Raj Kumari, Shrikant Patel, Nalini Chintalapudi and Gopi Battineni
Information 2023, 14(9), 495; https://doi.org/10.3390/info14090495 - 7 Sep 2023
Cited by 5 | Viewed by 2153
Abstract
Word sense disambiguation (WSD) is a process used to determine the most appropriate meaning of a word in a given contextual framework, particularly when the word is ambiguous. While WSD has been extensively studied for English, it remains a challenging problem for resource-scarce languages such as Hindi. Therefore, it is crucial to address ambiguity in Hindi to effectively and efficiently utilize it on the web for various applications such as machine translation, information retrieval, etc. The rich linguistic structure of Hindi, characterized by complex morphological variations and syntactic nuances, presents unique challenges in accurately determining the intended sense of a word within a given context. This review paper presents an overview of different approaches employed to resolve the ambiguity of Hindi words, including supervised, unsupervised, and knowledge-based methods. Additionally, the paper discusses applications, identifies open problems, presents conclusions, and suggests future research directions. Full article
(This article belongs to the Special Issue Computational Linguistics and Natural Language Processing)
Show Figures

Figure 1. Conceptual Diagram of WSD.
Figure 2. Classification of WSD Approaches.
Figure 3. Decision Tree Example.
Figure 4. Illustrating SVM Classification.
Figure 5. Ensemble Methods: Combining the Strengths of Multiple Models.
Figure 6. Flowchart of WSD Execution Process.
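Among the knowledge-based methods such surveys cover, the simplified Lesk algorithm is the classic baseline: pick the sense whose dictionary gloss shares the most words with the surrounding context. A sketch with invented English glosses follows (for Hindi, a resource such as Hindi WordNet would supply the real sense inventory):

```python
def simplified_lesk(context, sense_glosses):
    # Choose the sense whose gloss has the largest word overlap with the context.
    ctx = set(context.lower().split())
    best, best_overlap = None, -1
    for sense, gloss in sense_glosses.items():
        overlap = len(ctx & set(gloss.lower().split()))
        if overlap > best_overlap:
            best, best_overlap = sense, overlap
    return best

glosses = {  # invented glosses for illustration
    "bank_river": "sloping land beside a body of water",
    "bank_finance": "institution that accepts deposits and lends money",
}
sense = simplified_lesk("he deposited money at the bank", glosses)
```

Its weakness — zero overlap when context and gloss use different wording — is one reason the supervised and hybrid approaches the review surveys were developed.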
12 pages, 782 KiB  
Article
Extreme Learning Machine-Enabled Coding Unit Partitioning Algorithm for Versatile Video Coding
by Xiantao Jiang, Mo Xiang, Jiayuan Jin and Tian Song
Information 2023, 14(9), 494; https://doi.org/10.3390/info14090494 - 7 Sep 2023
Viewed by 1065
Abstract
The versatile video coding (VVC) standard offers improved coding efficiency compared to the high efficiency video coding (HEVC) standard in multimedia signal coding. However, this increased efficiency comes at the cost of increased coding complexity. This work proposes an efficient coding unit partitioning algorithm based on an extreme learning machine (ELM), which can reduce the coding complexity while ensuring coding efficiency. Firstly, the coding unit size decision is modeled as a classification problem. Secondly, an ELM classifier is trained to predict the coding unit size. In the experiment, the proposed approach is verified based on the VVC reference model. The results show that the proposed method can reduce coding complexity significantly, and good image quality can be obtained. Full article
Show Figures

Figure 1. CU partitioning modes.
Figure 2. ELM structure.
Figure 3. ELM-enabled CU partitioning algorithm.
Figure 4. The ELM-based machine learning structure.
Figure 5. R-D curve. (a) BasketballDrive (RA); (b) BasketballDrill (RA).
Figure 6. MS-SSIM curve. (a) BasketballDrive (RA); (b) BasketballDrill (RA).
Figure 7. Visual comparison for BasketballDrive sequence. (a) Original image; (b) reconstructed image.
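The extreme learning machine itself is simple to sketch (the generic technique, not the paper's trained CU-size classifier): a random, never-trained hidden layer produces nonlinear features, and only the output weights are solved in closed form by least squares, which is why ELM training is fast enough for encoder-side decisions. The toy task below is a hypothetical stand-in for the CU size decision.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, hidden=20):
    W = rng.normal(size=(X.shape[1], hidden))   # random input weights, never trained
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                      # random nonlinear hidden features
    beta = np.linalg.pinv(H) @ y                # closed-form least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy stand-in classification task: label is 1 when the first feature exceeds 0.5.
X = rng.random((200, 2))
y = (X[:, 0] > 0.5).astype(float)
W, b, beta = elm_fit(X, y)
acc = ((elm_predict(X, W, b, beta) > 0.5) == y).mean()
```

Because fitting reduces to one pseudoinverse, retraining the classifier per sequence or per configuration is cheap compared with backpropagation-based models.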
12 pages, 818 KiB  
Article
Availability of Physical Activity Tracking Data from Wearable Devices for Glaucoma Patients
by Sonali B. Bhanvadia, Leo Meller, Kian Madjedi, Robert N. Weinreb and Sally L. Baxter
Information 2023, 14(9), 493; https://doi.org/10.3390/info14090493 - 7 Sep 2023
Viewed by 1402
Abstract
Physical activity has been found to potentially modulate glaucoma risk, but the evidence remains inconclusive. The increasing use of wearable physical activity trackers may provide longitudinal and granular data suitable to address this issue, but little is known regarding the characteristics and availability of these data sources. We performed a scoping review and query of data sources on the availability of wearable physical activity data for glaucoma patients. Literature databases (PubMed and MEDLINE) were reviewed with search terms consisting of those related to physical activity trackers and those related to glaucoma, and we evaluated results at the intersection of these two groups. Biomedical databases were also reviewed, for which we completed database queries. We identified eight data sources containing physical activity tracking data for glaucoma, with two being large national databases (UK BioBank and All of Us) and six from individual journal articles providing participant-level information. The number of glaucoma patients with physical activity tracking data available, types of glaucoma-related data, fitness devices utilized, and diversity of participants varied across all sources. Overall, there were limited analyses of these data, suggesting the need for additional research to further investigate how physical activity may alter glaucoma risk. Full article
Show Figures

Figure 1. Search strategy utilized in scoping review. Search terms of concept set 1 and concept set 2 are reported; the intersection of both represents the datasets of interest. POAG: primary open-angle glaucoma. IOP: intraocular pressure.
Figure 2. Flowchart for article and database review. A total of 72 articles were screened and 6 articles were included, while 4 databases were examined and 2 were included.
26 pages, 2354 KiB  
Article
Effects of Generative Chatbots in Higher Education
by Galina Ilieva, Tania Yankova, Stanislava Klisarova-Belcheva, Angel Dimitrov, Marin Bratkov and Delian Angelov
Information 2023, 14(9), 492; https://doi.org/10.3390/info14090492 - 7 Sep 2023
Cited by 27 | Viewed by 12314
Abstract
Learning technologies often do not meet the university requirements for learner engagement via interactivity and real-time feedback. In addition to the challenge of providing personalized learning experiences for students, these technologies can increase the workload of instructors due to the maintenance and updates required to keep the courses up-to-date. Intelligent chatbots based on generative artificial intelligence (AI) technology can help overcome these disadvantages by transforming pedagogical activities and guiding both students and instructors interactively. In this study, we explore and compare the main characteristics of existing educational chatbots. Then, we propose a new theoretical framework for blended learning with intelligent chatbots integration enabling students to interact online and instructors to create and manage their courses using generative AI tools. The advantages of the proposed framework are as follows: (1) it provides a comprehensive understanding of the transformative potential of AI chatbots in education and facilitates their effective implementation; (2) it offers a holistic methodology to enhance the overall educational experience; and (3) it unifies the applications of intelligent chatbots in teaching–learning activities within universities. Full article
(This article belongs to the Special Issue Feature Papers in Information in 2023)
Show Figures

Figure 1. Framework for intelligent chatbot application in higher educational context. Note: The symbol θ represents the threshold (minimum passing grade) for the midterm and final exams, which is predetermined in the course syllabus and typically falls within the range of 59% to 69%.
Figure 2. The matrix of distances (dissimilarity matrix) between students' answers.
Figure 3. Students' clusters by k-means (k = 2, 3, 4, 5).
19 pages, 1366 KiB  
Article
Fortified-Grid: Fortifying Smart Grids through the Integration of the Trusted Platform Module in Internet of Things Devices
by Giriraj Sharma, Amit M. Joshi and Saraju P. Mohanty
Information 2023, 14(9), 491; https://doi.org/10.3390/info14090491 - 6 Sep 2023
Cited by 1 | Viewed by 2011
Abstract
This paper presents a hardware-assisted security primitive that integrates the Trusted Platform Module (TPM) into IoT devices for authentication in smart grids. Data and device security play a pivotal role in smart grids, since the grids are vulnerable to various attacks that could risk grid failure. The proposed Fortified-Grid security primitive provides an innovative solution, leveraging the TPM for attestation coupled with standard X.509 certificates. This methodology serves a dual purpose: ensuring the authenticity of IoT devices and upholding software integrity, an indispensable foundation for any resilient smart grid security system. The TPM is a hardware security module that can generate keys and store them with encryption so that they cannot be compromised. Formal security verification has been performed using the real-or-random (ROR) oracle model and the widely accepted AVISPA simulation tool, while informal security verification uses the Dolev–Yao (DY) and Canetti–Krawczyk (CK) adversary models. Fortified-Grid helps to validate the attested state of IoT devices with a minimal network overhead of 1984 bits. Full article
(This article belongs to the Special Issue Recent Advances in IoT and Cyber/Physical System)
Show Figures

Figure 1. System-level overview of Fortified-Grid.
Figure 2. Four-layer IoT-aided smart grid network.
Figure 3. Smart grid IoT certificate.
Figure 4. TPM-enabled IoT smart grid network.
Figure 5. Attestation of SG-IoT devices.
Figure 6. AVISPA OFMC and CL-AtSe.
Figure 7. Comparison of smart grid IoT overhead [18,19,22,25].
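The TPM attestation the paper builds on rests on the standard PCR "extend" operation: a Platform Configuration Register is never written directly, only updated as PCR_new = H(PCR_old || measurement), so its final value commits to the entire ordered measurement log. A minimal sketch of that mechanism follows (standard TPM semantics; the paper's protocol adds X.509 certificates and the verified key exchange on top, and the boot components here are hypothetical):

```python
import hashlib

def pcr_extend(pcr, measurement):
    # A PCR can only be extended, never overwritten.
    return hashlib.sha256(pcr + measurement).digest()

boot_log = [b"bootloader", b"kernel", b"app"]   # hypothetical measured components

pcr = b"\x00" * 32                              # PCRs start zeroed at boot
for component in boot_log:
    pcr = pcr_extend(pcr, hashlib.sha256(component).digest())

# A verifier replaying the same measurement log reaches the same value.
expected = b"\x00" * 32
for component in boot_log:
    expected = pcr_extend(expected, hashlib.sha256(component).digest())
```

Any change to a component, or to the order of measurements, yields a different final PCR value, which is what lets a remote verifier detect tampered device software.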
12 pages, 800 KiB  
Article
Effects of Contractual Governance on IT Project Performance under the Mediating Role of Project Management Risk: An Emerging Market Context
by Ayesha Saddiqa, Muhammad Usman Shehzad and Muhammad Mohiuddin
Information 2023, 14(9), 490; https://doi.org/10.3390/info14090490 - 5 Sep 2023
Viewed by 1606
Abstract
In this study, we explore the impact of contractual governance (CG) on project performance (PP) under the mediation of project management risk (PMR). Contractual governance favorably influences IT project performance in an emerging-market context where the IT sector is growing. Principal–agent theory is used to build a research model that links project governance and IT project risk management. Data were collected from 295 IT professionals, with a response rate of 73.75%. SmartPLS was employed to test the proposed relationships. The findings postulate a strong causal relationship between CG, PP, and PMR. Fundamental elements (FE), change elements (CE), and governance elements (GE) have a significant positive relationship with project management risk (PMR), and PMR positively affects PP. Additionally, PMR mediates the relationship of FE, CE, and GE with PP. Overall, the results of the study provide pragmatic insights for IT industry practitioners and experts, as unscheduled risk may bring enormous harm to the IT industry. Consequently, effective, well-structured governance applied strategically tends to improve project performance by monitoring and managing both project risk and quality. In addition, the study empirically supports the significant impacts of the project governance dimensions, i.e., fundamental elements, change elements, and governance elements, on project management risk and project performance. It also guides researchers and adds value to the project-performance literature by filling this gap. Full article
(This article belongs to the Section Information and Communications Technology)
Show Figures

Figure 1. Research Model.
Figure 2. Path Model. Note(s): T values presented in parentheses.
19 pages, 4732 KiB  
Article
Interpreting Disentangled Representations of Person-Specific Convolutional Variational Autoencoders of Spatially Preserving EEG Topographic Maps via Clustering and Visual Plausibility
by Taufique Ahmed and Luca Longo
Information 2023, 14(9), 489; https://doi.org/10.3390/info14090489 - 4 Sep 2023
Cited by 2 | Viewed by 1616
Abstract
Dimensionality reduction and producing simple representations of electroencephalography (EEG) signals are challenging problems. Variational autoencoders (VAEs) have been employed for EEG data creation, augmentation, and automatic feature extraction. In most studies, VAE latent space interpretation is used only to detect out-of-distribution latent variables for anomaly detection. However, the interpretation and visualisation of all latent space components disclose information about how the model arrives at its conclusion. The main contribution of this study is interpreting the disentangled representation of the VAE by activating only one latent component at a time, while the values for the remaining components are set to zero, because that is the mean of the distribution. The results show that the CNN-VAE works well, as indicated by metrics such as SSIM, MSE, MAE, and MAPE, along with SNR and correlation coefficient values throughout the architecture's input and output. Furthermore, visual plausibility and clustering demonstrate that each component contributes differently to capturing the generative factors in topographic maps. Our proposed pipeline adds to the body of knowledge by delivering a CNN-VAE-based latent space interpretation model. This helps us learn the model's decisions and the importance of each latent space component responsible for activating parts of the brain. Full article
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence)
Show Figures

Figure 1. The structure of a variational autoencoder (VAE) leverages convolutional methods on input data that maps these data into the parameters of a probability distribution, such as the mean and the variance of a Gaussian distribution.
Figure 2. A pipeline for spatially preserving EEG topographic map generation and interpreting the latent space of CNN-VAE via clustering and visual plausibility. (A) The DEAP dataset was used to build a CNN-VAE from EEG signals. (B) EEG topographic head maps of size 40 × 40 generation. (C) A CNN-VAE model is learnt for a variable-by-variable interpretation of the latent space. (D) Clustering for visualising the learnt pattern from each active latent component. (E) Reconstruction of the signals from 32 electrode coordinate values of EEG topographic maps. (F) Evaluation of the model for reconstructed topographic maps as well as the signal.
Figure 3. An example of average Euclidean distance computed from the channel index of topographic maps ranging in size from 26 to 64.
Figure 4. Distribution of the four latent spaces when one latent component is active at a time.
Figure 5. Signal from the T7 and P7 channels, as well as their correlation values with the original data's channel values.
Figure 6. SNR for each channel of the original and reconstructed test data.
Figure 7. Randomly selected 10 samples of actual and reconstructed topo maps with active components 0 and 1 of the latent space.
Figure 8. Cluster analysis on reconstructed test EEG topo maps generated from each latent space active component.
Figure 9. Correlation values computed between original and reconstructed signals generated with latent space component 0.
Figure 10. 10–20 system of electrode placement in EEG topographic maps of size 40 × 40.
Figure A1. Distribution of all latent spaces when one latent component is active at a time.
Figure A2. Correlation values between the original and reconstructed signal generated from each latent active component, grouped with all channels.
Figure A3. Correlation values between the original and reconstructed signal for each channel, grouped with all latent components.
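The study's core interpretation move, activating one latent component at a time while holding the others at the prior mean of zero, can be sketched as follows (the decoder here is a hypothetical stand-in linear map, not the paper's CNN decoder):

```python
import numpy as np

latent_dim = 4
rng = np.random.default_rng(1)
decoder_weights = rng.normal(size=(latent_dim, 8))  # stand-in linear "decoder"

def decode(z):
    return z @ decoder_weights

traversals = []
for i in range(latent_dim):
    z = np.zeros(latent_dim)   # other components held at the prior mean (zero)
    z[i] = 1.0                 # activate only component i
    traversals.append(decode(z))
```

With a real CNN-VAE decoder, each `decode(z)` would be a topographic map generated by a single latent component, which is what the clustering and visual-plausibility analyses then compare.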