Information, Volume 15, Issue 2 (February 2024) – 54 articles

Cover Story: The traditional standards for company classification are based on time-consuming, effort-intensive, and vendor-specific assignments by domain experts, leading to issues with accuracy, cost, standardization, and adaptability to market dynamics. Addressing these issues requires a shift towards automated, standardized, and continuously updated classification approaches. NLP-based methods can revolutionize company classification and offer reduced costs, simplified processes, and decreased reliance on manual labor. This solution can benefit various industries, including financial research, business intelligence, and investing, by providing a more efficient and cost-effective way of categorizing companies while streamlining the decision-making processes in a rapidly changing industry landscape.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; the PDF is the official version of record. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
13 pages, 1546 KiB  
Article
Shape Matters: Detecting Vertebral Fractures Using Differentiable Point-Based Shape Decoding
by Hellena Hempe, Alexander Bigalke and Mattias Paul Heinrich
Information 2024, 15(2), 120; https://doi.org/10.3390/info15020120 - 19 Feb 2024
Viewed by 1604
Abstract
Background: Degenerative spinal pathologies are highly prevalent among the elderly population. Timely diagnosis of osteoporotic fractures and other degenerative deformities enables proactive measures to mitigate the risk of severe back pain and disability. Methods: We explore the use of shape auto-encoders for vertebrae, advancing the state of the art through robust automatic segmentation models trained without fracture labels and recent geometric deep learning techniques. Our shape auto-encoders are pre-trained on a large set of vertebrae surface patches. This pre-training step addresses the label scarcity problem faced when learning the shape information of vertebrae for fracture detection from image intensities directly. We further propose a novel shape decoder architecture: the point-based shape decoder. Results: Employing segmentation masks that were generated using the TotalSegmentator, our proposed method achieves an AUC of 0.901 on the VerSe19 test set. This outperforms image-based and surface-based end-to-end trained models. Our results demonstrate that pre-training the models in an unsupervised manner enhances geometric methods like PointNet and DGCNN. Conclusion: Our findings emphasize the advantages of explicitly learning shape features for diagnosing osteoporotic vertebrae fractures. This approach improves the reliability of classification results and reduces the need for annotated labels. Full article
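The reported AUC can be computed directly from classifier scores as the probability that a randomly chosen fractured case outranks a randomly chosen healthy one. A minimal, library-free sketch (the scores and labels below are illustrative, not data from the paper):

```python
def auc(scores, labels):
    """Rank-based AUC: fraction of (positive, negative) pairs in which the
    positive case receives the higher score; ties count as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# illustrative scores for four vertebrae (1 = fractured, 0 = healthy)
print(auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # perfect ranking -> 1.0
```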
Figure 1
<p>To determine the most suitable architecture for our task, we employ combinations of several encoder–decoder architectures including traditional convolutional methods and geometric methods. The AEs are trained to reconstruct either a point cloud representation or a volumetric surface representation of vertebrae, which are derived from the previously computed segmentation mask. As <b>Shape Encoder</b> (<b>A</b>), we employ a convolutional method, as well as a point-based and a graph-based method to predict the embedding <math display="inline"><semantics> <mi mathvariant="bold-italic">z</mi> </semantics></math>. As <b>Shape Decoder</b> (<b>B</b>), we employ a convolutional method as well as a point-based method and propose a novel point-based shape decoder. The <b>Shape Classifier</b> (<b>C</b>) is then trained separately on the embedding <math display="inline"><semantics> <mi mathvariant="bold-italic">z</mi> </semantics></math> for each encoder–decoder combination using the same multilayer perceptron (MLP) model. Note that only the weights of the MLP are trained in a supervised manner, whereas the weights of the encoder are fixed.</p>
Figure 2
<p><b>Point-based shape decoder:</b> From the embedding vector <math display="inline"><semantics> <mi mathvariant="bold-italic">z</mi> </semantics></math>, a point representation of <span class="html-italic">N</span> key points is computed using an MLP. The layers each consist of a 1D convolution with the channel size denoted by white font within the blocks, InstanceNorm and ReLU. The number on top of the blocks denotes the size of the dimensionality of the point cloud. Afterwards, a differentiable sampling operation is applied on the key points to obtain a volumetric representation. This step requires <span class="html-italic">N</span> additional parameters <math display="inline"><semantics> <mi mathvariant="bold-italic">y</mi> </semantics></math>.</p>
Figure 3
<p>ROC curve and corresponding AUC for encoder–decoder combinations of the median AUC of 10 seeds. The encoders are grouped by color and line style, whereas the decoders are grouped by color and marker. The corresponding area under curve (AUC) is listed inside the legend.</p>
Figure 4
<p>Results of our data-hold-out experiment as boxplots and scatterplots of the AUC obtained for 10 random seeds each. The plots are separated by the employed encoder architecture and provide the classification results obtained with the respective decoder. <b>Top-Left:</b> Convolutional-encoder models. <b>Top-Right:</b> Point-encoder models. <b>Bottom-Left:</b> Graph-encoder models. <b>Bottom-Right:</b> End-to-end trained models, including the traditional CNN trained on image intensities and on the vertebra surface (denoted as Conv-encoder img and surf).</p>
22 pages, 1525 KiB  
Article
Information Systems Strategy for Multi-National Corporations: Towards an Operational Model and Action List
by Martin Wynn and Christian Weber
Information 2024, 15(2), 119; https://doi.org/10.3390/info15020119 - 18 Feb 2024
Viewed by 2692
Abstract
The development and implementation of information systems strategy in multi-national corporations (MNCs) faces particular challenges—cultural differences and variations in work values and practices across different countries, numerous technology landscapes and legacy issues, language and accounting particularities, and differing business models. This article builds upon the existing literature and in-depth interviews with eighteen industry practitioners employed in six MNCs to construct an operational model to address these challenges. The research design is based on an inductive, qualitative approach that develops an initial conceptual framework—derived from the literature—into an operational model, which is then applied and refined in a case study company. The final model consists of change components and process phases. Six change components are identified that drive and underpin IS strategy—business strategy, systems projects, technology infrastructure, process change, skills and competencies, and costs and benefits. Five core process phases are recognized—review, align, engage, execute, and control. The model is based on the interaction between these two dimensions—change components and process phases—and an action list is also developed to support the application of the model, which contributes to the theory and practice of information systems deployment in MNCs. Full article
(This article belongs to the Special Issue Feature Papers in Information in 2023)
Figure 1
<p>Research method: the three phases of research.</p>
Figure 2
<p>Vaidya’s IPCRC model for technology strategy development in MNCs. Adapted from Vaidya [<a href="#B25-information-15-00119" class="html-bibr">25</a>] (p. 12).</p>
Figure 3
<p>Questionnaire used in phase 2 of the research.</p>
22 pages, 6718 KiB  
Article
Formal Security Analysis of ISA100.11a Standard Protocol Based on Colored Petri Net Tool
by Tao Feng, Taining Chen and Xiang Gong
Information 2024, 15(2), 118; https://doi.org/10.3390/info15020118 - 18 Feb 2024
Viewed by 1644
Abstract
This paper presents a formal security analysis of the ISA100.11a standard protocol using the Colored Petri Net (CPN) modeling approach. Firstly, we establish a security threat model for the ISA100.11a protocol and provide a detailed description and analysis of the identified security threats. Secondly, we use the CPN tool to model the protocol formally and conduct model checking and security analysis. Finally, we analyze and discuss the results of the model checking, which demonstrate that the ISA100.11a standard protocol may have vulnerabilities when certain security threats exist, and provide some suggestions to enhance the security of the protocol. This research provides a certain level of security assurance for the ISA100.11a standard protocol and serves as a reference for similar security research on protocols. Full article
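The paper's analysis is carried out with CPN Tools; as a rough illustration of the underlying formalism, a colored Petri net marking is a multiset of (place, color) tokens, and a transition fires only when all its input tokens are available. A generic sketch of this firing rule (the place and token names are hypothetical, not the authors' model):

```python
from collections import Counter

class ColoredPetriNet:
    """Minimal colored Petri net: the marking is a multiset of (place, color) tokens."""
    def __init__(self):
        self.marking = Counter()

    def add(self, place, color, n=1):
        self.marking[(place, color)] += n

    def fire(self, consumes, produces):
        """Fire a transition if enabled; `consumes` and `produces` are lists
        of (place, color) pairs. Returns True iff the transition fired."""
        need = Counter(consumes)
        if any(self.marking[tok] < n for tok, n in need.items()):
            return False  # transition not enabled
        self.marking -= need
        self.marking += Counter(produces)
        return True

net = ColoredPetriNet()
net.add("Device", "join_request")
# a "send" transition moves the request token to the security manager's place
fired = net.fire([("Device", "join_request")], [("SecurityManager", "join_request")])
```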
Figure 1
<p>Protocol security for data processing.</p>
Figure 2
<p>Data flow diagram.</p>
Figure 3
<p>Top-level model of the protocol.</p>
Figure 4
<p>Middle-level model of the protocol.</p>
Figure 5
<p>Internal model of the alternative transition “Connection”.</p>
Figure 6
<p>Internal model of the alternative transition “SecurityManager” (1).</p>
Figure 7
<p>Internal model of the alternative transition “Security_commit”.</p>
Figure 8
<p>Internal model of the alternative transition “SecurityManager” (2).</p>
Figure 9
<p>The original protocol attacker model (a).</p>
Figure 10
<p>The original protocol attacker model (b).</p>
Figure 11
<p>The original protocol attacker model (c).</p>
Figure 12
<p>The middle model of the new scheme.</p>
Figure 13
<p>Alternative transition “Connection” internal model.</p>
Figure 14
<p>Alternative transition “Center” internal model.</p>
Figure 15
<p>“SecurityManager” transition “Center” internal model.</p>
Figure 16
<p>The improved attacker models of the protocol.</p>
23 pages, 1045 KiB  
Review
Strategic Approaches to Cybersecurity Learning: A Study of Educational Models and Outcomes
by Madhav Mukherjee, Ngoc Thuy Le, Yang-Wai Chow and Willy Susilo
Information 2024, 15(2), 117; https://doi.org/10.3390/info15020117 - 18 Feb 2024
Cited by 1 | Viewed by 5847
Abstract
As the demand for cybersecurity experts in the industry grows, we face a widening shortage of skilled professionals. This pressing concern has spurred extensive research within academia and national bodies, who are striving to bridge this skills gap through refined educational frameworks, including the integration of innovative information applications like remote laboratories and virtual classrooms. Despite these initiatives, current higher education models for cybersecurity, while effective in some areas, fail to provide a holistic solution to the root causes of the skills gap. Our study conducts a thorough examination of established cybersecurity educational frameworks, with the goal of identifying crucial learning outcomes that can mitigate the factors contributing to this skills gap. Furthermore, we analyze six different educational models, each of which can uniquely leverage technology such as virtual classrooms and online platforms and is suited to particular learning contexts, and we group these contexts into four distinct categories. This categorization introduces a holistic dimension of context awareness enriched by digital learning tools into the process, enhancing the alignment with desired learning outcomes, a consideration sparsely addressed in the existing literature. This thorough analysis further strengthens the framework for guiding education providers in selecting models that most effectively align with their targeted learning outcomes and implies practical uses for technologically enhanced environments. This review presents a roadmap for educators and institutions, offering insights into relevant teaching models, including the opportunities for the utilization of remote laboratories and virtual classrooms, and their contextual applications, thereby aiding curriculum designers in making strategic decisions. Full article
Figure 1
<p>Phase-wise methodical approach.</p>
Figure 2
<p>Core components of the NICE framework.</p>
Figure 3
<p>Core features of ECSF.</p>
Figure 4
<p>CSEC2017 thought model.</p>
Figure 5
<p>CSEC2017 curricula design hierarchy.</p>
Figure 6
<p>Course design roadmap.</p>
14 pages, 519 KiB  
Review
Factors Affecting the Formation of False Health Information and the Role of Social Media Literacy in Reducing Its Effects
by Kevin K. W. Ho and Shaoyu Ye
Information 2024, 15(2), 116; https://doi.org/10.3390/info15020116 - 17 Feb 2024
Cited by 1 | Viewed by 3127
Abstract
The COVID-19 pandemic heightened concerns about health and safety, leading people to seek information to protect themselves from infection. Even before the pandemic, false health information was spreading on social media. We conducted a review of recent literature in health and social sciences and proposed a theoretical model to understand the factors influencing the spread of false health information. Our focus was on how false health information circulated before and during the pandemic, impacting people’s perceptions of believing information on social media. We identified four possible strategies to counteract the negative effects of false health information: prebunking, refuting, legislation, and media literacy. We argue that improving people’s social media literacy skills is among the most effective ways to address this issue. Our findings provide a basis for future research and the development of policies to minimize the impact of false health information on society. Full article
Figure 1
<p>Theoretical Model.</p>
35 pages, 4771 KiB  
Article
Leveraging Artificial Intelligence and Participatory Modeling to Support Paradigm Shifts in Public Health: An Application to Obesity and Evidence-Based Policymaking
by Philippe J. Giabbanelli and Grace MacEwan
Information 2024, 15(2), 115; https://doi.org/10.3390/info15020115 - 16 Feb 2024
Cited by 2 | Viewed by 2004
Abstract
The Provincial Health Services Authority (PHSA) of British Columbia suggested that a paradigm shift from weight to well-being could address the unintended consequences of focusing on obesity and improve the outcomes of efforts to address the challenges facing both individuals and our healthcare system. In this paper, we jointly used artificial intelligence (AI) and participatory modeling to examine the possible consequences of this paradigm shift. Specifically, we created a conceptual map with 19 experts to understand how obesity and physical and mental well-being connect to each other and other factors. Three analyses were performed. First, we analyzed the factors that directly connect to obesity and well-being, both in terms of causes and consequences. Second, we created a reduced version of the map and examined the connections between categories of factors (e.g., food production, and physiology). Third, we explored the themes in the interviews when discussing either well-being or obesity. Our results show that obesity was viewed from a medical perspective as a problem, whereas well-being led to broad and diverse solution-oriented themes. In particular, we found that taking a well-being perspective can be more comprehensive without losing the relevance of the physiological aspects that an obesity-centric perspective focuses on. Full article
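The reduced-map analysis ranks categories of factors by their number of interactions, which on a directed concept map is simply a count over incoming and outgoing edges. A small sketch with made-up category names (not the study's data):

```python
from collections import Counter

# hypothetical directed edges between factor categories in a concept map
edges = [
    ("food production", "physiology"),
    ("physiology", "obesity"),
    ("obesity", "well-being"),
    ("well-being", "physiology"),
]

degree = Counter()
for src, dst in edges:
    degree[src] += 1   # out-degree contribution
    degree[dst] += 1   # in-degree contribution

# categories ranked by total degree, as used to size nodes in a reduced map
ranking = [c for c, _ in degree.most_common()]
```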
(This article belongs to the Special Issue 2nd Edition of Data Science for Health Services)
Figure 1
<p>The four main stages of our process (objective and scope, stakeholder selection, knowledge generation, and the final map) each involve a set of steps and sub-steps. The PM literature also outlines methodological choices based on the four P’s [<a href="#B54-information-15-00115" class="html-bibr">54</a>] (purpose, partnership, process, product), which partly align with these stages.</p>
Figure 2
<p>Analysis using the Foresight categories. Social <span class="html-italic">determinants</span> cover concepts such as smoking, socioeconomic status, educational attainments, employment inequities, work-life balance, and food literacy. Social <span class="html-italic">effects</span> include notions such as weight-based bullying, weight bias, and stigma.</p>
Figure 3
<p>Analysis using categories specifically designed for the PHSA report. Each color corresponds to a category: for example, nodes and edges involved in eating disorders are shown in red.</p>
Figure 4
<p>Comparison of the most important factors by connection (shown by circle size) and centrality (shown by hue).</p>
Figure 5
<p>Set up to identify similar concepts across individual maps and combine them. <span class="html-italic">Low resolution is used to avoid the association of participants with specific statements while illustrating our overall set-up</span>.</p>
Figure 6
<p>A systems view of obesity and well-being taken together (<b>a</b>), compared to being centered on either well-being (<b>b</b>) or obesity (<b>c</b>). Colors indicate that nodes belong to different categories.</p>
Figure 7
<p>Reduced conceptual map using categories. The size of each category indicates its degree (i.e., number of interactions), while its color indicates centrality.</p>
Figure 8
<p>Word clouds for answers including obesity.</p>
Figure 9
<p>Word clouds for answers including well-being.</p>
Figure A1
<p>Consequences of dyslipidemia, with colored comorbidities on the right. All relationships depict an increase.</p>
Figure A2
<p>Consequences of an excess or malfunction of adipose tissue, with colored comorbidities on the right. All relationships depict an increase.</p>
Figure A3
<p>Pathways involved in linking obesity with respiratory disease based on Sebastian [<a href="#B103-information-15-00115" class="html-bibr">103</a>] and Poulain et al. [<a href="#B104-information-15-00115" class="html-bibr">104</a>]. Plain arrows indicate an increase whereas dotted arrows indicate a decrease.</p>
Figure A4
<p>Consequences of obesity’s comorbidities on fear and ability regarding physical activities.</p>
Figure A5
<p>Consequences of obesity and mental well-being on healthy eating.</p>
Figure A6
<p>Pathways linking exercise and healthy eating to comorbidities. Factors and directed relationships were mostly based on the interviewees’ synthesis of the evidence (54% of relationships). Other sources include the highly cited work of Monteiro and Azevedo [<a href="#B110-information-15-00115" class="html-bibr">110</a>] (14%), the NIH’s 2022 updated definition of atherosclerosis [<a href="#B111-information-15-00115" class="html-bibr">111</a>] (12%), and a chapter by Tornheim and Ruderman [<a href="#B112-information-15-00115" class="html-bibr">112</a>] (10%). The remaining components come from the highly cited overview of Bastien et al. [<a href="#B113-information-15-00115" class="html-bibr">113</a>], a classic study by Mayer-Davis et al. [<a href="#B114-information-15-00115" class="html-bibr">114</a>], the authoritative review of Mottillo et al. [<a href="#B115-information-15-00115" class="html-bibr">115</a>], and the well-known work of Shoelson et al. [<a href="#B116-information-15-00115" class="html-bibr">116</a>]; previous subsections completed this subsystem (black arrows). Relationships causing an increase are depicted by plain arrows, while those causing a decrease are depicted by dashed arrows.</p>
Figure A7
<p>Relationships between short sleep duration, obesity, and comorbidities of obesity. Relationships causing an increase are depicted by plain arrows, while those causing a decrease are depicted by dashed arrows.</p>
Figure A8
<p>Relationships between the objective food environment and healthy eating. Relationships causing an increase are depicted by plain arrows, while those causing a decrease are depicted by dashed arrows.</p>
Figure A9
<p>Barriers to physical activity. Relationships causing an increase are depicted by plain arrows, while those causing a decrease are depicted by dashed arrows.</p>
13 pages, 1805 KiB  
Article
Understanding Self-Supervised Learning of Speech Representation via Invariance and Redundancy Reduction
by Yusuf Brima, Ulf Krumnack, Simone Pika and Gunther Heidemann
Information 2024, 15(2), 114; https://doi.org/10.3390/info15020114 - 15 Feb 2024
Viewed by 1995
Abstract
Self-supervised learning (SSL) has emerged as a promising paradigm for learning flexible speech representations from unlabeled data. By designing pretext tasks that exploit statistical regularities, SSL models can capture useful representations that are transferable to downstream tasks. Barlow Twins (BTs) is an SSL technique inspired by theories of redundancy reduction in human perception. In downstream tasks, BTs representations accelerate learning and transfer this learning across applications. This study applies BTs to speech data and evaluates the obtained representations on several downstream tasks, showing the applicability of the approach. However, limitations exist in disentangling key explanatory factors, with redundancy reduction and invariance alone being insufficient for factorization of learned latents into modular, compact, and informative codes. Our ablation study isolated gains from invariance constraints, but the gains were context-dependent. Overall, this work substantiates the potential of Barlow Twins for sample-efficient speech encoding. However, challenges remain in achieving fully hierarchical representations. The analysis methodology and insights presented in this paper pave a path for extensions incorporating further inductive priors and perceptual principles to further enhance the BTs self-supervision framework. Full article
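The BTs objective pushes the empirical cross-correlation matrix between the two views' embeddings toward the identity: diagonal terms toward 1 (invariance) and off-diagonal terms toward 0 (redundancy reduction). A minimal NumPy sketch of this loss (the weighting lam is a typical choice, not a value taken from the paper):

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """Barlow Twins loss on two batches of embeddings, shape (batch, dim)."""
    # standardize each embedding dimension over the batch
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-9)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-9)
    n = z_a.shape[0]
    c = z_a.T @ z_b / n  # empirical cross-correlation matrix (dim x dim)
    on_diag = ((np.diagonal(c) - 1.0) ** 2).sum()            # invariance term
    off_diag = (c ** 2).sum() - (np.diagonal(c) ** 2).sum()  # redundancy term
    return on_diag + lam * off_diag
```

For two identical views the invariance term vanishes, so the loss is far smaller than for two unrelated batches.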
(This article belongs to the Topic Advances in Artificial Neural Networks)
Figure 1
<p>The BTs framework for learning invariant speech representations. <b>Stage 1:</b> An encoder <math display="inline"><semantics> <msub> <mi>f</mi> <mi>θ</mi> </msub> </semantics></math> processes augmented views <math display="inline"><semantics> <msup> <mi>X</mi> <mi>A</mi> </msup> </semantics></math> and <math display="inline"><semantics> <msup> <mi>X</mi> <mi>B</mi> </msup> </semantics></math> of the same speech input <span class="html-italic">X</span> and projects them into a shared latent space. The BTs’ loss (Equation (<a href="#FD1-information-15-00114" class="html-disp-formula">1</a>)) enforces redundancy reduction between latents from different samples while maximizing correlation for positive pairs (two views of the same sample). This causes the encoders to produce invariant representations capturing speaker identity while reducing sensitivity to augmentations. <b>Stage 2:</b> The learned latent representations <math display="inline"><semantics> <msup> <mi>Z</mi> <mi>A</mi> </msup> </semantics></math> and <math display="inline"><semantics> <msup> <mi>Z</mi> <mi>B</mi> </msup> </semantics></math> can then be used for downstream speech-processing tasks to evaluate the model’s generalization capability.</p>
Figure 2
<p>(<b>Left column</b>) View 1 provides a dual representation, featuring the time-domain signal (<b>top row</b>) and its corresponding time-frequency spectrogram (<b>second row</b>), both derived from the first perturbed version of the original audio signal. (<b>Right column</b>) View 2 presents a similar pair of representations. The higher harmonic partials present in the first view are not visibly present in the second view; however, the underlying information content remains invariant.</p>
Figure 3
<p>The empirical cross-correlation between the 128 features of the latent representations <math display="inline"><semantics> <msup> <mi>Z</mi> <mi>A</mi> </msup> </semantics></math> and <math display="inline"><semantics> <msup> <mi>Z</mi> <mi>B</mi> </msup> </semantics></math> for paired augmented views, contrasting the untrained state (<b>left</b>) with the trained state (<b>right</b>) within the BTs framework. These matrices visually represent the relationships between different views of the same speech input for the current mini-batch. The comparison allows us to observe the transformation in cross-correlation patterns following the self-supervised learning process, highlighting the model’s ability to capture invariance (higher correlation of diagonal elements of the trained network’s matrix) and de-correlation of off-diagonal elements.</p>
Figure 4
<p>(<b>a</b>) Top-1 accuracy for speaker recognition, comparing five base models over 50 experimental runs, highlighting the performance and stability of these techniques. (<b>b</b>) Top-1 accuracy for gender recognition from speech, using the same base models, which shows a similar performance trend, indicating task-specific model effectiveness and the nuanced nature of gender features in speech data.</p>
Figure 5
<p>(<b>a</b>) Boxplot of Top-1 accuracy in emotion recognition across five different base models over 50 experimental runs, showing the consistency and variability in model performances. (<b>b</b>) Boxplot of Top-1 accuracy in a keyword-spotting task for the same base models and number of runs, illustrating the impact of model architecture on task-specific accuracy.</p>
24 pages, 4184 KiB  
Article
Deep Reinforcement Learning for Autonomous Driving in Amazon Web Services DeepRacer
by Bohdan Petryshyn, Serhii Postupaiev, Soufiane Ben Bari and Armantas Ostreika
Information 2024, 15(2), 113; https://doi.org/10.3390/info15020113 - 15 Feb 2024
Viewed by 3647
Abstract
The development of autonomous driving models through reinforcement learning has gained significant traction. However, developing obstacle avoidance systems remains a challenge. Specifically, optimising path completion times while navigating obstacles is an underexplored research area. Amazon Web Services (AWS) DeepRacer emerges as a powerful infrastructure for engineering and analysing autonomous models, providing a robust foundation for addressing these complexities. This research investigates the feasibility of training end-to-end self-driving models focused on obstacle avoidance using reinforcement learning on the AWS DeepRacer autonomous race car platform. A comprehensive literature review of autonomous driving methodologies and machine learning model architectures is conducted, with a particular focus on object avoidance, followed by hands-on experimentation and the analysis of training data. Furthermore, the impact of sensor choice, reward function, action spaces, and training time on the autonomous obstacle avoidance task are compared. The results of the best configuration experiment demonstrate a significant improvement in obstacle avoidance performance compared to the baseline configuration, with a 95.8% decrease in collision rate, while taking about 79% less time to complete the trial circuit. Full article
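In AWS DeepRacer, the reward function is a Python callback that receives a `params` dict at each step; keys such as `all_wheels_on_track`, `track_width`, and `distance_from_center` are part of the DeepRacer reward-function interface. A sketch of a simple centerline-following baseline; the paper's actual reward configurations differ:

```python
def reward_function(params):
    """Toy DeepRacer-style reward: near-zero reward off track, otherwise
    decay the reward quadratically with distance from the centerline."""
    if not params["all_wheels_on_track"]:
        return 1e-3  # strongly discourage leaving the track
    half_width = params["track_width"] / 2.0
    offset = params["distance_from_center"] / half_width  # 0 at center, 1 at edge
    return float(max(1e-3, 1.0 - offset ** 2))
```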
Figure 1
<p>AWS DeepRacer policy network architecture.</p>
Figure 2
<p>Sensors on the AWS DeepRacer EVO car [<a href="#B62-information-15-00113" class="html-bibr">62</a>].</p>
Figure 3
<p>Simulation track map; the DeepRacer car represented by a purple circle; the three obstacles represented by white boxes.</p>
Figure 4
<p>Baseline model reward graph.</p>
Figure 5
<p>Baseline model with SAC algorithm reward graph.</p>
Figure 6
<p>Extended baseline model reward graph.</p>
Figure 7
<p>Extended baseline model with LiDAR reward graph.</p>
Figure 8
<p>Continuous reward function model reward graph.</p>
Figure 9
<p>Continuous reward function with LiDAR model reward graph.</p>
Figure 10
<p>Unknown environment simulation track map.</p>
Figure 11
<p>Continuous reward function with reduced action space model reward graph.</p>
19 pages, 404 KiB  
Article
A New Algorithm Framework for the Influence Maximization Problem Using Graph Clustering
by Agostinho Agra and Jose Maria Samuco
Information 2024, 15(2), 112; https://doi.org/10.3390/info15020112 - 14 Feb 2024
Cited by 1 | Viewed by 1687
Abstract
Given a social network modelled by a graph, the goal of the influence maximization problem is to find k vertices that maximize the number of active vertices through a process of diffusion. For this diffusion, the linear threshold model is considered. A new algorithm, called ClusterGreedy, is proposed to solve the influence maximization problem. The ClusterGreedy algorithm partitions the original set of nodes into small subsets (the clusters), applies the SimpleGreedy algorithm to the subgraphs induced by each subset of nodes, and obtains the final seed set by combining the seed sets of the clusters through an integer linear program. The algorithm is further improved by exploiting the submodularity of the diffusion function. Experimental results show that the ClusterGreedy algorithm provides, on average, higher influence spread and lower running times than the SimpleGreedy algorithm on Watts–Strogatz random graphs. Full article
(This article belongs to the Special Issue Optimization Algorithms and Their Applications)
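As a rough illustration of the building blocks named in the abstract, the sketch below implements Monte Carlo spread estimation under the linear threshold model and a SimpleGreedy-style seed selection. The clustering and integer-programming steps that distinguish ClusterGreedy are omitted, and all function names here are ours, not the authors'.

```python
import random

def lt_spread(graph, weights, seeds, trials=200, rng=random):
    """Estimate the expected spread of `seeds` under the linear threshold model.

    graph:   dict node -> list of out-neighbours
    weights: dict (u, v) -> edge influence weight; incoming weights
             should sum to at most 1 per node.
    """
    total = 0
    nodes = list(graph)
    for _ in range(trials):
        # Each node draws a uniform activation threshold.
        theta = {v: rng.random() for v in nodes}
        active = set(seeds)
        frontier = set(seeds)
        influence = {v: 0.0 for v in nodes}
        while frontier:
            new = set()
            for u in frontier:
                for v in graph[u]:
                    if v in active:
                        continue
                    influence[v] += weights[(u, v)]
                    if influence[v] >= theta[v]:
                        new.add(v)
            active |= new
            frontier = new
        total += len(active)
    return total / trials

def simple_greedy(graph, weights, k, trials=200):
    """Greedy seed selection: repeatedly add the node with the largest
    estimated marginal gain in spread."""
    seeds = []
    for _ in range(k):
        best = max((v for v in graph if v not in seeds),
                   key=lambda v: lt_spread(graph, weights, seeds + [v], trials))
        seeds.append(best)
    return seeds
```

ClusterGreedy would run `simple_greedy` independently on each cluster's induced subgraph and then select among the per-cluster seeds with an integer linear program.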
Show Figures

Figure 1: Scheme of Algorithm 3.
Figure 2: Average number of clusters generated by the MCL algorithm.
Figure 3: Average runtime of clusters’ generation.
Figure 4: Comparison of the average execution time of the SimpleGreedy and ClusterGreedy algorithms.
Figure 5: Percentage of the average runtime of ClusterGreedy compared with SimpleGreedy.
Figure 6: Comparison of the average objective function value given by the SimpleGreedy and ClusterGreedy algorithms.
11 pages, 1886 KiB  
Article
Exploring the Impact of Body Position on Attentional Orienting
by Rébaï Soret, Noemie Prea and Vsevolod Peysakhovich
Information 2024, 15(2), 111; https://doi.org/10.3390/info15020111 - 13 Feb 2024
Viewed by 1406
Abstract
Attentional orienting is a crucial process in perceiving our environment and guiding human behavior. Recent studies have suggested a forward attentional bias, where faster reactions are observed to spatial cues indicating information appearing in the forward rather than the rear direction. This study investigated how body position affects attentional orienting, using a modified version of the Posner cueing task within a virtual reality environment. Participants, seated upright at 90° or reclined at 45°, followed arrows directing their attention to one of four spatial positions where a spaceship would appear, visible either through transparent windows (front space) or in mirrors (rear space). Their task was to promptly identify the spaceship’s color as red or blue. The results indicate that participants reacted more swiftly when the cue correctly indicated the target’s location (valid cues) and when targets appeared in the front rather than the rear. Moreover, the “validity effect” (the advantage of valid over invalid cues) on early eye movements varied based on both the participant’s body position and the target’s location (front or rear). These findings suggest that body position may modulate the forward attentional bias, highlighting its relevance in attentional orienting. The study’s implications are further discussed within contexts like aviation and space exploration, emphasizing the necessity for precise and swift responses to stimuli across diverse spatial environments. Full article
(This article belongs to the Special Issue Recent Advances and Perspectives in Human-Computer Interaction)
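The "validity effect" reported above is simply the reaction-time advantage of validly over invalidly cued trials. A minimal sketch of that computation (our own helper, not the authors' analysis code):

```python
from statistics import mean

def validity_effect(trials):
    """Cueing (validity) effect: mean invalid-cue RT minus mean valid-cue RT.

    `trials` is a list of (valid: bool, rt_ms: float) pairs; a positive
    value means valid cues sped responses up.
    """
    valid = [rt for ok, rt in trials if ok]
    invalid = [rt for ok, rt in trials if not ok]
    return mean(invalid) - mean(valid)
```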
Show Figures

Figure 1: A screenshot of the experimental environment as projected through the virtual reality headset. The arrow corresponds to the attentional cue indicating one of the four possible target positions: two corresponding to direct vision and two corresponding to the rearview mirrors.
Figure 2: Example of a valid trial sequence: after a fixation point lasting 1500 ms, a directional arrow lasting 300 ms indicates the target’s occurrence through the left transparent sight. After an inter-stimulus interval of 300 ms, the target appears at the indicated location. It remains red for 250 ms, then turns white. The response sequence is as follows: move the eyes away from the central fixation point (gaze initiation), look at the target (target seen), and press the “grip” button associated with the color of the spaceship (discrimination response). The green dot illustrates the position of the participants’ gaze.
Figure 3: Cueing effect (in milliseconds) for gaze initiation based on target location and body position. A greater effect means more efficient orienting for valid cues and/or a higher cost for invalid cues.
Figure 4: Experimental configurations illustrating the alignment of participants’ perception in real (red axis) and virtual (blue axis) spaces. (A) In the seated position, the orientation of the virtual environment (blue axis) aligns with the upright orientation of the participant, which is also in alignment with the direction of real-world gravity (red axis). (B) In the reclined position, the orientation of the virtual environment (blue axis) is adjusted to align with the body orientation of the participant. However, the direction of real-world gravity (red axis) remains constant and does not align with the virtual environment’s orientation. As a result, a portion of the “front” in the virtual environment overlaps with the space above in the real-world orientation, and a part of the “rear” overlaps with the space below.
29 pages, 667 KiB  
Article
Scrum@PA: Tailoring an Agile Methodology to the Digital Transformation in the Public Sector
by Paolo Ciancarini, Raffaele Giancarlo and Gennaro Grimaudo
Information 2024, 15(2), 110; https://doi.org/10.3390/info15020110 - 13 Feb 2024
Cited by 1 | Viewed by 3032
Abstract
Digital transformation in the public sector provides digital services to citizens, aiming to increase their quality of life as well as the transparency and accountability of a public administration. Since adaptation to citizens’ changing needs is central to its success, Agile methodologies seem best suited for the software development of digital services in that area. However, as well documented by an attempt to use Scrum for an important Public Administration in Italy, substantial modifications to standard Agile were needed, giving rise to a new proposal called improved Agile (in short, iAgile). Another notable example is the Scrum@IMI method developed by the City of Barcelona for the deployment of its digital services. However, given the importance of digital transformation in the public sector and the scarcity of efforts (documented in the scholarly literature) to effectively bring Agile within it, a strategically important contribution that Computer Science can offer is a general paradigm describing how to tailor Agile methodologies and, in particular, Scrum, for such a specific context. Our proposal, called Scrum@PA, addresses this strategic need. Based on it, a public administration has a technically sound avenue to follow to adopt Scrum rather than a generic set of guidelines as in the current state of the art. We show the validity of our proposal by describing how the quite successful Scrum@IMI approach can be derived from Scrum@PA. Although iAgile can also be derived from our paradigm, we have chosen Scrum@IMI as a pilot example since it is publicly available on GitHub. Full article
(This article belongs to the Special Issue Optimization and Methodology in Software Engineering)
Show Figures

Figure 1: Leading professional figures for a PA within the taxonomy: product creation and support category. Starting at the root, the first two levels of the tree are a verbatim rendering of the taxonomy described in Section 2, while the leaf nodes represent the set of leader professional figures involved in the entire life cycle of the project that the PA intends to realize following the Scrum methodology. At the leaf level, two types of professional figures are represented, i.e., those belonging to the standard Scrum methodology (pink leaves) and those specialized in its adoption in the PA context (light blue leaves).
Figure 2: Leading professional figures for the PA within the taxonomy: data management and ICT category. The figure legend is as in Figure 1.
Figure 3: Taxonomy of teams and their members’ professional figures, within the Scrum methodology, for their adoption in the context of the PA: product creation and support category. The figure legend is as in Figure 1.
Figure 4: Taxonomy of teams and their members’ professional figures, within the Scrum methodology, for their adoption in the context of the PA: data management and ICT category. The figure legend is as in Figure 1.
Figure 5: A generic scheme of the software development process via the Scrum methodology. The components of this figure are illustrated in the main text.
Figure 6: A paradigmatic Scrum model for the PA: Scrum sprint. The activities carried out within each component, together with the leader figures and teams involved, are described in Section 4.2, while the interactions, here encoded by edges, are described in Section 4.3.
Figure 7: Scrum@PA software development component. A node symbolized by a cloud represents a team, while a node symbolized by a circle represents a leader professional figure. This component consists of a single team, i.e., PT, and two leaders, i.e., SM and SC. The description of the interactions (represented by edges) between the team PT and the leaders is in the main text.
Figure 8: Scrum@PA service design component. Following the same notation as in Figure 7, this component consists of three teams, i.e., SPCT, ST, and POT. The description of the interactions among teams (edges) is in the main text.
Figure 9: Scrum@PA technology management component. Following the same notation as in Figure 7, this component consists of a single team, i.e., LT.
Figure 10: Scrum@PA product compliance and validation component. Following the same notation as in Figure 7, this component consists of two leaders, i.e., RC and RS. The description of the interactions among leaders (edges) is in the main text.
Figure 11: Scrum@IMI methodology: Scrum sprint. Starting from the on-boarding phase, the arrows provide a temporal succession of the four phases. Each phase is detailed in the main text, together with the edges that enter such a Scrum sprint.
Figure 12: The on-boarding phase of the Scrum@IMI methodology. The teams and the leader professional figures are indicated as in Figure 7. On each line, dots are numbered from left to right, but this order does not imply execution priority. For each dot, the activities are above it, while those involved in carrying them out are below it. Subfigure (a) shows the process managing the project up to the contract review; subfigure (b) shows the initialization of the Product Backlog.
Figure 13: The sprint 0 phase of the Scrum@IMI framework. (a) Planning; (b) backlog creation; (c) backlog refinement; (d) conclusion of sprint 0. The figure legend is as in Figure 12.
Figure 14: The sprint i phase of the Scrum@IMI framework. (a) Creation of the sprint backlog; (b) backlog refinement; (c) sprint conclusion. The figure legend is as in Figure 12.
Figure 15: The service transition phase of the Scrum@IMI framework. (a) Versioning each increment; (b) service validation and citizens’ feedback. The figure legend is as in Figure 12.
27 pages, 1281 KiB  
Article
ForensicTransMonitor: A Comprehensive Blockchain Approach to Reinvent Digital Forensics and Evidence Management
by Saad Said Alqahtany and Toqeer Ali Syed
Information 2024, 15(2), 109; https://doi.org/10.3390/info15020109 - 13 Feb 2024
Cited by 3 | Viewed by 3140
Abstract
In the domain of computer forensics, ensuring the integrity of operations like preservation, acquisition, analysis, and documentation is critical. Discrepancies in these processes can compromise evidence and lead to potential miscarriages of justice. To address this, we developed a generic methodology integrating each forensic transaction into an immutable blockchain entry, establishing transparency and authenticity from data preservation to final reporting. Our framework was designed to manage a wide range of forensic applications across different domains, including technology-focused areas such as the Internet of Things (IoT) and cloud computing, as well as sector-specific fields like healthcare. Central to our approach are smart contracts that seamlessly connect forensic applications to the blockchain via specialized APIs. Every action within the forensic process triggers a verifiable transaction on the blockchain, enabling a comprehensive and tamper-proof case presentation in court. Performance evaluations confirmed that our system operates with minimal overhead, ensuring that the integration bolsters the judicial process without hindering forensic investigations. Full article
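The paper's framework anchors each forensic transaction in Hyperledger Fabric through smart contracts. As a language-agnostic illustration of the underlying tamper-evidence idea only, here is a minimal hash-chained log in Python; this is not the authors' implementation, and the field names are assumptions.

```python
import hashlib
import json

def record_action(chain, case_id, action, payload):
    """Append a forensic action as a hash-chained entry (tamper-evident)."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {
        "case_id": case_id,
        "action": action,    # e.g. "preservation", "acquisition"
        "payload": payload,  # e.g. evidence digest, examiner id
        "prev_hash": prev_hash,
    }
    # Hash a canonical (sorted-key) serialization of the entry body.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)
    return entry

def verify(chain):
    """Recompute every hash; any edit to an earlier entry breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

A permissioned blockchain such as Hyperledger Fabric additionally distributes this ledger across peers and enforces the append logic in chaincode, which is what gives the framework its court-admissible audit trail.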
Show Figures

Figure 1: The proposed generic framework that can encapsulate all types of forensic transactions in a blockchain.
Figure 2: Complete framework of computer forensics transactions integrated with blockchain.
Figure 3: Sequence diagram for data preservation.
Figure 4: Diagram of the integration of data preservation with blockchain.
Figure 5: Diagram of the integration of data acquisition with blockchain.
Figure 6: Sequence diagram for data acquisition.
Figure 7: Diagram of the integration of data analysis with blockchain.
Figure 8: Sequence diagram for data analysis.
Figure 9: Sequence diagram for data documentation.
Figure 10: Performance evaluation of Hyperledger Fabric.
Figure 11: Performance evaluation of Hyperledger Fabric with interims.
21 pages, 4426 KiB  
Article
Improved Detection Method for Micro-Targets in Remote Sensing Images
by Linhua Zhang, Ning Xiong, Wuyang Gao and Peng Wu
Information 2024, 15(2), 108; https://doi.org/10.3390/info15020108 - 12 Feb 2024
Cited by 1 | Viewed by 1815
Abstract
With the exponential growth of remote sensing images in recent years, there has been a significant increase in demand for micro-target detection. Recently, effective detection methods for small targets have emerged; however, for micro-targets (even fewer pixels than small targets), most existing methods are not fully competent in feature extraction, target positioning, and rapid classification. This study proposes an enhanced detection method, especially for micro-targets, in which a combined loss function (consisting of NWD and CIOU) is used instead of a singular CIOU loss function. In addition, the lightweight Content-Aware Reassembly of Features (CARAFE) replaces the original bilinear interpolation upsampling algorithm, and a spatial pyramid structure is added into the network model’s small target layer. The proposed algorithm undergoes training and validation utilizing the benchmark dataset known as AI-TOD. Compared to speed-oriented YOLOv7-tiny, the mAP0.5 and mAP0.5:0.95 of our improved algorithm increased from 42.0% and 16.8% to 48.7% and 18.9%, representing improvements of 6.7% and 2.1%, respectively, while the detection speed was almost equal to that of YOLOv7-tiny. Furthermore, our method was also tested on a dataset of multi-scale targets, which contains small targets, medium targets, and large targets. The results demonstrated that mAP0.5:0.95 increased from “9.8%, 54.8%, and 68.2%” to “12.6%, 55.6%, and 70.1%” for detection across different scales, indicating improvements of 2.8%, 0.8%, and 1.9%, respectively. In summary, the presented method improves detection metrics for micro-targets in various scenarios while satisfying the requirements of detection speed in a real-time system. Full article
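For reference, the NWD term in the combined loss follows the normalized Gaussian Wasserstein distance proposed for tiny-object detection, and the sketch below blends it with a standard CIOU loss for single (cx, cy, w, h) boxes. The 0.5 blend weight and the constant `c` are illustrative, not the paper's tuned values.

```python
import math

def nwd(b1, b2, c=12.8):
    """Normalized Wasserstein distance between (cx, cy, w, h) boxes,
    modelled as 2-D Gaussians; c is a dataset-dependent constant."""
    w2_sq = ((b1[0] - b2[0]) ** 2 + (b1[1] - b2[1]) ** 2
             + ((b1[2] - b2[2]) / 2) ** 2 + ((b1[3] - b2[3]) / 2) ** 2)
    return math.exp(-math.sqrt(w2_sq) / c)

def ciou_loss(b1, b2):
    """Complete-IoU loss for (cx, cy, w, h) boxes."""
    # Convert to corner coordinates.
    x11, y11, x12, y12 = b1[0] - b1[2]/2, b1[1] - b1[3]/2, b1[0] + b1[2]/2, b1[1] + b1[3]/2
    x21, y21, x22, y22 = b2[0] - b2[2]/2, b2[1] - b2[3]/2, b2[0] + b2[2]/2, b2[1] + b2[3]/2
    inter = (max(0.0, min(x12, x22) - max(x11, x21))
             * max(0.0, min(y12, y22) - max(y11, y21)))
    union = b1[2] * b1[3] + b2[2] * b2[3] - inter
    iou = inter / union
    # Squared centre distance over squared enclosing-box diagonal.
    rho2 = (b1[0] - b2[0]) ** 2 + (b1[1] - b2[1]) ** 2
    c2 = ((max(x12, x22) - min(x11, x21)) ** 2
          + (max(y12, y22) - min(y11, y21)) ** 2)
    # Aspect-ratio consistency term.
    v = 4 / math.pi ** 2 * (math.atan(b1[2] / b1[3]) - math.atan(b2[2] / b2[3])) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return 1 - iou + rho2 / c2 + alpha * v

def combined_loss(b1, b2, w_nwd=0.5):
    """Blend of NWD and CIOU losses; the weight is illustrative."""
    return w_nwd * (1 - nwd(b1, b2)) + (1 - w_nwd) * ciou_loss(b1, b2)
```

The motivation for the blend is that IoU-based terms vanish (or become very noisy) for boxes of only a few pixels, while the Wasserstein term still provides a smooth gradient.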
Show Figures

Figure 1: YOLOv7-tiny network architecture.
Figure 2: IOU calculation.
Figure 3: Improved network architecture. The green dotted box represents Trick 1, in which the new loss function is used. The orange dotted box is Trick 2, in which Upsample in the structure is replaced by CARAFE. The red dotted box represents Trick 3, in which CBL in the structure is replaced by CSPSPP.
Figure 4: CARAFE structure.
Figure 5: CSPSPP structure.
Figure 6: Sample images from the AI-TOD dataset.
Figure 7: Model training process.
Figure 8: Comparison of training process curves for the baseline and the enhanced method.
Figure 9: Comparison of YOLOv7-tiny and the proposed method for micro-target detection.
Figure 10: Comparison of YOLOv7-tiny and the proposed method for dense-target detection.
Figure 11: Comparison of YOLOv7-tiny and the proposed method for micro-target detection against complex backgrounds.
18 pages, 6294 KiB  
Article
Location Analytics of Routine Occurrences (LARO) to Identify Locations with Regularly Occurring Events with a Case Study on Traffic Accidents
by Yanan Wu, Yalin Yang and May Yuan
Information 2024, 15(2), 107; https://doi.org/10.3390/info15020107 - 9 Feb 2024
Viewed by 2076
Abstract
Conventional spatiotemporal methods take frequentist or density-based approaches to map event clusters over time. While these methods discern hotspots of varying continuity in space and time, their findings overlook locations of routine occurrences where the geographic context may contribute to the regularity of event occurrences. Hence, this research aims to recognize the routine occurrences of point events and relate site characteristics and situation dynamics around these locations to explain the regular occurrences. We developed an algorithm, Location Analytics of Routine Occurrences (LARO), to determine an appropriate temporal unit based on event periodicity, seek locations of routine occurrences, and geographically contextualize these locations through spatial association mining. We demonstrated LARO in a case study with over 250,000 reported traffic accidents from 2010 to 2018 in Dallas, Texas, United States. LARO identified three distinctive locations, each exhibiting varying frequencies of traffic accidents at each weekly hour. The findings indicated that locations with routine traffic accidents are surrounded by high densities of stores, restaurants, entertainment, and businesses. The timing of traffic accidents showed a strong relationship with human activities around these points of interest. Besides the LARO algorithm, this study contributes to the understanding of previously overlooked periodicity in traffic accidents, emphasizing the association between periodic human activities and the occurrence of street-level traffic accidents. The proposed LARO algorithm is applicable to occurrences of point-based events, such as crime incidents or animal sightings. Full article
(This article belongs to the Special Issue Telematics, GIS and Artificial Intelligence)
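LARO's first step, choosing a temporal unit from event periodicity, can be approximated with a plain autocorrelation scan over an event-count series (for instance, hourly accident counts, where a lag of 168 would indicate a weekly cycle). This is our simplification for illustration; the paper's actual procedure may differ.

```python
def dominant_period(counts, max_lag=None):
    """Return the lag with the highest autocorrelation in an event-count
    series -- a simple stand-in for LARO's periodicity step."""
    n = len(counts)
    max_lag = max_lag or n // 2
    mean = sum(counts) / n
    var = sum((x - mean) ** 2 for x in counts) or 1.0  # guard constant series

    def acf(lag):
        # Unnormalised-by-length autocorrelation at the given lag.
        return sum((counts[i] - mean) * (counts[i + lag] - mean)
                   for i in range(n - lag)) / var

    return max(range(1, max_lag + 1), key=acf)
```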
Show Figures

Figure 1: The computational workflow of the Location Analytics of Routine Occurrences (LARO) algorithm.
Figure 2: Time series pattern of traffic accidents.
Figure 3: Frequency vs. intensity.
Figure 4: Grid locations with 1500 m intervals across the entire city of Dallas.
Figure 5: (a) Temporal pattern on weekdays; (b) temporal pattern on weekends.
Figure 6: Locations of routine traffic accidents in the city of Dallas.
Figure 7: Increasing trend in traffic accidents.
Figure 8: Stable trends in traffic accidents.
Figure 9: Proportional distributions of POI categories.
Figure 10: Graph-based parallel coordinates for 155 rules; arrow width: support (10–16%); color: confidence (41–82%); lift: 1.2–2.4.
Figure 11: Graph-based parallel coordinates for 15 rules; arrow width: support (10–14%); color: confidence (40–50%); lift: 1.2–1.5.
Figure 12: Graph-based parallel coordinates for 131 rules; arrow width: support (10–26%); color: confidence (52–97%); lift: 1.5–2.9.
22 pages, 1370 KiB  
Article
Countermeasure Strategies to Address Cybersecurity Challenges Amidst Major Crises in the Higher Education and Research Sector: An Organisational Learning Perspective
by Samreen Mahmood, Mehmood Chadhar and Selena Firmin
Information 2024, 15(2), 106; https://doi.org/10.3390/info15020106 - 9 Feb 2024
Cited by 2 | Viewed by 2155
Abstract
Purpose: The purpose of this research paper was to analyse the counterstrategies to mitigate cybersecurity challenges using organisational learning loops amidst major crises in the Higher Education and Research Sector (HERS). The authors proposed the learning loop framework revealing several counterstrategies to mitigate cybersecurity issues in HERS. The counterstrategies are explored, and their implications for research and practice are discussed. Methodology: The qualitative methodology was adopted, and semi-structured interviews with cybersecurity experts and top managers were conducted. Results: This exploratory paper proposed the learning loop framework, identifying the following significant counterstrategies for ensuring a cyber-safe working environment in HERS: introducing new policies and procedures, changing existing systems, partnering with other companies, integrating new software, improving employee learning, enhancing security, and monitoring and evaluating security measures. These counterstrategies will help to tackle cybersecurity in HERS, not only during the current major crisis but also in the future. Implications: The outcomes provide insightful implications for both theory and practice. This study proposes a learning framework that prioritises counterstrategies to mitigate cybersecurity challenges in HERS amidst a major crisis. The proposed model can help HERS be more efficient in mitigating cybersecurity issues in future crises. The counterstrategies can also be tested, adopted, and implemented by practitioners working in other sectors to mitigate cybersecurity issues during and after major crises. Future research can focus on addressing the shortcomings and limitations of the proposed learning framework adopted by HERS. Full article
(This article belongs to the Special Issue Advances in Cybersecurity and Reliability)
Show Figures

Figure 1: Organisational Learning [21].
Figure 2: Single-, Double-, and Triple-Loop Learning [31].
Figure 3: Mapping of the research study results in the learning framework [31].
40 pages, 7427 KiB  
Article
Success Factors in Management of IT Service Projects: Regression, Confirmatory Factor Analysis, and Structural Equation Models
by Rafał Michalski and Szymon Zaleski
Information 2024, 15(2), 105; https://doi.org/10.3390/info15020105 - 9 Feb 2024
Cited by 4 | Viewed by 2338
Abstract
Although there have been some studies on the success factors for IT software projects, there is still a lack of coherent research on the success factors for IT service projects. Therefore, this study aimed to identify and understand the factors and their relationships that contribute to the success of IT service projects. For this purpose, multivariate regressions and structural equation models (SEMs) were developed and analyzed. The regression models included six project management success criteria used as dependent variables (quality of the delivered product, scope realization and requirements, timeliness of delivery, delivery within budget, customer satisfaction, and provider satisfaction) and four independent variables (agile techniques and change management, organization and people, stakeholders and risk analysis, work environment), which had been identified through exploratory factor analysis. The results showed that not all success factors were relevant to all success criteria, and there were differences in their importance. An additional series of exploratory and confirmatory factor analyses along with appropriate statistical measures were employed to evaluate the quality of these four factors. The SEM approach was based on five latent constructs with a total of twenty components. The study suggests that investing in improving people’s knowledge and skills, using agile methodologies, creating a supportive work environment, and involving stakeholders in regular risk analysis are important for project management success. The results also suggest that the success factors for IT service projects depend on both traditional and agile approaches. The study extensively compared its findings with similar research and discussed common issues and differences in both the model structures and methodologies applied. The investigation utilized mathematical methods and techniques that are not commonly applied in the field of project management success modeling. The comprehensive methodology that was applied may be helpful to other researchers who are interested in this topic. Full article
(This article belongs to the Special Issue Systems Engineering and Knowledge Management)
Show Figures

Figure 1

Figure 1
<p>Confirmatory factor analysis model of success factors in IT service projects along with weight coefficients and covariances between the four latent variables (dimensions). All the covariances were statistically significant at <span class="html-italic">p</span> &lt; 0.005, whereas the regression weights were significant at <span class="html-italic">p</span> &lt; 0.001.</p>
Full article ">Figure 2
<p>Overall subjective perception of the project management success together with regression weights. The model fit parameters <math display="inline"><semantics> <mrow> <msup> <mrow> <mi mathvariant="sans-serif">χ</mi> </mrow> <mrow> <mn>2</mn> </mrow> </msup> <mo>=</mo> </mrow> </semantics></math> 4.862, <span class="html-italic">p</span> = 0.088, <math display="inline"><semantics> <mrow> <msup> <mrow> <mi mathvariant="sans-serif">χ</mi> </mrow> <mrow> <mn>2</mn> </mrow> </msup> <mo>/</mo> <mi>d</mi> <mi>f</mi> <mo>=</mo> </mrow> </semantics></math> 2.431, CFI = 0.991, IFI = 0.991, RMSEA = 0.096. The coefficients were statistically significant at the level of <span class="html-italic">p</span> &lt; 0.005.</p>
Full article ">Figure 3
<p>A simplified graphical representation of the SEM model combining the approach of factorial orthogonal structure to project management success along with the overall perceived project management success.</p>
Full article ">Figure 4
<p>Potential relationships between factors examined by the model specification search functionality in <span class="html-italic">Amos</span> software.</p>
Full article ">Figure 5
<p>The BIC values depending on the number of model parameters in <span class="html-italic">Amos</span> software specification search.</p>
Full article ">Figure 6
<p>Iterative results of the model specification search using the best subset approach in <span class="html-italic">Amos</span> software.</p>
Full article ">Figure 7
<p>A simplified structure of the best, in terms of fitting quality criteria, Model 35(0) showing relationships between latent variables.</p>
Full article ">Figure 8
<p>Schematic representation of all eight examined variants of modified Model 35. The most appropriate (best) models are highlighted by either grey or black frames.</p>
Full article ">Figure 9
<p>Model 35(6 mod) in <span class="html-italic">Amos</span> software. All regression weights computed via the bootstrap procedure are statistically significant at <span class="html-italic">p</span> &lt; 0.01. The covariance between <span class="html-italic">Dimensions-based PM Success</span> and <span class="html-italic">Overall Perception of PM Success</span> is statistically significant at <span class="html-italic">p</span> = 0.002.</p>
Figure A1
<p>Classical model of path analysis, which assumes full orthogonality of the dimensions obtained from exploratory factor analysis. All regression weights computed by the bootstrap procedure are statistically significant with <span class="html-italic">p</span> &lt; 0.005. The covariance between <span class="html-italic">Dimensions-based PM Success</span> and <span class="html-italic">Overall Perception of PM Success</span> is statistically significant at <span class="html-italic">p</span> = 0.003.</p>
Figure A2
<p>Model 35(0) in <span class="html-italic">Amos</span> software. All regression weights computed by the bootstrap procedure are statistically significant with <span class="html-italic">p</span> &lt; 0.01. The covariance between <span class="html-italic">Dimensions-based PM Success</span> and <span class="html-italic">Overall Perception of PM Success</span> is statistically significant at <span class="html-italic">p</span> = 0.002.</p>
Figure A3
<p>Model 35(1) in <span class="html-italic">Amos</span> software. All regression weights computed by the bootstrap procedure are statistically significant with <span class="html-italic">p</span> &lt; 0.01. The covariance between <span class="html-italic">Dimensions-based PM Success</span> and <span class="html-italic">Overall Perception of PM Success</span> is statistically significant at <span class="html-italic">p</span> = 0.002.</p>
Figure A4
<p>Model 35(2) in <span class="html-italic">Amos</span> software. All regression weights computed by the bootstrap procedure are statistically significant with <span class="html-italic">p</span> &lt; 0.05. The covariance between <span class="html-italic">Dimensions-based PM Success</span> and <span class="html-italic">Overall Perception of PM Success</span> is statistically significant at <span class="html-italic">p</span> = 0.003.</p>
Figure A5
<p>Model 35(4) in <span class="html-italic">Amos</span> software. All regression weights computed by the bootstrap procedure are statistically significant with <span class="html-italic">p</span> &lt; 0.05. The covariance between <span class="html-italic">Dimensions-based PM Success</span> and <span class="html-italic">Overall Perception of PM Success</span> is statistically significant at <span class="html-italic">p</span> = 0.003.</p>
Figure A6
<p>Model 35(6) in <span class="html-italic">Amos</span> software. All regression weights computed by the bootstrap procedure are statistically significant with <span class="html-italic">p</span> &lt; 0.05. The covariance between <span class="html-italic">Dimensions-based PM Success</span> and <span class="html-italic">Overall Perception of PM Success</span> is statistically significant at <span class="html-italic">p</span> = 0.002.</p>
16 pages, 8616 KiB  
Article
Enhancing Pedestrian Tracking in Autonomous Vehicles by Using Advanced Deep Learning Techniques
by Majdi Sukkar, Madhu Shukla, Dinesh Kumar, Vassilis C. Gerogiannis, Andreas Kanavos and Biswaranjan Acharya
Information 2024, 15(2), 104; https://doi.org/10.3390/info15020104 - 9 Feb 2024
Cited by 2 | Viewed by 3158
Abstract
Effective collision risk reduction in autonomous vehicles relies on robust and straightforward pedestrian tracking. Challenges posed by occlusion and switching scenarios significantly impede the reliability of pedestrian tracking. In the current study, we strive to enhance the reliability and efficacy of pedestrian tracking in complex scenarios. In particular, we introduce a new pedestrian tracking algorithm that leverages both the YOLOv8 (You Only Look Once) object detector technique and the StrongSORT algorithm, which is an advanced deep learning multi-object tracking (MOT) method. Our findings demonstrate that StrongSORT, an enhanced version of the DeepSORT MOT algorithm, substantially improves tracking accuracy through meticulous hyperparameter tuning. Overall, the experimental results reveal that the proposed algorithm is an effective and efficient method for pedestrian tracking, particularly in complex scenarios encountered in the MOT16 and MOT17 datasets. The combined use of YOLOv8 and StrongSORT contributes to enhanced tracking results, emphasizing the synergistic relationship between detection and tracking modules. Full article
(This article belongs to the Special Issue Intelligent Information Processing for Sensors and IoT Communications)
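The detection-then-association step at the heart of tracking-by-detection pipelines like the one described in this abstract can be illustrated with a minimal greedy IoU matcher. This is a hedged sketch only: StrongSORT additionally uses Kalman-filter motion prediction and appearance embeddings, and all names below are illustrative.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, iou_threshold=0.3):
    """Greedily match existing tracks to current-frame detections by IoU;
    returns {track_id: detection_index}."""
    pairs = sorted(
        ((iou(t, d), tid, di)
         for tid, t in tracks.items()
         for di, d in enumerate(detections)),
        reverse=True,
    )
    matches, used_tracks, used_dets = {}, set(), set()
    for score, tid, di in pairs:
        if score < iou_threshold:
            break
        if tid not in used_tracks and di not in used_dets:
            matches[tid] = di
            used_tracks.add(tid)
            used_dets.add(di)
    return matches
```

Unmatched detections would spawn new tracks and unmatched tracks would age out; both bookkeeping steps are omitted here.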
Figure 1
<p>YOLOv8 performance.</p>
Figure 2
<p>Framework of the AFLink model.</p>
Figure 3
<p>Method of tuning hyperparameters.</p>
Figure 4
<p>Tracking by StrongSORT.</p>
Figure 5
<p>Tracking by StrongSORT_P.</p>
Figure 6
<p>Tracking by StrongSORT in a crowded scenario.</p>
Figure 7
<p>Tracking by StrongSORT_P in a crowded scenario.</p>
Figure 8
<p>Detection and tracking of 80 objects using YOLOv8n-seg and StrongSORT.</p>
Figure 9
<p>Detection and tracking of pedestrians using YOLOv8n-seg and StrongSORT_P.</p>
18 pages, 1865 KiB  
Article
Online Information Reviews to Boost Tourism in the B&B Industry to Reveal the Truth and Nexus
by Xiaoqun Wang, Xihui Chen and Zhouyi Gu
Information 2024, 15(2), 103; https://doi.org/10.3390/info15020103 - 9 Feb 2024
Viewed by 1826
Abstract
Grasping the concerns of customers is paramount, serving as a foundation for both attracting and retaining a loyal customer base. While customer satisfaction has been extensively explored across diverse industries, there remains a dearth of insights into how distinct rural bed and breakfasts (RB&Bs) can effectively cater to the specific needs of their target audience. This research utilized latent semantic analysis and text regression techniques on online reviews, uncovering previously unrecognized factors contributing to RB&B customer satisfaction. Furthermore, the study demonstrates that certain factors wield distinct impacts on guest satisfaction within varying RB&B market segments. The implications of these findings extend to empowering RB&B owners with actionable insights to enhance the overall customer experience. Full article
(This article belongs to the Special Issue 2nd Edition of Information Retrieval and Social Media Mining)
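The first stage of a review-mining pipeline like the one described above is a weighted term-document representation, which latent semantic analysis then factorizes. A minimal TF-IDF sketch (toy corpus and smoothed-idf formula are illustrative, not the paper's actual features or data):

```python
import math
from collections import Counter

def tfidf(docs):
    """Return per-document {term: tf-idf} weights using a smoothed idf."""
    n = len(docs)
    df = Counter()                       # document frequency per term
    tokenized = [doc.lower().split() for doc in docs]
    for tokens in tokenized:
        df.update(set(tokens))
    out = []
    for tokens in tokenized:
        tf = Counter(tokens)
        total = len(tokens)
        out.append({
            t: (c / total) * (math.log((1 + n) / (1 + df[t])) + 1)
            for t, c in tf.items()
        })
    return out

reviews = [
    "clean room friendly host",
    "noisy room",
    "friendly host great breakfast",
]
weights = tfidf(reviews)
```

Terms concentrated in few reviews (e.g., "noisy") end up weighted more heavily than terms spread across the corpus (e.g., "room"), which is what lets the downstream regression surface distinctive satisfaction factors.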
Figure 1
<p>Research process.</p>
Figure 2
<p>Pseudocode for web-crawling ORs of RB&amp;Bs. Note: The green characters in the figure represent code comments.</p>
Figure 3
<p>Example of an OR in Tongcheng-Elong.</p>
Figure 4
<p>Detailed methods of data collection and pre-processing.</p>
Figure 5
<p>Pseudocode for sentiment analysis of ORs. Note: The green characters in the figure represent code comments.</p>
Figure 6
<p>Scree plot for determining the optimal number of components.</p>
26 pages, 2875 KiB  
Article
Identifying Malware Packers through Multilayer Feature Engineering in Static Analysis
by Ehab Alkhateeb, Ali Ghorbani and Arash Habibi Lashkari
Information 2024, 15(2), 102; https://doi.org/10.3390/info15020102 - 9 Feb 2024
Cited by 1 | Viewed by 2732
Abstract
This research addresses a critical need in the ongoing battle against malware, particularly in the form of obfuscated malware, which presents a formidable challenge in the realm of cybersecurity. Developing effective antivirus (AV) solutions capable of combating packed malware remains a crucial endeavor. Packed malicious programs employ encryption and advanced techniques to obfuscate their payloads, rendering them elusive to AV scanners and security analysts. The introduced research presents an innovative malware packer classifier specifically designed to adeptly identify packer families and detect unknown packers in real-world scenarios. To fortify packer identification performance, we have curated a meticulously crafted dataset comprising precisely packed samples, enabling comprehensive training and validation. Our approach employs a sophisticated feature engineering methodology, encompassing multiple layers of analysis to extract salient features used as input to the classifier. The proposed packer identifier demonstrates remarkable accuracy in distinguishing between known and unknown packers, while also ensuring operational efficiency. The results reveal an impressive accuracy rate of 99.60% in identifying known packers and 91% accuracy in detecting unknown packers. This novel research not only significantly advances the field of malware detection but also equips both cybersecurity practitioners and AV engines with a robust tool to effectively counter the persistent threat of packed malware. Full article
(This article belongs to the Special Issue Advances in Cybersecurity and Reliability)
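One of the standard static features for packer detection, and the quantity compared for .text sections in this paper's Figure 5, is byte entropy: packed or encrypted sections score close to the 8-bit maximum, while plain code scores much lower. A minimal sketch (illustrative of this single feature, not the paper's multilayer feature set):

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte sequence in bits per byte (0..8)."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A classifier would compute this per PE section and feed it alongside the other feature layers.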
Figure 1
<p>The procedural steps involved in both packing and unpacking processes for a malware sample, illustrating the transformation and manipulation undergone by the code during these essential phases.</p>
Figure 2
<p>The integrated systemic framework: the proposed approach encompasses key components, including the dataset, feature engineering, feature set, and classification stages.</p>
Figure 3
<p>Stages within the image plot layer: preprocessing, family image selection, and testing and training.</p>
Figure 4
<p>Generation of Gabor jets from an image derived from a packed original sample.</p>
Figure 5
<p>Comparison of the entropy score in the .text section between normal files and packed files.</p>
Figure 6
<p>Packer classification process.</p>
Figure 7
<p>Dataset construction [<a href="#B46-information-15-00102" class="html-bibr">46</a>]. This intricate process involves meticulous steps such as data gathering, cleaning, and augmentation, ensuring a diverse and representative collection of samples.</p>
Figure 8
<p>Family-based identification: feature set’s average accuracy showcasing the superior performance of L1L2L3.</p>
Figure 9
<p>Feature importance: significance visualized.</p>
Figure 10
<p>PEiD identification vs. the proposed approach.</p>
Figure 11
<p>VirusTotal malware samples between the years 2017 and 2019.</p>
Figure 12
<p>Unknown packers’ identification: average accuracy highlighting performance metrics, with feature set L1L2L3 demonstrating the highest performance.</p>
Figure 13
<p>Aegis and NSPack image plot: a visual representation highlighting distinctive characteristics and patterns identified in the image plots generated for Aegis and NSPack.</p>
27 pages, 9431 KiB  
Article
Generative Pre-Trained Transformer (GPT) in Research: A Systematic Review on Data Augmentation
by Fahim Sufi
Information 2024, 15(2), 99; https://doi.org/10.3390/info15020099 - 8 Feb 2024
Cited by 15 | Viewed by 14198
Abstract
GPT (Generative Pre-trained Transformer) represents advanced language models that have significantly reshaped the academic writing landscape. These sophisticated language models offer invaluable support throughout all phases of research work, facilitating idea generation, enhancing drafting processes, and overcoming challenges like writer’s block. Their capabilities extend beyond conventional applications, contributing to critical analysis, data augmentation, and research design, thereby elevating the efficiency and quality of scholarly endeavors. Strategically narrowing its focus, this review explores alternative dimensions of GPT and LLM applications, specifically data augmentation and the generation of synthetic data for research. Employing a meticulous examination of 412 scholarly works, it distills a selection of 77 contributions addressing three critical research questions: (1) GPT on Generating Research data, (2) GPT on Data Analysis, and (3) GPT on Research Design. The systematic literature review adeptly highlights the central focus on data augmentation, encapsulating 48 pertinent scholarly contributions, and extends to the proactive role of GPT in critical analysis of research data and shaping research design. Pioneering a comprehensive classification framework for “GPT’s use on Research Data”, the study classifies existing literature into six categories and 14 sub-categories, providing profound insights into the multifaceted applications of GPT in research data. This study meticulously compares 54 pieces of literature, evaluating research domains, methodologies, and advantages and disadvantages, providing scholars with profound insights crucial for the seamless integration of GPT across diverse phases of their scholarly pursuits. Full article
(This article belongs to the Special Issue Editorial Board Members’ Collection Series: "Information Processes")
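Prompt-based, label-preserving paraphrasing is one common recipe behind the GPT data-augmentation work surveyed here. A minimal sketch of the prompt-construction step only (the model call itself is omitted, and the wording is illustrative, not taken from any surveyed paper):

```python
def augmentation_prompts(example: str, label: str, n_variants: int = 3):
    """Build prompts asking an LLM for label-preserving paraphrases of a
    labeled training example, one prompt per requested variant."""
    return [
        f"Paraphrase the following {label} example so that its meaning and "
        f"label are unchanged (variant {i + 1} of {n_variants}): {example}"
        for i in range(n_variants)
    ]
```

Each returned string would be sent to a chat-completion endpoint, and the responses appended to the training set with the original label.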
Figure 1
<p>Conceptual diagram of how GPT performs feature extraction, data augmentation, and synthetic data generation.</p>
Figure 2
<p>The use of GPT and associated LLMs in all phases of research: 48 scholarly works on data augmentation (starred, denoting the main focus of this review), 12 existing publications on critical analysis (i.e., research data analysis), and 10 papers on research design.</p>
Figure 3
<p>Search keyword used for obtaining relevant existing academic works on “GPT, LLM, and associated technologies in different phases of research”.</p>
Figure 4
<p>Schematic Diagram of the systematic literature review (i.e., use of GPT, LLM, and associated technologies on different phases of research).</p>
Figure 5
<p>A comprehensive classification framework for “GPT’s use of research data”.</p>
Figure 6
<p>A comparative schematic of feature extraction process with NLP and GPT.</p>
Figure 7
<p>Chat2VIS analyzes data and shows results in visualization with a GPT prompt like “plot the gross against budget” [<a href="#B23-information-15-00099" class="html-bibr">23</a>].</p>
Figure 8
<p>PRISMA Flow Diagram on the systematic literature review of “GPT for research”.</p>
Figure 9
<p>Timeline analysis of existing literature on the use of GPT in research.</p>
Figure A1
<p>Database search from Scopus using Scopus-specific advanced query. From Scopus, 99 documents were returned, including the duplicates. After removing the duplicates, records were screened. For example, the first record, “Beyond the Scalpel: Assessing ChatGPT’s Potential as an Auxiliary Intelligent Virtual Assistant in Oral Surgery” is not relevant to the focus of this study, “i.e., GPT in Research/GPT in Data Augmentation/GPT in Data Generation/GPT in Solving Research Problem”.</p>
Figure A2
<p>Database search from IEEE Xplore using IEEE Xplore-specific advanced queries. A total of 119 documents were returned, including duplicates. After removing the duplicates, records were screened. For example, the first record was included, and the second record was screened out as this paper does not address “GPT in research”.</p>
Figure A3
<p>Database search from PubMed using a PubMed-specific advanced query. From PubMed, 47 documents were returned, including the duplicates.</p>
Figure A4
<p>Database search from Web of Science using their supported advanced query. From Web of Science, 306 documents were returned, including duplicates. After removing the duplicates, the records were screened. For example, the first record was screened out as it focused on nanotechnology and nanomaterials.</p>
Figure A5
<p>Database search from the ACM Digital Library using their supported advanced query. From the ACM Digital Library, 102 documents were returned, including duplicates.</p>
Figure A6
<p>Litmaps suggest 20 possibly relevant articles by visually analyzing the citation maps of [<a href="#B66-information-15-00099" class="html-bibr">66</a>].</p>
12 pages, 37559 KiB  
Article
Improving Breast Tumor Multi-Classification from High-Resolution Histological Images with the Integration of Feature Space Data Augmentation
by Nadia Brancati and Maria Frucci
Information 2024, 15(2), 98; https://doi.org/10.3390/info15020098 - 8 Feb 2024
Cited by 1 | Viewed by 1635
Abstract
To support pathologists in breast tumor diagnosis, deep learning plays a crucial role in the development of histological whole slide image (WSI) classification methods. However, automatic classification is challenging due to the high-resolution data and the scarcity of representative training data. To tackle these limitations, we propose a deep learning-based breast tumor gigapixel histological image multi-classifier integrated with a high-resolution data augmentation model to process the entire slide by exploring its local and global information and generating its different synthetic versions. The key idea is to perform the classification and augmentation in feature latent space, reducing the computational cost while preserving the class label of the input. We adopt a deep learning-based multi-classification method and evaluate the contribution given by a conditional generative adversarial network-based data augmentation model on the classifier’s performance for three tumor classes in the BRIGHT Challenge dataset. The proposed method has allowed us to achieve an average F1 equal to 69.5, considering only the WSI dataset of the Challenge. The results are comparable to those obtained by the Challenge winning method (71.6), also trained on the annotated tumor region dataset of the Challenge. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)
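The key idea of augmenting in feature latent space while preserving the class label can be illustrated with a much simpler stand-in than the paper's cGAN: within-class convex mixing of latent feature vectors, which stays inside the class manifold by construction. All names and values below are illustrative.

```python
import random

def latent_mixup(features_by_class, alpha=0.3, seed=0):
    """Create one synthetic latent vector per class by convexly mixing two
    same-class feature vectors; the class label is preserved by construction.
    (Illustrative stand-in for a learned conditional generator.)"""
    rng = random.Random(seed)
    synthetic = {}
    for label, feats in features_by_class.items():
        a, b = rng.sample(feats, 2)            # two distinct same-class vectors
        lam = rng.uniform(alpha, 1 - alpha)    # mixing coefficient
        synthetic[label] = [lam * x + (1 - lam) * y for x, y in zip(a, b)]
    return synthetic
```

Working on compact latent features rather than gigapixel WSIs is what keeps this kind of augmentation computationally cheap.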
Figure 1
<p>The diagram’s higher and lower sections illustrate the training and testing phases, respectively. For the training, after transforming the WSI input into a GFM, new GFM representations are generated by an augmentation module. An attention-based neural network then processes either the input GFM or one of the generated GFMs.</p>
Figure 2
<p>Synthetic GFM generation using a cGAN: each WSI input is resized at the dimension of <math display="inline"><semantics> <mrow> <mi>S</mi> <mo>×</mo> <mi>S</mi> <mo>×</mo> <mn>3</mn> </mrow> </semantics></math>. The GFM representation and the label class of the WSI are given as input to a cGAN, which produces a new synthetic GFM of the same dimension.</p>
Figure 3
<p>Examples of different tissue samples: (<b>a</b>) pathological benign (PB), (<b>b</b>) usual ductal hyperplasia (UDH), (<b>c</b>) flat epithelial atypia (FEA), (<b>d</b>) atypical ductal hyperplasia (ADH), (<b>e</b>) carcinoma in situ (DCIS), and (<b>f</b>) invasive carcinoma (IC).</p>
Figure 4
<p>The generator and discriminator loss curves (blue and orange lines, respectively) during the training; after approximately 750 epochs, the generator begins to improve, while the performance of the discriminator deteriorates.</p>
1 pages, 134 KiB  
Correction
Correction: Zhang et al. An Integrated Access and Backhaul Approach to Sustainable Dense Small Cell Network Planning. Information 2024, 15, 19
by Jie Zhang, Qiao Wang, Paul Mitchell and Hamed Ahmadi
Information 2024, 15(2), 97; https://doi.org/10.3390/info15020097 - 8 Feb 2024
Viewed by 1141
Abstract
Due to an Editorial Office error [...] Full article
16 pages, 1592 KiB  
Article
Predicting Conversion from Mild Cognitive Impairment to Alzheimer’s Disease Using K-Means Clustering on MRI Data
by Miranda Bellezza, Azzurra di Palma and Andrea Frosini
Information 2024, 15(2), 96; https://doi.org/10.3390/info15020096 - 8 Feb 2024
Viewed by 1596
Abstract
Alzheimer’s disease (AD) is a neurodegenerative disorder that leads to the loss of cognitive functions due to the deterioration of brain tissue. Current diagnostic methods are often invasive or costly, limiting their widespread use. Developing non-invasive and cost-effective screening methods is crucial, especially for identifying patients with mild cognitive impairment (MCI) at risk of developing Alzheimer’s disease. This study employs a Machine Learning (ML) approach, specifically K-means clustering, on a subset of pixels common to all magnetic resonance imaging (MRI) images to rapidly classify subjects with AD and those with Normal Cognition (NC). In particular, we benefited from defining significant pixels, a narrow subset of points (in the range of 1.5% to 6% of the total) common to all MRI images and related to more intense degeneration of white or gray matter. We performed K-means clustering, with k = 2, on the significant pixels of AD and NC MRI images to separate subjects belonging to the two classes and detect the class centroids. Subsequently, we classified subjects with MCI using only the significant pixels. This approach enables quick classification of subjects with AD and NC, and more importantly, it predicts MCI-to-AD conversion with high accuracy and low computational cost, making it a rapid and effective diagnostic tool for real-time assessments. Full article
(This article belongs to the Section Information Applications)
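The two-step scheme in the abstract, k = 2 clustering of AD/NC feature vectors followed by nearest-centroid assignment of new MCI subjects, can be sketched as follows. The data are toy 2-D points, not ADNI significant-pixel intensities, and the distance is Euclidean (the paper's best-performing choice); the deterministic initialization is illustrative.

```python
def kmeans2(points, iters=20):
    """Lloyd's algorithm with k = 2; returns the two centroids."""
    c = [list(points[0]), list(points[-1])]  # simple deterministic init
    for _ in range(iters):
        groups = [[], []]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, ci)) for ci in c]
            groups[d.index(min(d))].append(p)
        for k, g in enumerate(groups):
            if g:  # keep the old centroid if a cluster empties
                c[k] = [sum(col) / len(g) for col in zip(*g)]
    return c

def nearest(c, p):
    """Index of the centroid closest to point p (squared Euclidean)."""
    d = [sum((a - b) ** 2 for a, b in zip(p, ci)) for ci in c]
    return d.index(min(d))
```

After fitting on AD/NC data, an MCI subject whose feature vector falls nearest the AD centroid would be flagged as a likely MCI-to-AD converter.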
Figure 1
<p>Block diagram of the study design. The data collected from the <span class="html-italic">ADNI</span> database first undergoes a pre-processing stage where white and gray matter are segmented. Then, a permutation test on the white and gray matter of <math display="inline"><semantics> <mrow> <mi>N</mi> <mi>C</mi> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>A</mi> <mi>D</mi> </mrow> </semantics></math> subjects allows significant pixels to be detected and to restrict the dataset accordingly. The involved classes are defined in <a href="#sec3-information-15-00096" class="html-sec">Section 3</a> and <a href="#sec3dot2-information-15-00096" class="html-sec">Section 3.2</a>. Finally, a ML model, <span class="html-italic">K</span>-means, is trained, tested, and employed to distinguish between normal aging and <span class="html-italic">AD</span> degeneration, as well as predict candidates exhibiting the <span class="html-italic">MCI</span>-to-<span class="html-italic">AD</span> pattern.</p>
Figure 2
<p>The axial slice 58 view of gray matter, white matter, and fluid (from left to right) extracted from MRI data of an <span class="html-italic">AD</span> subject. The pixel classification is performed with the MATLAB CONN toolbox.</p>
Figure 3
<p>From left to right, the axial slice 58 view of gray matter at <math display="inline"><semantics> <msub> <mi>t</mi> <mn>0</mn> </msub> </semantics></math>, <math display="inline"><semantics> <msub> <mi>t</mi> <mn>2</mn> </msub> </semantics></math> time acquisitions and their difference, highlighted in red. The data are from the MRI of an <span class="html-italic">AD</span> subject.</p>
Figure 4
<p>Significant pixels’ distribution for the white and gray matter according to the three different <math display="inline"><semantics> <mi>α</mi> </semantics></math> thresholds.</p>
Figure 5
<p>The confusion matrices computed after the <span class="html-italic">K</span>-means clustering of the <math display="inline"><semantics> <msup> <mover accent="true"> <mrow> <mi>A</mi> <mi>D</mi> </mrow> <mo>^</mo> </mover> <mi>g</mi> </msup> </semantics></math> and <math display="inline"><semantics> <msup> <mover accent="true"> <mrow> <mi>N</mi> <mi>C</mi> </mrow> <mo>^</mo> </mover> <mi>g</mi> </msup> </semantics></math> classes with respect to the four (Euclidean, Czekanowski, Chebyshev, and City Block) distances (the first one outperforming the others). The distances led to convergence after 2, 8, 10, and 6 iterations, respectively.</p>
Figure 6
<p>Subject-to-centroid distance distributions inside each cluster.</p>
Figure 7
<p>Slice 58 of an <math display="inline"><semantics> <mrow> <mi>A</mi> <mi>D</mi> </mrow> </semantics></math> subject. (<b>Left</b>) white and gray matter locations. (<b>Right</b>) the significant pixels of the same slice and the brain areas in which they are located, divided according to Brodmann areas. The vast majority of significant pixels are located in the fusiform gyrus, hippocampus, amygdala, parahippocampal gyrus, and orbitofrontal lobe.</p>
18 pages, 17236 KiB  
Article
A Particle-Swarm-Optimization-Algorithm-Improved Jiles–Atherton Model for Magnetorheological Dampers Considering Magnetic Hysteresis Characteristics
by Ying-Qing Guo, Meng Li, Yang Yang, Zhao-Dong Xu and Wen-Han Xie
Information 2024, 15(2), 101; https://doi.org/10.3390/info15020101 - 8 Feb 2024
Cited by 1 | Viewed by 1452
Abstract
As a typical intelligent device, magnetorheological (MR) dampers have been widely applied in vibration control and mitigation. However, the inherent hysteresis characteristics of magnetic materials can cause significant time delays and fluctuations, affecting the controllability and damping performance of MR dampers. Most existing mathematical models have not considered the adverse effects of magnetic hysteresis characteristics, and this study aims to consider such effects in MR damper models. Based on the magnetic circuit analysis of MR dampers, the Jiles–Atherton (J-A) model is adopted to characterize the magnetic hysteresis properties. Then, a weight-adaptive particle swarm optimization (PSO) algorithm is introduced into the J-A model for efficient parameter identification, in which differential evolution and Cauchy variation are combined to improve the diversity of the population and the ability to jump out of local optimal solutions. The results obtained from the improved J-A model are compared with the experimental data under different working conditions, and the comparison shows that the proposed J-A model can accurately predict the damping performance of MR dampers with magnetic hysteresis characteristics. Full article
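The baseline that the paper's improved algorithm builds on, plain global-best PSO for parameter identification, can be sketched as follows. The adaptive inertia weight, differential evolution, and Cauchy mutation described in the abstract are omitted, and all constants and the test objective are illustrative, not J-A model parameters.

```python
import random

def pso(f, dim, n=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimize f over R^dim with plain global-best PSO (fixed inertia w,
    cognitive weight c1, social weight c2); returns (best_position, best_value)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]               # per-particle best positions
    pval = [f(p) for p in pos]
    g = pbest[pval.index(min(pval))][:]       # global best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (g[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:
                pval[i], pbest[i] = v, pos[i][:]
                if v < f(g):
                    g = pos[i][:]
    return g, f(g)
```

For parameter identification, f would be the misfit between measured B-H (or force-displacement) curves and the J-A model's prediction for a candidate parameter vector.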
Figure 1
<p>Schematic structure of the shear valve type single-cylinder double-outlet rod MR damper.</p>
Figure 2
<p>Bingham mechanical model.</p>
Figure 3
<p>MRD-SEU-D050 MR damper.</p>
Figure 4
<p>Force–displacement curves of the MRD-SEU-D050 MR damper at different constant currents.</p>
Figure 5
<p>Schematic block diagram of the relationship between the J-A model and the MR damper model.</p>
Figure 6
<p>Flow chart of the improved PSO.</p>
Figure 7
<p>Comparison of the two algorithms for fitting B-H curves.</p>
Figure 8
<p>Comparison of the adaptation values of the two algorithms.</p>
Figure 9
<p>Simulation of Bingham mechanical model of MR damper.</p>
Figure 10
<p>Simulation of J-A hysteresis model.</p>
Figure 11
<p>Experimental system.</p>
Figure 12
<p>Comparison of the output damping force–displacement curves of the magnetorheological damper under different working conditions: (<b>a</b>) 0.05 Hz, (<b>b</b>) 0.1 Hz, (<b>c</b>) 0.2 Hz.</p>
Figure 13
<p>Output damping force–displacement curves of MR damper at 1 Hz working condition.</p>
Figure 14
<p>Comparison of output damping force–displacement curves of MR damper at 0.5 Hz working condition.</p>
34 pages, 3406 KiB  
Article
Evaluating Ontology-Based PD Monitoring and Alerting in Personal Health Knowledge Graphs and Graph Neural Networks
by Nikolaos Zafeiropoulos, Pavlos Bitilis, George E. Tsekouras and Konstantinos Kotis
Information 2024, 15(2), 100; https://doi.org/10.3390/info15020100 - 8 Feb 2024
Cited by 3 | Viewed by 2201
Abstract
In the realm of Parkinson’s Disease (PD) research, the integration of wearable sensor data with personal health records (PHR) has emerged as a pivotal avenue for patient alerting and monitoring. This study delves into the complex domain of PD patient care, with a specific emphasis on harnessing the potential of wearable sensors to capture, represent and semantically analyze crucial movement data and knowledge. The primary objective is to enhance the assessment of PD patients by establishing a robust foundation for personalized health insights through the development of Personal Health Knowledge Graphs (PHKGs) and the employment of personal health Graph Neural Networks (PHGNNs) that utilize PHKGs. The objective is to formalize the representation of related integrated data, unified sensor and PHR data in higher levels of abstraction, i.e., in a PHKG, to facilitate interoperability and support rule-based high-level event recognition such as patient’s missing dose or falling. This paper, extending our previous related work, presents the Wear4PDmove ontology in detail and evaluates the ontology within the development of an experimental PHKG. Furthermore, this paper focuses on the integration and evaluation of PHKG within the implementation of a Graph Neural Network (GNN). This work emphasizes the importance of integrating PD-related data for monitoring and alerting patients with appropriate notifications. These notifications offer health experts precise and timely information for the continuous evaluation of personal health-related events, ultimately contributing to enhanced patient care and well-informed medical decision-making. Finally, the paper concludes by proposing a novel approach for integrating personal health KGs and GNNs for PD monitoring and alerting solutions. Full article
(This article belongs to the Special Issue Knowledge Graph Technology and its Applications II)
Show Figures

Figure 1
<p>Wear4PDmove ontology key concepts and the reused vocabularies.</p>
Figure 2
<p>Example of RDF triples representing related knowledge of a PD patient observation.</p>
Figure 3
<p>Knowledge Graph integrating the Wear4PDmove ontology.</p>
Figure 4
<p>SPARQL query in the Python environment.</p>
Figure 5
<p>Resulting triples from the CONSTRUCT query.</p>
Figure 6
<p>Personal Health Graph Neural Network (PHGNN) architecture.</p>
Figure 7
<p>Model performance metrics for medium and high alert predictions across training epochs, including loss and accuracy values.</p>
Figure 8
<p>Loss of medium alert in different hidden layers.</p>
Figure 9
<p>Loss of high alert in different hidden layers.</p>
Figure 10
<p>Model training progress: loss values over epochs for different hidden channels.</p>
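The abstract above centers on rule-based high-level event recognition (e.g., a missed dose or a fall) over a Personal Health Knowledge Graph. As a rough illustration of that idea only — the triple predicates (`hasObservation`, `eventType`, `timestamp`) and the alert rules below are hypothetical stand-ins, not the actual Wear4PDmove ontology — a minimal sketch might look like:

```python
# Hypothetical mini knowledge graph: (subject, predicate, object) triples,
# loosely in the spirit of a Personal Health Knowledge Graph (PHKG).
triples = [
    ("patient:42", "hasObservation", "obs:1"),
    ("obs:1", "eventType", "medicationIntake"),
    ("obs:1", "timestamp", "2024-02-01T08:05:00"),
    ("patient:42", "hasObservation", "obs:2"),
    ("obs:2", "eventType", "fall"),
    ("obs:2", "timestamp", "2024-02-01T14:30:00"),
]

def events_of(patient, graph):
    # Collect (eventType, timestamp) pairs for every observation of a patient.
    obs = [o for s, p, o in graph if s == patient and p == "hasObservation"]
    return {(next(o2 for s, p, o2 in graph if s == ob and p == "eventType"),
             next(o2 for s, p, o2 in graph if s == ob and p == "timestamp"))
            for ob in obs}

def alerts(patient, graph):
    """Rule-based high-level event recognition: flag a recorded fall, and a
    missed dose if no medicationIntake observation exists at all."""
    evs = events_of(patient, graph)
    out = []
    if any(t == "fall" for t, _ in evs):
        out.append("high: fall detected")
    if not any(t == "medicationIntake" for t, _ in evs):
        out.append("medium: missed dose")
    return out

print(alerts("patient:42", triples))  # → ['high: fall detected']
```

A real PHKG would of course use RDF with SPARQL rules rather than Python tuples; the point is only that high-level events fall out of simple patterns over integrated observations.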
25 pages, 3088 KiB  
Review
Quantum Computing and Machine Learning on an Integrated Photonics Platform
by Huihui Zhu, Hexiang Lin, Shaojun Wu, Wei Luo, Hui Zhang, Yuancheng Zhan, Xiaoting Wang, Aiqun Liu and Leong Chuan Kwek
Information 2024, 15(2), 95; https://doi.org/10.3390/info15020095 - 7 Feb 2024
Viewed by 4177
Abstract
Integrated photonic chips leverage the recent developments in integrated circuit technology, along with the control and manipulation of light signals, to realize the integration of multiple optical components onto a single chip. By exploiting the power of light, integrated photonic chips offer numerous advantages over traditional optical and electronic systems, including miniaturization, high-speed data processing and improved energy efficiency. In this review, we survey the current status of quantum computation, optical neural networks and the realization of some algorithms on integrated optical chips. Full article
(This article belongs to the Special Issue Quantum Information Processing and Machine Learning)
Show Figures

Figure 1
<p>Schematic of the integrated units performing gates and states. (<b>a</b>) On-chip polarizing beam splitter. (<b>b</b>) Probabilistic C-Phase entangling gate. (<b>c</b>) Bell state <math display="inline"><semantics> <mrow> <mrow> <mo>|</mo> <mn>00</mn> <mo>〉</mo> </mrow> <mo>+</mo> <mrow> <mo>|</mo> <mn>11</mn> <mo>〉</mo> </mrow> </mrow> </semantics></math>.</p>
Figure 2
<p>Summary of various quantum machine learning tasks.</p>
Figure 3
<p>The structure of classical neural networks and the Variational Quantum Classifier.</p>
Figure 4
<p>Timeline of key demonstrations of integrated quantum photonics. These technologies include on-chip interference and CNOT gate [<a href="#B49-information-15-00095" class="html-bibr">49</a>], Shor’s algorithm [<a href="#B63-information-15-00095" class="html-bibr">63</a>], quantum walk [<a href="#B64-information-15-00095" class="html-bibr">64</a>], high visibility interference [<a href="#B65-information-15-00095" class="html-bibr">65</a>], on-chip SNSPD [<a href="#B54-information-15-00095" class="html-bibr">54</a>], boson sampling [<a href="#B66-information-15-00095" class="html-bibr">66</a>], on-chip QD source [<a href="#B67-information-15-00095" class="html-bibr">67</a>], Grover’s search algorithm [<a href="#B68-information-15-00095" class="html-bibr">68</a>], on-chip measurement of 6 photons [<a href="#B69-information-15-00095" class="html-bibr">69</a>], quantum communication [<a href="#B70-information-15-00095" class="html-bibr">70</a>,<a href="#B71-information-15-00095" class="html-bibr">71</a>], universal linear optics [<a href="#B72-information-15-00095" class="html-bibr">72</a>], molecular vibronic dynamics [<a href="#B73-information-15-00095" class="html-bibr">73</a>], high-dimension quantum device [<a href="#B74-information-15-00095" class="html-bibr">74</a>], 8-photon processing [<a href="#B75-information-15-00095" class="html-bibr">75</a>], error-corrected qubits [<a href="#B76-information-15-00095" class="html-bibr">76</a>], large-scale quantum device [<a href="#B50-information-15-00095" class="html-bibr">50</a>] and topologically protected quantum source [<a href="#B77-information-15-00095" class="html-bibr">77</a>].</p>
Figure 5
<p>Multi-mode interferometer to split the light passively with a fixed ratio of 1:1.</p>
Figure 6
<p>Phase shifter to induce a relative phase change between two arms.</p>
Figure 7
<p>Typical schematic of an <span class="html-italic">N</span>-mode photonic integrated circuit to represent an arbitrary <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>×</mo> <mi>N</mi> </mrow> </semantics></math> unitary matrix. The final unitary matrix is the product of the matrices for each MZI component.</p>
Figure 8
<p>(<b>a</b>) Non-degenerate and (<b>b</b>) degenerate spontaneous four-wave mixing processes to generate photon pairs on chips by absorbing two pump photons.</p>
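The review's figures describe the standard building blocks of such circuits: a 50:50 multi-mode interferometer (beam splitter), a phase shifter, and their combination into a Mach-Zehnder interferometer (MZI), from which arbitrary N×N unitaries are composed. The following standalone sketch (one common beam-splitter phase convention among several; not code from the paper) builds a single 2×2 MZI transfer matrix and checks it is unitary:

```python
import cmath
import math

def mat2(a, b, c, d):
    return [[a, b], [c, d]]

def mul(A, B):
    # 2x2 complex matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# 50:50 beam splitter (multi-mode interferometer), one common convention.
BS = mat2(1 / math.sqrt(2), 1j / math.sqrt(2),
          1j / math.sqrt(2), 1 / math.sqrt(2))

def phase(theta):
    # Thermo-optic phase shifter acting on the upper arm only.
    return mat2(cmath.exp(1j * theta), 0, 0, 1)

def mzi(theta):
    # Mach-Zehnder interferometer: beam splitter -> phase shift -> beam splitter.
    return mul(BS, mul(phase(theta), BS))

def is_unitary(U, tol=1e-9):
    # Check U @ U† == identity.
    Ud = [[U[j][i].conjugate() for j in range(2)] for i in range(2)]
    P = mul(U, Ud)
    return all(abs(P[i][j] - (1 if i == j else 0)) < tol
               for i in range(2) for j in range(2))

U = mzi(math.pi / 3)
print(is_unitary(U))          # True
# With zero internal phase, BS . BS routes all light across (cross state).
print(abs(mzi(0.0)[0][0]) < 1e-9)  # True
```

In the Reck or Clements decompositions referenced by circuits like the one in Figure 7, an N×N unitary is then realized as a product of many such 2×2 MZI blocks embedded in the larger mode space.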
17 pages, 1117 KiB  
Article
Design of a Meaningful Framework for Time Series Forecasting in Smart Buildings
by Louis Closson, Christophe Cérin, Didier Donsez and Jean-Luc Baudouin
Information 2024, 15(2), 94; https://doi.org/10.3390/info15020094 - 7 Feb 2024
Viewed by 1767
Abstract
This paper aims to provide discernment toward establishing a general framework dedicated to data analysis and forecasting in smart buildings. It constitutes an industrial return of experience from an industrialist specializing in IoT, supported by the academic world. With the necessary improvement of energy efficiency, discernment is paramount for facility managers to optimize daily operations and prioritize renovation work in the building sector. Given the scale of buildings and the complexity of Heating, Ventilation, and Air Conditioning (HVAC) systems, the use of artificial intelligence is deemed the cheapest tool, holding the highest potential, even if it requires IoT sensors and a deluge of data to establish genuine models. However, the wide variety of buildings, users, and data hinders the development of industrial solutions, as specific studies often lack relevance to analyze other buildings, possibly with different types of data monitored. The relevance of the modeling can also disappear over time, as buildings are dynamic systems evolving with their use. In this paper, we study the forecasting ability of the widely used Long Short-Term Memory (LSTM) network algorithm, which is well-designed for time series modeling, across an instrumented building. In this way, we assessed the consistency of the performance across several tasks and compared it against the no-prediction case, a comparison lacking in the literature. The insights provided let us examine the quality of AI models and the quality of the data needed for forecasting tasks. Finally, we deduced that efficient models and smart choices about data allow meaningful insight into developing time series modeling frameworks for smart buildings. For reproducibility concerns, we also provide our raw data, which came from one “real” smart building, as well as significant information regarding this building.
In summary, our research aims to develop a methodology for exploring, analyzing, and modeling data from the smart buildings sector. Based on our experiment on forecasting temperature sensor measurements, we found that a bigger AI model (1) does not always imply a longer training time and (2) can have little impact on accuracy, and that (3) the benefit of using more features depends on the order of data processing. We also observed that providing more data is irrelevant without a deep understanding of the problem physics. Full article
(This article belongs to the Special Issue Internet of Things and Cloud-Fog-Edge Computing)
Show Figures

Figure 1
<p>Map of the instrumented building.</p>
Figure 2
<p>LSTM Neural Network principle and architecture. (<b>a</b>) LSTM principle; (<b>b</b>) LSTM architecture.</p>
Figure 3
<p>Overall performances of LSTM for temperature forecasting on all sensors. (<b>a</b>) Performance as function of architecture used; (<b>b</b>) mean performances and training duration for several LSTM architectures.</p>
Figure 4
<p>Performances as a function of selected forecasting policy and features. (<b>a</b>) HVAC consumption prediction performances; (<b>b</b>) indoor temperature prediction performances.</p>
Figure 5
<p>Study of performances as a function of history length in minutes. (<b>a</b>) Temperature forecasting performances; (<b>b</b>) HVAC consumption forecasting performance.</p>
Figure 6
<p>Study of performances as a function of history length. (<b>a</b>) Consumption forecasting performances; (<b>b</b>) temperature forecasting performances.</p>
Figure 7
<p>Performances of forecasting as a function of training data length. (<b>a</b>) Performances of HVAC consumption forecasting; (<b>b</b>) performances of temperature forecasting.</p>
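A key point in the abstract above is comparing a forecaster against the "no prediction" case. For time series, that reference is usually the persistence baseline (predict that the next value equals the last observed one). The sketch below — an illustration of the general setup, not the paper's pipeline; the window sizes and toy temperature values are invented — shows the sliding-window framing such a comparison rests on:

```python
def make_windows(series, history, horizon):
    """Sliding windows: `history` past values -> the value `horizon` steps ahead."""
    X, y = [], []
    for i in range(len(series) - history - horizon + 1):
        X.append(series[i:i + history])
        y.append(series[i + history + horizon - 1])
    return X, y

def persistence_baseline(X):
    # "No prediction" reference: the forecast is simply the last observed value.
    return [window[-1] for window in X]

def mae(pred, true):
    # Mean absolute error between forecasts and ground truth.
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(true)

# Toy indoor-temperature readings (one per time step).
temps = [20.0, 20.5, 21.0, 21.2, 21.1, 20.8, 20.4, 20.1, 19.9, 20.0]
X, y = make_windows(temps, history=3, horizon=1)
print(len(X))  # 7
print(mae(persistence_baseline(X), y))
```

An LSTM (or any model) trained on the same `(X, y)` pairs is only worth deploying where its error beats this baseline, which is exactly the consistency check the paper argues is missing from the literature.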
18 pages, 925 KiB  
Article
Chinese Cyberbullying Detection Using XLNet and Deep Bi-LSTM Hybrid Model
by Shifeng Chen, Jialin Wang and Ketai He
Information 2024, 15(2), 93; https://doi.org/10.3390/info15020093 - 6 Feb 2024
Cited by 4 | Viewed by 2208
Abstract
The popularization of the internet and the widespread use of smartphones have led to a rapid growth in the number of social media users. While information technology has brought convenience to people, it has also given rise to cyberbullying, which has a serious negative impact. The identity of online users is hidden, and due to the lack of supervision and the imperfections of relevant laws and policies, cyberbullying occurs from time to time, bringing serious mental harm and psychological trauma to the victims. The pre-trained language model BERT (Bidirectional Encoder Representations from Transformers) has achieved good results in the field of natural language processing and can be used for cyberbullying detection. In this research, we construct a variety of traditional machine learning, deep learning and Chinese pre-trained language models as baselines, and propose a hybrid model for Chinese cyberbullying detection based on XLNet, a variant of BERT, and a deep Bi-LSTM. In addition, real cyberbullying remarks are collected to expand the Chinese offensive language dataset COLDATASET. The proposed model outperforms all baseline models on this dataset, improving by 4.29% over SVM, the best-performing method in traditional machine learning, by 1.49% over GRU, the best-performing method in deep learning, and by 1.13% over BERT. Full article
Show Figures

Figure 1
<p>Proposed model.</p>
Figure 2
<p>Two-stream self-attention and permutation language model training process [<a href="#B33-information-15-00093" class="html-bibr">33</a>].</p>
Figure 3
<p>Long short-term memory (LSTM) cell architecture.</p>
Figure 4
<p>Comparison of weighted average F1-score for all methods.</p>
Figure 5
<p>The effect of deepening the layers of Bi-LSTM on the results of the proposed model.</p>
Figure 6
<p>Comparison between the proposed model and four advanced methods.</p>
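The hybrid architecture above feeds contextual embeddings (from XLNet) through a deep Bi-LSTM before classification. The defining trick of the "Bi" part is reading the sequence in both directions and combining the two final states. The toy sketch below illustrates only that bidirectional reading: the single-weight `tanh` cell is a deliberate stand-in for a real LSTM cell, and the inputs standing in for embedding features, the weights, and the threshold are all invented for illustration:

```python
import math

def rnn_last_state(xs, w=0.5, u=0.8):
    """Toy recurrent cell standing in for one LSTM direction:
    h_t = tanh(w * x_t + u * h_{t-1}), returning the final state."""
    h = 0.0
    for x in xs:
        h = math.tanh(w * x + u * h)
    return h

def bi_features(xs):
    # Bidirectional reading: final states of a forward pass and a backward
    # pass, concatenated -- exactly what a Bi-LSTM layer hands onward.
    return (rnn_last_state(xs), rnn_last_state(list(reversed(xs))))

def classify(xs, threshold=0.0):
    f, b = bi_features(xs)
    score = f + b  # stand-in for the dense + softmax classification head
    return "offensive" if score > threshold else "benign"

# Pretend these are pooled embedding features for two token sequences.
print(classify([0.9, 1.2, 1.5]))    # offensive
print(classify([-0.4, -0.2, -0.6]))  # benign
```

In the real model, each of `w`, `u` is a learned weight matrix, the cell keeps a separate memory state with input/forget/output gates, and several such Bi-LSTM layers are stacked (the effect studied in Figure 5).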
20 pages, 4194 KiB  
Article
Do Large Language Models Show Human-like Biases? Exploring Confidence—Competence Gap in AI
by Aniket Kumar Singh, Bishal Lamichhane, Suman Devkota, Uttam Dhakal and Chandra Dhakal
Information 2024, 15(2), 92; https://doi.org/10.3390/info15020092 - 6 Feb 2024
Viewed by 2970
Abstract
This study investigates self-assessment tendencies in Large Language Models (LLMs), examining if patterns resemble human cognitive biases like the Dunning–Kruger effect. LLMs, including GPT, BARD, Claude, and LLaMA, are evaluated using confidence scores on reasoning tasks. The models provide self-assessed confidence levels before and after responding to different questions. The results show cases where high confidence does not correlate with correctness, suggesting overconfidence. Conversely, low confidence despite accurate responses indicates potential underestimation. The confidence scores vary across problem categories and difficulties, with confidence decreasing for complex queries. GPT-4 displays consistent confidence, while LLaMA and Claude demonstrate more variation. Some of these patterns resemble the Dunning–Kruger effect, where incompetence leads to inflated self-evaluations. While not conclusively evident, these observations parallel this phenomenon and provide a foundation for further exploring the alignment of competence and confidence in LLMs. As LLMs continue to expand their societal roles, further research into their self-assessment mechanisms is warranted to fully understand their capabilities and limitations. Full article
Show Figures

Figure 1
<p>Interaction with LLMs for data generation.</p>
Figure 2
<p>Comparison of A1 and A2 scores for Claude-2.</p>
Figure 3
<p>Faceted density plot of confidence levels by LLM type. The plot reveals varying patterns of confidence distribution across different LLM types, suggesting nuanced self-perceptions in these models.</p>
Figure 4
<p>Density plots of correctness for different confidence scores (A1 and A2).</p>
Figure 5
<p>Average confidence levels by category and LLM.</p>
Figure 6
<p>Average confidence scores by problem level.</p>
Figure A1
<p>Density plots of correctness for different confidence scores (R1 and R2).</p>
Figure A2
<p>Density plot of correctness vs. confidence scores for various Large Language Models (A1 and A2).</p>
Figure A3
<p>Density plot of correctness vs. confidence scores for various Large Language Models (R1 and R2).</p>
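Detecting over- or underconfidence of the kind described above comes down to a simple calibration computation: group answers by the model's stated confidence and compare each group's stated confidence to its observed accuracy. The sketch below shows that computation on invented records (the confidence values and correctness flags are fabricated for illustration, not data from the study):

```python
from collections import defaultdict

# Hypothetical (stated_confidence_percent, answer_was_correct) pairs
# harvested from an LLM evaluation run.
records = [
    (90, True), (90, False), (90, False), (90, False),
    (60, True), (60, True), (60, False),
    (30, True), (30, True),
]

def calibration_gaps(records):
    """Per confidence bucket: stated confidence minus observed accuracy (in %).
    Positive gap = overconfidence; negative gap = underestimation."""
    buckets = defaultdict(list)
    for conf, ok in records:
        buckets[conf].append(ok)
    return {conf: conf - 100 * sum(oks) / len(oks)
            for conf, oks in sorted(buckets.items())}

gaps = calibration_gaps(records)
print(gaps)
```

Here the 90%-confidence bucket is right only 25% of the time (gap +65, overconfidence), while the 30%-confidence bucket is always right (gap −70, underestimation) — the same confidence-competence mismatch the study probes, in miniature.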
22 pages, 5663 KiB  
Article
Leveraging Semantic Text Analysis to Improve the Performance of Transformer-Based Relation Extraction
by Marie-Therese Charlotte Evans, Majid Latifi, Mominul Ahsan and Julfikar Haider
Information 2024, 15(2), 91; https://doi.org/10.3390/info15020091 - 6 Feb 2024
Cited by 1 | Viewed by 1781
Abstract
Keyword extraction from Knowledge Bases underpins the definition of relevancy in Digital Library search systems. However, it is the pertinent task of Joint Relation Extraction, which populates the Knowledge Bases from which results are retrieved. Recent work focuses on fine-tuned, Pre-trained Transformers. Yet, F1 scores for scientific literature achieve just 53.2, versus 69 in the general domain. The research demonstrates the failure of existing work to evidence the rationale for optimisations to fine-tuned classifiers. In contrast, emerging research subjectively adopts the common belief that Natural Language Processing techniques fail to derive context and shared knowledge. In fact, global context and shared knowledge account for just 10.4% and 11.2% of total relation misclassifications, respectively. In this work, the novel employment of semantic text analysis presents objective challenges for the Transformer-based classification of Joint Relation Extraction. This is the first known work to quantify that pipelined error propagation accounts for 45.3% of total relation misclassifications, the most significant challenge in this domain. More specifically, Part-of-Speech tagging highlights the misclassification of complex noun phrases, accounting for 25.47% of relation misclassifications. Furthermore, this study identifies two limitations in the purported bidirectionality of the Bidirectional Encoder Representations from Transformers (BERT) Pre-trained Language Model. Firstly, there is a notable imbalance in the misclassification of right-to-left relations, which occur at double the rate of left-to-right relations. Additionally, a failure to recognise local context through determiners and prepositions contributes to 16.04% of misclassifications. Furthermore, it is highlighted that the annotation scheme of the singular dataset utilised in existing research, Scientific Entities, Relations and Coreferences (SciERC), is marred by ambiguity. Notably, two asymmetric relations within this dataset achieve recall rates of only 10% and 29%. Full article
(This article belongs to the Section Information Applications)
Show Figures

Figure 1
<p>Example of unique local entity context for different relation types containing the same entities using SciERC.</p>
Figure 2
<p>PL-Marker-packed levitated markers compared to PURE solid markers.</p>
Figure 3
<p>JRE F1 Scores for state-of-the-art, fine-tuned classifiers using SCIBERT Pre-trained Language Model and SciERC dataset.</p>
Figure 4
<p>Semantic text analysis framework.</p>
Figure 5
<p>SciERC relation type examples, including relation directionality.</p>
Figure 6
<p>Example of PL-Marker subject-oriented packing strategy using SciERC.</p>
Figure 7
<p>Example of POS tagging using Natural Language Toolkit (NLTK) and SciERC.</p>
Figure 8
<p>Example of the semantic text analysis process across the hierarchy of all 4 themes using SciERC.</p>
Figure 9
<p>Named Entity Recognition analysis steps.</p>
Figure 10
<p>Relation Extraction analysis steps.</p>
Figure 11
<p>Comparison of NER Standard Recall, Start and End Token Recall, and Standard NER Not Predicted values, alongside dataset distribution statistics.</p>
Figure 12
<p>SciERC dataset statistics in Python—Relations.</p>
Figure 13
<p>Entity error propagation (<b>a</b>) missing complex noun phrase (<b>b</b>) boundary misclassification predicting two entities and the relation as one complex noun phrase.</p>
Figure 14
<p>Examples of [Part-Of] predicted as (<b>a</b>) [Feature-Of] and (<b>b</b>) [Hyponym-Of].</p>
Figure 15
<p>Instances of PL-Marker misclassifications, despite the presence of local prepositions, such as (<b>a</b>) ‘in’, (<b>b</b>) ‘from’, and (<b>c</b>) ‘in’.</p>
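The abstract above traces a quarter of relation misclassifications to complex noun phrases surfaced by Part-of-Speech tagging (the study uses NLTK; see Figure 7). To make the idea concrete without depending on NLTK, the sketch below runs a tiny hand-rolled chunker over pre-tagged tokens with the classic pattern "optional determiner, any adjectives, one or more nouns" — the grammar and the example sentence are simplifications for illustration, not the paper's analysis pipeline:

```python
def noun_phrases(tagged):
    """Greedy chunker over (token, POS) pairs: an optional determiner (DT),
    any adjectives (JJ), then one or more nouns (NN/NNS) form one phrase."""
    phrases, i = [], 0
    n = len(tagged)
    while i < n:
        j = i
        if j < n and tagged[j][1] == "DT":
            j += 1
        while j < n and tagged[j][1] == "JJ":
            j += 1
        start_nouns = j
        while j < n and tagged[j][1] in ("NN", "NNS"):
            j += 1
        if j > start_nouns:
            # At least one noun: emit the whole span as a noun phrase.
            phrases.append(" ".join(tok for tok, _ in tagged[i:j]))
            i = j
        else:
            i += 1
    return phrases

# Pre-tagged sentence (tags follow the Penn Treebank convention).
sent = [("the", "DT"), ("joint", "JJ"), ("relation", "NN"),
        ("extraction", "NN"), ("task", "NN"), ("uses", "VBZ"),
        ("pretrained", "JJ"), ("transformers", "NNS")]
print(noun_phrases(sent))
# → ['the joint relation extraction task', 'pretrained transformers']
```

Spans like "the joint relation extraction task" are exactly the multi-noun compounds whose boundaries entity recognizers mispredict (Figure 13), after which the relation classifier inherits the error — the pipelined error propagation the study quantifies.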