Search Results (311)

Search Parameters:
Keywords = end-stacking

16 pages, 840 KiB  
Article
Sentiment Informed Sentence BERT-Ensemble Algorithm for Depression Detection
by Bayode Ogunleye, Hemlata Sharma and Olamilekan Shobayo
Big Data Cogn. Comput. 2024, 8(9), 112; https://doi.org/10.3390/bdcc8090112 - 5 Sep 2024
Viewed by 344
Abstract
The World Health Organisation (WHO) has revealed that approximately 280 million people in the world suffer from depression. Yet, existing studies on early-stage depression detection using machine learning (ML) techniques are limited. Prior studies have applied a single stand-alone algorithm, which is unable to deal with data complexities, is prone to overfitting, and is limited in generalization. To this end, our paper examined the performance of several ML algorithms for early-stage depression detection using two benchmark social media datasets (D1 and D2). More specifically, we incorporated sentiment indicators to improve our model performance. Our experimental results showed that sentence bidirectional encoder representations from transformers (SBERT) numerical vectors fitted into the stacking ensemble model achieved comparable F1 scores of 69% on dataset D1 and 76% on dataset D2. Our findings suggest that utilizing sentiment indicators as an additional feature for depression detection yields improved model performance, and thus, we recommend the development of a depressive term corpus for future work. Full article
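As a rough illustration of the stacking approach this abstract describes (not the authors' code), the sketch below fits a stacking ensemble on synthetic vectors standing in for SBERT embeddings, with a sentiment polarity score appended as an extra feature. The dimensions, base learners, and labels are all invented stand-ins.

```python
# Illustrative sketch only: stacking ensemble over sentence-embedding features
# augmented with a sentiment indicator. The 384-dim random vectors stand in
# for SBERT embeddings; the labels are synthetic.
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
emb = rng.normal(size=(n, 384))              # stand-in SBERT sentence vectors
sentiment = rng.uniform(-1, 1, size=(n, 1))  # stand-in polarity score per post
y = (sentiment.ravel() + 0.1 * emb[:, 0] < 0).astype(int)  # synthetic labels

X = np.hstack([emb, sentiment])              # sentiment appended as an extra feature
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("svc", LinearSVC(dual=False)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # meta-learner
)
score = stack.fit(X_tr, y_tr).score(X_te, y_te)
```

The base learners' out-of-fold predictions become the meta-learner's inputs, which is what lets the ensemble compensate for any single model's weaknesses.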
Show Figures

Figure 1
<p>An illustration of our experimental setup.</p>
Figure 2
<p>Classification objective framework of the SBERT model [<a href="#B50-BDCC-08-00112" class="html-bibr">50</a>].</p>
Figure 3
<p>Illustration of stacked ensemble model.</p>
15 pages, 1471 KiB  
Article
TrajectoryNAS: A Neural Architecture Search for Trajectory Prediction
by Ali Asghar Sharifi, Ali Zoljodi and Masoud Daneshtalab
Sensors 2024, 24(17), 5696; https://doi.org/10.3390/s24175696 - 1 Sep 2024
Viewed by 330
Abstract
Autonomous driving systems are a rapidly evolving technology. Trajectory prediction is a critical component of autonomous driving systems that enables safe navigation by anticipating the movement of surrounding objects. Lidar point-cloud data provide a 3D view of solid objects surrounding the ego-vehicle. Hence, trajectory prediction using Lidar point-cloud data performs better than 2D RGB cameras due to providing the distance between the target object and the ego-vehicle. However, processing point-cloud data is a costly and complicated process, and state-of-the-art 3D trajectory predictions using point-cloud data suffer from slow and erroneous predictions. State-of-the-art trajectory prediction approaches suffer from handcrafted and inefficient architectures, which can lead to low accuracy and suboptimal inference times. Neural architecture search (NAS) is a method proposed to optimize neural network models by using search algorithms to redesign architectures based on their performance and runtime. This paper introduces TrajectoryNAS, a novel neural architecture search (NAS) method designed to develop an efficient and more accurate LiDAR-based trajectory prediction model for predicting the trajectories of objects surrounding the ego vehicle. TrajectoryNAS systematically optimizes the architecture of an end-to-end trajectory prediction algorithm, incorporating all stacked components that are prerequisites for trajectory prediction, including object detection and object tracking, using metaheuristic algorithms. This approach addresses the neural architecture designs in each component of trajectory prediction, considering accuracy loss and the associated overhead latency. Our method introduces a novel multi-objective energy function that integrates accuracy and efficiency metrics, enabling the creation of a model that significantly outperforms existing approaches. 
Through empirical studies, TrajectoryNAS demonstrates its effectiveness in enhancing the performance of autonomous driving systems, marking a significant advancement in the field. Experimental results reveal that TrajectoryNAS yields a minimum of 4.8% higher accuracy and 1.1× lower latency than competing methods on the NuScenes dataset. Full article
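The multi-objective search the abstract describes can be sketched as a toy loop. This is purely illustrative: the search space, proxy metrics, and energy weights below are invented stand-ins, and plain random search replaces the metaheuristics TrajectoryNAS actually uses.

```python
# Toy multi-objective NAS sketch (not the TrajectoryNAS implementation):
# each candidate architecture gets an energy combining accuracy loss and
# latency; random search keeps the candidate with the lowest energy.
import random

SEARCH_SPACE = {"depth": [2, 4, 6], "width": [32, 64, 128], "kernel": [3, 5]}

def evaluate(arch):
    """Proxy metrics: stand-ins for training/profiling on a mini dataset."""
    acc = 0.6 + 0.04 * arch["depth"] + 0.0005 * arch["width"]
    latency_ms = 2.0 * arch["depth"] + 0.05 * arch["width"] + arch["kernel"]
    return acc, latency_ms

def energy(acc, latency_ms, alpha=1.0, beta=0.01):
    # Lower is better: accuracy loss plus a latency penalty.
    return alpha * (1.0 - acc) + beta * latency_ms

def random_search(trials=100, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        arch = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        e = energy(*evaluate(arch))
        if best is None or e < best[0]:
            best = (e, arch)
    return best

best_energy, best_arch = random_search()
```

Swapping the random sampler for an evolutionary or simulated-annealing step turns this skeleton into the metaheuristic search the paper describes, without changing the energy function.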
(This article belongs to the Special Issue Object Detection Based on Vision Sensors and Neural Network)
Show Figures

Figure 1
<p>(Top Row) Cascade methods that independently address detection, tracking, and prediction inherently carry the risk of compounding errors throughout the pipeline. This originates from each sub-module’s assumption of receiving perfect input, which rarely holds true in real-world applications. Consequently, errors introduced in earlier stages propagate and magnify downstream, potentially leading to inaccurate final outcomes. (Bottom Row) End-to-end methods predict future movement directly from raw data, enabling end-to-end training and benefiting from the joint optimization of object detection, tracking, and prediction tasks.</p>
Figure 2
<p>TrajectoryNAS state diagram. A model is generated from the search space and trained on the mini dataset. The results are sent back to the search space to generate a new model. The best final model is fully trained on the original dataset.</p>
Figure 3
<p>An overview of the TrajectoryNAS process.</p>
Figure 4
<p>TrajectoryNAS optimization curve.</p>
Figure 5
<p>A visual demonstration of TrajectoryNAS; the first row shows trajectory predictions for cars, and the second row shows trajectory predictions for pedestrians. Green lines are ground truth; blue lines are the trajectory predictions with the highest probability.</p>
23 pages, 7922 KiB  
Article
Groundwater LNAPL Contamination Source Identification Based on Stacking Ensemble Surrogate Model
by Yukun Bai, Wenxi Lu, Zibo Wang and Yaning Xu
Water 2024, 16(16), 2274; https://doi.org/10.3390/w16162274 - 12 Aug 2024
Viewed by 746
Abstract
Groundwater LNAPL (Light Non-Aqueous Phase Liquid) contamination source identification (GLCSI) is essential for effective remediation and risk assessment. Addressing the GLCSI problem often involves numerous repetitive forward simulations, which are computationally expensive and time-consuming. Establishing a surrogate model for the simulation model is an effective way to overcome this challenge. However, how to obtain high-quality samples for training the surrogate model and which method should be used to develop the surrogate model with higher accuracy remain important questions to explore. To this end, this paper innovatively adopted the quasi-Monte Carlo (QMC) method to sample from the prior space of unknown variables. Then, this paper established a variety of individual machine learning surrogate models, respectively, and screened three with higher training accuracy among them as the base-learning models (BLMs). The Stacking ensemble framework was utilized to integrate the three BLMs to establish the ensemble surrogate model for the groundwater LNAPL multiphase flow numerical simulation model. Finally, a hypothetical case of groundwater LNAPL contamination was designed. After evaluating the accuracy of the Stacking ensemble surrogate model, the differential evolution Markov chain (DE-MC) algorithm was applied to jointly identify information on groundwater LNAPL contamination source and key hydrogeological parameters. The results of this study demonstrated the following: (1) Employing the QMC method to sample from the prior space resulted in more uniformly distributed and representative samples, which improved the quality of the training data. (2) The developed Stacking ensemble surrogate model had a higher accuracy than any individual surrogate model, with an average R2 of 0.995, and reduced the computational burden by 99.56% compared to the inversion process based on the simulation model. 
(3) The application of the DE-MC algorithm effectively solved the GLCSI problem, and the mean relative error of the identification results of unknown variables was less than 5%. Full article
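The quasi-Monte Carlo sampling step can be illustrated with a Halton sequence, one common low-discrepancy construction (the abstract does not specify which QMC sequence the authors use; this is a generic sketch). Unlike pseudo-random points, these samples fill the unit square evenly, which is the property the paper exploits for surrogate training data.

```python
# Generic QMC sketch (not the authors' code): 2-D Halton sequence.
def radical_inverse(n, base):
    """Van der Corput radical inverse of integer n in the given base."""
    inv, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        inv += digit / denom
    return inv

def halton_2d(count):
    """First `count` points of the 2-D Halton sequence (bases 2 and 3)."""
    return [(radical_inverse(i, 2), radical_inverse(i, 3))
            for i in range(1, count + 1)]

points = halton_2d(100)  # e.g. (0.5, 1/3), (0.25, 2/3), (0.75, 1/9), ...
```

Each coordinate uses a distinct prime base, so successive points avoid clustering; scaling the points to the prior ranges of the unknown variables gives the training samples for the surrogate.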
Show Figures

Figure 1
<p>A total of 100 sets of sample points generated using (<b>a</b>) MC and (<b>b</b>) QMC methods within space <math display="inline"><semantics> <mrow> <msup> <mrow> <mrow> <mo>(</mo> <mrow> <mn>0</mn> <mo>,</mo> <mn>1</mn> </mrow> <mo>)</mo> </mrow> </mrow> <mn>2</mn> </msup> </mrow> </semantics></math>.</p>
Figure 2
<p>The conceptual framework of a Stacking ensemble model.</p>
Figure 3
<p>Schematic diagram of the network structure of the MLP model. The colored dots represent neurons in different layers.</p>
Figure 4
<p>Schematic diagram of the network structure of the RF model.</p>
Figure 5
<p>Schematic diagram of the network structure of the DBN model.</p>
Figure 6
<p>Schematic of the study area’s subdivision and the generalized boundary conditions.</p>
Figure 7
<p>Schematic of the potential distribution range of the contamination source and the location distribution of the 6 observation wells.</p>
Figure 8
<p>Reference log-permeability field for the hypothetical case.</p>
Figure 9
<p>Histogram of R<sup>2</sup> comparisons for individual surrogate models.</p>
Figure 10
<p>Histogram of RMSE comparisons for individual surrogate models.</p>
Figure 11
<p>Comparison of R<sup>2</sup> and RMSE of the DBN model with the Stacking ensemble model.</p>
Figure 12
<p>Fitting diagram between simulation model output and surrogate model prediction output.</p>
Figure 13
<p>Variation in the convergence factor with the number of iterations.</p>
Figure 14
<p>Frequency distribution histograms and posterior probability density function curves. (<b>a</b>–<b>h</b>) represent variables n, α<sub>wL</sub>, α<sub>wT</sub>, L<sub>x</sub>, L<sub>y</sub>, t<sub>on</sub>, t<sub>off</sub>, and Q, respectively.</p>
Figure 15
<p>Recognition results for log-permeability fields.</p>
16 pages, 1818 KiB  
Article
FFA-BiGRU: Attention-Based Spatial-Temporal Feature Extraction Model for Music Emotion Classification
by Yuping Su, Jie Chen, Ruiting Chai, Xiaojun Wu and Yumei Zhang
Appl. Sci. 2024, 14(16), 6866; https://doi.org/10.3390/app14166866 - 6 Aug 2024
Viewed by 521
Abstract
Music emotion recognition is becoming an important research direction due to its great significance for music information retrieval, music recommendation, and so on. In the task of music emotion recognition, the key to achieving accurate emotion recognition lies in how to extract the affect-salient features fully. In this paper, we propose an end-to-end spatial-temporal feature extraction method named FFA-BiGRU for music emotion classification. Taking the log Mel-spectrogram of music audio as the input, this method employs an attention-based convolutional residual module named FFA, which serves as a spatial feature learning module to obtain multi-scale spatial features. In the FFA module, three group architecture blocks extract multi-level spatial features, each of which consists of a stack of multiple channel-spatial attention-based residual blocks. Then, the output features from FFA are fed into the bidirectional gated recurrent units (BiGRU) module to capture the temporal features of music further. In order to make full use of the extracted spatial and temporal features, the output feature maps of FFA and those of the BiGRU are concatenated in the channel dimension. Finally, the concatenated features are passed through fully connected layers to predict the emotion classification results. The experimental results of the EMOPIA dataset show that the proposed model achieves better classification accuracy than the existing baselines. Meanwhile, the ablation experiments also demonstrate the effectiveness of each part of the proposed method. Full article
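The GRU update at the core of the BiGRU module can be written out directly. The following is a from-scratch NumPy sketch of a single GRU cell (illustrative only, with arbitrary toy dimensions; the paper's model is built from standard deep-learning layers), using the reset gate r and update gate z.

```python
# Illustrative GRU cell in NumPy (not the paper's implementation).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, W, U, b):
    """One GRU step. W: (3, hidden, input), U: (3, hidden, hidden), b: (3, hidden)."""
    z = sigmoid(W[0] @ x + U[0] @ h + b[0])              # update gate z_t
    r = sigmoid(W[1] @ x + U[1] @ h + b[1])              # reset gate r_t
    h_tilde = np.tanh(W[2] @ x + U[2] @ (r * h) + b[2])  # candidate state
    return (1.0 - z) * h + z * h_tilde                   # blended new state

rng = np.random.default_rng(0)
input_dim, hidden_dim = 8, 4          # toy sizes, not the paper's
W = rng.normal(scale=0.1, size=(3, hidden_dim, input_dim))
U = rng.normal(scale=0.1, size=(3, hidden_dim, hidden_dim))
b = np.zeros((3, hidden_dim))

h = np.zeros(hidden_dim)
for _ in range(10):                   # run over a toy input sequence
    h = gru_cell(rng.normal(size=input_dim), h, W, U, b)
```

A BiGRU simply runs one such recurrence forward and a second backward over the frame sequence, then concatenates the two hidden states per time step.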
Show Figures

Figure 1
<p>An overview of the proposed FFA-BiGRU model. The model consists of a spatial feature learning module (FFA), a temporal feature learning module (BiGRU), and an emotion prediction module.</p>
Figure 2
<p>Channel-spatial attention mechanism. The input feature maps are first scaled by channel attention weights and then by spatial attention weights. The filter size of the Conv layers in both the CA and SA blocks is 1 × 1.</p>
Figure 3
<p>The architecture of the GRU unit. It has two gates: the reset gate <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>r</mi> </mrow> <mrow> <mi>t</mi> </mrow> </msub> </mrow> </semantics></math> and the update gate <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>z</mi> </mrow> <mrow> <mi>t</mi> </mrow> </msub> </mrow> </semantics></math>.</p>
Figure 4
<p>The FC layer for emotion prediction.</p>
Figure 5
<p>Confusion matrix of the proposed model.</p>
37 pages, 18482 KiB  
Article
Active Queue Management in L4S with Asynchronous Advantage Actor-Critic: A FreeBSD Networking Stack Perspective
by Deol Satish, Jonathan Kua and Shiva Raj Pokhrel
Future Internet 2024, 16(8), 265; https://doi.org/10.3390/fi16080265 - 25 Jul 2024
Viewed by 640
Abstract
Bufferbloat is one of the leading causes of high data transmission latency and jitter on the Internet, which severely impacts the performance of low-latency interactive applications such as online streaming, cloud-based gaming/applications, Internet of Things (IoT) applications, voice over IP (VoIP), real-time video conferencing, and so forth. There is currently a pressing need for developing Transmission Control Protocol (TCP) congestion control algorithms and bottleneck queue management schemes that can collaboratively control/reduce end-to-end latency, thus ensuring optimal quality of service (QoS) and quality of experience (QoE) for users. This paper introduces a novel solution by experimentally integrating the low latency, low loss, and scalable throughput (L4S) architecture (specified by the IETF in RFC 9330) into the FreeBSD framework with the asynchronous advantage actor-critic (A3C) reinforcement learning algorithm. The first phase involves incorporating a modified dual-queue coupled active queue management (AQM) system for L4S into the FreeBSD networking stack, enhancing queue management and mitigating latency and packet loss. The second phase employs A3C to adjust and fine-tune the system performance dynamically. Finally, we evaluate the proposed solution’s effectiveness through comprehensive experiments, comparing it with traditional AQM-based systems. This paper contributes to the advancement of machine learning (ML) for transport protocol research in the field. The experimental implementation and results presented in this paper are made available through our GitHub repositories. Full article
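The dual-queue coupling at the heart of the L4S AQM can be sketched from the DualPI2 coupling law in RFC 9332 (a toy rendering, not the authors' FreeBSD code; the function name and default coupling factor here are assumptions): a base probability p' from the PI controller is squared for the classic queue (drop) and scaled by a coupling factor k for the L4S queue (ECN marking), so scalable and classic flows share the bottleneck fairly.

```python
# Toy sketch of the RFC 9332 DualPI2 coupling law (illustrative only).
def dualpi2_probabilities(p_base, k=2.0):
    """Map the base AQM probability p' to (classic drop prob, L4S mark prob)."""
    p_classic = min(p_base ** 2, 1.0)  # classic TCP rate scales ~ 1/sqrt(p)
    p_l4s = min(k * p_base, 1.0)       # scalable (DCTCP-like) rate scales ~ 1/p
    return p_classic, p_l4s

p_classic, p_l4s = dualpi2_probabilities(0.1)
```

The squaring compensates for the square-root response of classic congestion control, which is why a single controller can drive both queues.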
Show Figures

Figure 1
<p>L4S AQM Architecture.</p>
Figure 2
<p>A3C coupled L4S Architecture.</p>
Figure 3
<p>Network topology utilized for evaluating AQM algorithms.</p>
Figure 4
<p>Case 1: CoDel (ECN enabled)—Scenario 1 with Bandwidth = 10 Mbps, Delay = 20 ms. (<b>a</b>) Throughput; (<b>b</b>) Congestion Window; (<b>c</b>) Smoothed TCP RTT.</p>
Figure 5
<p>Case 2: CoDel (ECN disabled)—Scenario 1 with Bandwidth = 10 Mbps, Delay = 20 ms. (<b>a</b>) Throughput; (<b>b</b>) Congestion Window; (<b>c</b>) Smoothed TCP RTT.</p>
Figure 6
<p>Case 1: CoDel (ECN enabled)—Scenario 2 with Bandwidth = 1 Mbps, Delay = 20 ms. (<b>a</b>) Throughput; (<b>b</b>) Congestion Window; (<b>c</b>) Smoothed TCP RTT.</p>
Figure 7
<p>Case 2: CoDel (ECN disabled)—Scenario 2 with Bandwidth = 1 Mbps, Delay = 20 ms. (<b>a</b>) Throughput; (<b>b</b>) Congestion Window; (<b>c</b>) Smoothed TCP RTT.</p>
Figure 8
<p>Case 1: PIE (ECN enabled)—Scenario 1 with Bandwidth = 10 Mbps, Delay = 20 ms. (<b>a</b>) Throughput; (<b>b</b>) Congestion Window; (<b>c</b>) Smoothed TCP RTT.</p>
Figure 9
<p>Case 2: PIE (ECN disabled)—Scenario 1 with Bandwidth = 10 Mbps, Delay = 20 ms. (<b>a</b>) Throughput; (<b>b</b>) Congestion Window; (<b>c</b>) Smoothed TCP RTT.</p>
Figure 10
<p>Case 1: PIE (ECN enabled)—Scenario 2 with Bandwidth = 1 Mbps, Delay = 20 ms. (<b>a</b>) Throughput; (<b>b</b>) Congestion Window; (<b>c</b>) Smoothed TCP RTT.</p>
Figure 11
<p>Case 2: PIE (ECN disabled)—Scenario 2 with Bandwidth = 1 Mbps, Delay = 20 ms. (<b>a</b>) Throughput; (<b>b</b>) Congestion Window; (<b>c</b>) Smoothed TCP RTT.</p>
Figure 12
<p>Case 1: FQ-CoDel (ECN enabled)—Scenario 1 with Bandwidth = 10 Mbps, Delay = 20 ms. (<b>a</b>) Throughput; (<b>b</b>) Congestion Window; (<b>c</b>) Smoothed TCP RTT.</p>
Figure 13
<p>Case 2: FQ-CoDel (ECN disabled)—Scenario 1 with Bandwidth = 10 Mbps, Delay = 20 ms. (<b>a</b>) Throughput; (<b>b</b>) Congestion Window; (<b>c</b>) Smoothed TCP RTT.</p>
Figure 14
<p>Case 1: FQ-CoDel (ECN enabled)—Scenario 2 with Bandwidth = 1 Mbps, Delay = 20 ms. (<b>a</b>) Throughput; (<b>b</b>) Congestion Window; (<b>c</b>) Smoothed TCP RTT.</p>
Figure 15
<p>Case 2: FQ-CoDel (ECN disabled)—Scenario 2 with Bandwidth = 1 Mbps, Delay = 20 ms. (<b>a</b>) Throughput; (<b>b</b>) Congestion Window; (<b>c</b>) Smoothed TCP RTT.</p>
Figure 16
<p>Case 1: FQ-PIE (ECN enabled)—Scenario 1 with Bandwidth = 10 Mbps, Delay = 20 ms. (<b>a</b>) Throughput; (<b>b</b>) Congestion Window; (<b>c</b>) Smoothed TCP RTT.</p>
Figure 17
<p>Case 2: FQ-PIE (ECN disabled)—Scenario 1 with Bandwidth = 10 Mbps, Delay = 20 ms. (<b>a</b>) Throughput; (<b>b</b>) Congestion Window; (<b>c</b>) Smoothed TCP RTT.</p>
Figure 18
<p>Case 1: FQ-PIE (ECN enabled)—Scenario 2 with Bandwidth = 1 Mbps, Delay = 20 ms. (<b>a</b>) Throughput; (<b>b</b>) Congestion Window; (<b>c</b>) Smoothed TCP RTT.</p>
Figure 19
<p>Case 2: FQ-PIE (ECN disabled)—Scenario 2 with Bandwidth = 1 Mbps, Delay = 20 ms. (<b>a</b>) Throughput; (<b>b</b>) Congestion Window; (<b>c</b>) Smoothed TCP RTT.</p>
Figure 20
<p>Case 1: L4S (ECN enabled)—Scenario 1 with Bandwidth = 10 Mbps, Delay = 20 ms. (<b>a</b>) Throughput; (<b>b</b>) Congestion Window; (<b>c</b>) Smoothed TCP RTT.</p>
Figure 21
<p>Case 2: L4S (ECN disabled)—Scenario 1 with Bandwidth = 10 Mbps, Delay = 20 ms. (<b>a</b>) Throughput; (<b>b</b>) Congestion Window; (<b>c</b>) Smoothed TCP RTT.</p>
Figure 22
<p>Case 1: L4S (ECN enabled)—Scenario 2 with Bandwidth = 1 Mbps, Delay = 20 ms. (<b>a</b>) Throughput; (<b>b</b>) Congestion Window; (<b>c</b>) Smoothed TCP RTT.</p>
Figure 23
<p>Case 2: L4S (ECN disabled)—Scenario 2 with Bandwidth = 1 Mbps, Delay = 20 ms. (<b>a</b>) Throughput; (<b>b</b>) Congestion Window; (<b>c</b>) Smoothed TCP RTT.</p>
Figure 24
<p>Network topology utilized for data collection for the A3C-L4S model.</p>
Figure 25
<p>Evolution of the average reward (<math display="inline"><semantics> <msub> <mi>R</mi> <mi>t</mi> </msub> </semantics></math>) of the trained A3C model over the entire fifty epochs.</p>
Figure 26
<p>Predicted queue delay vs. actual queue delay for all workers during packet transmission. (<b>a</b>) Predicted QDelay vs. Actual QDelay—Agent Worker 1; (<b>b</b>) Predicted QDelay vs. Actual QDelay—Agent Worker 2; (<b>c</b>) Predicted QDelay vs. Actual QDelay—Agent Worker 3; (<b>d</b>) Predicted QDelay vs. Actual QDelay—Agent Worker 4.</p>
Figure 27
<p>Predicted queue delay vs. actual queue delay for all workers with varying reward scaling factor in units of 100 μs. (<b>a</b>) Predicted QDelay with varying reward scaling factor, Agent Worker 1; (<b>b</b>) Predicted QDelay with varying reward scaling factor, Agent Worker 2; (<b>c</b>) Predicted QDelay with varying reward scaling factor, Agent Worker 3; (<b>d</b>) Predicted QDelay with varying reward scaling factor, Agent Worker 4.</p>
Figure 28
<p>Case 1: (ECN enabled) Throughput Scenario 1 with Bandwidth = 10 Mbps, Delay = 20 ms. (<b>a</b>) FQ-CoDel (Throughput); (<b>b</b>) FQ-PIE (Throughput); (<b>c</b>) L4S (Throughput).</p>
Figure 29
<p>Case 2: (ECN disabled) Throughput Scenario 1 with Bandwidth = 10 Mbps, Delay = 20 ms. (<b>a</b>) FQ-CoDel (Throughput); (<b>b</b>) FQ-PIE (Throughput); (<b>c</b>) L4S (Throughput).</p>
Figure 30
<p>Case 1: (ECN enabled) Throughput Scenario 2 with Bandwidth = 1 Mbps, Delay = 20 ms. (<b>a</b>) FQ-CoDel (Throughput); (<b>b</b>) FQ-PIE (Throughput); (<b>c</b>) L4S (Throughput).</p>
Figure 31
<p>Case 2: (ECN disabled) Throughput Scenario 2 with Bandwidth = 1 Mbps, Delay = 20 ms. (<b>a</b>) FQ-CoDel (Throughput); (<b>b</b>) FQ-PIE (Throughput); (<b>c</b>) L4S (Throughput).</p>
Figure 32
<p>Case 1: (ECN enabled) Smoothed TCP RTT measured in seconds for Scenario 1 with Bandwidth = 10 Mbps, Delay = 20 ms. (<b>a</b>) FQ-CoDel (Smoothed RTT); (<b>b</b>) FQ-PIE (Smoothed RTT); (<b>c</b>) L4S (Smoothed RTT).</p>
Figure 33
<p>Case 2: (ECN disabled) Smoothed TCP RTT measured in seconds for Scenario 1 with Bandwidth = 10 Mbps, Delay = 20 ms. (<b>a</b>) FQ-CoDel (Smoothed RTT); (<b>b</b>) FQ-PIE (Smoothed RTT); (<b>c</b>) L4S (Smoothed RTT).</p>
Figure 34
<p>Case 1: (ECN enabled) Smoothed TCP RTT measured in seconds for Scenario 2 with Bandwidth = 1 Mbps, Delay = 20 ms. (<b>a</b>) FQ-CoDel (Smoothed RTT); (<b>b</b>) FQ-PIE (Smoothed RTT); (<b>c</b>) L4S (Smoothed RTT).</p>
Figure 35
<p>Case 2: (ECN disabled) Smoothed TCP RTT measured in seconds for Scenario 2 with Bandwidth = 1 Mbps, Delay = 20 ms. (<b>a</b>) FQ-CoDel (Smoothed RTT); (<b>b</b>) FQ-PIE (Smoothed RTT); (<b>c</b>) L4S (Smoothed RTT).</p>
Figure 36
<p>Case 1: (ECN enabled) Congestion Window Scenario 1 with Bandwidth = 10 Mbps, Delay = 20 ms. (<b>a</b>) FQ-CoDel (<span class="html-italic">cwnd</span>); (<b>b</b>) FQ-PIE (<span class="html-italic">cwnd</span>); (<b>c</b>) L4S (<span class="html-italic">cwnd</span>).</p>
Figure 37
<p>Case 2: (ECN disabled) Congestion Window Scenario 1 with Bandwidth = 10 Mbps, Delay = 20 ms. (<b>a</b>) FQ-CoDel (<span class="html-italic">cwnd</span>); (<b>b</b>) FQ-PIE (<span class="html-italic">cwnd</span>); (<b>c</b>) L4S (<span class="html-italic">cwnd</span>).</p>
Figure 38
<p>Case 1: (ECN enabled) Congestion Window Scenario 2 with Bandwidth = 1 Mbps, Delay = 20 ms. (<b>a</b>) FQ-CoDel (<span class="html-italic">cwnd</span>); (<b>b</b>) FQ-PIE (<span class="html-italic">cwnd</span>); (<b>c</b>) L4S (<span class="html-italic">cwnd</span>).</p>
Figure 39
<p>Case 2: (ECN disabled) Congestion Window Scenario 2 with Bandwidth = 1 Mbps, Delay = 20 ms. (<b>a</b>) FQ-CoDel (<span class="html-italic">cwnd</span>); (<b>b</b>) FQ-PIE (<span class="html-italic">cwnd</span>); (<b>c</b>) L4S (<span class="html-italic">cwnd</span>).</p>
16 pages, 7912 KiB  
Article
HIV-1 Intasomes Assembled with Excess Integrase C-Terminal Domain Protein Facilitate Structural Studies by Cryo-EM and Reveal the Role of the Integrase C-Terminal Tail in HIV-1 Integration
by Min Li, Zhen Li, Xuemin Chen, Yanxiang Cui, Alan N. Engelman and Robert Craigie
Viruses 2024, 16(7), 1166; https://doi.org/10.3390/v16071166 - 20 Jul 2024
Viewed by 931
Abstract
Retroviral integration is mediated by intasome nucleoprotein complexes wherein a pair of viral DNA ends are bridged together by a multimer of integrase (IN). Atomic-resolution structures of HIV-1 intasomes provide detailed insights into the mechanism of integration and inhibition by clinical IN inhibitors. However, previously described HIV-1 intasomes are highly heterogeneous and have the tendency to form stacks, which is a limiting factor in determining high-resolution cryo-EM maps. We have assembled HIV-1 intasomes in the presence of excess IN C-terminal domain protein, which was readily incorporated into the intasomes. The purified intasomes were largely homogeneous and exhibited minimal stacking tendencies. The cryo-EM map resolution was further improved to 2.01 Å, which will greatly facilitate structural studies of IN inhibitor action and drug resistance mechanisms. The C-terminal 18 residues of HIV-1 IN, which are critical for virus replication and integration in vitro, have not been well resolved in previous intasome structures, and their function remains unclear. We show that the C-terminal tail participates in intasome assembly, resides within the intasome core, and forms a small alpha helix (residues 271–276). Mutations that disrupt alpha helix integrity impede IN activity in vitro and disrupt HIV-1 infection at the step of viral DNA integration. Full article
(This article belongs to the Section General Virology)
Show Figures

Figure 1

Figure 1
<p>Schematic of “hetero-intasome” assembly. (<b>A</b>) Domain organization of dodecameric HIV-1 intasomes. Green, IN proteins; red, vDNA. (<b>B</b>) HIV-1 “oligomer-intasome”, which is a stack of octameric intasomes shown as cartoons and space fill (flanking subunit). One dodecameric unit is shown in green, and the other repeat units are shown in cyan. (<b>C</b>) Assembly of “hetero-intasomes” with an excess of CTD in the assembly reaction mixture. Exogenous synaptic CTDs are shown as space-fill in light green.</p>
Full article ">Figure 2
<p>Excess CTD stimulates full-length IN integration activity. (<b>A</b>) Different amounts of CTD were preincubated with Sso7d-IN and pre-cleaved U5-25bp vDNA substrate prior to initiation of strand transfer at 37 °C. Recovered DNAs were visualized by fluorescence using a Typhoon 8600 scanner. (<b>B</b>) Indicated amounts of CTD protein were preincubated with WT-IN and pre-cleaved U5-69T DNA substrate prior to incubation at 37 °C for 2 h. DNA integration products were analyzed as in panel A. The concentrations of CTD and concerted integration products are indicated. Molecular mass standards in kb are indicated to the left of the gel panels. Ctrl, negative control sample that omitted WT-IN from the strand transfer assay.</p>
Full article ">Figure 3
<p>Size-exclusion chromatography and integration activity of hetero-intasomes assembled with full-length Sso7d-IN and CTD. (<b>A</b>) Elution profile of hetero-intasomes on the Superose 6 Increase 10/300 GL column (Cytiva). Monodispersed hetero-intasomes and free proteins and vDNA are indicated with arrows. Homo-intasomes formed with Sso7d-IN without added CTD are shown for comparison. (<b>B</b>) Fractions 1 to 24 (lane 1 to 24), corresponding to 1.2 mL to 2.4 mL elution volumes, were analyzed by means of SDS-PAGE. Sso7d-IN, LEDGF-IBD, and CTD are indicated with arrows. Mass standard positions (in kD) are indicated on the left. (<b>C</b>) Fractions 1 to 24 were assessed for strand transfer activity and integration products were detected by means of ethidium bromide staining. Concerted integration products are indicated on the right and mass standards (in kb) are given on the left. (<b>D</b>) Concerted integration activity was not affected by either IBD binding or CTD incorporation. Purified 2.5 nM intasomes assembled with Sso7d-IN (Homo-Ins), Sso7d-IN with CTD (Hetero-Ins), and Sso7d-IN with CTD and IBD (IBD-bound Hetero-Ins) were tested for concerted integration. The negative control (Ctrl.) was target DNA in the absence of added intasomes. (<b>E</b>) Purified hetero-intasomes were analyzed by means of SDS-PAGE. Lane 1, 0.25 μg; Lane 2, 0.5 μg; and Lane 3, 1.0 μg, were loaded onto the gel. The results are representative of three independent experiments.</p>
Full article ">Figure 4
<p>Structure determination of hetero-intasomes. (<b>A</b>) Representative micrograph of hetero-intasomes showing mono-dispersed particles. (<b>B</b>) Final reconstructed cryo-EM map of the CIC. (<b>C</b>) Atomic model of the HIV-1 CIC. The chains are color-coded according to the labels on the right, with synaptic and distal CTDs indicated. (<b>D</b>) Local resolution estimation of cryoEM map by ResMap. The sliced map is colored according to the local resolution of the masked intasome core map. (<b>E</b>) Representative density showing tryptophan 61, tyrosine 83, and phenylalanine 185 residues in the CCD. It is noted that the features of aromatic ring holes are expected for a 2 Å cryo-EM map. (<b>F</b>) Density map around the IN active site and inhibitor binding coordinates, including catalytic residues, D64, E152, bound DTG, Mg<sup>2+</sup>, and H<sub>2</sub>O molecules.</p>
Full article ">Figure 5
<p>The C-terminal tail stabilizes the intasome core structure. (<b>A</b>) Schematic of full-length (FL) HIV-1 IN highlighting C-terminal tail residues 268–288. The residues altered by site-directed mutagenesis are underlined. (<b>B</b>) Cryo-EM reconstruction density map (left) and atomic model (right) of the C-terminal tail (residues 268–281). The map is shown with the final structural model superimposed. Residues 271 to 276 form a small alpha helix. (<b>C</b>) Distal CTDs (pink) with resolved C-terminal tails 268–281 (purple) are highlighted. The left and right halves of intasome IN protomers are shown as cartoons and dots, respectively. (<b>D</b>) Interactions between C-terminal tail residues and vDNA and surrounding IN protomer residues. The C-terminal tail is shown as sticks. The interacting residues are indicated. (<b>E</b>) The C-terminal tail (surface colored with electrostatic potential) bridges the interactions of IN protomers. Intasome chains A, B, C, and F are shown as cartoons and are color-coded.</p>
Full article ">Figure 6
<p>Functional analyses of the IN C-terminal tail. (<b>A</b>) WT-IN and C-terminal tail mutant strand transfer activities. The migration positions of half-site and concerted integration reaction products are indicated to the right of the gel image, while the positions of mass standards in kb are shown to the left. (<b>B</b>) Strand transfer activities of Sso7d-IN WT and deletion derivatives. (<b>C</b>) EMSA detection of intasome assembly. The reaction condition is the same as in panel (<b>B</b>), except that 50 μM DTG was added and plasmid target DNA was omitted. Intasomes were analyzed by native 3% agarose gel electrophoresis and detected by a fluorescence scanner. (<b>D</b>) Integration assay with Sso7d-IN C-terminal tail missense mutants. The reaction conditions were as in panel B. The results are representative of three independent experiments.</p>
Full article ">Figure 7
<p>Virus-based experiments. (<b>A</b>) Levels of virus release from transfected cells (p24 values; n = 2 independent experiments) were normalized to WT. (<b>B</b>) Percent-normalized IN mutant viral infectivities (WT HIV-Luc set to 100%) for n = 2 to 3 independent experiments. (<b>C</b>) LRT products were percent-normalized to WT (n = 3 to 8 independent experiments). (<b>D</b>) Alu integration of indicated IN mutant viruses was percent-normalized to WT (n = 2 independent experiments). (<b>E</b>) Percent-normalized values of 2-LTR circle formation (n = 2 independent experiments). Results are averages ± SEM. Statistically significant differences as assessed by two-tailed Student’s <span class="html-italic">t</span> test in comparison to WT virus are represented with asterisks (* <span class="html-italic">p</span> &lt; 0.05, ** <span class="html-italic">p</span> &lt; 0.01, *** <span class="html-italic">p</span> &lt; 0.001, **** <span class="html-italic">p</span> &lt; 0.0001). ns, not significant.</p>
Full article ">
20 pages, 24161 KiB  
Article
Deep Embedding Koopman Neural Operator-Based Nonlinear Flight Training Trajectory Prediction Approach
by Jing Lu, Jingjun Jiang and Yidan Bai
Mathematics 2024, 12(14), 2162; https://doi.org/10.3390/math12142162 - 10 Jul 2024
Viewed by 668
Abstract
Accurate flight training trajectory prediction is a key task in automatic flight maneuver evaluation and flight operations quality assurance (FOQA), which is crucial for pilot training and aviation safety management. The task is extremely challenging due to the nonlinear and chaotic nature of the trajectories, the unconstrained airspace maps, and the randomness of piloting patterns. In this work, a deep learning model based on modern data-driven Koopman operator theory and dynamical system identification is proposed. The model does not require the manual selection of dictionaries and can automatically generate augmentation functions to achieve nonlinear trajectory space mapping. The model combines stacked neural networks to create a scalable depth approximator for approximating the finite-dimensional Koopman operator. In addition, the model uses finite-dimensional operator evolution to achieve end-to-end adaptive prediction. In particular, the model can gain some physical interpretability through operator visualization and generated dictionary functions, which can be used for downstream pattern recognition and anomaly detection tasks. Experiments show that the model performs well, particularly on flight training trajectory datasets. Full article
(This article belongs to the Special Issue Data Mining and Machine Learning with Applications, 2nd Edition)
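The Koopman pipeline described in this abstract can be illustrated with a minimal EDMD-style sketch: lift the state through a dictionary of observables, then fit a finite-dimensional linear operator on snapshot pairs by least squares. The paper learns the dictionary with stacked neural networks; the hand-picked dictionary and toy dynamics below are illustrative assumptions, not the authors' model.

```python
import numpy as np

def dictionary(x):
    """Lift the 2-D state into observables [x1, x2, x1^2]."""
    return np.array([x[0], x[1], x[0] ** 2])

def step(x):
    """Toy nonlinear system with a known Koopman-invariant subspace."""
    return np.array([0.9 * x[0], 0.5 * x[1] + 0.4 * x[0] ** 2])

# Generate a short trajectory of snapshots.
xs = [np.array([1.0, 1.0])]
for _ in range(15):
    xs.append(step(xs[-1]))

# Solve K @ Psi_x ~= Psi_y in the lifted space by least squares.
Psi_x = np.column_stack([dictionary(x) for x in xs[:-1]])
Psi_y = np.column_stack([dictionary(x) for x in xs[1:]])
K = Psi_y @ np.linalg.pinv(Psi_x)  # finite-dimensional Koopman operator

# One-step prediction: evolve linearly in lifted space, read back the state.
pred = (K @ dictionary(xs[0]))[:2]
```

Because the chosen dictionary spans a Koopman-invariant subspace of this toy system, the linear operator reproduces the nonlinear dynamics exactly; the paper's contribution is learning such a dictionary end-to-end instead of picking it by hand.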
Figure 1
<p>Flight training trajectories from the CAFUC dataset.</p>
Full article ">Figure 2
<p>Koopman measure-invariant subspace.</p>
Full article ">Figure 3
<p>Deep Embedding Koopman Neural Operator (DE-KNO) general framework.</p>
Full article ">Figure 4
<p>The CAFUC dataset is visualized with 3D trajectory views from different viewpoints, where the blue line is the original part and the yellow line is the predicted part (<b>top</b>), and with 2D views showing the time series of the main parameters (<b>bottom</b>).</p>
Full article ">Figure 5
<p>Experiment process: (<b>a</b>) CAFUC, (<b>b</b>) Lorenz.</p>
Full article ">Figure 5 Cont.
<p>Experiment process: (<b>a</b>) CAFUC, (<b>b</b>) Lorenz.</p>
Full article ">Figure 6
<p>Model stability test results: <b>left</b>, efficiency on CAFUC dataset; <b>right</b>, efficiency on Traffic dataset.</p>
Full article ">Figure 7
<p>Model efficiency comparison: left, efficiency on CAFUC dataset; right, efficiency on Traffic dataset. The left graph records the training time, MSE, and memory usage of each model on the CAFUC dataset. The right graph records the training time, MSE, and memory usage of each model on the Traffic dataset.</p>
Full article ">
27 pages, 11196 KiB  
Article
Mechanical Characterization of GFRP Tiled Laminates for Structural Engineering Applications: Stiffness, Strength and Failure Mechanisms
by Jordi Uyttersprot, Wouter De Corte and Wim Van Paepegem
J. Compos. Sci. 2024, 8(7), 265; https://doi.org/10.3390/jcs8070265 - 8 Jul 2024
Viewed by 666
Abstract
This study investigates the mechanical properties of tiled laminates, frequently used in FRP bridges, a completely new class of composites for which no experimental literature is currently available. In this paper, a microscopic examination of laminates extracted from bridge deck flanges is first performed, revealing complex multi-ply structures and tiled laminates in the transverse direction of the bridge deck. The subsequent fabrication of tiled laminates in the transverse (i.e., weak) and longitudinal (i.e., strong) span direction explores stiffness and strength characteristics depending on the stacking angle. It is observed that the stiffness in both directions is only slightly reduced with increasing stacking angles, reaching a maximum decrease of 10%, while the failure strength is significantly reduced, particularly with longitudinal tiling, dropping by approximately 70% for a 2° stacking angle. Transverse tiling demonstrates a more moderate 45% strength reduction due to the presence of some 90° plies. Given the small reduction in stiffness and the fact that in many applications the design is governed mainly by serviceability (i.e., stiffness) requirements rather than strength, this strength reduction may be acceptable, considering other advantages of the concept. Additionally, this research sheds light on failure mechanisms, emphasizing the role of ply assembly in stress distribution and highlighting the importance of gradual ply ends in reducing strain concentrations. These findings provide valuable insights for optimizing tiled laminates in structural applications, ensuring their effective and reliable use. Full article
(This article belongs to the Special Issue Characterization and Modelling of Composites, Volume III)
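The mild stiffness loss at small stacking angles is consistent with classical lamination theory, where the off-axis modulus of a unidirectional ply falls only slowly with tilt angle. A sketch using the standard transformed-compliance formula, with generic GFRP-like elastic constants (assumed values, not the tested laminates' measured properties):

```python
import math

# Off-axis Young's modulus of a unidirectional ply tilted by angle t:
#   1/Ex(t) = cos^4(t)/E1 + sin^4(t)/E2 + (1/G12 - 2*v12/E1) * sin^2(t) * cos^2(t)
E1, E2, G12, v12 = 30.0, 8.0, 3.0, 0.3  # GPa, GPa, GPa, dimensionless (assumed)

def off_axis_modulus(theta_deg):
    t = math.radians(theta_deg)
    c, s = math.cos(t), math.sin(t)
    inv_ex = c**4 / E1 + s**4 / E2 + (1.0 / G12 - 2.0 * v12 / E1) * s**2 * c**2
    return 1.0 / inv_ex

# Relative stiffness at the stacking angles studied in the paper.
ratios = {angle: off_axis_modulus(angle) / E1 for angle in (0, 2, 4)}
```

With these constants, the modulus at a 2° or 4° tilt stays within a few percent of the on-axis value, mirroring the paper's observation that stiffness drops by at most about 10% while strength falls far more steeply.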
Figure 1
<p>Delamination propagation (red arrows) in a traditional composite sandwich panel with plane-parallel laminate skin and in a tiled sandwich panel with tiled laminate during accidental damage (blue lines).</p>
Full article ">Figure 2
<p>Test setup uniaxial tensile test and DIC with inset of the sample speckle pattern.</p>
Full article ">Figure 3
<p>Reference axes of an oriented ply abcd (1, 2) relative to global lamination axes (x, y) (plane ABCD) [<a href="#B13-jcs-08-00265" class="html-bibr">13</a>].</p>
Full article ">Figure 4
<p>Internal orientation of the fiber plies in a tiled GFRP web–core sandwich footbridge.</p>
Full article ">Figure 5
<p>Transverse cross-section (TL laminate) with Close-ups 1 and 2.</p>
Full article ">Figure 6
<p>Longitudinal cross-section (PP laminate) with Close-ups 1 and 2.</p>
Full article ">Figure 7
<p>Graphical illustration of the laminate lay-up at the flange–web connection in the transverse and longitudinal direction of the laminate in the top and bottom flange of a web–core sandwich panel bridge deck.</p>
Full article ">Figure 8
<p>45°, 0° and 90° coupons (<b>top</b>) and associated top (<b>middle</b>) and edge view (<b>bottom</b>) of the failure behavior.</p>
Full article ">Figure 9
<p>Top flange transverse specimen with no (<b>a</b>), one (<b>b</b>,<b>c</b>) and two (<b>d</b>) webs.</p>
Full article ">Figure 10
<p>Tensile test setup with extensometer and strain gauges at the front and back.</p>
Full article ">Figure 11
<p>Overview of the tensile strength (<b>left</b>) and Young’s modulus (<b>right</b>) of the top and bottom flange for the different types of specimens.</p>
Full article ">Figure 12
<p>Failure behavior of transverse specimens with no (0 W), one (1 W) and two webs (2 W).</p>
Full article ">Figure 13
<p>Laminate construction and lay-up of the PP reference specimens including cutting lines.</p>
Full article ">Figure 14
<p>Laminate construction and lay-up of the TL specimens including cutting lines.</p>
Full article ">Figure 15
<p>Graphical representation of the manufacturing of a gradual TL using an edge strip.</p>
Full article ">Figure 16
<p>Stress–strain data of the tensile tests on the PP reference and TL specimens with a 2° and 4° stacking angle.</p>
Full article ">Figure 17
<p>Failure behavior of the PP reference (<b>a</b>), TL specimens (<b>b</b>) and a close-up of the interlaminar failure between the plies of a TL (<b>c</b>).</p>
Full article ">Figure 18
<p>Full-field strain image (red highest strain and purple lowest strain) and DIC strain evolution over the centerline for the TL2° (<b>left</b>) and TL4° (<b>right</b>) specimens.</p>
Full article ">Figure 19
<p>Geometry (in mm) and laminate construction of the PP reference and TL specimens.</p>
Full article ">Figure 20
<p>Local Young’s modulus along the centerline of the PP reference and TL specimens.</p>
Full article ">Figure 21
<p>Average stress–strain data of the PP and TL specimens (shifted every 0.2%).</p>
Full article ">Figure 22
<p>Relative Young’s modulus in function of the theoretical stacking angle.</p>
Full article ">Figure 23
<p>Failure behavior for a PP (<b>left</b>), TL1t1 (<b>center</b>) and TL3t1 (<b>right</b>) specimen.</p>
Full article ">Figure 24
<p>Online microscopic images of the failure mechanism in TL2t1 at the location of an overlap and ply start/end (<b>a</b>) with the green lines indicating one ply stack; the propagation of the crack due to shear stresses (<b>b</b>–<b>d</b>), culminating in the ultimate failure (<b>e</b>).</p>
Full article ">Figure 25
<p>Microscopic close-ups of failure onset in specimen TL2t1 at the location of a ply end/beginning with larger (<b>a</b>) and smaller (<b>b</b>) laminate thickness due to ply stacking.</p>
Full article ">Figure 26
<p>Full-field strain image and evolution over the centerline for the PP reference and the different types of TL specimens.</p>
Full article ">Figure 27
<p>Overall mean microstrain interval and average strain for the PP reference and the different types of TL specimens.</p>
Full article ">Figure 28
<p>Comparison between the stiffness (circle) and LPF strength (square) relative to a PP laminate for a tiled laminate in the longitudinal (black) and transverse (white) direction with trendlines (dotted lines).</p>
Full article ">
26 pages, 9229 KiB  
Article
A Novel Artificial Intelligence Prediction Process of Concrete Dam Deformation Based on a Stacking Model Fusion Method
by Wenyuan Wu, Huaizhi Su, Yanming Feng, Shuai Zhang, Sen Zheng, Wenhan Cao and Hongchen Liu
Water 2024, 16(13), 1868; https://doi.org/10.3390/w16131868 - 29 Jun 2024
Viewed by 647
Abstract
Deformation effectively represents the structural integrity of concrete dams and acts as a clear indicator of their operational performance. Predicting deformation is critical for monitoring the safety of hydraulic structures. To this end, this paper proposes an artificial intelligence-based process for predicting concrete dam deformation. Initially, using the principles of feature engineering, the preprocessing of deformation safety monitoring data is conducted. Subsequently, employing a stacking model fusion method, a novel prediction process embedded with multiple artificial intelligence algorithms is developed. Moreover, three new performance indicators—a superiority evaluation indicator, an accuracy evaluation indicator, and a generalization evaluation indicator—are introduced to provide a comprehensive assessment of the model’s effectiveness. Finally, an engineering example demonstrates that the ensemble artificial intelligence method proposed herein outperforms traditional statistical models and single machine learning models in both fitting and predictive accuracy, thereby providing a scientific and effective foundation for concrete dam deformation prediction and safety monitoring. Full article
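The stacking model fusion described in this abstract can be sketched in a few lines: base learners produce out-of-fold predictions, and a meta-learner is fitted on those predictions. The paper's base learners are Extra-Trees, XGBoost, and SVR; the ridge-on-polynomial-features base learners and synthetic data below are stand-ins for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(200)  # toy "deformation" signal

def poly_features(X, degree):
    return np.column_stack([X[:, 0] ** d for d in range(degree + 1)])

def ridge_fit(F, y, lam=1e-3):
    return np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ y)

# Level 0: out-of-fold predictions from each base learner (K-fold to avoid leakage).
degrees, K = [1, 3, 5], 5
folds = np.array_split(np.arange(len(X)), K)
oof = np.zeros((len(X), len(degrees)))
for j, deg in enumerate(degrees):
    for fold in folds:
        train = np.setdiff1d(np.arange(len(X)), fold)
        w = ridge_fit(poly_features(X[train], deg), y[train])
        oof[fold, j] = poly_features(X[fold], deg) @ w

# Level 1: the meta-learner combines the base predictions linearly.
meta_w = ridge_fit(oof, y)

# Refit base learners on all data for deployment.
base_w = [ridge_fit(poly_features(X, d), y) for d in degrees]

def predict(Xnew):
    base = np.column_stack([poly_features(Xnew, d) @ w for d, w in zip(degrees, base_w)])
    return base @ meta_w

mse = np.mean((predict(X) - y) ** 2)
```

The essential point of the stacking step is that the meta-learner is trained on predictions the base learners made for data they did not see, which is what lets the fusion outperform any single base model.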
Figure 1
<p>Location of deformation monitoring points of a concrete dam.</p>
Full article ">Figure 2
<p>Missing deformation monitoring data of a concrete dam.</p>
Full article ">Figure 3
<p>Stacking ensemble learning prediction process.</p>
Full article ">Figure 4
<p>Schematic diagram of the stacking-based learner.</p>
Full article ">Figure 5
<p>Schematic diagram of the stacking meta-learner.</p>
Full article ">Figure 6
<p>Flow chart of the XGBoost algorithm.</p>
Full article ">Figure 7
<p>Flow chart of the Extra-Trees algorithm.</p>
Full article ">Figure 8
<p>Flow chart of the SVR algorithm.</p>
Full article ">Figure 9
<p>Layout of the horizontal displacement monitoring system of a concrete dam.</p>
Full article ">Figure 10
<p>Overall flowchart of the paper.</p>
Full article ">Figure 11
<p>Spatial correlation of EXD-18 measurement point reference sequence: (<b>a</b>) Time sequence of EXD-18 and EXD-2; (<b>b</b>) Correlation coefficient between time sequences of EXD-18 and EXD-2; (<b>c</b>) Time sequence of EXD-18 and EXD-3; (<b>d</b>) Correlation coefficient between time sequences of EXD-18 and EXD-3; (<b>e</b>) Time sequence of EXD-18 and EXD-4; (<b>f</b>) Correlation coefficient between time sequences of EXD-18 and EXD-4.</p>
Full article ">Figure 12
<p>Time correlation of EXD-18 measurement point reference sequence: (<b>a</b>) Time sequence of 2014 and 2013; (<b>b</b>) Correlation coefficient between time sequences of 2014 and 2013; (<b>c</b>) Time sequence of 2014 and 2015; (<b>d</b>) Correlation coefficient between time sequences of 2014 and 2015; (<b>e</b>) Time sequence of 2014 and 2017; (<b>f</b>) Correlation coefficient between time sequences of 2014 and 2017.</p>
Full article ">Figure 13
<p>Water level–temperature–displacement hydrograph.</p>
Full article ">Figure 14
<p>Training and prediction processes of different models for horizontal displacement time series of EXD-18 measurement points: (<b>a</b>) HST, (<b>b</b>) Extra-Trees, (<b>c</b>) XGBoost, (<b>d</b>) SVR, and (<b>e</b>) Proposed.</p>
Full article ">Figure 15
<p>Training and prediction processes of different models for horizontal displacement time series of EXD-D7 measurement points: (<b>a</b>) HST, (<b>b</b>) Extra-Trees, (<b>c</b>) XGBoost, (<b>d</b>) SVR, and (<b>e</b>) Proposed.</p>
Full article ">Figure 16
<p>The prediction results at EXD-18 for different models and their linear analysis: (<b>a</b>) prediction results of HST; (<b>b</b>) linear analysis of HST prediction; (<b>c</b>) prediction results of Extra-Trees; (<b>d</b>) linear analysis of Extra-Trees prediction; (<b>e</b>) prediction results of XGBoost; (<b>f</b>) linear analysis of XGBoost prediction; (<b>g</b>) prediction results of SVR; (<b>h</b>) linear analysis of SVR prediction; (<b>i</b>) prediction results of the proposed process; (<b>j</b>) linear analysis of HST and the proposed process.</p>
Full article ">Figure 16 Cont.
<p>The prediction results at EXD-18 for different models and their linear analysis: (<b>a</b>) prediction results of HST; (<b>b</b>) linear analysis of HST prediction; (<b>c</b>) prediction results of Extra-Trees; (<b>d</b>) linear analysis of Extra-Trees prediction; (<b>e</b>) prediction results of XGBoost; (<b>f</b>) linear analysis of XGBoost prediction; (<b>g</b>) prediction results of SVR; (<b>h</b>) linear analysis of SVR prediction; (<b>i</b>) prediction results of the proposed process; (<b>j</b>) linear analysis of HST and the proposed process.</p>
Full article ">Figure 17
<p>Raincloud plots for residuals of different models: (<b>a</b>) EXD-18 prediction residuals; (<b>b</b>) EXD-D7 prediction residuals.</p>
Full article ">Figure 18
<p>Raincloud plots for the absolute value of residuals of different models: (<b>a</b>) Absolute value of residuals of EXD-18 prediction; (<b>b</b>) Absolute value of residuals of EXD-D7 prediction. The red lines connect the various averages.</p>
Full article ">Figure 19
<p>Radar chart of the basic evaluation indicators for different models. (<b>a</b>) EXD-18 (<b>b</b>) EXD-D7.</p>
Full article ">Figure 20
<p>Histogram of the proposed evaluation indicators for different models.</p>
Full article ">Figure A1
<p>Proposed process prediction results and their linear analysis at different points: (<b>a</b>) prediction results of EXD-2; (<b>b</b>) linear analysis of EXD-2 prediction; (<b>c</b>) prediction results of EXD-14; (<b>d</b>) linear analysis of EXD-14 prediction; (<b>e</b>) prediction results of EXD-25; (<b>f</b>) linear analysis of EXD-25 prediction.</p>
Full article ">
25 pages, 1198 KiB  
Article
Multimodal Machine Learning for Prognosis and Survival Prediction in Renal Cell Carcinoma Patients: A Two-Stage Framework with Model Fusion and Interpretability Analysis
by Keyue Yan, Simon Fong, Tengyue Li and Qun Song
Appl. Sci. 2024, 14(13), 5686; https://doi.org/10.3390/app14135686 - 29 Jun 2024
Cited by 1 | Viewed by 975
Abstract
Current medical limitations in predicting cancer survival status and time necessitate advancements beyond traditional methods and physical indicators. This research introduces a novel two-stage prognostic framework for renal cell carcinoma, addressing the inadequacies of existing diagnostic approaches. In the first stage, the framework accurately predicts the survival status (alive or deceased), evaluated with Accuracy, Precision, Recall, and F1 score, while the second stage forecasts the future survival time of deceased patients, evaluated with Root Mean Square Error and Mean Absolute Error. Leveraging popular machine learning models, such as Adaptive Boosting, Extra Trees, Gradient Boosting, Random Forest, and Extreme Gradient Boosting, along with fusion models like Voting, Stacking, and Blending, our approach significantly improves prognostic accuracy, as shown in our experiments. The novelty of our research lies in the integration of a logistic regression meta-model for interpreting the blending model’s predictions, enhancing transparency. Through SHapley Additive exPlanations’ interpretability, we provide insights into variable contributions, aiding understanding at both global and local levels. Through modal segmentation and multimodal fusion applied to raw data from the Surveillance, Epidemiology, and End Results program, we enhance the precision of renal cell carcinoma prognosis. Our proposed model provides an interpretable analysis of model predictions, highlighting key variables influencing classification and regression decisions in the two-stage renal cell carcinoma prognosis framework. By addressing the black-box problem inherent in machine learning, our proposed model offers healthcare practitioners a more reliable and transparent basis for applying machine learning in cancer prognostication. Full article
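The two-stage design described in this abstract can be sketched as follows: a stage-1 classifier predicts survival status, and a stage-2 regressor, trained only on deceased patients, predicts survival time. The linear models and synthetic data below are illustrative placeholders, not the paper's learners or the SEER data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
X = rng.standard_normal((n, 3))                       # toy patient features
deceased = (X[:, 0] > 0).astype(int)                  # toy survival status
surv_time = np.where(deceased == 1, 20 - 5 * X[:, 1], np.nan)  # months, deceased only

# Stage 1: linear classifier for alive/deceased (the paper fuses ensembles here).
w1 = np.linalg.lstsq(X, 2 * deceased - 1, rcond=None)[0]

# Stage 2: regressor for survival time, fitted only on deceased patients.
mask = deceased == 1
F = np.column_stack([X[mask], np.ones(mask.sum())])
w2 = np.linalg.lstsq(F, surv_time[mask], rcond=None)[0]

def prognose(Z):
    """Stage 1 predicts status; stage 2 predicts time only for predicted-deceased."""
    status = (Z @ w1 > 0).astype(int)
    t = np.column_stack([Z, np.ones(len(Z))]) @ w2
    return status, np.where(status == 1, t, np.nan)

status, t = prognose(X)
acc = (status == deceased).mean()
```

Gating the regressor on the classifier's output is the key wiring choice: survival-time estimates are only produced, and only trained, where they are meaningful.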
Figure 1
<p>Model structure of Voting.</p>
Full article ">Figure 2
<p>Model structure of Stacking and Blending.</p>
Full article ">Figure 3
<p>Experimental framework.</p>
Full article ">Figure 4
<p>Feature importance of the logistic model of Blending.</p>
Full article ">Figure 5
<p>Feature importance of RF in stage 1.</p>
Full article ">Figure 6
<p>Feature importance of GB in stage 1.</p>
Full article ">Figure 7
<p>Top 6 features’ dependence plot of GB.</p>
Full article ">Figure 8
<p>Top 6 features’ dependence plot of RF.</p>
Full article ">Figure 9
<p>Waterfall plot of GB for individual RCC patient.</p>
Full article ">Figure 10
<p>Waterfall plot of RF for individual RCC patient.</p>
Full article ">Figure 11
<p>Survival status prognosis of 1–5 years.</p>
Full article ">Figure 12
<p>Feature importance of linear regression of Stacking.</p>
Full article ">Figure 13
<p>Feature importance of linear regression of Blending.</p>
Full article ">Figure 14
<p>Feature importance of GB in stage 2.</p>
Full article ">Figure 15
<p>Feature importance of XGB in stage 2.</p>
Full article ">Figure 16
<p>Feature importance of ET in stage 2.</p>
Full article ">
13 pages, 1866 KiB  
Article
IMTIBOT: An Intelligent Mitigation Technique for IoT Botnets
by Umang Garg, Santosh Kumar and Aniket Mahanti
Future Internet 2024, 16(6), 212; https://doi.org/10.3390/fi16060212 - 17 Jun 2024
Viewed by 688
Abstract
The tremendous growth of the Internet of Things (IoT) has gained a lot of attention in the global market. The massive deployment of IoT also brings various security vulnerabilities, which become easy targets for hackers. IoT botnets are one type of critical malware that degrades the performance of the IoT network and is difficult for end-users to detect. Although there are several traditional IoT botnet mitigation techniques, such as access control, data encryption, and secured device configuration, these traditional techniques are difficult to apply due to normal traffic behavior, similar packet transmission, and the repetitive nature of IoT network traffic. Motivated by botnet obfuscation, this article proposes an intelligent mitigation technique for IoT botnets, named IMTIBoT. Using this technique, we harnessed the stacking of ensemble classifiers to build an intelligent system. This stacking classifier technique was tested using an experimental testbed of IoT nodes and sensors. This system achieved an accuracy of 0.984, with low latency. Full article
(This article belongs to the Special Issue Internet of Things and Cyber-Physical Systems II)
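Several of the per-flow traffic statistics reported in the figures of this article (packet delivery ratio, end-to-end delay, throughput, packet loss) can be computed from per-packet logs, as in this sketch; the `Packet` fields and sample values are illustrative assumptions, not the testbed's actual log format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Packet:
    sent_at: float                  # seconds
    received_at: Optional[float]    # None if the packet was lost
    size_bytes: int

def flow_stats(packets):
    """Compute PDR, loss, average end-to-end delay, and throughput for one flow."""
    delivered = [p for p in packets if p.received_at is not None]
    pdr = len(delivered) / len(packets)
    delays = [p.received_at - p.sent_at for p in delivered]
    span = max(p.received_at for p in delivered) - min(p.sent_at for p in packets)
    return {
        "pdr": pdr,
        "loss": 1.0 - pdr,
        "avg_delay": sum(delays) / len(delays),
        "throughput_bps": 8 * sum(p.size_bytes for p in delivered) / span,
    }

pkts = [Packet(0.0, 0.01, 100), Packet(0.1, 0.12, 100),
        Packet(0.2, None, 100), Packet(0.3, 0.33, 100)]
stats = flow_stats(pkts)
```

Features of this kind, normal versus botnet traffic, are what a stacked classifier such as the one described above would consume.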
Figure 1
<p>Proposed intelligent system.</p>
Full article ">Figure 2
<p>Experimental testbed setup for IoT botnets.</p>
Full article ">Figure 3
<p>Packet delivery ratio for normal and botnet traffic.</p>
Full article ">Figure 4
<p>End-to-end delay for normal and botnet traffic.</p>
Full article ">Figure 5
<p>Throughput for normal and botnet traffic.</p>
Full article ">Figure 6
<p>Packet arrival time for normal and botnet traffic.</p>
Full article ">Figure 7
<p>Packet loss for normal and botnet traffic.</p>
Full article ">
18 pages, 6476 KiB  
Article
Dense Object Detection Based on De-Homogenized Queries
by Yueming Huang, Chenrui Ma, Hao Zhou, Hao Wu and Guowu Yuan
Electronics 2024, 13(12), 2312; https://doi.org/10.3390/electronics13122312 - 13 Jun 2024
Viewed by 490
Abstract
Dense object detection is widely used in automatic driving, video surveillance, and other fields. This paper focuses on the challenging task of dense object detection. Currently, detection methods based on greedy algorithms, such as non-maximum suppression (NMS), often produce many repetitive predictions or missed detections in dense scenarios, a common problem faced by NMS-based algorithms. Through the end-to-end DETR (DEtection TRansformer), a type of detector that can incorporate post-processing de-duplication capabilities such as those of NMS into the network, we found that homogeneous queries in the query-based detector lead to a reduction in the de-duplication capability of the network and the learning efficiency of the encoder, resulting in duplicate prediction and missed detection problems. To solve this problem, we propose learnable differentiated encoding to de-homogenize the queries; at the same time, queries can communicate with each other via differentiated encoding information, replacing the previous self-attention among the queries. In addition, we used a joint loss on the output of the encoder that considered both location and confidence prediction to give a higher-quality initialization for queries. Without cumbersome decoder stacking, and while guaranteeing accuracy, our proposed end-to-end detection framework is more concise and reduces the number of parameters by about 8% compared to deformable DETR. Our method achieved excellent results on the challenging CrowdHuman dataset with 93.6% average precision (AP), 39.2% MR−2, and 84.3% JI. This performance surpasses previous SOTA methods, such as Iter-E2EDet (Progressive End-to-End Object Detection) and MIP (One proposal, Multiple predictions). In addition, our method is more robust in various scenarios with different densities. Full article
(This article belongs to the Special Issue Applications of Computer Vision, 2nd Edition)
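The homogeneity problem this abstract targets can be made concrete with a toy calculation: identical query vectors have pairwise cosine similarity of 1, and adding a distinct code to each query lowers it, which is the effect the learnable differentiated encoding aims for. The random codes below are stand-ins for the learned encodings.

```python
import numpy as np

rng = np.random.default_rng(3)
num_queries, dim = 8, 16

# Homogeneous queries: every query is the same vector.
queries = np.tile(rng.standard_normal(dim), (num_queries, 1))
# Differentiated encodings: one distinct code per query (random stand-ins).
codes = rng.standard_normal((num_queries, dim))

def mean_pairwise_cosine(Q):
    """Average cosine similarity over all distinct query pairs."""
    Qn = Q / np.linalg.norm(Q, axis=1, keepdims=True)
    S = Qn @ Qn.T
    iu = np.triu_indices(len(Q), k=1)
    return S[iu].mean()

before = mean_pairwise_cosine(queries)          # ~1.0: fully homogeneous
after = mean_pairwise_cosine(queries + codes)   # strictly lower after de-homogenization
```

In the paper the de-homogenized queries also replace query self-attention as the channel through which queries exchange information; this sketch only illustrates the similarity-reduction effect itself.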
Figure 1
<p>Current mainstream detection frameworks; different colored arrows are matched with the same colored GTs. (<b>a</b>) Anchor-based detector; (<b>b</b>) query-based detector.</p>
Full article ">Figure 2
<p>Our detection framework based on differentiated query. Different colored arrows are matched with the same colored GTs, and different colored boxes represent differences in the encoded content of the queries.</p>
Full article ">Figure 3
<p>Statistics of IoU distance and cosine similarity among queries.</p>
Full article ">Figure 4
<p>Network updates for different queries in training, and different colored boxes represent differences in the encoded content of the queries. (<b>a</b>) Detection methods based on homogeneous queries; (<b>b</b>) our proposed detection method based on de-homogenized queries.</p>
Full article ">Figure 5
<p>Overall framework of our proposed detector.</p>
Full article ">Figure 6
<p>Structure of the DCG (De-Homo Coding Generator) module.</p>
Full article ">Figure 7
<p>Statistics of confidence scores and IoU scores for initialized queries. The red line is the regression curve after the use of the GQS, and the blue line is without it.</p>
Full article ">Figure 8
<p>AP of detection results for each stage of the decoder.</p>
Full article ">Figure 9
<p>MR<sup>−2</sup> of detection results for each stage of the decoder.</p>
Full article ">Figure 10
<p>Comparison of model performance when using different numbers of queries. Comparative analysis of differentiated queries.</p>
Full article ">Figure 11
<p>Cosine similarity of query at different IoU distances (<b>a</b>) before de-homogenization, (<b>b</b>) after de-homogenization.</p>
Full article ">Figure 12
<p>Comparing the relative improvement of our detection results in different confidence scores.</p>
Full article ">Figure 13
<p>The relative improvement of our method over deformable DETR in different dense scenarios (the matching IoU threshold is 0.5).</p>
Full article ">Figure 14
<p>The relative improvement of our method over deformable DETR in different dense scenarios (the matching IoU threshold is 0.8).</p>
Full article ">Figure 15
<p>Comparison of the actual detection results for (<b>a</b>) the two-stage deformable DETR (<b>b</b>) and our method.</p>
Full article ">
14 pages, 1408 KiB  
Article
ARFGCN: Adaptive Receptive Field Graph Convolutional Network for Urban Crowd Flow Prediction
by Genan Dai, Hu Huang, Xiaojiang Peng, Bowen Zhang and Xianghua Fu
Mathematics 2024, 12(11), 1739; https://doi.org/10.3390/math12111739 - 3 Jun 2024
Viewed by 393
Abstract
Urban crowd flow prediction is an important task for transportation systems and public safety. While graph convolutional networks (GCNs) have been widely adopted for this task, existing GCN-based methods still face challenges. Firstly, they employ fixed receptive fields, failing to account for urban region heterogeneity where different functional zones interact distinctly with their surroundings. Secondly, they lack mechanisms to adaptively adjust spatial receptive fields based on temporal dynamics, which limits prediction performance. To address these limitations, we propose an Adaptive Receptive Field Graph Convolutional Network (ARFGCN) for urban crowd flow prediction. ARFGCN allows each region to independently determine its receptive field size, adaptively adjusted and learned in an end-to-end manner during training, enhancing model prediction performance. It comprises a time-aware adaptive receptive field (TARF) gating mechanism, a stacked 3DGCN, and a prediction layer. The TARF aims to leverage gating in neural networks to adapt receptive fields based on temporal dynamics, enabling the predictive network to adapt to urban regional heterogeneity. The TARF can be easily integrated into the stacked 3DGCN, enhancing the prediction. Experimental results demonstrate ARFGCN’s effectiveness compared to other methods. Full article
(This article belongs to the Section Mathematics and Computer Science)
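The TARF idea described in the abstract, where each region mixes aggregations from different neighborhood sizes using gates conditioned on temporal features, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function and parameter names (`adaptive_receptive_field_gcn`, `W_gate`, `max_hops`) are assumptions introduced here.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, as is common in GCNs
    A = A + np.eye(A.shape[0])
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A @ D_inv_sqrt

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_receptive_field_gcn(A, X, T, W_gate, max_hops=3):
    """Each node mixes its 1..max_hops-hop aggregations with softmax gates
    conditioned on a per-node temporal feature vector (rows of T)."""
    A_hat = normalize_adj(A)
    hops, H = [], X
    for _ in range(max_hops):
        H = A_hat @ H               # one more hop of neighborhood aggregation
        hops.append(H)
    gates = softmax(T @ W_gate)     # shape (N, max_hops): one weight per hop, per node
    out = sum(gates[:, k:k + 1] * hops[k] for k in range(max_hops))
    return out, gates
```

In a trained model the gate weights would be learned end-to-end, letting heterogeneous urban regions settle on different effective receptive fields.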
Show Figures

Figure 1
<p>The framework of the proposed ARFGCN model for urban crowd flow prediction.</p>
Full article ">Figure 2
<p>The framework of the TARF.</p>
Full article ">Figure 3
<p>The effect of the number of layers. (<b>a</b>) BikeNYC. (<b>b</b>) YellowTaxi.</p>
Full article ">Figure 4
<p>Regional adaptive receptive field distributions on two datasets.</p>
Full article ">
14 pages, 3949 KiB  
Article
Research on Multi-Modal Pedestrian Detection and Tracking Algorithm Based on Deep Learning
by Rui Zhao, Jutao Hao and Huan Huo
Future Internet 2024, 16(6), 194; https://doi.org/10.3390/fi16060194 - 31 May 2024
Viewed by 627
Abstract
In the realm of intelligent transportation, pedestrian detection has witnessed significant advancements. However, it continues to grapple with challenging issues, notably the detection of pedestrians in complex lighting scenarios. Conventional visible-light imaging is profoundly affected by varying lighting conditions. Under optimal daytime lighting, visibility is enhanced, leading to superior pedestrian detection outcomes. Conversely, under low-light conditions, visible-light imaging falters because it provides inadequate pedestrian target information, resulting in a marked decline in detection efficacy. In this context, infrared imaging emerges as a valuable supplement, bolstering the provision of pedestrian information. This paper delves into pedestrian detection and tracking algorithms within a multi-modal image framework grounded in deep learning methodologies. Leveraging the YOLOv4 algorithm as a foundation, augmented by a channel stack fusion module, a novel multi-modal pedestrian detection algorithm tailored for intelligent transportation is proposed. This algorithm capitalizes on the fusion of visible and infrared image features to enhance pedestrian detection performance amidst complex road environments. Experimental findings demonstrate that, compared to the high-performing Visible-YOLOv4 algorithm, the proposed Double-YOLOv4-CSE algorithm achieves a 5.0% improvement in accuracy and a 6.9% reduction in log-average miss rate. This research aims to ensure that the algorithm runs smoothly even on a low-end 1080 Ti GPU and to broaden its coverage at the application layer, making it affordable and practical for both urban and rural areas. This addresses the broader research problem of smart cities and remote deployments with limited computational power. Full article
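The channel stack fusion idea, stacking visible and infrared feature maps along the channel axis and then mixing them with a learned 1×1 convolution, can be sketched roughly as below. The function and weight names are illustrative assumptions, not the paper's code; a 1×1 convolution is expressed here as a per-pixel matrix multiply over channels.

```python
import numpy as np

def channel_stack_fusion(feat_vis, feat_ir, w_fuse):
    """Fuse visible and infrared feature maps.

    feat_vis, feat_ir: (C, H, W) feature maps from the two backbone streams.
    w_fuse: (C_out, 2*C) weights of a 1x1 convolution that mixes the
    stacked channels back down to C_out.
    """
    stacked = np.concatenate([feat_vis, feat_ir], axis=0)  # (2C, H, W)
    # f[o, h, w] = sum_c w_fuse[o, c] * stacked[c, h, w]
    return np.einsum('oc,chw->ohw', w_fuse, stacked)       # (C_out, H, W)
```

In the actual network the fused maps would feed the shared detection head; here the weights are simply a parameter the caller supplies.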
Show Figures

Figure 1
<p>Research frame diagram.</p>
Full article ">Figure 2
<p>CSPNet structure.</p>
Full article ">Figure 3
<p>Dual-stream parallel backbone feature extraction network architecture.</p>
Full article ">Figure 4
<p>Channel stack fusion module.</p>
Full article ">Figure 5
<p>Improved PANet multi-scale feature fusion network based on FPN.</p>
Full article ">Figure 6
<p>CSP-SPPNet module (CBL, the smallest component within the YOLOv3 network architecture, comprises three parts: convolution (Conv), batch normalization (BN), and the leaky ReLU activation function).</p>
Full article ">Figure 7
<p>Processing flow of prediction layer.</p>
Full article ">Figure 8
<p>Comparison of traditional single-mode pedestrian detection algorithm and multi-mode pedestrian detection algorithm.</p>
Full article ">
34 pages, 20870 KiB  
Article
To (US)Be or Not to (US)Be: Discovering Malicious USB Peripherals through Neural Network-Driven Power Analysis
by Koffi Anderson Koffi, Christos Smiliotopoulos, Constantinos Kolias and Georgios Kambourakis
Electronics 2024, 13(11), 2117; https://doi.org/10.3390/electronics13112117 - 29 May 2024
Viewed by 771
Abstract
Nowadays, the Universal Serial Bus (USB) is one of the most adopted communication standards. However, the ubiquity of this technology has attracted the interest of attackers. This situation is alarming, considering that the USB protocol has penetrated even into critical infrastructures. Unfortunately, the majority of contemporary security detection and prevention mechanisms against USB-specific attacks work at the application layer of the USB protocol stack and, therefore, can only provide partial protection, assuming that the host is not itself compromised. Toward this end, we propose a USB authentication system designed to identify (and possibly block) heterogeneous USB-based attacks directly from the physical layer. Empirical observations demonstrate that any extraneous/malicious activity initiated by malicious/compromised USB peripherals tends to consume additional electrical power. Driven by this observation, our proposed solution is based on the analysis of USB power consumption patterns. Valuable power readings can easily be obtained directly from the power lines of the USB connector with low-cost, off-the-shelf equipment. Our experiments demonstrate the ability to effectively distinguish benign from malicious USB devices, as well as USB peripherals from each other, relying on the power side channel. At the core of our analysis lies an Autoencoder model that handles the feature extraction process; this process is paired with a long short-term memory (LSTM) and a convolutional neural network (CNN) model for detecting malicious peripherals. We meticulously evaluated the effectiveness of our approach and compared it against various shallow machine learning (ML) methods. The results indicate that the proposed scheme can identify USB devices as benign or malicious/counterfeit with a perfect F1-score. Full article
(This article belongs to the Section Computer Science & Engineering)
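As a rough illustration of the power side-channel intuition (not the paper's Autoencoder/CNN-LSTM pipeline), a raw power trace can be summarized per window and flagged when it deviates from a benign baseline. All names and the simple thresholding rule below are simplifying assumptions introduced for this sketch.

```python
import numpy as np

def window_features(trace, win=64):
    """Summarize a raw power trace as per-window RMS values, a crude
    stand-in for the learned features the paper extracts."""
    n = len(trace) // win
    w = trace[:n * win].reshape(n, win)
    return np.sqrt((w ** 2).mean(axis=1))

def flag_anomalous(trace, baseline_rms, k=3.0, win=64):
    """Flag a trace as suspicious when any window's RMS deviates more
    than k standard deviations from the benign baseline statistics."""
    feats = window_features(trace, win)
    mu, sigma = baseline_rms.mean(), baseline_rms.std() + 1e-9
    return bool(np.any(np.abs(feats - mu) > k * sigma))
```

The spikes and irregular patterns visible in the Teensyduino and O.MG traces (Figures 9 and 10) are exactly the kind of deviation such a detector would pick up; the paper's learned models replace this hand-set threshold with representations trained end-to-end.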
Show Figures

Figure 1
<p>On the surface, an O.MG cable appears to be a simple USB cable (<b>left</b>) (Image from [<a href="#B17-electronics-13-02117" class="html-bibr">17</a>]); however, such devices have a malicious microcontroller embedded at the connector side (<b>right</b>) (Image from [<a href="#B18-electronics-13-02117" class="html-bibr">18</a>]).</p>
Full article ">Figure 2
<p>USB 3.0 protocol stack (based on [<a href="#B34-electronics-13-02117" class="html-bibr">34</a>,<a href="#B37-electronics-13-02117" class="html-bibr">37</a>]).</p>
Full article ">Figure 3
<p>Autoencoder architecture.</p>
Full article ">Figure 4
<p>Abstract view of the proposed framework. The framework employs an Autoencoder to extract the features used with the CNN-LSTM model for device identification and anomaly detection.</p>
Full article ">Figure 5
<p>Key architectural elements employed in the Autoencoder and deep learning model. The Autoencoder is composed of encoder and decoder components. The DNN model is built on the integration of an LSTM layer, a CNN layer, and an attention mechanism, designed to optimize feature extraction and temporal pattern recognition, and focus on significant sections of the signal, respectively.</p>
Full article ">Figure 6
<p>Additive attention mechanism in the proposed framework.</p>
Full article ">Figure 7
<p>The utilized testbed. The oscilloscope is not shown.</p>
Full article ">Figure 8
<p>Power consumption signals of a normal flash drive device vs. a normal keyboard device. Zoom-out view on the <b>top</b> and zoom-in view on the <b>bottom</b>.</p>
Full article ">Figure 9
<p>Power consumption signals of a normal USB keyboard device vs. the malicious Teensyduino masquerading as a keyboard. Zoom-out view on the <b>top</b> and zoom-in view on the <b>bottom</b>. Notice the spikes and irregular patterns in the Teensyduino signal highlighted in red.</p>
Full article ">Figure 10
<p>Power consumption signals of a normal USB mouse vs. the malicious O.MG cable. Zoom-out view on the <b>top</b> and zoom-in view on the <b>bottom</b>. Notice the increased amplitude in the O.MG cable signal highlighted in red.</p>
Full article ">Figure 11
<p>Comparison of the impact of the feature extraction methods on the training times (darker color) and inference times (lighter color) of the shallow models.</p>
Full article ">Figure 12
<p>Comparison of the impact of the feature extraction methods on the training times (darker color) and inference times (lighter color) of the CNN-LSTM model.</p>
Full article ">Figure 13
<p>Shallow vs. DNN models in terms of F1 score for raw signals, TSFresh-extracted features, and Autoencoder-extracted features.</p>
Full article ">Figure 14
<p>Random Forest vs. CNN-LSTM for raw signals, features extracted by TSFresh, and features extracted by Autoencoder.</p>
Full article ">Figure A1
<p>Random Forest confusion matrices on the tasks using the raw signals.</p>
Full article ">Figure A2
<p>CNN-LSTM confusion matrices on the tasks using the raw signals.</p>
Full article ">Figure A3
<p>Random Forest confusion matrices for TSFresh-extracted features.</p>
Full article ">Figure A4
<p>CNN-LSTM confusion matrices for TSFresh-extracted features.</p>
Full article ">Figure A5
<p>Confusion matrices of the Random Forest classifier on the tasks using the features extracted by the Autoencoder.</p>
Full article ">Figure A6
<p>CNN-LSTM confusion matrices on the tasks using the features extracted by the Autoencoder.</p>
Full article ">
Back to TopTop