Search Results (2,114)

Search Parameters:
Keywords = recursive

25 pages, 5517 KiB  
Article
Gust Response and Alleviation of Avian-Inspired In-Plane Folding Wings
by Haibo Zhang, Haolin Yang, Yongjian Yang, Chen Song and Chao Yang
Biomimetics 2024, 9(10), 641; https://doi.org/10.3390/biomimetics9100641 - 18 Oct 2024
Abstract
The in-plane folding wing is one of the important research directions in the field of morphing or bionic aircraft, showing the unique application value of enhancing aircraft maneuverability and gust resistance. This article provides a structural realization of an in-plane folding wing and an aeroelasticity modeling method for the folding process of the wing. By approximating the change in structural properties in each time step, a method for calculating the structural transient response expressed in recursive form is obtained. On this basis, an aeroelasticity model of the wing is developed by coupling with the aerodynamic model using the unsteady panel/viscous vortex particle hybrid method. A wind-tunnel test is implemented to demonstrate the controllable morphing capability of the wing under aerodynamic loads and to validate the reliability of the wing loads predicted by the method in this paper. The results of the gust simulation show that the gust scale has a significant effect on the response of both the open- and closed-loop systems. When the gust alleviation controller is enabled, the peak bending moment at the wing root can be reduced by 5.5%∼47.3% according to different gust scales. Full article
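
To make the recursive transient-response idea concrete: the structural matrices are re-approximated within each time step and the state is propagated forward from the previous step only. A minimal sketch of such a recursion, assuming a generic M(t)q'' + C(t)q' + K(t)q = f(t) system and a textbook Newmark-beta update rather than the authors' formulation, is:

```python
import numpy as np

def newmark_step(M, C, K, f, q, v, a, dt, beta=0.25, gamma=0.5):
    """One Newmark-beta step for M q'' + C q' + K q = f, with the
    (possibly time-varying) matrices evaluated for this step."""
    Keff = K + (gamma / (beta * dt)) * C + (1.0 / (beta * dt**2)) * M
    feff = (f
            + M @ (q / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
            + C @ ((gamma / (beta * dt)) * q + (gamma / beta - 1.0) * v
                   + dt * (0.5 * gamma / beta - 1.0) * a))
    q_new = np.linalg.solve(Keff, feff)
    a_new = (q_new - q) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
    v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
    return q_new, v_new, a_new

def transient_response(matrices_at, load_at, q0, v0, t_grid):
    """Recursive transient response: the structural matrices are re-evaluated
    at every step (here at the mid-step time) and the state is propagated
    forward from the previous step only."""
    q, v = q0.copy(), v0.copy()
    M, C, K = matrices_at(t_grid[0])
    a = np.linalg.solve(M, load_at(t_grid[0]) - C @ v - K @ q)
    history = [q.copy()]
    for t_prev, t in zip(t_grid[:-1], t_grid[1:]):
        M, C, K = matrices_at(0.5 * (t_prev + t))   # frozen mid-step properties
        q, v, a = newmark_step(M, C, K, load_at(t), q, v, a, t - t_prev)
        history.append(q.copy())
    return np.array(history)
```

The callables `matrices_at` and `load_at` are placeholders for however the morphing-dependent mass, damping, stiffness, and aerodynamic load are actually supplied.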
Show Figures

Figure 1: Scheme of the in-plane folding wing.
Figure 2: Cross-section shape of the wing.
Figure 3: Design of the skeleton-inspired beam.
Figure 4: Iteration process for the optimization design.
Figure 5: Optimized cross-section radius distribution of the rods in the cells.
Figure 6: Beam with a cellular structure of non-uniform density.
Figure 7: Prototype of the avian-inspired in-plane folding wing: (a) skeleton only; (b) with skin attached.
Figure 8: Parameter approximation within a single time step: (a) Hermite interpolation for structural parameters; (b) linear interpolation for generalized coordinates.
Figure 9: Unsteady panel/viscous vortex particle hybrid method.
Figure 10: Coupling workflow of the aeroelasticity model of in-plane folding wings.
Figure 11: Wind-tunnel test of the wing prototype.
Figure 12: Comparison of the wing loads at the steady working condition (u = 10 m/s, α = 5°): (a) lift; (b) drag and lift-to-drag ratio.
Figure 13: Comparison of the wing lift results at the unsteady working condition (u = 10 m/s, α = 5°, f = 0.5 Hz).
Figure 14: Gust field velocity curve.
Figure 15: Bending moment at the wing root.
Figure 16: Comparison of the dynamic and quasi-static models for λ = 2c and λ = 80c.
Figure 17: Maximum additional bending moment at the wing root.
Figure 18: Wing aeroservoelastic system schematic.
Figure 19: Servo system schematic.
Figure 20: Gust alleviation controller schematic.
Figure 21: Design of the gust alleviation controller.
Figure 22: Bending moment response of the closed-loop system for λ = 2c, 9c, and 80c: comparison of additional bending moments and variation of the folding angle.
Figure 23: Effect of the controller under different gust scales.
12 pages, 271 KiB  
Article
On Productiveness and Complexity in Computable Analysis Through Rice-Style Theorems for Real Functions
by Jingnan Xie, Harry B. Hunt and Richard E. Stearns
Mathematics 2024, 12(20), 3248; https://doi.org/10.3390/math12203248 - 17 Oct 2024
Viewed by 282
Abstract
This paper investigates the complexity of real functions through proof techniques inspired by formal language theory. Productiveness, which is a stronger form of non-recursive enumerability, is employed to analyze the complexity of various problems related to real functions. Our work provides a deep reexamination of Hilbert’s tenth problem and the equivalence to the identically 0 function problem, extending the undecidability results of these problems into the realm of productiveness. Additionally, we study the complexity of the equivalence to the identically 0 function problem over different domains. We then construct highly efficient many-one reductions to establish Rice-style theorems for the study of real functions. Specifically, we show that many predicates, including those related to continuity, differentiability, uniform continuity, right and left differentiability, semi-differentiability, and continuous differentiability, are as hard as the equivalence to the identically 0 function problem. Due to their high efficiency, these reductions preserve nearly any level of complexity, allowing us to address both complexity and productiveness results simultaneously. By demonstrating these results, which highlight a more nuanced and potentially more intriguing aspect of real function theory, we provide new insights into how various properties of real functions can be analyzed. Full article
(This article belongs to the Section Mathematics and Computer Science)
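
For context, the two notions the abstract relies on have standard textbook definitions (stated here in their usual form, not necessarily the paper's exact formulation): many-one reducibility, and productiveness, the latter being a strong, effectively witnessed form of non-recursive-enumerability.

```latex
% W_e denotes the domain of the e-th partial computable function (an r.e. set).
\[
  A \le_m B \;\Longleftrightarrow\; \exists f \text{ total computable}\;
      \forall x\, \bigl(x \in A \leftrightarrow f(x) \in B\bigr),
\]
\[
  P \text{ is productive} \;\Longleftrightarrow\; \exists \psi \text{ computable}\;
      \forall e\, \bigl(W_e \subseteq P \;\Rightarrow\; \psi(e) \in P \setminus W_e\bigr).
\]
```

A productive set can never equal any W_e, so productiveness implies that the set is not recursively enumerable; efficient many-one reductions to a productive problem transfer that hardness.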
16 pages, 10473 KiB  
Article
Multi-Source Remote Sensing Data for Wetland Information Extraction: A Case Study of the Nanweng River National Wetland Reserve
by Hao Yu, Shicheng Li, Zhimin Liang, Shengnan Xu, Xin Yang and Xiaoyan Li
Sensors 2024, 24(20), 6664; https://doi.org/10.3390/s24206664 - 16 Oct 2024
Viewed by 222
Abstract
Wetlands play a vital role in regulating the global carbon cycle, providing biodiversity, and reducing flood risks. These functions maintain ecological balance and ensure human well-being. Timely, accurate monitoring of wetlands is essential, not only for conservation efforts, but also for achieving Sustainable Development Goals (SDGs). In this study, we combined Sentinel-1/2 images, terrain data, and field observation data collected in 2020 to better understand wetland distribution. A total of 22 feature variables were extracted from multi-source data, including spectral bands, spectral indices (especially red edge indices), terrain features, and radar features. To avoid high correlations between variables and reduce data redundancy, we selected a subset of features based on recursive feature elimination (RFE) and Pearson correlation analysis methods. We adopted the random forest (RF) method to construct six wetland delineation schemes and incorporated multiple types of characteristic variables. These variables were based on remote sensing image pixels and objects. Combining red-edge features, terrain data, and radar data significantly improved the accuracy of land cover information extracted in low-mountain and hilly areas. Moreover, the accuracy of object-oriented schemes surpassed that of pixel-level methods when applied to wetland classification. Among the three pixel-based schemes, the addition of terrain and radar data increased the overall classification accuracy by 7.26%. In the object-based schemes, the inclusion of radar and terrain data improved classification accuracy by 4.34%. The object-based classification method achieved the best results for swamps, water bodies, and built-up land, with relative accuracies of 96.00%, 90.91%, and 96.67%, respectively. Even higher accuracies were observed in the pixel-based schemes for marshes, forests, and bare land, with relative accuracies of 98.67%, 97.53%, and 80.00%, respectively. This study’s methodology can provide valuable reference information for wetland data extraction research and can be applied to a wide range of future research studies. Full article
(This article belongs to the Section Environmental Sensing)
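
The feature-selection step described above (Pearson screening followed by recursive feature elimination driven by a random forest) can be sketched with scikit-learn; the thresholds, estimator settings, and helper name below are illustrative assumptions, not the study's exact configuration.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

def select_and_classify(X: pd.DataFrame, y, n_keep=12, corr_cut=0.9):
    """X: per-pixel (or per-object) table of the candidate variables
    (spectral bands, red-edge indices, terrain and radar features); y: labels."""
    # 1) Pearson screening: drop one member of each highly correlated pair.
    corr = X.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    dropped = [c for c in upper.columns if (upper[c] > corr_cut).any()]
    X_screened = X.drop(columns=dropped)

    # 2) Recursive feature elimination around a random forest.
    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    rfe = RFE(estimator=rf, n_features_to_select=n_keep, step=1).fit(X_screened, y)
    selected = X_screened.columns[rfe.support_]

    # 3) Final random forest on the retained features.
    model = RandomForestClassifier(n_estimators=500, random_state=0)
    model.fit(X_screened[selected], y)
    return model, list(selected), dropped
```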
Show Figures

Figure 1: Topographical location of the study area with the distribution of the sample points.
Figure 2: General workflow for wetland detection (SNIC: simple non-iterative clustering).
Figure 3: Correlation analysis of the characteristic variables.
Figure 4: Variable importance measures in descending order and the average overall accuracy of the different feature combinations of each plan.
Figure 5: Pixel-based feature classification accuracy.
Figure 6: Comparison of classification results based on the pixel schemes.
Figure 7: Object-based feature classification accuracy.
Figure 8: Comparison of classification results based on the object schemes.
Figure 9: Comparison with other wetland maps (ESA_WorldCover and ESRI_WorldCover); panels (A–D) show four classification demonstration areas.
18 pages, 4442 KiB  
Article
Integrating Learning-Driven Model Behavior and Data Representation for Enhanced Remaining Useful Life Prediction in Rotating Machinery
by Tarek Berghout, Eric Bechhoefer, Faycal Djeffal and Wei Hong Lim
Machines 2024, 12(10), 729; https://doi.org/10.3390/machines12100729 - 15 Oct 2024
Viewed by 351
Abstract
The increasing complexity of modern mechanical systems, especially rotating machinery, demands effective condition monitoring techniques, particularly deep learning, to predict potential failures in a timely manner and enable preventative maintenance strategies. Health monitoring data analysis, a widely used approach, faces challenges due to data randomness and interpretation difficulties, highlighting the importance of robust data quality analysis for reliable monitoring. This paper presents a two-part approach to address these challenges. The first part focuses on comprehensive data preprocessing using only feature scaling and selection via random forest (RF) algorithm, streamlining the process by minimizing human intervention while managing data complexity. The second part introduces a Recurrent Expansion Network (RexNet) composed of multiple layers built on recursive expansion theories from multi-model deep learning. Unlike traditional Rex architectures, this unified framework allows fine tuning of RexNet hyperparameters, simplifying their application. By combining data quality analysis with RexNet, this methodology explores multi-model behaviors and deeper interactions between dependent (e.g., health and condition indicators) and independent variables (e.g., Remaining Useful Life (RUL)), offering richer insights than conventional methods. Both RF and RexNet undergo hyperparameter optimization using Bayesian methods under variability reduction (i.e., standard deviation) of residuals, allowing the algorithms to reach optimal solutions and enabling fair comparisons with state-of-the-art approaches. Applied to high-speed bearings using a large wind turbine dataset, this approach achieves a coefficient of determination of 0.9504, enhancing RUL prediction. This allows for more precise maintenance scheduling from imperfect predictions, reducing downtime and operational costs while improving system reliability under varying conditions. Full article
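
A rough sketch of the first, data-quality part (random-forest feature ranking, with each hyperparameter setting scored by the standard deviation of its cross-validated residuals, ready to be handed to any Bayesian optimizer) might look like the following; the function names and settings are assumptions, not the authors' code.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

def residual_std_score(params, X, y, cv=5):
    """Score one hyperparameter setting by the spread of its CV residuals.
    A Bayesian optimizer would minimize this value over the search space."""
    rf = RandomForestRegressor(random_state=0, **params)
    residuals = np.asarray(y) - cross_val_predict(rf, X, y, cv=cv)
    return residuals.std()                      # lower is better

def rank_features(X, y, params):
    """Rank features by importance from a fitted random forest regressor."""
    rf = RandomForestRegressor(random_state=0, **params).fit(X, y)
    order = np.argsort(rf.feature_importances_)[::-1]
    return order, rf.feature_importances_[order]
```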
Show Figures

Figure 1: Visual summary of the methodological steps.
Figure 2: High-speed shaft positioning in a 2 MW wind turbine and the cracked inner race of a high-speed bearing: (a) schematic of the wind turbine gearbox showing the high-speed shaft location; (b) cracked inner race of a high-speed bearing (both images adapted from cited CC BY sources and enhanced for clarity).
Figure 3: Overview of the raw data: (a) energy levels for cage, ball, inner race, and outer race; (b) shaft tick progression over time; (c) gearing load represented by RPM; (d) HI variation.
Figure 4: Hyperparameters of the RF regressor.
Figure 5: Feature importance results obtained by the Bayesian-optimized RF regressor.
Figure 6: Simplified flow diagram of the proposed RexNet architecture.
Figure 7: Comparative analysis of the studied models' training performance: (a) training loss over epochs; (b) bar chart of the Area Under the Loss Curve (AULC).
Figure 8: RUL comparison and residual analysis on the training and testing sets: actual versus predicted RUL and residuals for the LSTM and RexNet models.
Figure 9: Residual normal distributions for the LSTM and RexNet models on the training and testing sets.
24 pages, 2131 KiB  
Article
Improving Text Classification in Agricultural Expert Systems with a Bidirectional Encoder Recurrent Convolutional Neural Network
by Xiaojuan Guo, Jianping Wang, Guohong Gao, Li Li, Junming Zhou and Yancui Li
Electronics 2024, 13(20), 4054; https://doi.org/10.3390/electronics13204054 - 15 Oct 2024
Viewed by 372
Abstract
With the rapid development of internet and AI technologies, Agricultural Expert Systems (AESs) have become crucial for delivering technical support and decision-making in agricultural management. However, traditional natural language processing methods often struggle with specialized terminology and context, and they lack the adaptability to handle complex text classifications. The diversity and evolving nature of agricultural texts make deep semantic understanding and integration of contextual knowledge especially challenging. To tackle these challenges, this paper introduces a Bidirectional Encoder Recurrent Convolutional Neural Network (AES-BERCNN) tailored for short-text classification in agricultural expert systems. We designed an Agricultural Text Encoder (ATE) with a six-layer transformer architecture to capture both preceding and following word information. A recursive convolutional neural network based on Gated Recurrent Units (GRUs) was also developed to merge contextual information and learn complex semantic features, which are then combined with the ATE output and refined through max-pooling to form the final feature representation. The AES-BERCNN model was tested on a self-constructed agricultural dataset, achieving an accuracy of 99.63% in text classification. Its generalization ability was further verified on the Tsinghua News dataset. Compared to other models such as TextCNN, DPCNN, BiLSTM, and BERT-based models, the AES-BERCNN shows clear advantages in agricultural text classification. This work provides precise and timely technical support for intelligent agricultural expert systems. Full article
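
The architecture described above (a six-layer transformer encoder feeding a bidirectional GRU, with the two streams concatenated, max-pooled over time, and classified) can be sketched in PyTorch as follows; all dimensions and layer settings are illustrative assumptions rather than the published configuration.

```python
import torch
import torch.nn as nn

class BERCNNSketch(nn.Module):
    """Rough sketch of the pipeline named in the abstract: a 6-layer
    transformer encoder (the "ATE"), a bidirectional GRU over its outputs,
    concatenation of the two streams, max-pooling, and a linear classifier."""
    def __init__(self, vocab_size, num_classes, d_model=256, gru_hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=6)
        self.gru = nn.GRU(d_model, gru_hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(d_model + 2 * gru_hidden, num_classes)

    def forward(self, token_ids):                    # (batch, seq_len)
        h = self.encoder(self.embed(token_ids))      # (batch, seq_len, d_model)
        g, _ = self.gru(h)                           # (batch, seq_len, 2*gru_hidden)
        feats = torch.cat([h, g], dim=-1)            # fuse encoder and GRU streams
        pooled = feats.max(dim=1).values             # max-pool over the sequence
        return self.classifier(pooled)
```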
Show Figures

Figure 1: Preprocessing and dataset construction.
Figure 2: Structure of AES-BERCNN.
Figure 3: Structure of the ATE.
Figure 4: Structure of the agricultural text classifier.
Figure 5: Accuracy, loss, and training time of the comparative models on the agricultural question training set.
Figure 6: Precision, recall, and F1 of the comparative models on the agricultural question test dataset.
Figure 7: Confusion matrices of each model: BiLSTM, BiGRU, TextCNN, DPCNN, BERT-TextCNN, BERT-DPCNN, BERT-BiLSTM, BERT-BiGRU, ATE-DPCNN, ATE-TextCNN, ATE-BiLSTM, ATE-BiGRU, and ATE-BERCNN.
Figure 8: Accuracy and loss of the comparative models on the Tsinghua training set.
20 pages, 1285 KiB  
Article
RS-Net: Hyperspectral Image Land Cover Classification Based on Spectral Imager Combined with Random Forest Algorithm
by Xuyang Li, Xiangsuo Fan, Qi Li and Xueqiang Zhao
Electronics 2024, 13(20), 4046; https://doi.org/10.3390/electronics13204046 - 14 Oct 2024
Viewed by 384
Abstract
Recursive neural networks and transformers have recently become dominant in hyperspectral (HS) image classification due to their ability to capture long-range dependencies in spectral sequences. Despite the success of these sequential architectures, mainstream deep learning methods primarily handle two-dimensional structured data. However, challenges such as the curse of dimensionality, spectral variability, and confounding factors in hyperspectral remote sensing images limit their effectiveness, especially in remote sensing applications. To address this issue, this paper proposes a novel land cover classification algorithm that integrates random forests with a spectral transformer network structure (RS-Net). Firstly, this paper presents a combination of the Gramian Angular Field (GASF) and Gramian Angular Difference Field (GADF) algorithms, which effectively maps the multidimensional time series constructed for each pixel onto two-dimensional image features, enabling precise extraction and recognition in the backend network algorithms and improving the classification accuracy of land cover types. Secondly, to capture the relationships between features at different scales, this paper proposes a SpectralFormer network architecture using the Context and Structure Encoding (CASE) module to effectively learn dependencies between channels. This architecture enhances important features and suppresses unimportant ones, thereby addressing the semantic gap and improving the recognition capability of land cover features. Finally, the final prediction results are determined by a voting mechanism from the Random Forest algorithm, which synthesizes predictions from multiple decision trees to enhance classification stability and accuracy. To better compare the performance of RS-Net, this paper conducted extensive experiments on three benchmark HS datasets obtained from satellite and airborne imagers, comparing various classic neural network models. Surprisingly, the RS-Net algorithm achieves high performance and efficiency, offering a new and effective tool for land cover classification. Full article
(This article belongs to the Topic Hyperspectral Imaging and Signal Processing)
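
The GASF/GADF mapping mentioned above is a standard construction: rescale the per-pixel spectral sequence to [-1, 1], encode the values as angles, and form summation and difference Gram-like matrices. A minimal NumPy sketch (the rescaling convention is an assumption) is:

```python
import numpy as np

def gramian_angular_fields(x: np.ndarray):
    """Map a 1-D sequence (e.g., the spectral values of one pixel) to the
    Gramian Angular Summation/Difference Fields (GASF/GADF)."""
    x = np.asarray(x, dtype=float)
    x_scaled = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0   # rescale to [-1, 1]
    x_scaled = np.clip(x_scaled, -1.0, 1.0)
    phi = np.arccos(x_scaled)                                    # polar encoding
    gasf = np.cos(phi[:, None] + phi[None, :])                   # summation field
    gadf = np.sin(phi[:, None] - phi[None, :])                   # difference field
    return gasf, gadf

# Example: a 200-band spectrum becomes two 200x200 images for the backbone network.
spectrum = np.random.rand(200)
gasf, gadf = gramian_angular_fields(spectrum)
```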
Show Figures

Figure 1: Schematic representation of the RS-Net architecture for hyperspectral image classification tasks.
Figure 2: Schematic diagram of the structure of the Gramian angular field.
Figure 3: CASE module.
Figure 4: Visualization of the evaluation indicators on the Houston HS dataset.
Figure 5: Color images, transformation and test labels, and classification maps obtained by the comparative methods on the Houston HS dataset.
Figure 6: Color images, transformation and test labels, and classification maps obtained by the comparative methods on the Indian Pines HS dataset.
Figure 7: Color images, transformation and test labels, and classification maps obtained by the comparative methods on the Pavia University HS dataset.
22 pages, 29294 KiB  
Article
Ghost Removal from Forward-Scan Sonar Views near the Sea Surface for Image Enhancement and 3-D Object Modeling
by Yuhan Liu and Shahriar Negahdaripour
Remote Sens. 2024, 16(20), 3814; https://doi.org/10.3390/rs16203814 - 14 Oct 2024
Viewed by 409
Abstract
Underwater sonar is the primary remote sensing and imaging modality within turbid environments with poor visibility. The two-dimensional (2-D) images of a target near the air–sea interface (or resting on a hard seabed), acquired by forward-scan sonar (FSS), are generally corrupted by the ghost and sometimes mirror components, formed by the multipath propagation of transmitted acoustic beams. In the processing of the 2-D FSS views to generate an accurate three-dimensional (3-D) object model, the corrupted regions have to be discarded. The sonar tilt angle and distance from the sea surface are two important parameters for the accurate localization of the ghost and mirror components. We propose a unified optimization technique for improving both the measurements of these two parameters from inexpensive sensors and the accuracy of a 3-D object model using 2-D FSS images at known poses. The solution is obtained by the recursive updating of sonar parameters and 3-D object model. Utilizing the 3-D object model, we can enhance the original images and generate synthetic views for arbitrary sonar poses. We demonstrate the performance of our method in experiments with the synthetic and real images of three targets: two dominantly convex coral rocks and a highly concave toy wood table. Full article
(This article belongs to the Topic Computer Vision and Image Processing, 2nd Edition)
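
The recursive updating described above alternates between refining the sonar depth/tilt estimates and deforming the 3-D model until the synthetic views stop getting closer to the data. A bare skeleton of such a loop, with every component left as a placeholder callable rather than the paper's actual renderer and update rules, is:

```python
import numpy as np

def refine_model_and_params(images, poses, model0, depth0, tilt0,
                            render, image_error, update_params, update_model,
                            n_outer=10, tol=1e-3):
    """Alternate between re-estimating sonar depth/tilt and deforming the
    3-D model, stopping when the data-vs-synthetic discrepancy stalls."""
    model, depth, tilt = model0, depth0, tilt0
    prev_err = np.inf
    for _ in range(n_outer):
        synth = [render(model, pose, depth, tilt) for pose in poses]
        err = np.mean([image_error(d, s) for d, s in zip(images, synth)])
        if prev_err - err < tol:                  # converged: error no longer drops
            break
        prev_err = err
        depth, tilt = update_params(images, synth, depth, tilt)              # step 1
        model = update_model(images, synth, model, poses, depth, tilt)       # step 2
    return model, depth, tilt
```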
Show Figures

Figure 1: The ghost component overlaps with, and is indistinguishable from, the object region in every view; the mirror component at the reference position overlaps with both the object and ghost regions, but as the sonar rotates about the viewing direction (from 0° to 67.5° in increments of 22.5°) it separates from the object and forms a distinct blob.
Figure 2: (a) For a sonar beam in direction θ, the image intensity I of pixel (x, y) depends on cumulative echoes from an unknown number of surface patches within a volume arriving at the receiver simultaneously; (b) a coral rock with the voxelated volume and triangular surface mesh of the SC solution; (c) virtual mirror object geometry, in which sound scattered at the surface point is specularly reflected at the water surface back toward the sonar, creating a virtual mirror point; (d) virtual ghost object geometry, the reverse pathway, which creates a ghost point along the sonar beam at a longer range.
Figure 3: (a) Block diagram of the entire algorithm; (b) steps in the 3-D shape optimization by displacement of model vertices, computed from 3-D vertex motions estimated from the 2-D image motions that align the object regions in the data and synthetic views.
Figure 4: (a) The 2-D vectors that align the frontal contours of the object and mirror regions in the real images with their counterparts in the synthetic views; (b) magnified view of the relevant regions.
Figure 5: Processing steps in the decomposition of the sonar data into object and ghost components: generating the synthetic object image, localizing the ghost and mirror components, segmenting overlapping and non-overlapping regions, reconstructing the overlapping object region via a synthetic-to-real LUT, and generating the ghost image.
Figure 6: Three targets (two dominantly convex coral rocks with mild local concavities and a highly concave wood table) with height, maximum width, and imaging conditions.
Figure 7: Coral-one experiment, synthetic and real data: optimization of the sonar depth and tilt parameters; image and volumetric errors moving in tandem, confirming 3-D model improvement; initialized and optimized SC models superimposed on the Kinect model.
Figure 8: Coral-two experiment: same set of results as Figure 7.
Figure 9: Wood table experiment: same set of results as Figure 7.
Figure 10: Coral-one experiment: data; data over the image region only; initial and optimized synthetic views generated by the 3-D model.
Figure 11: Coral-two experiment: data; data over the image region only; initial and optimized synthetic views generated by the 3-D model.
Figure 12: Wood table experiment: data; data within the object region only; initial and optimized synthetic views generated by the 3-D model.
Figure 13: Coral-one and coral-two views in which the object, ghost, and mirror components overlap (not used in the optimization): data; data within the object region only; initial and optimized synthetic views.
Figure 14: Wood table views in which the object, ghost, and mirror components overlap (not used in the optimization): data; data within the object region only; initial and optimized synthetic views.
19 pages, 463 KiB  
Article
Understanding Success: An Initial Investigation Considering the Alignment of University Branding with the Expectations of Future Students
by Helen O’Sullivan, Martyn Polkinghorne and Mike O’Sullivan
Adm. Sci. 2024, 14(10), 257; https://doi.org/10.3390/admsci14100257 - 13 Oct 2024
Viewed by 343
Abstract
This research investigates how university students define and perceive success, an area that is increasingly important to ensuring that a university’s brand remains aligned to the expectations of future students. Over the next decade, university students will comprise members of Generation Z (Gen Z), and by recognizing this group of students’ preferences and aspirations, universities can tailor their branding, educational portfolio, and overall campus experiences to ensure that together they resonate and satisfy evolving needs and demands. Using data based on a sample of Gen Z undergraduate students undertaking their degrees at three case study UK post-1992 universities, this research adopted an exploratory, interpretivist methodology. Data collected from semi-structured interviews were analyzed using recursive abstraction to identify underlying patterns and trends within the data. The research identified five key themes that Gen Z are using to define success, and these are the following: (1) being objective and task-driven; (2) embracing fluidity and subjectivity; (3) being ethically and morally responsible; (4) having resilience; and (5) accepting and learning from failure. Recommendations were made for actions that universities should start to take to enable them to work toward achieving this. Full article
Show Figures

Figure 1: Framework indicating Gen Z's interpretation of success.
18 pages, 4253 KiB  
Article
RSTSRN: Recursive Swin Transformer Super-Resolution Network for Mars Images
by Fanlu Wu, Xiaonan Jiang, Tianjiao Fu, Yao Fu, Dongdong Xu and Chunlei Zhao
Appl. Sci. 2024, 14(20), 9286; https://doi.org/10.3390/app14209286 - 12 Oct 2024
Viewed by 438
Abstract
High-resolution optical images will provide planetary geology researchers with finer and more microscopic image data information. In order to maximize scientific output, it is necessary to further increase the resolution of acquired images, so image super-resolution (SR) reconstruction techniques have become the best choice. Aiming at the problems of large parameter quantity and high computational complexity in current deep learning-based image SR reconstruction methods, we propose a novel Recursive Swin Transformer Super-Resolution Network (RSTSRN) for SR applied to images. The RSTSRN improves upon the LapSRN, which we use as our backbone architecture. A Residual Swin Transformer Block (RSTB) is used for more efficient residual learning, which consists of stacked Swin Transformer Blocks (STBs) with a residual connection. Moreover, the idea of parameter sharing was introduced to reduce the number of parameters, and a multi-scale training strategy was designed to accelerate convergence speed. Experimental results show that the proposed RSTSRN achieves superior performance on 2×, 4× and 8×SR tasks to state-of-the-art methods with similar parameters. Especially on high-magnification SR tasks, the RSTSRN has great performance superiority. Compared to the LapSRN network, for 2×, 4× and 8× Mars image SR tasks, the RSTSRN network has increased PSNR values by 0.35 dB, 0.88 dB and 1.22 dB, and SSIM values by 0.0048, 0.0114 and 0.0311, respectively. Full article
(This article belongs to the Special Issue Advances in Image Recognition and Processing Technologies)
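
The two ingredients named above, residual blocks whose weights are shared across recursions inside a LapSRN-style progressive upsampler, can be sketched in PyTorch; a plain convolutional block stands in for the Residual Swin Transformer Block, and all sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RecursiveSRSketch(nn.Module):
    """Sketch of recursive, weight-shared residual blocks inside a
    progressive (x2 per stage) super-resolution network."""
    def __init__(self, channels=64, recursions=4, levels=3):   # levels=3 -> x8
        super().__init__()
        self.head = nn.Conv2d(1, channels, 3, padding=1)
        self.shared_block = nn.Sequential(                      # weights reused
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.up = nn.Sequential(nn.Conv2d(channels, 4 * channels, 3, padding=1),
                                nn.PixelShuffle(2))              # x2 upsampling
        self.to_image = nn.Conv2d(channels, 1, 3, padding=1)
        self.recursions, self.levels = recursions, levels

    def forward(self, lr):
        feats, outputs = self.head(lr), []
        for _ in range(self.levels):                 # one x2 stage per level
            for _ in range(self.recursions):         # recursive, shared weights
                feats = feats + self.shared_block(feats)
            feats = self.up(feats)
            outputs.append(self.to_image(feats))     # prediction at each scale
        return outputs                               # [x2, x4, x8] outputs
```

Because the same block is applied at every recursion and level, the parameter count stays flat while the effective depth grows, which is the point of the parameter-sharing strategy mentioned in the abstract.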
Show Figures

Figure 1: RSTSRN network architecture.
Figure 2: Multi-scale training process.
Figure 3: Shifted window.
Figure 4: Examples of HiRISE images.
Figure 5: Visual comparison for 2× SR on the barbara image.
Figure 6: Visual comparison for 4× SR on the butterfly image.
Figure 7: Visual comparison for 8× SR on the img_040 image.
Figure 8: The 2× SR results of the 54th Mars image.
Figure 9: The 4× SR results of the 59th Mars image.
Figure 10: The 8× SR results of the 26th Mars image.
28 pages, 3798 KiB  
Article
Smooth Sigmoid Surrogate (SSS): An Alternative to Greedy Search in Decision Trees
by Xiaogang Su, George Ekow Quaye, Yishu Wei, Joseph Kang, Lei Liu, Qiong Yang, Juanjuan Fan and Richard A. Levine
Mathematics 2024, 12(20), 3190; https://doi.org/10.3390/math12203190 - 11 Oct 2024
Viewed by 575
Abstract
Greedy search (GS) or exhaustive search plays a crucial role in decision trees and their various extensions. We introduce an alternative splitting method called smooth sigmoid surrogate (SSS) in which the indicator threshold function used in GS is approximated by a smooth sigmoid function. This approach allows for parametric smoothing or regularization of the erratic and discrete GS process, making it more effective in identifying the true cutoff point, particularly in the presence of weak signals, as well as less prone to the inherent end-cut preference problem. Additionally, SSS provides a convenient means of evaluating the best split by referencing a parametric nonlinear model. Moreover, in many variants of recursive partitioning, SSS can be reformulated as a one-dimensional smooth optimization problem, rendering it computationally more efficient than GS. Extensive simulation studies and real data examples are provided to evaluate and demonstrate its effectiveness. Full article
(This article belongs to the Special Issue Statistics and Data Science)
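
The core SSS idea, replacing the indicator I{x ≤ c} with a sigmoid of steepness a so that the goodness of split becomes a smooth function of c that a one-dimensional optimizer (e.g., Brent's method) can handle, can be sketched as follows; the split criterion shown is a simplified regression-tree version, not the paper's exact statistic.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def soft_split_gain(c, x, y, a=50.0):
    """Smooth surrogate of the regression-tree goodness of split at cutoff c:
    the hard indicator I{x <= c} is replaced by a sigmoid with steepness a."""
    w = 1.0 / (1.0 + np.exp(-a * (c - x)))            # soft "left child" membership
    n_left, n_right = w.sum() + 1e-12, (1.0 - w).sum() + 1e-12
    y_left = (w * y).sum() / n_left
    y_right = ((1.0 - w) * y).sum() / n_right
    sse_parent = ((y - y.mean()) ** 2).sum()
    sse_children = (w * (y - y_left) ** 2).sum() + ((1.0 - w) * (y - y_right) ** 2).sum()
    return sse_parent - sse_children                   # larger = better split

def best_cutoff_sss(x, y, a=50.0):
    """One-dimensional smooth search for the cutoff (bounded Brent-type
    optimization) instead of scanning every observed value as in greedy search."""
    res = minimize_scalar(lambda c: -soft_split_gain(c, x, y, a),
                          bounds=(x.min(), x.max()), method="bounded")
    return res.x
```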
Show Figures

Figure 1: The goodness-of-split measure and its SSS approximation versus the permissible cutoff point c, for data of sample size n = 50 generated from Model A with β = 0.2; the GS optimum, the SSS optima for a = 1, …, 100, and the true cutoff point c₀ = 0.5 are marked.
Figure 2: The logistic function π{x; a} for a = 1, 2, …, 100, with the Heaviside step function H(x) = I{x ≤ 0} shown as the dark line.
Figure 3: Empirical density of the estimated cutpoint, SSS vs. GS, across ten scenarios combining sample sizes n ∈ {50, 500}, signal strengths β₁ ∈ {0, 0.2, 1}, and true cutoff points c₀ ∈ {0.5, 0.8}; each scenario uses 200 simulation runs.
Figure 4: MSE of the estimated cutpoint versus a in SSS, with the MSE from GS as a horizontal reference; panels cover true cutpoints c₀ = 0.5 and c₀ = 0.8.
Figure 5: Estimated cutoff point versus a = 1, 2, …, 100 for 100 simulation runs of Model A with β = 1, for the four combinations of n ∈ {50, 500} and c₀ ∈ {0.5, 0.8}; the true cutoff and the mean (and, in one panel, median) cutoff are superimposed.
Figure 6: Relative difference in MSE of SSS versus GS for sample sizes n ∈ {50, 100, 150, …, 1000}, using constant a ∈ {10, 30, 50} and n-adaptive choices {n, √n, ln(n)}; negative values indicate an advantage over GS.
Figure 7: Computing time comparison between GS and SSS for n ranging from 20 to 10,000: (a) CPU time versus sample size for GS and for SSS with a ∈ {1, …, 100} and a = √n; (b) average number of iterative steps of Brent's method versus n.
Figure 8: Frequencies (from 1000 simulation runs) of splitting variables selected by SSS vs. greedy search under a null model with nine variables X₁, …, X₉ having 2, 3, 4, 5, 10, 20, 50, 100, and 500 distinct values, for n = 50 and n = 500 and a ∈ {10, 30, 50}.
Figure 9: Percentages of correct selection (out of 500 simulation runs) by GS vs. SSS for (a) n = 50 and (b) n = 500, with data generated from Model B.
Figure 10: Analysis of the 1987 baseball salary data; S₁ and S₂ denote specific subsets of baseball teams, and each terminal node shows the mean log-transformed salary and the sample size.
Figure 11: Comparison of SSS and GS in finding the best cutoff point c = 0.5 for classification trees, based on 500 simulation runs: density curves of the estimated cutpoint for n = 50 and n = 500, a smoothed scatterplot of the GS versus SSS cutoffs for n = 50, and MSE versus a against the GS reference.
Figure 12: Comparison of SSS (combined with Boulesteix's method) and GS in terms of selection bias for classification trees, based on 500 simulation runs under a null model with n = 50 and n = 500.
Figure 13: Final classification trees for the credit card default data; the misclassification error rates are 0.1792 for the SSS tree and 0.1798 for the GS tree.
16 pages, 4048 KiB  
Article
Integrative Analysis of ATAC-Seq and RNA-Seq through Machine Learning Identifies 10 Signature Genes for Breast Cancer Intrinsic Subtypes
by Jeong-Woon Park and Je-Keun Rhee
Biology 2024, 13(10), 799; https://doi.org/10.3390/biology13100799 - 7 Oct 2024
Viewed by 784
Abstract
Breast cancer is a heterogeneous disease composed of various biologically distinct subtypes, each characterized by unique molecular features. Its formation and progression involve a complex, multistep process that includes the accumulation of numerous genetic and epigenetic alterations. Although integrating RNA-seq transcriptome data with [...] Read more.
Breast cancer is a heterogeneous disease composed of various biologically distinct subtypes, each characterized by unique molecular features. Its formation and progression involve a complex, multistep process that includes the accumulation of numerous genetic and epigenetic alterations. Although integrating RNA-seq transcriptome data with ATAC-seq epigenetic information provides a more comprehensive understanding of gene regulation and its impact across different conditions, no classification model has yet been developed for breast cancer intrinsic subtypes based on such integrative analyses. In this study, we employed machine learning algorithms to predict intrinsic subtypes through the integrative analysis of ATAC-seq and RNA-seq data. We identified 10 signature genes (CDH3, ERBB2, TYMS, GREB1, OSR1, MYBL2, FAM83D, ESR1, FOXC1, and NAT1) using recursive feature elimination with cross-validation (RFECV) and a support vector machine (SVM) based on SHAP (SHapley Additive exPlanations) feature importance. Furthermore, we found that these genes were primarily associated with immune responses, hormone signaling, cancer progression, and cellular proliferation. Full article
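As a rough illustration of the feature-selection step described in this abstract, the sketch below runs recursive feature elimination with cross-validation around a linear-kernel SVM using scikit-learn. It is only a minimal stand-in under stated assumptions: the authors rank features by SHAP importance rather than the SVM coefficients used here, and the synthetic matrix X merely takes the place of the integrated RNA-seq/ATAC-seq features.

```python
# Minimal sketch of recursive feature elimination with cross-validation (RFECV)
# wrapped around an SVM, in the spirit of the pipeline described above.
# Assumptions: scikit-learn is available; X (samples x genes) and y (intrinsic
# subtypes) are synthetic placeholders; the paper ranks features by SHAP values,
# whereas sklearn's RFECV shown here ranks by the linear SVM's coefficients.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

# Placeholder data standing in for the integrated expression/accessibility matrix.
X, y = make_classification(n_samples=200, n_features=50, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

selector = RFECV(
    estimator=SVC(kernel="linear"),   # linear kernel exposes coef_ for ranking
    step=1,                           # drop one feature per elimination round
    cv=StratifiedKFold(n_splits=5),
    scoring="accuracy",
)
selector.fit(X, y)

print("optimal number of features:", selector.n_features_)
print("selected feature indices:", np.where(selector.support_)[0])
```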
(This article belongs to the Special Issue Advances in Biological Breast Cancer Research)
Show Figures

Figure 1

Figure 1
<p>Scatter plot of gene expression and promoter accessibility for 10 PAM50 signature genes. Spearman correlation analysis was performed, considering a correlation coefficient ≥ 0.65 and a corresponding <span class="html-italic">p</span>-value &lt; 0.01 as significant. Among the significantly correlated genes, 10 overlapped with the PAM50 gene signature: (<b>a</b>) <span class="html-italic">PHGDH</span>, (<b>b</b>) <span class="html-italic">MDM2</span>, (<b>c</b>) <span class="html-italic">CDH3</span>, (<b>d</b>) <span class="html-italic">MAPT</span>, (<b>e</b>) <span class="html-italic">ERBB2</span>, (<b>f</b>) <span class="html-italic">TYMS</span>, (<b>g</b>) <span class="html-italic">MYBL2</span>, (<b>h</b>) <span class="html-italic">ESR1</span>, (<b>i</b>) <span class="html-italic">FOXC1</span>, and (<b>j</b>) <span class="html-italic">NAT1</span>. Each point is color-coded according to the intrinsic subtype of breast cancer. Abbreviations: Basal, Basal-like; LumA, Luminal A; LumB, Luminal B; Her2, Her2-enriched.</p>
Full article ">Figure 1 Cont.
<p>Scatter plot of gene expression and promoter accessibility for 10 PAM50 signature genes. Spearman correlation analysis was performed, considering a correlation coefficient ≥ 0.65 and a corresponding <span class="html-italic">p</span>-value &lt; 0.01 as significant. Among the significantly correlated genes, 10 overlapped with the PAM50 gene signature: (<b>a</b>) <span class="html-italic">PHGDH</span>, (<b>b</b>) <span class="html-italic">MDM2</span>, (<b>c</b>) <span class="html-italic">CDH3</span>, (<b>d</b>) <span class="html-italic">MAPT</span>, (<b>e</b>) <span class="html-italic">ERBB2</span>, (<b>f</b>) <span class="html-italic">TYMS</span>, (<b>g</b>) <span class="html-italic">MYBL2</span>, (<b>h</b>) <span class="html-italic">ESR1</span>, (<b>i</b>) <span class="html-italic">FOXC1</span>, and (<b>j</b>) <span class="html-italic">NAT1</span>. Each point is color-coded according to the intrinsic subtype of breast cancer. Abbreviations: Basal, Basal-like; LumA, Luminal A; LumB, Luminal B; Her2, Her2-enriched.</p>
Full article ">Figure 2
<p>Recursive feature elimination using SHAP feature importance to find the optimal genes for the support vector machine. The plot shows the accuracy score as a function of the number of selected features (from 1 to 813 genes), based on 1939 training samples derived from the GSE96058 dataset.</p>
Full article ">Figure 3
<p>Logistic regression model training and evaluation. (<b>a</b>) UMAP plot showing the gene expression profiles of the 10 genes selected via feature selection in the 1939 training samples derived from the GSE96058 data. (<b>b</b>) A confusion matrix showing the consistency between the actual intrinsic subtype and the intrinsic subtype predicted by the logistic regression model on the 831 test samples. The color axis presents the number of samples in each subtype. Abbreviations: Basal, Basal-like; LumA, Luminal A; LumB, Luminal B; Her2, Her2-enriched.</p>
Full article ">Figure 4
<p>Logistic regression model training and evaluation using GDC TCGA-BRCA dataset. (<b>a</b>) UMAP plot displaying gene expression profiles of 10 selected genes in 667 training samples from GDC TCGA-BRCA data. (<b>b</b>) Confusion matrix illustrating consistency between actual and predicted intrinsic subtypes by logistic regression model on 286 GDC TCGA-BRCA test samples. Abbreviations: Basal, Basal-like; LumA, Luminal A; LumB, Luminal B; Her2, Her2-enriched.</p>
Full article ">Figure 5
<p>Enrichment analysis using the 10 genes selected via feature selection. (<b>a</b>) Significantly enriched GO biological processes with <span class="html-italic">q</span>-value &lt; 0.05. (<b>b</b>) Significantly enriched KEGG pathways with <span class="html-italic">q</span>-value &lt; 0.1. The gene ratio is the number of genes overlapping between the uploaded genes and those in the pathway category, divided by the number of genes. Categories are sorted by <span class="html-italic">q</span>-value, which is also expressed by color.</p>
Full article ">
28 pages, 890 KiB  
Article
State Estimation for Measurement-Saturated Memristive Neural Networks with Missing Measurements and Mixed Time Delays Subject to Cyber-Attacks: A Non-Fragile Set-Membership Filtering Framework
by Ziyang Wang, Peidong Wang, Jiasheng Wang, Peng Lou and Juan Li
Appl. Sci. 2024, 14(19), 8936; https://doi.org/10.3390/app14198936 - 4 Oct 2024
Viewed by 500
Abstract
This paper is concerned with the state estimation problem based on non-fragile set-membership filtering for a class of measurement-saturated memristive neural networks (MNNs) with unknown but bounded (UBB) noises, mixed time delays and missing measurements (MMs), subject to cyber-attacks under the framework of [...] Read more.
This paper is concerned with the state estimation problem based on non-fragile set-membership filtering for a class of measurement-saturated memristive neural networks (MNNs) with unknown but bounded (UBB) noises, mixed time delays and missing measurements (MMs), subject to cyber-attacks under the framework of the weighted try-once-discard (WTOD) protocol. For bandwidth-limited open networks, an improved set-membership filtering scheme based on the WTOD protocol is proposed to mitigate the degradation of MNN state estimation performance caused by multiple sensor-related issues and network-induced phenomena. The paper also accounts for gain perturbations of the estimator and proposes an improved non-fragile estimation framework based on set-membership filtering, which enhances the robustness of the estimation approach. The proposed framework can effectively estimate the state of MNNs subject to UBB noises, estimator gain perturbations, mixed time delays, cyber-attacks, measurement saturations and MMs. Sufficient conditions for the existence of the desired estimator are first derived by mathematical induction, and the estimator gain is obtained by solving a set of linear matrix inequalities. A recursive optimization algorithm is then utilized to achieve optimal estimation performance. The effectiveness of the theoretical results is verified by comparative numerical simulation examples. Full article
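To make the set-membership idea above concrete, the LaTeX fragment below sketches the generic ellipsoidal formulation such filters rely on; the symbols P_k, R and w_k are generic placeholders and are not taken from this paper, whose actual design conditions are expressed as linear matrix inequalities under the WTOD protocol.

```latex
% Generic ellipsoidal set-membership estimation setup (a sketch, not the
% paper's specific conditions): with unknown-but-bounded (UBB) noise, the
% estimator is designed so that the estimation error stays inside a
% prescribed ellipsoid at every time step.
\begin{aligned}
  &\text{UBB noise:} && w_k^{\top} R^{-1} w_k \le 1,\\
  &\text{error ellipsoid:} && e_k^{\top} P_k^{-1} e_k \le 1,
     \qquad e_k = x_k - \hat{x}_k,\\
  &\text{recursive design goal:} &&
     e_k^{\top} P_k^{-1} e_k \le 1 \;\Longrightarrow\;
     e_{k+1}^{\top} P_{k+1}^{-1} e_{k+1} \le 1 .
\end{aligned}
```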
Show Figures

Figure 1

Figure 1
<p>Schematic diagram of sector-bounded nonlinear saturation function.</p>
Full article ">Figure 2
<p>Schematic diagram of measurement saturation, MMs, WTOD protocol and cyber-attacks.</p>
Full article ">Figure 3
<p>True state and its estimated values based on our approach and other approaches from [<a href="#B11-applsci-14-08936" class="html-bibr">11</a>,<a href="#B27-applsci-14-08936" class="html-bibr">27</a>,<a href="#B55-applsci-14-08936" class="html-bibr">55</a>].</p>
Full article ">Figure 4
<p>True state and its estimated values based on our approach and other approaches from [<a href="#B11-applsci-14-08936" class="html-bibr">11</a>,<a href="#B27-applsci-14-08936" class="html-bibr">27</a>,<a href="#B55-applsci-14-08936" class="html-bibr">55</a>].</p>
Full article ">Figure 5
<p>Occurrences of bias injection attacks.</p>
Full article ">Figure 6
<p>Occurrences of MMs on each sensor node.</p>
Full article ">Figure 7
<p>True state and its estimated values based on our approach and other approaches from [<a href="#B11-applsci-14-08936" class="html-bibr">11</a>,<a href="#B27-applsci-14-08936" class="html-bibr">27</a>,<a href="#B55-applsci-14-08936" class="html-bibr">55</a>].</p>
Full article ">Figure 8
<p>True state and its estimated values based on our approach and other approaches from [<a href="#B11-applsci-14-08936" class="html-bibr">11</a>,<a href="#B27-applsci-14-08936" class="html-bibr">27</a>,<a href="#B55-applsci-14-08936" class="html-bibr">55</a>].</p>
Full article ">Figure 9
<p>Selected nodes to transmit the latest data under the WTOD protocol.</p>
Full article ">Figure 10
<p>Selected nodes to transmit the latest data under the RR protocol.</p>
Full article ">Figure 11
<p>True state and its estimated values based on our approach and the approach from [<a href="#B55-applsci-14-08936" class="html-bibr">55</a>].</p>
Full article ">Figure 12
<p>True state and its estimated values based on our approach and the approach from [<a href="#B55-applsci-14-08936" class="html-bibr">55</a>].</p>
Full article ">Figure 13
<p>True state and its estimated values based on our approach and the approach from [<a href="#B55-applsci-14-08936" class="html-bibr">55</a>].</p>
Full article ">
19 pages, 3076 KiB  
Article
Three-Stage Recursive Learning Technique for Face Mask Detection on Imbalanced Datasets
by Chi-Yi Tsai, Wei-Hsuan Shih and Humaira Nisar
Mathematics 2024, 12(19), 3104; https://doi.org/10.3390/math12193104 - 4 Oct 2024
Viewed by 627
Abstract
In response to the COVID-19 pandemic, governments worldwide have implemented mandatory face mask regulations in crowded public spaces, making the development of automatic face mask detection systems critical. To achieve robust face mask detection performance, a high-quality and comprehensive face mask dataset is [...] Read more.
In response to the COVID-19 pandemic, governments worldwide have implemented mandatory face mask regulations in crowded public spaces, making the development of automatic face mask detection systems critical. To achieve robust face mask detection performance, a high-quality and comprehensive face mask dataset is required. However, due to the difficulty of obtaining masked face samples in the real world, public face mask datasets are often imbalanced, leading to the data imbalance problem in model training and negatively impacting detection performance. To address this problem, this paper proposes a novel recursive model-training technique designed to improve detection accuracy on imbalanced datasets. The proposed method recursively splits and merges the dataset based on the attribute characteristics of different classes, enabling more balanced and effective model training. Our approach demonstrates that the carefully designed splitting and merging of datasets can significantly enhance model-training performance. This method was evaluated using two imbalanced datasets. The experimental results show that the proposed recursive learning technique achieves a percentage increase (PI) of 84.5% in mean average precision (mAP@0.5) on the Kaggle dataset and of 186.3% on the Eden dataset compared to traditional supervised learning. Additionally, when combined with existing oversampling techniques, the PI on the Kaggle dataset further increases to 88.9%, highlighting the potential of the proposed method for improving detection accuracy in highly imbalanced datasets. Full article
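For reference, the percentage increase (PI) reported above is the usual relative improvement over the baseline; the snippet below shows the computation with hypothetical mAP@0.5 values, since the abstract reports only the PI itself.

```python
# Percentage increase (PI) of a proposed method over a baseline, as used above:
# PI = (proposed - baseline) / baseline * 100.
# The mAP@0.5 values below are hypothetical; the abstract only reports the PI.
def percentage_increase(baseline: float, proposed: float) -> float:
    """Relative improvement of `proposed` over `baseline`, in percent."""
    return (proposed - baseline) / baseline * 100.0

baseline_map = 0.30    # hypothetical mAP@0.5 of traditional supervised learning
proposed_map = 0.5535  # hypothetical mAP@0.5 of the recursive learning method

print(f"PI = {percentage_increase(baseline_map, proposed_map):.1f}%")  # -> 84.5%
```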
(This article belongs to the Special Issue Advances in Algorithm Design and Machine Learning)
Show Figures

Figure 1

Figure 1
<p>Three conditions of face mask-wearing: (<b>a</b>) correct mask-wearing, (<b>b</b>) no mask-wearing, and (<b>c</b>) incorrect mask-wearing.</p>
Full article ">Figure 2
<p>Comparison of (<b>a</b>) the traditional supervised learning and (<b>b</b>) the proposed recursive learning method. The proposed recursive learning method incorporates dataset manipulation processing into the model-training process to train the model recursively.</p>
Full article ">Figure 3
<p>Concept of the proposed dataset split-and-merge processing for recursive learning.</p>
Full article ">Figure 4
<p>Illustration of the proposed three-stage recursive learning method combined with dataset split-and-merge processing.</p>
Full article ">Figure 5
<p>Flowchart of the proposed recursive learning method.</p>
Full article ">Figure 6
<p>Illustration of the distances C and D between the ground truth A and the predicted B bounding boxes.</p>
Full article ">Figure 7
<p>Experimental results of (<b>a</b>) the supervised learning and (<b>b</b>) the proposed Over-R-S1S2S3 learning method on the Kaggle test set, along with (<b>c</b>) and (<b>d</b>), the corresponding zoom-in results.</p>
Full article ">Figure 8
<p>Experimental results of (<b>a</b>) the supervised learning and (<b>b</b>) the proposed Over-R-S1S2S3 learning method on the Kaggle test set, along with (<b>c</b>) and (<b>d</b>), the corresponding zoom-in results.</p>
Full article ">
16 pages, 1020 KiB  
Article
Tight 9-Cycle Decompositions of λ-Fold Complete 3-Uniform Hypergraphs
by Hongtao Zhao and Jianxiao Gu
Mathematics 2024, 12(19), 3101; https://doi.org/10.3390/math12193101 - 3 Oct 2024
Viewed by 354
Abstract
For 2 ≤ t ≤ m, let Z_m denote the group of integers modulo m, and let TC_m^(t) denote the t-uniform hypergraph with vertex set Z_m and hyperedge set [...] Read more.
For 2 ≤ t ≤ m, let Z_m denote the group of integers modulo m, and let TC_m^(t) denote the t-uniform hypergraph with vertex set Z_m and hyperedge set {{i, i+1, i+2, …, i+t−1} : i ∈ Z_m}. Any hypergraph isomorphic to TC_m^(t) is a t-uniform tight m-cycle. In this paper, we consider the existence of tight 9-cycle decompositions of λ-fold complete 3-uniform hypergraphs. The required designs of small orders are found by means of recursive constructions, and hypergraphs of larger orders can then be generated recursively from these small-order designs. We thereby obtain the necessary and sufficient conditions for the existence of a TC_9^(3)-decomposition of λK_n^(3). We show there exists a TC_9^(3)-decomposition of λK_n^(3) if and only if λn(n−1)(n−2) ≡ 0 (mod 54), λ(n−1)(n−2) ≡ 0 (mod 6) and n ≥ 9. Full article
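The divisibility conditions in this result are easy to test mechanically; the short sketch below checks whether a pair (n, λ) satisfies the stated criterion (the function name and the example loop are ours, not the paper's).

```python
# Check the necessary and sufficient conditions stated above for a
# TC_9^(3)-decomposition of the lambda-fold complete 3-uniform hypergraph:
#   lambda*n(n-1)(n-2) ≡ 0 (mod 54),  lambda*(n-1)(n-2) ≡ 0 (mod 6),  n >= 9.
def admits_tight_9_cycle_decomposition(n: int, lam: int) -> bool:
    return (
        n >= 9
        and (lam * n * (n - 1) * (n - 2)) % 54 == 0
        and (lam * (n - 1) * (n - 2)) % 6 == 0
    )

# Example: list the admissible orders n <= 30 for the simple case lambda = 1.
print([n for n in range(9, 31) if admits_tight_9_cycle_decomposition(n, 1)])
```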
Show Figures

Figure 1

Figure 1
<p>The 3-uniform tight 9-cycle <math display="inline"><semantics> <mrow> <mi>T</mi> <msubsup> <mi>C</mi> <mn>9</mn> <mrow> <mo>(</mo> <mn>3</mn> <mo>)</mo> </mrow> </msubsup> </mrow> </semantics></math> denoted <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>a</mi> <mo>,</mo> <mi>b</mi> <mo>,</mo> <mi>c</mi> <mo>,</mo> <mi>d</mi> <mo>,</mo> <mi>e</mi> <mo>,</mo> <mi>f</mi> <mo>,</mo> <mi>g</mi> <mo>,</mo> <mi>h</mi> <mo>,</mo> <mi>i</mi> <mo>)</mo> </mrow> </semantics></math>.</p>
Full article ">Figure 2
<p>The seven blocks generated by <math display="inline"><semantics> <mrow> <mo>(</mo> <mn>0</mn> <mo>,</mo> <mn>1</mn> <mo>,</mo> <mn>2</mn> <mo>,</mo> <msub> <mo>∞</mo> <mn>0</mn> </msub> <mo>,</mo> <mn>5</mn> <mo>,</mo> <msub> <mo>∞</mo> <mn>1</mn> </msub> <mo>,</mo> <mn>4</mn> <mo>,</mo> <mn>6</mn> <mo>,</mo> <mn>3</mn> <mo>)</mo> </mrow> </semantics></math> through modular addition calculation.</p>
Full article ">
35 pages, 970 KiB  
Article
Using Symmetries to Investigate the Complete Integrability, Solitary Wave Solutions and Solitons of the Gardner Equation
by Willy Hereman and Ünal Göktaş
Math. Comput. Appl. 2024, 29(5), 91; https://doi.org/10.3390/mca29050091 - 3 Oct 2024
Viewed by 359
Abstract
In this paper, using a scaling symmetry, it is shown how to compute polynomial conservation laws, generalized symmetries, recursion operators, Lax pairs, and bilinear forms of polynomial nonlinear partial differential equations, thereby establishing their complete integrability. The Gardner equation is chosen as the [...] Read more.
In this paper, using a scaling symmetry, it is shown how to compute polynomial conservation laws, generalized symmetries, recursion operators, Lax pairs, and bilinear forms of polynomial nonlinear partial differential equations, thereby establishing their complete integrability. The Gardner equation is chosen as the key example, as it comprises both the Korteweg–de Vries and modified Korteweg–de Vries equations. The Gardner and Miura transformations, which connect these equations, are also computed using the concept of scaling homogeneity. Exact solitary wave solutions and solitons of the Gardner equation are derived using Hirota’s method and other direct methods. The nature of these solutions depends on the sign of the cubic term in the Gardner equation and the underlying mKdV equation. It is shown that flat (table-top) waves of large amplitude only occur when the sign of the cubic nonlinearity is negative (defocusing case), whereas the focusing Gardner equation has standard elastically colliding solitons. This paper’s aim is to provide a review of the integrability properties and solutions of the Gardner equation and to illustrate the applicability of the scaling symmetry approach. The methods and algorithms used in this paper have been implemented in Mathematica, but can be adapted for major computer algebra systems. Full article
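For orientation, the Gardner equation referred to above is commonly written in the following combined KdV–mKdV form; the coefficient convention below is the generic one and may differ slightly from the parameterization used in the paper.

```latex
% A commonly used form of the Gardner (combined KdV--mKdV) equation; the
% coefficient convention here is generic and may differ from the paper's.
u_t + \alpha\, u\, u_x + \beta\, u^2 u_x + u_{xxx} = 0,
\qquad
\begin{cases}
  \beta = 0: & \text{Korteweg--de Vries (KdV) equation},\\[2pt]
  \alpha = 0: & \text{modified KdV (mKdV) equation}.
\end{cases}
```

The sign of the cubic coefficient β then distinguishes the focusing and defocusing regimes discussed in the abstract.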
(This article belongs to the Special Issue Symmetry Methods for Solving Differential Equations)
Show Figures

Figure 1

Figure 1
<p>Graphs of the solitary wave (solid line) and cnoidal wave (dashed line) solutions for <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>6</mn> <mo>,</mo> <mi>k</mi> <mo>=</mo> <mn>2</mn> <mo>,</mo> <mspace width="0.166667em"/> <mi>m</mi> <mo>=</mo> <mstyle scriptlevel="0" displaystyle="false"> <mfrac> <mn>9</mn> <mn>10</mn> </mfrac> </mstyle> <mo>,</mo> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>δ</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 2
<p>Graphs of (<a href="#FD145-mca-29-00091" class="html-disp-formula">145</a>) (<b>left</b>) and (<a href="#FD146-mca-29-00091" class="html-disp-formula">146</a>) (<b>right</b>) both with the minus signs in front of the square roots, and both for <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>12</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>β</mi> <mo>=</mo> <mn>6</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>δ</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math>. The curves on the right correspond to <math display="inline"><semantics> <mrow> <mi>m</mi> <mo>=</mo> <mn>0.25</mn> </mrow> </semantics></math> (dashed line) and <math display="inline"><semantics> <mrow> <mi>m</mi> <mo>=</mo> <mn>0.9</mn> </mrow> </semantics></math> (solid line).</p>
Full article ">Figure 3
<p>Graphs of (<a href="#FD149-mca-29-00091" class="html-disp-formula">149</a>) (<b>left</b>) and (<a href="#FD150-mca-29-00091" class="html-disp-formula">150</a>) (<b>right</b>), both for <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>12</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>β</mi> <mo>=</mo> <mo>−</mo> <mn>6</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>δ</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math>. The curves on the right correspond to <math display="inline"><semantics> <mrow> <mi>m</mi> <mo>=</mo> <mn>0.25</mn> </mrow> </semantics></math> (dashed line) and <math display="inline"><semantics> <mrow> <mi>m</mi> <mo>=</mo> <mn>0.9</mn> </mrow> </semantics></math> (solid line).</p>
Full article ">Figure 4
<p>Graphs of (<a href="#FD151-mca-29-00091" class="html-disp-formula">151</a>) for <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>0.625</mn> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>0.75</mn> </mrow> </semantics></math> (<b>left</b>), and <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>0.9999</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>0.999995</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>0.9999995</mn> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> (<b>right</b>). The solitary wave becomes taller and narrower as the value of <span class="html-italic">k</span> increases.</p>
Full article ">Figure 5
<p>Graphs of (<a href="#FD151-mca-29-00091" class="html-disp-formula">151</a>) (full line) and (<a href="#FD153-mca-29-00091" class="html-disp-formula">153</a>) (dashed line) for four different values of <span class="html-italic">k</span>.</p>
Full article ">Figure 6
<p>Graph of the two-soliton solution (<a href="#FD155-mca-29-00091" class="html-disp-formula">155</a>) of the focusing Gardner equation at three different moments in time.</p>
Full article ">Figure 7
<p>Bird’s eye view of a two-soliton collision for the focusing Gardner equation. Notice the phase shift after collision: the taller (faster) soliton is shifted forward and the shorter (slower) soliton backward relative to where they would have been if they had not collided.</p>
Full article ">Figure 8
<p>2D and 3D graphs of solution (<a href="#FD156-mca-29-00091" class="html-disp-formula">156</a>) for <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>6</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>β</mi> <mo>=</mo> <mo>−</mo> <mn>6</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>δ</mi> <mo>=</mo> <mn>0.65</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>0</mn> </msub> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 9
<p>2D and 3D graphs of solution (<a href="#FD156-mca-29-00091" class="html-disp-formula">156</a>) for <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>6</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>β</mi> <mo>=</mo> <mo>−</mo> <mn>6</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>δ</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>0</mn> </msub> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>.</p>
Full article ">