Search Results (2,978)

Search Parameters:
Keywords = fine scale

18 pages, 6748 KiB  
Article
FD-Net: A Single-Stage Fire Detection Framework for Remote Sensing in Complex Environments
by Jianye Yuan, Haofei Wang, Minghao Li, Xiaohan Wang, Weiwei Song, Song Li and Wei Gong
Remote Sens. 2024, 16(18), 3382; https://doi.org/10.3390/rs16183382 - 11 Sep 2024
Abstract
Fire detection is crucial due to the exorbitant annual toll on both human lives and the economy resulting from fire-related incidents. To enhance forest fire detection in complex environments, we propose a new algorithm called FD-Net. Firstly, to improve detection performance, we introduce a Fire Attention (FA) mechanism that utilizes the position information from feature maps. Secondly, to prevent geometric distortion during image cropping, we propose a Three-Scale Pooling (TSP) module. Lastly, we fine-tune the YOLOv5 network and incorporate a new Fire Fusion (FF) module to enhance the network’s precision in identifying fire targets. Through qualitative and quantitative comparisons, we found that FD-Net outperforms current state-of-the-art algorithms on both fire and fire-and-smoke datasets. This further demonstrates FD-Net’s effectiveness for application in fire detection.
Show Figures

Figure 1. Part of the challenging fire dataset. (a) Small object fire dataset. (b) Negative example fire dataset. (c) Occluded fire dataset.
Figure 2. FA mechanism network architecture.
Figure 3. TSP module structure.
Figure 4. FF module network structure diagram.
Figure 5. Prediction Head model structure diagram.
Figure 6. The FD-Net fire detection architecture. We give the output value of each convolution layer from front to back. For instance, “CBS: 320*320*64” indicates that the output length and width of “CBS” are 320, and the number of output channels is 64. Instead of using various convolution blocks, we utilize different colors; each time we input 64 images into the network, the input image dimensions are 640*640. The splicing method between each module is similar to YOLOv5.
Figure 7. Target box visualization. (a) Coordinate map of the target frame’s center point; (b) the length and width of the frame target box.
Figure 8. Graph of anchors.
Figure 9. Size of the bounding box.
Figure 10. Analysis of the fire dataset’s training effects.
Figure 11. Qualitative comparison results.
Figure 12. Visualization of the bounding boxes.
Figure 13. Qualitative comparison results.
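The FA mechanism is only characterized at a high level in this listing: attention weights derived from the position information of feature maps. As a rough orientation, the following is a minimal coordinate-attention-style block in PyTorch; the module name, reduction ratio, and layer choices are illustrative assumptions, not the authors’ implementation.

```python
import torch
import torch.nn as nn

class PositionAwareAttention(nn.Module):
    """Sketch of a position-aware attention block: pool along H and W
    separately so the attention weights retain coordinate information."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        mid = max(channels // reduction, 8)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # keep height axis
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # keep width axis
        self.reduce = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1), nn.ReLU(inplace=True))
        self.attn_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.attn_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Directional pooling: (B,C,H,1) and (B,C,W,1), stacked spatially
        xh = self.pool_h(x)                       # (B, C, H, 1)
        xw = self.pool_w(x).permute(0, 1, 3, 2)   # (B, C, W, 1)
        y = self.reduce(torch.cat([xh, xw], dim=2))
        yh, yw = torch.split(y, [h, w], dim=2)
        ah = torch.sigmoid(self.attn_h(yh))                      # (B,C,H,1)
        aw = torch.sigmoid(self.attn_w(yw.permute(0, 1, 3, 2)))  # (B,C,1,W)
        return x * ah * aw  # position-dependent re-weighting

x = torch.randn(1, 64, 80, 80)
print(PositionAwareAttention(64)(x).shape)  # torch.Size([1, 64, 80, 80])
```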
17 pages, 110874 KiB  
Article
RT-CBAM: Refined Transformer Combined with Convolutional Block Attention Module for Underwater Image Restoration
by Renchuan Ye, Yuqiang Qian and Xinming Huang
Sensors 2024, 24(18), 5893; https://doi.org/10.3390/s24185893 - 11 Sep 2024
Abstract
Recently, transformers have demonstrated notable improvements in natural advanced visual tasks. In the field of computer vision, transformer networks are beginning to supplant conventional convolutional neural networks (CNNs) due to their global receptive field and adaptability. Although transformers excel in capturing global features, they lag behind CNNs in handling fine local features, especially when dealing with underwater images containing complex and delicate structures. In order to tackle this challenge, we propose a refined transformer model by improving the feature blocks (dilated transformer block) to more accurately compute attention weights, enhancing the capture of both local and global features. Subsequently, a self-supervised method (a local and global blind-patch network) is embedded in the bottleneck layer, which can aggregate local and global information to enhance detail recovery and improve texture restoration quality. Additionally, we introduce a multi-scale convolutional block attention module (MSCBAM) to connect encoder and decoder features; this module enhances the feature representation of color channels, aiding in the restoration of color information in images. We plan to deploy this deep learning model onto the sensors of underwater robots for real-world underwater image-processing and ocean exploration tasks. Our model is named the refined transformer combined with convolutional block attention module (RT-CBAM). This study compares two traditional methods and six deep learning methods, and our approach achieved the best results in terms of detail processing and color restoration.
(This article belongs to the Section Sensing and Imaging)
Show Figures

Figure 1. The diagram illustrates the complete architecture of the RT-CBAM model. This model consists of a multi-scale hierarchical design of refined dilated transformer blocks. It also includes a convolutional block attention module to enhance feature representation capabilities and a local and global blind-patch network for efficient feature extraction and fusion.
Figure 2. The overall structure of the enhanced transformer module comprises two components: the self-attention mechanism and the feed-forward network. Enhancing the self-attention mechanism notably boosts the feature representation capability of this module.
Figure 3. Visual comparisons of restoration results sampled from the Test-L400 and Test-E120 datasets, shown from left to right: original underwater image, UDCP [19], Retinex-based [18], FUnIE-GAN [23], UGAN [9], Waternet [22], U-Trans [13], our proposed RT-CBAM, and the reference image.
Figure 4. The restoration results sampled from Test-U60, with images presented from left to right as follows: original underwater image, FUnIE [23], UDCP [19], Retinex-based [18], UGan [9], WaterNet [22], U-Trans [13], and the proposed RT-CBAM.
Figure 5. A visual comparison of the restoration results sampled from Test-Seathru, with selected images being high-resolution (1280 × 1280 pixels). The images, from left to right, are the original underwater image, FUnIE [23], UDCP [19], Retinex-based [18], UGan [9], WaterNet [22], U-Trans [13], and the proposed RT-CBAM.
Figure 6. Qualitative comparison on the UIEB dataset. The restoration results obtained by our algorithm exhibit more pleasing contrast and more precise textures.
Figure 7. Evaluation of detail restoration in high-resolution images. From left to right, the images are the original underwater image, Retinex-based [18], U-Trans [13], FUnIE-Gan [23], RAUNE-Net [27], UGan [26], Waternet [22], and our proposed RT-CBAM.
Figure 8. Visual comparison and evaluation of color restoration performance selected from the Color-Checker7 dataset.
Figure 9. Visual comparison of the ablation study sampled from Test-E120 and Test-U60. The left side represents full-reference evaluation, and the right side represents no-reference evaluation.
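The MSCBAM block extends the well-known convolutional block attention module (CBAM) with multi-scale convolutions to bridge encoder and decoder features. For orientation, here is a minimal sketch of the underlying CBAM idea, sequential channel and spatial attention; the multi-scale extension and all hyperparameters are assumed or omitted, so this is not the paper’s code.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Minimal CBAM sketch: channel attention followed by spatial attention."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel attention from global avg- and max-pooled descriptors
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-wise average and max maps
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

feat = torch.randn(2, 32, 64, 64)
print(CBAM(32)(feat).shape)  # torch.Size([2, 32, 64, 64])
```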
25 pages, 2705 KiB  
Review
Advancements in the Application of CO2 Capture and Utilization Technologies—A Comprehensive Review
by Queendarlyn Adaobi Nwabueze and Smith Leggett
Fuels 2024, 5(3), 508-532; https://doi.org/10.3390/fuels5030028 - 11 Sep 2024
Abstract
Addressing escalating energy demands and greenhouse gas emissions in the oil and gas industry has driven extensive efforts in carbon capture and utilization (CCU), focusing on power plants and industrial facilities. However, utilizing CO2 as a raw material to produce valuable chemicals, materials, and fuels for transportation may offer a more sustainable and long-term solution than sequestration alone. This approach also presents promising alternatives to traditional chemical feedstock in industries such as fine chemicals, pharmaceuticals, and polymers. This review comprehensively outlines the current state of CO2 capture technologies, exploring the associated challenges and opportunities regarding their efficiency and economic feasibility. Specifically, it examines the potential of technologies such as chemical looping, membrane separation, and adsorption processes, which are advancing the frontiers of CO2 capture by enhancing efficiency and reducing costs. Additionally, it explores the various methods of CO2 utilization, highlighting the potential benefits and applications. These methods hold potential for producing high-value chemicals and materials, offering new pathways for industries to reduce their carbon footprint. The integration of CO2 capture and utilization is also examined, emphasizing its potential as a cost-effective and efficient approach that mitigates climate change while converting CO2 into a valuable resource. Finally, the review outlines the challenges in designing, developing, and scaling up CO2 capture and utilization processes, providing a comprehensive perspective on the technical and economic challenges that need to be addressed. It provides a roadmap for technologies, suggesting that their successful deployment could result in significant environmental benefits and encourage innovation in sustainable practices within the energy and chemical sectors.
Show Figures

Figure 1. Different capture processes studied in both the industry and academia.
Figure 2. Different methods of utilizing CO2.
Figure 3. Illustration of the CO2-enhanced oil/gas recovery process [92].
Figure 4. Illustration of the reservoir and surface components involved in the process of CO2-enhanced water/brine recovery [105].
Figure 5. Pathways of the formation of coke and deactivation in the process of converting methanol [116].
Figure 6. Schematic of the combined process of capture and utilization of carbon [158].
19 pages, 5700 KiB  
Article
Molecular and Morphological Evidence for the Description of Three Novel Velvet Worm Species (Onychophora: Peripatopsidae: Peripatopsis sedgwicki s.s.) from South Africa
by Aaron Barnes and Savel R. Daniels
Diversity 2024, 16(9), 566; https://doi.org/10.3390/d16090566 - 11 Sep 2024
Abstract
During the present study, DNA sequence and morphological data were used to delineate species boundaries in the velvet worm Peripatopsis sedgwicki species complex. The combined mitochondrial cytochrome c oxidase subunit one (COI) and nuclear 18S rRNA loci were phylogenetically analyzed using Bayesian inference and maximum likelihood platforms, both of which demonstrated the presence of four statistically well-supported clades (A–D). In addition, five species delimitation methods (ASAP, bPTP, bGMYC, STACEY and iBPP) were used on the combined DNA sequence data to identify possible novel lineages. All five species delimitation methods supported the distinction of the Fort Fordyce Nature Reserve specimens in the Eastern Cape province; however, in the main P. sedgwicki s.l. species complex, the species delimitation methods revealed a variable number of novel operational taxonomic units. Gross morphological characters were of limited utility, with only the leg pair number in the Fort Fordyce Nature Reserve specimens and the white head-collar of the Van Stadens Wildflower Nature Reserve specimens being diagnostic. The RADseq results from the earlier study of P. sedgwicki s.l. provided highly congruent results with the four clades observed in the present study. The distribution of P. sedgwicki s.s. (clade B) is restricted to the western portions of its distribution in the Afrotemperate forested regions of the Western Cape Province, South Africa. Three novel species, P. collarium sp. nov. (clade C), P. margaritarius sp. nov. (clade A) and P. orientalis sp. nov. (clade D), are described, of which the first two species are narrow range endemics. The present study, along with several recent systematic studies of velvet worms, affirms the importance of fine-scale sampling to detect and document the alpha taxonomic diversity of Onychophora.
(This article belongs to the Section Animal Diversity)
Show Figures

Figure 1. Map showing localities in the southern Western Cape and Eastern Cape provinces of South Africa, indicating the four genetic clades retrieved in the Peripatopsis sedgwicki species complex with the use of Sanger sequences (COI) and RADseq [10]. Localities 1–4 represent the distribution of P. sedgwicki s.s. (clade B); localities 5–20 and 22–23 represent the distribution of P. orientalis sp. nov. (clade D), while localities 21 and 24 represent the distribution of the two narrow point endemics, P. collarium sp. nov. (clade C) and P. margaritarius sp. nov. (clade A), respectively. Clades correspond to Figure 2.
Figure 2. Species delimitation tree based on COI and 18S. Total evidence species tree of concatenated COI + 18S rRNA sequences for Peripatopsis sedgwicki s.s. produced by the STACEY analysis. Nodal support for each tree produced by each of these analyses follows the key on the left of the figure (top left = STACEY; top right = MrBayes [BEAST]; bottom left = maximum likelihood; bottom right = Bayesian phylogenetics and phylogeography). The similarity matrix represents the results of the STACEY maximum clade credibility tree minimum clusters from the total evidence dataset (COI + 18S rRNA). Black squares represent posterior probabilities (white = 0, black = 1) for pairs of individuals (sample localities) belonging to the same cluster. The lines in the matrix separate putative species boundaries based on the observed clusters. The seven vertical multi-coloured bars represent alternative taxonomies, with each segment of these bars representing distinct species according to the respective approach. The final bar (right) represents the species consensus between these methods.
Figure 3. Live images of dorsal and ventral surfaces of the four velvet worm species. Live photographs of a single specimen representative from each of the four clades in the Peripatopsis sedgwicki species complex. Peripatopsis sedgwicki s.s.: (A) dorsal and (B) ventral view of live specimen (clade B); P. margaritarius sp. nov.: (C) dorsal and (D) ventral view of live specimen (clade A); P. collarium sp. nov.: (E) dorsal and (F) ventral image of live specimen (clade C); finally, P. orientalis sp. nov. (clade D) is represented by (G) dorsal and (H) ventral images of live specimens. Scale bar = 10 mm. Photo credit: A. Barnes.
Figure 4. Scanning electron micrographs (SEM) of primary and accessory dermal papillae for the four clades in the Peripatopsis sedgwicki species complex. Each white dot represents a scale rank. (A) Dorsal papilla and (B) ventral papilla of P. sedgwicki s.s. (clade B). (C) Dorsal papilla and (D) ventral papilla of P. margaritarius sp. nov. (clade A). (E) Dorsal papilla and (F) ventral papilla of P. collarium sp. nov. (clade C). (G) Dorsal papilla and (H) ventral papilla of P. orientalis sp. nov. (clade D). Scale bar = 20 µm.
18 pages, 13182 KiB  
Article
Hierarchical Progressive Image Forgery Detection and Localization Method Based on UNet
by Yang Liu, Xiaofei Li, Jun Zhang, Shuohao Li, Shengze Hu and Jun Lei
Big Data Cogn. Comput. 2024, 8(9), 119; https://doi.org/10.3390/bdcc8090119 - 10 Sep 2024
Viewed by 231
Abstract
The rapid development of generative technologies has made the production of forged products easier, and AI-generated forged images are increasingly difficult to accurately detect, posing serious privacy risks and cognitive obstacles to individuals and society. Therefore, constructing an effective method that can accurately detect and locate forged regions has become an important task. This paper proposes a hierarchical and progressive forged image detection and localization method called HPUNet. This method assigns more reasonable hierarchical multi-level labels to the dataset as supervisory information at different levels, following cognitive laws. Secondly, multiple types of features are extracted from AI-generated images for detection and localization, and the detection and localization results are combined to enhance the task-relevant features. Subsequently, HPUNet expands the obtained image features into four different resolutions and performs detection and localization at different levels in a coarse-to-fine cognitive order. To address the limited feature field of view caused by inconsistent forgery sizes, we employ three sets of densely cross-connected hierarchical networks for sufficient interaction between feature images at different resolutions. Finally, a UNet network with a soft-threshold-constrained feature enhancement module is used to achieve detection and localization at different scales, and the reliance on a progressive mechanism establishes relationships between different branches. We use ACC and F1 as evaluation metrics, and extensive experiments on our method and the baseline methods demonstrate the effectiveness of our approach.
Show Figures

Figure 1. Description of forged image detection. (a) Classified t-SNE images of the dataset in the ResNet50 network. (b) Examples of AI tampering with images. (c) Schematic diagram of image multi-level label division.
Figure 2. General structure of the HPUNet network. It combines multiple types of image features for detection and localization, and the dual-branch attention mechanism amplifies strongly relevant features while suppressing weakly relevant features. Combined with UNet to construct a hierarchical network, it achieves accurate detection and localization of forged images in a coarse-to-fine cognitive order.
Figure 3. Two-branch attention fusion module.
Figure 4. Diagram of feature fusion for branch θ₂.
Figure 5. Soft-threshold dual-attention module.
Figure 6. t-SNE visual comparison.
Figure 7. Comparison picture of HPUNet and DA-HFNet.
Figure 8. Comparison of large-scale fake image localization results.
Figure 9. Comparison of small-scale fake image localization results.
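The soft-threshold-constrained feature enhancement module is named but not specified here. Soft thresholding itself is the shrinkage operation y = sign(x) · max(|x| − τ, 0); below is a hedged sketch of a module that learns the threshold per channel (the learnable-threshold design is an assumption for illustration, not the paper’s module).

```python
import torch
import torch.nn as nn

class SoftThreshold(nn.Module):
    """Sketch: per-channel learnable soft thresholding,
    y = sign(x) * max(|x| - tau, 0), with tau kept positive via softplus."""
    def __init__(self, channels: int):
        super().__init__()
        self.tau = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tau = torch.nn.functional.softplus(self.tau)  # tau > 0
        # Shrink small activations toward zero, suppressing weak responses
        return torch.sign(x) * torch.relu(x.abs() - tau)

x = torch.randn(1, 8, 4, 4)
print(SoftThreshold(8)(x).shape)  # torch.Size([1, 8, 4, 4])
```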
27 pages, 6983 KiB  
Article
DA-YOLOv7: A Deep Learning-Driven High-Performance Underwater Sonar Image Target Recognition Model
by Zhe Chen, Guohao Xie, Xiaofang Deng, Jie Peng and Hongbing Qiu
J. Mar. Sci. Eng. 2024, 12(9), 1606; https://doi.org/10.3390/jmse12091606 - 10 Sep 2024
Viewed by 177
Abstract
Affected by the complex underwater environment and the limitations of low-resolution sonar image data and small sample sizes, traditional image recognition algorithms have difficulties achieving accurate sonar image recognition. The research builds on YOLOv7 and devises an innovative fast recognition model designed explicitly for sonar images, namely the Dual Attention Mechanism YOLOv7 model (DA-YOLOv7), to tackle such challenges. New modules such as the Omni-Directional Convolution Channel Prior Convolutional Attention Efficient Layer Aggregation Network (OA-ELAN), Spatial Pyramid Pooling Channel Shuffling and Pixel-level Convolution Bilateral-branch Transformer (SPPCSPCBiFormer), and Ghost-Shuffle Convolution Enhanced Layer Aggregation Network-High performance (G-ELAN-H) are central to its design, which reduce the computational burden and enhance the accuracy in detecting small targets and capturing local features and crucial information. The study adopts transfer learning to deal with the lack of sonar image samples. By pre-training on the large-scale Underwater Acoustic Target Detection Dataset (UATD), DA-YOLOv7 obtains initial weights, which are then fine-tuned on the smaller Common Sonar Target Detection Dataset (SCTD), thereby reducing the risk of overfitting commonly encountered in small datasets. The experimental results on the UATD, the Underwater Optical Target Detection Intelligent Algorithm Competition 2021 Dataset (URPC), and SCTD datasets show that DA-YOLOv7 exhibits outstanding performance, with mAP@0.5 scores reaching 89.4%, 89.9%, and 99.15%, respectively. In addition, the model maintains real-time speed while having superior accuracy and recall rates compared to existing mainstream target recognition models. These findings establish the superiority of DA-YOLOv7 in sonar image analysis tasks.
Show Figures

Figure 1. Structure of the YOLOv7.
Figure 2. (Left): OA-ELAN structure diagram; (right): ODConv structure diagram.
Figure 3. The CPCA attention mechanism.
Figure 4. Left: SPPCSPC structure; right: BiFormer structure.
Figure 5. Structure diagram of G-ELAN-H.
Figure 6. The DA-YOLOv7 network.
Figure 7. Confusion matrix of the ablation model: (a) YOLOv7; (b) YOLOv7 + OA-ELAN; (c) YOLOv7 + OA-ELAN + SPPCSPCBiFormer; (d) DA-YOLOv7.
Figure 8. The PR curve: (a) YOLOv7; (b) YOLOv7 + OA-ELAN; (c) YOLOv7 + OA-ELAN + SPPCSPCBiFormer; (d) DA-YOLOv7.
Figure 9. Curve of the change in loss value on the UATD dataset.
Figure 10. Prediction results of various targets in UATD multi-beam forward-looking sonar images.
Figure 11. SCTD sonar image dataset: (a) human; (b) ship; (c) aircraft.
Figure 12. Flowchart of the training strategy for the SCTD dataset.
Figure 13. The effect of recognition on the SCTD dataset: (a) SCTD mAP results; (b) SCTD aircraft-AP results; (c) SCTD human-AP results; (d) SCTD ship-AP results.
Figure 14. The sample information of URPC is as follows: (a) Labels: the upper left corner shows the distribution of categories; the upper right corner presents the visualization of all box sizes; the lower left corner indicates the distribution of the box centroid position; the lower right corner depicts the distribution of the box aspect ratio. (b) Example images.
Figure 15. Recognition results in multiple underwater scenes.
Figure 16. Curve of loss value changes on the URPC dataset.
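The transfer-learning recipe described, pre-train on the large UATD set and fine-tune on the small SCTD set, is a standard pattern. A hedged PyTorch sketch follows; the tiny stand-in model, file name, and frozen-layer choice are hypothetical placeholders, not the authors’ code.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the detector: a backbone plus a class head.
model = nn.Sequential()
model.add_module("backbone", nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1)))
model.add_module("head", nn.Sequential(nn.Flatten(), nn.Linear(16, 3)))

# Step 1: pretend these weights came from pre-training on the large UATD set
torch.save(model.state_dict(), "uatd_pretrained.pt")

# Step 2: fine-tune on the small SCTD set (3 classes: human, ship, aircraft)
state = torch.load("uatd_pretrained.pt", map_location="cpu")
model.load_state_dict(state, strict=False)  # tolerate head-shape changes

# Freeze the backbone so the scarce sonar samples only adapt later layers,
# reducing the overfitting risk the abstract mentions
for name, p in model.named_parameters():
    if name.startswith("backbone."):
        p.requires_grad = False

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3, momentum=0.9)
print(sum(p.requires_grad for p in model.parameters()), "tensors trainable")
```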
24 pages, 4921 KiB  
Article
DuCFF: A Dual-Channel Feature-Fusion Network for Workload Prediction in a Cloud Infrastructure
by Kai Jia, Jun Xiang and Baoxia Li
Electronics 2024, 13(18), 3588; https://doi.org/10.3390/electronics13183588 - 10 Sep 2024
Viewed by 189
Abstract
Cloud infrastructures are designed to provide highly scalable, pay-as-per-use services to meet the performance requirements of users. The workload prediction of the cloud plays a crucial role in proactive auto-scaling and the dynamic management of resources to move toward fine-grained load balancing and job scheduling due to its ability to estimate upcoming workloads. However, due to users’ diverse usage demands, the changing characteristics of workloads have become more and more complex, including not only short-term irregular fluctuation characteristics but also long-term dynamic variations. This prevents existing workload-prediction methods from fully capturing the above characteristics, leading to degradation of prediction accuracy. To deal with the above problems, this paper proposes a framework based on a dual-channel temporal convolutional network and transformer (referred to as DuCFF) to perform workload prediction. Firstly, DuCFF introduces data preprocessing technology to decouple different components implied by workload data and combine the original workload to form new model inputs. Then, in a parallel manner, DuCFF adopts the temporal convolution network (TCN) channel to capture local irregular fluctuations in workload time series and the transformer channel to capture long-term dynamic variations. Finally, the features extracted from the above two channels are further fused, and workload prediction is achieved. The performance of the proposed DuCFF was verified on various workload benchmark datasets (i.e., ClarkNet and Google) and compared to that of its nine competitors. Experimental results show that the proposed DuCFF can achieve average performance improvements of 65.2%, 70%, 64.37%, and 15%, respectively, in terms of Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Mean Absolute Percentage Error (MAPE), and R-squared (R2) compared to the baseline model CNN-LSTM.
Show Figures

Figure 1. The architecture of DuCFF: (a) main structure, (b) TCN channel, and (c) Transformer channel.
Figure 2. The decomposed effects for three workload datasets using VMD. (a) Decomposed effect for ClarkNet–HTTP trace data (requests). (b) Decomposed effect for Google trace data 1 (CPU utilization). (c) Decomposed effect for Google trace data 2 (CPU utilization).
Figure 3. The implementation process of the moving window sampling.
Figure 4. A comparison of traditional CNN and TCN.
Figure 5. The calculation process of MSA.
Figure 6. The variation characteristics of workload data collected from ClarkNet and Google traces. (a) CNH. (b) GC1. (c) GC2.
Figure 7. The fitting effects on GC2 for the proposed DuCFF and compared models.
Figure 8. The parameter sensitivity analysis on CNH for the proposed DuCFF in terms of two selected evaluation metrics.
Figure 9. The performance overhead comparisons for all models (time is recorded in seconds).
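The parallel TCN-plus-transformer layout can be illustrated compactly. Below is a hedged sketch of the dual-channel idea under assumed dimensions; the real DuCFF adds VMD-based preprocessing and a more elaborate fusion stage, and none of the names or hyperparameters here come from the paper.

```python
import torch
import torch.nn as nn

class DualChannelSketch(nn.Module):
    """Hedged sketch of the dual-channel idea: a dilated-conv (TCN-style)
    branch for local fluctuations and a transformer-encoder branch for
    long-term variation, fused by concatenation before the regressor."""
    def __init__(self, d_model: int = 32, window: int = 24):
        super().__init__()
        self.embed = nn.Linear(1, d_model)
        self.tcn = nn.Sequential(
            nn.Conv1d(d_model, d_model, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv1d(d_model, d_model, 3, padding=4, dilation=4), nn.ReLU())
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(2 * d_model, 1)  # fused features -> next value

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window, 1) workload series
        z = self.embed(x)                                    # (B, T, D)
        local = self.tcn(z.transpose(1, 2)).transpose(1, 2)  # TCN channel
        global_ = self.transformer(z)                        # transformer channel
        fused = torch.cat([local[:, -1], global_[:, -1]], dim=-1)
        return self.head(fused)  # one-step workload forecast

series = torch.randn(8, 24, 1)
print(DualChannelSketch()(series).shape)  # torch.Size([8, 1])
```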
28 pages, 7195 KiB  
Article
MEEAFusion: Multi-Scale Edge Enhancement and Joint Attention Mechanism Based Infrared and Visible Image Fusion
by Yingjiang Xie, Zhennan Fei, Da Deng, Lingshuai Meng, Fu Niu and Jinggong Sun
Sensors 2024, 24(17), 5860; https://doi.org/10.3390/s24175860 - 9 Sep 2024
Viewed by 435
Abstract
Infrared and visible image fusion can integrate rich edge details and salient infrared targets, resulting in high-quality images suitable for advanced tasks. However, most available algorithms struggle to fully extract detailed features and overlook the interaction of complementary features across different modal images during the feature fusion process. To address this gap, this study presents a novel fusion method based on multi-scale edge enhancement and a joint attention mechanism (MEEAFusion). Initially, convolution kernels of varying scales were utilized to obtain shallow features with multiple receptive fields unique to the source image. Subsequently, a multi-scale gradient residual block (MGRB) was developed to capture the high-level semantic information and low-level edge texture information of the image, enhancing the representation of fine-grained features. Then, the complementary feature between infrared and visible images was defined, and a cross-transfer attention fusion block (CAFB) was devised with joint spatial attention and channel attention to refine the critical supplemental information. This allowed the network to obtain fused features that were rich in both common and complementary information, thus realizing feature interaction and pre-fusion. Lastly, the features were reconstructed to obtain the fused image. Extensive experiments on three benchmark datasets demonstrated that the MEEAFusion proposed in this research has considerable strengths in terms of rich texture details, significant infrared targets, and distinct edge contours, and it achieves superior fusion performance.
Show Figures

Figure 1. Display of fusion results. IR and VIS denote infrared image and visible image, and (a–g) show the fusion results of FusionGAN [8], IPLF [9], STDFusionNet [10], DenseFuse [11], RFN-Nest [12], PMGI [13], and FLFuse-Net [14], respectively. The red and green boxes outline the salient targets and detail regions.
Figure 2. MEEAFusion—overall framework.
Figure 3. MGRB module structure.
Figure 4. Gradient convolution results of the visible image. (a,b) show the 3 × 3 and 5 × 5 Sobel convolution results, respectively.
Figure 5. CAFB module structure.
Figure 6. Visual display of fusion results for scene 00537D.
Figure 7. Visual display of fusion results for scene 00878N.
Figure 8. Visual display of fusion results for scene 01024N.
Figure 9. Data distribution of fusion results for 36 pairs of MSRS images over the eight objective evaluation criteria. Each point (x, y) in the figure means that (100 × x)% of fused images have metric values that do not exceed y.
Figure 10. Visual display of fusion results for the bench scene. The salient regions are highlighted with red boxes.
Figure 11. Visual display of fusion results for the Kaptein_1123 scene. The salient and detailed regions are highlighted with red and green boxes.
Figure 12. Data distribution of fusion results for 20 pairs of TNO images over the eight objective evaluation criteria. Each point (x, y) in the figure means that (100 × x)% of fused images have metric values that do not exceed y.
Figure 13. Visual display of fusion results for scene FLIR_00006. The detailed regions are highlighted with red boxes.
Figure 14. Visual display of fusion results for scene FLIR_06570. The salient and detailed regions are highlighted with red and green boxes.
Figure 15. Data distribution of fusion results for 30 pairs of RoadScene images over the eight objective evaluation criteria. Each point (x, y) in the figure means that (100 × x)% of fused images have metric values that do not exceed y.
Figure 16. Visual results of the ablation experiment. The salient and detailed regions are highlighted with red and green boxes.
Figure 17. Visual display of YOLOv5s prediction results for fused images of scene 00479D.
Figure 18. Visual display of YOLOv5s prediction results for fused images of scene 01348N.
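The MGRB leans on explicit gradient (edge) information, and Figure 4 references 3 × 3 and 5 × 5 Sobel convolutions. A hedged sketch of extracting multi-scale gradients with fixed Sobel kernels follows; the pooling-based coarse scale is a stand-in assumption, and the residual combination that MGRB presumably performs is omitted.

```python
import torch
import torch.nn.functional as F

def sobel_gradient(img: torch.Tensor) -> torch.Tensor:
    """Gradient magnitude of a (B,1,H,W) image using 3x3 Sobel kernels."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.t()
    gx = F.conv2d(img, kx.view(1, 1, 3, 3), padding=1)
    gy = F.conv2d(img, ky.view(1, 1, 3, 3), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-12)

def multiscale_gradients(img: torch.Tensor):
    """Fine gradients plus gradients of a downsampled image, upsampled back,
    as a stand-in for a genuinely larger (e.g., 5x5) Sobel kernel."""
    fine = sobel_gradient(img)
    coarse = sobel_gradient(F.avg_pool2d(img, 2))
    coarse = F.interpolate(coarse, size=img.shape[-2:], mode="bilinear",
                           align_corners=False)
    return fine, coarse

x = torch.rand(1, 1, 64, 64)
g1, g2 = multiscale_gradients(x)
print(g1.shape, g2.shape)  # both torch.Size([1, 1, 64, 64])
```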
19 pages, 5464 KiB  
Article
A Multi-Scale Liver Tumor Segmentation Method Based on Residual and Hybrid Attention Enhanced Network with Contextual Integration
by Liyan Sun, Linqing Jiang, Mingcong Wang, Zhenyan Wang and Yi Xin
Sensors 2024, 24(17), 5845; https://doi.org/10.3390/s24175845 - 9 Sep 2024
Viewed by 239
Abstract
Liver cancer is one of the malignancies with high mortality rates worldwide, and its timely detection and accurate diagnosis are crucial for improving patient prognosis. To address the limitations of traditional image segmentation techniques and the U-Net network in capturing fine image features, this study proposes an improved model based on the U-Net architecture, named RHEU-Net. By replacing traditional convolution modules in the encoder and decoder with improved residual modules, the network’s feature extraction capabilities and gradient stability are enhanced. A Hybrid Gated Attention (HGA) module is integrated before the skip connections, enabling the parallel processing of channel and spatial attentions, optimizing the feature fusion strategy, and effectively replenishing image details. A Multi-Scale Feature Enhancement (MSFE) layer is introduced at the bottleneck, utilizing multi-scale feature extraction technology to further enhance the expression of receptive fields and contextual information, improving the overall feature representation effect. Testing on the LiTS2017 dataset demonstrated that RHEU-Net achieved Dice scores of 95.72% for liver segmentation and 70.19% for tumor segmentation. These results validate the effectiveness of RHEU-Net and underscore its potential for clinical application.
(This article belongs to the Section Intelligent Sensors)
Show Figures

Figure 1. Architecture of RHEU-Net, where Module A denotes the residual module, Module B refers to the Multi-Scale Feature Enhancement module (MSFE), Module C indicates the Hybrid Gated Attention module (HGA), and Module D includes convolution operations.
Figure 2. (a) Structure of the residual module in ResNet; (b) structure of the residual module used in the encoder; (c) structure of the residual module used in the decoder.
Figure 3. Structure of the Hybrid Gated Attention module.
Figure 4. Structure of the Channel Attention Module.
Figure 5. Structure of the spatial attention module.
Figure 6. Structure of the Hybrid Gated Attention Module.
Figure 7. Structure of the Multi-Scale Feature Enhancement Module.
Figure 8. (a) Original image; (b) flip horizontal; (c) flip vertical; (d) left rotation; (e) right rotation.
Figure 9. Segmentation results from various networks on selected test set images in the ablation experiment. From left to right: (a) original CT image, (b) gold standard, (c) U-Net, (d) Res+U-Net, (e) HGA+U-Net, (f) MSFE+U-Net, (g) Res+HGA+U-Net, and (h) RHEU-Net (method described in this study).
Figure 10. Training loss trends of different models.
Figure 11. Comparison of liver segmentation results from different networks against the gold standard. From left to right, the images represent: (a) original CT image, (b) gold standard, (c) Unet, (d) AttentionUnet, (e) ResUnet-a, (f) CAUnet, (g) Res Unet++, (h) RIUNet, (i) RHEUnet (method described in this study).
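The reported Dice scores (95.72% liver, 70.19% tumor) use the standard overlap metric Dice = 2|A ∩ B| / (|A| + |B|). A minimal sketch for binary masks:

```python
import torch

def dice_score(pred: torch.Tensor, target: torch.Tensor,
               eps: float = 1e-6) -> torch.Tensor:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks of any shape."""
    pred, target = pred.float(), target.float()
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

pred = torch.tensor([[0, 1, 1], [0, 1, 0]])
gt = torch.tensor([[0, 1, 0], [0, 1, 1]])
print(float(dice_score(pred, gt)))  # 2*2 / (3+3) ≈ 0.667
```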
14 pages, 28030 KiB  
Article
Laboratory and Field Performance Evaluation of NMAS 9.5, 8.0, and 5.6 mm SMA Mixtures for Sustainable Pavement
by Cheolmin Baek, Ohsun Kwon and Jongsub Lee
Sustainability 2024, 16(17), 7840; https://doi.org/10.3390/su16177840 - 9 Sep 2024
Viewed by 266
Abstract
This study evaluates the laboratory and field performance of stone mastic asphalt (SMA) mixtures with nominal maximum aggregate sizes (NMAS) of 9.5, 8.0, and 5.6 mm. Aggregates and fine aggregates of these sizes were produced using an impact crusher and a polyurethane screen. Mix designs for SMA overlays on aged concrete pavement were developed. Laboratory tests assessed rutting performance using full-scale accelerated pavement testing (APT) equipment and reflective cracking resistance using an asphalt mixture performance tester (AMPT). Field evaluations included noise reduction using CPX equipment, skid resistance using SN equipment, and bond strength using field cores. Results showed that for 8.0 mm SMA mixtures to achieve the same rutting performance as 9.5 mm SMA, PG76-22 grade binder was required, whereas 5.6 mm SMA required PG82-22. The 8.0 and 5.6 mm SMA mixtures showed 22.2% and 25% reduced crack progression, respectively, compared with the 9.5 mm SMA mixtures. Field evaluations indicated that 8.0 mm and 5.6 mm SMA pavements reduced tire–pavement noise by 1.7 and 0.8 dB, increased skid resistance by 8.5% and 2.0%, and enhanced shear bond strength by 150%, compared with 9.5 mm SMA. Overall, the 8.0 mm SMA mixture on aged concrete pavement demonstrated superior durability and functionality toward sustainable pavement systems.
Show Figures

Figure 1. (a) Impact crusher and (b) polyurethane crusher screen.
Figure 2. Aggregate gradation of (a) 9.5, (b) 8.0, and (c) 5.6 mm SMA mixtures.
Figure 3. Structural design of the accelerated pavement test section.
Figure 4. (a) 9.5, 8.0, and 5.6 mm SMA pavements with PG76-22 and (b) PG82-22.
Figure 5. Measurement of the rutting deformation transverse profile and example of results.
Figure 6. Specimen fabrication for the asphalt overlay test: (a) gyratory compactor, (b) specimen preparation, and (c) test setup.
Figure 7. Structural design of field trial construction for SMA overlays: (a) 9.5 mm + 9.5 mm, (b) 8.0 mm + 9.5 mm, (c) 5.6 mm + 9.5 mm, (d) 9.5 mm + 8.0 mm, and (e) 9.5 mm + 5.6 mm SMAs.
Figure 8. Shear bond strength test using field cores: (a) field coring, (b) cored test specimens, and (c) test setup.
Figure 9. APT results: NMAS effect with (a) PG82-22 and (b) PG76-22; PG effect with (c) 9.5 mm SMA, (d) 8.0 mm SMA, and (e) 5.6 mm SMA pavements.
Figure 10. Reflective cracking resistance results.
Figure 11. CPX test results.
Figure 12. Skid resistance test results.
Figure 13. Shear bond strength test results.
19 pages, 4999 KiB  
Article
Study on Downscaling Correction of Near-Surface Wind Speed Grid Forecasts in Complex Terrain
by Xin Liu, Zhimin Li and Yanbo Shen
Atmosphere 2024, 15(9), 1090; https://doi.org/10.3390/atmos15091090 - 8 Sep 2024
Viewed by 291
Abstract
Accurate forecasting of wind speeds is a crucial aspect of providing fine-scale professional meteorological services (such as wind energy generation and transportation operations). This article utilizes CMA-MESO model forecast data and CARAS-SUR_1 km ground truth grid data from January, April, July, and October 2022, employing the random forest algorithm to establish and evaluate a downscaling correction model for near-surface 1 km resolution wind-speed grid forecasts in the complex terrain area of northwestern Hebei Province. The results indicate that after downscaling correction, the spatial distribution of grid forecast wind speeds in the entire complex terrain study area becomes more refined, with spatial resolution improving from 3 km to 1 km, reflecting fine-scale terrain effects. The accuracy of the corrected wind speed forecast significantly improves compared to the original model, with forecast errors showing stability in both time and space. The mean bias decreases from 2.25 m/s to 0.02 m/s, and the root mean square error (RMSE) decreases from 3.26 m/s to 0.52 m/s. Forecast errors caused by complex terrain, forecast lead time, and seasonal factors are significantly reduced. In terms of wind speed categories, the correction significantly improves forecasts for wind speeds below 8 m/s, with RMSE decreasing from 2.02 m/s to 0.59 m/s. For wind speeds above 8 m/s, there is also a good correction effect, with RMSE decreasing from 2.20 m/s to 1.65 m/s. Analysis of the Zhangjiakou strong wind event of 26 April 2022 found that the downscaled corrected forecast wind speed is very close to the observed wind speed at the stations and the ground truth grid points. The correction effect is particularly significant in areas affected by strong winds, such as the Bashang Plateau and valleys, which has significant reference value.
(This article belongs to the Special Issue Solar Irradiance and Wind Forecasting)
Show Figures

Figure 1. The topography and distribution of national meteorological stations in the study area. M1—Shangyi, M2—Zhangbei, M3—Tianzhen, M4—Huai’an, M5—Yangyuan, M6—Xuanhua, M7—Wanquan, M8—Chongli, M9—Qiaodong, M10—Huailai, M11—Zhuolu.
Figure 2. Flowchart of the wind speed gridded downscaling correction model using the random forest algorithm.
Figure 3. Spatial distributions of the mean wind speed by CARAS-SUR_1 km (a), CMA-MESO 3 km (b), downscaling corrected forecast (c), the root mean square error (RMSE) of forecasting wind speed (d), and downscaling corrected wind speed (e) in the study area in 2022 (unit: m/s).
Figure 4. The root mean square error (RMSE) (a), mean bias (BIAS) (b), and correlation coefficient (R) (c) of wind speed forecasts for representative months in each season in the study area in 2022.
Figure 5. The root mean square error (RMSE) (a), mean bias (BIAS) (b), and correlation coefficient (R) (c) of wind speed forecasts for 1 to 24 h in the study area in 2022.
Figure 6. The root mean square error (RMSE) (a), mean bias (BIAS) (b), and correlation coefficient (R) (c) of wind speed forecasts for different wind speed categories in the study area in 2022.
Figure 7. Spatial distributions of near-surface observed wind speed from CARAS-SUR_1 km (a1–a4), forecasted wind speed from CMA-MESO 3 km (b1–b4), and downscaling corrected forecasted wind speed (c1–c4) at 02:00, 07:00, 13:00, and 19:00 on 26 April 2022 in the study area (unit: m/s).
Figure 8. The time series plot of wind speeds at representative meteorological stations on 26 April 2022. M2—Zhangbei, M9—Qiaodong, M6—Xuanhua, M10—Huailai.
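The correction model maps coarse CMA-MESO forecasts to the 1 km CARAS-SUR ground truth with a random forest. A hedged scikit-learn sketch on synthetic data follows; the predictor set (forecast wind, elevation, hour, month) is an assumption, since the listing does not enumerate the features.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 5000
# Assumed predictors: coarse forecast wind, elevation, hour of day, month
X = np.column_stack([
    rng.gamma(2.0, 2.0, n),     # CMA-MESO 3 km forecast wind (m/s)
    rng.uniform(400, 2100, n),  # grid-point elevation (m)
    rng.integers(0, 24, n),     # forecast valid hour
    rng.integers(1, 13, n),     # month
])
# Synthetic "truth": forecast minus a terrain/diurnal-dependent bias + noise
y = X[:, 0] - 2.0 - 0.0005 * X[:, 1] + 0.3 * np.sin(X[:, 2] / 24 * 2 * np.pi)
y = np.clip(y + rng.normal(0, 0.4, n), 0, None)

model = RandomForestRegressor(n_estimators=200, min_samples_leaf=5,
                              n_jobs=-1, random_state=0)
model.fit(X[:4000], y[:4000])
pred = model.predict(X[4000:])
rmse = mean_squared_error(y[4000:], pred) ** 0.5
print(f"corrected RMSE on held-out points: {rmse:.2f} m/s")
```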
19 pages, 9439 KiB  
Article
MFAD-RTDETR: A Multi-Frequency Aggregate Diffusion Feature Flow Composite Model for Printed Circuit Board Defect Detection
by Zhihua Xie and Xiaowei Zou
Electronics 2024, 13(17), 3557; https://doi.org/10.3390/electronics13173557 - 7 Sep 2024
Viewed by 446
Abstract
To address the challenges of excessive model parameters and low detection accuracy in printed circuit board (PCB) defect detection, this paper proposes a novel PCB defect detection model based on the improved RTDETR (Real-Time Detection, Embedding and Tracking) method, named MFAD-RTDETR. Specifically, the proposed model introduces the designed Detail Feature Retainer (DFR) into the original RTDETR backbone to capture and retain local details. Subsequently, based on the Mamba architecture, the Visual State Space (VSS) module is integrated to enhance global attention while reducing the original quadratic complexity to a linear level. Furthermore, by exploiting the deformable attention mechanism, which dynamically adjusts reference points, the model achieves precise localization of target defects and improves the accuracy of the transformer in complex visual tasks. Meanwhile, a receptive field synthesis mechanism is incorporated to enrich multi-scale semantic information and reduce parameter complexity. In addition, the scheme proposes a novel Multi-frequency Aggregation and Diffusion feature composite paradigm (MFAD-feature composite paradigm), which consists of the Aggregation Diffusion Fusion (ADF) module and the Refiner Feature Composition (RFC) module. It aims to strengthen features with fine-grained awareness while preserving a certain level of global attention. Finally, the Wise IoU (WIoU) dynamic nonmonotonic focusing mechanism is used to reduce competition among high-quality anchor boxes and mitigate the effects of the harmful gradients from low-quality examples, thereby concentrating on anchor boxes of average quality to promote the overall performance of the detector. Extensive experiments are conducted on the PCB defect dataset released by Peking University to validate the effectiveness of the proposed model. The experimental results show that our approach achieves 97.0% mAP@0.5 and 51.0% mAP@0.5:0.95, which significantly outperforms the original RTDETR. Moreover, the model reduces the number of parameters by approximately 18.2% compared to the original RTDETR.
(This article belongs to the Special Issue Deep Learning for Computer Vision Application)
Show Figures

Figure 1. The framework of RTDETR.
Figure 2. The framework of MFAD-RTDETR. With the blue dashed arrows, the diffusion mechanism effectively propagates these high-frequency details throughout the network.
Figure 3. The structure diagrams of the SMPConv, CGLU, and DFR modules.
Figure 4. The Mamba (S6) architecture diagram.
Figure 5. The structure diagram of the SS2D module.
Figure 6. The FDASI module structure diagram.
Figure 7. The SSFF module structure diagram.
Figure 8. The structure diagrams of the DAttention module and Offset network.
Figure 9. The DRBC3 module structure diagram.
Figure 10. The classic defect images: (a) missing hole; (b) mouse bite; (c) open circuit; (d) short circuit; (e) spur; and (f) spurious copper. The red boxes mark the corresponding defects.
Figure 11. The loss, precision, recall, mAP50, and mAP50-95 curves of the MFAD-RTDETR model.
Figure 12. Confusion matrix comparison plot of RTDETR and MFAD-RTDETR. (a) The confusion matrix for RTDETR; (b) the confusion matrix for MFAD-RTDETR.
Figure 13. The PR curves and F1-confidence curves of the MFAD-RTDETR model.
Figure 14. Heat maps corresponding to before and after adding the P1 detection head.
Figure 15. Samples of test results for (a) missing hole; (b) mouse bite; (c) open circuit; (d) short circuit; (e) spur; and (f) spurious copper.
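The Wise IoU mechanism is named but not defined in this listing. For orientation, here is a hedged sketch of the v1 geometry term from the WIoU literature: the IoU loss scaled by a center-distance penalty normalized over the smallest enclosing box, with the penalty detached from the gradient. The dynamic nonmonotonic focusing used in MFAD-RTDETR additionally re-weights each box by its outlier degree, which this sketch omits; tensor layout and epsilons are assumptions.

```python
import torch

def wiou_v1_loss(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Hedged sketch of a WIoU-v1-style loss for (x1, y1, x2, y2) boxes."""
    lt = torch.max(pred[:, :2], gt[:, :2])
    rb = torch.min(pred[:, 2:], gt[:, 2:])
    inter = (rb - lt).clamp(min=0).prod(dim=1)
    area_p = (pred[:, 2:] - pred[:, :2]).prod(dim=1)
    area_g = (gt[:, 2:] - gt[:, :2]).prod(dim=1)
    iou = inter / (area_p + area_g - inter + 1e-7)

    # Smallest enclosing box dimensions and center distance
    enc = torch.max(pred[:, 2:], gt[:, 2:]) - torch.min(pred[:, :2], gt[:, :2])
    cp = (pred[:, :2] + pred[:, 2:]) / 2
    cg = (gt[:, :2] + gt[:, 2:]) / 2
    dist2 = ((cp - cg) ** 2).sum(dim=1)
    # Penalty denominator is detached so it scales, not steers, the gradient
    r = torch.exp(dist2 / (enc ** 2).sum(dim=1).detach().clamp(min=1e-7))
    return (r * (1 - iou)).mean()

pred = torch.tensor([[10., 10., 50., 50.]])
gt = torch.tensor([[12., 8., 48., 52.]])
print(float(wiou_v1_loss(pred, gt)))  # ~0.17 for this near-match
```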
23 pages, 6196 KiB  
Article
Alloying and Segregation in PdRe/Al2O3 Bimetallic Catalysts for Selective Hydrogenation of Furfural
by Simon T. Thompson and H. Henry Lamb
Catalysts 2024, 14(9), 604; https://doi.org/10.3390/catal14090604 - 7 Sep 2024
Viewed by 279
Abstract
X-ray absorption fine structure (XAFS) spectroscopy, temperature-programmed reduction (TPR), and temperature-programmed hydride decomposition (TPHD) were employed to elucidate the structures of a series of PdRe/Al2O3 bimetallic catalysts for the selective hydrogenation of furfural. TPR evidenced low-temperature Re reduction in the bimetallic catalysts, consistent with the migration of [ReO4]− (perrhenate) species to hydrogen-covered Pd nanoparticles on highly hydroxylated γ-Al2O3. TPHD revealed a strong suppression of β-PdHx formation in the reduced catalysts prepared by (i) co-impregnation and (ii) [HReO4] impregnation of the reduced Pd/Al2O3, indicating the formation of Pd-rich alloy nanoparticles; however, reduced catalysts prepared by (iii) [Pd(NH3)4]2+ impregnation of calcined Re/Al2O3 and subsequent re-calcination did not show this suppression. Re LIII X-ray absorption edge shifts were used to determine the average Re oxidation states after reduction at 400 °C. XAFS spectroscopy and high-angle annular dark field (HAADF)-scanning transmission electron microscopy (STEM) revealed that a reduced 5 wt.% Re/Al2O3 catalyst contained small Re clusters and nanoparticles comprising Re atoms in low positive oxidation states (~1.5+) and incompletely reduced Re species (primarily Re4+). XAFS spectroscopy of the bimetallic catalysts evidenced Pd-Re bonding consistent with Pd-rich alloy formation. The Pd and Re total first-shell coordination numbers suggest that either Re is segregated to the surface (and Pd to the core) of alloy nanoparticles and/or segregated Pd nanoparticles are larger than Re nanoparticles (or clusters). The Cowley short-range order parameters are strongly positive, indicating a high degree of heterogeneity (clustering or segregation of metal atoms) in these bimetallic catalysts. Catalysts prepared using the Pd(NH3)4[ReO4]2 double complex salt (DCS) exhibit greater Pd-Re intermixing but remain heterogeneous on the atomic scale.
(This article belongs to the Special Issue Heterogeneous Catalysis for Selective Hydrogenation)
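The Cowley short-range order parameter cited in the abstract has a simple closed form that can be evaluated directly from EXAFS first-shell coordination numbers. The sketch below uses the standard definition for a bimetallic A-B particle, alpha_A = 1 − (N_A-B / N_A,total) / x_B; the numerical inputs are illustrative assumptions, not the fitted coordination numbers from this paper.

def cowley_sro(n_hetero, n_total, x_other):
    """Cowley short-range order parameter for one absorber element.

    n_hetero: first-shell coordination number of unlike neighbors (e.g., N_Pd-Re)
    n_total:  total first-shell metal coordination number around the absorber
    x_other:  overall atomic fraction of the unlike element in the particle
    alpha = 0 for a random alloy, > 0 for clustering/segregation of like
    atoms, < 0 for preferential unlike-atom ordering.
    """
    return 1.0 - (n_hetero / n_total) / x_other

# Illustrative numbers only (not values from this paper): a Pd absorber
# with one Re neighbor out of nine metal neighbors, in a particle whose
# overall Re atomic fraction is 0.45, gives a strongly positive alpha.
print(cowley_sro(n_hetero=1.0, n_total=9.0, x_other=0.45))  # ~0.75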
Show Figures

Graphical abstract

Figure 1: TPR profiles of the Re/Al2O3 and PdRe/Al2O3 catalysts. Original data from [10,11], except Pd3Re5-SI*.
Figure 2: TPHD spectra of catalysts prepared from Pd(NO3)2 (a) and Pd(NH3)4(NO3)2 (b) following TPR and cooling in 5% H2/Ar to −50 °C.
Figure 3: Re LIII XANES of Re/Al2O3 and PdRe/Al2O3 catalysts measured after in situ reduction at 400 °C (a), and Re LIII edge-shift calibration with Re standards and PdRe/Al2O3 catalysts following 400 °C reduction and He purge (b).
Figure 4: Correlation of Re oxidation states determined by XANES and EXAFS spectroscopies with Re oxidation states determined by TPR, for Re/Al2O3 and PdRe/Al2O3 catalysts after reduction at 400 °C for 1 h.
Figure 5: Pd K EXAFS spectra of PdRe/Al2O3 catalysts measured after in situ reduction at 400 °C: k² chi data (a) and corresponding Pd phase-corrected Fourier transforms (3.1–14 Å⁻¹) (b).
Figure 6: Re LIII EXAFS spectra of Re/Al2O3 and PdRe/Al2O3 catalysts measured after in situ reduction at 400 °C: k² chi data (a) and corresponding Re phase-corrected FT magnitudes (3.5–14 Å⁻¹) (b).
Figure 7: EXAFS spectra and fit for Re5-H after reduction and He purge at 400 °C: k²-weighted Re LIII Fourier transform magnitude (a) and real part (b), with individual backscattering paths shown (offset).
Figure 8: HAADF-STEM images of Re5-H after reduction in H2 at 400 °C for 1 h. See Figure S4 (Supplemental Information) for the composite particle size distribution derived from these images.
Figure 9: EXAFS spectra and fits for Pd3Re5-CI measured after reduction and He purge at 400 °C: k²-weighted Pd K and Re LIII FT magnitudes (a) and real parts (b,c), with individual backscattering paths shown (offset).
Figure 10: EXAFS spectra and fits for Re5Pd3-SI measured after reduction and He purge at 400 °C: k²-weighted Pd K and Re LIII Fourier transform magnitudes (a) and real parts (b,c), with individual backscattering paths shown (offset).
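The edge-shift calibration in Figure 3b is, in essence, a linear regression of edge energy against formal oxidation state for reference compounds, inverted to estimate the average Re oxidation state of a catalyst. The sketch below uses hypothetical edge energies, not the measured values from this work.

import numpy as np

# Hypothetical Re L3 edge energies (eV) for standards of known oxidation
# state; a real calibration would use measured values for, e.g., Re metal
# (0), ReO2 (4+), and NH4ReO4 (7+).
ox_state = np.array([0.0, 4.0, 7.0])
edge_ev = np.array([10535.0, 10538.2, 10541.1])

slope, intercept = np.polyfit(ox_state, edge_ev, 1)  # linear calibration

def oxidation_state(edge_energy_ev):
    """Invert the calibration line to estimate an average oxidation state."""
    return (edge_energy_ev - intercept) / slope

# With these assumed inputs, a catalyst edge at 10536.3 eV corresponds to
# an average oxidation state of roughly 1.5+.
print(round(oxidation_state(10536.3), 2))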
16 pages, 4171 KiB  
Article
The Small Step Early Intervention Program for Infants at High Risk of Cerebral Palsy: A Single-Subject Research Design Study
by Ann-Kristin G. Elvrum, Silja Berg Kårstad, Gry Hansen, Ingrid Randby Bjørkøy, Stian Lydersen, Kristine Hermansen Grunewaldt and Ann-Christin Eliasson
J. Clin. Med. 2024, 13(17), 5287; https://doi.org/10.3390/jcm13175287 - 6 Sep 2024
Viewed by 333
Abstract
Background/Objectives: Early interventions for infants at high risk of cerebral palsy (CP) are recommended, but limited evidence exists. Our objective was, therefore, to evaluate the effects of the family-centered and interprofessional Small Step early intervention program on motor development in infants at [...] Read more.
Background/Objectives: Early interventions for infants at high risk of cerebral palsy (CP) are recommended, but evidence remains limited. Our objective was therefore to evaluate the effects of the family-centered, interprofessional Small Step early intervention program on motor development in infants at high risk of CP (ClinicalTrials.gov: NCT03264339). Methods: A single-subject research design was employed to investigate how participant characteristics (severity of motor dysfunction, measured using the Hammersmith Infant Neurological Examination (HINE) and the Alberta Infant Motor Scale (AIMS) at three months of corrected age (3mCA)) related to intervention response. The repeated measures Peabody Developmental Motor Scales-2 fine and gross motor composite (PDMS2-FMC and -GMC) and Hand Assessment for Infants (HAI) were analyzed visually using cumulative line graphs, while the Gross Motor Function Measure-66 (GMFM-66) was plotted against reference percentiles for the various Gross Motor Function Classification System (GMFCS) levels. Results: All infants (n = 12) received the Small Step program, and eight completed all five training steps. At two years of corrected age (2yCA), nine children were diagnosed with CP. The children with the lowest scores (HINE < 25 and/or AIMS ≤ 6 at 3mCA; n = 4) showed minor improvements during the program and were classified at GMFCS level V at 2yCA. Children with HINE = 25–40 (n = 5) improved their fine motor skills during the program, and four of them showed larger GMFM-66 improvements than expected from the reference curves, although these gains did not always occur during the mobility training steps. The three children with HINE = 41–50 and AIMS > 7 showed the largest improvements and were not diagnosed with CP at 2yCA. Conclusions: Our results indicate that the Small Step program contributed to the children's motor development, with better results for those with a higher initial HINE score (>25). The specificity of training could not be confirmed. Full article
(This article belongs to the Section Clinical Pediatrics)
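The visual analysis described in the Methods (cumulative line graphs of repeated raw scores with phase boundaries for baseline, training steps, and post-intervention) can be reproduced for any repeated measure. Below is a minimal matplotlib sketch; the weekly scores and phase start weeks are invented purely to illustrate the plot construction.

import matplotlib.pyplot as plt

# Invented repeated-measure raw scores for one child (not study data).
weeks = [0, 4, 8, 12, 18, 24, 30, 36, 42, 48]
scores = [10, 11, 11, 14, 18, 20, 25, 27, 30, 31]
# Week at which each phase starts: baseline (A), training steps, post (A2).
phases = {"A": 0, "B1": 12, "C1": 18, "D": 24, "B2": 30, "C2": 36, "A2": 42}

plt.plot(weeks, scores, marker="o")
for label, start in phases.items():
    plt.axvline(start, linestyle="--", alpha=0.4)  # phase boundary
    plt.text(start, max(scores), label, va="bottom")
plt.xlabel("Weeks since inclusion")
plt.ylabel("Raw score")
plt.title("Cumulative slope across phases (illustrative)")
plt.show()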
Show Figures

Figure 1: Timeline for the Small Step single-subject research design study. Eligible children were recruited at the regular hospital screening (T0) at three months of corrected age (3mCA). Data collection included three assessments in the baseline phase (A1–A3), one assessment after each training step, and two additional assessments in the post-intervention phase (A4–A5). One further assessment was performed at follow-up (A6) when the children were two years of corrected age (2yCA). The order of the hand function (B) and mobility (C) steps was randomized, while communication (D) was fixed as the third training step.
Figure 2: Cumulative slopes for (a) Peabody Developmental Motor Scales, 2nd edition (PDMS-2) fine motor development and (b) PDMS-2 gross motor development, measured during the baseline period (A), the training periods targeting hand function (B1 and B2), mobility (C1 and C2), and communication (D), and post-intervention (A). Each participant is represented by an identification (ID) number; green and blue indicate active training periods targeting hand function and mobility, respectively. The Y-axis shows PDMS-2 raw scores; the X-axis shows the number of weeks.
Figure 3: Cumulative slopes for individual Hand Assessment for Infants (HAI) total raw scores plotted against norm-referenced growth curves for HAI total raw scores [28], covering the baseline period (A), the training periods targeting hand function (B1 and B2), mobility (C1 and C2), and communication (D), and post-intervention (A). Each participant is represented by an ID number; green indicates active training periods targeting hand function. The Y-axis shows HAI total raw scores; the X-axis shows the number of weeks after inclusion in the Small Step program.
Figure 4: Individual Gross Motor Function Measure-66 (GMFM-66) interval data for the participants with a confirmed CP diagnosis at two years of corrected age. The GMFM-66 cumulative slopes are plotted against the GMFM percentiles for the corresponding GMFCS level [35], covering the baseline period (A), the training periods targeting hand function (B1 and B2), mobility (C1 and C2), and communication (D), and post-intervention (A). Each participant is represented by an ID number; blue indicates active training periods targeting mobility. The Y-axis shows GMFM-66 interval-level scores; the X-axis shows age in years.
14 pages, 3015 KiB  
Article
Surface Chemistry and Flotation of Gold-Bearing Pyrite
by Seda Özçelik and Zafir Ekmekçi
Minerals 2024, 14(9), 914; https://doi.org/10.3390/min14090914 - 6 Sep 2024
Viewed by 239
Abstract
Gold grains are observed in a variety of forms, such as coarse-liberated native gold grains, and ultra-fine grains associated with sulfide or non-sulfide mineral particles, in the form of solid solution in sulfide minerals, mainly pyrite. In the flotation of gold ores, bulk [...] Read more.
Gold grains occur in a variety of forms: coarse liberated native gold grains; ultra-fine grains associated with sulfide or non-sulfide mineral particles; and solid solution in sulfide minerals, mainly pyrite. In the flotation of gold ores, bulk sulfide mineral flotation is generally applied to maximize gold recovery. This approach gives high gold recoveries, but it also recovers barren sulfide minerals (i.e., sulfide mineral particles with no gold content), which increases concentrate tonnage and transportation costs and can reduce the grade below the saleable limit (approximately 10 g/t Au). This study addresses the differences between gold-bearing and barren pyrite particles taken from various ore deposits and exploits these differences for the selective flotation of gold-bearing pyrite. Laboratory-scale flotation tests conducted on three pyrite samples with different cyanide-soluble gold contents show that selective separation between gold-bearing and barren pyrite particles can be achieved under specific flotation conditions. Gold recovery correlates directly with the cyanide-soluble gold content of the ore samples. Electrochemical experiments were conducted to elucidate the differences in surface properties between the two types of pyrite. The barren pyrite particles were more cathodic and more prone to cathodic reduction of OH− and depressant ions on their surfaces, so they could be depressed effectively without significantly affecting the gold-bearing particles. Full article
(This article belongs to the Special Issue Surface Chemistry and Reagents in Flotation)
Show Figures

Figure 1: BSE images of the gold-bearing particles from the test samples: (a,b) Sample 1; (c) Sample 2; (d) Sample 3.
Figure 2: Optical microscope image of polished mineral electrode surfaces of gold-bearing pyrite (Au-Py).
Figure 3: Flotation of gold-bearing pyrite particles from barren pyrite under different chemical conditions in Sample 1: (a) gold recovery as a function of mass pull; (b) gold grade vs. recovery curves; (c) gold-pyrite selectivity curves; (d) silver grade vs. recovery curves.
Figure 4: Arsenic grade and recovery of the concentrates produced in the presence of oxidizers and MBS in Sample 1.
Figure 5: Flotation performance of gold-bearing pyrite particles under different flotation conditions in Sample 2 and Sample 3: (a) gold grade vs. recovery curves; (b) gold recovery vs. pyrite recovery curves.
Figure 6: Open-circuit potential (OCP) of the pyrite samples as a function of time.
Figure 7: EIS results of Au-Py and B-Py pyrite samples obtained at pH 9.2 and 100 mV polarization potential: (a) Bode magnitude plot; (b) Bode phase-angle plot.
Figure 8: Equivalent electrical circuit for modeling the EIS data [27].
Figure 9: Schematic illustration of the galvanic interaction between gold and pyrite: (a) within a single gold-bearing particle, and (b) when a gold-bearing particle is in contact with a barren pyrite particle.
Figure 10: Relationship between CNSolAu and gold recovery.