Search Results (30)

Search Parameters:
Keywords = NSST

17 pages, 13327 KiB  
Article
Fusion of Infrared and Visible Light Images Based on Improved Adaptive Dual-Channel Pulse Coupled Neural Network
by Bin Feng, Chengbo Ai and Haofei Zhang
Electronics 2024, 13(12), 2337; https://doi.org/10.3390/electronics13122337 - 14 Jun 2024
Cited by 1 | Viewed by 723
Abstract
The pulse-coupled neural network (PCNN), due to its effectiveness in simulating the mammalian visual system to perceive and understand visual information, has been widely applied in the fields of image segmentation and image fusion. To address the issues of low contrast and the loss of detail information in infrared and visible light image fusion, this paper proposes a novel image fusion method based on an improved adaptive dual-channel PCNN model in the non-subsampled shearlet transform (NSST) domain. Firstly, NSST is used to decompose the infrared and visible light images into a series of high-pass sub-bands and a low-pass sub-band, respectively. Next, the PCNN models are stimulated using the weighted sum of the eight-neighborhood Laplacian of the high-pass sub-bands and the energy activity of the low-pass sub-band. The high-pass sub-bands are fused using local structural information as the basis for the linking strength for the PCNN, while the low-pass sub-band is fused using a linking strength based on multiscale morphological gradients. Finally, the fused high-pass and low-pass sub-bands are reconstructed to obtain the fused image. Comparative experiments demonstrate that, subjectively, this method effectively enhances the contrast of scenes and targets while preserving the detail information of the source images. Compared to the best mean values of the objective evaluation metrics of the compared methods, the proposed method shows improvements of 2.35%, 3.49%, and 11.60% in information entropy, mutual information, and standard deviation, respectively. Full article
(This article belongs to the Special Issue Machine Learning Methods for Solving Optical Imaging Problems)
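The sub-band fusion step this abstract describes can be made concrete with a small sketch. The following is a minimal, self-contained illustration of a PCNN-based coefficient fusion rule, not the authors' adaptive dual-channel model: each source sub-band drives a simplified single-channel PCNN with fixed, illustrative parameters, and the fused coefficient is taken pixel-wise from whichever source fires more often.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_fire_counts(S, iterations=110, beta=0.2,
                     alpha_L=1.0, alpha_E=0.3, V_L=1.0, V_E=20.0):
    """Run a simplified PCNN on a sub-band normalized to [0, 1] and
    return per-pixel firing counts (all parameters are illustrative)."""
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])          # 8-neighborhood linking kernel
    L = np.zeros_like(S)                     # linking input
    E = np.ones_like(S)                      # dynamic threshold
    Y = np.zeros_like(S)                     # pulse output
    counts = np.zeros_like(S)
    for _ in range(iterations):
        L = np.exp(-alpha_L) * L + V_L * convolve(Y, W, mode="nearest")
        U = S * (1.0 + beta * L)             # internal activity
        Y = (U > E).astype(S.dtype)
        E = np.exp(-alpha_E) * E + V_E * Y
        counts += Y
    return counts

def fuse_subbands(C1, C2):
    """Keep, at each pixel, the coefficient whose PCNN fires more often."""
    n1 = pcnn_fire_counts(np.abs(C1) / (np.abs(C1).max() + 1e-12))
    n2 = pcnn_fire_counts(np.abs(C2) / (np.abs(C2).max() + 1e-12))
    return np.where(n1 >= n2, C1, C2)
```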
Figures:
Figure 1. Dual-channel PCNN model structure.
Figure 2. The dual-level decomposition process of NSST [4].
Figure 3. Flow chart of the fusion algorithm [17,18].
Figure 4. Source image and extracted images. (a) Infrared and visible light source images; (b) decomposed singular value images.
Figure 5. Source image and extracted images. (a) Infrared and visible light source images; (b) multiscale morphological gradient images.
Figure 6. The first group of original images and six fused images. (a) Visible light image; (b) infrared image; (c) NSST; (d) NSST-DCPCNN; (e) VSM-WLS; (f) NSST-PAPCNN; (g) NSCT-PAUDPCNN; (h) proposed.
Figure 7. The second group of original images and six fused images. (a) Visible light image; (b) infrared image; (c) NSST; (d) NSST-DCPCNN; (e) VSM-WLS; (f) NSST-PAPCNN; (g) NSCT-PAUDPCNN; (h) proposed.
Figure 8. The third group of original images and six fused images. (a) Visible light image; (b) infrared image; (c) NSST; (d) NSST-DCPCNN; (e) VSM-WLS; (f) NSST-PAPCNN; (g) NSCT-PAUDPCNN; (h) proposed.
Figure 9. The fourth group of original images and six fused images. (a) Visible light image; (b) infrared image; (c) NSST; (d) NSST-DCPCNN; (e) VSM-WLS; (f) NSST-PAPCNN; (g) NSCT-PAUDPCNN; (h) proposed.
Figure 10. Comparison of objective evaluation metrics for the ten sets. (a) SF; (b) IE; (c) Q^AB/F; (d) MI; (e) SD.
22 pages, 18573 KiB  
Article
A Multi-Scale Fusion Strategy for Side Scan Sonar Image Correction to Improve Low Contrast and Noise Interference
by Ping Zhou, Jifa Chen, Pu Tang, Jianjun Gan and Hongmei Zhang
Remote Sens. 2024, 16(10), 1752; https://doi.org/10.3390/rs16101752 - 15 May 2024
Viewed by 980
Abstract
Side scan sonar images have great application prospects in underwater surveys, target detection, and engineering activities. However, the acquired sonar images exhibit low illumination, scattered noise, distorted outlines, and unclear edge textures due to the complicated undersea environment and intrinsic device flaws. Hence, this paper proposes a multi-scale fusion strategy for side scan sonar (SSS) image correction to improve the low contrast and noise interference. Initially, an SSS image was decomposed into low and high frequency sub-bands via the non-subsampled shearlet transform (NSST). Then, modified multi-scale retinex (MMSR) was employed to enhance the contrast of the low frequency sub-band. Next, sparse dictionary learning (SDL) was utilized to eliminate high frequency noise. Finally, the process of NSST reconstruction was completed by fusing the emerging low and high frequency sub-band images to generate a new sonar image. The experimental results demonstrate that the target features, underwater terrain, and edge contours could be clearly displayed in the image corrected by the multi-scale fusion strategy when compared to eight correction techniques: BPDHE, MSRCR, NPE, ALTM, LIME, FE, WT, and TVRLRA. Effective control was achieved over the speckle noise of the sonar image. Furthermore, the AG, STD, and E values illustrated the delicacy and contrast of the corrected images processed by the proposed strategy. The PSNR value revealed that the proposed strategy outperformed the advanced TVRLRA technology in terms of filtering performance by at least 8.8%. It can provide sonar imagery that is appropriate for various circumstances. Full article
(This article belongs to the Section Remote Sensing Image Processing)
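The low-frequency enhancement stage builds on retinex theory. As a hedged sketch of that building block, here is the classic multi-scale retinex (not the authors' modified MMSR), which averages log-ratios of the image to Gaussian-smoothed illumination estimates; the scale set (5, 10, 20) mirrors the triple-kernel neighborhood Ω discussed with Figure 8, though treating those values as Gaussian sigmas is an assumption here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(img, sigmas=(5, 10, 20), eps=1e-6):
    """Classic MSR: average, over several Gaussian scales, the log-ratio
    of the image to its smoothed illumination estimate."""
    img = img.astype(np.float64) + eps
    msr = np.zeros_like(img)
    for sigma in sigmas:
        illumination = gaussian_filter(img, sigma) + eps
        msr += np.log(img) - np.log(illumination)
    msr /= len(sigmas)
    # rescale to [0, 1] for display or further fusion
    return (msr - msr.min()) / (msr.max() - msr.min() + eps)
```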
Figures:
Graphical abstract
Figure 1. Classification review of side scan sonar image correction.
Figure 2. Flow chart of the proposed strategy for low illumination enhancement and noise suppression.
Figure 3. Multi-scale decomposition process of NSST with side scan sonar images.
Figure 4. Enhancement process of the low-frequency sub-band image.
Figure 5. Denoising process of the high-frequency sub-band image with sparse dictionary learning.
Figure 6. Measured sonar image set. (a) S1 image; (b) S2 image; (c) S3 image; (d) S4 image.
Figure 7. Correction effect on grayscale and pseudo-color sonar images. (a) Grayscale image effects; (b) pseudo-color image effects.
Figure 8. Low-frequency image enhancement effects in different neighborhood ranges. (a) Single kernel, Ω = [5]; (b) triple kernel, Ω = [5, 10, 20]; (c) triple kernel, Ω = [5, 8, 15].
Figure 9. Low-frequency image enhancement effects under different gamma factors. (a) γ = 1; (b) γ = 1.8; (c) γ = 3.
Figure 10. The correction effects of the various retinex models.
Figure 11. The grayscale distribution characteristics of the retinex variant models. (a) S1 image; (b) S2 image; (c) S3 image; (d) S4 image.
Figure 12. The comparative effects of representative correction techniques.
Figure 13. Correction effects of two filters. (a) Original; (b) guided filtering; (c) ours (bilateral filtering).
Figure 14. The effectiveness of different correction methods for multi-source scenes.
19 pages, 27088 KiB  
Article
Research on Multi-Scale Fusion Method for Ancient Bronze Ware X-ray Images in NSST Domain
by Meng Wu, Lei Yang and Ruochang Chai
Appl. Sci. 2024, 14(10), 4166; https://doi.org/10.3390/app14104166 - 14 May 2024
Cited by 1 | Viewed by 714
Abstract
X-ray imaging is a valuable non-destructive tool for examining bronze wares, but the complexity of their coverings and the limitations of single-energy imaging techniques often obscure critical details, such as lesions and ornamentation. Multiple exposures are therefore required to capture all of the key information about a bronze artifact, which fragments that information and complicates analysis and interpretation. Fusing X-ray images acquired at different energies into a single image with high-performance image fusion technology can effectively solve this problem; however, there is currently no specialized method for fusing images of bronze artifacts. Considering the special requirements of bronze artifact restoration and the existing fusion frameworks, this paper proposes a novel approach that couples multi-scale morphological gradients and local-topology coupled neural P systems within the non-subsampled shearlet transform domain, addressing that gap. The proposed method is compared with eight high-performance fusion methods and validated using six evaluation metrics. The results demonstrate the significant theoretical and practical potential of this method for advancing the analysis and preservation of cultural heritage artifacts. Full article
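Multi-scale morphological gradients (MSMG) serve here as an activity and edge measure for the fusion rule. Below is a minimal sketch under common conventions, assuming square structuring elements and 1/(2s+1) scale weights; the authors' exact variant may differ.

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def multiscale_morph_gradient(img, scales=(1, 2, 3)):
    """Weighted sum of morphological gradients (dilation minus erosion)
    computed with structuring elements of increasing size."""
    img = img.astype(np.float64)
    grad = np.zeros_like(img)
    for s in scales:
        size = 2 * s + 1                       # (2s+1)x(2s+1) square element
        g = grey_dilation(img, size=size) - grey_erosion(img, size=size)
        grad += g / (2.0 * s + 1.0)            # down-weight coarser scales
    return grad / len(scales)
```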
Figures:
Figure 1. Flowchart of the fusion method. A and B are the input images; F is the fusion result image.
Figure 2. Schematic diagram of the three-level NSST decomposition.
Figure 3. Bronze mirror X-ray source images. (a) Clear X-ray image of the rim area; (b) clear X-ray image of the decorative area. (1)–(4) Bronze mirror images for the first to fourth groups, respectively.
Figure 4. Fusion results of the first set of bronze mirror images. (a) LRD; (b) NMP; (c) F-PCNN; (d) LatLRR; (e) IVFusion; (f) PL-NSCT; (g) MMIF; (h) IFE; (i) the proposed method. Blue and yellow represent crack areas, while red represents textured areas.
Figure 5. Fusion results of the second set of bronze mirror images. (a) LRD; (b) NMP; (c) F-PCNN; (d) LatLRR; (e) IVFusion; (f) PL-NSCT; (g) MMIF; (h) IFE; (i) the proposed method. Blue and yellow represent crack areas, while red represents textured areas.
Figure 6. Comparison of methods for unclear presentation of bronze mirror crack information and texture information. (a) The first set of bronze mirrors; (b) the second set of bronze mirrors. Blue and yellow represent crack areas, while red represents textured areas.
Figure 7. Comparison of methods for unclear presentation of crack information in the first group of bronze mirrors. Blue and yellow represent crack areas, while red represents textured areas.
Figure 8. Comparison of methods for unclear presentation of texture information in the second group of bronze mirrors. Blue and yellow represent crack areas, while red represents textured areas.
Figure 9. Fusion results of the third set of bronze mirror images. (a) LRD; (b) NMP; (c) F-PCNN; (d) LatLRR; (e) IVFusion; (f) PL-NSCT; (g) MMIF; (h) IFE; (i) the proposed method. Blue and yellow represent crack areas, while red represents textured areas.
Figure 10. Fusion results of the fourth set of bronze mirror images. (a) LRD; (b) NMP; (c) F-PCNN; (d) LatLRR; (e) IVFusion; (f) PL-NSCT; (g) MMIF; (h) IFE; (i) the proposed method. Blue and yellow represent crack areas, while red represents textured areas.
Figure 11. Comparison of methods for unclear presentation of crack information in the third group of bronze mirrors. Blue and yellow represent crack areas, while red represents textured areas.
Figure 12. Comparison of methods for unclear presentation of crack information in the fourth group of bronze mirrors. Blue and yellow represent crack areas, while red represents textured areas.
Figure 13. Comparison of methods for unclear presentation of texture information in the fourth group of bronze mirrors. Blue and yellow represent crack areas, while red represents textured areas.
Figure 14. Visualization of six evaluation indicators for four sets of bronze mirror images across different fusion methods. (a) The first set of bronze mirror images; (b) the second set; (c) the third set; (d) the fourth set.
16 pages, 13197 KiB  
Article
Roles of Al and Mg on the Microstructure and Corrosion Resistance of Zn-Al-Mg Hot-Dipped Coated Steel
by Taixiong Guo, Yuhao Wang, Liusi Yu, Yongqing Jin, Bitao Zeng, Baojie Dou, Xiaoling Liu and Xiuzhou Lin
Materials 2024, 17(7), 1512; https://doi.org/10.3390/ma17071512 - 27 Mar 2024
Viewed by 1079
Abstract
In this work, a novel zinc–aluminum–magnesium (Zn-Al-Mg, ZM) coated steel was prepared using the hot-dip method, and the microstructure and corrosion resistance of the ZM-coated steel were investigated. Compared to conventional galvanized steel (GI), the ZM coating demonstrated a distinctive phase structure, consisting of the Zn phase, a binary eutectic (Zn/MgZn2), and a ternary eutectic (Zn/Al/MgZn2). The corrosion resistance of the ZM-coated and GI-coated steels was evaluated by the neutral salt spray test (NSST), polarization, and electrochemical impedance spectroscopy (EIS). The results indicated that ZM-coated steel provided superior long-term corrosion protection in a NaCl environment compared to GI-coated steel. The scanning vibrating electrode technique (SVET) proved to be an effective method for investigating the evolution of the anodic and cathodic sites on the local coating surface. GI-coated steel exhibited a potential and current density distribution between the cathodic and anodic sites nearly three orders of magnitude higher than that of ZM-coated steel, suggesting a higher corrosion rate for GI-coated steel. Full article
Figures:
Figure 1. Surface micrographs of GI-coated steel (a,b) and ZM-coated steel (c,d); cross-sections of GI-coated steel (e) and ZM-coated steel (f); and XRD spectra of GI- and ZM-coated steel (g).
Figure 2. Photographs of (a) GI- and (b) ZM-coated steel after different durations of NSST testing.
Figure 3. The weight loss rate of GI- and ZM-coated steel (a) and the weight loss of GI- (b) and ZM-coated (c) steel after different durations of NSST experiments.
Figure 4. Morphologies of the surface of GI- (a,b) and ZM-coated steel (c,d) after 5 days and 40 days of NSST experiments.
Figure 5. Cross-section images and elemental mappings of GI- and ZM-coated steel after 40 days of NSST experiments.
Figure 6. XRD patterns of corrosion products of (a) GI- and (b) ZM-coated steel after 5 d and 40 d of exposure to the salt spray test.
Figure 7. OCP of ZM- and GI-coated steel as a function of immersion time in 3.5 wt.% NaCl solution.
Figure 8. EIS spectra of GI- and ZM-coated steels after 0.5 h (a,b) and 168 h (c,d) of open circuit potential (OCP) immersion in 3.5 wt.% NaCl solution, together with the equivalent circuit diagrams used to analyze the EIS data.
Figure 9. Polarization curves of GI and ZM after 0.5 h and 168 h of immersion in 3.5 wt.% NaCl solution. (a) 0.5 h; (b) 168 h.
Figure 10. The distribution of potential and current density on the surface of GI-coated steel (a,b) and ZM-coated steel (c,d) at the beginning of immersion in 3.5 wt.% NaCl solution, obtained from SVET.
Figure 11. Potential distribution on the surface of GI-coated steel (upper) and ZM-coated steel (bottom) as a function of immersion time in 3.5 wt.% NaCl solution, obtained from SVET.
Figure 12. Current density distribution on the surface of GI-coated steel (upper) and ZM-coated steel (bottom) as a function of immersion time in 3.5 wt.% NaCl solution, obtained from SVET.
23 pages, 11979 KiB  
Article
Multi-Focus Image Fusion via PAPCNN and Fractal Dimension in NSST Domain
by Ming Lv, Zhenhong Jia, Liangliang Li and Hongbing Ma
Mathematics 2023, 11(18), 3803; https://doi.org/10.3390/math11183803 - 5 Sep 2023
Cited by 1 | Viewed by 1030
Abstract
Multi-focus image fusion is a popular technique for generating a full-focus image, where all objects in the scene are clear. In order to achieve a clearer and fully focused fusion effect, in this paper, the multi-focus image fusion method based on the parameter-adaptive pulse-coupled neural network and fractal dimension in the nonsubsampled shearlet transform domain was developed. The parameter-adaptive pulse coupled neural network-based fusion rule was used to merge the low-frequency sub-bands, and the fractal dimension-based fusion rule via the multi-scale morphological gradient was used to merge the high-frequency sub-bands. The inverse nonsubsampled shearlet transform was used to reconstruct the fused coefficients, and the final fused multi-focus image was generated. We conducted comprehensive evaluations of our algorithm using the public Lytro dataset. The proposed method was compared with state-of-the-art fusion algorithms, including traditional and deep-learning-based approaches. The quantitative and qualitative evaluations demonstrated that our method outperformed other fusion algorithms, as evidenced by the metrics data such as QAB/F, QE, QFMI, QG, QNCIE, QP, QMI, QNMI, QY, QAG, QPSNR, and QMSE. These results highlight the clear advantages of our proposed technique in multi-focus image fusion, providing a significant contribution to the field. Full article
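The high-frequency fusion rule uses fractal dimension as a focus and activity measure. Below is a minimal box-counting estimator on a binarized patch, offered as a hedged stand-in for the paper's formulation (which couples fractal dimension with the multi-scale morphological gradient).

```python
import numpy as np

def box_counting_dimension(gray, threshold=0.5):
    """Estimate the fractal (box-counting) dimension of an image patch:
    the slope of log N(s) versus log(1/s). Assumes the binarized patch
    contains at least some foreground pixels."""
    Z = gray > threshold * gray.max()
    n = 2 ** int(np.floor(np.log2(min(Z.shape))))  # largest power-of-2 crop
    Z = Z[:n, :n]
    sizes = 2 ** np.arange(int(np.log2(n)) - 1, 0, -1)
    counts = []
    for s in sizes:
        blocks = Z.reshape(n // s, s, n // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope
```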
Figures:
Figure 1. An example of multi-focus image fusion. (a) Right focus; (b) left focus; (c) NSSTPA [3]; (d) proposed.
Figure 2. An example of NSST decomposition [31].
Figure 3. The structure of the SPCNN model [3].
Figure 4. The structure of the proposed method.
Figure 5. Lytro dataset.
Figure 6. Results of Lytro-01. (a) NSSTPA; (b) PMGI; (c) TLDSR; (d) CSSA; (e) LEGFF; (f) U2Fusion; (g) NSSTDW; (h) ZMFF; (i) proposed.
Figure 7. Results of Lytro-04. (a) NSSTPA; (b) PMGI; (c) TLDSR; (d) CSSA; (e) LEGFF; (f) U2Fusion; (g) NSSTDW; (h) ZMFF; (i) proposed.
Figure 8. Results of Lytro-06. (a) NSSTPA; (b) PMGI; (c) TLDSR; (d) CSSA; (e) LEGFF; (f) U2Fusion; (g) NSSTDW; (h) ZMFF; (i) proposed.
Figure 9. Results of Lytro-07. (a) NSSTPA; (b) PMGI; (c) TLDSR; (d) CSSA; (e) LEGFF; (f) U2Fusion; (g) NSSTDW; (h) ZMFF; (i) proposed.
Figure 10. Results of Lytro-09. (a) NSSTPA; (b) PMGI; (c) TLDSR; (d) CSSA; (e) LEGFF; (f) U2Fusion; (g) NSSTDW; (h) ZMFF; (i) proposed.
Figure 11. The line chart of metrics.
Figure 12. The fusion results of sequence multi-focus image fusion.
Figure 13. Other multi-modal image fusion results. (a) Source 1; (b) source 2; (c) proposed.
21 pages, 4502 KiB  
Article
TDFusion: When Tensor Decomposition Meets Medical Image Fusion in the Nonsubsampled Shearlet Transform Domain
by Rui Zhang, Zhongyang Wang, Haoze Sun, Lizhen Deng and Hu Zhu
Sensors 2023, 23(14), 6616; https://doi.org/10.3390/s23146616 - 23 Jul 2023
Cited by 4 | Viewed by 1262
Abstract
In this paper, a unified optimization model for medical image fusion based on tensor decomposition and the non-subsampled shearlet transform (NSST) is proposed. The model is based on the NSST method and the tensor decomposition method to fuse the high-frequency (HF) and low-frequency (LF) parts of two source images to obtain a mixed-frequency fused image. In general, we integrate low-frequency and high-frequency information from the perspective of tensor decomposition (TD) fusion. Due to the structural differences between the high-frequency and low-frequency representations, potential information loss may occur in the fused images. To address this issue, we introduce a joint static and dynamic guidance (JSDG) technique to complement the HF/LF information. To improve the result of the fused images, we combine the alternating direction method of multipliers (ADMM) algorithm with the gradient descent method for parameter optimization. Finally, the fused images are reconstructed by applying the inverse NSST to the fused high-frequency and low-frequency bands. Extensive experiments confirm the superiority of our proposed TDFusion over other comparison methods. Full article
(This article belongs to the Section Intelligent Sensors)
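Tensor decomposition here operates on stacked sub-band coefficients viewed as a 3-way tensor. As a hedged sketch of the generic building block only, here is a truncated higher-order SVD (Tucker-style), not TDFusion's full optimization with JSDG filtering and ADMM.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def tucker_hosvd(T, ranks):
    """Truncated HOSVD: factor matrices from the leading left singular
    vectors of each unfolding, then the core via mode products."""
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    G = T
    for m, Um in enumerate(U):
        # mode-m product with Um.T, keeping the axis order intact
        G = np.moveaxis(np.tensordot(Um.T, np.moveaxis(G, m, 0), axes=1), 0, m)
    return G, U   # T is approximately G x1 U[0] x2 U[1] x3 U[2]
```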
Figures:
Figure 1. The image fusion process of the TDFusion model. First, the source images S1 and S2 are decomposed into low-frequency parts L1 and L2 and high-frequency parts H1 and H2 by the NSST method. Then, high-frequency H1 and low-frequency L2 are fused by the tensor decomposition method to obtain Ga, and low-frequency L1 and high-frequency H2 are fused by the same method to obtain Gb. Ga and Gb are passed to the JSDG guided filter, where the corresponding information from high-frequency H1 and H2 is added, while the WLE and WSEML methods complete the low-frequency information. Finally, the fused image is reconstructed by performing the inverse NSST on the fused high-frequency Hf and low-frequency Lf.
Figure 2. Intermediate results of the fusion process. (a1,a2) Source images; (b1,b2) slices of the high-frequency maps H1, H2; (c1,c2) low-frequency maps L1, L2; (d1,d2) slices of the mixed-frequency maps.
Figure 3. A comparison between the slices of high-frequency maps U1, U2 (a1,a2), original mixed-frequency maps Ga, Gb (b1,b2), and complete mixed-frequency maps Ua, Ub (c1,c2). The blue and red boxes show the enlarged details of each frequency map.
Figure 4. Visual effects of various fusion methods on T1 and T2 images. Each image has two close-ups. The first two columns of each group are the source images T1 (a1–a3) and T2 (b1–b3). The fused images are obtained by NSST-PAPCNN (c1–c3), NSCT-PCDC (d1–d3), GFF (e1–e3), ASR (f1–f3), CS-MCA (g1–g3), FCFusion (h1–h3), and TDFusion (i1–i3). The red boxes show the enlarged details of the fused results.
Figure 5. Visual effects of various fusion methods on T2 and PD images. Each image has two close-ups. The first two columns of each group are the source images T2 (a1–a3) and PD (b1–b3). The fused images are obtained by NSST-PAPCNN (c1–c3), NSCT-PCDC (d1–d3), GFF (e1–e3), ASR (f1–f3), CS-MCA (g1–g3), FCFusion (h1–h3), and TDFusion (i1–i3). The red boxes show the enlarged details of the fused results.
Figure 6. Visual effects of various fusion methods on CT and MR-T2 images. Each image has two close-ups. The first two columns of each group are the source images CT (a1–a3) and T2 (b1–b3). The fused images are obtained by NSST-PAPCNN (c1–c3), NSCT-PCDC (d1–d3), GFF (e1–e3), ASR (f1–f3), CS-MCA (g1–g3), FCFusion (h1–h3), and TDFusion (i1–i3). The red boxes show the enlarged details of the fused results.
Figure 7. Visual effects of various fusion methods on MRI and PET images. Each image has two close-ups. The first two columns of each group are the source images MRI (a1–a3) and PET (b1–b3). The fused images are obtained by NSCT-PCDC (c1–c3), GFF (d1–d3), ASR (e1–e3), NSST-PAPCNN (f1–f3), FCFusion (g1–g3), and TDFusion (h1–h3). The red boxes show the enlarged details of the fused results.
Figure 8. Visual effects of various fusion methods on MRI and SPECT images. Each image has two close-ups. The first two columns of each group are the source images MRI (a1–a3) and SPECT (b1–b3). The fused images are obtained by GFF (c1–c3), ASR (d1–d3), NSST-PAPCNN (e1–e3), NSCT-PCDC (f1–f3), FCFusion (g1–g3), and TDFusion (h1–h3). The red boxes show the enlarged details of the fused results.
Figure 9. The convergence chain diagram of grayscale fusion and color fusion. From Equation (10), the converged value of F for grayscale fusion and color fusion is presented in (a) and (b), respectively.
Figure 10. Ablation experiment results. The indicators are ChenBlum, FM-pixel, MS-SSIM, NCC, Q_AB/F, SF, and STD.
13 pages, 247 KiB  
Article
Survivors of Commercial Sexual Exploitation Involved in the Justice System: Mental Health Outcomes, HIV/STI Risks, and Perceived Needs to Exit Exploitation and Facilitate Recovery
by Arduizur Carli Richie-Zavaleta, Edina Butler, Kathi Torres and Lianne A. Urada
Sexes 2023, 4(2), 256-268; https://doi.org/10.3390/sexes4020017 - 13 Apr 2023
Viewed by 2061
Abstract
This exploratory retrospective study analyzes the emotional and mental processes, risk behavior for HIV/STIs, and the services needed to exit commercial sexual exploitation. Participants were court-referred to the local survivor-led program, Freedom from Exploitation, in southern California. Data were collected (N = 168) using an intake assessment form over a period of five years (2015–2020). Two groups were identified in the data: self-identified survivors of sex trafficking (SST) and non-self-identified survivors of sex trafficking (NSST). Bivariate and multivariate logistic regressions examined the associations of HIV/STI risks and emotional and mental processes with these two subgroups. Findings demonstrated that both groups experienced gender-based violence and similar emotional and mental processes as well as HIV/STI risks. However, in adjusted models, the SST group had three times the odds of experiencing abuse by a sex buyer when asked to use a condom and eight times the odds of feeling hopeless or desperate and experiencing nightmares/flashbacks, among other negative mental health outcomes. Both SST and NSST participants said they needed assistance to obtain legal services and complete a high school equivalency credential, among other services. Findings may be used by social service and law enforcement agencies to better assist survivors of sex trafficking and similar groups in supporting their rehabilitation and protection. Full article
(This article belongs to the Special Issue Exclusive Papers Collection of the Editorial Board of Sexes)
15 pages, 2065 KiB  
Article
Multimodality Medical Image Fusion Using Clustered Dictionary Learning in Non-Subsampled Shearlet Transform
by Manoj Diwakar, Prabhishek Singh, Ravinder Singh, Dilip Sisodia, Vijendra Singh, Ankur Maurya, Seifedine Kadry and Lukas Sevcik
Diagnostics 2023, 13(8), 1395; https://doi.org/10.3390/diagnostics13081395 - 12 Apr 2023
Cited by 7 | Viewed by 2099
Abstract
Imaging data fusion is becoming a bottleneck in clinical applications and translational research in medical imaging. This study aims to incorporate a novel multimodality medical image fusion technique into the shearlet domain. The proposed method uses the non-subsampled shearlet transform (NSST) to extract both low- and high-frequency image components. A novel approach is proposed for fusing the low-frequency components using a modified sum-modified Laplacian (MSML)-based clustered dictionary learning technique. In the NSST domain, directed contrast is used to fuse the high-frequency coefficients, and the fused multimodal medical image is obtained via the inverse NSST. Compared to state-of-the-art fusion techniques, the proposed method provides superior edge preservation. According to performance metrics, it is approximately 10% better than existing methods in terms of standard deviation, mutual information, and related measures. Additionally, the proposed method produces excellent visual results with respect to edge preservation, texture preservation, and information content. Full article
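The low-frequency rule rests on dictionary learning over image patches. Below is a hedged scikit-learn sketch of plain patch-based dictionary learning with OMP sparse coding; the MSML-based clustering of the proposed method is omitted, and the patch size and atom count are illustrative.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def learn_patch_dictionary(img, patch=(8, 8), n_atoms=64):
    """Learn a sparse dictionary over mean-removed image patches and
    return the atoms plus the sparse codes of the sampled patches."""
    P = extract_patches_2d(img, patch, max_patches=5000, random_state=0)
    P = P.reshape(len(P), -1).astype(np.float64)
    P -= P.mean(axis=1, keepdims=True)          # remove patch DC component
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=5,
                                       random_state=0)
    codes = dico.fit(P).transform(P)
    return dico.components_, codes
```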
Figures:
Figure 1. Proposed framework for multimodality medical image fusion.
Figure 2. Results of multimodality medical image fusion. (a) Input multimodality medical image 1; (b) input multimodality medical image 2; (c) Zhang et al. [12]; (d) Ramlal et al. [13]; (e) Dogra et al. [14]; (f) Ullah et al. [15]; (g) Huang et al. [16]; (h) Liu et al. [17]; (i) Mehta et al. [18]; (j) proposed method.
Figure 3. Results of multimodality medical image fusion. (a) Input multimodality medical image 1; (b) input multimodality medical image 2; (c) Zhang et al. [12]; (d) Ramlal et al. [13]; (e) Dogra et al. [14]; (f) Ullah et al. [15]; (g) Huang et al. [16]; (h) Liu et al. [17]; (i) Mehta et al. [18]; (j) proposed method.
Figure 4. Results of multimodality medical image fusion. (a) Input multimodality medical image 1; (b) input multimodality medical image 2; (c) Zhang et al. [12]; (d) Ramlal et al. [13]; (e) Dogra et al. [14]; (f) Ullah et al. [15]; (g) Huang et al. [16]; (h) Liu et al. [17]; (i) Mehta et al. [18]; (j) proposed method.
Figure 5. Zoomed results of multimodality medical image fusion. (a) Input multimodality medical image 1; (b) input multimodality medical image 2; (c) Zhang et al. [12]; (d) Ramlal et al. [13]; (e) Dogra et al. [14]; (f) Ullah et al. [15]; (g) Huang et al. [16]; (h) Liu et al. [17]; (i) Mehta et al. [18]; (j) proposed method.
16 pages, 6327 KiB  
Article
Research on Multi-Scale Feature Extraction and Working Condition Classification Algorithm of Lead-Zinc Ore Flotation Foam
by Xiaoping Jiang, Huilin Zhao, Junwei Liu, Suliang Ma and Mingzhen Hu
Appl. Sci. 2023, 13(6), 4028; https://doi.org/10.3390/app13064028 - 22 Mar 2023
Cited by 1 | Viewed by 1564
Abstract
To address the problems of difficult online monitoring, low recognition efficiency, and the subjectivity of working condition identification in mineral flotation processes, a foam flotation performance state recognition method is developed. This method combines multi-dimensional CNN (convolutional neural network) features and improved LBP (local binary pattern) features. We divided the foam flotation conditions into six categories. First, the multi-directional and multi-scale selectivity and anisotropy of the nonsubsampled shearlet transform (NSST) are used to decompose the flotation foam images at multiple frequency scales, and a multi-channel CNN network is designed to extract static features from the images at different frequencies. Then, the flotation video image sequences are rotated and dynamic features are extracted by LBP-TOP (local binary patterns from three orthogonal planes), and the CNN-extracted static picture features are fused with the LBP dynamic video features. Finally, classification decisions are made by a PSO-RVFLNs (particle swarm optimization-random vector functional link networks) algorithm to accurately identify the foam flotation performance states. Experimental results show that the detection accuracy of the new method is improved by 4.97% and 6.55% compared to the single CNN algorithm and the traditional LBP algorithm, respectively. The accuracy of flotation performance state classification was as high as 95.17%, and the method reduced manual intervention, thus improving production efficiency. Full article
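LBP-TOP generalizes ordinary local binary patterns by computing them on the XY, XT, and YT planes of the video volume and concatenating the histograms. Below is a single-plane building block with scikit-image; the parameters are illustrative.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1.0):
    """Rotation-invariant uniform LBP histogram of one grayscale plane;
    LBP-TOP would concatenate these over the XY, XT, and YT planes."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2                      # uniform patterns plus "other"
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins),
                           density=True)
    return hist
```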
Figures:
Figure 1. Foam images from six performance states.
Figure 2. Three-stage NSST multiscale decomposition of a bubble image.
Figure 3. Working condition 1: three-dimensional display of the bubble image.
Figure 4. NSST-CNN network feature extraction model.
Figure 5. (a) Schematic diagram of the three orthogonal planes; (b) each plane extends the neighborhood.
Figure 6. Schematic diagram of LBP-TOP feature extraction. (a) An image in the XY plane; (b) an image in the XT plane, giving a visual impression of a row over time; (c) the movement of a column in time space.
Figure 7. PSO-RVFLNS algorithm flow.
Figure 8. KRVFLNS condition recognition model combining CNN and LBP features. (a) NSST and RILBP feature map sequence extraction; (b) multi-dimensional feature map sequence extraction; (c) feature vector extraction.
Figure 9. Recognition accuracy under different LBP parameters.
Figure 10. The processing effect of different values.
Figure 11. LBP-TOP texture feature diagram.
Figure 12. Visualization results of CNN features.
Figure 13. Curve of the loss function.
Figure 14. Accuracy of different activation functions.
Figure 15. RVFLNS test output.
Figure 16. PSO-RVFLNS test output.
Figure 17. Performance recognition results of three modes. (a) Foam flotation condition identification combining multi-scale CNN and LBP features; (b) multi-scale NSST-CNN condition identification; (c) LBP-TOP condition identification.
18 pages, 5006 KiB  
Article
Classification of Mineral Foam Flotation Conditions Based on Multi-Modality Image Fusion
by Xiaoping Jiang, Huilin Zhao and Junwei Liu
Appl. Sci. 2023, 13(6), 3512; https://doi.org/10.3390/app13063512 - 9 Mar 2023
Cited by 3 | Viewed by 1427
Abstract
Accurate and rapid identification of mineral foam flotation states can increase mineral utilization and reduce the consumption of reagents. The traditional flotation process concentrates on extracting foam features from a single-modality foam image, and the accuracy is undesirable once problems such as insufficient image clarity or poor foam boundaries are encountered. In this work, a classification method based on multi-modality image fusion and CNN-PCA-SVM is proposed for working condition recognition of visible and infrared gray foam images. Specifically, the visible and infrared gray images are fused in the non-subsampled shearlet transform (NSST) domain, using the parameter-adaptive pulse-coupled neural network (PAPCNN) method for the high frequencies and an image quality detection method for the low frequencies. The convolutional neural network (CNN) is used as a trainable feature extractor to process the fused foam images, principal component analysis (PCA) reduces the dimensionality of the feature data, and the support vector machine (SVM) is used as a recognizer to classify the foam flotation condition. Experiments show that this model can fuse the foam images and recognize the flotation condition with high accuracy. Full article
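The recognition tail of the pipeline chains PCA reduction and an SVM over CNN feature vectors. Below is a hedged scikit-learn sketch, assuming `features` is an (n_samples, n_features) matrix of CNN activations and `labels` holds the six condition classes; the hyperparameters are illustrative.

```python
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def make_condition_classifier(n_components=50):
    """PCA-reduce CNN feature vectors, then classify with an RBF SVM."""
    return make_pipeline(StandardScaler(),
                         PCA(n_components=n_components),
                         SVC(kernel="rbf", C=10.0, gamma="scale"))

# Usage sketch:
# clf = make_condition_classifier().fit(features, labels)
# predictions = clf.predict(test_features)
```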
Figures:
Figure 1. Classification diagram of foam flotation conditions.
Figure 2. Diagram of NSST three-level decomposition.
Figure 3. The architecture of the PAPCNN model.
Figure 4. Foam image 3D display diagram.
Figure 5. Schematic of the foam image fusion method.
Figure 6. CNN model for feature extraction.
Figure 7. Structure of the CNN-PCA-SVM model.
Figure 8. Schematic of the foam image fusion and classification.
Figure 9. The influence of parameter K on the performance of the fused image.
Figure 10. The influence of parameter N on the performance of the fused images.
Figure 11. Image indexes of different fusion methods.
Figure 12. Relationship between PCA dimensions and accuracy.
Figure 13. Confusion matrix of the 10-fold cross-validation mode.
Figure 14. Confusion matrix of the training dataset.
Figure 15. Confusion matrix of the testing dataset.
Figure 16. VGG16-PCA-SVM confusion matrix of the blind dataset.
14 pages, 8261 KiB  
Article
Panchromatic and Multispectral Image Fusion Combining GIHS, NSST, and PCA
by Lina Xu, Guangqi Xie and Sitong Zhou
Appl. Sci. 2023, 13(3), 1412; https://doi.org/10.3390/app13031412 - 20 Jan 2023
Cited by 3 | Viewed by 1645
Abstract
Spatial and spectral information are essential sources of information in remote sensing applications, and the fusion of panchromatic and multispectral images effectively combines the advantages of both. Due to the existence of two main classes of fusion methods—component substitution (CS) and multi-resolution analysis (MRA), which have different advantages—mixed approaches are possible. This paper proposes a fusion algorithm that combines the advantages of generalized intensity–hue–saturation (GIHS) and non-subsampled shearlet transform (NSST) with principal component analysis (PCA) technology to extract more spatial information. Therefore, compared with the traditional algorithms, the algorithm in this paper uses PCA transformation to obtain spatial structure components from PAN and MS, which can effectively inject spatial information while maintaining spectral information with high fidelity. First, PCA is applied to each band of low-resolution multispectral (MS) images and panchromatic (PAN) images to obtain the first principal component and to calculate the intensity of MS. Then, the PAN image is fused with the first principal component using NSST, and the fused image is used to replace the original intensity component. Finally, a fused image is obtained using the GIHS algorithm. Using the urban, plants and water, farmland, and desert images from GeoEye-1, WorldView-4, GaoFen-7 (GF-7), and Gaofen Multi-Mode (GFDM) as experimental data, this fusion method was tested using the evaluation mode with references and the evaluation mode without references and was compared with five other classic fusion algorithms. The results showed that the algorithms in this paper had better fusion performances in both spectral preservation and spatial information incorporation. Full article
(This article belongs to the Special Issue Recent Advances in Image Processing)
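The PCA leg of the hybrid can be sketched as plain component substitution: project the upsampled MS bands onto principal components, swap the first component for a histogram-matched PAN band, and invert. This omits the NSST fusion of PC1 with PAN and the GIHS step that the paper adds.

```python
import numpy as np

def pca_pansharpen(ms, pan):
    """Component-substitution sketch. `ms` is an (H, W, B) multispectral
    cube already resampled to the PAN grid; `pan` is (H, W)."""
    H, W, B = ms.shape
    X = ms.reshape(-1, B).astype(np.float64)
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    pcs = (X - mu) @ Vt.T                    # principal component scores
    p = pan.reshape(-1).astype(np.float64)
    # match PAN's mean/std to PC1 before substituting it
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = p
    return (pcs @ Vt + mu).reshape(H, W, B)
```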
Figures:
Figure 1. Hybrid model of the CS method and MRA method.
Figure 2. Flowchart of the proposed method.
Figure 3. Fusion flowchart of PC1 and the PAN image using the NSST transform.
Figure 4. Fusion results for the GE-1 images using different methods.
Figure 5. Fusion results for the WV-4 images using different methods.
Figure 6. Fusion results for the GF-7 images using different methods.
Figure 7. Fusion results for the GFDM images using different methods.
19 pages, 3522 KiB  
Article
SI2FM: SID Isolation Double Forest Model for Hyperspectral Anomaly Detection
by Zhenhua Mu, Ming Wang, Yihan Wang, Ruoxi Song and Xianghai Wang
Remote Sens. 2023, 15(3), 612; https://doi.org/10.3390/rs15030612 - 20 Jan 2023
Cited by 1 | Viewed by 1766
Abstract
Hyperspectral image (HSI) anomaly detection (HSI-AD) has become a hot issue in hyperspectral information processing, as it detects undesired targets without a priori information about the background or the targets and can therefore be better adapted to the needs of practical applications. However, the demanding detection environment, with no priors and small targets, as well as the large data volume and high redundancy of HSI itself, make the study of HSI-AD very challenging. In this paper, we propose an HSI-AD method based on spectral information divergence isolation double forests (SI2FM) in the nonsubsampled shearlet transform (NSST) domain. The method exploits the intrinsic correlation properties between the NSST subband coefficients of the HSI in two ways to provide synergistic constraints and guidance on the prediction of abnormal target coefficients. On the one hand, with the "difference band" as a guide, global and local isolation forest models are constructed from the spectral information divergence (SID) attribute values of the difference band and the low-frequency and high-frequency subbands, and anomaly scores are determined by evaluating the path lengths of the nodes of the isolation binary trees in the forest, yielding a progressively optimized anomaly detection map. On the other hand, based on the relationship of the NSST high-frequency subband coefficients across the spatial-spectral dimensions, a three-dimensional forest structure is constructed to jointly optimize the multiple anomaly detection maps obtained from the isolation forests. Finally, the guidance of the difference band suppresses background noise and anomaly interference to a certain extent, enhancing the separability of target and background. This two-branch collaborative optimization, based on mining the correlations of the NSST subband coefficients of the HSI, gradually improves the prediction of anomaly sample coefficients from multiple perspectives, which effectively improves the accuracy of anomaly detection. The effectiveness of the algorithm is verified on real hyperspectral datasets captured in four different scenes against eleven typical anomaly detection algorithms currently available. Full article
(This article belongs to the Special Issue Hyperspectral Remote Sensing Imaging and Processing)
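An isolation forest scores anomalies by how few random splits it takes to isolate a sample. As a hedged baseline sketch with scikit-learn, each pixel spectrum is treated as one sample; the paper instead grows global and local forests on SID attribute values of NSST sub-bands and adds a spatial-spectral forest.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def hsi_anomaly_scores(cube, n_trees=100, subsample=256):
    """Per-pixel anomaly scores for an (H, W, B) hyperspectral cube."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(np.float64)
    forest = IsolationForest(n_estimators=n_trees,
                             max_samples=subsample,
                             random_state=0).fit(X)
    # score_samples is higher for normal points, so negate it
    return (-forest.score_samples(X)).reshape(H, W)
```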
Figures:
Figure 1. The flow chart of the NSST transform.
Figure 2. Diagram of the splitting point. (a) Anomaly splitting; (b) normal point splitting.
Figure 3. Flow chart of the proposed SI2FM HSI-AD framework.
Figure 4. NSST subbands and difference band in the HSI band. (a) HSI band; (b) low-frequency subband; (c) difference band; (d) anomaly detection reference map; (e) high-frequency subband in direction 1; (f) high-frequency subband in direction 2; (g) high-frequency subband in direction 3; (h) high-frequency subband in direction 4.
Figure 5. Example illustrating the structure of an iTree.
Figure 6. Illustration of NSST high-frequency subband spatial-spectral dimensional forest construction.
Figure 7. Experimental dataset.
Figure 8. Influence of different parameters on SI2FM detection performance. (a) The number of NSST decomposition layers; (b) the isolation forest binary tree sub-sampling size; (c) the number of isolation binary trees.
Figure 9. HSI-AD maps on four datasets. (a) RX; (b) LRX; (c) UNRS; (d) CRD; (e) LSAD; (f) LSUNRSORAD; (g) LSAD-CR-IDW; (h) CRDBPSW; (i) FEBPAD; (j) VABS; (k) KIFD; (l) SI2FM.
Figure 10. ROC curves and target-background separation boxplots on four datasets. (a) ROC curves; (b) target-background separation boxplots.
21 pages, 11923 KiB  
Article
A Remote Sensing Image Fusion Method Combining Low-Level Visual Features and Parameter-Adaptive Dual-Channel Pulse-Coupled Neural Network
by Zhaoyang Hou, Kaiyun Lv, Xunqiang Gong and Yuting Wan
Remote Sens. 2023, 15(2), 344; https://doi.org/10.3390/rs15020344 - 6 Jan 2023
Cited by 7 | Viewed by 1705
Abstract
Remote sensing image fusion can effectively solve the inherent contradiction between spatial resolution and spectral resolution of imaging systems. At present, the fusion methods of remote sensing images based on multi-scale transform usually set fusion rules according to local feature information and pulse-coupled [...] Read more.
Remote sensing image fusion can effectively resolve the inherent contradiction between the spatial and spectral resolutions of imaging systems. At present, fusion methods for remote sensing images based on multi-scale transforms usually set fusion rules according to local feature information and a pulse-coupled neural network (PCNN), but these methods have several problems: a single local feature used as the fusion rule cannot effectively extract feature information, PCNN parameter setting is complex, and spatial correlation is poor. To this end, this paper proposes a fusion method for remote sensing images that combines low-level visual features and a parameter-adaptive dual-channel pulse-coupled neural network (PADCPCNN) in the non-subsampled shearlet transform (NSST) domain. In the low-frequency sub-band fusion process, a low-level visual feature fusion rule is constructed by combining three local features (local phase congruency, local abrupt measure, and local energy information) to enhance the extraction of feature information. In the high-frequency sub-band fusion process, the structure and parameters of the dual-channel pulse-coupled neural network (DCPCNN) are optimized: (1) the multi-scale morphological gradient is used as the external stimulus to enhance the spatial correlation of the DCPCNN; and (2) the parameters are set adaptively from the difference box-counting, the Otsu threshold, and the image intensity, removing the complexity of manual parameter setting. Five sets of remote sensing images from different satellite platforms and ground objects are selected for the experiments. The proposed method is compared with 16 other methods and evaluated qualitatively and quantitatively. The experimental results show that, compared with the average value of the sub-optimal method over the five sets of data, the proposed method improves information entropy, mutual information, average gradient, spatial frequency, spectral distortion, ERGAS, and visual information fidelity by 0.006, 0.009, 0.009, 0.035, 0.037, 0.042, and 0.020, respectively, indicating that the proposed method achieves the best fusion effect. Full article
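The dual-channel PCNN that fuses the high-frequency sub-bands can be sketched as a simple iterative model. The code below is a toy sketch under stated assumptions, not the paper's PADCPCNN: the linking kernel and the constants beta, alpha_theta, and v_theta are fixed illustrative values rather than the adaptive ones derived from the difference box-counting, the Otsu threshold, and the image intensity, and coefficient magnitudes replace the multi-scale morphological gradient stimulus.

```python
import numpy as np
from scipy.ndimage import correlate

def dcpcnn_fuse(s1, s2, beta=0.5, alpha_theta=0.2, v_theta=20.0, n_iter=110):
    # s1, s2: float arrays of high-frequency sub-band coefficients to fuse.
    a1, a2 = np.abs(s1), np.abs(s2)   # nonnegative stimuli (stand-in for the
                                      # paper's multi-scale morphological gradient)
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])        # illustrative linking weights
    y = np.zeros_like(s1)                       # pulse output
    theta = np.ones_like(s1)                    # dynamic threshold
    fused = np.zeros_like(s1)
    fired = np.zeros(s1.shape, dtype=bool)
    for _ in range(n_iter):
        link = correlate(y, kernel, mode="nearest")   # linking input from neighbors
        u1 = a1 * (1.0 + beta * link)                 # channel-1 internal activity
        u2 = a2 * (1.0 + beta * link)                 # channel-2 internal activity
        u = np.maximum(u1, u2)
        y = (u > theta).astype(s1.dtype)              # neurons fire above threshold
        newly = (y > 0) & ~fired
        # at first firing, keep the coefficient of the stronger channel
        fused[newly] = np.where(u1[newly] >= u2[newly], s1[newly], s2[newly])
        fired |= newly
        theta = np.exp(-alpha_theta) * theta + v_theta * y  # decay + refractory jump
    # fallback for pixels that never fired (zero stimulus in both channels)
    fused[~fired] = np.where(a1[~fired] >= a2[~fired], s1[~fired], s2[~fired])
    return fused
```

The threshold decay guarantees that every stimulated pixel eventually fires; the firing order, shaped by the linking term, is what injects spatial correlation into the channel selection.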
Show Figures

Graphical abstract
Figure 1. Framework of the proposed NSST-LLVF-PADCPCNN method.
Figure 2. Architecture of the DCPCNN model.
Figure 3. Flowchart of the method design.
Figure 4. Five sets of multispectral and panchromatic images. (a1) PAN image of QuickBird; (a2) MS image of QuickBird; (b1) PAN image of SPOT-6; (b2) MS image of SPOT-6; (c1) PAN image of WorldView-2; (c2) MS image of WorldView-2; (d1) PAN image of WorldView-3; (d2) MS image of WorldView-3; (e1) PAN image of Pleiades; (e2) MS image of Pleiades.
Figure 5. Fusion effect of QuickBird image.
Figure 6. Error images corresponding to QuickBird images.
Figure 7. Fusion effect of SPOT-6 image.
Figure 8. Error images corresponding to SPOT-6 images.
Figure 9. Fusion effect of WorldView-2 image.
Figure 10. Error images corresponding to WorldView-2 images.
Figure 11. Fusion effect of WorldView-3 image.
Figure 12. Error images corresponding to WorldView-3 images.
Figure 13. Fusion effect of Pleiades image.
Figure 14. Error images corresponding to Pleiades images.
17 pages, 5520 KiB  
Article
Evaluation of the Long-Term Performance of Marine and Offshore Coatings System Exposed on a Traditional Stationary Site and an Operating Ship and Its Correlation to Accelerated Test
by Krystel Pélissier, Nathalie Le Bozec, Dominique Thierry and Nicolas Larché
Coatings 2022, 12(11), 1758; https://doi.org/10.3390/coatings12111758 - 16 Nov 2022
Cited by 6 | Viewed by 2657
Abstract
Anticorrosive coatings are widely used to protect steel against corrosion. Different standards exist to assess the corrosion performance of anticorrosive paints. Among them, the so-called neutral salt spray test (NSST-ISO 9227) or cycling corrosion tests ISO 12944-6, ISO 12944-9, NACE TM0304, or NACE [...] Read more.
Anticorrosive coatings are widely used to protect steel against corrosion. Different standards exist to assess the corrosion performance of anticorrosive paints; among them are the neutral salt spray test (NSST, ISO 9227) and cyclic corrosion tests such as ISO 12944-6, ISO 12944-9, NACE TM0304, and NACE TM0404. It is well known that some accelerated corrosion tests are not fully representative of field exposure results; however, the literature lacks studies correlating accelerated tests with field exposure, especially over long durations. In this study, 11 different organic coating systems were investigated in terms of their resistance to corrosion creep at two types of field exposure sites, namely a stationary site and an operating ship, and their performance was compared with two accelerated tests (ISO 12944-9 and a modified ASTM D5894 standard). The results showed differences in the corrosivity of the sites and in the performance of the coating systems as a function of the exposure site. A lack of correlation was found between the ISO 12944-9 standard and the stationary site, due to the latter's high corrosivity, while a satisfactory correlation with the operating ship was demonstrated; the modified ASTM D5894 standard showed a satisfactory correlation with both types of sites. Full article
(This article belongs to the Section Corrosion, Wear and Erosion)
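The correlation analysis underlying these conclusions is a linear regression of per-system scribe creep measured in an accelerated test against the same quantity measured in the field, with outlier systems optionally excluded before refitting. A minimal sketch is below; the arrays are hypothetical placeholder numbers, not measurements from the study.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical maximum scribe creep (mm) for eight coating systems:
accelerated = np.array([1.2, 2.8, 0.9, 3.5, 1.7, 2.1, 4.0, 1.1])  # lab test
field = np.array([2.0, 4.9, 1.5, 6.2, 2.9, 3.6, 7.1, 1.8])        # site exposure

fit = linregress(accelerated, field)
print(f"slope={fit.slope:.2f}  intercept={fit.intercept:.2f}  R^2={fit.rvalue**2:.3f}")

# Excluding an outlier system (as done for S7 in the paper) before refitting:
keep = np.ones(field.size, dtype=bool)
keep[3] = False                      # hypothetical outlier index
fit2 = linregress(accelerated[keep], field[keep])
print(f"without outlier: R^2={fit2.rvalue**2:.3f}")
```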
Show Figures

Figure 1. Photograph of the coated samples exposed on the Enez Sun.
Figure 2. Schematic representation of (a) the scribe and (b) the scribe evaluation.
Figure 3. Maximum scribe creep before coating removal (M1) as a function of the exposure site after 6 years of exposure for (a) zinc-rich primers and (b) barrier-type primers.
Figure 4. Schematic representation of the protective mechanism of a zinc-rich primer in the presence of a scribe.
Figure 5. Evolution of the total area of underfilm corrosion as a function of the time and site of exposure for (a) zinc-rich primers and (b) barrier-type primers.
Figure 6. Photographs of the scribe creep before coating removal after 1 year and 6 years of exposure.
Figure 7. Box plot of the maximum scribe creep (M1) for the Brest and Enez Sun sites as a function of the time of exposure for (a) zinc-rich primers and (b) barrier-type primers.
Figure 8. Photographs after 6 years of exposure in (a) S1 (Zn ethyl silicate) for Brest, (b) S6 (epoxy aluminium) for Brest, (c) S10 (polyamine epoxy) for Brest, (d) S1 (Zn ethyl silicate) for Enez Sun, (e) S6 (epoxy aluminium) for Enez Sun, and (f) S10 (polyamine epoxy) for Enez Sun.
Figure 9. Linear regression plot between the maximum scribe creep before coating removal (M1) for Brest and Enez Sun. The red plot is the linear regression with S7 excluded.
Figure 10. Maximum scribe creep before coating removal (M1) for ISO 12944-9 after 6 months of testing and for the modified ASTM D5894 after 3 months of testing.
Figure 11. Linear regression plots between the maximum scribe creep for (a) ISO 12944-9 and Brest (6 years), (b) ASTM D5894 and Brest (6 years), (c) ISO 12944-9 and Enez Sun (6 years), and (d) ASTM D5894 and Enez Sun (6 years).
Figure 12. Evolution of the coefficient of correlation obtained after linear regression as a function of the time of exposure for (a) the two exposure sites and the ISO 12944-9 standard and (b) the two exposure sites and ASTM D5894. For the ISO 12944-9 standard, S7 was excluded for Brest, and S7 and S11 were excluded for Enez Sun. For the modified ASTM D5894 standard, S10 and S9 were excluded for Brest, and S9 was excluded for Enez Sun.
13 pages, 13090 KiB  
Article
Seismic Coherent Noise Removal of Source Array in the NSST Domain
by Minghao Yu, Xiangbo Gong and Xiaojie Wan
Appl. Sci. 2022, 12(21), 10846; https://doi.org/10.3390/app122110846 - 26 Oct 2022
Cited by 2 | Viewed by 1452
Abstract
The technique of the source array based on the vibroseis can provide the strong energy of a seismic wave field, which better meets the need for seismic exploration. The seismic coherent noise reduces the signal-to-noise ratio (SNR) of the source array seismic data [...] Read more.
The source array technique based on the vibroseis can provide a seismic wave field with strong energy, which better meets the needs of seismic exploration. However, seismic coherent noise reduces the signal-to-noise ratio (SNR) of source array seismic data and affects subsequent seismic data processing. Traditional coherent noise removal methods often damage the effective signal while suppressing the coherent noise, or fail to suppress the interference waves effectively at all. Based on the multi-scale and multi-directional properties of the non-subsampled shearlet transform (NSST) and its simple mathematical structure, a coherent noise removal method for source array seismic data in the NSST domain is proposed. The method is applied to both synthetic and field seismic data. After processing, the coherent noise in the seismic data is largely removed while the effective signal information is well preserved. Analysis of the results demonstrates the effectiveness and practicability of the proposed method for coherent noise attenuation. Full article
(This article belongs to the Special Issue Technological Advances in Seismic Data Processing and Imaging)
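In outline, the denoising reduces to attenuating coefficients in the NSST subbands that carry the coherent noise. The sketch below assumes user-supplied forward and inverse transforms (nsst_decompose and nsst_reconstruct are hypothetical callables; no standard Python NSST package is presumed) and a list of (scale, direction) subbands identified as noisy; the MAD-based muting rule is one plausible stand-in for the paper's exact coefficient processing.

```python
import numpy as np

def remove_coherent_noise(gather, nsst_decompose, nsst_reconstruct,
                          noisy_subbands, k=3.0):
    # gather: 2-D seismic data (time samples x traces).
    # nsst_decompose(gather) -> coeffs, with coeffs[scale][direction] a 2-D array.
    coeffs = nsst_decompose(gather)
    for scale, direction in noisy_subbands:
        band = coeffs[scale][direction]
        sigma = np.median(np.abs(band)) / 0.6745   # robust noise-level estimate
        band[np.abs(band) > k * sigma] = 0.0       # mute strong coherent energy
        coeffs[scale][direction] = band
    return nsst_reconstruct(coeffs)                # inverse NSST of edited coeffs
```

Because the coherent noise concentrates in a few directional subbands while reflections spread across many, muting only those subbands leaves most of the effective signal untouched.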
Show Figures

Figure 1. Frequency domain subdivision and its support interval.
Figure 2. The decomposition process of the NSST.
Figure 3. Flowchart of the seismic coherent noise removal method in the NSST domain.
Figure 4. Processing results of the synthetic seismic data by the proposed method. (a) The original synthetic seismic data; (b) The removed noise; (c) The seismic data after denoising.
Figure 5. NSST coefficients after decomposition. (a) The approximate NSST coefficients; (b) The detail NSST coefficients at scale 2, 2 directions; (c) The detail NSST coefficients at scale 1, 4 directions.
Figure 6. Overview of the field exploration area.
Figure 7. Processing results of the field seismic data using the proposed method. (a) The field seismic data; (b) The noise removed by the NSST; (c) The seismic data after the NSST; (d) The noise removed by the FK filter; (e) The seismic data after the FK filter.
Figure 8. The main NSST coefficients containing the coherent noise. (a) The detail NSST coefficients at scale 2, 2 directions; (b) The detail NSST coefficients at scale 1, 3 directions.
Figure 9. Waveform and amplitude spectra of the seismic data. (a) The waveform of trace 80 of the original seismic data (green), the data denoised by the NSST (blue), and the data denoised by the FK filter (red); (b) The corresponding amplitude spectra of trace 80; (c) The average amplitude spectra of the original data (green), the NSST-denoised data (blue), and the FK-filtered data (red).
Figure 10. Results of applying the f-k transform to the original and denoised seismic data. (a) The original data; (b) The seismic data after the NSST; (c) The seismic data after the FK filter.