Article

Lightweight Cross-Modal Information Mutual Reinforcement Network for RGB-T Salient Object Detection

1 School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China
2 Lishui Institute, Hangzhou Dianzi University, Lishui 323000, China
* Authors to whom correspondence should be addressed.
Entropy 2024, 26(2), 130; https://doi.org/10.3390/e26020130
Submission received: 14 December 2023 / Revised: 26 January 2024 / Accepted: 29 January 2024 / Published: 31 January 2024
(This article belongs to the Special Issue Methods in Artificial Intelligence and Information Processing II)
Figure 1
Some examples of RGB-T datasets. (a) Ours. (b) PCNet. (c) TAGF.
Figure 2
Overall architecture of our lightweight cross-modal information mutual reinforcement network for RGB-T salient object detection. 'E1∼E5' are the five modules of the encoder. 'TDec' and 'RDec' are the decoder modules of the auxiliary decoder. 'CMIMR' is the cross-modal information mutual reinforcement module. 'SIGF' is the semantic-information-guided fusion module.
Figure 3
Architecture of the cross-modal information mutual reinforcement (CMIMR) module. 'Conv 1×1' is the 1×1 convolution. 'SA' is the spatial attention. 'DSConv 3×3' is the depth-separable convolution with the 3×3 convolution kernel.
Figure 4
Architecture of the semantic-information-guided fusion (SIGF) module. 'DSConv 3×3' is the depth-separable convolution with the 3×3 convolution kernel. 'VAB' is the visual attention block. 'Up×2' is the two-times upsample.
Figure 5
PR curves and F-measure curves of the compared methods on the RGB-T datasets.
Figure 6
Visual comparisons with other methods. (a) Ours. (b) ADF. (c) MIDD. (d) MMNet. (e) MIADPD. (f) OSRNet. (g) ECFFNet. (h) PCNet. (i) TAGF. (j) UMINet. (k) APNet.
Figure 7
Visual comparisons with ablation experiments on the effectiveness of the CMIMR module. (a) Ours. (b) w/o CMIMR. (c) w/o PDFE. (d) w/o IMR.
Figure 8
Visual comparisons with ablation experiments on the effectiveness of the SIGF module. (a) Ours. (b) w/o SIGF. (c) w/o SIE. (d) w/o VAB.
Figure 9
Visual comparisons with ablation experiments on the effectiveness of the IoU loss and auxiliary decoder. (a) Ours. (b) w/o IoU. (c) w/o AD.

Abstract

RGB-T salient object detection (SOD) has made significant progress in recent years. However, most existing works are based on heavy models, which are not applicable to mobile devices. Additionally, there is still room for improvement in the design of cross-modal feature fusion and cross-level feature fusion. To address these issues, we propose a lightweight cross-modal information mutual reinforcement network for RGB-T SOD. Our network consists of a lightweight encoder, the cross-modal information mutual reinforcement (CMIMR) module, and the semantic-information-guided fusion (SIGF) module. To reduce the computational cost and the number of parameters, we employ lightweight modules in both the encoder and the decoder. Furthermore, to fuse the complementary information between two-modal features, we design the CMIMR module to enhance the two-modal features. This module effectively refines the two-modal features by absorbing previous-level semantic information and inter-modal complementary information. In addition, to fuse the cross-level features and detect multiscale salient objects, we design the SIGF module, which effectively suppresses the background noise in low-level features and extracts multiscale information. We conduct extensive experiments on three RGB-T datasets, and our method achieves competitive performance compared with 15 state-of-the-art methods.

1. Introduction

Salient object detection (SOD) is a computer vision technique that segments the most-visually interesting objects from an image, mimicking attention mechanisms. It is important to note that SOD differs from object detection tasks that aim to predict object bounding boxes. SOD has been employed as a preprocessing step in many computer vision tasks, such as image fusion [1], perceptual video coding [2], compressed video sensing [3], image quality assessment [4], and so on.
Traditional methods for RGB SOD were proposed first, but they could not achieve optimal performance. With the advent of CNNs [5] and U-Nets [6], deep-learning-based methods became popular in SOD. For example, multiscale information was extracted in PoolNet [7] and MINet [8]. Edge features were generated and supplemented to the object features in EGNet [9] and EMFINet [10]. Later, depth maps were introduced into SOD, which is called RGB-D SOD. In this field, the depth-enhanced module [11] was designed to fuse two-modal features. However, RGB-D datasets still have some shortcomings: some depth maps are inaccurate due to the limitations of the acquisition equipment. Researchers therefore turned to introducing thermal infrared images into SOD, which is called RGB-T SOD.
RGB-T SOD has seen significant progress in recent years. For example, CBAM [12] is employed in [13] to fuse two-modal features. To capture multiscale information, the FAM module is employed in [13], and the SGCU module is designed in CSRNet [14]. Despite these outstanding efforts in RGB-T SOD, there are still some problems that need to be addressed. Most of the existing works are based on heavy models, which are unsuitable for mobile devices. Besides, there is still room for research on effectively integrating the complementary information between two-modal features. Figure 1 shows some examples where PCNet [15] and TAGF [16] cannot present the detection results well. Another problem is how to fuse two-level features and explore multiscale information during the decoding stage.
Based on the aforementioned discussions, we propose a lightweight network for RGB-T SOD. Specifically, we employ the lightweight backbone MobileNet-V2 [17] in the encoder and the depth-separable convolution [18] in the decoder. To address the problem of two-modal feature fusion, we introduce the CMIMR module. We enhance two-modal features by transferring semantic information into them using the previous-level decoded feature. After this enhancement, we mutually reinforce two-modal features by communicating complementary information between them. Additionally, we design the SIGF module to aggregate two-level features and explore multiscale information during the decoding stage. Unlike RFB [11,19] and FAM [7], we employ the visual attention block (VAB) [20] to explore the multiscale information of the fused feature in the decoder.
Our main contributions are summarized as follows:
  • We propose a lightweight cross-modal information mutual reinforcement network for RGB-T salient object detection. Our network comprises a lightweight encoder, the cross-modal information mutual reinforcement (CMIMR) module, and the semantic-information-guided fusion (SIGF) module.
  • To fuse complementary information between two-modal features, we introduce the CMIMR module, which effectively refines the two-modal features.
  • Extensive experiments conducted on three RGB-T datasets demonstrate the effectiveness of our method.

2. Related Works

Salient Object Detection

Numerous works have been proposed for SOD [21,22,23]. Initially, prior knowledge and manually designed features [24] were employed. With the advent of deep learning, CNN-based methods have made significant strides. For instance, many methods have attempted to capture multiscale information in images (RFB [19,25] and FAM [7]). Additionally, many works have focused on refining the edge details of salient objects [9,26,27]. Furthermore, several lightweight methods have been proposed to adapt to mobile devices [28,29]. While these methods have made great progress in RGB SOD, they do not perform as well when the RGB image has cluttered backgrounds, low contrast, and object occlusion.
RGB-D SOD is a technique that uses depth maps to provide complementary information to RGB images. To fuse two-modal features, several methods have been proposed, including the depth-enhanced module [11], selective self-mutual attention [30], the cross-modal depth-weighted combination block [31], the dynamic selective module [32], the cross-modal information exchange module [33], the feature-enhanced module [34], the cross-modal disentanglement module [35], the unified cross dual-attention module [36], and inverted bottleneck cross-modality fusion [37]. Despite the progress made by RGB-D SOD, it performs poorly on low-quality examples, where some depth maps are inaccurate due to the limitations of the acquisition equipment.
In addition to depth maps, thermal infrared images have been employed to provide complementary information to RGB images, which is called RGB-T SOD. Many works have made efforts in this area [38,39]. To fuse two-modal features, several methods have been proposed, including CBAM [12,13], the complementary weighting module [40], the cross-modal multi-stage fusion module [41], the multi-modal interactive attention unit [42], the effective cross-modality fusion module [43], the semantic constraint provider [44], the modality difference reduction module [45], the spatial complementary fusion module [46], and the cross-modal interaction module [15]. To fuse two-level features during the decoding stage, the FAM module [13] and interactive decoders [47] were proposed. Additionally, lightweight networks [14,48] have been proposed to meet the requirements of mobile devices.

3. Methodology

3.1. Architecture Overview

We present the overall architecture of our method in Figure 2, which is a typical encoder–decoder structure. In the encoder part, we adopted the lightweight MobileNet-V2 (E1∼E5) [17] as the backbone to extract the five-level features $F_i^R, F_i^T$ ($i = 1, \ldots, 5$) for the two-modal inputs, respectively. To explore the complementary information between the two-modal features, we designed the cross-modal information mutual reinforcement module to fuse the two-modal features. To detect multiscale objects and fuse the two-level features, we designed the semantic-information-guided fusion module to suppress interfering information and explore multiscale information. Additionally, we employed two auxiliary decoder branches. On the one hand, this guides the two-modal encoders to extract modality-specific information [49] for the two-modal inputs, which helps to make the feature learning process more stable. On the other hand, this provides supplementary information in terms of single-channel saliency features. The decoder modules of the two auxiliary decoder branches are equipped with a simple structure, namely concatenation followed by the $3 \times 3$ depth-separable convolution (DSConv) [18]. Finally, the $1 \times 1$ convolution is applied to the three decoded features, resulting in three single-channel saliency features $F_1^{FdS}$, $F_2^{TdS}$, and $F_2^{RdS}$. After that, the sigmoid activation function is applied to obtain the saliency maps $S^F$, $S^T$, and $S^R$. To fuse the complementary information between the three decoder branches, we summed the three single-channel saliency features and applied the sigmoid function to obtain the saliency map $S^{test}$ during the testing stage. The above processes can be formulated as follows:
$$F_1^{FdS} = Conv_{1\times1}(F_1^{Fd}), \quad F_2^{TdS} = Conv_{1\times1}(F_2^{Td}), \quad F_2^{RdS} = Conv_{1\times1}(F_2^{Rd}),$$
$$S^F = \sigma(F_1^{FdS}), \quad S^T = \sigma(F_2^{TdS}), \quad S^R = \sigma(F_2^{RdS}), \quad S^{test} = \sigma(F_1^{FdS} + F_2^{TdS} + F_2^{RdS}),$$
where $Conv_{1\times1}$ means the $1 \times 1$ convolution and $\sigma$ is the sigmoid function, which maps a single-channel saliency feature to a saliency map. $F_1^{Fd}$, $F_2^{Td}$, and $F_2^{Rd}$ are the output features of the primary decoder and the two auxiliary decoders.
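Below is a minimal PyTorch sketch of this three-branch prediction-and-fusion step. The channel sizes and class name are assumptions for illustration, not the authors' released implementation:

```python
import torch
import torch.nn as nn

class SaliencyHeads(nn.Module):
    """Maps the three decoded features to single-channel saliency features and
    fuses them at test time by summation followed by a sigmoid (a sketch)."""
    def __init__(self, c_fused, c_thermal, c_rgb):
        super().__init__()
        self.head_f = nn.Conv2d(c_fused, 1, kernel_size=1)    # produces F_1^{FdS}
        self.head_t = nn.Conv2d(c_thermal, 1, kernel_size=1)  # produces F_2^{TdS}
        self.head_r = nn.Conv2d(c_rgb, 1, kernel_size=1)      # produces F_2^{RdS}

    def forward(self, f_fd, f_td, f_rd):
        s_f_feat = self.head_f(f_fd)
        s_t_feat = self.head_t(f_td)
        s_r_feat = self.head_r(f_rd)
        # Per-branch saliency maps (supervised individually during training)
        s_f, s_t, s_r = map(torch.sigmoid, (s_f_feat, s_t_feat, s_r_feat))
        # Test-time fusion: sum the single-channel features, then apply the sigmoid
        s_test = torch.sigmoid(s_f_feat + s_t_feat + s_r_feat)
        return s_f, s_t, s_r, s_test
```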

3.2. Cross-Modal Information Mutual Reinforcement Module

Fusing complementary information between two-modal features is an essential question for RGB-T SOD. Two-modal features often contain noisy and inconsistent information, which can hinder the learning process of the saliency features. To address these issues, we designed the CMIMR module to suppress noisy information in the two-modal features and mutually supply valuable information.
The structure of the CMIMR module is illustrated in Figure 3. Specifically, we used the previous-level decoded feature, which contains accurate semantic and location information, to enhance the two-modal features by the concatenation–convolution operation, respectively. This guides the two-modal features to concentrate more on valuable information and alleviate background noise. However, this enhancement operation may weaken the beneficial information in the two-modal features. To address this issue, we added residual connections to the two-modal enhanced features. This process can be described as follows:
$$F_i^{Tle} = F_i^T, \quad F_i^{Rle} = F_i^R, \qquad i = 5,$$
$$F_i^{Tle} = F_i^T \oplus Conv_{1\times1}\big(\big[F_i^T, Up_{\times2}(F_{i+1}^{Fd})\big]\big), \quad F_i^{Rle} = F_i^R \oplus Conv_{1\times1}\big(\big[F_i^R, Up_{\times2}(F_{i+1}^{Fd})\big]\big), \qquad i = 1, \ldots, 4,$$
where $\oplus$ means elementwise summation and $Conv_{1\times1}$ is the $1 \times 1$ convolution block consisting of a $1 \times 1$ convolution layer and a batch normalization layer. $[\cdot,\cdot]$ denotes concatenating two features along the channel dimension. $Up_{\times2}$ means 2-times bilinear upsampling. $F_i^T$ and $F_i^R$ are the encoder features of the thermal image and the RGB image at the $i$th level. $F_i^{Tle}$ and $F_i^{Rle}$ are the previous-level-information-enhanced two-modal features. $F_{i+1}^{Fd}$ is the decoded feature at the $(i+1)$th level. The semantic and location information from the previous-level decoded feature helps suppress noisy information in the two-modal features, which facilitates the exploration of complementary information in the subsequent process.
After the aforementioned enhancement, we further exchanged the complementary information between the two-modal features. Since two-modal features contain both complementary and misleading information, directly concatenating them together can harm the appropriate fusion. Taking the RGB feature as an example, we intended to utilize the thermal feature to enhance it. Considering that spatial attention [50] can adaptively highlight regions of interest and filter the noisy information, we utilized the spatial attention map of the RGB feature to filter misleading information in the thermal features. This is because we wanted to preserve valuable information in the thermal feature, which is complementary to the RGB feature. After that, we concatenated the spatial-attention-filtered thermal feature with the RGB feature to supplement beneficial information into the RGB feature. Through this operation, the complementary information in the thermal feature can adaptively flow into the RGB feature, thereby obtaining a cross-modal information-enhanced RGB feature. The enhancement process for the thermal feature is similar to that of the RGB feature. Finally, we combined the two-modal enhanced features by elementwise summation to aggregate them:
$$F_i^{Tme} = DSConv_{3\times3}\big(\big[F_i^{Tle}, SA(F_i^{Tle}) \odot F_i^{Rle}\big]\big), \quad F_i^{Rme} = DSConv_{3\times3}\big(\big[F_i^{Rle}, SA(F_i^{Rle}) \odot F_i^{Tle}\big]\big), \quad F_i^F = DSConv_{3\times3}\big(F_i^{Tme} \oplus F_i^{Rme}\big), \qquad i = 1, \ldots, 5,$$
where $DSConv_{3\times3}$ is the $3 \times 3$ DSConv layer [18], $\odot$ represents the elementwise multiplication operation, and $SA$ denotes the spatial attention [50]. $F_i^{Tme}$ and $F_i^{Rme}$ are the cross-modal-enhanced two-modal features. $F_i^F$ is the two-modal fused feature. In summary, the CMIMR module can effectively suppress background noise in the two-modal features under the guidance of previous-level semantic information. Furthermore, it can supplement valuable information to each modal feature, which helps to effectively fuse the two-modal features.
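To make the dataflow concrete, the following PyTorch sketch implements the CMIMR computation described by the two equations above. The channel sizes, the spatial-attention design (here a 7×7 convolution over channel-wise average/max maps), and the assumption that the previous-level decoded feature is at half resolution are ours, not taken from the released code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttention(nn.Module):
    """Spatial attention: a 7x7 conv over channel-wise avg/max maps (an assumption)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)
        max_map = x.max(dim=1, keepdim=True).values
        return torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))

def dsconv3x3(in_ch, out_ch):
    # Depth-separable 3x3 convolution block: depthwise conv + pointwise conv + BN + ReLU
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch),
        nn.Conv2d(in_ch, out_ch, 1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class CMIMR(nn.Module):
    """Sketch of the CMIMR module: previous-level enhancement + mutual reinforcement."""
    def __init__(self, ch, dec_ch):
        super().__init__()
        self.enh_t = nn.Sequential(nn.Conv2d(ch + dec_ch, ch, 1), nn.BatchNorm2d(ch))
        self.enh_r = nn.Sequential(nn.Conv2d(ch + dec_ch, ch, 1), nn.BatchNorm2d(ch))
        self.sa_t, self.sa_r = SpatialAttention(), SpatialAttention()
        self.fuse_t = dsconv3x3(2 * ch, ch)
        self.fuse_r = dsconv3x3(2 * ch, ch)
        self.fuse = dsconv3x3(ch, ch)

    def forward(self, f_t, f_r, f_dec=None):
        if f_dec is not None:  # levels 1..4: previous-level decoded feature enhancement
            f_dec = F.interpolate(f_dec, scale_factor=2, mode='bilinear', align_corners=False)
            f_t = f_t + self.enh_t(torch.cat([f_t, f_dec], dim=1))
            f_r = f_r + self.enh_r(torch.cat([f_r, f_dec], dim=1))
        # Mutual reinforcement: each modality absorbs the other, filtered by its own SA map
        f_t_me = self.fuse_t(torch.cat([f_t, self.sa_t(f_t) * f_r], dim=1))
        f_r_me = self.fuse_r(torch.cat([f_r, self.sa_r(f_r) * f_t], dim=1))
        return self.fuse(f_t_me + f_r_me)  # F_i^F
```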

3.3. Semantic-Information-Guided Fusion Module

How to design the two-level feature aggregation module during the decoding stage is a crucial question for SOD. It determines whether we can recover the elaborate details of salient objects. Since low-level features contain much noisy information, directly concatenating them will inevitably introduce disturbing information into the fused features. To rectify the noisy information in the low-level features, we transmitted the semantic information in the high-level feature into them. Besides, multiscale information is vital in SOD tasks. Salient objects in different scenes are of various sizes and shapes, but the ordinary $3 \times 3$ convolution cannot accurately detect these salient objects. Inspired by the great success of multiscale information-capture modules (e.g., RFB [11,19] and FAM [7]) in SOD, we employed the visual attention block (VAB) [20] to capture the multiscale features. The VAB was originally proposed as the building block of a lightweight feature-extraction backbone for many visual tasks.
The SIGF module structure is shown in Figure 4. Specifically, to suppress the background noisy information in the low-level feature, we utilized the high-level feature to refine the feature representation of the low-level feature. We concatenated the high-level feature into the low-level feature to enhance it. In the feature-enhancement process, valuable information in the low-level features may be diluted, so we introduced residual connections to preserve it. This process can be expressed as follows:
$$F_i^{Fe} = F_i^F \oplus DSConv_{3\times3}\big(\big[F_i^F, Up_{\times2}(F_{i+1}^{Fd})\big]\big), \qquad i = 1, \ldots, 4,$$
where $F_i^{Fe}$ is the semantic-information-enhanced feature, $F_{i+1}^{Fd}$ is the decoded feature at the $(i+1)$th level, and $F_i^F$ is the two-modal fused feature. Then, to enable our method to detect salient objects of various sizes and shapes, we used the VAB to extract the multiscale information contained in the fused features:
$$F_i^{Fd} = \begin{cases} VAB\big(F_i^F\big), & i = 5, \\ VAB\big(DSConv_{3\times3}\big(\big[F_i^{Fe}, Up_{\times2}(F_{i+1}^{Fd})\big]\big)\big), & i = 1, \ldots, 4, \end{cases}$$
where $VAB$ is the visual attention block [20] and $F_i^{Fd}$ is the decoded feature at the $i$th level. The VAB consists of two parts: the large kernel attention (LKA) and the feed-forward network (FFN) [51]. In the large kernel attention, a depth-separable convolution, a depth-separable dilation convolution with dilation $d$, and a $1 \times 1$ convolution are successively stacked to capture multiscale information:
$$VAB(F) = FFN\big(LKA(F)\big), \qquad LKA(F) = Conv_{1\times1}\big(DSConv_d\big(DSConv(F)\big)\big) \odot F,$$
where $DSConv_d$ is the depth-separable convolution with dilation $d$ and $F$ stands for the feature being processed. In summary, our module can rectify noisy information in the low-level feature under the guidance of high-level accurate semantic information. Meanwhile, the VAB successfully extracts multiscale information, which is beneficial for detecting multiscale salient objects.
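As an illustration, here is a compact PyTorch sketch of the VAB (LKA + FFN) and of the SIGF decoding step for levels 1 to 4. The LKA kernel sizes follow the public VAN design (5×5 depthwise, 7×7 depthwise with dilation 3, then 1×1), and the FFN expansion ratio is an assumption; dsconv3x3 refers to the helper defined in the CMIMR sketch above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LKA(nn.Module):
    """Large kernel attention: depthwise conv -> dilated depthwise conv -> 1x1 conv,
    whose output multiplies the input feature as an attention map."""
    def __init__(self, ch):
        super().__init__()
        self.dw = nn.Conv2d(ch, ch, 5, padding=2, groups=ch)
        self.dw_dilated = nn.Conv2d(ch, ch, 7, padding=9, dilation=3, groups=ch)
        self.pw = nn.Conv2d(ch, ch, 1)

    def forward(self, x):
        return self.pw(self.dw_dilated(self.dw(x))) * x

class VAB(nn.Module):
    """Visual attention block: LKA followed by a small feed-forward network."""
    def __init__(self, ch, expansion=4):
        super().__init__()
        self.lka = LKA(ch)
        self.ffn = nn.Sequential(nn.Conv2d(ch, expansion * ch, 1), nn.GELU(),
                                 nn.Conv2d(expansion * ch, ch, 1))

    def forward(self, x):
        return self.ffn(self.lka(x))

class SIGF(nn.Module):
    """Sketch of the SIGF module: semantic-information enhancement of the fused
    feature by the previous-level decoded feature, then VAB-based multiscale decoding."""
    def __init__(self, ch, dec_ch):
        super().__init__()
        self.enhance = dsconv3x3(ch + dec_ch, ch)   # helper from the CMIMR sketch
        self.fuse = dsconv3x3(ch + dec_ch, ch)
        self.vab = VAB(ch)

    def forward(self, f_fused, f_dec):
        f_dec = F.interpolate(f_dec, scale_factor=2, mode='bilinear', align_corners=False)
        f_enh = f_fused + self.enhance(torch.cat([f_fused, f_dec], dim=1))  # F_i^{Fe}
        return self.vab(self.fuse(torch.cat([f_enh, f_dec], dim=1)))        # F_i^{Fd}
```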

3.4. Loss Function

The deep supervision strategy [52] is adopted in our method. Specifically, the saliency predictions of the deep features $F_i^{Fd}$ ($i = 1, \ldots, 5$) are supervised, as shown in Figure 2. Additionally, the saliency predictions of the two auxiliary decoders' output features $F_2^{Td}$ and $F_2^{Rd}$ are also supervised. The BCE loss [53] and IoU loss [54] are employed to calculate the losses between the saliency predictions and the GT:
$$\mathcal{L}_{all} = \sum_{i=1}^{5} \frac{1}{2^{\,i-1}}\, loss\big(S_i^F, G\big) + loss\big(S^T, G\big) + loss\big(S^R, G\big), \qquad loss = \ell_{bce} + \ell_{IoU},$$
where $S_i^F$, $S^T$, and $S^R$ mean the saliency predictions of the deep features $F_i^{Fd}$, $F_2^{Td}$, and $F_2^{Rd}$, respectively. $G$ means the ground truth. $\ell_{bce}$ and $\ell_{IoU}$ mean the BCE loss and IoU loss, respectively.
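The following PyTorch sketch shows one plausible realization of this hybrid loss and the level-wise weighting; the soft-IoU formulation, the epsilon constants, and the upsampling of side outputs are assumptions rather than the authors' exact implementation:

```python
import torch
import torch.nn.functional as F

def iou_loss(pred_logits, gt):
    """Soft IoU loss between the predicted saliency (after sigmoid) and the GT mask."""
    pred = torch.sigmoid(pred_logits)
    inter = (pred * gt).sum(dim=(2, 3))
    union = (pred + gt - pred * gt).sum(dim=(2, 3))
    return (1.0 - inter / (union + 1e-6)).mean()

def hybrid_loss(pred_logits, gt):
    """loss = BCE + IoU, computed on predictions resized to the GT resolution."""
    if pred_logits.shape[-2:] != gt.shape[-2:]:
        pred_logits = F.interpolate(pred_logits, size=gt.shape[-2:],
                                    mode='bilinear', align_corners=False)
    return F.binary_cross_entropy_with_logits(pred_logits, gt) + iou_loss(pred_logits, gt)

def total_loss(side_logits, s_t_logits, s_r_logits, gt):
    """Deep-supervision total: sum_i (1/2^{i-1}) * loss(S_i^F, G) + loss(S^T, G) + loss(S^R, G).
    side_logits is assumed to be ordered from level 1 (finest) to level 5."""
    loss = sum(hybrid_loss(p, gt) / (2 ** i) for i, p in enumerate(side_logits))
    return loss + hybrid_loss(s_t_logits, gt) + hybrid_loss(s_r_logits, gt)
```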

4. Experiments

4.1. Experiment Settings

4.1.1. Datasets

There are three RGB-T SOD datasets that have been widely employed in existing works: VT821 [55], VT1000 [56], and VT5000 [13]. VT821 consists of 821 manually registered RGB-T image pairs. VT1000 is composed of 1000 well-aligned RGB-T image pairs. VT5000 has 5000 RGB-T image pairs, containing complex scenes and diverse objects. Following the previous works’ setting [47], 2500 samples from VT5000 were selected as the training dataset. The other 2500 samples from VT5000 and all samples from VT821 and VT1000 served as the testing datasets. To avoid overfitting, the training dataset was augmented by random flipping and random rotation [11].
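A minimal sketch of the paired augmentation, assuming the flip and rotation must be applied identically to the RGB image, the thermal image, and the GT mask; the rotation range (max_angle) is a hypothetical choice, not stated in the paper:

```python
import random
from PIL import Image

def augment_pair(rgb, thermal, gt, max_angle=15):
    """Random horizontal flip and random rotation applied identically to the
    RGB image, thermal image, and GT mask (all PIL images)."""
    if random.random() < 0.5:
        rgb = rgb.transpose(Image.FLIP_LEFT_RIGHT)
        thermal = thermal.transpose(Image.FLIP_LEFT_RIGHT)
        gt = gt.transpose(Image.FLIP_LEFT_RIGHT)
    angle = random.uniform(-max_angle, max_angle)
    rgb = rgb.rotate(angle, resample=Image.BILINEAR)
    thermal = thermal.rotate(angle, resample=Image.BILINEAR)
    gt = gt.rotate(angle, resample=Image.NEAREST)  # nearest keeps the mask binary
    return rgb, thermal, gt
```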

4.1.2. Implementation Details

The model was trained on a GeForce RTX 2080 Ti (11 GB memory). The PyTorch framework was employed in the code implementation. The encoders were initialized with the pre-trained MobileNet-V2 [17], while the other parameters were initialized with the Kaiming uniform distribution [57]. The input images were resized to 224 × 224 for both the training and testing stages. The number of training epochs and the batch size were set to 120 and 20, respectively. The Adam optimizer was employed to reduce the loss of our method. The learning rate was set to 1 × 10⁻⁴ and decayed to 1 × 10⁻⁵ after 90 epochs.
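Read literally, these settings correspond to a training loop along the following lines; build_model and train_loader are placeholders, and total_loss is the hybrid-loss sketch from Section 3.4:

```python
import torch

# Assumed training configuration matching the reported settings.
model = build_model().cuda()          # hypothetical constructor for the network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[90], gamma=0.1)

for epoch in range(120):
    for rgb, thermal, gt in train_loader:   # batch size 20, inputs resized to 224x224
        rgb, thermal, gt = rgb.cuda(), thermal.cuda(), gt.cuda()
        side_logits, s_t_logits, s_r_logits = model(rgb, thermal)
        loss = total_loss(side_logits, s_t_logits, s_r_logits, gt)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()   # learning rate: 1e-4 -> 1e-5 after 90 epochs
```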

4.2. Evaluation Metrics

To compare the performance of our method with that of other methods, four numerical evaluation metrics were employed: the mean absolute error ($M$), F-measure ($F_\beta$) [58], E-measure ($E_\xi$) [59], and structure-measure ($S_\alpha$) [60]. Besides, the PR curves and F-measure curves are plotted to show the evaluation results.

4.2.1. M

The mean absolute error ($M$) calculates the mean absolute error between the prediction and the GT:
$$M = \frac{1}{W \times H} \sum_{i=1}^{W} \sum_{j=1}^{H} \big|S(i,j) - G(i,j)\big|,$$
where $G(i,j)$ and $S(i,j)$ denote the ground truth and the saliency map, respectively.
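For reference, a direct NumPy implementation of this metric, assuming both maps are already scaled to [0, 1]:

```python
import numpy as np

def mae(saliency_map: np.ndarray, gt: np.ndarray) -> float:
    """Mean absolute error between a saliency map and its GT, both in [0, 1]."""
    return float(np.abs(saliency_map.astype(np.float64) - gt.astype(np.float64)).mean())
```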

4.2.2. $F_\beta$

The F-measure ($F_\beta$) is the weighted harmonic mean of the recall and precision, which is formulated as
$$F_\beta = \frac{(1 + \beta^2) \cdot Precision \cdot Recall}{\beta^2 \cdot Precision + Recall},$$
where $\beta^2$ was set to 0.3, referring to [58].
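A sketch of this computation under an adaptive threshold (twice the mean saliency value), which is a common convention in SOD evaluation; the paper's exact thresholding protocol is not stated here, so treat the code as illustrative:

```python
import numpy as np

def f_measure(saliency_map, gt, beta2=0.3):
    """F_beta at an adaptive threshold; gt is assumed to be binary in {0, 1}."""
    s = saliency_map.astype(np.float64)
    s = (s - s.min()) / (s.max() - s.min() + 1e-8)   # normalize to [0, 1]
    g = gt > 0.5
    pred = s >= min(2 * s.mean(), 1.0)               # adaptive threshold
    tp = np.logical_and(pred, g).sum()
    precision = tp / (pred.sum() + 1e-8)
    recall = tp / (g.sum() + 1e-8)
    return (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)
```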

4.2.3. $E_\xi$

The E-measure ($E_\xi$) evaluates the global and local similarities between the ground truth and predictions:
$$E_\xi = \frac{1}{W \times H} \sum_{i=1}^{W} \sum_{j=1}^{H} \varphi\big(S(i,j), G(i,j)\big),$$
where $\varphi$ is the enhanced alignment matrix.
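As an illustration of how $\varphi$ is typically computed in [59] (bias matrices are the maps minus their means, and the alignment term is squared and rescaled), the following NumPy sketch omits the special cases for all-zero or all-one maps handled in the original implementation:

```python
import numpy as np

def e_measure(saliency_map, gt, threshold=0.5):
    """Sketch of the E-measure: binarize the prediction, compute bias matrices,
    and average the enhanced alignment term (general case only)."""
    pred = (saliency_map >= threshold).astype(np.float64)
    g = (gt > 0.5).astype(np.float64)
    phi_pred = pred - pred.mean()
    phi_gt = g - g.mean()
    align = 2 * phi_pred * phi_gt / (phi_pred ** 2 + phi_gt ** 2 + 1e-8)
    enhanced = ((align + 1) ** 2) / 4
    return float(enhanced.mean())
```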

4.2.4. $S_\alpha$

The structure-measure ($S_\alpha$) evaluates the structural similarities of salient objects between the ground truth and predictions:
$$S_\alpha = \alpha S_o + (1 - \alpha) S_r,$$
where $S_r$ and $S_o$ mean the region-aware and object-aware structural similarity, respectively, and $\alpha$ was set to 0.5, referring to [60].

4.3. Comparisons with the SOTA Methods

To show the effectiveness of our method, we compared it with 15 SOTA methods: the RGB SOD methods BASNet [27], EGNet [9], and CPD [19] and the RGB-T SOD methods ADF [13], MIDD [47], MMNet [41], MIADPD [42], OSRNet [61], ECFFNet [43], PCNet [15], TAGF [16], UMINet [62], MGAI [63], APNet [64], CGFNet [65], CSRNet [14], and LSNet [48]. For a fair comparison, the saliency maps of all compared methods were either directly provided by the authors or generated with the official public code.

4.3.1. Quantitative Comparison

We compared the performance of the heavy-model-based methods in Table 1 and that of the lightweight methods in Table 2. The PR and F-measure curves of the compared methods on the three RGB-T datasets are plotted in Figure 5. Our method outperformed most methods in terms of the four metrics, except for $S_\alpha$, on which it was slightly inferior to the other methods. Compared to the heavy-model-based methods, as shown in Table 1, our method improved $M$, $F_\beta$, and $E_\xi$ by 6.9%, 2.0%, and 1.1% on VT5000. Although our method was not as good as the other methods in terms of $S_\alpha$, it requires only 6.1M parameters and 1.5G FLOPs and can be easily applied to mobile devices. The inference speed of our method was mediocre on a professional GPU (GeForce RTX 2080 Ti, Santa Clara, CA, USA) at 34.9 FPS. However, given that mobile devices only have access to the CPU, our method outperformed the other methods with 6.5 FPS (AMD Ryzen 7 5800H, Santa Clara, CA, USA). Besides, we compare our method with the existing lightweight methods in Table 2. Our method outperformed the other methods on most metrics, except for $S_\alpha$ on VT1000 and VT821. Our method improved $M$, $F_\beta$, and $E_\xi$ by 12.5%, 2.3%, and 1.2% on VT5000. Among the lightweight methods, the FLOPs and FPS of our method were not as good as those of LSNet, but our method performed better. In addition, we plot the PR and F-measure curves in Figure 5 to visually compare the performance of all methods. We can see that the precision of our method was higher than that of the other methods on VT5000 and VT821 when the recall was not very close to 1. The F-measure curves consider the trade-off between precision and recall. We can see that our method obtained better F-measure scores on VT5000 and VT821. We also evaluate the IoU and Dice scores of our method in Table 3, following the practice of most image segmentation tasks. We can see that our method performed better on VT1000 than on VT5000 and VT821. Additionally, our method outperformed the compared method LSNet on all three datasets.
To demonstrate the significance of the performance improvement of our method, the t-test was performed. We retrained our method and obtained six sets of experiment results, shown in Table 4. Concretely, assuming the metric $X \sim N(\mu, \sigma^2)$, the test statistic was $t = \frac{\bar{X} - \mu_0}{S / \sqrt{n}}$, where $S^2$ is an unbiased estimate of $\sigma^2$ and $\frac{\bar{X} - \mu_0}{S / \sqrt{n}} \sim t(n-1)$, the Student's t-distribution with $n-1$ degrees of freedom. Therefore, the t-test was used in our hypothesis test. For the evaluation metric $M$, the left-sided test was performed, i.e., the $H_0$ hypothesis was that the $M$ of our method was greater than that of the compared method. For the other five metrics $F_\beta$, $S_\alpha$, $E_\xi$, $IoU$, and $Dice$, the right-sided test was performed, i.e., the $H_0$ hypothesis was that the corresponding results of our method were less than those of the compared method. The p-value is reported in our t-test. Three significance levels $\alpha$ were used in our t-test, i.e., 0.01, 0.05, and 0.1. Generally speaking, if the p-value ≤ 0.01, the test is highly significant. If 0.01 < p-value ≤ 0.05, the test is significant. If 0.05 < p-value ≤ 0.1, the test is not significant. If the p-value > 0.1, then there is no reason to reject the $H_0$ hypothesis. As shown in Table 5, the p-value of our method was less than 0.01 for $M$, $F_\beta$, and $E_\xi$ on the three datasets, indicating that the t-test was highly significant.
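For illustration, a one-sample t-test of this kind can be run with SciPy as follows; the listed scores are placeholders, not values from Tables 4 and 5:

```python
from scipy import stats

# Hypothetical repeated-run scores for one metric (e.g., F_beta of our method over
# six retrainings) and the single reported score of a compared method (mu_0).
our_runs = [0.856, 0.854, 0.857, 0.855, 0.853, 0.856]   # placeholder values
mu_0 = 0.848                                            # placeholder compared-method score

# Right-sided one-sample t-test: H0 is that our mean F_beta is less than or equal to mu_0.
t_stat, p_two_sided = stats.ttest_1samp(our_runs, popmean=mu_0)
p_right = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2
print(f"t = {t_stat:.3f}, one-sided p = {p_right:.4f}")
```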

4.3.2. Qualitative Comparison

To demonstrate the effectiveness of our method, we also provide visual comparisons with other methods in Figure 6. In this figure, the challenging scenes include small objects (1st and 2nd rows), multiple objects (3rd and 4th rows), a misleading RGB image (5th row), and misleading thermal images (6th, 7th, and 8th rows). As seen in Figure 6, our method can detect salient objects better than the other methods. For example, in the first and second rows, our method can accurately detect small salient objects, while other methods like MMNet and MIADPD failed in this case. In the third and fourth rows, our method can detect multiple objects in the scene, but the other methods performed poorly. In the fifth row, our method can detect the salient object effectively despite the low contrast in the RGB image, while the other methods were misled by the noisy information in the RGB image. In the sixth and seventh rows, the salient objects have apparent contrast in the RGB image, but are similar to other background objects in the thermal image. The thermal images provide misleading information, which our method handles well. In summary, our method can accurately overcome the challenges in these scenarios due to the better fusion of the complementary information between the two-modal features and the multiscale information extraction.

4.4. Ablation Study

4.4.1. Effectiveness of Cross-Modal Information Mutual Reinforcement Module

To demonstrate the effectiveness of the CMIMR module, we performed several ablation experiments, reported in Table 6. First, we removed the CMIMR module, i.e., the two-modal features were directly concatenated and then processed by the $3 \times 3$ DSConv, referred to as w/o CMIMR. Compared with this variant, our method improved $M$ and $F_\beta$ by 5.0% and 1.7% on VT5000, respectively. This suggests that our method can effectively fuse complementary information between two-modal features by enhancing them with the guidance information of the previous level and the other modality. To demonstrate that the performance improvement of each module is significant, we performed the t-test in Table 7. As shown in Table 7, the p-value of our method was less than 0.01 for all four metrics compared to the variant w/o CMIMR, so the test was highly significant. To demonstrate that the CMIMR module outperforms other modules that play the same role in existing methods, we replaced it with the two-modal feature fusion module in ADF [13], abbreviated as w ADF-TMF. Compared to this variant, our method improved $M$ and $F_\beta$ by 2.4% and 0.8% on VT5000, respectively. Compared to the variant w ADF-TMF, the p-value of our method was less than 0.01 for $F_\beta$ and $S_\alpha$ on VT5000, so the test was highly significant. This suggests that the design of the CMIMR module is sound.
Second, we removed the previous-level decoded feature enhancement, which is abbreviated as w/o PDFE, i.e., the two-modal features were not enhanced by the previous-level decoded feature, but were directly fed into the cross-modal information mutual reinforcement component of the CMIMR module. Compared to this variant, our method improved $M$ and $F_\beta$ by 2.1% and 0.8% on VT5000, respectively. Compared to the variant w/o PDFE, the p-value of our method was less than 0.01 for $F_\beta$, $S_\alpha$, and $E_\xi$ on VT5000; therefore, the test was highly significant. This shows that the PDFE component is conducive to suppressing noisy information in the two-modal features. Finally, we removed the cross-modal information mutual reinforcement component, which is abbreviated as w/o IMR, i.e., after the PDFE component, the two-modal features were fused by concatenation followed by the $3 \times 3$ DSConv. Compared to this variant, our method improved $M$ and $F_\beta$ by 3.0% and 0.8% on VT5000, respectively. Compared to the variant w/o IMR, the p-value of our method was less than 0.01 for $F_\beta$, $S_\alpha$, and $E_\xi$ on VT5000, so the test was highly significant. This suggests that the IMR component helps the two modalities transfer complementary information to each other and suppresses the distracting information in each modality. We also show the saliency maps of the ablation experiments in Figure 7. In the first row, the holly is conspicuous in the RGB image, and the ablation variants mistook it for a salient object. In the second row, the potato in the thermal image is similar to the salient objects, and the ablation variants could not distinguish it accurately. However, with the CMIMR module, our method can eliminate this misleading information. In conclusion, the CMIMR module can effectively fuse the complementary information between two-modal features and mitigate the adverse effects of distracting information.

4.4.2. Effectiveness of Semantic-Information-Guided Fusion Module

To demonstrate the effectiveness of the semantic-information-guided fusion module, we conducted three ablation experiments. The results are shown in Table 6. First, we removed the SIGF module from our method, abbreviated as w/o SIGF, i.e., the two-level features were directly concatenated, followed by the $3 \times 3$ DSConv. Compared to this variant, our method improved $M$ and $F_\beta$ by 3.9% and 1.2% on VT5000, respectively. This demonstrates that the SIGF module is helpful in suppressing interfering information and exploring multiscale information. To demonstrate that the performance improvement of the SIGF module is significant, we performed the t-test in Table 7. Compared to the variant w/o SIGF, the p-value of our method was less than 0.01 for the four metrics on VT5000, so the test was highly significant; the exception was $S_\alpha$ on VT821, whose p-value was less than 0.05, which was significant. To demonstrate that the SIGF module outperforms other modules that play the same role in existing methods, we replaced it with the decoder module in ADF [13], abbreviated as w ADF-Decoder. Compared to this variant, our method improved $M$ and $F_\beta$ by 2.4% and 1.0% on VT5000, respectively. Compared to the variant w ADF-Decoder, the p-value of our method was less than 0.01 for $F_\beta$ on VT5000, so the test was highly significant. This suggests that the design of the SIGF module is sound.
Second, we removed the previous-level semantic information enhancement in the SIGF module, which is abbreviated as w/o SIE, i.e., the semantic information enhancement was removed, and the two-level features were directly concatenated in the SIGF module. Compared with this variant, our method improved $M$ and $F_\beta$ by 1.8% and 0.7% on VT5000, respectively. This demonstrates that the SIE component helps to suppress interfering information. Compared to the variant w/o SIE, the p-value of our method was less than 0.05 for $F_\beta$, $S_\alpha$, and $E_\xi$ on VT5000, so the test was significant. Next, we removed the VAB component in the SIGF module, which is abbreviated as w/o VAB, i.e., the VAB component was removed from the SIGF module, and the other components were retained. Compared to this variant, our method improved $M$ and $F_\beta$ by 2.7% and 0.8% on VT5000, respectively. This shows that the VAB is capable of capturing the multiscale information of salient objects. Compared to the variant w/o VAB, the p-value of our method was less than 0.01 for $F_\beta$ and $S_\alpha$ on VT5000, so the test was highly significant. Besides, we also replaced the VAB in the SIGF module with the RFB and the FAM, abbreviated as w SIGF-RFB and w SIGF-FAM, respectively. Compared to the RFB variant, our method improved $M$ and $F_\beta$ by 2.1% and 0.6% on VT5000, respectively. Compared to the variant w SIGF-RFB, the p-value of our method was less than 0.05 for $F_\beta$ and $E_\xi$ on VT5000, so the test was significant. Compared to the FAM variant, our method improved $M$ and $F_\beta$ by 2.1% and 0.6% on VT5000, respectively. These results indicate that the VAB slightly outperforms the RFB and FAM in capturing multiscale context information. We also show the visual comparisons of the ablation experiments in Figure 8. In the first row, the variants are disturbed by the tire. In the second row, the other variants are unable to detect small objects. With the SIGF module, our method effectively addresses these challenges. In summary, the SIGF module can effectively suppress interfering information and capture multiscale information.

4.4.3. Effectiveness of Hybrid Loss and Auxiliary Decoder

To demonstrate the effectiveness of the hybrid loss and auxiliary decoder, we conducted two ablation experiments. The results are presented in Table 6. First, we removed the IoU loss, which is abbreviated as w/o IoU, i.e., only the BCE loss was employed in training our model. Compared to this variant, our method improved $M$ and $F_\beta$ by 3.0% and 1.4% on VT5000, respectively. Compared to the variant w/o IoU, the p-value of our method was less than 0.01 for $F_\beta$ and $E_\xi$ on VT5000, so the test was highly significant. This demonstrates that the IoU loss is conducive to boosting the performance from the perspective of integral consistency. As shown in Figure 9b, the variant w/o IoU is susceptible to background noise. To demonstrate the effectiveness of summing the three single-channel saliency features, we employed three learnable parameters to weight them and then summed the weighted features, abbreviated as w LPW. Compared to this variant, our method improved $M$ and $F_\beta$ by 4.2% and 1.8% on VT5000, respectively. Compared to the variant w LPW, the p-value of our method was less than 0.01 for $M$, $F_\beta$, and $E_\xi$ on VT5000, so the test was highly significant. However, our method fell behind in terms of $S_\alpha$, i.e., the learnable parameters can improve $S_\alpha$, but this variant did not perform as well as our method on the other metrics. Besides, we also conducted an experiment on the summation of the three saliency maps, abbreviated as $S^F + S^R + S^T$. The results were even worse than those of only employing $S^F$. Compared to this variant, our method improved $M$ and $F_\beta$ by 20.1% and 10.6% on VT5000, respectively. Compared to the variant $S^F + S^R + S^T$, the p-value of our method was less than 0.01 for the four metrics on VT5000, so the test was highly significant. This suggests that summing the three saliency maps can have a detrimental effect. In Table 6, we also report the evaluation results of the three saliency maps, abbreviated as $S^F$, $S^R$, and $S^T$, respectively. Note that we wished to evaluate the contribution of the three saliency maps ($S^F$, $S^R$, and $S^T$) in the same setup as our full method, and therefore, the network parameters remained unchanged. The primary decoder saliency map $S^F$ was much better than the two auxiliary decoder saliency maps $S^R$ and $S^T$. Compared to $S^F$, our method improved $M$ and $F_\beta$ by 1.8% and 0.8% on VT5000, respectively. This suggests that summing the three single-channel saliency features can also provide beneficial information for $S^F$. Unfortunately, however, this strategy had an adverse effect on $S_\alpha$, reducing it by 0.6% on VT5000.
We also conducted experiments employing only one modality as the input, abbreviated as RGB and T. That is, the two auxiliary decoders were removed, the CMIMR module was removed, and no two-modal feature fusion was required since only one modality was used as the input. We input the RGB image and the thermal image into the modified network separately. Then, the SIGF module was employed to decode the two-level features from top to bottom. Employing only the RGB image as the input was better than employing only the T image, but our method can greatly improve the results. Compared to the variant RGB, our method improved $M$ and $F_\beta$ by 23.4% and 4.4% on VT5000, respectively. Compared to the variant RGB, the p-value of our method was less than 0.01 for the four metrics on VT5000, so the test was highly significant.
Besides, to demonstrate the necessity of the two auxiliary decoders, we removed them, which is abbreviated as w/o AD, i.e., only the primary decoder was retained in our modified model. Compared to this variant, our method improved $M$ and $F_\beta$ by 10.8% and 2.0% on VT5000, respectively. Compared to the variant w/o AD, the p-value of our method was less than 0.01 for the four metrics on VT5000, so the test was highly significant. This demonstrates that the two auxiliary decoders can guide the two-modal encoders to extract modality-specific information and supplement valuable information at the single-channel saliency feature level. Unfortunately, the AD module did not perform well in all cases, but considering that it boosted most metrics, its failure cases in $S_\alpha$ are acceptable. Note that since the network structure was modified in these three cases (w/o AD, RGB, and T), we needed to retrain the network to obtain the saliency maps, which is a different experimental setup from the ablation experiments $S^F$, $S^R$, and $S^T$. As shown in Figure 9c, the variant w/o AD failed to guide the two encoders to extract beneficial information. In contrast, our entire model performed well in these cases.

4.5. Scalability on RGB-D Datasets

To demonstrate the scalability of our method, we retrained it on the RGB-D datasets. Following the settings in [66], we employed the 1485 images from NJU2K [67] and 700 images from NLPR [68] as the training datasets. The remaining parts of NJU2K and NLPR and all images of SIP [66] and STERE1000 [69] were taken as the testing datasets. Note that when testing on DUT [70], an extra 800 images from DUT were also taken as training data, namely a total of 2985 images for training on DUT.
To demonstrate the effectiveness of our method, we compared it with 10 SOTA methods: S2MA [30], AFNet [71], ICNet [31], PSNet [72], DANet [73], DCMF [35], MoADNet [37], CFIDNet [34], HINet [33], and LSNet [48]. As shown in Table 8, our method improved $M$ and $E_\xi$ by 3.2% and 0.5% on the NJU2K dataset. Besides, our method improved $M$ and $F_\beta$ by 0.8% and 0.9% on the NLPR dataset. This demonstrates that our method has a preferable generalization ability on the RGB-D datasets. To demonstrate that the performance improvement of our method was significant, the t-test is performed in Table 9. We retrained our method and obtained six sets of experiment results. As shown in Table 9, compared to the other methods, the p-values of $M$, $F_\beta$, and $E_\xi$ on NJU2K were less than 0.01; therefore, the t-test was highly significant. The p-values of $M$ and $F_\beta$ on NLPR were less than 0.01; therefore, the test was highly significant.

5. Discussion

This paper identifies three important issues in RGB-T SOD: two-modal feature fusion, two-level feature fusion, and the fusion of saliency information from the three decoder branches. It also provides feasible solutions to these issues, which researchers can use to make further improvements. Our method has three advantages. First, in the two-modal feature fusion, supplementary information is retained and interfering information is filtered. Second, in the two-level feature fusion, the guidance of the semantic information helps to suppress noisy information in the low-level features. Third, the auxiliary decoders can guide the two encoders to extract modality-specific information. However, there are limitations to our method. First, the summation of the three single-channel saliency features improves the other metrics, but degrades $S_\alpha$. Second, while the full CMIMR and SIGF modules bring significant improvements to our method, their individual subcomponents do not largely improve the metrics. We will address these limitations in future work. There are several directions for future development in this field. First, boundary information should be taken into account to recover clearer boundaries of salient objects. Second, although existing methods have made great progress, their structures are complex, and simpler, more-effective solutions need to be explored. Finally, the solutions for two-modal feature fusion and two-level feature fusion need further improvement.

6. Conclusions

In this paper, we propose a lightweight cross-modal information mutual reinforcement network for RGB-T salient object detection. Our proposed method consists of the cross-modal information mutual reinforcement module and the semantic-information-guided fusion module. The former module fuses complementary information between two-modal features by enhancing them with semantic information of the previous-level decoded feature and the inter-modal complementary information. The latter module fuses the two-level features and mines the multiscale information from the deep features by rectifying the low-level feature with the previous-level decoded feature and inserting the VAB to obtain the global contextual information. In summary, our method can effectively fuse complementary information between two-modal features and recover the details of salient objects. We conducted extensive experiments on three RGB-T datasets, and the results showed that our method is competitive compared with 15 state-of-the-art methods.

Author Contributions

Conceptualization, C.L. and B.W.; methodology, C.L. and B.W.; software, Y.S.; validation, Y.S. and J.Z.; formal analysis, J.Z.; investigation, X.Z.; resources, J.Z.; writing—original draft preparation, C.L.; writing—review and editing, B.W.; visualization, C.L.; supervision, X.Z. and C.Y.; project administration, X.Z. and C.Y.; funding acquisition, C.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grants 62271180, 62171002, 62031009, U21B2024, 62071415, 62001146; the “Pioneer” and “Leading Goose” R&D Program of Zhejiang Province (2022C01068); the Zhejiang Province Key Research and Development Program of China under Grants 2023C01046, 2023C01044; the Zhejiang Province Nature Science Foundation of China under Grants LZ22F020003, LDT23F01014F01; the 111 Project under Grants D17019; and the Fundamental Research Funds for the Provincial Universities of Zhejiang under Grants GK219909299001-407.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The experiment results in this article are publicly available in this repository: https://github.com/lvchengtao/CMIMR (accessed on 28 January 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Liu, H.; Ma, M.; Wang, M.; Chen, Z.; Zhao, Y. SCFusion: Infrared and Visible Fusion Based on Salient Compensation. Entropy 2023, 25, 985. [Google Scholar] [CrossRef]
  2. Cui, X.; Peng, Z.; Jiang, G.; Chen, F.; Yu, M. Perceptual Video Coding Scheme Using Just Noticeable Distortion Model Based on Entropy Filter. Entropy 2019, 21, 1095. [Google Scholar] [CrossRef]
  3. Wang, W.; Wang, J.; Chen, J. Adaptive Block-Based Compressed Video Sensing Based on Saliency Detection and Side Information. Entropy 2021, 23, 1184. [Google Scholar] [CrossRef] [PubMed]
  4. Guan, X.; He, L.; Li, M.; Li, F. Entropy Based Data Expansion Method for Blind Image Quality Assessment. Entropy 2020, 22, 60. [Google Scholar] [CrossRef] [PubMed]
  5. Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
  6. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention, Proceedings of the 18th International Conference, Munich, Germany, 5–9 October 2015; Springer: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar]
  7. Liu, J.J.; Hou, Q.; Cheng, M.M.; Feng, J.; Jiang, J. A simple pooling-based design for real-time salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3917–3926. [Google Scholar]
  8. Pang, Y.; Zhao, X.; Zhang, L.; Lu, H. Multi-scale interactive network for salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 9413–9422. [Google Scholar]
  9. Zhao, J.X.; Liu, J.J.; Fan, D.P.; Cao, Y.; Yang, J.; Cheng, M.M. EGNet: Edge guidance network for salient object detection. In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 8779–8788. [Google Scholar]
  10. Zhou, X.; Shen, K.; Liu, Z.; Gong, C.; Zhang, J.; Yan, C. Edge-aware multiscale feature integration network for salient object detection in optical remote sensing images. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5605315. [Google Scholar] [CrossRef]
  11. Fan, D.P.; Zhai, Y.; Borji, A.; Yang, J.; Shao, L. BBS-Net: RGB-D salient object detection with a bifurcated backbone strategy network. In Computer Vision—ECCV 2020, Proceedings of the 16th European Conference, Glasgow, UK, 23–28 August 2020; Springer: Cham, Switzerland, 2020; pp. 275–292. [Google Scholar]
  12. Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. Cbam: Convolutional block attention module. In Computer Vision—ECCV 2018, Proceedings of the 15th European Conference, Munich, Germany, 8–14 September 2018; Springer: Cham, Switzerland, 2018; pp. 3–19. [Google Scholar]
  13. Tu, Z.; Ma, Y.; Li, Z.; Li, C.; Xu, J.; Liu, Y. RGBT salient object detection: A large-scale dataset and benchmark. IEEE Trans. Multimed. 2022, 25, 4163–4176. [Google Scholar] [CrossRef]
  14. Huo, F.; Zhu, X.; Zhang, L.; Liu, Q.; Shu, Y. Efficient Context-Guided Stacked Refinement Network for RGB-T Salient Object Detection. IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 3111–3124. [Google Scholar] [CrossRef]
  15. Wu, R.; Bi, H.; Zhang, C.; Zhang, J.; Tong, Y.; Jin, W.; Liu, Z. Pyramid contract-based network for RGB-T salient object detection. Multimed. Tools Appl. 2023, 1–21. [Google Scholar] [CrossRef]
  16. Wang, H.; Song, K.; Huang, L.; Wen, H.; Yan, Y. Thermal images-aware guided early fusion network for cross-illumination RGB-T salient object detection. Eng. Appl. Artif. Intell. 2023, 118, 105640. [Google Scholar] [CrossRef]
  17. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
  18. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
  19. Wu, Z.; Su, L.; Huang, Q. Cascaded partial decoder for fast and accurate salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3907–3916. [Google Scholar]
  20. Guo, M.H.; Lu, C.Z.; Liu, Z.N.; Cheng, M.M.; Hu, S.M. Visual attention network. Comput. Vis. Media 2023, 9, 733–752. [Google Scholar] [CrossRef]
  21. Gupta, A.K.; Seal, A.; Prasad, M.; Khanna, P. Salient Object Detection Techniques in Computer Vision—A Survey. Entropy 2020, 22, 1174. [Google Scholar] [CrossRef] [PubMed]
  22. Zhang, Y.; Chen, F.; Peng, Z.; Zou, W.; Zhang, C. Exploring Focus and Depth-Induced Saliency Detection for Light Field. Entropy 2023, 25, 1336. [Google Scholar] [CrossRef]
  23. Zhou, X.; Fang, H.; Liu, Z.; Zheng, B.; Sun, Y.; Zhang, J.; Yan, C. Dense attention-guided cascaded network for salient object detection of strip steel surface defects. IEEE Trans. Instrum. Meas. 2021, 71, 5004914. [Google Scholar] [CrossRef]
  24. Itti, L.; Koch, C.; Niebur, E. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 1254–1259. [Google Scholar] [CrossRef]
  25. Liu, S.; Huang, D. Receptive field block net for accurate and fast object detection. In Computer Vision—ECCV 2018, Proceedings of the 15th European Conference, Munich, Germany, 8–14 September 2018; Springer: Cham, Switzerland, 2018; pp. 385–400. [Google Scholar]
  26. Zhou, X.; Shen, K.; Weng, L.; Cong, R.; Zheng, B.; Zhang, J.; Yan, C. Edge-guided recurrent positioning network for salient object detection in optical remote sensing images. IEEE Trans. Cybern. 2022, 53, 539–552. [Google Scholar] [CrossRef]
  27. Qin, X.; Zhang, Z.; Huang, C.; Gao, C.; Dehghan, M.; Jagersand, M. Basnet: Boundary-aware salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 7479–7489. [Google Scholar]
  28. Li, G.; Liu, Z.; Zhang, X.; Lin, W. Lightweight salient object detection in optical remote-sensing images via semantic matching and edge alignment. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5601111. [Google Scholar] [CrossRef]
29. Li, G.; Liu, Z.; Bai, Z.; Lin, W.; Ling, H. Lightweight Salient Object Detection in Optical Remote Sensing Images via Feature Correlation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5617712.
30. Liu, N.; Zhang, N.; Han, J. Learning selective self-mutual attention for RGB-D saliency detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 13756–13765.
31. Li, G.; Liu, Z.; Ling, H. ICNet: Information conversion network for RGB-D based salient object detection. IEEE Trans. Image Process. 2020, 29, 4873–4884.
32. Wen, H.; Yan, C.; Zhou, X.; Cong, R.; Sun, Y.; Zheng, B.; Zhang, J.; Bao, Y.; Ding, G. Dynamic selective network for RGB-D salient object detection. IEEE Trans. Image Process. 2021, 30, 9179–9192.
33. Bi, H.; Wu, R.; Liu, Z.; Zhu, H.; Zhang, C.; Xiang, T.Z. Cross-modal hierarchical interaction network for RGB-D salient object detection. Pattern Recognit. 2023, 136, 109194.
34. Chen, T.; Hu, X.; Xiao, J.; Zhang, G.; Wang, S. CFIDNet: Cascaded feature interaction decoder for RGB-D salient object detection. Neural Comput. Appl. 2022, 34, 7547–7563.
35. Chen, H.; Deng, Y.; Li, Y.; Hung, T.Y.; Lin, G. RGBD salient object detection via disentangled cross-modal fusion. IEEE Trans. Image Process. 2020, 29, 8407–8416.
36. Wu, Z.; Allibert, G.; Meriaudeau, F.; Ma, C.; Demonceaux, C. HiDAnet: RGB-D salient object detection via hierarchical depth awareness. IEEE Trans. Image Process. 2023, 32, 2160–2173.
37. Jin, X.; Yi, K.; Xu, J. MoADNet: Mobile asymmetric dual-stream networks for real-time and lightweight RGB-D salient object detection. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 7632–7645.
38. Wan, B.; Lv, C.; Zhou, X.; Sun, Y.; Zhu, Z.; Wang, H.; Yan, C. TMNet: Triple-modal interaction encoder and multi-scale fusion decoder network for V-D-T salient object detection. Pattern Recognit. 2024, 147, 110074.
39. Wan, B.; Zhou, X.; Sun, Y.; Wang, T.; Lv, C.; Wang, S.; Yin, H.; Yan, C. MFFNet: Multi-modal feature fusion network for V-D-T salient object detection. IEEE Trans. Multimed. 2023, 26, 2069–2081.
40. Zhang, Q.; Xiao, T.; Huang, N.; Zhang, D.; Han, J. Revisiting feature fusion for RGB-T salient object detection. IEEE Trans. Circuits Syst. Video Technol. 2020, 31, 1804–1818.
41. Gao, W.; Liao, G.; Ma, S.; Li, G.; Liang, Y.; Lin, W. Unified information fusion network for multi-modal RGB-D and RGB-T salient object detection. IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 2091–2106.
42. Liang, Y.; Qin, G.; Sun, M.; Qin, J.; Yan, J.; Zhang, Z. Multi-modal interactive attention and dual progressive decoding network for RGB-D/T salient object detection. Neurocomputing 2022, 490, 132–145.
43. Zhou, W.; Guo, Q.; Lei, J.; Yu, L.; Hwang, J.N. ECFFNet: Effective and consistent feature fusion network for RGB-T salient object detection. IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 1224–1235.
44. Cong, R.; Zhang, K.; Zhang, C.; Zheng, F.; Zhao, Y.; Huang, Q.; Kwong, S. Does thermal really always matter for RGB-T salient object detection? IEEE Trans. Multimed. 2022, 25, 6971–6982.
45. Chen, G.; Shao, F.; Chai, X.; Chen, H.; Jiang, Q.; Meng, X.; Ho, Y.S. CGMDRNet: Cross-guided modality difference reduction network for RGB-T salient object detection. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 6308–6323.
46. Ma, S.; Song, K.; Dong, H.; Tian, H.; Yan, Y. Modal complementary fusion network for RGB-T salient object detection. Appl. Intell. 2023, 53, 9038–9055.
47. Tu, Z.; Li, Z.; Li, C.; Lang, Y.; Tang, J. Multi-interactive dual-decoder for RGB-thermal salient object detection. IEEE Trans. Image Process. 2021, 30, 5678–5691.
48. Zhou, W.; Zhu, Y.; Lei, J.; Yang, R.; Yu, L. LSNet: Lightweight spatial boosting network for detecting salient objects in RGB-thermal images. IEEE Trans. Image Process. 2023, 32, 1329–1340.
49. Zhou, T.; Fu, H.; Chen, G.; Zhou, Y.; Fan, D.P.; Shao, L. Specificity-preserving RGB-D saliency detection. In Proceedings of the IEEE International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 4681–4691.
50. Chen, L.; Zhang, H.; Xiao, J.; Nie, L.; Shao, J.; Liu, W.; Chua, T.S. SCA-CNN: Spatial and channel-wise attention in convolutional networks for image captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 5659–5667.
51. Wang, W.; Xie, E.; Li, X.; Fan, D.P.; Song, K.; Liang, D.; Lu, T.; Luo, P.; Shao, L. PVT v2: Improved baselines with pyramid vision transformer. Comput. Vis. Media 2022, 8, 415–424.
52. Hou, Q.; Cheng, M.M.; Hu, X.; Borji, A.; Tu, Z.; Torr, P.H. Deeply supervised salient object detection with short connections. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3203–3212.
53. De Boer, P.T.; Kroese, D.P.; Mannor, S.; Rubinstein, R.Y. A tutorial on the cross-entropy method. Ann. Oper. Res. 2005, 134, 19–67.
54. Máttyus, G.; Luo, W.; Urtasun, R. DeepRoadMapper: Extracting road topology from aerial images. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 3438–3446.
55. Wang, G.; Li, C.; Ma, Y.; Zheng, A.; Tang, J.; Luo, B. RGB-T saliency detection benchmark: Dataset, baselines, analysis and a novel approach. In Image and Graphics Technologies, Proceedings of the 13th Conference on Image and Graphics Technologies and Applications, IGTA 2018, Beijing, China, 8–10 April 2018; Springer: Singapore, 2018; pp. 359–369.
56. Tu, Z.; Xia, T.; Li, C.; Wang, X.; Ma, Y.; Tang, J. RGB-T image saliency detection via collaborative graph learning. IEEE Trans. Multimed. 2019, 22, 160–173.
57. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1026–1034.
58. Achanta, R.; Hemami, S.; Estrada, F.; Susstrunk, S. Frequency-tuned salient region detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1597–1604.
59. Fan, D.P.; Gong, C.; Cao, Y.; Ren, B.; Cheng, M.M.; Borji, A. Enhanced-alignment measure for binary foreground map evaluation. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18), Stockholm, Sweden, 13–19 July 2018; pp. 698–704.
60. Fan, D.P.; Cheng, M.M.; Liu, Y.; Li, T.; Borji, A. Structure-measure: A new way to evaluate foreground maps. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4548–4557.
61. Huo, F.; Zhu, X.; Zhang, Q.; Liu, Z.; Yu, W. Real-time one-stream semantic-guided refinement network for RGB-thermal salient object detection. IEEE Trans. Instrum. Meas. 2022, 71, 2512512.
62. Gao, L.; Fu, P.; Xu, M.; Wang, T.; Liu, B. UMINet: A unified multi-modality interaction network for RGB-D and RGB-T salient object detection. Vis. Comput. 2023, 1–18.
63. Song, K.; Huang, L.; Gong, A.; Yan, Y. Multiple graph affinity interactive network and a variable illumination dataset for RGBT image salient object detection. IEEE Trans. Circuits Syst. Video Technol. 2022, 33, 3104–3118.
64. Zhou, W.; Zhu, Y.; Lei, J.; Wan, J.; Yu, L. APNet: Adversarial learning assistance and perceived importance fusion network for all-day RGB-T salient object detection. IEEE Trans. Emerg. Top. Comput. Intell. 2021, 6, 957–968.
65. Wang, J.; Song, K.; Bao, Y.; Huang, L.; Yan, Y. CGFNet: Cross-guided fusion network for RGB-T salient object detection. IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 2949–2961.
66. Fan, D.P.; Lin, Z.; Zhang, Z.; Zhu, M.; Cheng, M.M. Rethinking RGB-D salient object detection: Models, data sets, and large-scale benchmarks. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 2075–2089.
67. Ju, R.; Ge, L.; Geng, W.; Ren, T.; Wu, G. Depth saliency based on anisotropic center-surround difference. In Proceedings of the IEEE International Conference on Image Processing, Paris, France, 27–30 October 2014; pp. 1115–1119.
68. Peng, H.; Li, B.; Xiong, W.; Hu, W.; Ji, R. RGBD salient object detection: A benchmark and algorithms. In Computer Vision—ECCV 2014, Proceedings of the 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Springer: Cham, Switzerland, 2014; pp. 92–109.
69. Niu, Y.; Geng, Y.; Li, X.; Liu, F. Leveraging stereopsis for saliency analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 454–461.
70. Piao, Y.; Ji, W.; Li, J.; Zhang, M.; Lu, H. Depth-induced multi-scale recurrent attention network for saliency detection. In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 7254–7263.
71. Wang, N.; Gong, X. Adaptive fusion for RGB-D salient object detection. IEEE Access 2019, 7, 55277–55284.
72. Bi, H.; Wu, R.; Liu, Z.; Zhang, J.; Zhang, C.; Xiang, T.Z.; Wang, X. PSNet: Parallel symmetric network for RGB-T salient object detection. Neurocomputing 2022, 511, 410–425.
73. Zhao, X.; Zhang, L.; Pang, Y.; Lu, H.; Zhang, L. A single stream network for robust and real-time RGB-D salient object detection. In Computer Vision—ECCV 2020, Proceedings of the 16th European Conference, Glasgow, UK, 23–28 August 2020; Springer: Cham, Switzerland, 2020; pp. 646–662.
Figure 1. Some examples of RGB-T datasets. (a) Ours. (b) PCNet. (c) TAGF.
Figure 2. Overall architecture of our lightweight cross-modal information mutual reinforcement network for RGB-T salient object detection. ‘E1∼E5’ are the five modules of the encoder. ‘TDec’ and ‘RDec’ are the decoder modules of the auxiliary decoder. ‘CMIMR’ is the cross-modal information mutual reinforcement module. ‘SIGF’ is the semantic-information-guided fusion module.
Figure 3. Architecture of the cross-modal information mutual reinforcement (CMIMR) module. ‘Conv 1×1’ is the 1×1 convolution. ‘SA’ is the spatial attention. ‘DSConv 3×3’ is the depth-separable convolution with the 3×3 convolution kernel.
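For readers who want a concrete starting point, the sketch below shows how the two lightweight primitives named in the Figure 3 caption, a spatial attention map and a depth-separable 3×3 convolution, are commonly assembled in PyTorch so that each modality is re-weighted by the other's attention. This is only an illustrative approximation of a CMIMR-style mutual-reinforcement step under our own assumptions (class names, channel sizes, and the CBAM-style attention are ours), not the authors' released code.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """CBAM-style spatial attention: produces a 1-channel weighting map in [0, 1]."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_map = torch.mean(x, dim=1, keepdim=True)    # B x 1 x H x W
        max_map, _ = torch.max(x, dim=1, keepdim=True)  # B x 1 x H x W
        return torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))

class DSConv3x3(nn.Module):
    """Depth-separable 3x3 convolution: depthwise 3x3 followed by pointwise 1x1."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch),
            nn.Conv2d(in_ch, out_ch, 1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)

class MutualReinforcement(nn.Module):
    """Each modality's feature is enhanced by the other's spatial attention (residual form)."""
    def __init__(self, ch: int):
        super().__init__()
        self.sa_rgb, self.sa_t = SpatialAttention(), SpatialAttention()
        self.fuse_rgb, self.fuse_t = DSConv3x3(ch, ch), DSConv3x3(ch, ch)

    def forward(self, f_rgb, f_t):
        f_rgb_out = self.fuse_rgb(f_rgb + f_rgb * self.sa_t(f_t))
        f_t_out = self.fuse_t(f_t + f_t * self.sa_rgb(f_rgb))
        return f_rgb_out, f_t_out

if __name__ == "__main__":
    rgb_feat, t_feat = torch.randn(1, 64, 56, 56), torch.randn(1, 64, 56, 56)
    out_rgb, out_t = MutualReinforcement(64)(rgb_feat, t_feat)
    print(out_rgb.shape, out_t.shape)  # torch.Size([1, 64, 56, 56]) twice
```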
Figure 4. Architecture of the semantic-information-guided fusion (SIGF) module. ‘DSConv 3×3’ is the depth-separable convolution with the 3×3 convolution kernel. ‘VAB’ is the visual attention block. ‘Up×2’ is the two-times upsample.
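As with Figure 3, the snippet below is only a rough sketch of a semantic-guided fusion step of this kind: the deeper (semantic) feature is upsampled by a factor of two, used to gate the current-level feature, and the result is refined with a depth-separable 3×3 convolution. The visual attention block is approximated here by simple channel attention; the real VAB, and all names and channel sizes, are assumptions rather than the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention, used as a stand-in for the VAB."""
    def __init__(self, ch: int, reduction: int = 4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.mlp(F.adaptive_avg_pool2d(x, 1))

class SemanticGuidedFusion(nn.Module):
    """Upsample the deeper semantic feature x2, gate the current-level feature with it,
    then refine with a depth-separable 3x3 convolution and an attention block."""
    def __init__(self, ch: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        self.refine = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1, groups=ch),  # depthwise 3x3
            nn.Conv2d(ch, ch, 1),                        # pointwise 1x1
            nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
        )
        self.vab = ChannelAttention(ch)

    def forward(self, f_cur, f_deep):
        f_deep = F.interpolate(f_deep, scale_factor=2, mode="bilinear", align_corners=False)
        fused = f_cur * self.gate(f_deep) + f_deep
        return self.vab(self.refine(fused))

if __name__ == "__main__":
    cur = torch.randn(1, 64, 56, 56)   # current-level fused feature
    deep = torch.randn(1, 64, 28, 28)  # deeper semantic feature at half resolution
    print(SemanticGuidedFusion(64)(cur, deep).shape)  # torch.Size([1, 64, 56, 56])
```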
Figure 5. PR curves and F-measure curves of the compared methods on the RGB-T datasets.
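The PR and F-measure curves in Figure 5 are obtained by sweeping a binarization threshold over each predicted saliency map. A minimal NumPy sketch of that evaluation loop is given below for a single image (in practice, precision and recall are averaged over the whole dataset at each threshold). The β² = 0.3 setting follows the common convention in the saliency literature; the function and variable names are ours, not from the paper's evaluation code.

```python
import numpy as np

def pr_and_fmeasure_curves(pred, gt, num_thresholds=256, beta2=0.3):
    """Precision, recall, and F-measure at evenly spaced thresholds.

    pred: float saliency map in [0, 1]; gt: binary ground-truth mask of the same shape.
    Returns three arrays of shape (num_thresholds,).
    """
    gt = np.asarray(gt).astype(bool)
    eps = 1e-8
    precisions, recalls, fmeasures = [], [], []
    for t in np.linspace(0, 1, num_thresholds):
        binary = np.asarray(pred) >= t
        tp = np.logical_and(binary, gt).sum()
        precision = tp / (binary.sum() + eps)
        recall = tp / (gt.sum() + eps)
        f = (1 + beta2) * precision * recall / (beta2 * precision + recall + eps)
        precisions.append(precision)
        recalls.append(recall)
        fmeasures.append(f)
    return np.array(precisions), np.array(recalls), np.array(fmeasures)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pred = rng.random((240, 320))                       # toy saliency map
    gt = (rng.random((240, 320)) > 0.7).astype(np.uint8)  # toy ground truth
    p, r, f = pr_and_fmeasure_curves(pred, gt)
    print(p.shape, r.shape, f.max())
    print("MAE:", np.abs(pred - gt).mean())  # the M (mean absolute error) metric
```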
Figure 6. Visual comparisons with other methods. (a) Ours. (b) ADF. (c) MIDD. (d) MMNet. (e) MIADPD. (f) OSRNet. (g) ECFFNet. (h) PCNet. (i) TAGF. (j) UMINet. (k) APNet.
Figure 7. Visual comparisons with ablation experiments on the effectiveness of the CMIMR module. (a) Ours. (b) w/o CMIMR. (c) w/o PDFE. (d) w/o IMR.
Figure 8. Visual comparisons with ablation experiments on the effectiveness of the SIGF module. (a) Ours. (b) w/o SIGF. (c) w/o SIE. (d) w/o VAB.
Figure 9. Visual comparisons with ablation experiments on the effectiveness of the IoU loss and auxiliary decoder. (a) Ours. (b) w/o IoU. (c) w/o AD.
Table 1. Quantitative comparisons with the heavy-model-based methods on the RGB-T datasets. Param denotes the number of parameters, FLOP denotes floating-point operations, and FPS denotes frames per second, measured on a GeForce RTX 2080 Ti GPU and on an AMD Ryzen 7 5800H @ 3.2 GHz CPU. The top three results in each column are marked in red, green, and blue, respectively. ↑ and ↓ mean that a larger value and a smaller value are better, respectively.
Group | Method | Pub. | Param (M) ↓ | FLOP (G) ↓ | FPS (CPU) ↑ | FPS (GPU) ↑ | VT5000 (M ↓, Fβ ↑, Sα ↑, Eξ ↑) | VT1000 (M ↓, Fβ ↑, Sα ↑, Eξ ↑) | VT821 (M ↓, Fβ ↑, Sα ↑, Eξ ↑)
RGBBASNetCVPR1987.1127.60.9473.00.05420.7620.83860.8780.03050.84490.90860.92230.06730.73350.82280.8556
EGNetICCV19108.0156.80.9395.10.05110.77410.8530.88860.03290.84740.90970.9230.06370.72550.83010.8581
CPDCVPR1947.917.83.9738.20.04650.78590.85470.89640.03120.86170.90720.93080.07950.71730.81840.8474
RGB-TADFTMM220.04830.77750.86350.8910.0340.84580.90940.92220.07660.71590.81020.8443
MIDDTIP2152.4216.71.5636.50.04610.78760.85610.89260.02930.86950.90690.93530.04460.80320.87120.8974
MMNetTCSVT2164.142.51.7931.10.04330.78090.86180.88940.02680.86260.91330.9320.03970.79490.87310.8944
MIADPDNP220.04040.79250.87860.89680.02510.86740.92370.9360.06990.73980.84440.8529
OSRNetTIM2215.642.42.2963.10.03990.82070.87520.91080.02210.88960.92580.94910.04260.81140.87510.9
ECFFNetTCSVT210.03760.80830.87360.91230.02140.87780.92240.94820.03440.81170.87610.9088
PCNetMTA230.03630.8290.87490.91880.0210.88650.9320.94820.03620.81930.87340.9005
TAGFEAAI2336.2115.10.8733.10.03590.82560.88360.91620.02110.88790.92640.95080.03460.82050.88050.9091
UMINetVC230.03540.82930.8820.9220.02120.89060.9260.95610.05420.78910.85830.8866
APNetTETCI2130.446.60.9936.90.03450.82210.87510.91820.02130.88480.92040.95150.03410.81810.86690.9121
Our 6.11.56.534.90.03210.84630.87950.9320.02050.90160.92290.96080.03110.8410.87760.9262
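The Param and FPS figures of the kind reported in Tables 1 and 2 can be measured with a few lines of PyTorch; FLOPs usually require a separate profiling tool and are omitted here. The snippet below is a generic measurement sketch, not the exact protocol used for the tables: the stand-in model, input size, and warm-up/repeat counts are placeholders.

```python
import time
import torch
import torchvision

def count_params_m(model: torch.nn.Module) -> float:
    """Number of trainable parameters in millions (the 'Param (M)' column)."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6

@torch.no_grad()
def measure_fps(model, input_size=(1, 3, 352, 352), device="cpu", warmup=5, repeats=20):
    """Average frames per second for a single-input forward pass."""
    model = model.to(device).eval()
    x = torch.randn(*input_size, device=device)
    for _ in range(warmup):               # warm-up passes are excluded from timing
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    return repeats / (time.perf_counter() - start)

if __name__ == "__main__":
    net = torchvision.models.mobilenet_v2()  # stand-in backbone, not the paper's network
    print(f"Param: {count_params_m(net):.1f} M, CPU FPS: {measure_fps(net):.1f}")
```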
Table 2. Quantitative comparisons with the lightweight methods on the RGB-T datasets. The best result is marked in red color in each column. ↑ and ↓ mean a larger value is better and a smaller value is better, respectively.
Method | Pub. | Param (M) ↓ | FLOP (G) ↓ | FPS (CPU) ↑ | FPS (GPU) ↑ | VT5000 (M ↓, Fβ ↑, Sα ↑, Eξ ↑) | VT1000 (M ↓, Fβ ↑, Sα ↑, Eξ ↑) | VT821 (M ↓, Fβ ↑, Sα ↑, Eξ ↑)
CSRNetTCSVT211.04.42.724.80.04170.80930.86780.90680.02420.87510.91840.93930.03760.82890.88470.9116
LSNetTIP234.61.211.651.10.03670.82690.87640.92060.02240.88740.92440.95280.03290.82760.87770.9179
Our 6.11.56.534.90.03210.84630.87950.9320.02050.90160.92290.96080.03110.8410.87760.9262
Table 3. The t-test comparing our method with the compared methods on the RGB-T datasets. For the evaluation metrics IoU and Dice, the right-sided test was performed. The p-value is reported in this table. ↑ means a larger value is better.
Method | VT5000 (IoU ↑, Dice ↑) | VT1000 (IoU ↑, Dice ↑) | VT821 (IoU ↑, Dice ↑)
LSNet | 0.7609, 0.8411 | 0.8627, 0.9137 | 0.7665, 0.8393
Our | 0.7721, 0.8531 | 0.865, 0.916 | 0.7684, 0.8439
    | 0.7728, 0.8531 | 0.863, 0.9149 | 0.7676, 0.8424
    | 0.7718, 0.852 | 0.8649, 0.9161 | 0.7608, 0.8357
    | 0.7738, 0.8538 | 0.8632, 0.9151 | 0.7669, 0.8416
    | 0.771, 0.8519 | 0.8629, 0.9141 | 0.7685, 0.8432
    | 0.7703, 0.8512 | 0.8624, 0.9135 | 0.765, 0.8398
p-value | 1.9 × 10⁻⁶, 4.7 × 10⁻⁷ | 0.0562, 0.0154 | 0.5938, 1.1 × 10⁻⁸
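The per-image IoU and Dice scores behind Table 3, and the one-sided t-tests used throughout Tables 3, 5, 7, and 9, can be reproduced with NumPy and SciPy as sketched below. The right-sided test asks whether one set of scores has a significantly larger mean than the other; the function names and the toy data are illustrative only and do not reproduce the paper's exact experimental setup.

```python
import numpy as np
from scipy import stats

def iou_and_dice(pred, gt, threshold=0.5, eps=1e-8):
    """Per-image IoU and Dice between a thresholded saliency map and a binary mask."""
    p = np.asarray(pred) >= threshold
    g = np.asarray(gt).astype(bool)
    inter = np.logical_and(p, g).sum()
    union = np.logical_or(p, g).sum()
    iou = inter / (union + eps)
    dice = 2 * inter / (p.sum() + g.sum() + eps)
    return iou, dice

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy per-image IoU scores for two methods on the same test set.
    ours = 0.77 + 0.02 * rng.standard_normal(100)
    baseline = 0.76 + 0.02 * rng.standard_normal(100)
    # Right-sided test: the alternative hypothesis is that 'ours' has the larger mean.
    # For MAE-style metrics where smaller is better, use alternative="less" instead.
    t_stat, p_value = stats.ttest_ind(ours, baseline, alternative="greater")
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```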
Table 4. Six sets of experiment results of our method on the RGB-T datasets. ↑ and ↓ mean a larger value is better and a smaller value is better, respectively.
No. | VT5000 (M ↓, Fβ ↑, Sα ↑, Eξ ↑) | VT1000 (M ↓, Fβ ↑, Sα ↑, Eξ ↑) | VT821 (M ↓, Fβ ↑, Sα ↑, Eξ ↑)
1 | 0.0321, 0.8463, 0.8795, 0.932 | 0.0205, 0.9016, 0.9229, 0.9608 | 0.0311, 0.841, 0.8776, 0.9262
2 | 0.0325, 0.843, 0.8797, 0.9311 | 0.0205, 0.8978, 0.9215, 0.9589 | 0.0312, 0.8385, 0.8764, 0.9251
3 | 0.0322, 0.8451, 0.8797, 0.9318 | 0.0199, 0.9004, 0.9232, 0.9608 | 0.032, 0.8384, 0.8735, 0.9222
4 | 0.0324, 0.8436, 0.88, 0.9319 | 0.0203, 0.8973, 0.9216, 0.9591 | 0.0316, 0.8369, 0.8761, 0.9244
5 | 0.0331, 0.8401, 0.8786, 0.9299 | 0.0205, 0.8972, 0.9214, 0.9597 | 0.0311, 0.8361, 0.8773, 0.9242
6 | 0.0332, 0.8407, 0.8781, 0.93 | 0.0205, 0.8981, 0.9214, 0.9595 | 0.031, 0.8369, 0.8753, 0.9242
Table 5. The t-test comparing our method with the compared methods on the RGB-T datasets. For the evaluation metric M, the left-sided test was performed, while for the other three metrics Fβ, Sα, and Eξ, the right-sided test was performed. The p-value is reported in this table. ↑ and ↓ mean a larger value is better and a smaller value is better, respectively.
Compared Method | VT5000 (M ↓, Fβ ↑, Sα ↑, Eξ ↑) | VT1000 (M ↓, Fβ ↑, Sα ↑, Eξ ↑) | VT821 (M ↓, Fβ ↑, Sα ↑, Eξ ↑)
BASNet4.8 × 10 10 2.5 × 10 9 2.2 × 10 10 2.1 × 10 10 8.4 × 10 10 4.8 × 10 9 9.3 × 10 8 5.5 × 10 10 1.6 × 10 11 1.4 × 10 10 1.9 × 10 9 2.7 × 10 10
EGNet1.0 × 10 9 5.7 × 10 9 2.0 × 10 9 6.2 × 10 10 2.9 × 10 10 6.1 × 10 9 1.4 × 10 7 6.1 × 10 10 2.7 × 10 11 1.0 × 10 10 3.9 × 10 9 3.3 × 10 10
CPD4.3 × 10 9 1.4 × 10 8 2.8 × 10 9 1.7 × 10 9 6.0 × 10 10 3.1 × 10 8 5.7 × 10 8 2.0 × 10 9 3.7 × 10 12 7.0 × 10 11 1.3 × 10 9 1.6 × 10 10
ADF2.4 × 10 9 7.3 × 10 9 2.5 × 10 8 8.3 × 10 10 1.9 × 10 10 5.2 × 10 9 1.3 × 10 7 5.5 × 10 10 5.0 × 10 12 6.6 × 10 11 6.5 × 10 10 1.3 × 10 10
MIDD5.0 × 10 9 1.7 × 10 8 3.7 × 10 9 1.0 × 10 9 1.6 × 10 9 1.0 × 10 7 5.1 × 10 8 4.6 × 10 9 2.3 × 10 9 3.5 × 10 8 0.00032.9 × 10 8
MMNet1.6 × 10 8 9.5 × 10 9 1.5 × 10 8 6.9 × 10 10 8.1 × 10 9 3.5 × 10 8 8.0 × 10 7 2.5 × 10 9 2.3 × 10 8 1.2 × 10 8 0.00241.7 × 10 8
MIADPD7.7 × 10 8 2.7 × 10 8 0.03991.8 × 10 9 3.8 × 10 8 7.2 × 10 8 0.99805.4 × 10 9 1.1 × 10 11 2.0 × 10 10 2.5 × 10 8 2.3 × 10 10
OSRNet1.1 × 10 7 1.5 × 10 6 2.1 × 10 5 2.5 × 10 8 5.5 × 10 6 3.2 × 10 5 0.99992.9 × 10 7 5.2 × 10 9 1.4 × 10 7 0.09324.9 × 10 8
ECFFNet7.0 × 10 7 1.7 × 10 7 4.1 × 10 6 3.7 × 10 8 6.9 × 10 5 5.4 × 10 7 0.85661.9 × 10 7 3.4 × 10 6 1.4 × 10 7 0.54144.5 × 10 7
PCNet3.1 × 10 6 1.5 × 10 5 1.5 × 10 5 3.0 × 10 7 0.00077.7 × 10 6 11.9 × 10 7 3.4 × 10 7 7.8 × 10 7 0.00385.4 × 10 8
TAGF5.5 × 10 6 5.2 × 10 6 1.5 × 10 5 1.2 × 10 7 0.00041.4 × 10 5 0.99996.8 × 10 5 2.5 × 10 6 1.1 × 10 6 0.99965.0 × 10 7
UMINet1.2 × 10 5 1.7 × 10 5 0.00011.4 × 10 6 0.00025.6 × 10 5 0.99995.4 × 10 5 1.5 × 10 10 6.4 × 10 9 4.5 × 10 7 5.5 × 10 9
APNet7.9 × 10 5 2.1 × 10 6 1.9 × 10 5 2.4 × 10 7 0.00014.0 × 10 6 0.00251.0 × 10 6 5.6 × 10 6 5.7 × 10 7 1.2 × 10 5 1.5 × 10 6
CSRNet3.6 × 10 8 2.0 × 10 7 1.2 × 10 7 1.0 × 10 8 1.1 × 10 7 2.9 × 10 7 6.1 × 10 5 1.1 × 10 8 9.7 × 10 8 2.8 × 10 5 0.99991.2 × 10 6
LSNet1.9 × 10 6 7.6 × 10 6 0.00016.6 × 10 7 2.5 × 10 6 1.1 × 10 5 0.99962.4 × 10 6 9.0 × 10 5 1.4 × 10 5 0.97943.4 × 10 5
Table 6. Ablation studies of our method on three RGB-T datasets. The best result is marked in red color in each column. ↑ and ↓ mean a larger value is better and a smaller value is better, respectively.
Variant | VT5000 (M ↓, Fβ ↑, Sα ↑, Eξ ↑) | VT1000 (M ↓, Fβ ↑, Sα ↑, Eξ ↑) | VT821 (M ↓, Fβ ↑, Sα ↑, Eξ ↑)
w/o CMIMR0.03380.83210.87440.92740.02220.88810.91740.95560.03340.82490.86820.9163
w/o PDFE0.03280.83960.87620.92950.02110.89350.920.95710.0330.83090.86930.9182
w/o IMR0.03310.83940.87770.92920.02080.89450.92030.95770.03210.83080.87120.9208
w ADF-TMF0.03290.83960.87780.93090.02080.89340.91890.95910.03140.83680.87660.9259
w/o SIGF0.03340.83660.87670.92870.02150.88530.91590.95410.03160.8270.87470.9207
w/o SIE0.03270.84050.87840.930.02080.89270.92020.95710.03350.83080.87120.9201
w/o VAB0.0330.83920.87710.92990.02080.8940.91990.95720.03120.83270.87480.9229
w ADF-Decoder0.03280.83770.87830.92990.0210.89410.91980.95820.03190.83540.87720.9238
w SIGF-FAM0.03280.84160.87950.93120.02050.89650.92150.95950.03160.83510.87750.9231
w SIGF-RFB0.03280.84110.87940.93020.02080.89660.92190.95840.03280.83540.87660.9221
w/o IoU0.03310.83440.87880.92760.02220.88280.92160.94880.03320.82590.87640.9165
S F 0.03270.83960.88470.92890.02110.89030.92690.94990.03040.83530.88720.9219
S R 0.04190.79670.85780.90650.02650.87270.91390.94030.04270.77160.84460.8914
S T 0.04610.76080.83890.89110.03540.83270.88640.92040.05180.7450.82280.8751
S F + S R + S T 0.04020.76490.87740.88440.02760.8440.92140.92160.04070.76770.87930.8802
w LPW0.03350.83160.88180.92550.02110.88610.92590.94930.03110.82960.88910.9199
w/o AD0.0360.82940.87780.92280.02110.89020.92610.95220.03340.82770.87940.9198
RGB0.04190.81050.86160.91150.02570.88090.9160.94670.05430.76380.84310.8939
T0.0440.77660.84390.90070.03390.84440.88840.92860.04940.75950.82490.8853
Our0.03210.84630.87950.9320.02050.90160.92290.96080.03110.8410.87760.9262
Table 7. The t-test comparing our method with the ablation variants on the RGB-T datasets. For the evaluation metric M, the left-sided test was performed. For the other three metrics Fβ, Sα, and Eξ, the right-sided test was performed. The p-value is reported in this table. ↑ and ↓ mean a larger value is better and a smaller value is better, respectively.
Ablation Variant | VT5000 (M ↓, Fβ ↑, Sα ↑, Eξ ↑) | VT1000 (M ↓, Fβ ↑, Sα ↑, Eξ ↑) | VT821 (M ↓, Fβ ↑, Sα ↑, Eξ ↑)
w/o CMIMR0.00065.0 × 10 5 8.6 × 10 6 0.00014.2 × 10 6 1.5 × 10 5 1.8 × 10 5 2.9 × 10 5 2.4 × 10 5 4.6 × 10 6 2.5 × 10 5 1.2 × 10 5
w/o PDFE0.15140.00808.2 × 10 5 0.00450.00040.00050.00100.00026.7 × 10 5 9.2 × 10 5 5.3 × 10 5 4.3 × 10 5
w/o IMR0.02040.00640.00180.00220.00360.00120.00190.00080.00248.6 × 10 5 0.00030.0006
w ADF-TMF0.07710.00800.00240.30170.00360.00040.00010.04610.34570.08240.80230.9816
w/o SIGF0.00370.00060.00020.00084.4 × 10 5 4.8 × 10 6 4.6 × 10 6 6.6 × 10 6 0.07661.1 × 10 5 0.04020.0005
w/o SIE0.28180.02230.01790.01780.00360.00020.00150.00021.9 × 10 5 8.6 × 10 5 0.00030.0002
w/o VAB0.03920.00520.00040.01330.00360.00070.00080.00030.78080.00040.04950.0199
w ADF-Decoder0.15140.00140.01230.01330.00070.00080.00060.00250.00800.00800.94310.1634
w SIGF-FAM0.15140.09060.76130.58020.11770.01510.09830.20680.07660.00520.96940.0312
w SIGF-RFB0.15140.04730.66040.03300.00360.01770.38890.00440.00010.00800.80230.0040
w/o IoU0.02040.00020.09270.00014.2 × 10 6 2.1 × 10 6 0.14342.5 × 10 7 3.9 × 10 5 6.8 × 10 6 0.71311.3 × 10 5
S F 0.28170.00800.99990.00120.00044.7 × 10 5 0.99994.3 × 10 7 0.9990.006910.0029
S R 3.2 × 10 8 4.1 × 10 8 5.4 × 10 9 9.6 × 10 9 1.0 × 10 8 1.8 × 10 7 1.1 × 10 6 1.5 × 10 8 5.0 × 10 9 1.4 × 10 9 2.6 × 10 8 1.1 × 10 8
S T 5.0 × 10 9 2.4 × 10 9 2.3 × 10 10 8.5 × 10 10 1.2 × 10 10 1.7 × 10 9 7.1 × 10 10 4.3 × 10 10 2.6 × 10 10 2.6 × 10 10 1.9 × 10 9 1.5 × 10 9
S F + S R + S T 8.8 × 10 8 3.0 × 10 9 0.00083.9 × 10 10 4.5 × 10 9 4.4 × 10 9 0.06695.0 × 10 10 1.3 × 10 8 1.1 × 10 9 0.99882.5 × 10 9
w LPW0.00234.0 × 10 5 0.99981.5 × 10 5 0.00046.5 × 10 6 0.99993.2 × 10 7 0.89964.1 × 10 5 10.0002
w/o AD4.7 × 10 6 1.7 × 10 5 0.00242.1 × 10 5 0.00044.5 × 10 6 0.99991.6 × 10 6 2.4 × 10 5 1.5 × 10 5 0.99870.0002
RGB3.2 × 10 8 2.4 × 10 7 1.4 × 10 8 3.0 × 10 8 2.1 × 10 8 1.2 × 10 6 5.0 × 10 6 1.1 × 10 7 1.5 × 10 10 8.0 × 10 10 2.1 × 10 8 1.6 × 10 8
T1.2 × 10 8 6.8 × 10 9 4.5 × 10 10 3.3 × 10 9 2.0 × 10 10 4.6 × 10 9 9.4 × 10 10 1.4 × 10 9 4.9 × 10 10 6.1 × 10 10 2.3 × 10 9 4.6 × 10 9
Table 8. Quantitative comparisons with 10 methods on the RGB-D datasets. The top three results are marked in red, green, and blue color in each row, respectively. ↑ and ↓ mean a larger value is better and a smaller value is better, respectively.
Dataset | Metric | S2MA | AFNet | ICNet | PSNet | DANet | DCMF | MoADNet | CFIDNet | HINet | LSNet | Our
NJU2K M 0.05330.05330.0520.04850.04640.04270.0410.0380.03870.03790.0367
F β 0.86460.86720.86760.86590.87630.88040.89030.8910.8960.89980.901
S α 0.89420.88010.89390.88980.89690.91250.90620.91410.91510.91070.9021
E ξ 0.91630.91880.91270.91250.9260.92460.93390.92890.93850.94010.9447
NLPR M 0.030.0330.02840.02870.02850.0290.02740.02580.02590.02440.0242
F β 0.84790.82030.8650.88380.86620.8490.86640.88030.87250.88240.8917
S α 0.91450.89940.92150.90610.91370.9210.91480.9210.92120.91690.9136
E ξ 0.94070.93060.94350.94570.94780.93810.94480.950.94910.95540.9564
DUT M 0.0440.07220.04670.03510.03130.0332
F β 0.88470.82980.88360.90570.92140.9212
S α 0.9030.85240.88940.92790.92690.9154
E ξ 0.93490.90120.9290.95050.95890.9531
SIP M 0.06970.0540.05850.06030.06580.04920.0521
F β 0.83340.86150.8460.85650.84340.88190.8805
S α 0.85270.87710.86480.86320.85520.88440.8709
E ξ 0.8990.91670.91020.90580.8990.92710.9178
STERE1000 M 0.05080.04720.04470.05210.04760.04270.04240.04270.0490.05430.0439
F β 0.85450.87180.86420.85220.85810.86590.86660.87890.85860.85420.874
S α 0.89040.89140.90250.86780.89220.90970.89890.90120.89190.87070.8822
E ξ 0.92540.93370.92560.90660.92630.92980.93430.93250.92730.91940.9364
Table 9. Hypothesis test of our method with the compared methods on the RGB-D datasets. The t-test was used. For the evaluation metric M, the left-sided test was performed; for the other three metrics Fβ, Sα, and Eξ, the right-sided test was performed. The p-value is reported in this table. ↑ and ↓ mean a larger value is better and a smaller value is better, respectively.
Dataset | Metric | Our | S2MA | AFNet | ICNet | PSNet | DANet | DCMF | MoADNet | CFIDNet | HINet | LSNet
NJU2K M 0.03670.0370.03630.03590.03610.03628.8 × 10 10 8.8 × 10 10 1.3 × 10 9 4.6 × 10 9 1.2 × 10 8 1.2 × 10 7 5.6 × 10 7 9.4 × 10 5 1.7 × 10 5 0.0001
F β 0.9010.90130.90130.90280.90340.90352.8 × 10 9 4.0 × 10 9 4.2 × 10 9 3.3 × 10 9 1.8 × 10 8 4.3 × 10 8 8.6 × 10 7 1.2 × 10 6 2.1 × 10 5 0.0018
S α 0.90210.90180.90270.90390.90340.90348.1 × 10 7 6.6 × 10 9 6.9 × 10 7 1.1 × 10 7 5.1 × 10 6 10.9999111
E ξ 0.94470.94420.94470.94510.9450.9452.3 × 10 11 3.6 × 10 11 1.3 × 10 11 1.2 × 10 11 1.8 × 10 10 1.3 × 10 10 2.8 × 10 9 4.2 × 10 10 4.4 × 10 8 1.9 × 10 7
NLPR M 0.02420.02450.02470.02450.02430.02464.6 × 10 9 5.3 × 10 10 2.5 × 10 8 1.8 × 10 8 2.2 × 10 8 1.3 × 10 8 1.1 × 10 7 5.5 × 10 6 3.9 × 10 6 0.7897
F β 0.89170.88880.88980.89220.89250.89277.4 × 10 9 6.3 × 10 10 9.1 × 10 8 4.5 × 10 5 1.1 × 10 7 8.4 × 10 9 1.2 × 10 7 6.9 × 10 6 4.8 × 10 7 2.0 × 10 5
S α 0.91360.91190.91270.91290.9130.91220.99962.1 × 10 8 16.8 × 10 7 0.994810.9998110.9999
E ξ 0.95640.95480.95510.95560.95610.95571.1 × 10 8 8.4 × 10 10 3.1 × 10 8 8.5 × 10 8 2.8 × 10 7 5.0 × 10 9 5.5 × 10 8 1.4 × 10 6 6.9 × 10 7 0.2078
DUT M 0.03320.03310.03210.03240.03210.03261.4 × 10 8 -2.8 × 10 11 -4.8 × 10 9 2.5 × 10 5 0.9994---
F β 0.92120.91920.92240.92140.92290.92056.8 × 10 9 -7.0 × 10 11 -5.9 × 10 9 4.8 × 10 7 0.5922---
S α 0.91540.91420.91560.91450.91560.91418.1 × 10 8 -2.0 × 10 11 -1.8 × 10 9 11---
E ξ 0.95310.95460.95530.95440.95580.95452.4 × 10 8 -1.6 × 10 10 -6.4 × 10 9 5.5 × 10 5 0.9999---
SIP M 0.05210.05070.05530.05360.05340.0542--9.6 × 10 7 -0.1443-0.00026.1 × 10 5 3.7 × 10 6 0.9991
F β 0.88050.88550.87590.87810.87980.8773--2.2 × 10 7 -2.3 × 10 5 -1.1 × 10 6 7.0 × 10 6 7.5 × 10 7 0.9280
S α 0.87090.87590.86610.86930.86970.868--2.7 × 10 5 -0.9983-0.00620.00215.7 × 10 5 0.9999
E ξ 0.91780.92110.91130.91550.9150.9133--3.7 × 10 5 -0.7525-0.00580.00053.7 × 10 5 0.9998
STERE1000 M 0.04390.04530.04430.04410.04450.04442.7 × 10 7 1.6 × 10 5 0.10521.1 × 10 7 8.3 × 10 6 0.99980.99990.99981.4 × 10 6 3.0 × 10 8
F β 0.8740.86910.87280.87470.87580.8776.1 × 10 6 0.06080.00023.5 × 10 6 1.7 × 10 5 0.00040.00070.99661.9 × 10 5 5.6 × 10 6
S α 0.88220.880.88070.88180.88090.88121117.8 × 10 8 111112.6 × 10 7
E ξ 0.93640.93520.93530.93630.93590.93654.9 × 10 8 0.00015.4 × 10 8 2.9 × 10 10 7.6 × 10 8 7.2 × 10 7 0.00041.3 × 10 5 1.3 × 10 7 5.1 × 10 9
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
