Active Bidirectional Self-Training Network for Cross-Domain Segmentation in Remote-Sensing Images
Figure 1. A domain shift exists between urban and rural scenarios, manifested by differences in target characteristics and imbalances in the class distribution. The t-SNE [25] feature visualizations for the UDA method DCA [22] and for our ADA method are shown on the right.
Figure 2. Domain offsets of Urban-domain instances from the Rural domain, modeled with Gaussian distributions.
Figure 3. Architecture of the proposed ABSNet. The upper-left part is the multi-prototype active region selection module, which selects and labels the target-domain active samples that are most informative for domain adaptation. The lower part is the source-weighted class-balanced self-training process, in which the distance from source samples to the target distribution and the class entropy are incorporated into self-training.
Figure 4. Illustration of the multi-prototype active region selection module.
Figure 5. Illustration of the source-weighted class-balanced self-training process.
Figure 6. Segmentation results of different domain-adaptive semantic segmentation methods on the Rural-to-Urban task. (a) Image. (b) Ground truth. (c) DCA. (d) MADA. (e) RIPU. (f) ABSNet.
Figure 7. Segmentation results of different domain-adaptive semantic segmentation methods on the Urban-to-Rural task. (a) Image. (b) Ground truth. (c) DCA. (d) MADA. (e) RIPU. (f) ABSNet.
Figure 8. Segmentation results of different domain-adaptive semantic segmentation methods on the VH-to-PD task. (a) Image. (b) Ground truth. (c) Alonso's. (d) MADA. (e) RIPU. (f) ABSNet.
Figure 9. Radar charts of per-category mIoU (%) over the 7 categories of the LoveDA dataset for the baseline method and our ABSNet. (a) Rural-to-Urban. (b) Urban-to-Rural.
Figure 10. Performance improvement of our method for different numbers of active samples. (a) Rural-to-Urban. (b) Urban-to-Rural.
Abstract
1. Introduction
1. We propose a novel active bidirectional self-training network for cross-domain semantic segmentation in RSIs. Unlike previous UDA methods, our approach aims to learn the true distribution of the target-domain data as far as possible, while selectively training on the source-domain samples that are advantageous for adaptation.
2. We introduce the multi-prototype active region selection (MARS) module, which relies on multiple prototypes and covariances to characterize the source-domain feature distribution more precisely, enabling the selection of representative samples from the target domain. In addition, superpixel-based region labeling is more convenient and incurs less labeling redundancy.
3. We propose source-weighted class-balanced self-training (SCBS) for semi-supervised domain-adaptation fine-tuning. It measures the domain-adaptation capability of each source-domain sample and combines this measure with the class average entropy to denoise the source-domain data and alleviate class imbalance.
2. Related Works
2.1. Domain-Adaptive Semantic Segmentation
2.2. Domain-Adaptive Semantic Segmentation for RSIs
2.3. Active Domain Adaptation
3. Methods
3.1. Overview
3.2. Multi-Prototype Active Region Selection
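Contribution 2 above summarizes the idea: fit multiple prototypes with covariances per class on source-domain features, then score each target superpixel region by how far it falls from that source distribution, and spend the annotation budget on the most distant (most informative) regions. A minimal sketch under those assumptions — the names, the diagonal-covariance simplification, and the mean-pooled region descriptor are ours, not the authors' code:

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_source_prototypes(feats, labels, num_classes, k=4, eps=1e-6):
    """Fit k prototypes and diagonal covariances per class from source features.
    feats: (N, D) array; labels: (N,) int array."""
    protos, covs = [], []
    for c in range(num_classes):
        fc = feats[labels == c]
        km = KMeans(n_clusters=k, n_init=10).fit(fc)
        protos.append(km.cluster_centers_)                      # (k, D) prototypes
        covs.append(np.stack([fc[km.labels_ == j].var(axis=0) + eps
                              for j in range(k)]))              # (k, D) diagonal covariances
    return np.stack(protos), np.stack(covs)                     # (C, k, D) each

def region_score(region_feats, protos, covs):
    """Mahalanobis-like distance of a target superpixel region (mean-pooled
    features) to its closest source prototype; far regions are more informative."""
    f = region_feats.mean(axis=0)                               # (D,) region descriptor
    d = ((f - protos) ** 2 / covs).sum(axis=-1)                 # (C, k) squared distances
    return d.min()                                              # distance to the nearest prototype
```

The regions with the top-B scores would then be sent for annotation. In ABSNet the scored units are superpixel regions, which keeps the annotation convenient and reduces labeling redundancy.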
3.3. Source-Weighted Class-Balanced Self-Training
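Contribution 3 above gives the shape of the weighting: source samples far from the target feature distribution are down-weighted, and a class-average-entropy term counters class imbalance. One plausible form, sketched with hypothetical names (the paper's exact formulation may differ):

```python
import torch

def source_weights(src_feats, tgt_protos, class_entropy, labels, tau=1.0):
    """Per-sample weights for the source loss: down-weight samples far from the
    target distribution, up-weight classes with high target prediction entropy.
    src_feats: (N, D); tgt_protos: (C, D); class_entropy: (C,); labels: (N,)."""
    dist = (src_feats - tgt_protos[labels]).pow(2).sum(dim=1).sqrt()  # distance to target
    w_domain = torch.exp(-dist / tau)                       # near-target samples count more
    w_class = class_entropy[labels] / class_entropy.mean()  # uncertain classes count more
    return w_domain * w_class
```

Such a weight would multiply the per-pixel cross-entropy on source data during self-training, suppressing noisy source samples while boosting under-represented classes.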
3.4. Optimization Process
Algorithm 1: The Optimization Process of ABSNet.
Require: Labeled source-domain data D_s; unlabeled target-domain data D_t; annotation budget B; segmentation network G, parameterized by θ; number of iterations N.
Define: Target active label set Y_t^a ← ∅.
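Read end to end, Algorithm 1 alternates a source-only warm-up, an active-selection pass that spends the budget B, and a weighted self-training stage. The self-contained toy below reproduces that control flow on synthetic data; it deliberately substitutes plain entropy scoring for MARS and confidence weighting for SCBS, so it illustrates the loop structure only, not the paper's actual criteria:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
C, D, B = 7, 16, 25                                # classes, feature dim, budget (toy values)
centers = 3.0 * torch.randn(C, D)                  # shared class structure across domains
G = torch.nn.Linear(D, C)                          # stand-in for the segmentation network G
opt = torch.optim.SGD(G.parameters(), lr=0.1)

def sample(n=256, shift=0.0):                      # synthetic "pixels"; shift mimics the domain gap
    y = torch.randint(0, C, (n,))
    return centers[y] + shift + 0.7 * torch.randn(n, D), y

# Stage 1: warm up G on labeled source-domain data.
for _ in range(200):
    x_s, y_s = sample()
    opt.zero_grad()
    F.cross_entropy(G(x_s), y_s).backward()
    opt.step()

# Stage 2: spend the annotation budget B on the most uncertain target samples
# (entropy scoring here; the paper uses MARS). y_t plays the human annotator.
x_t, y_t = sample(shift=1.0)
with torch.no_grad():
    p = G(x_t).softmax(dim=1)
    entropy = -(p * p.clamp_min(1e-8).log()).sum(dim=1)
active = entropy.topk(B).indices                   # the target active label set

# Stage 3: self-training with a weighted source loss (confidence weighting here;
# the paper uses SCBS) plus supervision on the actively labeled target samples.
for _ in range(200):
    x_s, y_s = sample()
    w = G(x_s).softmax(dim=1).gather(1, y_s[:, None]).squeeze(1).detach()
    loss = (w * F.cross_entropy(G(x_s), y_s, reduction="none")).mean()
    loss = loss + F.cross_entropy(G(x_t[active]), y_t[active])
    opt.zero_grad()
    loss.backward()
    opt.step()
```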
4. Experimental Results
4.1. Dataset Description
4.2. Implementation Details
4.3. Comparisons with the State-of-the-Art
4.3.1. Quantitative Results
4.3.2. Qualitative Results
4.4. Analysis and Discussion
4.4.1. Ablation Study
4.4.2. Comparison of Active Sample Selection Methods
4.4.3. Impact of the Number of Prototypes
4.4.4. Impact of the Number of Centroids
4.4.5. Impact of the Smoothing Parameter
4.4.6. Impact of the Number of Active Samples
4.4.7. Evaluation of Inference Speed
4.5. Limitations and Future Works
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Liu, X.; He, J.; Yao, Y.; Zhang, J.; Liang, H.; Wang, H.; Hong, Y. Classifying urban land use by integrating remote sensing and social media data. Int. J. Geogr. Inf. Sci. 2017, 31, 1675–1696. [Google Scholar] [CrossRef]
- Marcos, D.; Volpi, M.; Kellenberger, B.; Tuia, D. Land cover mapping at very high resolution with rotation equivariant CNNs: Towards small yet accurate models. ISPRS J. Photogramm. Remote Sens. 2018, 145, 96–107. [Google Scholar] [CrossRef]
- Maboudi, M.; Amini, J.; Malihi, S.; Hahn, M. Integrating fuzzy object based image analysis and ant colony optimization for road extraction from remotely sensed images. ISPRS J. Photogramm. Remote Sens. 2018, 138, 151–163. [Google Scholar] [CrossRef]
- Hamuda, E.; Glavin, M.; Jones, E. A survey of image processing techniques for plant extraction and segmentation in the field. Comput. Electron. Agric. 2016, 125, 184–199. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
- Sun, X.; Shi, A.; Huang, H.; Mayer, H. BAS4Net: Boundary-aware semi-supervised semantic segmentation network for very high resolution remote sensing images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 5398–5413. [Google Scholar] [CrossRef]
- Niu, R.; Sun, X.; Tian, Y.; Diao, W.; Chen, K.; Fu, K. Hybrid multiple attention network for semantic segmentation in aerial images. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5603018. [Google Scholar] [CrossRef]
- Li, X.; He, H.; Li, X.; Li, D.; Cheng, G.; Shi, J.; Weng, L.; Tong, Y.; Lin, Z. PointFlow: Flowing semantics through points for aerial image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 4217–4226. [Google Scholar]
- Mou, L.; Hua, Y.; Zhu, X.X. Relation matters: Relational context-aware fully convolutional network for semantic segmentation of high-resolution aerial images. IEEE Trans. Geosci. Remote Sens. 2020, 58, 7557–7569. [Google Scholar] [CrossRef]
- Niu, R.; Sun, X.; Tian, Y.; Diao, W.; Feng, Y.; Fu, K. Improving semantic segmentation in aerial imagery via graph reasoning and disentangled learning. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5611918. [Google Scholar] [CrossRef]
- Yang, Z.; Yan, Z.; Sun, X.; Diao, W.; Yang, Y.; Li, X. Category correlation and adaptive knowledge distillation for compact cloud detection in remote sensing images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5623318. [Google Scholar] [CrossRef]
- Tsai, Y.H.; Hung, W.C.; Schulter, S.; Sohn, K.; Yang, M.H.; Chandraker, M. Learning to adapt structured output space for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7472–7481. [Google Scholar]
- Luo, Y.; Zheng, L.; Guan, T.; Yu, J.; Yang, Y. Taking a closer look at domain shift: Category-level adversaries for semantics consistent domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 2507–2516. [Google Scholar]
- Wang, H.; Shen, T.; Zhang, W.; Duan, L.Y.; Mei, T. Classes matter: A fine-grained adversarial approach to cross-domain semantic segmentation. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2020; pp. 642–659. [Google Scholar]
- Zheng, A.; Wang, M.; Li, C.; Tang, J.; Luo, B. Entropy guided adversarial domain adaptation for aerial image semantic segmentation. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5405614. [Google Scholar] [CrossRef]
- Vu, T.H.; Jain, H.; Bucher, M.; Cord, M.; Pérez, P. Advent: Adversarial entropy minimization for domain adaptation in semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 2517–2526. [Google Scholar]
- Liu, W.; Zhang, W.; Sun, X.; Guo, Z. Unsupervised Cross-Scene Aerial Image Segmentation via Spectral Space Transferring and Pseudo-Label Revising. Remote Sens. 2023, 15, 1207. [Google Scholar] [CrossRef]
- Zou, Y.; Yu, Z.; Kumar, B.; Wang, J. Unsupervised domain adaptation for semantic segmentation via class-balanced self-training. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 289–305. [Google Scholar]
- Mei, K.; Zhu, C.; Zou, J.; Zhang, S. Instance adaptive self-training for unsupervised domain adaptation. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part XXVI 16. Springer: Berlin/Heidelberg, Germany, 2020; pp. 415–430. [Google Scholar]
- Zhang, P.; Zhang, B.; Zhang, T.; Chen, D.; Wang, Y.; Wen, F. Prototypical pseudo label denoising and target structure learning for domain adaptive semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 12414–12424. [Google Scholar]
- Hoyer, L.; Dai, D.; Van Gool, L. Daformer: Improving network architectures and training strategies for domain-adaptive semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 9924–9935. [Google Scholar]
- Wu, L.; Lu, M.; Fang, L. Deep covariance alignment for domain adaptive remote sensing image segmentation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5620811. [Google Scholar] [CrossRef]
- Li, W.; Gao, H.; Su, Y.; Momanyi, B.M. Unsupervised domain adaptation for remote sensing semantic segmentation with transformer. Remote Sens. 2022, 14, 4942. [Google Scholar] [CrossRef]
- Gao, K.; Yu, A.; You, X.; Qiu, C.; Liu, B. Prototype and Context Enhanced Learning for Unsupervised Domain Adaptation Semantic Segmentation of Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5608316. [Google Scholar] [CrossRef]
- Van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
- Su, J.C.; Tsai, Y.H.; Sohn, K.; Liu, B.; Maji, S.; Chandraker, M. Active adversarial domain adaptation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Snowmass, CO, USA, 1–5 March 2020; pp. 739–748. [Google Scholar]
- Prabhu, V.; Chandrasekaran, A.; Saenko, K.; Hoffman, J. Active domain adaptation via clustering uncertainty-weighted embeddings. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 8505–8514. [Google Scholar]
- Ning, M.; Lu, D.; Wei, D.; Bian, C.; Yuan, C.; Yu, S.; Ma, K.; Zheng, Y. Multi-anchor active domain adaptation for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 9112–9122. [Google Scholar]
- Xie, B.; Yuan, L.; Li, S.; Liu, C.H.; Cheng, X. Towards fewer annotations: Active learning via region impurity and prediction uncertainty for domain adaptive semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 8068–8078. [Google Scholar]
- Reynolds, D.A.; Rose, R.C. Robust text-independent speaker identification using Gaussian mixture speaker models. IEEE Trans. Speech Audio Process. 1995, 3, 72–83. [Google Scholar] [CrossRef]
- Wang, Z.; Wei, Y.; Feris, R.; Xiong, J.; Hwu, W.M.; Huang, T.S.; Shi, H. Alleviating semantic-level shift: A semi-supervised domain adaptation method for semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 936–937. [Google Scholar]
- Alonso, I.; Sabater, A.; Ferstl, D.; Montesano, L.; Murillo, A.C. Semi-supervised semantic segmentation with pixel-level contrastive learning from a class-wise memory bank. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 8219–8228. [Google Scholar]
- Gao, K.; Yu, A.; You, X.; Qiu, C.; Liu, B.; Zhang, F. Cross-Domain Multi-Prototypes with Contradictory Structure Learning for Semi-Supervised Domain Adaptation Segmentation of Remote Sensing Images. Remote Sens. 2023, 15, 3398. [Google Scholar] [CrossRef]
- Wang, D.; Shang, Y. A new active labeling method for deep learning. In Proceedings of the 2014 International Joint Conference on Neural Networks (IJCNN), Beijing, China, 6–11 July 2014; pp. 112–119. [Google Scholar]
- Wang, K.; Zhang, D.; Li, Y.; Zhang, R.; Lin, L. Cost-effective active learning for deep image classification. IEEE Trans. Circuits Syst. Video Technol. 2016, 27, 2591–2600. [Google Scholar] [CrossRef]
- Ash, J.T.; Zhang, C.; Krishnamurthy, A.; Langford, J.; Agarwal, A. Deep batch active learning by diverse, uncertain gradient lower bounds. arXiv 2019, arXiv:1906.03671. [Google Scholar]
- Kirsch, A.; Van Amersfoort, J.; Gal, Y. Batchbald: Efficient and diverse batch acquisition for deep bayesian active learning. Adv. Neural Inf. Process. Syst. 2019, 7026–7037. [Google Scholar]
- Wu, T.H.; Liu, Y.C.; Huang, Y.K.; Lee, H.Y.; Su, H.T.; Huang, P.C.; Hsu, W.H. Redal: Region-based and diversity-aware active learning for point cloud semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 15510–15519. [Google Scholar]
- Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 834–848. [Google Scholar] [CrossRef] [PubMed]
- Cai, L.; Xu, X.; Liew, J.H.; Foo, C.S. Revisiting superpixels for active learning in semantic segmentation with realistic annotation costs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 10988–10997. [Google Scholar]
- Van den Bergh, M.; Boix, X.; Roig, G.; De Capitani, B.; Van Gool, L. Seeds: Superpixels extracted via energy-driven sampling. In Proceedings of the Computer Vision—ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, 7–13 October 2012; Proceedings, Part VII 12. Springer: Berlin/Heidelberg, Germany, 2012; pp. 13–26. [Google Scholar]
- Wang, J.; Zheng, Z.; Ma, A.; Lu, X.; Zhong, Y. LoveDA: A remote sensing land-cover dataset for domain adaptive semantic segmentation. arXiv 2021, arXiv:2110.08733. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
- Xie, E.; Wang, W.; Yu, Z.; Anandkumar, A.; Alvarez, J.M.; Luo, P. SegFormer: Simple and efficient design for semantic segmentation with transformers. Adv. Neural Inf. Process. Syst. 2021, 34, 12077–12090. [Google Scholar]
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef]
- Loshchilov, I.; Hutter, F. Decoupled weight decay regularization. arXiv 2017, arXiv:1711.05101. [Google Scholar]
Quantitative comparison on the LoveDA Rural-to-Urban task (per-class IoU, %). AT: adversarial training; ST: self-training; AL: active learning.

| Setting | Method | Type | PA (%) | BG * | Building | Road | Water | Barren | Forest | Agricultural | mIoU (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| UDA | AdaptNet [12] | AT | 54.81 | 50.90 | 28.43 | 17.73 | 46.86 | 13.10 | 17.83 | 11.21 | 26.58 |
| | CLAN [13] | AT | 53.61 | 47.71 | 35.33 | 27.04 | 40.93 | 22.93 | 22.52 | 8.88 | 29.34 |
| | CBST [18] | ST | 63.07 | 53.09 | 46.90 | 40.23 | 72.14 | 18.13 | 14.14 | 21.66 | 38.04 |
| | IAST [19] | ST | 61.50 | 52.54 | 42.98 | 40.40 | 70.86 | 15.67 | 4.85 | 19.03 | 35.19 |
| | DCA [22] | ST | 63.15 | 52.21 | 49.65 | 37.59 | 61.73 | 26.60 | 20.32 | 24.04 | 38.88 |
| Semi-DA | Alonso's [32] | Random | 60.83 | 45.58 | 49.11 | 46.51 | 59.77 | 32.69 | 22.95 | 23.81 | 40.06 |
| | MADA [28] | AL | 62.30 | 48.35 | 50.13 | 44.51 | 70.53 | 31.25 | 23.74 | 22.79 | 41.62 |
| | RIPU [29] | AL | 62.57 | 49.96 | 49.31 | 47.03 | 71.03 | 22.62 | 22.99 | 36.47 | 42.77 |
| | ABSNet | AL | 65.66 | 52.83 | 52.04 | 49.89 | 72.93 | 28.13 | 27.36 | 35.61 | 45.54 |

* BG: background.
Quantitative comparison on the LoveDA Urban-to-Rural task (per-class IoU, %). AT: adversarial training; ST: self-training; AL: active learning.

| Setting | Method | Type | PA (%) | BG * | Building | Road | Water | Barren | Forest | Agricultural | mIoU (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| UDA | AdaptNet [12] | AT | 59.51 | 38.24 | 34.08 | 27.54 | 53.94 | 18.69 | 52.26 | 39.22 | 37.71 |
| | CLAN [13] | AT | 63.78 | 40.29 | 41.28 | 27.53 | 51.58 | 14.27 | 55.25 | 50.98 | 40.17 |
| | CBST [18] | ST | 62.41 | 29.30 | 39.33 | 36.01 | 55.70 | 16.75 | 53.89 | 53.88 | 40.69 |
| | IAST [19] | ST | 62.83 | 32.34 | 50.43 | 34.78 | 46.43 | 23.94 | 54.21 | 46.04 | 41.17 |
| | DCA [22] | ST | 63.13 | 28.89 | 51.58 | 33.42 | 52.99 | 29.54 | 51.42 | 53.38 | 43.03 |
| Semi-DA | Alonso's [32] | Random | 66.56 | 41.53 | 51.50 | 33.32 | 59.86 | 20.33 | 53.86 | 54.60 | 45.00 |
| | MADA [28] | AL | 67.53 | 41.82 | 53.58 | 35.07 | 60.48 | 16.20 | 56.15 | 56.40 | 45.67 |
| | RIPU [29] | AL | 66.30 | 41.10 | 52.19 | 32.57 | 57.48 | 34.37 | 54.86 | 52.99 | 46.51 |
| | ABSNet | AL | 69.02 | 41.77 | 57.94 | 39.94 | 61.03 | 34.33 | 59.27 | 56.07 | 50.05 |

* BG: background.
Quantitative comparison on the Vaihingen-to-Potsdam (VH-to-PD) task (per-class IoU, %).

| Setting | Method | Type | PA (%) | Impervious Surface | Building | Low Vegetation | Tree | Car | mIoU (%) |
|---|---|---|---|---|---|---|---|---|---|
| UDA | Advent [16] | AT | 60.03 | 49.80 | 54.85 | 40.19 | 26.94 | 46.71 | 43.70 |
| | Zheng's [15] | AT | 60.89 | 47.63 | 48.77 | 34.92 | 41.17 | 51.58 | 44.81 |
| | ProDA [20] | ST | 74.59 | 67.67 | 78.59 | 47.01 | 45.02 | 72.20 | 62.10 |
| | Li's [23] | ST | 77.98 | 72.64 | 82.39 | 54.12 | 48.69 | 60.51 | 63.67 |
| | DAFormer [21] | ST | 79.25 | 66.09 | 78.23 | 63.06 | 56.83 | 77.57 | 68.36 |
| | PCEL [24] | ST | 81.32 | 65.04 | 82.64 | 63.09 | 70.96 | 76.95 | 71.74 |
| Semi-DA | Alonso's [32] | Random | 84.23 | 77.93 | 86.43 | 65.38 | 68.48 | 66.95 | 73.03 |
| | MADA [28] | AL | 85.03 | 78.14 | 86.59 | 67.38 | 68.70 | 71.17 | 74.40 |
| | RIPU [29] | AL | 85.18 | 77.65 | 87.07 | 67.31 | 69.61 | 71.30 | 74.59 |
| | ABSNet | AL | 86.59 | 79.47 | 89.21 | 70.46 | 71.24 | 76.24 | 77.32 |
Ablation study of the proposed components (PA and mIoU, %). SW * and CB * denote the source-weighting and class-balancing components of SCBS.

| Tasks | Baseline | MARS | SW * | CB * | PA (%) | mIoU (%) |
|---|---|---|---|---|---|---|
| Rural-to-Urban | ✓ | | | | 61.86 | 39.04 |
| | ✓ | ✓ | | | 63.09 | 42.58 |
| | ✓ | | ✓ | | 63.53 | 41.31 |
| | ✓ | | ✓ | ✓ | 64.44 | 41.97 |
| | ✓ | ✓ | ✓ | ✓ | 65.66 | 45.54 |
| Urban-to-Rural | ✓ | | | | 63.96 | 40.04 |
| | ✓ | ✓ | | | 66.92 | 47.88 |
| | ✓ | | ✓ | | 67.31 | 47.20 |
| | ✓ | | ✓ | ✓ | 68.57 | 47.80 |
| | ✓ | ✓ | ✓ | ✓ | 69.02 | 50.05 |
Comparison of active sample selection strategies (mIoU, %).

| Tasks | Method | mIoU (%) |
|---|---|---|
| Rural-to-Urban | RAND | 40.07 |
| | ENT [34] | 41.66 |
| | CONF [34] | 41.19 |
| | CLUE [27] | 42.01 |
| | MARS | 42.58 |
| Urban-to-Rural | RAND | 43.49 |
| | ENT [34] | 46.02 |
| | CONF [34] | 44.31 |
| | CLUE [27] | 46.07 |
| | MARS | 47.88 |
Impact of the number of prototypes K (mIoU, %).

| Tasks | K = 1 | K = 2 | K = 4 | K = 6 | K = 8 |
|---|---|---|---|---|---|
| Rural-to-Urban | 40.84 | 41.85 | 42.58 | 42.19 | 41.46 |
| Urban-to-Rural | 45.63 | 46.96 | 46.46 | 47.88 | 47.65 |
Impact of the number of centroids V (mIoU, %).

| Tasks | V = 50 | V = 100 | V = 150 | V = 200 | V = 250 |
|---|---|---|---|---|---|
| Rural-to-Urban | 44.99 | 45.06 | 45.34 | 45.54 | 45.17 |
| Urban-to-Rural | 49.45 | 49.54 | 50.05 | 49.73 | 49.85 |
Impact of the smoothing parameter (mIoU, %); columns are values of the smoothing parameter.

| Tasks | 0.5 | 0.9 | 0.99 | 0.999 | 0.9995 |
|---|---|---|---|---|---|
| Rural-to-Urban | 42.82 | 44.50 | 45.19 | 45.54 | 45.39 |
| Urban-to-Rural | 47.49 | 49.67 | 49.83 | 50.05 | 49.87 |
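The sweep above is consistent with the smoothing parameter acting as an exponential-moving-average (EMA) momentum for prototype updates, as is common in prototype-based self-training (cf. ProDA [20]); that reading is our assumption, not something stated in this excerpt. As a one-line illustration:

```python
def ema_update(proto, batch_mean, m=0.999):
    """EMA prototype update; m is the (assumed) smoothing parameter. Larger m
    gives slower, more stable prototypes, matching the best results near 0.999."""
    return m * proto + (1.0 - m) * batch_mean
```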
Impact of the number of active samples (annotation budget B; PA and mIoU, %).

| B | Method | Rural-to-Urban PA (%) | Rural-to-Urban mIoU (%) | Urban-to-Rural PA (%) | Urban-to-Rural mIoU (%) |
|---|---|---|---|---|---|
| 0 | – | 61.86 | 39.04 | 63.96 | 40.04 |
| | w/ SCBS | 64.44 | 41.97 | 68.57 | 47.80 |
| 5% | MARS | 63.09 | 42.58 | 66.92 | 47.88 |
| | w/ SCBS | 65.66 | 45.54 | 69.02 | 50.05 |
| 10% | MARS | 63.48 | 43.00 | 67.56 | 48.35 |
| | w/ SCBS | 67.40 | 45.65 | 69.67 | 51.75 |
| 50% | MARS | 65.61 | 44.79 | 68.83 | 50.16 |
| | w/ SCBS | 67.36 | 46.94 | 70.36 | 52.90 |
| 100% | – | 67.08 | 45.89 | 70.10 | 51.30 |
| | w/ SCBS | 68.05 | 47.52 | 70.94 | 53.24 |
Inference efficiency of the segmentation models used in each task.

| Tasks | Model | Params (M) | FLOPs (G) | FPS |
|---|---|---|---|---|
| Rural-to-Urban | DeepLabv2 | 39.1 | 47.9 | 14.7 |
| Urban-to-Rural | DeepLabv2 | 39.1 | 47.9 | 14.7 |
| VH-to-PD | DAFormer | 85.2 | 183.3 | 5.3 |
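Numbers of this kind are typically measured as below; this is a generic PyTorch recipe with assumed input size and iteration counts, not the authors' benchmarking script. FLOPs are usually counted separately with a tool such as fvcore or thop.

```python
import time
import torch

@torch.no_grad()
def benchmark(model, size=(1, 3, 512, 512), warmup=10, iters=50):
    """Report parameter count (M) and frames per second for a single input."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device).eval()
    x = torch.randn(size, device=device)
    params = sum(p.numel() for p in model.parameters()) / 1e6
    for _ in range(warmup):                 # warm-up passes to stabilize timings
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(iters):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    fps = iters / (time.perf_counter() - t0)
    print(f"Params: {params:.1f} M, FPS: {fps:.1f}")
```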
Yang, Z.; Yan, Z.; Diao, W.; Ma, Y.; Li, X.; Sun, X. Active Bidirectional Self-Training Network for Cross-Domain Segmentation in Remote-Sensing Images. Remote Sens. 2024, 16, 2507. https://doi.org/10.3390/rs16132507