A Self-Trained Model for Cloud, Shadow and Snow Detection in Sentinel-2 Images of Snow- and Ice-Covered Regions
"> Figure 1
<p>Geographic distribution of the datasets. The labeled numbers indicate the number of Sentinel-2 scenes used from a given site, and their colors indicate the dataset to which they belong.</p> "> Figure 2
<p>The U-Net architecture [<a href="#B14-remotesensing-14-01825" class="html-bibr">14</a>] with 32 start filters and a depth of 5, used in stage 2 of the self-training framework. The layers that constitute each encoder and decoder block are shown inside the boxes with dotted borders. The number of feature maps is indicated below each colored box. The resolution is the same for all layers in the encoder/decoder block; this is indicated at the top left of the dotted box. The output of the encoder, which is provided via the skip connection, is concatenated with the output of the up-convolution operation from the previous layer.</p> "> Figure 3
<p>Confusion matrix of an <span class="html-italic">M</span>-class segmentation problem with <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>6</mn> </mrow> </semantics></math>. Each element in the matrix, denoted <math display="inline"><semantics> <msub> <mi>n</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>j</mi> </mrow> </msub> </semantics></math>, is the total number of pixels that belong to class <span class="html-italic">i</span> and predicted as class <span class="html-italic">j</span> by the model. For class 3, the True Positive pixels are represented by the green cell; the False Negative pixels are represented by the blue cells and the False Positive pixels are represented by the red cells.</p> "> Figure 4
<p>Comparison of test image results from Tile 10WDE in Northwest Territories, Canada, captured on 1 June 2020.</p> "> Figure 5
<p>Comparison of test image results from Tile 26XNQ in North East Greenland, captured on 14 September 2020.</p> "> Figure 6
<p>Comparison of test image results from different stages of the self-training framework and the results from the models trained using supervised training, from Tile 04WDV in Alaska, USA, captured on 27 October 2020.</p> "> Figure A1
<p>Comparison of test image results from Tile 04CEU in Marie Byrd Land, Antractica, captured on 12 December 2020.</p> "> Figure A2
<p>Comparison of validation image results from Tile 16XEG in Nunavut, Canada, captured on 15 March 2020.</p> "> Figure A3
<p>Comparison of validation image results from Tile 41XNE in Arkhangelsk Oblast, Russia, captured on 23 September 2020.</p> ">
Abstract
1. Introduction
2. Previous Related Work
2.1. Self-Training
2.2. Cloud Detection Algorithms
3. Materials and Methods
3.1. Dataset
3.2. Neural Network Architecture
3.3. Self-Training Framework
3.4. Training Implementation
3.4.1. Regularization Techniques
3.4.2. Loss Function
3.5. Validation Metrics
4. Results
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A. Tile-Wise Quarterly Distribution of Sentinel-2 Scenes
Appendix A.1. Training Dataset
Tile No. | Location | Jan.–Mar. | Apr.–Jun. | Jul.–Sep. | Oct.–Dec. | Size |
---|---|---|---|---|---|---|
06WVC | U.S.A | 1 | 1 | 1 | 1390 | |
07VCG | U.S.A | 1 | 2 | 1452 | ||
11TLG | U.S.A | 1 | 1 | 1 | 1452 | |
11WNV | Canada | 1 | 1 | 1 | 1452 | |
12XVM | Canada | 1 | 2 | 1452 | ||
14CMC | Antarctica | 2 | 1 | 1452 | ||
15WXQ | Canada | 1 | 2 | 1452 | ||
16CEU | Antarctica | 1 | 2 | 1452 | ||
18DVF | Antarctica | 2 | 1 | 1405 | ||
19DEE | Antarctica | 1 | 1 | 1 | 1324 | |
19JEH | Argentina | 1 | 1 | 1 | 1452 | |
19WER | Canada | 1 | 2 | 1452 | ||
19XEH | Greenland | 1 | 2 | 1348 | ||
20WMT | Canada | 1 | 1 | 1 | 1452 | |
20XNR | Greenland | 1 | 2 | 1442 | ||
21CWJ | Antarctica | 1 | 2 | 1121 | ||
21UUA | Canada | 1 | 2 | 1452 | ||
21XWC | Greenland | 1 | 1 | 1 | 1452 | |
27XVB | Greenland | 1 | 1 | 1 | 1452 | |
27XWH | Greenland | 1 | 2 | 1356 | ||
30XWR | Greenland Sea | 1 | 1 | 1 | 1283 | |
34WED | Norway | 3 | 1448 | |||
42XVJ | Russia | 1 | 1 | 1 | 1452 | |
44XMF | Russia | 1 | 2 | 1452 | ||
45DWG | Antarctica | 1 | 1 | 1 | 1075 | |
47XMJ | Russia | 1 | 2 | 1452 | ||
49XDE | Russia | 1 | 1 | 1 | 1452 | |
54WVT | Russia | 1 | 1 | 1 | 1452 | |
55XDD | Russia | 1 | 2 | 1368 | ||
58CDV | Antarctica | 2 | 1 | 1452 | ||
59CMU | Antarctica | 2 | 1 | 1452 | ||
60WWT | Russia | 1 | 1 | 1 | 1446 |
Appendix A.2. Validation Dataset
Tile No. | Location | Jan.–Mar. | Apr.–Jun. | Jul.–Sep. | Oct.–Dec. | No. Labeled Pixels |
---|---|---|---|---|---|---|
16XEG | Canada | 1 | 2 | 1,491,148 | ||
18CWU | Antarctica | 1 | 678,120 | |||
19FDU | Chile | 2 | 868,457 | |||
26XNR | Greenland | 1 | 3 | 1,514,134 | ||
27XVL | Greenland | 4 | 2,646,020 | |||
41XNE | Russia | 1 | 1 | 913,888 | ||
45SVV | China | 2 | 2 | 3,971,467 | ||
45WXR | Russia | 1 | 1 | 1,064,095 |
Appendix A.3. Test Dataset
Tile No. | Location | Jan.–Mar. | Apr.–Jun. | Jul.–Sep. | Oct.–Dec. | No. Labeled Pixels |
---|---|---|---|---|---|---|
04CEU | Antarctica | 1 | 1 | 1,072,859 | ||
04WDV | U.S.A | 1 | 2 | 590,574 | ||
10WDE | Canada | 1 | 2 | 1,922,806 | ||
18XVM | Canada | 2 | 1 | 1,438,918 | ||
26XNQ | Greenland | 3 | 2,086,371 | |||
27XWK | Greenland | 3 | 1,251,262 | |||
32VMP | Norway | 1 | 1 | 764,357 | ||
42DVG | Antarctica | 1 | 295,047 | |||
52XDF | Russia | 3 | 2,174,747 |
Appendix B. Additional Image Results
References
- Zhu, Z.; Woodcock, C.E. Object-based cloud and cloud shadow detection in Landsat imagery. Remote Sens. Environ. 2012, 118, 83–94.
- Zhu, Z.; Wang, S.; Woodcock, C.E. Improvement and expansion of the Fmask algorithm: Cloud, cloud shadow, and snow detection for Landsats 4–7, 8, and Sentinel 2 images. Remote Sens. Environ. 2015, 159, 269–277.
- Qiu, S.; Zhu, Z.; He, B. Fmask 4.0: Improved cloud and cloud shadow detection in Landsats 4–8 and Sentinel-2 imagery. Remote Sens. Environ. 2019, 231, 111205.
- Louis, J.; Debaecker, V.; Pflug, B.; Main-Knorn, M.; Bieniarz, J.; Mueller-Wilm, U.; Cadau, E.; Gascon, F. Sentinel-2 Sen2Cor: L2A processor for users. In Proceedings of the ESA Living Planet Symposium, Prague, Czech Republic, 9–13 May 2016; pp. 1–8.
- Christodoulou, C.I.; Michaelides, S.C.; Pattichis, C.S. Multifeature texture analysis for the classification of clouds in satellite imagery. IEEE Trans. Geosci. Remote Sens. 2003, 41, 2662–2668.
- Li, Z.; Shen, H.; Li, H.; Xia, G.; Gamba, P.; Zhang, L. Multi-feature combined cloud and cloud shadow detection in GaoFen-1 wide field of view imagery. Remote Sens. Environ. 2017, 191, 342–358.
- Sun, L.; Wei, J.; Wang, J.; Mi, X.; Guo, Y.; Lv, Y.; Yang, Y.; Gan, P.; Zhou, X.; Jia, C.; et al. A universal dynamic threshold cloud detection algorithm (UDTCDA) supported by a prior surface reflectance database. J. Geophys. Res. Atmos. 2016, 121, 7172–7196.
- Zhou, G.; Zhou, X.; Yue, T.; Liu, Y. An optional threshold with SVM cloud detection algorithm and DSP implementation. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B8, 771–777.
- Sui, Y.; He, B.; Fu, T. Energy-based cloud detection in multispectral images based on the SVM technique. Int. J. Remote Sens. 2019, 40, 5530–5543.
- Hollstein, A.; Segl, K.; Guanter, L.; Brell, M.; Enesco, M. Ready-to-use methods for the detection of clouds, cirrus, snow, shadow, water and clear sky pixels in Sentinel-2 MSI images. Remote Sens. 2016, 8, 666.
- Ghasemian, N.; Akhoondzadeh, M. Introducing two random forest based methods for cloud detection in remote sensing images. Adv. Space Res. 2018, 62, 288–303.
- Le Hégarat-Mascle, S.; André, C. Use of Markov random fields for automatic cloud/shadow detection on high resolution optical images. ISPRS J. Photogramm. Remote Sens. 2009, 64, 351–366.
- Vivone, G.; Addesso, P.; Conte, R.; Longo, M.; Restaino, R. A class of cloud detection algorithms based on a MAP-MRF approach in space and time. IEEE Trans. Geosci. Remote Sens. 2014, 52, 5100–5115.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Munich, Germany, 5–9 October 2015; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2015; Volume 9351, pp. 234–241.
- Sallab, A.E.; Abdou, M.; Perot, E.; Yogamani, S. Deep reinforcement learning framework for autonomous driving. Electron. Imaging 2017, 2017, 70–76.
- Sohn, K.; Zhang, Z.; Li, C.L.; Zhang, H.; Lee, C.Y.; Pfister, T. A simple semi-supervised learning framework for object detection. arXiv 2020, arXiv:2005.04757.
- Zoph, B.; Ghiasi, G.; Lin, T.Y.; Cui, Y.; Liu, H.; Cubuk, E.D.; Le, Q. Rethinking pre-training and self-training. In Advances in Neural Information Processing Systems; Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2020; Volume 33, pp. 3833–3845.
- Xie, Q.; Luong, M.T.; Hovy, E.; Le, Q.V. Self-training with noisy student improves ImageNet classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020; pp. 10687–10698.
- Lee, H.W.; Kim, N.R.; Lee, J.H. Deep neural network self-training based on unsupervised learning and dropout. Int. J. Fuzzy Log. Intell. Syst. 2017, 17, 1–9.
- Babakhin, Y.; Sanakoyeu, A.; Kitamura, H. Semi-supervised segmentation of salt bodies in seismic images using an ensemble of convolutional neural networks. In Proceedings of the German Conference on Pattern Recognition (GCPR), Dortmund, Germany, 10–13 September 2019; pp. 218–231.
- Chen, L.C.; Lopes, R.G.; Cheng, B.; Collins, M.D.; Cubuk, E.D.; Zoph, B.; Adam, H.; Shlens, J. Naive-student: Leveraging semi-supervised learning in video sequences for urban scene segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; pp. 695–714.
- Yilmaz, F.F.; Heckel, R. Image recognition from raw labels collected without annotators. arXiv 2019, arXiv:1910.09055.
- Huang, C.; Thomas, N.; Goward, S.N.; Masek, J.G.; Zhu, Z.; Townshend, J.R.G.; Vogelmann, J.E. Automated masking of cloud and cloud shadow for forest change analysis using Landsat images. Int. J. Remote Sens. 2010, 31, 5449–5464.
- Irish, R.R.; Barker, J.L.; Goward, S.N.; Arvidson, T. Characterization of the Landsat-7 ETM+ automated cloud-cover assessment (ACCA) algorithm. Photogramm. Eng. Remote Sens. 2006, 72, 1179–1188.
- Foga, S.; Scaramuzza, P.L.; Guo, S.; Zhu, Z.; Dilley, R.D., Jr.; Beckmann, T.; Schmidt, G.L.; Dwyer, J.L.; Hughes, M.J.; Laue, B. Cloud detection algorithm comparison and validation for operational Landsat data products. Remote Sens. Environ. 2017, 194, 379–390.
- Chai, D.; Newsam, S.; Zhang, H.K.; Qiu, Y.; Huang, J. Cloud and cloud shadow detection in Landsat imagery based on deep convolutional neural networks. Remote Sens. Environ. 2019, 225, 307–316.
- Xu, K.; Guan, K.; Peng, J.; Luo, Y.; Wang, S. DeepMask: An algorithm for cloud and cloud shadow detection in optical satellite remote sensing images using deep residual network. arXiv 2019, arXiv:1911.03607.
- Jeppesen, J.H.; Jacobsen, R.H.; Inceoglu, F.; Toftegaard, T.S. A cloud detection algorithm for satellite imagery based on deep learning. Remote Sens. Environ. 2019, 229, 247–259.
- Mohajerani, S.; Saeedi, P. Cloud-Net: An end-to-end cloud detection algorithm for Landsat 8 imagery. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Yokohama, Japan, 28 July–2 August 2019; pp. 1029–1032.
- Shao, Z.; Pan, Y.; Diao, C.; Cai, J. Cloud detection in remote sensing images based on multiscale features-convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2019, 57, 4062–4076.
- Zhan, Y.; Wang, J.; Shi, J.; Cheng, G.; Yao, L.; Sun, W. Distinguishing cloud and snow in satellite images via deep convolutional network. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1785–1789.
- Yan, Z.; Yan, M.; Sun, H.; Fu, K.; Hong, J.; Sun, J.; Zhang, Y.; Sun, X. Cloud and cloud shadow detection using multilevel feature fused segmentation network. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1600–1604.
- Zhang, L.; Sun, J.; Yang, X.; Jiang, R.; Ye, Q. Improving deep learning-based cloud detection for satellite images with attention mechanism. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
- Yu, J.; Li, Y.; Zheng, X.; Zhong, Y.; He, P. An effective cloud detection method for Gaofen-5 images via deep learning. Remote Sens. 2020, 12, 2106.
- Liu, Y.; Wang, W.; Li, Q.; Min, M.; Yao, Z. DCNet: A deformable convolutional cloud detection network for remote sensing imagery. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
- Li, Y.; Chen, W.; Zhang, Y.; Tao, C.; Xiao, R.; Tan, Y. Accurate cloud detection in high-resolution remote sensing imagery by weakly supervised deep learning. Remote Sens. Environ. 2020, 250, 112045.
- Liu, C.C.; Zhang, Y.C.; Chen, P.Y.; Lai, C.C.; Chen, Y.H.; Cheng, J.H.; Ko, M.H. Clouds classification from Sentinel-2 imagery with deep residual learning and semantic image segmentation. Remote Sens. 2019, 11, 119.
- Li, J.; Wu, Z.; Hu, Z.; Jian, C.; Luo, S.; Mou, L.; Zhu, X.X.; Molinier, M. A lightweight deep learning-based cloud detection method for Sentinel-2A imagery fusing multiscale spectral and spatial features. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–19.
- Hughes, M.J.; Kennedy, R. High-quality cloud masking of Landsat 8 imagery using convolutional neural networks. Remote Sens. 2019, 11, 2591.
- ESA. Sentinel-2 Spectral Band Information. Available online: https://sentinel.esa.int/web/sentinel/user-guides/sentinel-2-msi/resolutions/radiometric (accessed on 19 June 2020).
- QGIS Development Team. QGIS Geographic Information System; QGIS Association: Grüt, Switzerland, 2021.
- Qiu, S.; He, B.; Zhu, Z.; Liao, Z.; Quan, X. Improving Fmask cloud and cloud shadow detection in mountainous area for Landsats 4–8 images. Remote Sens. Environ. 2017, 199, 107–119.
- Hall, D.K.; Riggs, G.A.; Salomonson, V.V. Development of methods for mapping global snow cover using moderate resolution imaging spectroradiometer data. Remote Sens. Environ. 1995, 54, 127–140.
- DeVries, T.; Taylor, G.W. Improved regularization of convolutional neural networks with cutout. arXiv 2017, arXiv:1708.04552.
- Tompson, J.; Goroshin, R.; Jain, A.; LeCun, Y.; Bregler, C. Efficient object localization using convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 648–656.
- Eigen, D.; Fergus, R. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 11–18 December 2015; pp. 2650–2658.
- Kampffmeyer, M.; Salberg, A.B.; Jenssen, R. Semantic segmentation of small objects and modeling of uncertainty in urban remote sensing images using deep convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Las Vegas, NV, USA, 29 June–1 July 2016; pp. 680–688.
- Martinuzzi, S.; Gould, W.A.; González, O.M.R. Creating Cloud-Free Landsat ETM+ Data Sets in Tropical Landscapes: Cloud and Cloud-Shadow Removal; General Technical Report IITF-32; US Department of Agriculture, Forest Service, International Institute of Tropical Forestry: Rio Piedras, PR, USA, 2007.
Band No. | Central Wavelength (nm) | Resolution (m) | Image Size (Pixels) | Details |
---|---|---|---|---|
2 | 492.4 | 10 | 10,980 × 10,980 | Blue Band |
3 | 559.8 | 10 | 10,980 × 10,980 | Green Band |
4 | 664.6 | 10 | 10,980 × 10,980 | Red Band |
8 | 832.8 | 10 | 10,980 × 10,980 | Near Infrared |
11 | 1613.7 | 20 | 5490 × 5490 | Short-wave Infrared |
12 | 2202.4 | 20 | 5490 × 5490 | Short-wave Infrared |
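Because the SWIR bands (11 and 12) come at 20 m resolution (5490 × 5490 pixels) while the visible/NIR bands come at 10 m (10,980 × 10,980 pixels), the 20 m bands must be brought onto the 10 m grid before all six bands can be stacked. A minimal sketch of one way to do this, nearest-neighbor duplication with NumPy; the function name is ours, and the paper does not state which resampling method it uses:

```python
import numpy as np

def upsample_20m_to_10m(band_20m: np.ndarray) -> np.ndarray:
    """Nearest-neighbor upsample a 20 m band (5490 x 5490) to the
    10 m grid (10,980 x 10,980) by duplicating each pixel 2 x 2."""
    return np.repeat(np.repeat(band_20m, 2, axis=0), 2, axis=1)

# Tiny demo on a 2 x 2 "band"; a real SWIR band would be 5490 x 5490.
b11 = np.array([[0.1, 0.2],
                [0.3, 0.4]])
print(upsample_20m_to_10m(b11).shape)  # (4, 4)
```

Nearest-neighbor keeps the original reflectance values unchanged, which matters if threshold tests (e.g., NDSI) are applied after resampling; bilinear interpolation would instead smooth across pixel boundaries.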
Sen2Cor Label | New Label
---|---
No-Data | No-Data
Saturated/Defective Pixels | No-Data
Unclassified | No-Data
Vegetation | Clear-Sky Land
Non-vegetated | Clear-Sky Land
Cloud High Probability | Cloud
Cloud Medium Probability | Cloud
Cloud Low Probability | Cloud
Thin Cirrus Clouds | Cloud
Cloud Shadow | Shadow
Shadows/Dark Area Pixels | Shadow
Water | Water
Snow | Snow
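The mapping above can be expressed as a simple lookup. The grouping below follows our reading of the table's merged rows (empty "New Label" cells inherit the value above); the dictionary is keyed by label strings rather than SCL integer codes, since the numeric codes differ between Sen2Cor versions, and the names `SEN2COR_TO_MODEL` and `relabel` are illustrative:

```python
# Sen2Cor class name -> one of the 6 model classes (from the table above).
SEN2COR_TO_MODEL = {
    "No-Data": "No-Data",
    "Saturated/Defective Pixels": "No-Data",
    "Unclassified": "No-Data",
    "Vegetation": "Clear-Sky Land",
    "Non-vegetated": "Clear-Sky Land",
    "Cloud High Probability": "Cloud",
    "Cloud Medium Probability": "Cloud",
    "Cloud Low Probability": "Cloud",
    "Thin Cirrus Clouds": "Cloud",
    "Cloud Shadow": "Shadow",
    "Shadows/Dark Area Pixels": "Shadow",
    "Water": "Water",
    "Snow": "Snow",
}

def relabel(sen2cor_labels):
    """Map a flat sequence of Sen2Cor class names to the 6 model classes."""
    return [SEN2COR_TO_MODEL[name] for name in sen2cor_labels]

print(relabel(["Vegetation", "Thin Cirrus Clouds", "Snow"]))
# ['Clear-Sky Land', 'Cloud', 'Snow']
```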
Training Stage | No. Start Filters | Depth | No. Parameters | Size | Training Batch | Label Source
---|---|---|---|---|---|---
1 | 16 | 5 | 1.9 M | 11,263 | Batch-1 | Fmask
2 | 32 | 5 | 7.8 M | 22,524 | Batch-1 | Fmask
2 | 32 | 5 | 7.8 M | 22,524 | Batch-2 | Model Stage-1
3 | 24 | 6 | 17.5 M | 33,785 | Batch-1 | Fmask
3 | 24 | 6 | 17.5 M | 33,785 | Batch-2 | Model Stage-2
3 | 24 | 6 | 17.5 M | 33,785 | Batch-3 | Model Stage-2
4 | 32 | 6 | 31.1 M | 45,046 | Batch-1 | Fmask
4 | 32 | 6 | 31.1 M | 45,046 | Batch-2 | Model Stage-3
4 | 32 | 6 | 31.1 M | 45,046 | Batch-3 | Model Stage-3
4 | 32 | 6 | 31.1 M | 45,046 | Batch-4 | Model Stage-3
Dataset | Label Source | Size | No-Data (%) | Clear-Sky Land (%) | Cloud (%) | Shadow (%) | Snow (%) | Water (%)
---|---|---|---|---|---|---|---|---
Train | Fmask | 45,046 | 0.48 | 13.04 | 39.62 | 15.51 | 29.97 | 11.36
Validation | Human-labeled | ∼204 a | 0 | 10.64 | 40.70 | 24.80 | 13.83 | 10.03
Test | Human-labeled | ∼180 b | 0 | 15.59 | 55.71 | 12.07 | 18.28 | 18.34
Fmask 4 predictions (columns) vs. true labels (rows):

True Label | No-Data | Clear-Sky Land | Cloud | Shadow | Snow | Water | Recall
---|---|---|---|---|---|---|---
No-Data | 0 | 0 | 0 | 0 | 0 | 0 | 0.00
Clear-Sky Land | 0 | 629,977 | 7605 | 2727 | 3431 | 4772 | 0.97
Cloud | 0 | 647,645 | 4,663,884 | 98,931 | 1,050,628 | 82 | 0.72
Shadow | 73 | 67,800 | 32,191 | 513,100 | 585,632 | 201,325 | 0.37
Snow | 0 | 71 | 114,551 | 7800 | 1,990,272 | 7744 | 0.94
Water | 5 | 1123 | 1117 | 4823 | 420 | 959,212 | 0.99
Precision | 0.00 | 0.47 | 0.97 | 0.82 | 0.55 | 0.82 |
Sen2Cor 2.8 predictions (columns) vs. true labels (rows):

True Label | No-Data | Clear-Sky Land | Cloud | Shadow | Snow | Water | Recall
---|---|---|---|---|---|---|---
No-Data | 0 | 0 | 0 | 0 | 0 | 0 | 0.00
Clear-Sky Land | 164,756 | 385,323 | 85,479 | 10,415 | 870 | 1669 | 0.59
Cloud | 69,182 | 4772 | 5,106,178 | 2626 | 1,262,053 | 16,359 | 0.79
Shadow | 78,465 | 1529 | 32,579 | 252,020 | 333,368 | 702,160 | 0.18
Snow | 5238 | 0 | 88,040 | 8 | 2,011,819 | 15,333 | 0.95
Water | 221 | 0 | 511 | 2412 | 0 | 963,556 | 1.00
Precision | 0.00 | 0.98 | 0.96 | 0.94 | 0.56 | 0.57 |
Our model's predictions (columns) vs. true labels (rows):

True Label | No-Data | Clear-Sky Land | Cloud | Shadow | Snow | Water | Recall
---|---|---|---|---|---|---|---
No-Data | 0 | 0 | 0 | 0 | 0 | 0 | 0.00
Clear-Sky Land | 1 | 573,931 | 65,119 | 7777 | 1468 | 216 | 0.88
Cloud | 85 | 23,195 | 6,111,259 | 156,238 | 170,347 | 46 | 0.95
Shadow | 207 | 4232 | 10,126 | 1,076,655 | 39,761 | 269,140 | 0.77
Snow | 102 | 22 | 54,618 | 24,948 | 2,032,404 | 8344 | 0.96
Water | 0 | 0 | 0 | 5445 | 0 | 961,255 | 0.99
Precision | 0.00 | 0.95 | 0.98 | 0.85 | 0.91 | 0.78 |
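The per-class recall and precision in these matrices follow directly from the row and column sums (cf. Figure 3): recall_i = n_{i,i} / Σ_j n_{i,j} and precision_j = n_{j,j} / Σ_i n_{i,j}. A small sanity check against the Fmask 4 matrix; the helper name is ours:

```python
def recall_precision(cm):
    """Per-class recall and precision from a confusion matrix given as a
    list of rows, where cm[i][j] = pixels of true class i predicted as j."""
    n = len(cm)
    row_sums = [sum(cm[i]) for i in range(n)]
    col_sums = [sum(cm[i][j] for i in range(n)) for j in range(n)]
    recall = [cm[i][i] / row_sums[i] if row_sums[i] else 0.0 for i in range(n)]
    precision = [cm[j][j] / col_sums[j] if col_sums[j] else 0.0 for j in range(n)]
    return recall, precision

# Fmask 4 matrix from the table above (class order:
# No-Data, Clear-Sky Land, Cloud, Shadow, Snow, Water).
fmask = [
    [0, 0, 0, 0, 0, 0],
    [0, 629977, 7605, 2727, 3431, 4772],
    [0, 647645, 4663884, 98931, 1050628, 82],
    [73, 67800, 32191, 513100, 585632, 201325],
    [0, 71, 114551, 7800, 1990272, 7744],
    [5, 1123, 1117, 4823, 420, 959212],
]
rec, prec = recall_precision(fmask)
print(round(rec[2], 2), round(prec[2], 2))  # 0.72 0.97 (Cloud row/column)
```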
Class | Fmask 4 Precision | Fmask 4 Recall | Sen2Cor 2.8 Precision | Sen2Cor 2.8 Recall | Our Model Precision | Our Model Recall
---|---|---|---|---|---|---
Clear-Sky Land | 0.47 | 0.97 | 0.98 | 0.59 | 0.95 | 0.88
Cloud | 0.97 | 0.72 | 0.96 | 0.79 | 0.98 | 0.95
Shadow | 0.82 | 0.37 | 0.94 | 0.18 | 0.85 | 0.77
Snow | 0.55 | 0.94 | 0.56 | 0.95 | 0.91 | 0.96
Water | 0.82 | 0.99 | 0.57 | 1.00 | 0.78 | 0.99
Class | Fmask 4 F1 Score | Fmask 4 IoU | Sen2Cor 2.8 F1 Score | Sen2Cor 2.8 IoU | Our Model F1 Score | Our Model IoU
---|---|---|---|---|---|---
Clear-Sky Land | 0.63 | 0.46 | 0.74 | 0.59 | 0.92 | 0.85
Cloud | 0.83 | 0.70 | 0.87 | 0.77 | 0.96 | 0.93
Shadow | 0.51 | 0.34 | 0.30 | 0.18 | 0.81 | 0.68
Snow | 0.69 | 0.53 | 0.70 | 0.54 | 0.93 | 0.87
Water | 0.90 | 0.81 | 0.72 | 0.57 | 0.87 | 0.77

Total Accuracy: Fmask 4 = 0.76; Sen2Cor 2.8 = 0.75; Our Model = 0.93. mIoU: Fmask 4 = 0.57; Sen2Cor 2.8 = 0.53; Our Model = 0.82.
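F1 is the harmonic mean of precision and recall, and for a single class the IoU (Jaccard index) relates to F1 by IoU = F1 / (2 − F1), since F1 = 2·TP/(2·TP + FP + FN) and IoU = TP/(TP + FP + FN). A quick check against the Fmask 4 Cloud entries (P = 0.97, R = 0.72 from the precision/recall table):

```python
def f1_score(p, r):
    """Harmonic mean of precision p and recall r."""
    return 2 * p * r / (p + r)

def iou_from_f1(f1):
    """Per-class IoU = TP/(TP+FP+FN); algebraically IoU = F1 / (2 - F1)."""
    return f1 / (2 - f1)

# Fmask 4 on the Cloud class.
f1 = f1_score(0.97, 0.72)
print(round(f1, 2), round(iou_from_f1(f1), 2))  # 0.83 0.7
```

These match the tabulated Fmask 4 Cloud values (F1 = 0.83, IoU = 0.70), so the two tables are mutually consistent.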
Class | Stage-1 F1 Score | Stage-1 IoU | Stage-2 F1 Score | Stage-2 IoU | Stage-3 F1 Score | Stage-3 IoU | Stage-4 F1 Score | Stage-4 IoU
---|---|---|---|---|---|---|---|---
Clear-Sky Land | 0.61 | 0.44 | 0.81 | 0.68 | 0.92 | 0.84 | 0.92 | 0.85
Cloud | 0.88 | 0.79 | 0.94 | 0.89 | 0.96 | 0.93 | 0.96 | 0.93
Shadow | 0.68 | 0.52 | 0.76 | 0.62 | 0.81 | 0.67 | 0.81 | 0.68
Snow | 0.83 | 0.71 | 0.91 | 0.83 | 0.93 | 0.86 | 0.93 | 0.87
Water | 0.89 | 0.80 | 0.88 | 0.78 | 0.87 | 0.77 | 0.87 | 0.77

Total Accuracy: Stage-1 = 0.83; Stage-2 = 0.90; Stage-3 = 0.93; Stage-4 = 0.93. mIoU: Stage-1 = 0.65; Stage-2 = 0.76; Stage-3 = 0.82; Stage-4 = 0.82.
Class | Our Model F1 Score | Our Model IoU | Supervised (Noisy) F1 Score | Supervised (Noisy) IoU | Supervised (Clean) F1 Score | Supervised (Clean) IoU
---|---|---|---|---|---|---
Clear-Sky Land | 0.92 | 0.85 | 0.90 | 0.81 | 0.87 | 0.77
Cloud | 0.96 | 0.93 | 0.95 | 0.90 | 0.94 | 0.88
Shadow | 0.81 | 0.68 | 0.69 | 0.52 | 0.34 | 0.20
Snow | 0.93 | 0.87 | 0.90 | 0.81 | 0.77 | 0.62
Water | 0.87 | 0.77 | 0.82 | 0.69 | 0.85 | 0.74

Total Accuracy: Our Model = 0.93; Supervised (Noisy) = 0.89; Supervised (Clean) = 0.84. mIoU: Our Model = 0.82; Supervised (Noisy) = 0.75; Supervised (Clean) = 0.64.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Nambiar, K.G.; Morgenshtern, V.I.; Hochreuther, P.; Seehaus, T.; Braun, M.H. A Self-Trained Model for Cloud, Shadow and Snow Detection in Sentinel-2 Images of Snow- and Ice-Covered Regions. Remote Sens. 2022, 14, 1825. https://doi.org/10.3390/rs14081825