Assessment of Deep Learning Techniques for Land Use Land Cover Classification in Southern New Caledonia
"> Figure 1
<p>Map of New Caledonia and its Provinces (in blue the South Province, in red the North Province and in green the Islands Province). The South Province, zoomed in on the right, shows the four areas selected for the training (in magenta) and the area selected for the test (in cyan).</p> "> Figure 2
<p>(<b>a</b>) SPOT6 remote sensing image (<math display="inline"><semantics> <mrow> <mn>2334</mn> <mo>×</mo> <mn>2537</mn> </mrow> </semantics></math> pixels), (<b>b</b>) manually classified according to the LC nomenclature <a href="#remotesensing-13-02257-t003" class="html-table">Table 3</a> and (<b>c</b>) according to the LU nomenclature <a href="#remotesensing-13-02257-t004" class="html-table">Table 4</a>. The black pixels represent the mask layer brought by clouds and shadows.</p> "> Figure 3
<p>The AlexNet architecture. The number of kernels is indicated at the bottom of each convolution layer, the rest of the parameters are indicated in <a href="#remotesensing-13-02257-t005" class="html-table">Table 5</a>.</p> "> Figure 4
<p>Illustration of a ResNet architecture, the arrows above the layers depicting the “skip connections” in the architecture. The number of kernels is indicated at the bottom of each convolution layer.</p> "> Figure 5
<p>Illustration of a dense block. The different colored arrows represent the skip connection made at different levels of the dense block.</p> "> Figure 6
<p>SegNet architecture. The number of kernels is indicated at the bottom of each convolution layer. The arrows represent the activation layers used as inputs to the corresponding up-convolution layers.</p> "> Figure 7
<p>(<b>left</b>) The normal function of a convolution layer or an Atrous convolution with a separation of 1. (<b>right</b>) An atrous convolution with a separation rate of 6, only the red square will be used to calculate the result (blue square).</p> "> Figure 8
<p>DeepLab architecture illustration from the paper [<a href="#B49-remotesensing-13-02257" class="html-bibr">49</a>] showing the internal structure of the architecture, in particular the activation pyramid present in the encoder.</p> "> Figure 9
<p>(<b>a</b>) SPOT 6 images of the test area with a zoom in two specific zones, (<b>b</b>) 1 and (<b>c</b>) 2. (<b>d</b>–<b>f</b>) LU classification by a human operator of these areas compared to (<b>g</b>–<b>i</b>) the output of the DeepLab architecture.</p> ">
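The atrous convolution described in the caption of Figure 7 can be sketched in a few lines. This is a minimal NumPy illustration under assumed conventions (valid padding, cross-correlation as in CNN frameworks); the function name is hypothetical and not the paper's code:

```python
import numpy as np

def atrous_conv2d(image, kernel, rate=1):
    """Valid-mode 2D convolution with an atrous (dilation) rate.

    rate=1 is an ordinary convolution; rate>1 samples the input on a
    dilated grid, enlarging the receptive field with no extra weights.
    """
    kh, kw = kernel.shape
    # Effective kernel extent once the taps are spread apart by `rate`.
    eh, ew = (kh - 1) * rate + 1, (kw - 1) * rate + 1
    H, W = image.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Only the dilated grid of pixels (the "red squares") is used.
            patch = image[i:i + eh:rate, j:j + ew:rate]
            out[i, j] = np.sum(patch * kernel)
    return out
```

With a 3 × 3 kernel and rate 6, as in DeepLab, each output pixel sees a 13 × 13 window while still using only nine weights.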
Abstract
1. Introduction
2. Materials
2.1. Study Area: New Caledonia
2.2. Data Collection
2.3. Classification System
3. Methods
3.1. Use of Neo-Channels
3.2. XGBoost
3.3. Deep Learning Architectures
3.3.1. Central-Pixel Labeling
3.3.2. Semantic Labelling
3.4. Sampling Method
3.5. Mapping and Confusion Matrix
4. Results
4.1. Evaluation of Land Cover Classification
4.2. Evaluation of Land Use Classification
4.3. Influence of Neo-Channels and Land Cover as Input on the Learning
4.4. Confusion Matrix of the Best Deep Learning Model for LULC Classification
5. Discussion
6. Conclusions
Supplementary Materials
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Appendix A
Architectures | OA | PA | UA | F1-Score |
---|---|---|---|---|
XGBoost | 80.26% | 81.91% | 82.73% | 82.23% |
AlexNet | 93.00% | 92.78% | 93.96% | 93.37% |
ResNet | 99.00% | 99.33% | 99.31% | 99.32% |
DenseNet | 99.00% | 99.35% | 99.09% | 99.22% |
SegNet | 94.57% | 96.08% | 91.33% | 93.64% |
DeepLab | 94.64% | 96.03% | 93.91% | 94.96% |
FCN | 95.84% | 96.90% | 94.54% | 95.71% |
Architectures | OA | PA | UA | F1-Score |
---|---|---|---|---|
XGBoost | 59.29% | 59.13% | 54.26% | 56.59% |
AlexNet | 82.00% | 84.08% | 79.37% | 81.66% |
ResNet | 95.00% | 96.54% | 95.69% | 96.12% |
DenseNet | 96.00% | 96.58% | 96.41% | 96.49% |
SegNet | 78.79% | 66.74% | 70.16% | 68.41% |
DeepLab | 86.58% | 89.94% | 81.63% | 85.59% |
FCN | 85.34% | 89.30% | 78.87% | 83.76% |
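The metrics reported throughout these tables (OA, PA, UA, F1-score) can all be derived from a raw confusion matrix. A minimal sketch, assuming rows hold reference labels, columns hold predictions, and PA/UA are macro-averaged across classes (the averaging convention is an assumption, not stated here):

```python
import numpy as np

def lulc_metrics(cm):
    """Accuracy metrics from a raw (count) confusion matrix.

    Rows are assumed to be reference classes, columns predicted classes.
    PA (recall) and UA (precision) are macro-averaged over classes; the
    F1-score is the harmonic mean of the averaged PA and UA.
    """
    cm = np.asarray(cm, dtype=float)
    oa = np.trace(cm) / cm.sum()                # overall accuracy
    pa = np.mean(np.diag(cm) / cm.sum(axis=1))  # producer's accuracy
    ua = np.mean(np.diag(cm) / cm.sum(axis=0))  # user's accuracy
    f1 = 2 * pa * ua / (pa + ua)
    return oa, pa, ua, f1
```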
References
- Veldkamp, A.; Lambin, E. Predicting land-use change. Agric. Ecosyst. Environ. 2001, 85, 1–6.
- Wilson, J.S.; Clay, M.; Martin, E.; Stuckey, D.; Vedder-Risch, K. Evaluating environmental influences of zoning in urban ecosystems with remote sensing. Remote Sens. Environ. 2003, 86, 303–321.
- Lary, D.J.; Alavi, A.H.; Gandomi, A.H.; Walker, A.L. Machine learning in geosciences and remote sensing. Geosci. Front. 2015, 7, 3–10.
- Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16.
- Chazeau, J. Research on New Caledonian Terrestrial Fauna: Achievements and Prospects. Biodivers. Lett. 1993, 1, 123–129.
- Isnard, S.; L’huillier, L.; Rigault, F.; Jaffré, T. How did the ultramafic soils shape the flora of the New Caledonian hotspot? Plant Soil 2016, 403, 53–76.
- Dumas, P.; Printemps, J.; Mangeas, M.; Luneau, G. Developing erosion models for integrated coastal zone management: A case study of the New Caledonia west coast. Mar. Pollut. Bull. 2010, 61, 519–529.
- Grandcolas, P.; Murienne, J.; Robillard, T.; Desutter-Grandcolas, L.; Jourdan, H.; Guilbert, E.; Deharveng, L. New Caledonia: A very old Darwinian island? Phil. Trans. R. Soc. B 2008, 363, 3309–3317.
- Pelletier, B. Geology of the New Caledonia region and its implications for the study of the New Caledonian biodiversity. In Compendium of Marine Species from New Caledonia, 2nd ed.; Payri, C., Richer de Forges, B., Colin, F., Eds.; Documents Scientifiques et Techniques II; IRD: Nouméa, France, 2007; pp. 19–32. Available online: https://www.documentation.ird.fr/hor/fdi:010059749 (accessed on 1 June 2021).
- Pellens, R.; Grandcolas, P. Conservation and Management of the Biodiversity in a Hotspot Characterized by Short Range Endemism and Rarity: The Challenge of New Caledonia. In Biodiversity Hotspots; Rescigno, V., Maletta, S., Eds.; Nova Science Publishers: Hauppauge, NY, USA, 2010; pp. 139–151.
- Rolland, A.K.; Crépin, A.; Kenner, C.; Afro, P. Production de Données d’Occupation du Sol 2010–2014 sur la Province Sud [Production of 2010–2014 Land Cover Data for the South Province]. 2017. Available online: http://oeil.nc/cdrn/index.php/resource/bibliographie/view/27689 (accessed on 2 February 2020).
- Deng, L.; Yu, D. Deep Learning: Methods and Applications. Found. Trends Signal Process. 2014, 7, 197–387.
- Cohen, N.; Sharir, O.; Shashua, A. On the Expressive Power of Deep Learning: A Tensor Analysis. In Proceedings of the 29th Annual Conference on Learning Theory; Feldman, V., Rakhlin, A., Shamir, O., Eds.; PMLR: New York, NY, USA, 2016; Volume 49, pp. 698–728.
- Zhao, Z.; Zheng, P.; Xu, S.; Wu, X. Object Detection with Deep Learning: A Review. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3212–3232.
- Schmidhuber, J. Deep Learning in Neural Networks: An Overview. Neural Netw. 2015, 61, 85–117.
- Hadjimitsis, D.G.; Clayton, C.R.I.; Hope, V.S. An assessment of the effectiveness of atmospheric correction algorithms through the remote sensing of some reservoirs. Int. J. Remote Sens. 2004, 25, 3651–3674.
- Teillet, P.M. Image correction for radiometric effects in remote sensing. Int. J. Remote Sens. 1986, 7, 1637–1651.
- Vicente-Serrano, S.M.; Pérez-Cabello, F.; Lasanta, T. Assessment of radiometric correction techniques in analyzing vegetation variability and change using time series of Landsat images. Remote Sens. Environ. 2008, 112, 3916–3934.
- Zhang, L.; Zhang, L.; Kumar, V. Deep learning for remote sensing data. IEEE Geosci. Remote Sens. Mag. 2016, 4, 22–40.
- Cheng, G.; Han, J.; Lu, X. Remote Sensing Image Scene Classification: Benchmark and State of the Art. Proc. IEEE 2017, 105, 1865–1883.
- Pascal, M.; Richer de Forges, B.; Le Guyader, H.; Simberloff, D. Mining and Other Threats to the New Caledonia Biodiversity Hotspot. Conserv. Biol. 2008, 22, 498–499.
- Mittermeier, R.A.; Werner, T.B.; Lees, A. New Caledonia—A conservation imperative for an ancient land. Oryx 1996, 30, 104–112.
- Georganos, S.; Grippa, T.; Vanhuysse, S.; Lennert, M.; Shimoni, M.; Wolff, E. Very High Resolution Object-Based Land Use–Land Cover Urban Classification Using Extreme Gradient Boosting. IEEE Geosci. Remote Sens. Lett. 2018, 15, 607–611.
- Maillard, P. Comparing texture analysis methods through classification. Photogramm. Eng. Remote Sens. 2003, 69, 357–367.
- GIE OCEANIDE; Jérémy, A.; Antoine, W.; OEIL. L’évolution des Paysages en Province Sud [Landscape Evolution in the South Province]. 2012. Available online: https://www.oeil.nc/cdrn/index.php/resource/bibliographie/view/2322 (accessed on 12 March 2020).
- Rousset, G. SPOT6 Satellite Imagery, Land Cover and Land Use Classification of 5 Areas in the South Province of New Caledonia; Provisional Version; 2021. Available online: https://doi.org/10.23708/PHVOJD (accessed on 12 March 2020).
- Gao, J.; Li, P.; Chen, Z.; Zhang, J. A Survey on Deep Learning for Multimodal Data Fusion. Neural Comput. 2020, 32, 829–864.
- Heymann, Y.; Steenmans, C.; Croisille, G.; Bossard, M.; Lenco, M.; Wyatt, B.; Weber, J.L.; O’Brian, C.; Cornaert, M.H.; Sifakis, N. Corine Land Cover Technical Guide; Office for Official Publications of the European Communities: Luxembourg, 1994.
- Adeline, K.; Chen, M.; Briottet, X.; Pang, S.; Paparoditis, N. Shadow detection in very high spatial resolution aerial images: A comparative study. ISPRS J. Photogramm. Remote Sens. 2013, 80, 21–38.
- Ngo, T.T.; Collet, C.; Mazet, V. Détection simultanée de l’ombre et la végétation sur des images aériennes couleur en haute résolution [Simultaneous detection of shadow and vegetation in high-resolution color aerial images]. Trait. Signal 2014, 32, 311–333.
- Myneni, R.B.; Hall, F.G.; Sellers, P.J.; Marshak, A.L. The interpretation of spectral vegetation indexes. IEEE Trans. Geosci. Remote Sens. 1995, 33, 481–486.
- Xu, Y.; Wu, L.; Xie, Z.; Chen, Z. Building Extraction in Very High Resolution Remote Sensing Imagery Using Deep Learning and Guided Filters. Remote Sens. 2018, 10, 144.
- Rouse, J.W.; Haas, R.H.; Schell, J.A.; Deering, D. Monitoring the vernal advancement and retrogradation (green wave effect) of natural vegetation. Prog. Rep. RSC 1978-1 1973, 371, 112.
- Gevers, T.; Smeulders, A.W.M. Color-based object recognition. Pattern Recognit. 1999, 32, 453–464.
- Qi, J.; Chehbouni, A.; Huete, A.R.; Kerr, Y.H.; Sorooshian, S. A modified soil adjusted vegetation index. Remote Sens. Environ. 1994, 48, 119–126.
- Woebbecke, D.M.; Meyer, G.E.; Von Bargen, K.; Mortensen, D.A. Color indices for weed identification under various soil, residue, and lighting conditions. Trans. ASAE 1994, 38, 259–269.
- Xu, H. Modification of normalised difference water index (NDWI) to enhance open water features in remotely sensed imagery. Int. J. Remote Sens. 2006, 27, 3025–3033.
- Meyer, G.E.; Neto, J.C. Verification of color vegetation indices for automated crop imaging applications. Comput. Electron. Agric. 2008, 63, 282–293.
- Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; ACM: New York, NY, USA, 2016; pp. 785–794.
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2323.
- Buda, M.; Maki, A.; Mazurowski, M.A. A systematic study of the class imbalance problem in convolutional neural networks. Neural Netw. 2018, 106, 1–23.
- Krizhevsky, A.; Sutskever, I.; Hinton, G. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); IEEE Computer Society: Los Alamitos, CA, USA, 2016; pp. 770–778.
- Huang, G.; Liu, Z.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); IEEE Computer Society: Los Alamitos, CA, USA, 2017; pp. 2261–2269.
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
- Noh, H.; Hong, S.; Han, B. Learning deconvolution network for semantic segmentation. In Proceedings of the IEEE International Conference on Computer Vision; IEEE Computer Society: Los Alamitos, CA, USA, 2015; pp. 1520–1528.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015; Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 234–241.
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); IEEE Computer Society: Los Alamitos, CA, USA, 2015; pp. 3431–3440.
- Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 834–848.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Identity mappings in deep residual networks. In Computer Vision—ECCV 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 630–645.
- Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); IEEE Computer Society: Los Alamitos, CA, USA, 2017; pp. 1251–1258.
- Feng, J.; Chen, J.; Liu, L.; Cao, X.; Zhang, X.; Jiao, L.; Yu, T. CNN-based multilayer spatial–spectral feature fusion and sample augmentation with local and nonlocal constraints for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 1299–1313.
- Wang, J.; Zheng, Y.; Wang, M.; Shen, Q.; Huang, J. Object-Scale Adaptive Convolutional Neural Networks for High-Spatial Resolution Remote Sensing Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 14, 283–299.
- Confalonieri, R.; Weyde, T.; Besold, T.R.; del Prado Martín, F.M. Using ontologies to enhance human understandability of global post-hoc explanations of black-box models. Artif. Intell. 2021, 296, 103471.
- Su, T.C. A filter-based post-processing technique for improving homogeneity of pixel-wise classification data. Eur. J. Remote Sens. 2016, 49, 531–552.
Area | (Xmin; Ymin) | (Xmax; Ymax) | Surface |
---|---|---|---|
Test1 | (419,828; 285,110) | (423,328; 288,915) | 13.3 km² |
Train1 | (382,983; 276,240) | (387,031; 281,621) | 21.8 km² |
Train2 | (416,813; 244,503) | (421,228; 248,117) | 15.9 km² |
Train3 | (448,857; 220,743) | (453,938; 225,718) | 25.3 km² |
Train4 | (479,224; 221,634) | (487,729; 227,756) | 52.1 km² |
Tiles Name | Acquisition Date |
---|---|
spot6_pms_201306292238055_ort_1119497101 | 29 June 2013 |
spot6_pms_201406282239012_ort_1119498101 | 28 June 2014 |
spot6_pms_201406212242046_ort_1119505101 | 21 June 2014 |
spot6_pms_201406282239047_ort_1119509101 | 28 June 2014 |
spot6_pms_201407122232016_ort_1119525101 | 12 July 2014 |
spot6_pms_201407172243021_ort_1119527101 | 17 July 2014 |
spot6_pms_201407242239020_ort_1119528101 | 24 July 2014 |
spot6_pms_201407292250014_ort_1119529101 | 29 July 2014 |
spot6_pms_201407292250032_ort_1119530101 | 29 July 2014 |
spot6_pms_201407312236004_ort_1119532101 | 31 July 2014 |
spot6_pms_201408122243031_ort_1119533101 | 12 August 2014 |
spot6_pms_201410032243032_ort_1119534101 | 3 October 2014 |
L | Description |
---|---|
1 | Buildings |
2 | Bare soil |
3 | Forest |
4 | Low-density vegetation |
5 | Water surfaces |
L1 | L2 | Description | C |
---|---|---|---|
1 | | Urban or built-up areas | |
 | 11 | Urban areas | 1 |
 | 12 | Industrial areas | 2 |
 | 13 | Worksites and mines | 3 |
 | 14 | Road networks | 4 |
 | 15 | Trails | 5 |
2 | | Undeveloped areas | |
 | 21 | Dense vegetation, forests | 6 |
 | 22 | Wooded savanna, forest patch border | 7 |
 | 23 | Bush, grassy savanna | 8 |
 | 24 | Bare rocks | 9 |
 | 25 | Bare soil | 10 |
3 | | Wetlands | |
 | 31 | Water bodies | 11 |
 | 32 | Engravements (dry river beds) | 12 |
Layer | Conv | Kernels | Stride | Pad |
---|---|---|---|---|
conv1 | 11 × 11 | 96 | 4 | 0 |
conv2 | 5 × 5 | 256 | 1 | 2 |
conv3 | 3 × 3 | 384 | 1 | 1 |
conv4 | 3 × 3 | 384 | 1 | 1 |
conv5 | 3 × 3 | 256 | 1 | 1 |
maxpool | 2 × 2 | N/A | 2 | 0 |
DataSets | Description |
---|---|
RGBNIR | Dataset containing the raw channels: R, G, B and NIR |
NEO | Dataset containing all the neo-channels: L, , NDVI, MSAVI, MNDWI and ExG |
LCE | Land cover produced by the human operator |
LCM | Land cover produced by a machine learning model |
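The neo-channels listed above follow the cited index definitions (NDVI, Rouse et al.; MSAVI, Qi et al.; ExG, Woebbecke et al.). A sketch of how such channels can be computed from the four SPOT6 bands; note that the Green/NIR water index and the RGB-mean luminance below are stand-ins chosen for illustration (SPOT6 lacks the SWIR band used by the exact MNDWI of Xu), and may differ from the authors' implementation:

```python
import numpy as np

def neo_channels(r, g, b, nir):
    """Spectral indices from reflectance bands (assumed band usage)."""
    eps = 1e-9  # guard against division by zero
    ndvi = (nir - r) / (nir + r + eps)            # Rouse et al.
    msavi = (2 * nir + 1 - np.sqrt(
        (2 * nir + 1) ** 2 - 8 * (nir - r))) / 2  # Qi et al.
    ndwi = (g - nir) / (g + nir + eps)            # Green/NIR water index
    exg = 2 * g - r - b                           # excess green
    lum = (r + g + b) / 3                         # simple luminance
    return {"L": lum, "NDVI": ndvi, "MSAVI": msavi,
            "NDWI": ndwi, "ExG": exg}
```

Stacking these channels with the raw bands yields the RGBNIR+NEO style inputs evaluated in Table A5.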
Architectures | OA | PA | UA | F1-Score |
---|---|---|---|---|
XGBoost | 77.55% | 72.39% | 76.93% | 74.60% |
AlexNet | 73.29% | 66.77% | 78.98% | 72.36% |
ResNet | 79.13% | 73.18% | 82.55% | 77.58% |
DenseNet | 79.55% | 73.10% | 83.13% | 77.79% |
SegNet | 79.54% | 73.00% | 82.16% | 77.31% |
DeepLab | 81.41% | 75.15% | 84.37% | 79.49% |
FCN | 80.48% | 74.66% | 83.99% | 79.05% |
Architectures | OA | PA | UA | F1-Score |
---|---|---|---|---|
XGBoost | 51.56% | 42.61% | 38.27% | 40.32% |
AlexNet | 45.79% | 33.93% | 38.26% | 35.97% |
ResNet | 55.96% | 43.89% | 49.58% | 46.56% |
DenseNet | 59.59% | 46.18% | 55.00% | 50.21% |
SegNet | 58.36% | 37.16% | 40.48% | 38.75% |
DeepLab | 61.45% | 49.77% | 51.04% | 50.40% |
FCN | 56.07% | 49.59% | 47.22% | 48.38% |
DenseNet | DeepLab | |||||||
---|---|---|---|---|---|---|---|---|
Datasets | OA | PA | UA | F1-Score | OA | PA | UA | F1-Score |
RGBNIR | 59.59% | 46.18% | 55.00% | 50.21% | 61.45% | 49.77% | 51.04% | 50.40% |
NEO | 57.03% | 45.26% | 52.21% | 48.49% | 57.53% | 46.04% | 51.74% | 48.72% |
RGBNIR+NEO | 59.48% | 46.67% | 53.91% | 50.03% | 59.83% | 47.55% | 50.08% | 48.78% |
RGBNIR+ | 61.78% | 48.22% | 55.14% | 51.45% | 60.32% | 48.51% | 51.1% | 49.77% |
RGBNIR+L+ | 57.99% | 46.84% | 52.86% | 49.67% | 61.51% | 50.25% | 52.44% | 51.32% |
RGBNIR++ExG | 60.89% | 47.28% | 55.71% | 51.15% | 60.77% | 47.31% | 49.81% | 48.05% |
RGBNIR+LCE | 74.90% | 58.55% | 61.36% | 59.92% | 73.56% | 56.09% | 61.06% | 58.47% |
RGBNIR+LCM | 62.08% | 50.31% | 53.86% | 52.02% | 63.61% | 50.74% | 51.36% | 51.05% |
Predicted | |||||
---|---|---|---|---|---|
Buildings | Bare Soil | Forest | Low-Density Vegetation | Water Surfaces | |
Buildings | 0.43 | 0 | 0 | 0 | 0 |
Bare soil | 0.24 | 0.84 | 0.02 | 0.04 | 0.07 |
Forest | 0.18 | 0.02 | 0.78 | 0.12 | 0.05 |
Low-density vegetation | 0.15 | 0.14 | 0.19 | 0.84 | 0.01 |
Water surfaces | 0 | 0 | 0.01 | 0 | 0.87 |
Predicted | ||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
Urban Areas | Industrial Areas | Worksites and Mines | Road Networks | Trails | Forests | Medium-Density Vegetation | Low-Density Vegetation | Bare Rocks | Bare Soil | Water Surfaces | Engravements | |
Urban areas | 0.72 | 0.32 | 0 | 0.09 | 0.01 | 0.02 | 0.01 | 0 | 0.01 | 0 | 0 | 0 |
Industrial areas | 0.03 | 0.43 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Worksites and mines | 0 | 0 | 0.86 | 0.02 | 0.15 | 0 | 0.01 | 0.02 | 0.27 | 0.69 | 0.02 | 0.05 |
Road networks | 0.05 | 0.18 | 0.01 | 0.58 | 0.07 | 0 | 0 | 0 | 0 | 0.02 | 0 | 0.04 |
Trails | 0.01 | 0.01 | 0.02 | 0.03 | 0.08 | 0 | 0 | 0.01 | 0.02 | 0.01 | 0 | 0.01 |
Forests | 0.06 | 0.03 | 0 | 0.06 | 0.07 | 0.77 | 0.19 | 0.02 | 0.05 | 0.01 | 0.04 | 0.03 |
Medium-density vegetation | 0.09 | 0.02 | 0.03 | 0.07 | 0.25 | 0.17 | 0.69 | 0.46 | 0.27 | 0.03 | 0.01 | 0.05 |
Low-density vegetation | 0.04 | 0.01 | 0.03 | 0.04 | 0.16 | 0.02 | 0.08 | 0.47 | 0.09 | 0.06 | 0 | 0.03 |
Bare rocks | 0 | 0 | 0 | 0.01 | 0.04 | 0 | 0.01 | 0.01 | 0.12 | 0 | 0 | 0.01 |
Bare soil | 0 | 0 | 0.02 | 0.05 | 0.04 | 0 | 0 | 0.01 | 0.02 | 0.14 | 0 | 0.4 |
Water surfaces | 0 | 0 | 0 | 0 | 0 | 0.01 | 0 | 0 | 0.01 | 0 | 0.88 | 0.01 |
Engravements | 0 | 0 | 0.03 | 0.05 | 0.13 | 0.01 | 0.01 | 0 | 0.14 | 0.04 | 0.05 | 0.37 |
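In the confusion matrices above, each column sums to 1: the counts appear normalized per predicted class, so the diagonal entries give the per-class user's accuracy. A sketch of that normalization (the rows-as-reference convention is inferred from the column sums, not stated explicitly):

```python
import numpy as np

def normalize_by_predicted(cm):
    """Normalize a raw confusion matrix so each predicted-class column sums to 1.

    With rows as reference classes, the diagonal of the result is the
    per-class user's accuracy (precision). Empty columns stay zero.
    """
    cm = np.asarray(cm, dtype=float)
    col_sums = cm.sum(axis=0, keepdims=True)
    return np.divide(cm, col_sums,
                     out=np.zeros_like(cm), where=col_sums > 0)
```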
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Rousset, G.; Despinoy, M.; Schindler, K.; Mangeas, M. Assessment of Deep Learning Techniques for Land Use Land Cover Classification in Southern New Caledonia. Remote Sens. 2021, 13, 2257. https://doi.org/10.3390/rs13122257