Airborne Laser Scanning Point Cloud Classification Using the DGCNN Deep Learning Method
Figure 1. Methodological workflow used in this study to classify Airborne Laser Scanning (ALS) point clouds of two test areas using the Dynamic Graph Convolutional Neural Network (DGCNN) architecture.
Figure 2. The DGCNN components for semantic segmentation: (a) The network uses spatial transformation followed by three sequential EdgeConv layers and three fully connected layers. A max pooling operation is performed as a symmetric edge function to solve the point cloud ordering problem, i.e., it makes the model permutation invariant while capturing global features. The fully connected layers produce class prediction scores for each point. (b) A spatial transformation module is used to learn the rotation matrix of the points and increase spatial invariance of the input point clouds. (c) EdgeConv, which acts as a multilayer perceptron (MLP), is applied to learn local geometric features for each point. Source: Wang et al. (2018).
Figure 3. Basic differences between PointNet and DGCNN. (a) The PointNet output of the feature extraction h(x_i) is only related to the point itself. (b) DGCNN incorporates local geometric relations h(x_i, x_j − x_i) between a point and its neighborhood. Here, a k-nn graph is constructed with k = 4.
Figure 4. Representation of vegetation in one of the city parks in Area-1 (Surabaya city). The vegetation is mostly dominated by trees with rounded canopies. (a) 3D point cloud of trees, (b) aerial images, (c) street view of the trees ©GoogleMap2021.
Figure 5. Input data and coverage of Surabaya city, Indonesia. (a) Orthophoto of the study area covering eight grids, with the test tile shown in orange; (b) ALS point cloud of grids 5 and 6 colored by elevation; (c) the 1:1000 base map of grids 5 and 6 containing buildings (orange), roads (red), and water bodies (blue).
Figure 6. Dutch up-to-date Elevation Archive File, version 3 (AHN3) grids selected for this study. (a) Grids in purple were used for training, while the grid in orange was used as the test area; (b) AHN3 point clouds labeled as bare land (brown), buildings (yellow), water (blue), and “others” (green).
Figure 7. Labeled point clouds from AHN3 of Area-2 with different block sizes: (a) block size = 30 m, (b) block size = 50 m, (c) block size = 70 m. Brown points represent bare land, yellow points represent buildings, blue points represent water, and green points represent the “others” class.
Figure 8. Samples of different feature set results. (a–d) Classification results of four feature combinations in comparison to (e) the base map, (f) aerial orthophoto, (g) LiDAR intensity, and (h) Digital Surface Model (DSM). In (a–e), blue represents bare land, green represents trees, orange represents buildings, and red represents roads.
Figure 9. Comparison of two classification results obtained by two different loss functions. (a) The SCE loss function resulted in more complete roads; (b) using FL resulted in incomplete roads (white rectangle) and falsely classified a sand pile as building (yellow ellipse).
Figure 10. Relief displacement makes a high-rise building block an adjacent lower building and trees. (a) The leaning of a building (inside the orange circle) on an orthophoto indicates relief displacement; (b) the building in the orthophoto has an offset of up to 17 m from the reference polygon (pink outlines); (c) the LiDAR DSM indicates a part of the building that does not exist in the base map/reference (inside the black ellipse); (d) DGCNN can detect building points (orange) correctly, including the missing building part (inside the black ellipse); (e) 3D visualization of the classified building points including the missing building part (inside the white ellipse).
Figure 11. Visualization of point cloud classification results of a subtile in Area-2 (bare land in brown, buildings in yellow, water in blue, and “others” in green) in comparison with the ground truth. (a) Block size = 30 m, (b) block size = 50 m, (c) block size = 70 m, (d) ground truth, (e) satellite image ©GoogleMap2020, (f) DSM of the blue box shows that cars are categorized as “others”.
Figure 12. 3D visualization (a–d) and 2D visualization (e–h) of point cloud classification results of different block sizes and ground truths over a subset of Area-2 (bare land in brown, buildings in yellow, water in blue, and “others” in green).
Figure 13. Comparison of point cloud classification results for different block sizes on a particular highly vegetated area shows that bigger block sizes result in more misclassifications of trees (bare land in brown, buildings in yellow, water in blue, and “others” (trees) in green). (a) Block size = 30 m, (b) block size = 50 m, (c) block size = 70 m, (d) ground truth.
Figure 14. Per-class probability distribution obtained by the network over the test area with different block sizes (a–c). From top to bottom: bare land (1st row), buildings (2nd row), water (3rd row), and “others” (4th row).
Abstract
1. Introduction
2. Related Work
3. Experiments
3.1. DGCNN
3.2. Area-1
3.2.1. Training Set Preparation
- non-ground points were labeled as buildings using the 2D building polygons of the base map. Using the same method, ground points were labeled as roads using the road polygons of the base map. The remaining points were labeled as bare land;
- from the points labeled as building or road, any point with a surface roughness above a threshold was relabeled as tree. The surface roughness was estimated for each point based on the distances to the best fitting plane through all neighboring points inside an area of 2 m × 2 m. Given the resulting roughness values of tree and building points in selected test areas, and given that tree canopies in the study area have minimum diameters of about 3 m, the roughness threshold was set to 0.5 m;
- a Statistical Outlier Removal (SOR) algorithm was applied to remove remaining outliers. We set the number of neighboring points to k = 30 and the standard deviation multiplier to 2. This means that the algorithm computes, for each point, the average distance to its 30 nearest neighbors and then removes any point whose average distance exceeds the global mean of these distances by more than two standard deviations;
- as the final step, the training sample data were converted to the HDF5 (.h5) format by splitting each part of the area into blocks of 30 m × 30 m with a stride of 15 m (a minimal sketch of this step is given after this list). Based on our experiments in Area-1, these parameter values give the best accuracy. However, to ensure that the network uses an efficient spatial range, the block size may require adjustment when applied to study areas with different characteristics.
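The following is a minimal sketch of the block-splitting step, assuming a NumPy point array and the h5py library; the helper names and the exact .h5 layout (e.g., resampling every block to a fixed number of points) are illustrative rather than the exact implementation used here.

```python
import numpy as np
import h5py

def split_into_blocks(points, labels, block=30.0, stride=15.0):
    """Split a labeled point cloud (N x F array, first two columns x, y) into
    overlapping horizontal blocks of `block` x `block` m with the given stride."""
    x_min, y_min = points[:, 0].min(), points[:, 1].min()
    x_max, y_max = points[:, 0].max(), points[:, 1].max()
    blocks, block_labels = [], []
    for x0 in np.arange(x_min, x_max, stride):
        for y0 in np.arange(y_min, y_max, stride):
            mask = ((points[:, 0] >= x0) & (points[:, 0] < x0 + block) &
                    (points[:, 1] >= y0) & (points[:, 1] < y0 + block))
            if mask.sum() == 0:
                continue
            blocks.append(points[mask])
            block_labels.append(labels[mask])
    return blocks, block_labels

def save_h5(path, blocks, block_labels):
    # Store each block as a separate dataset; real pipelines usually also
    # resample each block to a fixed number of points before training.
    with h5py.File(path, "w") as f:
        for i, (pts, lbl) in enumerate(zip(blocks, block_labels)):
            f.create_dataset(f"data_{i}", data=pts.astype(np.float32))
            f.create_dataset(f"label_{i}", data=lbl.astype(np.int64))
```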
3.2.2. The Choice of Feature Combinations and Loss Functions
- Feature Set 1 uses the default feature combination, as widely used by indoor point cloud benchmarks (e.g., S3DIS dataset).
- Feature Set 2 replaces RGB color with LiDAR features (Intensity, Return number, and Number of returns (IRnN)) to evaluate the importance of LiDAR features.
- Feature Set 3 combines two color channels (Red and Green) with LiDAR intensity to investigate the importance of spectral features.
- Feature Set 4 combines the full RGB color features with the LiDAR IRnN features and excludes normalized coordinates to evaluate the importance of global geometry (a sketch of the resulting per-point input vector is given after this list).
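For illustration, the per-point input vector of Feature Set 4 could be assembled as in the following sketch; the helper name, column order, and normalization are hypothetical choices, not a prescription of the exact pre-processing used in the experiments.

```python
import numpy as np

def build_feature_set_4(xyz, rgb, intensity, return_number, number_of_returns):
    """Assemble per-point inputs for Feature Set 4 (RGBIRnN):
    coordinates, RGB color, and the LiDAR intensity/return features."""
    rgb = rgb / 255.0                       # scale colors to [0, 1]
    i = intensity / intensity.max()         # normalize intensity
    return np.column_stack([xyz, rgb, i, return_number, number_of_returns])
```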
- Softmax Cross Entropy (SCE) loss. This is a combination of a softmax activation function and cross entropy loss. Softmax is frequently appended to the last fully connected layer of a classification network. It converts logits, the raw scores output by the last layer of the network, into probabilities in the range 0 to 1: each logit is exponentiated and divided by the sum of the exponentials of all logits, and this ratio is the output of softmax. Cross entropy describes the loss between two probability distributions; it measures the similarity of the predictions to the actual labels of the training samples. Consider a training batch of $N$ input points, where $y_{i,c}$ is the one-hot target label of the $i$-th point for class $c$ among $C$ classes, $x_i$ denotes the feature vector before the last fully connected layer, and $w_c$ and $b_c$ represent the trainable weights and bias of the $c$-th class in the softmax regression, respectively. The SCE loss is then written as follows:
  $$L_{SCE} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} y_{i,c}\,\log\frac{\exp\left(w_c^{\top}x_i + b_c\right)}{\sum_{c'=1}^{C}\exp\left(w_{c'}^{\top}x_i + b_{c'}\right)}$$
- Focal loss (FL) was introduced to address accuracy issues due to class imbalance in one-stage object detection. Focal loss is a cross entropy loss that weighs the contribution of each sample to the loss based on the classification error: if a sample is already classified correctly by the network, its contribution to the loss decreases. Lin et al. [31] claim that this strategy solves the problem of class imbalance by making the loss implicitly focus on problematic classes. Moreover, the algorithm weights the contribution of each class to the loss in a more explicit way using a sigmoid activation. With $p_{i,c}$ the predicted probability of the $i$-th point for class $c$, $\alpha_c$ a per-class weighting factor, and $\gamma \ge 0$ the focusing parameter, the focal loss function for multi-class classification is defined as:
  $$L_{FL} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} \alpha_c \left(1 - p_{i,c}\right)^{\gamma} y_{i,c}\,\log\left(p_{i,c}\right)$$
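To make the two loss functions concrete, the following is a minimal NumPy sketch of per-point softmax cross entropy and focal loss on raw logits. The function names, the uniform default class weights, and the averaging over points are illustrative choices; for brevity the sketch derives per-class probabilities with a softmax rather than the per-class sigmoid used by Lin et al.

```python
import numpy as np

def softmax(logits):
    # Subtract the per-point maximum for numerical stability before exponentiating.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def sce_loss(logits, labels_onehot):
    # Softmax cross entropy averaged over all points in the batch.
    p = softmax(logits)
    return -np.mean(np.sum(labels_onehot * np.log(p + 1e-12), axis=1))

def focal_loss(logits, labels_onehot, gamma=2.0, alpha=None):
    # Focal loss: down-weights points that are already classified confidently.
    p = softmax(logits)
    if alpha is None:
        alpha = np.ones(logits.shape[1])       # uniform class weights
    w = alpha * (1.0 - p) ** gamma             # modulating factor per class
    return -np.mean(np.sum(w * labels_onehot * np.log(p + 1e-12), axis=1))

# Toy example: 3 points, 4 classes (bare land, trees, buildings, roads)
logits = np.array([[2.0, 0.1, 0.3, -1.0],
                   [0.2, 3.1, 0.5,  0.0],
                   [0.1, 0.2, 0.1,  0.4]])
labels = np.eye(4)[[0, 1, 3]]                  # one-hot ground truth
print(sce_loss(logits, labels), focal_loss(logits, labels))
```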
3.2.3. Training Settings
3.3. Area-2
3.3.1. Training Set Preparation
3.3.2. The Choice of Block Size
3.3.3. Training Settings
3.4. Evaluation Metrics
- Overall accuracy, indicating the percentage of correctly classified points of all classes out of the total number of reference points. This metric shows the general performance of the model and may therefore provide limited information in the case of class imbalance.
- The confusion matrix is a summary table reporting the number of true positives, true negatives, false negatives, and false positives of each class. The matrix provides information on the prediction metrics per class and the types of errors made by the classification model.
- Precision, recall, and F1 score: Precision and recall are metrics commonly used for evaluating classification performance in information technology and are related to the false and true positive rates [41,42]. Recall (also known as completeness) is the percentage of the reference points of a class that are correctly predicted by the model, while precision (also known as correctness) is the percentage of correctly classified points among all positive predictions. The F1 score is the harmonic mean of precision and recall and is used to measure model accuracy. The metrics are formulated as follows (a sketch computing them from a confusion matrix is given after this list):
  $$\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad \mathrm{Recall} = \frac{TP}{TP + FN}, \qquad F1 = \frac{2\cdot\mathrm{Precision}\cdot\mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$
  where TP, FP, and FN denote the numbers of true positives, false positives, and false negatives of a class, respectively.
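These quantities can be read directly off a multi-class confusion matrix. The following is a minimal NumPy sketch; the function name is illustrative, and rows are assumed to hold predictions and columns the reference labels, as in the confusion matrix reported in the results below.

```python
import numpy as np

def metrics_from_confusion(cm):
    """cm[i, j] = number of points predicted as class i with reference label j."""
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=1) - tp            # predicted as class i, but reference says otherwise
    fn = cm.sum(axis=0) - tp            # reference class i, but predicted otherwise
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    overall_accuracy = tp.sum() / cm.sum()
    return overall_accuracy, precision, recall, f1

# Confusion matrix of Feature Set 4 from the results tables (bare land, trees, buildings, roads)
cm = np.array([[340132,    770,   33529,  78364],
               [   304, 553175,  105094,     42],
               [ 18099,  83704, 1552315,   3367],
               [ 20557,     97,     763, 112952]])
oa, precision, recall, f1 = metrics_from_confusion(cm)  # e.g., bare land precision ~75.1%, recall ~89.7%
```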
4. Results and Discussions
4.1. Area-1
4.1.1. Results of Different Feature Combinations
- Bare land class
- Tree class
- Buildings
- Roads
4.1.2. Results of Different Loss Functions
4.1.3. Results on Area with Relief Displacement
4.2. Area-2
5. Conclusions and Recommendations
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Bláha, M.; Vogel, C.; Richard, A.; Wegner, J.D.; Pock, T.; Schindler, K. Large-Scale Semantic 3D Reconstruction: An Adaptive Multi-Resolution Model for Multi-Class Volumetric Labeling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016.
2. Nguyen, A.; Le, B. 3D Point Cloud Segmentation: A survey. In Proceedings of the 2013 6th IEEE Conference on Robotics, Automation and Mechatronics (RAM), Manila, Philippines, 12–15 November 2013; pp. 225–230.
3. Kang, Z.; Yang, J. A probabilistic graphical model for the classification of mobile LiDAR point clouds. ISPRS J. Photogramm. Remote Sens. 2018, 143, 108–123.
4. Bello, S.A.; Yu, S.; Wang, C.; Adam, J.M.; Li, J. Deep learning on 3D point clouds. Remote Sens. 2020, 12, 1729.
5. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 652–660.
6. Rottensteiner, F.; Clode, S. Building and Road Extraction by Lidar and Imagery, Chapter 9. In Topographic Laser Ranging and Scanning; Shan, J., Toth, C.K., Eds.; CRC Press, Taylor and Francis: Boca Raton, FL, USA, 2019; pp. 463–496.
7. Habib, A. Building and Road Extraction by Lidar and Imagery, Chapter 13. In Topographic Laser Ranging and Scanning; Shan, J., Toth, C.K., Eds.; CRC Press, Taylor and Francis: Boca Raton, FL, USA, 2009; pp. 389–419.
8. Ish-Horowicz, J.; Udwin, D.; Flaxman, S.; Filippi, S.; Crawford, L. Interpreting deep neural networks through variable importance. arXiv 2019, arXiv:1901.09839v3.
9. Zhang, Q.; Yang, L.T.; Chen, Z.; Li, P. A survey on deep learning for big data. Inf. Fusion 2018, 42, 146–157.
10. Cai, J.; Luo, J.; Wang, S.; Yang, S. Feature selection in machine learning: A new perspective. Neurocomputing 2018, 300, 70–79.
11. Singla, S.; Wallace, E.; Feng, S.; Feizi, S. Understanding impacts of high-order loss approximations and features in deep learning interpretation. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 5848–5856.
12. Engelmann, F.; Kontogianni, A.H.; Leibe, B. Exploring Spatial Context for 3D Semantic Segmentation of Point Clouds. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Venice, Italy, 22–29 October 2017; pp. 716–724.
13. Griffiths, D.; Boehm, J. A review on deep learning techniques for 3D sensed data classification. Remote Sens. 2019, 11, 1499.
14. Carrio, A.; Sampedro, C.; Rodriguez-Ramos, A.; Campoy, P. A review of deep learning methods and applications for unmanned aerial vehicles. J. Sens. 2017, 2017, 3296874.
15. Balado, J.; Martínez-Sánchez, J.; Arias, P.; Novo, A. Road environment semantic segmentation with deep learning from MLS point cloud data. Sensors 2019, 19, 3466.
16. Landrieu, L.; Simonovsky, M. Large-Scale Point Cloud Semantic Segmentation with Superpoint Graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4558–4567.
17. Li, Y.; Bu, R.; Sun, M.; Wu, W.; Di, X.; Chen, B. PointCNN: Convolution on X-Transformed Points. In Proceedings of the 32nd Conference on Neural Information Processing Systems (NeurIPS), Montreal, QC, Canada, 2–8 December 2018.
18. Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S.E.; Bronstein, M.M.; Solomon, J.M. Dynamic graph CNN for learning on point clouds. ACM Trans. Graph. 2019, 38, 3326362.
19. Horwath, J.P.; Zakharov, D.N.; Mégret, R.; Stach, E.A. Understanding important features of deep learning models for segmentation of high-resolution transmission electron microscopy images. NPJ Comput. Mater. 2020, 6, 1–9.
20. Soilán Rodríguez, M.; Lindenbergh, R.; Riveiro Rodríguez, B.; Sánchez Rodríguez, A. PointNet for the automatic classification of aerial point clouds. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, IV-2/W5, 445–452.
21. Wicaksono, S.B.; Wibisono, A.; Jatmiko, W.; Gamal, A.; Wisesa, H.A. Semantic Segmentation on LiDAR Point Cloud in Urban Area Using Deep Learning. In Proceedings of the IEEE 2019 International Workshop on Big Data and Information Security (IWBIS), Bali, Indonesia, 11 October 2019; pp. 63–66.
22. Schmohl, S.; Sörgel, U. Submanifold sparse convolutional networks for semantic segmentation of large-scale ALS point clouds. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, IV-2/W5, 77–84.
23. Xiu, H.; Poliyapram, V.; Kim, K.S.; Nakamura, R.; Yan, W. 3D Semantic Segmentation for High-Resolution Aerial Survey Derived Point Clouds Using Deep Learning. In Proceedings of the 26th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, Seattle, WA, USA, 6–9 November 2018; pp. 588–591.
24. Poliyapram, V.; Wang, W.; Nakamura, R. A point-wise LiDAR and image multimodal fusion network (PMNet) for aerial point cloud 3D semantic segmentation. Remote Sens. 2019, 11, 2961.
25. Gopalakrishnan, R.; Seppänen, A.; Kukkonen, M.; Packalen, P. Utility of image point cloud data towards generating enhanced multitemporal multisensor land cover maps. Int. J. Appl. Earth Obs. Geoinf. 2020, 86, 102012.
26. Zhou, X.; Liu, N.; Tang, F.; Zhao, Y.; Qin, K.; Zhang, L.; Li, D. A deep manifold learning approach for spatial-spectral classification with limited labeled training samples. Neurocomputing 2019, 331, 138–149.
27. Yang, Z.; Jiang, W.; Lin, Y.; Elberink, S.O. Using training samples retrieved from a topographic map and unsupervised segmentation for the classification of airborne laser scanning data. Remote Sens. 2020, 12, 877.
28. Johnson, J.M.; Khoshgoftaar, T.M. Survey on deep learning with class imbalance. J. Big Data 2019, 6, 27.
29. Winiwarter, L.; Mandlburger, G.; Schmohl, S.; Pfeifer, N. Classification of ALS Point Clouds Using End-to-End Deep Learning. PFG J. Photogramm. Remote Sens. Geoinf. Sci. 2019, 87, 75–90.
30. Hensman, P.; Masko, D. The Impact of Imbalanced Training Data for Convolutional Neural Networks. Degree Project in Computer Science; KTH Royal Institute of Technology: Stockholm, Sweden, 2015.
31. Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988.
32. Huang, R.; Xu, Y.; Hong, D.; Yao, W.; Ghamisi, P.; Stilla, U. Deep point embedding for urban classification using ALS point clouds: A new perspective from local to global. ISPRS J. Photogramm. Remote Sens. 2020, 163, 62–81.
33. Xie, Y.; Tian, J.; Zhu, X.X. A review of point cloud semantic segmentation. IEEE Geosci. Remote Sens. Mag. 2019, 8, 38–59.
34. Kadaster and Geonovum. Publieke Dienstverlening Op de Kaart (PDOK). Available online: https://www.pdok.nl/ (accessed on June 2020).
35. GeoTiles Ready-Made Geodata with A Focus on The Netherlands. Available online: https://geotiles.nl (accessed on September 2020).
36. Qian, B. AHN3-Dgcnn.Pytorch. GitHub. Available online: https://github.com/bbbaiqian/AHN3-dgcnn.pytorc (accessed on August 2020).
37. Congalton, R.G. A review of assessing the accuracy of classifications of remotely sensed data. Remote Sens. Environ. 1991, 37, 35–36.
38. Foody, G.M. Status of land cover classification accuracy assessment. Remote Sens. Environ. 2002, 80, 185–201.
39. Maratea, A.; Petrosino, A.; Manzo, M. Adjusted F-measure and kernel scaling for imbalanced data learning. Inf. Sci. 2014, 257, 331–341.
40. Alakus, T.B.; Turkoglu, I. Comparison of deep learning approaches to predict COVID-19 infection. Chaos Solitons Fractals 2020, 140, 110120.
41. Raschka, S. An overview of general performance metrics of binary classifier systems. arXiv 2014, arXiv:1410.53330v1.
42. Tharwat, A. Classification assessment methods. Appl. Comput. Inform. 2020.
Feature Sets | Set Name | Input Features
---|---|---
Set 1 | RGB |
Set 2 | IRnN |
Set 3 | RGI |
Set 4 | RGBIRnN |
Grid ID | | | Usage
---|---|---|---
38FN1 | 47.5 | 25.4 | Training
37EN2 | 50.6 | 28.8 | Training
32CN1 | 87.4 | 27.8 | Training
31HZ2 | 55.9 | 27.4 | Testing
Feature Set | Feature Vector | OA (%) | Avg F1 Score (%) | F1 Bare Land (%) | F1 Trees (%) | F1 Buildings (%) | F1 Roads (%)
---|---|---|---|---|---|---|---
Set 1 | | 83.9 | 81.4 | 83.0 | 80.3 | 87.3 | 75.1
Set 2 | | 85.7 | 83.5 | 84.2 | 81.6 | 89.1 | 79.0
Set 3 | | 83.9 | 81.4 | 83.5 | 79.9 | 87.4 | 74.9
Set 4 | | 91.8 | 88.8 | 87.7 | 88.6 | 94.8 | 84.1
Feature Set | Bare Land Precision | Bare Land Recall | Trees Precision | Trees Recall | Buildings Precision | Buildings Recall | Roads Precision | Roads Recall
---|---|---|---|---|---|---|---|---
Set 1 | 76.2% | 91.2% | 81.6% | 79.0% | 87.6% | 87.0% | 83.9% | 68.0%
Set 2 | 77.5% | 92.1% | 88.4% | 75.7% | 86.9% | 91.4% | 84.2% | 74.5%
Set 3 | 76.3% | 92.1% | 82.3% | 77.7% | 86.9% | 87.8% | 86.1% | 66.2%
Set 4 | 85.2% | 90.4% | 93.3% | 84.3% | 93.4% | 96.2% | 86.9% | 81.6%
Loss Function | Feature Vector | OA (%) | F1 Bare Land (%) | F1 Trees (%) | F1 Buildings (%) | F1 Roads (%)
---|---|---|---|---|---|---
SCE | | 91.8 | 87.7 | 88.6 | 94.8 | 84.1
FL | | 88.1 | 81.8 | 85.3 | 92.7 | 68.6
Feature Set 4 (RGBIRnN): Prediction \ Reference | Bare Land | Trees | Buildings | Roads | Precision
---|---|---|---|---|---
Bare land | 340,132 | 770 | 33,529 | 78,364 | 75.1%
Trees | 304 | 553,175 | 105,094 | 42 | 84.0%
Building | 18,099 | 83,704 | 1,552,315 | 3367 | 93.7%
Road | 20,557 | 97 | 763 | 112,952 | 84.1%
Recall | 89.7% | 86.7% | 91.8% | 58.0% | 88.1%
Block Size (m) | OA (%) | Avg F1 Score (%) | F1 Bare Land (%) | F1 Buildings (%) | F1 Water (%) | F1 Others (%)
---|---|---|---|---|---|---
30 | 91.7 | 84.8 | 95.2 | 83.1 | 67.8 | 92.9
50 | 93.3 | 89.7 | 95.8 | 87.7 | 81.1 | 94.0
70 | 93.0 | 88.05 | 95.8 | 87.3 | 75.5 | 93.6
Block Size (m) | Bare Land Precision | Bare Land Recall | Buildings Precision | Buildings Recall | Water Precision | Water Recall | Others Precision | Others Recall
---|---|---|---|---|---|---|---|---
30 | 92.9% | 97.5% | 92.6% | 75.4% | 81.3% | 58.2% | 90.7% | 95.1%
50 | 94.5% | 97.1% | 90.3% | 85.3% | 81.4% | 80.9% | 93.8% | 94.3%
70 | 97.8% | 94.0% | 86.6% | 87.9% | 66.5% | 87.2% | 92.8% | 94.5%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).