Local Geometric Structure Feature for Dimensionality Reduction of Hyperspectral Imagery
Figure 1. Process of the proposed LGSFA method.
Figure 2. Graph construction of the LGSFA method.
Figure 3. An example of the processing of LGSFA.
Figure 4. Salinas hyperspectral image. (a) HSI in false color; (b) ground truth.
Figure 5. Indian Pines hyperspectral image. (a) HSI in false color; (b) ground truth.
Figure 6. Urban hyperspectral image. (a) HSI in false color; (b) ground truth.
Figure 7. Two-dimension embedding of different DR methods on the Indian Pines data set. (a) Spectral signatures; (b) PCA; (c) NPE; (d) LPP; (e) SDE; (f) LFDA; (g) MMC; (h) MFA; (i) LGSFA.
Figure 8. OAs with respect to different numbers of neighbors on the Salinas data set.
Figure 9. OAs with respect to different dimensions on the Salinas data set.
Figure 10. Classification maps of different methods with SVMCK on the Salinas data set. (a) Ground truth; (b) Baseline (95.8%, 0.954); (c) PCA (94.8%, 0.942); (d) NPE (96.4%, 0.960); (e) LPP (96.0%, 0.955); (f) SDE (95.5%, 0.950); (g) LFDA (96.3%, 0.958); (h) MMC (94.7%, 0.941); (i) MFA (95.5%, 0.950); (j) LGSFA (99.2%, 0.991). OA and KC are given in parentheses.
Figure 11. OAs with respect to different numbers of neighbors on the Indian Pines data set.
Figure 12. OAs with respect to different dimensions on the Indian Pines data set.
Figure 13. Classification maps of different methods with SVMCK on the Indian Pines data set. (a) Ground truth; (b) Baseline (93.9%, 0.930); (c) PCA (91.8%, 0.907); (d) NPE (92.0%, 0.908); (e) LPP (91.0%, 0.897); (f) SDE (91.1%, 0.899); (g) LFDA (90.9%, 0.896); (h) MMC (89.9%, 0.885); (i) MFA (93.5%, 0.926); (j) LGSFA (98.1%, 0.978). OA and KC are given in parentheses.
Figure 14. OAs with respect to different numbers of neighbors on the Urban data set.
Figure 15. OAs with respect to different dimensions on the Urban data set.
Figure 16. Classification maps of different methods with SVMCK on the Urban data set. (a) Ground truth; (b) Baseline (86.0%, 0.765); (c) PCA (85.1%, 0.750); (d) NPE (86.5%, 0.773); (e) LPP (87.0%, 0.781); (f) SDE (85.7%, 0.758); (g) LFDA (85.6%, 0.759); (h) MMC (86.1%, 0.765); (i) MFA (87.5%, 0.788); (j) LGSFA (88.8%, 0.809). OA and KC are given in parentheses.
Abstract
1. Introduction
2. Related Works
2.1. Graph Embedding
2.2. Marginal Fisher Analysis
3. Local Geometric Structure Fisher Analysis
Algorithm 1 LGSFA
4. Experimental Results and Discussion
4.1. Data Sets
4.2. Experimental Setup
4.3. Two-Dimension Embedding
4.4. Experiments on the Salinas Data Set
4.5. Experiments on the Indian Pines Data Set
4.6. Experiments on the Urban Data Set
4.7. Discussion
- The proposed LGSFA method consistently outperforms Baseline, PCA, NPE, LPP, SDE, LFDA, MMC, and MFA under most conditions on the three real HSI data sets. The reason is that LGSFA uses both the neighbor points and the corresponding intraclass reconstruction points to construct the intrinsic and penalty graphs, whereas MFA uses only the neighbor points. As a result, the proposed method can effectively compact the intraclass data and separate the interclass data, and it captures more of the intrinsic information hidden in HSI data sets than the other methods.
- LGSFA produces smoother classification maps and achieves better accuracy than the other methods in most classes. LGSFA effectively reveals the intrinsic manifold structure of hyperspectral data; it therefore extracts more discriminative features and improves the classification performance of the NN, SAM, and SVMCK classifiers on hyperspectral data.
- In the experiments, the SVMCK classifier consistently performs better than NN and SAM. The reason is that SVMCK exploits both spatial and spectral information, while NN and SAM use only spectral information for HSI classification.
- In terms of running time, the computational complexity of LGSFA depends on the number of bands, training samples, and neighbor points. The proposed method costs more time than the other DR algorithms because constructing the intrinsic and penalty graphs requires substantial computation.
- In the two-dimension embedding experiments, LGSFA achieves a better distribution of the data points than the other DR methods. The results show that LGSFA improves intra-manifold compactness and inter-manifold separability, enhancing the discriminability of data from different classes.
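The graph-embedding machinery discussed in these points can be sketched in a few lines of numpy. The following is a simplified, generic MFA-style projection, not the authors' exact LGSFA formulation (which additionally exploits intraclass reconstruction points); the function name and parameters are illustrative:

```python
import numpy as np

def graph_embedding_dr(X, y, k_intra=3, k_inter=3, d=2):
    """Simplified MFA-style DR sketch: connect same-class nearest neighbors
    in an intrinsic graph (to be compacted) and different-class nearest
    neighbors in a penalty graph (to be separated), then solve a
    generalized eigenproblem for the projection matrix."""
    n = X.shape[0]
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    W_i = np.zeros((n, n))  # intrinsic (within-class) adjacency
    W_p = np.zeros((n, n))  # penalty (between-class) adjacency
    for i in range(n):
        same = np.where(y == y[i])[0]
        same = same[same != i]
        diff = np.where(y != y[i])[0]
        for j in same[np.argsort(D2[i, same])][:k_intra]:
            W_i[i, j] = W_i[j, i] = 1.0
        for j in diff[np.argsort(D2[i, diff])][:k_inter]:
            W_p[i, j] = W_p[j, i] = 1.0
    L_i = np.diag(W_i.sum(1)) - W_i  # graph Laplacians
    L_p = np.diag(W_p.sum(1)) - W_p
    A = X.T @ L_p @ X                                   # separability term (maximize)
    B = X.T @ L_i @ X + 1e-6 * np.eye(X.shape[1])       # compactness term (minimize)
    vals, vecs = np.linalg.eig(np.linalg.solve(B, A))   # generalized eigenproblem
    V = vecs[:, np.argsort(vals.real)[::-1][:d]].real   # top-d projection directions
    return X @ V                                         # low-dimensional embedding
```

In this view, LGSFA's contribution is how `W_i` and `W_p` are weighted: it augments the neighbor-based edges with edges to intraclass reconstruction points, which this sketch omits.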
5. Conclusions
Acknowledgments
Author Contributions
Conflicts of Interest
References
- Liang, H.M.; Li, Q. Hyperspectral imagery classification using sparse representations of convolutional neural network features. Remote Sens. 2016, 8, 919.
- He, W.; Zhang, H.Y.; Zhang, L.P.; Philips, W.; Liao, W.Z. Weighted sparse graph based dimensionality reduction for hyperspectral images. IEEE Geosci. Remote Sens. Lett. 2016, 13, 686–690.
- Zhong, Y.F.; Wang, X.Y.; Zhao, L.; Feng, R.Y.; Zhang, L.P.; Xu, Y.Y. Blind spectral unmixing based on sparse component analysis for hyperspectral remote sensing imagery. J. Photogramm. Remote Sens. 2016, 119, 49–63.
- Zhou, Y.C.; Peng, J.T.; Chen, C.L.P. Dimension reduction using spatial and spectral regularized local discriminant embedding for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1082–1095.
- Feng, F.B.; Li, W.; Du, Q.; Zhang, B. Dimensionality reduction of hyperspectral image with graph-based discriminant analysis considering spectral similarity. Remote Sens. 2017, 9, 323.
- Sun, W.W.; Halevy, A.; Benedetto, J.J.; Czaja, W.; Liu, C.; Wu, H.B.; Shi, B.Q.; Li, W.Y. Ulisomap based nonlinear dimensionality reduction for hyperspectral imagery classification. J. Photogramm. Remote Sens. 2014, 89, 25–36.
- Rathore, M.M.U.; Paul, A.; Ahmad, A.; Chen, B.W.; Huang, B.; Ji, W. Real-time big data analytical architecture for remote sensing application. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 4610–4621.
- Tong, Q.X.; Xue, Y.Q.; Zhang, L.F. Progress in hyperspectral remote sensing science and technology in China over the past three decades. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 70–91.
- Huang, H.; Luo, F.L.; Liu, J.M.; Yang, Y.Q. Dimensionality reduction of hyperspectral images based on sparse discriminant manifold embedding. J. Photogramm. Remote Sens. 2015, 106, 42–54.
- Zhang, L.P.; Zhong, Y.F.; Huang, B.; Gong, J.Y.; Li, P.X. Dimensionality reduction based on clonal selection for hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2007, 45, 4172–4186.
- Cheng, G.L.; Zhu, F.Y.; Xiang, S.M.; Wang, Y.; Pan, C.H. Semisupervised hyperspectral image classification via discriminant analysis and robust regression. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 595–608.
- Shi, Q.; Zhang, L.P.; Du, B. Semisupervised discriminative locally enhanced alignment for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4800–4815.
- Huang, H.; Yang, M. Dimensionality reduction of hyperspectral images with sparse discriminant embedding. IEEE Trans. Geosci. Remote Sens. 2015, 53, 5160–5169.
- Yang, S.Y.; Jin, P.L.; Li, B.; Yang, L.X.; Xu, W.H.; Jiao, L.C. Semisupervised dual-geometric subspace projection for dimensionality reduction of hyperspectral image data. IEEE Trans. Geosci. Remote Sens. 2014, 52, 3587–3593.
- Zhang, L.F.; Zhang, L.P.; Tao, D.C.; Huang, X.; Du, B. Compression of hyperspectral remote sensing images by tensor approach. Neurocomputing 2015, 147, 358–363.
- Cheng, X.M.; Chen, Y.R.; Tao, Y.; Wang, C.Y.; Kim, M.S.; Lefcourt, A.M. A novel integrated PCA and FLD method on hyperspectral image feature extraction for cucumber chilling damage inspection. Trans. ASAE 2004, 47, 1313–1320.
- Guan, L.X.; Xie, W.X.; Pei, J.H. Segmented minimum noise fraction transformation for efficient feature extraction of hyperspectral images. Pattern Recognit. 2015, 48, 3216–3226.
- Cai, D.; He, X.; Han, J. Semi-supervised discriminant analysis. In Proceedings of the International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–21 October 2007; pp. 1–7.
- Li, H.F.; Jiang, T.; Zhang, K.S. Efficient and robust feature extraction by maximum margin criterion. IEEE Trans. Neural Netw. 2006, 17, 157–165.
- Sugiyama, M. Dimensionality reduction of multimodal labeled data by local Fisher discriminant analysis. J. Mach. Learn. Res. 2007, 8, 1027–1061.
- Shao, Z.; Zhang, L. Sparse dimensionality reduction of hyperspectral image based on semi-supervised local Fisher discriminant analysis. Int. J. Appl. Earth Obs. Geoinf. 2014, 31, 122–129.
- Bachmann, C.M.; Ainsworth, T.L.; Fusina, R.A. Exploiting manifold geometry in hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2005, 43, 441–454.
- Yang, H.L.; Crawford, M.M. Domain adaptation with preservation of manifold geometry for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 543–555.
- Ma, L.; Zhang, X.F.; Yu, X.; Luo, D.P. Spatial regularized local manifold learning for classification of hyperspectral images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 609–624.
- Tang, Y.Y.; Yuan, H.L.; Li, L.Q. Manifold-based sparse representation for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7606–7618.
- Zhang, L.F.; Zhang, Q.; Zhang, L.P.; Tao, D.C.; Huang, X.; Du, B. Ensemble manifold regularized sparse low-rank approximation for multiview feature embedding. Pattern Recognit. 2015, 48, 3102–3112.
- Tenenbaum, J.B.; de Silva, V.; Langford, J.C. A global geometric framework for nonlinear dimensionality reduction. Science 2000, 290, 2319–2323.
- Belkin, M.; Niyogi, P. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput. 2003, 15, 1373–1396.
- Roweis, S.T.; Saul, L.K. Nonlinear dimensionality reduction by locally linear embedding. Science 2000, 290, 2323–2326.
- Wang, Z.Y.; He, B.B. Locality preserving projections algorithm for hyperspectral image dimensionality reduction. In Proceedings of the 19th International Conference on Geoinformatics, Shanghai, China, 24–26 June 2011; pp. 1–4.
- He, X.F.; Cai, D.; Yan, S.C.; Zhang, H.J. Neighborhood preserving embedding. In Proceedings of the 10th International Conference on Computer Vision, Beijing, China, 17–21 October 2005; pp. 1208–1213.
- Yan, S.C.; Xu, D.; Zhang, B.Y.; Zhang, H.J.; Yang, Q.; Lin, S. Graph embedding and extensions: A general framework for dimensionality reduction. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 40–51.
- Chen, Y.S.; Zhao, X.; Jia, X.P. Spectral-spatial classification of hyperspectral data based on deep belief network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2381–2392.
- Feng, Z.X.; Yang, S.Y.; Wang, S.G.; Jiao, L.C. Discriminative spectral-spatial margin-based semisupervised dimensionality reduction of hyperspectral data. IEEE Geosci. Remote Sens. Lett. 2015, 12, 224–228.
- Luo, F.L.; Huang, H.; Ma, Z.Z.; Liu, J.M. Semisupervised sparse manifold discriminative analysis for feature extraction of hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6197–6211.
- Camps-Valls, G.; Gomez-Chova, L.; Munoz-Mari, J.; Vila-Frances, J.; Calpe-Maravilla, J. Composite kernels for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2006, 3, 93–97.
- Chang, C.C.; Lin, C.J. LIBSVM: A Library for Support Vector Machines. 2001. Available online: http://www.csie.ntu.edu.tw/~cjlin/libsvm (accessed on 18 May 2017).
Classifier | DR | OA (%) ± STD (KC) at increasing training-set sizes | | |
---|---|---|---|---|---
NN | Baseline | 82.4 ± 0.9 (0.805) | 84.1 ± 0.7 (0.824) | 84.7 ± 0.6 (0.830) | 85.1 ± 0.4 (0.834) |
PCA | 82.4 ± 0.9 (0.805) | 84.1 ± 0.7 (0.824) | 84.6 ± 0.6 (0.830) | 85.1 ± 0.4 (0.834) | |
NPE | 83.3 ± 0.9 (0.815) | 85.8 ± 0.6 (0.842) | 86.2 ± 0.5 (0.847) | 87.1 ± 0.4 (0.856) | |
LPP | 83.6 ± 0.9 (0.818) | 85.6 ± 0.4 (0.840) | 86.1 ± 0.5 (0.845) | 86.7 ± 0.5 (0.852) | |
SDE | 82.5 ± 1.0 (0.806) | 83.8 ± 0.8 (0.821) | 84.9 ± 0.6 (0.832) | 85.1 ± 0.3 (0.835) | |
LFDA | 83.5 ± 1.0 (0.818) | 84.8 ± 0.6 (0.832) | 85.3 ± 0.6 (0.836) | 85.7 ± 0.4 (0.841) | |
MMC | 82.2 ± 0.9 (0.803) | 84.0 ± 0.7 (0.822) | 84.4 ± 0.5 (0.827) | 84.9 ± 0.4 (0.832) | |
MFA | 86.6 ± 1.0 (0.852) | 87.9 ± 0.7 (0.866) | 88.2 ± 0.8 (0.869) | 88.4 ± 0.7 (0.871) | |
LGSFA | 87.3 ± 1.4 (0.859) | 88.8 ± 0.4 (0.875) | 89.4 ± 0.6 (0.882) | 89.8 ± 0.5 (0.886) | |
SAM | Baseline | 82.9 ± 1.0 (0.811) | 84.0 ± 0.9 (0.823) | 84.3 ± 0.7 (0.826) | 85.5 ± 0.5 (0.839) |
PCA | 82.8 ± 1.0 (0.809) | 83.8 ± 0.8 (0.821) | 84.1 ± 0.7 (0.824) | 85.3 ± 0.4 (0.837) | |
NPE | 84.2 ± 1.1 (0.825) | 85.7 ± 1.0 (0.841) | 86.1 ± 0.9 (0.846) | 87.3 ± 0.5 (0.859) | |
LPP | 83.2 ± 1.1 (0.814) | 84.8 ± 0.9 (0.832) | 85.3 ± 0.9 (0.837) | 86.6 ± 0.4 (0.851) | |
SDE | 83.4 ± 1.0 (0.815) | 84.0 ± 1.1 (0.823) | 85.2 ± 0.5 (0.835) | 85.2 ± 0.4 (0.835) | |
LFDA | 83.7 ± 1.0 (0.819) | 84.4 ± 0.9 (0.827) | 84.5 ± 0.7 (0.829) | 85.6 ± 0.5 (0.840) | |
MMC | 82.6 ± 0.9 (0.807) | 83.6 ± 0.9 (0.818) | 84.0 ± 0.6 (0.823) | 85.0 ± 0.5 (0.833) | |
MFA | 85.6 ± 1.1 (0.840) | 86.7 ± 0.7 (0.853) | 87.0 ± 0.6 (0.856) | 87.4 ± 0.9 (0.860) | |
LGSFA | 88.2 ± 0.6 (0.869) | 89.3 ± 1.0 (0.881) | 89.4 ± 0.8 (0.882) | 90.0 ± 0.4 (0.889) | |
SVMCK | Baseline | 89.2 ± 1.2 (0.880) | 92.3 ± 0.9 (0.914) | 94.1 ± 0.5 (0.934) | 94.6 ± 0.4 (0.940) |
PCA | 88.1 ± 1.1 (0.868) | 91.6 ± 0.7 (0.906) | 93.7 ± 0.9 (0.930) | 94.3 ± 0.4 (0.937) | |
NPE | 87.0 ± 2.3 (0.855) | 91.2 ± 1.0 (0.902) | 93.4 ± 0.9 (0.927) | 95.1 ± 0.3 (0.945) | |
LPP | 86.7 ± 0.7 (0.853) | 90.5 ± 1.1 (0.894) | 92.8 ± 0.7 (0.920) | 93.5 ± 1.0 (0.927) | |
SDE | 86.2 ± 2.0 (0.847) | 90.4 ± 1.2 (0.893) | 92.6 ± 0.7 (0.917) | 93.4 ± 0.6 (0.927) | |
LFDA | 89.6 ± 1.4 (0.885) | 92.1 ± 1.2 (0.912) | 94.3 ± 0.7 (0.936) | 94.8 ± 0.7 (0.942) | |
MMC | 89.3 ± 2.0 (0.881) | 91.9 ± 0.3 (0.910) | 93.5 ± 0.8 (0.928) | 94.1 ± 0.8 (0.934) | |
MFA | 92.3 ± 1.5 (0.914) | 94.9 ± 0.9 (0.943) | 95.9 ± 0.3 (0.954) | 96.1 ± 0.4 (0.956) | |
LGSFA | 94.2 ± 1.2 (0.935) | 95.8 ± 0.8 (0.953) | 96.8 ± 0.5 (0.964) | 97.1 ± 0.4 (0.968) |
Per-class results of each DR method with the SVMCK classifier; the Training and Test columns give the sample counts per class.

Class | Training | Test | Baseline | PCA | NPE | LPP | SDE | LFDA | MMC | MFA | LGSFA
---|---|---|---|---|---|---|---|---|---|---|---
1 | 40 | 1969 | 100 | 100 | 99.9 | 99.9 | 99.7 | 100 | 100 | 99.8 | 100 |
2 | 75 | 3651 | 99.5 | 99.9 | 100 | 99.9 | 99.5 | 99.8 | 99.9 | 100 | 100 |
3 | 40 | 1936 | 100 | 97.1 | 99.9 | 99.9 | 99.9 | 99.5 | 96.1 | 99.8 | 100 |
4 | 28 | 1366 | 99.9 | 99.9 | 99.8 | 99.7 | 99.7 | 99.9 | 99.9 | 99.9 | 100 |
5 | 54 | 2624 | 99.6 | 96.4 | 98.6 | 98.9 | 98.5 | 98.7 | 97.1 | 99.3 | 99.8 |
6 | 79 | 3880 | 100 | 99.9 | 99.9 | 100 | 100.0 | 100 | 99.9 | 100 | 100 |
7 | 72 | 3507 | 99.5 | 99.4 | 99.7 | 99.7 | 99.3 | 99.5 | 99.5 | 100 | 100 |
8 | 225 | 11,046 | 90.1 | 89.1 | 89.7 | 90.4 | 89.5 | 91.0 | 88.8 | 92.4 | 97.8 |
9 | 124 | 6079 | 99.7 | 99.5 | 100 | 99.1 | 100 | 100 | 99.1 | 100 | 100 |
10 | 66 | 3212 | 97.9 | 96.9 | 98.2 | 97.2 | 98.0 | 98.8 | 97.1 | 98.0 | 100 |
11 | 21 | 1047 | 97.8 | 96.0 | 99.4 | 99.0 | 96.6 | 99.5 | 97.1 | 99.1 | 100 |
12 | 39 | 1888 | 100 | 99.9 | 99.9 | 100 | 99.9 | 99.7 | 100 | 100 | 100 |
13 | 18 | 898 | 97.9 | 96.8 | 97.6 | 88.8 | 98.0 | 99.2 | 96.1 | 100 | 99.2 |
14 | 21 | 1049 | 99.3 | 98.7 | 99.7 | 97.6 | 96.6 | 97.0 | 98.7 | 98.7 | 98.8 |
15 | 145 | 7123 | 87.2 | 84.0 | 91.6 | 90.3 | 87.3 | 88.6 | 84.3 | 80.1 | 97.4 |
16 | 36 | 1771 | 98.9 | 99.1 | 98.7 | 97.7 | 97.5 | 98.9 | 98.8 | 99.5 | 100 |
AA | 98.0 | 97.0 | 98.3 | 97.4 | 97.5 | 98.1 | 97.0 | 97.9 | 99.6 | ||
OA | 95.8 | 94.8 | 96.4 | 96.0 | 95.5 | 96.3 | 94.7 | 95.5 | 99.2 | ||
KC | 0.954 | 0.942 | 0.960 | 0.955 | 0.950 | 0.958 | 0.941 | 0.950 | 0.991 | ||
DR time (s) | - | 0.059 | 0.220 | 0.119 | 2.228 | 0.045 | 0.243 | 0.179 | 2.185 | ||
Classification time (s) | 1194.0 | 360.0 | 353.8 | 349.5 | 400.5 | 359.9 | 357.2 | 356.5 | 353.7 |
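The OA, AA, and KC figures reported throughout these tables follow standard definitions and can be reproduced from a confusion matrix. A minimal sketch (the function name is illustrative, not from the authors' code):

```python
import numpy as np

def accuracy_metrics(conf):
    """Compute OA, AA, and Cohen's kappa (KC) from a confusion matrix
    where conf[i, j] counts class-i samples predicted as class j."""
    conf = np.asarray(conf, dtype=float)
    total = conf.sum()
    oa = np.trace(conf) / total                            # overall accuracy
    aa = np.mean(np.diag(conf) / conf.sum(axis=1))         # average per-class accuracy
    pe = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / total ** 2  # chance agreement
    kc = (oa - pe) / (1.0 - pe)                            # kappa coefficient
    return oa, aa, kc
```

For example, the confusion matrix `[[9, 1], [2, 8]]` yields OA = 0.85, AA = 0.85, and KC = 0.70.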
Classifier | DR | OA (%) ± STD (KC) at increasing training-set sizes | | |
---|---|---|---|---|---
NN | Baseline | 55.1 ± 1.1 (0.498) | 58.5 ± 1.6 (0.535) | 61.1 ± 0.9 (0.562) | 62.6 ± 0.7 (0.578) |
PCA | 55.0 ± 1.2 (0.496) | 58.6 ± 1.7 (0.535) | 61.4 ± 1.1 (0.565) | 62.6 ± 0.9 (0.578) | |
NPE | 54.5 ± 1.9 (0.491) | 57.3 ± 1.5 (0.521) | 60.1 ± 1.3 (0.552) | 61.5 ± 1.0 (0.566) | |
LPP | 55.9 ± 1.4 (0.506) | 60.3 ± 1.6 (0.554) | 62.8 ± 1.1 (0.581) | 64.4 ± 0.4 (0.598) | |
SDE | 53.3 ± 1.2 (0.478) | 57.5 ± 0.9 (0.523) | 61.0 ± 0.9 (0.561) | 62.3 ± 0.8 (0.575) | |
LFDA | 55.2 ± 1.4 (0.499) | 59.2 ± 1.6 (0.542) | 62.4 ± 0.8 (0.576) | 63.9 ± 0.9 (0.593) | |
MMC | 54.8 ± 1.4 (0.494) | 58.7 ± 1.6 (0.537) | 61.9 ± 1.0 (0.571) | 63.4 ± 0.5 (0.587) | |
MFA | 51.0 ± 1.1 (0.454) | 57.8 ± 1.2 (0.528) | 60.8 ± 1.6 (0.559) | 61.3 ± 1.0 (0.564) | |
LGSFA | 58.1 ± 1.5 (0.530) | 67.0 ± 1.6 (0.628) | 72.2 ± 0.8 (0.686) | 73.7 ± 0.9 (0.702) | |
SAM | Baseline | 55.5 ± 1.8 (0.502) | 60.4 ± 1.1 (0.555) | 62.2 ± 1.5 (0.574) | 64.6 ± 1.1 (0.600) |
PCA | 55.8 ± 1.8 (0.505) | 60.9 ± 1.2 (0.560) | 62.3 ± 1.4 (0.575) | 64.6 ± 1.2 (0.600) | |
NPE | 53.0 ± 2.0 (0.474) | 58.0 ± 1.6 (0.528) | 60.2 ± 1.1 (0.551) | 62.3 ± 1.3 (0.575) | |
LPP | 55.8 ± 1.8 (0.505) | 61.0 ± 1.5 (0.561) | 62.8 ± 1.2 (0.581) | 65.0 ± 1.1 (0.605) | |
SDE | 54.1 ± 2.0 (0.487) | 58.7 ± 0.7 (0.537) | 60.2 ± 1.1 (0.552) | 62.9 ± 1.0 (0.581) | |
LFDA | 54.3 ± 1.8 (0.489) | 60.0 ± 1.4 (0.550) | 62.1 ± 1.5 (0.573) | 64.4 ± 1.3 (0.598) | |
MMC | 58.6 ± 2.0 (0.537) | 63.6 ± 1.0 (0.590) | 65.7 ± 1.2 (0.613) | 67.3 ± 1.1 (0.630) | |
MFA | 48.8 ± 2.2 (0.432) | 57.0 ± 2.2 (0.519) | 58.4 ± 1.9 (0.533) | 58.9 ± 1.9 (0.538) | |
LGSFA | 57.6 ± 1.5 (0.525) | 67.7 ± 1.1 (0.635) | 71.0 ± 1.4 (0.672) | 73.3 ± 0.8 (0.697) | |
SVMCK | Baseline | 77.1 ± 1.7 (0.743) | 84.6 ± 1.1 (0.825) | 88.2 ± 1.0 (0.865) | 89.8 ± 1.0 (0.883) |
PCA | 73.1 ± 2.5 (0.697) | 81.8 ± 1.5 (0.793) | 85.4 ± 1.0 (0.833) | 87.9 ± 1.5 (0.862) | |
NPE | 73.1 ± 2.9 (0.697) | 81.6 ± 2.2 (0.791) | 85.4 ± 1.2 (0.834) | 87.7 ± 1.7 (0.859) | |
LPP | 70.7 ± 1.6 (0.672) | 81.1 ± 2.1 (0.786) | 84.6 ± 1.2 (0.825) | 87.4 ± 1.4 (0.856) | |
SDE | 73.4 ± 1.8 (0.700) | 81.8 ± 1.4 (0.794) | 86.7 ± 1.1 (0.849) | 89.2 ± 0.8 (0.876) | |
LFDA | 74.4 ± 2.4 (0.712) | 82.4 ± 1.4 (0.800) | 86.2 ± 1.1 (0.843) | 88.1 ± 1.0 (0.863) | |
MMC | 74.7 ± 2.3 (0.715) | 81.3 ± 1.3 (0.788) | 84.4 ± 2.7 (0.822) | 84.7 ± 2.4 (0.825) | |
MFA | 71.3 ± 2.0 (0.678) | 82.2 ± 1.4 (0.798) | 88.1 ± 1.0 (0.865) | 91.4 ± 1.1 (0.901) | |
LGSFA | 85.3 ± 3.1 (0.833) | 91.2 ± 1.7 (0.900) | 95.0 ± 1.0 (0.942) | 96.2 ± 1.0 (0.957) |
Per-class results of each DR method with the SVMCK classifier; the Training and Test columns give the sample counts per class.

Class | Training | Test | Baseline | PCA | NPE | LPP | SDE | LFDA | MMC | MFA | LGSFA
---|---|---|---|---|---|---|---|---|---|---|---
1 | 10 | 36 | 83.3 | 91.7 | 75.0 | 72.2 | 94.4 | 61.1 | 44.4 | 55.6 | 66.7 |
2 | 143 | 1285 | 87.9 | 87.2 | 86.2 | 85.3 | 85.7 | 84.4 | 85.2 | 91.3 | 96.5 |
3 | 83 | 747 | 93.3 | 89.6 | 85.4 | 85.0 | 87.1 | 91.6 | 84.5 | 94.8 | 96.3 |
4 | 24 | 213 | 93.9 | 79.3 | 93.0 | 90.1 | 85.0 | 73.7 | 84.5 | 74.2 | 97.7 |
5 | 48 | 435 | 95.2 | 95.9 | 95.6 | 95.9 | 97.2 | 96.3 | 96.8 | 95.9 | 97.5 |
6 | 73 | 657 | 99.7 | 100 | 100 | 99 | 99.7 | 100 | 97.0 | 99.1 | 100 |
7 | 10 | 18 | 94.4 | 94.4 | 100 | 100 | 94.4 | 94.4 | 88.9 | 100 | 100 |
8 | 48 | 430 | 99.8 | 97.4 | 99 | 100 | 99.5 | 99.1 | 99.8 | 100 | 100 |
9 | 10 | 10 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 90.0 | 100 |
10 | 97 | 875 | 85.7 | 86.6 | 86.5 | 83.3 | 82.1 | 86.3 | 80.2 | 86.1 | 96.1 |
11 | 246 | 2209 | 95.2 | 91.1 | 92.3 | 91.4 | 91.8 | 90.6 | 91.1 | 94.7 | 99.3 |
12 | 59 | 534 | 92.3 | 84.8 | 83.9 | 82.0 | 83.3 | 81.3 | 85.0 | 88.4 | 97.8 |
13 | 21 | 184 | 98.9 | 98.9 | 97.8 | 98.9 | 97.8 | 99.5 | 98.9 | 98.9 | 97.3 |
14 | 127 | 1138 | 97.8 | 98.1 | 98.6 | 98.0 | 97.7 | 98.2 | 96.9 | 97.7 | 99.9 |
15 | 39 | 347 | 96.3 | 96.5 | 97.4 | 96.8 | 94.2 | 94.2 | 88.8 | 93.7 | 100 |
16 | 10 | 83 | 97.6 | 96 | 100 | 89.2 | 90.4 | 88 | 92.8 | 98.8 | 92.8 |
AA | 94.5 | 93.0 | 93.2 | 91.7 | 92.5 | 89.9 | 88.4 | 91.2 | 96.1 | ||
OA | 93.9 | 91.8 | 92.0 | 91.0 | 91.1 | 90.9 | 89.9 | 93.5 | 98.1 | ||
KC | 0.930 | 0.907 | 0.908 | 0.897 | 0.899 | 0.896 | 0.885 | 0.926 | 0.978 | ||
DR time (s) | - | 0.012 | 0.163 | 0.051 | 2.056 | 0.012 | 0.528 | 0.086 | 1.827 | ||
Classification time (s) | 40.7 | 12.7 | 12.7 | 12.6 | 14.4 | 12.7 | 12.7 | 12.9 | 12.8 |
Classifier | DR | OA (%) ± STD (KC) at increasing training-set sizes | | |
---|---|---|---|---|---
NN | Baseline | 72.3 ± 3.5 (0.577) | 72.6 ± 2.1 (0.585) | 75.1 ± 1.0 (0.614) | 76.0 ± 1.3 (0.627) |
PCA | 72.3 ± 3.5 (0.577) | 72.6 ± 2.1 (0.585) | 75.1 ± 1.0 (0.615) | 76.0 ± 1.3 (0.627) | |
NPE | 72.2 ± 3.5 (0.575) | 71.7 ± 2.1 (0.574) | 74.8 ± 1.0 (0.613) | 75.7 ± 1.4 (0.627) | |
LPP | 71.9 ± 3.9 (0.571) | 72.1 ± 2.2 (0.579) | 74.3 ± 0.9 (0.604) | 75.3 ± 1.6 (0.617) | |
SDE | 72.6 ± 3.6 (0.581) | 72.9 ± 2.2 (0.589) | 75.3 ± 1.1 (0.617) | 76.1 ± 1.3 (0.628) | |
LFDA | 75.3 ± 2.7 (0.620) | 75.6 ± 1.6 (0.625) | 76.9 ± 1.0 (0.641) | 77.5 ± 1.3 (0.651) | |
MMC | 71.4 ± 4.0 (0.565) | 70.9 ± 2.0 (0.562) | 72.3 ± 1.7 (0.576) | 73.4 ± 1.7 (0.590) | |
MFA | 74.6 ± 3.3 (0.609) | 74.1 ± 2.5 (0.605) | 76.2 ± 1.0 (0.630) | 77.1 ± 1.1 (0.644) | |
LGSFA | 75.3 ± 3.1 (0.622) | 76.7 ± 1.7 (0.641) | 78.1 ± 1.5 (0.658) | 78.7 ± 0.5 (0.669) | |
SAM | Baseline | 70.3 ± 2.8 (0.552) | 72.0 ± 3.4 (0.572) | 72.6 ± 2.1 (0.584) | 73.5 ± 1.3 (0.595) |
PCA | 70.2 ± 2.7 (0.550) | 71.9 ± 3.4 (0.570) | 72.6 ± 2.1 (0.583) | 73.4 ± 1.3 (0.594) | |
NPE | 72.0 ± 2.6 (0.574) | 74.7 ± 3.6 (0.609) | 75.4 ± 2.4 (0.620) | 75.8 ± 1.4 (0.628) | |
LPP | 72.0 ± 2.8 (0.576) | 74.0 ± 4.0 (0.600) | 74.5 ± 1.9 (0.610) | 75.4 ± 1.7 (0.623) | |
SDE | 70.3 ± 2.7 (0.551) | 71.9 ± 3.4 (0.571) | 72.6 ± 2.1 (0.584) | 73.5 ± 1.3 (0.595) | |
LFDA | 74.6 ± 2.0 (0.611) | 74.9 ± 3.3 (0.614) | 75.5 ± 1.5 (0.623) | 75.8 ± 1.2 (0.628) | |
MMC | 69.3 ± 2.9 (0.537) | 69.3 ± 4.5 (0.535) | 70.5 ± 2.5 (0.552) | 70.6 ± 2.4 (0.554) | |
MFA | 72.4 ± 2.1 (0.580) | 73.6 ± 3.6 (0.595) | 74.3 ± 1.8 (0.607) | 75.0 ± 1.3 (0.617) | |
LGSFA | 76.7 ± 1.9 (0.641) | 77.8 ± 2.5 (0.655) | 78.1 ± 1.1 (0.659) | 78.9 ± 1.2 (0.671) | |
SVMCK | Baseline | 75.5 ± 3.1 (0.626) | 78.2 ± 0.9 (0.661) | 79.8 ± 0.9 (0.684) | 81.4 ± 1.4 (0.707) |
PCA | 72.0 ± 3.7 (0.581) | 74.5 ± 2.5 (0.610) | 75.6 ± 1.7 (0.627) | 78.7 ± 1.4 (0.666) | |
NPE | 73.1 ± 3.0 (0.592) | 77.3 ± 1.7 (0.648) | 78.0 ± 1.5 (0.660) | 80.4 ± 1.8 (0.691) | |
LPP | 72.5 ± 3.3 (0.586) | 77.3 ± 3.5 (0.652) | 79.6 ± 1.5 (0.683) | 81.2 ± 2.0 (0.703) | |
SDE | 73.0 ± 3.5 (0.591) | 75.6 ± 3.2 (0.624) | 77.8 ± 3.0 (0.656) | 80.2 ± 2.7 (0.689) | |
LFDA | 75.2 ± 2.6 (0.622) | 77.9 ± 1.6 (0.658) | 79.1 ± 1.9 (0.675) | 80.5 ± 2.1 (0.693) | |
MMC | 70.5 ± 4.8 (0.558) | 75.7 ± 3.7 (0.628) | 75.2 ± 3.8 (0.621) | 80.4 ± 3.1 (0.691) | |
MFA | 72.6 ± 2.8 (0.587) | 76.7 ± 1.7 (0.642) | 78.3 ± 1.8 (0.664) | 80.2 ± 1.4 (0.688) | |
LGSFA | 79.8 ± 2.8 (0.688) | 82.9 ± 1.4 (0.731) | 84.0 ± 1.5 (0.748) | 85.5 ± 1.0 (0.767) |
Per-class results of each DR method with the SVMCK classifier; the Training and Test columns give the sample counts per class.

Class | Training | Test | Baseline | PCA | NPE | LPP | SDE | LFDA | MMC | MFA | LGSFA
---|---|---|---|---|---|---|---|---|---|---|---
1 | 185 | 9045 | 90.6 | 86.3 | 87.1 | 87.2 | 85.3 | 87.9 | 84.1 | 87.3 | 88.6 |
2 | 29 | 1444 | 14.1 | 54.1 | 53.0 | 51.3 | 51.2 | 55.9 | 46.6 | 60.4 | 53.6 |
3 | 65 | 3165 | 94.7 | 95.6 | 96.3 | 95.7 | 94.5 | 95.4 | 91.9 | 94.8 | 97.2 |
4 | 30 | 1466 | 90.0 | 93.0 | 92.2 | 91.5 | 88.3 | 93.4 | 92.2 | 91.5 | 95.0 |
5 | 829 | 40,613 | 90.7 | 88.9 | 90.8 | 91.4 | 90.4 | 89.5 | 91.6 | 92.3 | 93.5 |
6 | 268 | 13,108 | 73.7 | 72.2 | 73.5 | 74.8 | 72.7 | 72.3 | 72.8 | 73.4 | 75.5 |
AA | 75.6 | 81.7 | 82.2 | 82.0 | 80.4 | 82.4 | 79.9 | 83.3 | 83.9 | ||
OA | 86.0 | 85.1 | 86.5 | 87.0 | 85.7 | 85.6 | 86.1 | 87.5 | 88.8 | ||
KC | 0.765 | 0.750 | 0.773 | 0.781 | 0.758 | 0.759 | 0.765 | 0.788 | 0.809 | ||
DR time (s) | - | 0.040 | 0.308 | 0.085 | 3.390 | 3.390 | 0.449 | 2.185 | 3.837 | ||
Classification time (s) | 1637.7 | 438.0 | 421.3 | 445.5 | 438.5 | 434.4 | 438.1 | 435.8 | 430.4 |
© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Luo, F.; Huang, H.; Duan, Y.; Liu, J.; Liao, Y. Local Geometric Structure Feature for Dimensionality Reduction of Hyperspectral Imagery. Remote Sens. 2017, 9, 790. https://doi.org/10.3390/rs9080790