Adaptive Weighted Graph Fusion Incomplete Multi-View Subspace Clustering
"> Figure 1
<p>Framework of the proposed adaptive weighted graph fusion incomplete multi-view subspace clustering (AWGF-IMSC). It is a novel incomplete multi-view clustering method to fuse the local-structure contained graph with adaptive view-importance learning. Incomplete graphs of different scales are fused into a complete graph with automatically learning weights. In addition, the constructed complete graph will further guide the learning process of incomplete graphs and latent representations.</p> "> Figure 2
<p>Sample images of the four datasets. Row from top to bottom represents images from BUAA, Caltech7, 100Leaves and Mfeat, respectively.</p> "> Figure 3
<p>NMI results with different incomplete ratios on four incomplete datasets.</p> "> Figure 4
<p>Parameter study on four incomplete datasets. Pictures depict the diversification of NMI under different parameter combinations in the condition of missing 10% instances.</p> "> Figure 5
<p>The influence of parameters <math display="inline"><semantics> <msub> <mi>λ</mi> <mn>1</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>λ</mi> <mn>2</mn> </msub> </semantics></math> on NMI when 30% instances are missing on four datasets.</p> "> Figure 6
<p>The convergence curves of the objective values on four datasets.</p> ">
Abstract
1. Introduction
- It performs similarity graph fusion after obtaining the latent spaces, extracting the local structure within each view. By virtue of this, noise present in the original space can be eliminated in the latent space, which contributes to better graph construction.
- It incorporates the relations between missing samples and complete samples into the complete graph. The sparse constraint imposed on the complete graph alleviates view inconsistency and reduces the disagreement between views, making the proposed method more robust in most cases.
- The importance of each view is learned automatically and adapted during optimization, so the more important views exert stronger guidance on the learning process. Moreover, there is no limit on the number of views: the proposed method is applicable to any multi-view dataset.
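As a rough sketch of the graph-fusion idea (not the paper's exact optimization), incomplete per-view graphs can be lifted into the full sample index space and combined with view weights that are re-estimated from how well each view's graph agrees with the fused one. The inverse-residual weight rule below is a common auto-weighting heuristic used purely for illustration; the paper derives its own update from its objective.

```python
import numpy as np

def fuse_graphs(graphs, masks, n, n_iter=20):
    """Fuse incomplete per-view graphs into one complete n x n graph.

    graphs[v] : affinity matrix over the samples observed in view v
    masks[v]  : indices (into 0..n-1) of those observed samples
    The inverse-residual weighting is an illustrative heuristic:
    views whose graph agrees with the fused graph get larger weight.
    """
    lifted = []
    for S_v, idx in zip(graphs, masks):
        L = np.zeros((n, n))
        L[np.ix_(idx, idx)] = S_v  # lift into the full index space
        lifted.append(L)
    w = np.full(len(graphs), 1.0 / len(graphs))
    for _ in range(n_iter):
        # Fused graph = weighted average of the lifted per-view graphs
        S = sum(wv * Lv for wv, Lv in zip(w, lifted)) / w.sum()
        # Re-estimate weights from each view's residual to the fusion
        res = np.array([np.linalg.norm(S - Lv) for Lv in lifted])
        w = 1.0 / (2.0 * res + 1e-12)
    return S, w / w.sum()
```

With two clean views and one noisy view of the same block structure, the noisy view receives the smallest weight, which is the behavior the adaptive weighting is meant to produce.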
2. Notation
3. Related Work
3.1. Semi-Non-Negative Matrix Factorization for Single View
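Semi-NMF (Ding et al.) factorizes a mixed-sign data matrix X ≈ FGᵀ with only the encoding G constrained to be non-negative: F has a closed-form least-squares update, and G a multiplicative update built from the positive and negative parts of the gradient terms. A minimal NumPy sketch (dimensions, iteration count and initialization are illustrative choices):

```python
import numpy as np

def semi_nmf(X, k, n_iter=200, eps=1e-9, seed=0):
    """Semi-NMF: X (p x n, mixed sign) ~ F @ G.T with G >= 0."""
    rng = np.random.default_rng(seed)
    p, n = X.shape
    G = np.abs(rng.standard_normal((n, k)))  # non-negative encoding
    pos = lambda A: (np.abs(A) + A) / 2      # positive part of a matrix
    neg = lambda A: (np.abs(A) - A) / 2      # negative part of a matrix
    for _ in range(n_iter):
        # F is unconstrained, so it has a closed-form least-squares update
        F = X @ G @ np.linalg.pinv(G.T @ G)
        # Multiplicative update keeps G elementwise non-negative
        XtF, FtF = X.T @ F, F.T @ F
        num = pos(XtF) + G @ neg(FtF)
        den = neg(XtF) + G @ pos(FtF) + eps
        G *= np.sqrt(num / den)
    return F, G

X = np.random.default_rng(1).standard_normal((20, 50))
F, G = semi_nmf(X, k=5)
```

Because F is refit by least squares at every step, the reconstruction FGᵀ is the projection of X onto the subspace spanned by the current encoding, so the objective is non-increasing.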
3.2. Subspace Clustering
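The self-expression idea behind subspace clustering — each sample is reconstructed from the other samples lying in the same subspace, and the coefficient matrix is symmetrized into an affinity for spectral clustering — can be sketched with the simplest (ridge) regularizer; SSC and LRR replace it with sparse or low-rank penalties, and the ridge version is used here only because it admits a closed form:

```python
import numpy as np

def self_expression(X, lam=0.1):
    """Self-expression X ~ XZ with a ridge penalty on Z.

    Minimizing ||X - XZ||_F^2 + lam ||Z||_F^2 gives the closed form
    Z = (X^T X + lam I)^{-1} X^T X; the symmetrized |Z| serves as the
    affinity matrix for spectral clustering.
    """
    n = X.shape[1]
    G = X.T @ X
    Z = np.linalg.solve(G + lam * np.eye(n), G)
    return (np.abs(Z) + np.abs(Z.T)) / 2
```

When samples come from independent subspaces, the Gram matrix is block-diagonal, so the learned affinity connects only samples within the same subspace.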
3.3. Incomplete Multi-View Spectral Clustering with Adaptive Graph Learning (IMSC-AGL)
4. Method
4.1. Adaptive Weighted Graph Fusion Incomplete Multi-View Subspace Clustering
4.2. Optimization Algorithm for AWGF-IMSC
4.2.1. Update
4.2.2. Update
4.2.3. Update
4.2.4. Update
4.2.5. Update
4.3. Convergence and Computational Complexity
Algorithm 1: AWGF-IMSC
5. Experiment
5.1. Datasets
- BUAA-visnir face database (BUAA) [49]. The BUAA dataset used in this paper contains 1350 instances from 150 categories. Each instance has a visible image (VIS) and a near-infrared image (NIR), which naturally form a two-view dataset. Both the VIS and NIR images are 640×480 pixels; they are resized and vectorized into 100-dimensional features.
- Caltech7 [50]. The Caltech7 dataset is a subset of the Caltech101 dataset, containing seven categories (Face, Motorbikes, Dollar-Bill, Garfield, Snoopy, Stop-Sign and Windsor-Chair) and 1474 instances. The original images of Caltech7 differ in size. Following [48], we select two of the five given features to form the multi-view dataset: 512-dimensional GIST features [51] and 928-dimensional local binary pattern (LBP) features [51].
- One-hundred plant species leaves dataset (100Leaves) [52]. The 100Leaves dataset contains 1600 instances from 100 categories; its original images also differ in size. Shape-descriptor, fine-scale-margin and texture-histogram features constitute three views that depict the samples from different perspectives.
- Mfeat handwritten digit dataset (Mfeat) [53]. This dataset contains 2000 samples; the original images are 891×702 pixels. The public multi-view version of the dataset has six views. In our experiments, we select the 76-dimensional Fourier coefficients of the character shapes and the 240-dimensional pixel averages.
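To build incomplete versions of these datasets at a given incomplete ratio, a common protocol is to randomly select that fraction of the samples and drop each from one randomly chosen view while keeping it observed in at least one other view; the sketch below follows this protocol, though the paper's exact generation scheme may differ in detail:

```python
import numpy as np

def make_incomplete(n, n_views, ratio, seed=0):
    """Randomly mark `ratio` of the n samples as incomplete.

    Each incomplete sample is removed from exactly one randomly chosen
    view, so every sample remains observed in at least one view.
    Returns, per view, the sorted indices of the observed samples.
    """
    rng = np.random.default_rng(seed)
    n_missing = int(n * ratio)
    missing = rng.choice(n, size=n_missing, replace=False)
    observed = [set(range(n)) for _ in range(n_views)]
    for i in missing:
        observed[rng.integers(n_views)].discard(i)
    return [np.array(sorted(s)) for s in observed]

# e.g., Mfeat with 2 views and a 30% incomplete ratio
views = make_incomplete(n=2000, n_views=2, ratio=0.3)
```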
5.2. Baselines
- Best single view (BSV). BSV first fills each missing sample with the average feature values of its view and constructs an affinity matrix per view with a Gaussian kernel. We then perform spectral clustering on the similarity matrix of each view and report the best clustering performance.
- Partial multi-view clustering (PVC) [35]. This method assumes that instances available in both views share a common representation, while view-specific instances that are missing in the other view retain their specific information. Based on NMF, it integrates the common and view-specific representations in the latent space to form a unified representation.
- Multiple incomplete view clustering via weighted non-negative matrix factorization with regularization (MIC) [38]. This method first fills the missing instances with average feature values and then learns a regularized latent subspace by weighted NMF.
- Incomplete multi-modal visual data grouping (IMG) [36]. IMG uses the latent representation to generate a complete graph, which establishes connections between missing data from different views.
- Doubly aligned incomplete multi-view clustering (DAIMC) [37]. This method first aligns the samples into a common representation by semi-NMF and then aligns the basis matrices with the help of a regularized regression model.
- Incomplete multi-view spectral clustering with adaptive graph learning (INMF_AGL) [48]. This method introduces a co-regularization term to learn the common representation, integrating graph learning and spectral clustering.
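The BSV baseline's preprocessing can be sketched as follows: mean-fill the missing columns of a view, then build a Gaussian-kernel affinity (spectral clustering is then run on each view's affinity and the best result reported). The bandwidth `sigma` is an illustrative choice, not a value from the paper:

```python
import numpy as np

def bsv_affinity(X, observed_idx, sigma=1.0):
    """Mean-fill the missing samples of one view, then build a
    Gaussian-kernel affinity matrix.

    X            : d x n feature matrix of this view
    observed_idx : indices of the columns actually observed
    """
    n = X.shape[1]
    filled = X.astype(float).copy()
    mean = filled[:, observed_idx].mean(axis=1, keepdims=True)
    missing = np.setdiff1d(np.arange(n), observed_idx)
    # This is the fill that tends to group missing samples together,
    # since all filled columns become identical.
    filled[:, missing] = mean
    sq = ((filled[:, :, None] - filled[:, None, :]) ** 2).sum(axis=0)
    return np.exp(-sq / (2 * sigma ** 2))
```

Two mean-filled samples have zero distance and therefore affinity 1, which illustrates why BSV pushes missing instances into the same cluster, as discussed in the results below.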
5.3. Experiment Setting
5.4. Experiment Results and Analysis
- Compared with the proposed method, BSV yields worse clustering performance, mainly because directly filling the missing instances with average features causes them to be clustered into the same group. The weighted NMF methods DAIMC and MIC perform better than BSV at low missing rates, since NMF-based methods learn a shared representation that exploits the complementary information across views, and the weighting reduces the negative impact of the missing instances. However, as the incomplete ratio increases, these two methods suffer a sharp decline, most apparent on the Mfeat dataset. Methods involving graph construction, such as IMG and INMF_AGL, perform better than them. Our proposed method integrates the advantages of NMF-based and graph-based methods, adaptively fusing the graphs learned from each embedding space; consequently, AWGF_IMSC achieves the best clustering performance in most cases.
- Compared with INMF_AGL [48], our proposed method consistently improves the clustering performance and achieves better results across the benchmark datasets. The exception is Mfeat, where INMF_AGL attains higher accuracy than AWGF_IMSC under 20–50% missing rates, although our performance exceeds it at incomplete ratios of 60% and 70%. Although both INMF_AGL and AWGF_IMSC adopt subspace clustering to build the graph structure in each view, the clustering results demonstrate the effectiveness of graph fusion over the indicator fusion used in INMF_AGL.
- AWGF-IMSC shows clear advantages over the other baselines under various incomplete ratios, with three best and one second-best result over the four datasets. For example, on the BUAA dataset (Table 3), our method exceeds the second-best method by 7.94%, 7.58%, 7.18%, 7.53%, 14.93%, 9.24% and 7.00%, respectively. More significant improvements can be seen on Caltech7: in Table 2, the ACCs of the proposed method are 23.42%, 13.10%, 15.02%, 9.74%, 10.48%, 10.80% and 11.43% higher than those of the second-best INMF_AGL. These significant results verify the effectiveness of the proposed adaptive weighted graph-based fusion learning for incomplete multi-view clustering. Our method achieves the best average rank on Caltech7, BUAA and 100Leaves. On Mfeat, the average rank of the proposed method is second best, only 0.43 behind the best but 0.85 ahead of the third.
- As shown in Figure 3, the proposed algorithm also outperforms the other methods on all datasets under various incomplete ratios, and exhibits a relatively stable trend as the missing rate increases. The anomalous behavior of BSV on Caltech7 (Figure 3a) may arise because the preserved complete view retains an excellent structure when datasets with large missing rates are generated, making BSV's results outstanding, while the other methods are hurt by the missing samples and thus perform worse than BSV. Our method still surpasses all compared methods by a significant margin, further illustrating its effectiveness and superiority.
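The improvement margins and average ranks quoted above can be reproduced directly from the Caltech7 results (Table 2); rank 1 denotes the best method in a row:

```python
import numpy as np

methods = ["BSV", "MIC", "IMG", "DAIMC", "INMF_AGL", "Ours"]
# ACC values copied from Table 2 (Caltech7), rows are IR = 10%..70%
acc = np.array([
    [0.3328, 0.4007, 0.5189, 0.4105, 0.5263, 0.7605],
    [0.3145, 0.3886, 0.5027, 0.3969, 0.5794, 0.7104],
    [0.3436, 0.3205, 0.4837, 0.4122, 0.5791, 0.7293],
    [0.4139, 0.3366, 0.5080, 0.3423, 0.6007, 0.6981],
    [0.4861, 0.3493, 0.4290, 0.3550, 0.5702, 0.6750],
    [0.5245, 0.3446, 0.4943, 0.4155, 0.5874, 0.6954],
    [0.4324, 0.3539, 0.4837, 0.4109, 0.5799, 0.6942],
])
# Rank within each row: highest ACC gets rank 1
ranks = acc.shape[1] - acc.argsort(axis=1).argsort(axis=1)
avg_rank = ranks.mean(axis=0)
# Margin of "Ours" over the second-best INMF_AGL per row
margin = acc[:, 5] - acc[:, 4]
```

Running this yields the 23.42% margin at IR = 10% and the average ranks 1.00 (Ours), 2.00 (INMF_AGL) and 4.43 (BSV) reported in the table.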
5.5. Analysis of the Parameter Sensitivity
5.6. Convergence Analysis
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
Abbreviations
Abbreviation | Meaning
---|---
AWGF-IMSC | Adaptive weighted graph fusion incomplete multi-view subspace clustering
MVC | Multi-view clustering
NMF | Non-negative matrix factorization
BUAA | BUAA-visnir face database
100Leaves | One-hundred plant species leaves dataset
Mfeat | Mfeat handwritten digit dataset
VIS | Visible image
NIR | Near-infrared image
LBP | Local binary patterns
IR | Incomplete ratio
ACC | Accuracy
NMI | Normalized mutual information
TP, FP, FN, TN | True positive, false positive, false negative, true negative
BSV | Best single view
PVC | Partial multi-view clustering
MIC | Multiple incomplete view clustering via weighted non-negative matrix factorization with regularization
IMG | Incomplete multi-modal visual data grouping
DAIMC | Doubly aligned incomplete multi-view clustering
INMF-AGL | Incomplete multi-view spectral clustering with adaptive graph learning
References
- Zhao, Q.; Zhang, Y.; Qin, Q.; Luo, B. Quantized Residual Preference Based Linkage Clustering for Model Selection and Inlier Segmentation in Geometric Multi-Model Fitting. Sensors 2020, 20, 3806.
- Biabani, M.; Fotouhi, H.; Yazdani, N. An Energy-Efficient Evolutionary Clustering Technique for Disaster Management in IoT Networks. Sensors 2020, 20, 2647.
- Deng, T.; Ye, D.; Ma, R.; Fujita, H.; Xiong, L. Low-rank local tangent space embedding for subspace clustering. Inf. Sci. 2020, 508, 1–21.
- Peng, X.; Feng, J.; Zhou, J.T.; Lei, Y.; Yan, S. Deep subspace clustering. IEEE Trans. Neural Netw. Learn. Syst. 2020.
- Chen, J.; Zhao, Z.; Ye, J.; Liu, H. Nonlinear Adaptive Distance Metric Learning for Clustering. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD’07), San Jose, CA, USA, 12–15 August 2007; Association for Computing Machinery: New York, NY, USA, 2007; pp. 123–132.
- Gönen, M.; Alpaydin, E. Localized Multiple Kernel Learning. In Proceedings of the 25th International Conference on Machine Learning (ICML’08), Helsinki, Finland, 5–9 July 2008; Association for Computing Machinery: New York, NY, USA, 2008; pp. 352–359.
- Chaudhuri, K.; Kakade, S.M.; Livescu, K.; Sridharan, K. Multi-View Clustering via Canonical Correlation Analysis. In Proceedings of the 26th Annual International Conference on Machine Learning (ICML’09), Montreal, QC, Canada, 14–18 June 2009; Association for Computing Machinery: New York, NY, USA, 2009; pp. 129–136.
- Yu, S.; Tranchevent, L.; Liu, X.; Glanzel, W.; Suykens, J.A.; De Moor, B.; Moreau, Y. Optimized data fusion for kernel k-means clustering. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 1031–1039.
- Huang, H.C.; Chuang, Y.Y.; Chen, C.S. Multiple kernel fuzzy clustering. IEEE Trans. Fuzzy Syst. 2012, 20, 120–134.
- Liu, X.; Wang, L.; Yin, J.; Zhu, E.; Zhang, J. An efficient approach to integrating radius information into multiple kernel learning. IEEE Trans. Cybern. 2013, 43, 557–569.
- Gönen, M.; Margolin, A.A. Localized Data Fusion for Kernel k-Means Clustering with Application to Cancer Biology. In Advances in Neural Information Processing Systems 27; Curran Associates, Inc.: New York, NY, USA, 2014; pp. 1305–1313.
- Liu, X.; Dou, Y.; Yin, J.; Wang, L.; Zhu, E. Multiple Kernel k-Means Clustering with Matrix-Induced Regularization. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; pp. 1888–1894.
- Wang, S.; Zhu, E.; Hu, J.; Li, M.; Zhao, K.; Hu, N.; Liu, X. Efficient multiple kernel k-means clustering with late fusion. IEEE Access 2019, 7, 61109–61120.
- Chowdhary, C.L.; Mittal, M.; Pattanaik, P.; Marszalek, Z. An Efficient Segmentation and Classification System in Medical Images Using Intuitionist Possibilistic Fuzzy C-Mean Clustering and Fuzzy SVM Algorithm. Sensors 2020, 20, 3903.
- Du, L.; Zhou, P.; Shi, L.; Wang, H.; Fan, M.; Wang, W.; Shen, Y.D. Robust Multiple Kernel K-Means Using ℓ2,1 Norm. In Proceedings of the 24th International Conference on Artificial Intelligence, Buenos Aires, Argentina, 25–31 July 2015; pp. 3476–3482.
- Zhou, S.; Liu, X.; Li, M.; Zhu, E.; Liu, L.; Zhang, C.; Yin, J. Multiple kernel clustering with neighbor-kernel subspace segmentation. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 1351–1362.
- Wang, S.; Liu, X.; Zhu, E.; Tang, C.; Liu, J.; Hu, J.; Xia, J.; Yin, J. Multi-view Clustering via Late Fusion Alignment Maximization. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, Macao, China, 10–16 August 2019; pp. 3778–3784.
- Kumar, A.; Daumé, H., III. A Co-Training Approach for Multi-View Spectral Clustering. In Proceedings of the 28th International Conference on Machine Learning (ICML’11), Bellevue, WA, USA, 28 June–2 July 2011; Omnipress: Madison, WI, USA, 2011; pp. 393–400.
- Kumar, A.; Rai, P.; Daumé, H. Co-Regularized Multi-View Spectral Clustering. In Proceedings of the 24th International Conference on Neural Information Processing Systems (NIPS’11), Granada, Spain, 12–17 December 2011; Curran Associates Inc.: Red Hook, NY, USA, 2011; pp. 1413–1421.
- Zhang, Z.; Liu, L.; Shen, F.; Shen, H.T.; Shao, L. Binary multi-view clustering. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 41, 1774–1782.
- Jiang, G.; Wang, H.; Peng, J.; Chen, D.; Fu, X. Graph-based Multi-view Binary Learning for Image Clustering. arXiv 2019, arXiv:1912.05159.
- Zhang, C.; Fu, H.; Hu, Q.; Cao, X.; Xie, Y.; Tao, D.; Xu, D. Generalized latent multi-view subspace clustering. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 42, 86–99.
- Zhang, C.; Hu, Q.; Fu, H.; Zhu, P.; Cao, X. Latent Multi-view Subspace Clustering. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); IEEE Computer Society: Los Alamitos, CA, USA, 2017; pp. 4333–4341.
- Brbić, M.; Kopriva, I. Multi-view low-rank sparse subspace clustering. Pattern Recognit. 2018, 73, 247–258.
- Kang, Z.; Zhao, X.; Peng, C.; Zhu, H.; Zhou, J.T.; Peng, X.; Chen, W.; Xu, Z. Partition level multiview subspace clustering. Neural Netw. 2020, 122, 279–288.
- Liu, G.; Lin, Z.; Yan, S.; Sun, J.; Yu, Y.; Ma, Y. Robust Recovery of Subspace Structures by Low-Rank Representation. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 171–184.
- Lee, D.D.; Seung, H.S. Learning the parts of objects by non-negative matrix factorization. Nature 1999, 401, 788–791.
- Liu, J.; Wang, C.; Gao, J.; Han, J. Multi-View Clustering via Joint Nonnegative Matrix Factorization. In Proceedings of the 2013 SIAM International Conference on Data Mining, Austin, TX, USA, 2–4 May 2013; pp. 252–260.
- Zong, L.; Zhang, X.; Zhao, L.; Yu, H.; Zhao, Q. Multi-view clustering via multi-manifold regularized non-negative matrix factorization. Neural Netw. 2017, 88, 74–89.
- Yang, Z.; Liang, N.; Yan, W.; Li, Z.; Xie, S. Uniform Distribution Non-Negative Matrix Factorization for Multiview Clustering. IEEE Trans. Cybern. 2020.
- Yin, M.; Huang, W.; Gao, J. Shared Generative Latent Representation Learning for Multi-View Clustering. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; pp. 6688–6695.
- Zhao, L.; Chen, Z.; Yang, Y.; Wang, Z.J.; Leung, V.C. Incomplete multi-view clustering via deep semantic mapping. Neurocomputing 2018, 275, 1053–1062.
- Yu, X.; Li, H.; Zhang, Z.; Gan, C. The Optimally Designed Variational Autoencoder Networks for Clustering and Recovery of Incomplete Multimedia Data. Sensors 2019, 19, 809.
- Liu, X.; Li, M.; Tang, C.; Xia, J.; Xiong, J.; Liu, L.; Kloft, M.; Zhu, E. Efficient and effective regularized incomplete multi-view clustering. IEEE Trans. Pattern Anal. Mach. Intell. 2020.
- Li, S.Y.; Jiang, Y.; Zhou, Z.H. Partial Multi-View Clustering. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, Québec City, QC, Canada, 27–31 July 2014; pp. 1968–1974.
- Zhao, H.; Liu, H.; Fu, Y. Incomplete Multi-Modal Visual Data Grouping. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, New York, NY, USA, 9–15 July 2016; pp. 2392–2398.
- Hu, M.; Chen, S. Doubly Aligned Incomplete Multi-View Clustering. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, Stockholm, Sweden, 13–19 July 2018; pp. 2262–2268.
- Shao, W.; He, L.; Yu, P.S. Multiple Incomplete Views Clustering via Weighted Nonnegative Matrix Factorization with ℓ2,1 Regularization. In Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases, Porto, Portugal, 7–11 September 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 318–334.
- Gao, H.; Peng, Y.; Jian, S. Incomplete multi-view clustering. In Proceedings of the International Conference on Intelligent Information Processing; Springer: Berlin/Heidelberg, Germany, 2016; pp. 245–255.
- Wang, J.; Lai, S.; Li, M. Improved image fusion method based on NSCT and accelerated NMF. Sensors 2012, 12, 5872–5887.
- Zhou, Q.; Feng, Z.; Benetos, E. Adaptive noise reduction for sound event detection using subband-weighted NMF. Sensors 2019, 19, 3206.
- Ding, C.H.Q.; Li, T.; Jordan, M.I. Convex and Semi-Nonnegative Matrix Factorizations. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 45–55.
- He, M.; Yang, Y.; Wang, H. Learning latent features for multi-view clustering based on NMF. In Proceedings of the International Joint Conference on Rough Sets; Springer: Berlin/Heidelberg, Germany, 2016; pp. 459–469.
- Hu, M.; Chen, S. One-pass incomplete multi-view clustering. In Proceedings of the AAAI Conference on Artificial Intelligence 2019, Honolulu, HI, USA, 27–28 January 2019; Volume 33, pp. 3838–3845.
- Parsons, L.; Haque, E.; Liu, H. Subspace clustering for high dimensional data: A review. ACM SIGKDD Explor. Newsl. 2004, 6, 90–105.
- Hoppenstedt, B.; Reichert, M.; Kammerer, K.; Probst, T.; Schlee, W.; Spiliopoulou, M.; Pryss, R. Dimensionality Reduction and Subspace Clustering in Mixed Reality for Condition Monitoring of High-Dimensional Production Data. Sensors 2019, 19, 3903.
- Elhamifar, E.; Vidal, R. Sparse subspace clustering: Algorithm, theory, and applications. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 2765–2781.
- Wen, J.; Xu, Y.; Liu, H. Incomplete multiview spectral clustering with adaptive graph learning. IEEE Trans. Cybern. 2018, 1418–1429.
- Huang, D.; Sun, J.; Wang, Y. The Buaa-Visnir Face Database Instructions; Technical Report IRIP-TR-12-FR-001; School of Computer Science and Engineering, Beihang University: Beijing, China, 2012.
- Fei-Fei, L.; Fergus, R.; Perona, P. Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories. In Proceedings of the 2004 Conference on Computer Vision and Pattern Recognition Workshop, Washington, DC, USA, 27 June–2 July 2004.
- Oliva, A.; Torralba, A. Modeling the shape of the scene: A holistic representation of the spatial envelope. Int. J. Comput. Vis. 2001, 42, 145–175.
- Mallah, C.; Cope, J.; Orwell, J. Plant leaf classification using probabilistic integration of shape, texture and margin features. Signal Process. Pattern Recognit. Appl. 2013, 5, 45–54.
- van Breukelen, M.; Duin, R.P.; Tax, D.M.; Den Hartog, J. Handwritten digit recognition by combined classifiers. Kybernetika 1998, 34, 381–386.
Table 1. Statistics of the four benchmark datasets.

Dataset | Samples | Views | Clusters | Feature Dimensions
---|---|---|---|---
BUAA | 1350 | 2 | 150 | 100, 100
Caltech7 | 1474 | 2 | 7 | 512, 928
100Leaves | 1600 | 3 | 100 | 64, 64, 64
Mfeat | 2000 | 2 | 10 | 76, 240
Table 2. ACC results on the Caltech7 dataset under different incomplete ratios (IR).

IR\Method | BSV | MIC | IMG | DAIMC | INMF_AGL | Ours
---|---|---|---|---|---|---|
10% | 0.3328 | 0.4007 | 0.5189 | 0.4105 | 0.5263 | 0.7605 |
20% | 0.3145 | 0.3886 | 0.5027 | 0.3969 | 0.5794 | 0.7104 |
30% | 0.3436 | 0.3205 | 0.4837 | 0.4122 | 0.5791 | 0.7293 |
40% | 0.4139 | 0.3366 | 0.5080 | 0.3423 | 0.6007 | 0.6981 |
50% | 0.4861 | 0.3493 | 0.4290 | 0.3550 | 0.5702 | 0.6750 |
60% | 0.5245 | 0.3446 | 0.4943 | 0.4155 | 0.5874 | 0.6954 |
70% | 0.4324 | 0.3539 | 0.4837 | 0.4109 | 0.5799 | 0.6942 |
Average Rank | 4.43 | 5.71 | 3.29 | 4.57 | 2.00 | 1.00 |
Table 3. ACC results on the BUAA dataset under different incomplete ratios (IR).

IR\Method | BSV | MIC | IMG | DAIMC | INMF_AGL | Ours
---|---|---|---|---|---|---|
10% | 0.2964 | 0.0193 | 0.3424 | 0.3203 | 0.5652 | 0.6446 |
20% | 0.2997 | 0.0193 | 0.3424 | 0.2775 | 0.5637 | 0.6395 |
30% | 0.3006 | 0.0193 | 0.3424 | 0.2341 | 0.5566 | 0.6284 |
40% | 0.2997 | 0.0193 | 0.3424 | 0.2336 | 0.5505 | 0.6258 |
50% | 0.2965 | 0.0193 | 0.3424 | 0.2373 | 0.4730 | 0.6223 |
60% | 0.2975 | 0.0193 | 0.3424 | 0.2413 | 0.5293 | 0.6217 |
70% | 0.2979 | 0.0193 | 0.3424 | 0.2596 | 0.5410 | 0.6110 |
Average Rank | 4.14 | 6.00 | 3.00 | 4.86 | 2.00 | 1.00 |
Table 4. ACC results on the 100Leaves dataset under different incomplete ratios (IR).

IR\Method | BSV | MIC | IMG | DAIMC | INMF_AGL | Ours
---|---|---|---|---|---|---|
10% | 0.1069 | 0.6208 | 0.5661 | 0.6628 | 0.8223 | 0.8433 |
20% | 0.1064 | 0.5768 | 0.5143 | 0.5750 | 0.7835 | 0.8104 |
30% | 0.1060 | 0.5234 | 0.4607 | 0.4740 | 0.7529 | 0.7776 |
40% | 0.1056 | 0.4744 | 0.4302 | 0.4370 | 0.7026 | 0.7291 |
50% | 0.1075 | 0.4665 | 0.4001 | 0.3068 | 0.6551 | 0.7028 |
60% | 0.1065 | 0.4449 | 0.3668 | 0.3405 | 0.6316 | 0.6478 |
70% | 0.1057 | 0.4261 | 0.3740 | 0.3875 | 0.5893 | 0.6194 |
Average Rank | 6.00 | 3.14 | 4.71 | 4.14 | 2.00 | 1.00 |
Table 5. ACC results on the Mfeat dataset under different incomplete ratios (IR).

IR\Method | BSV | MIC | IMG | DAIMC | INMF_AGL | Ours
---|---|---|---|---|---|---|
10% | 0.1520 | 0.6477 | 0.5442 | 0.8670 | 0.8650 | 0.8100 |
20% | 0.1484 | 0.5721 | 0.5004 | 0.7151 | 0.8415 | 0.7866 |
30% | 0.1471 | 0.5220 | 0.5133 | 0.5737 | 0.8177 | 0.7995 |
40% | 0.1497 | 0.4539 | 0.4554 | 0.5042 | 0.8148 | 0.7864 |
50% | 0.1458 | 0.3640 | 0.4037 | 0.5423 | 0.7989 | 0.7680 |
60% | 0.1524 | 0.3583 | 0.3424 | 0.5819 | 0.7214 | 0.7515 |
70% | 0.1476 | 0.3447 | 0.3424 | 0.6446 | 0.7027 | 0.7256 |
Average Rank | 6.00 | 4.29 | 4.71 | 2.71 | 1.43 | 1.86 |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhang, P.; Wang, S.; Hu, J.; Cheng, Z.; Guo, X.; Zhu, E.; Cai, Z. Adaptive Weighted Graph Fusion Incomplete Multi-View Subspace Clustering. Sensors 2020, 20, 5755. https://doi.org/10.3390/s20205755