Fast Global and Local Semi-Supervised Learning via Matrix Factorization
Figure 1. Links between samples and anchors, where x1–x7 represent the seven samples and u1–u4 represent the four anchors.
Figure 2. Clustering performance with different labeled samples on (a) COIL20 dataset, (b) YaleB dataset, (c) COIL100 dataset, (d) USPS dataset, (e) MNIST dataset, and (f) Letters dataset.
Figure 3. Sensitivity of FGLMF on (a) COIL20 dataset, (b) YaleB dataset, (c) COIL100 dataset, (d) USPS dataset, (e) MNIST dataset, and (f) Letters dataset.
Figure 4. Accuracy vs. time of FGLMF with different anchors on (a) COIL20 dataset, (b) YaleB dataset, (c) COIL100 dataset, (d) USPS dataset, (e) MNIST dataset, and (f) Letters dataset.
Figure 5. Bases generated by FGLMF. The first row is COIL20, the second is YaleB, the third is COIL100, the fourth is USPS, the fifth is MNIST, and the sixth is Letters.
Figure 6. Adjacency matrix on the COIL20 dataset: (a) bipartite graph generated by Equation (3); (b) full adjacency matrix generated from the bipartite graph using Equation (6); (c) normalized full adjacency matrix generated by a Gaussian kernel.
Figure 7. Convergence curve on (a) COIL20 dataset, (b) YaleB dataset, (c) COIL100 dataset, (d) USPS dataset, (e) MNIST dataset, and (f) Letters dataset.
Abstract
1. Introduction
- A novel method using only matrix factorization is proposed. It simultaneously learns the global and local structure of the data, and it is easily interpretable.
- A bipartite graph is introduced into the symmetric matrix factorization. The computational complexity of the proposed method is lower than that of other bipartite graph-based methods.
- The proposed SMF is convex thanks to the label information. Therefore, every local optimum is a global optimum, and the problem can be solved very quickly.
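The second contribution can be made concrete with a small sketch. In the standard anchor-graph construction of Liu et al. [20], a row-normalized bipartite affinity matrix Z between the n samples and m anchors induces a full n × n adjacency W = ZΛ⁻¹Zᵀ with Λ = diag(Zᵀ1); the paper's Equation (6) presumably follows this form, so the sketch below (with sizes mirroring Figure 1) is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 7, 4  # 7 samples, 4 anchors, as in Figure 1

# Z: n x m bipartite (sample-anchor) affinity matrix with rows summing to 1.
Z = rng.random((n, m))
Z /= Z.sum(axis=1, keepdims=True)

# Full n x n adjacency recovered from the bipartite graph
# (anchor-graph construction of Liu et al. [20]): W = Z diag(Z^T 1)^{-1} Z^T.
Lam_inv = np.diag(1.0 / Z.sum(axis=0))
W = Z @ Lam_inv @ Z.T

# W is symmetric, non-negative, and its rows sum to 1,
# yet it has rank at most m and never needs to be stored explicitly.
assert np.allclose(W, W.T)
assert np.allclose(W.sum(axis=1), 1.0)
assert W.min() >= 0
```

Because W factors through the n × m matrix Z, downstream computations can work with Z directly and never materialize an n × n matrix, which is where the complexity saving over full-graph methods comes from.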
2. Preliminaries
2.1. Bipartite Graph
2.2. Matrix Factorization
3. Proposed Method
3.1. Methodology
3.2. Optimization
3.2.1. Fix , Optimize
3.2.2. Fix , Optimize
Algorithm 1 FGLMF
3.3. Computational Complexity
- It takes to compute the anchors.
- It takes to compute the distance between samples and anchors.
- It takes to compute the bipartite graph, where k is the number of nearest neighbors.
- It takes to compute the objective function of .
- It takes to compute the gradient of .
- It takes to compute the SVD of .
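The first three steps of this pipeline (anchor generation, sample–anchor distances, and the sparse k-nearest-neighbor bipartite graph) can be sketched as follows. The k-means step and the Gaussian-kernel weights are illustrative choices; the paper's Equation (3) may assign the bipartite weights differently (e.g., via the simplex sparse model of [23]).

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances

def anchor_bipartite_graph(X, m=50, k=5, seed=0):
    """Return a sparse n x m bipartite graph Z linking samples to anchors.

    Gaussian-kernel weights on the k nearest anchors are an illustrative
    choice; the paper's Equation (3) may differ.
    """
    n = X.shape[0]
    # Step 1: anchors as k-means centroids.
    km = KMeans(n_clusters=m, n_init=10, random_state=seed).fit(X)
    anchors = km.cluster_centers_
    # Step 2: n x m sample-anchor distances.
    D = pairwise_distances(X, anchors)
    # Step 3: keep only the k nearest anchors per sample.
    idx = np.argsort(D, axis=1)[:, :k]
    d_knn = np.take_along_axis(D, idx, axis=1)
    sigma = d_knn.mean() + 1e-12
    w = np.exp(-(d_knn ** 2) / (2 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)  # rows sum to 1
    rows = np.repeat(np.arange(n), k)
    return csr_matrix((w.ravel(), (rows, idx.ravel())), shape=(n, m))
```

Z has only nk nonzeros, so subsequent computations scale with nk rather than with the n² entries of a full adjacency matrix.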
4. Experiments
4.1. Compared Methods
- TSVD: Truncated singular value decomposition [29], which gives the best approximation of the data matrix at a preset rank.
- SemiGNMF: Semi-supervised graph regularized non-negative matrix factorization [7].
- CSNMF: Correntropy-based semi-supervised NMF [15].
- CSCF: Correntropy-based semi-supervised concept factorization [16].
- CLMF: Correntropy-based low-rank matrix factorization [17].
- PCPSNMF: Pairwise constraint propagation-induced SNMF [18].
- S4NMF: Self-supervised semi-supervised non-negative matrix factorization [30].
- HSSNMF: Hypergraph-based semi-supervised symmetric non-negative matrix factorization [19].
- EAGR: Efficient anchor graph regularization [21].
- BGSSL: Bipartite graph semi-supervised learning [22].
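Among these baselines, TSVD is simple enough to state exactly: by the Eckart–Young theorem, keeping the top r singular triplets gives the best rank-r approximation in the Frobenius norm, with error equal to the energy of the discarded singular values. A quick NumPy check:

```python
import numpy as np

def tsvd(A, r):
    """Best rank-r approximation of A in the Frobenius norm (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(0)
A = rng.random((50, 30))
_, s, _ = np.linalg.svd(A, full_matrices=False)
for r in (5, 10, 20):
    # Approximation error = sqrt of the sum of squared discarded singular values.
    err = np.linalg.norm(A - tsvd(A, r))
    assert np.isclose(err, np.sqrt((s[r:] ** 2).sum()))
```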
4.2. Datasets
4.3. Experiment Settings
4.4. Experiment Performance
4.5. Time Consumption
4.6. Sensitivity to Parameter
4.7. Effect of Anchors
4.8. Bases
4.9. Generated Graph
4.10. Convergence Study
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Full Name |
---|---|
MF | Matrix factorization |
SMF | Symmetric matrix factorization |
LMF | Low-rank matrix factorization |
FGLMF | Fast global and local matrix factorization |
MCC [17] | Maximum correntropy criterion |
NMF [1,2] | Non-negative matrix factorization |
SVD [3] | Singular value decomposition |
PCA [4] | Principal component analysis |
CF [5] | Concept factorization |
GNMF [7] | Graph-regularized non-negative matrix factorization |
LCCF [8] | Locally consistent concept factorization |
NMF-LCAG [9] | Non-negative matrix factorization with locality-constrained adaptive graph |
GCCF [10] | Correntropy-based graph-regularized concept factorization |
CHNMF [11] | Correntropy-based hypergraph-regularized non-negative matrix factorization |
CNMF [12] | Constrained non-negative matrix factorization |
CCF [13] | Constrained concept factorization |
NMFCC [14] | Non-negative matrix factorization-based constrained clustering |
CSNMF [15] | Correntropy-based semi-supervised non-negative matrix factorization |
CSCF [16] | Correntropy-based semi-supervised concept factorization |
CLMF [17] | Correntropy-based low-rank matrix factorization |
SIS [17] | Sparsity-induced similarity |
SNMFCC [14] | Symmetric non-negative matrix factorization-based constrained clustering |
PCPSNMF [18] | Pairwise constraint propagation-induced symmetric non-negative matrix factorization |
HSSNMF [19] | Hypergraph-based semi-supervised symmetric non-negative matrix factorization |
LAE [20] | Local anchor embedding |
EAGR [21] | Efficient anchor graph regularization |
FLAE [21] | Fast local anchor embedding |
BGSSL [22] | Bipartite graph-based semi-supervised learning |
MURs | Multiplicative updating rules |
Appendix A. Proof of Theorem 1
Appendix B. Proof of Theorem 2
References
- Lee, D.D.; Seung, H.S. Learning the parts of objects by non-negative matrix factorization. Nature 1999, 401, 788–791. [Google Scholar] [CrossRef] [PubMed]
- Lee, D.; Seung, H.S. Algorithms for non-negative matrix factorization. Adv. Neural Inf. Process. Syst. 2000, 13. Available online: https://papers.neurips.cc/paper_files/paper/2000/hash/f9d1152547c0bde01830b7e8bd60024c-Abstract.html (accessed on 1 September 2024).
- Duda, R.O.; Hart, P.E. Pattern Classification; John Wiley & Sons: Hoboken, NJ, USA, 2006. [Google Scholar]
- Wold, S.; Esbensen, K.; Geladi, P. Principal component analysis. Chemom. Intell. Lab. Syst. 1987, 2, 37–52. [Google Scholar] [CrossRef]
- Xu, W.; Gong, Y. Document clustering by concept factorization. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Sheffield, UK, 25–29 July 2004; pp. 202–209. [Google Scholar]
- Zhang, Z.; Zhao, K. Low-rank matrix approximation with manifold regularization. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 1717–1729. [Google Scholar] [CrossRef] [PubMed]
- Cai, D.; He, X.; Han, J.; Huang, T.S. Graph regularized nonnegative matrix factorization for data representation. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 1548–1560. [Google Scholar]
- Cai, D.; He, X.; Han, J. Locally consistent concept factorization for document clustering. IEEE Trans. Knowl. Data Eng. 2010, 23, 902–913. [Google Scholar] [CrossRef]
- Yi, Y.; Wang, J.; Zhou, W.; Zheng, C.; Kong, J.; Qiao, S. Non-negative matrix factorization with locality constrained adaptive graph. IEEE Trans. Circuits Syst. Video Technol. 2019, 30, 427–441. [Google Scholar] [CrossRef]
- Peng, S.; Ser, W.; Chen, B.; Sun, L.; Lin, Z. Correntropy based graph regularized concept factorization for clustering. Neurocomputing 2018, 316, 34–48. [Google Scholar] [CrossRef]
- Yu, N.; Wu, M.J.; Liu, J.X.; Zheng, C.H.; Xu, Y. Correntropy-based hypergraph regularized NMF for clustering and feature selection on multi-cancer integrated data. IEEE Trans. Cybern. 2020, 51, 3952–3963. [Google Scholar] [CrossRef]
- Liu, H.; Wu, Z.; Li, X.; Cai, D.; Huang, T.S. Constrained nonnegative matrix factorization for image representation. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 34, 1299–1311. [Google Scholar] [CrossRef]
- Liu, H.; Yang, G.; Wu, Z.; Cai, D. Constrained concept factorization for image representation. IEEE Trans. Cybern. 2013, 44, 1214–1224. [Google Scholar]
- Zhang, X.; Zong, L.; Liu, X.; Luo, J. Constrained clustering with nonnegative matrix factorization. IEEE Trans. Neural Netw. Learn. Syst. 2015, 27, 1514–1526. [Google Scholar] [CrossRef] [PubMed]
- Peng, S.; Ser, W.; Chen, B.; Lin, Z. Robust semi-supervised nonnegative matrix factorization for image clustering. Pattern Recognit. 2021, 111, 107683. [Google Scholar] [CrossRef]
- Peng, S.; Yang, Z.; Nie, F.; Chen, B.; Lin, Z. Correntropy based semi-supervised concept factorization with adaptive neighbors for clustering. Neural Netw. 2022, 154, 203–217. [Google Scholar] [CrossRef] [PubMed]
- Zhou, N.; Choi, K.S.; Chen, B.; Du, Y.; Liu, J.; Xu, Y. Correntropy-based low-rank matrix factorization with constraint graph learning for image clustering. IEEE Trans. Neural Netw. Learn. Syst. 2022, 34, 10433–10446. [Google Scholar] [CrossRef]
- Wu, W.; Jia, Y.; Kwong, S.; Hou, J. Pairwise constraint propagation-induced symmetric nonnegative matrix factorization. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 6348–6361. [Google Scholar] [CrossRef]
- Yin, J.; Peng, S.; Yang, Z.; Chen, B.; Lin, Z. Hypergraph based semi-supervised symmetric nonnegative matrix factorization for image clustering. Pattern Recognit. 2023, 137, 109274. [Google Scholar] [CrossRef]
- Liu, W.; He, J.; Chang, S.F. Large graph construction for scalable semi-supervised learning. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), Haifa, Israel, 21–24 June 2010; pp. 679–686. [Google Scholar]
- Wang, M.; Fu, W.; Hao, S.; Tao, D.; Wu, X. Scalable semi-supervised learning by efficient anchor graph regularization. IEEE Trans. Knowl. Data Eng. 2016, 28, 1864–1877. [Google Scholar] [CrossRef]
- He, F.; Nie, F.; Wang, R.; Li, X.; Jia, W. Fast semisupervised learning with bipartite graph for large-scale data. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 626–638. [Google Scholar] [CrossRef]
- Huang, J.; Nie, F.; Huang, H. A new simplex sparse learning model to measure data similarity for clustering. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, Buenos Aires, Argentina, 25–31 July 2015. [Google Scholar]
- Antelmi, A.; Cordasco, G.; Polato, M.; Scarano, V.; Spagnuolo, C.; Yang, D. A survey on hypergraph representation learning. ACM Comput. Surv. 2023, 56, 1–38. [Google Scholar] [CrossRef]
- Nocedal, J.; Wright, S.J. (Eds.) Numerical Optimization; Springer: New York, NY, USA, 1999. [Google Scholar]
- Hager, W.W.; Zhang, H. A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. 2005, 16, 170–192. [Google Scholar] [CrossRef]
- Hager, W.W.; Zhang, H. Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent. ACM Trans. Math. Softw. (TOMS) 2006, 32, 113–137. [Google Scholar] [CrossRef]
- Hager, W.W.; Zhang, H. The limited memory conjugate gradient method. SIAM J. Optim. 2013, 23, 2150–2168. [Google Scholar] [CrossRef]
- Hansen, P.C. Truncated singular value decomposition solutions to discrete ill-posed problems with ill-determined numerical rank. SIAM J. Sci. Stat. Comput. 1990, 11, 503–518. [Google Scholar] [CrossRef]
- Chavoshinejad, J.; Seyedi, S.A.; Tab, F.A.; Salahian, N. Self-supervised semi-supervised nonnegative matrix factorization for data clustering. Pattern Recognit. 2023, 137, 109282. [Google Scholar] [CrossRef]
- Nie, F.; Xue, J.; Wu, D.; Wang, R.; Li, H.; Li, X. Coordinate descent method for k-means. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 2371–2385. [Google Scholar] [CrossRef]
- Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
Dataset | No. of Instances (n) | No. of Features (d) | No. of Classes (C) |
---|---|---|---|
COIL20 | 1440 | 1024 | 20 |
YaleB | 2414 | 1024 | 38 |
COIL100 | 7200 | 1024 | 100 |
USPS | 9298 | 256 | 10 |
MNIST | 70000 | 784 | 10 |
Letters | 145600 | 784 | 26 |
Dataset | TSVD | SemiGNMF | CSNMF | CSCF | CLMF | PCPSNMF | S4NMF | HSSNMF | EAGR | BGSSL | FGLMF |
---|---|---|---|---|---|---|---|---|---|---|---|
COIL20 | 60.67 ± 03.33 | 89.23 ± 02.19 | 86.15 ± 03.46 | 86.02 ± 03.23 | 91.44 ± 01.12 | 87.27 ± 03.22 | 91.41 ± 02.40 | 91.42 ± 02.75 | 95.42 ± 00.97 | 95.63 ± 00.96 | 97.79 ± 00.84 |
YaleB | 08.54 ± 00.26 | 55.58 ± 02.68 | 58.42 ± 04.02 | 45.80 ± 02.19 | 77.46 ± 00.96 | 66.11 ± 02.77 | 69.97 ± 02.89 | 72.82 ± 02.01 | 58.06 ± 01.55 | 54.78 ± 01.56 | 72.24 ± 01.39 |
COIL100 | 48.63 ± 01.39 | 82.62 ± 00.74 | 72.92 ± 02.25 | 73.99 ± 02.49 | 75.21 ± 00.50 | 83.00 ± 01.22 | 65.01 ± 01.80 | 88.75 ± 01.02 | 86.98 ± 00.33 | 85.71 ± 00.91 | 90.44 ± 00.42 |
USPS | 64.04 ± 01.87 | - | - | 93.92 ± 03.72 | 90.91 ± 00.26 | 60.67 ± 07.27 | 83.22 ± 05.11 | 83.34 ± 03.70 | 96.34 ± 00.14 | 96.03 ± 00.15 | 96.45 ± 00.18 |
MNIST | 54.21 ± 02.27 | OM | OM | OM | OM | OM | OM | OM | 96.49 ± 00.03 | 95.38 ± 00.10 | 96.62 ± 00.05 |
Letters | 37.31 ± 00.77 | OM | OM | OM | OM | OM | OM | OM | 82.32 ± 00.16 | 75.63 ± 00.16 | 82.50 ± 00.15 |
Dataset | TSVD | SemiGNMF | CSNMF | CSCF | CLMF | PCPSNMF | S4NMF | HSSNMF | EAGR | BGSSL | FGLMF |
---|---|---|---|---|---|---|---|---|---|---|---|
COIL20 | 74.75 ± 01.10 | 95.27 ± 00.41 | 93.71 ± 01.39 | 93.05 ± 01.56 | 89.82 ± 00.92 | 92.29 ± 01.44 | 94.88 ± 01.13 | 95.63 ± 01.08 | 96.13 ± 00.81 | 95.92 ± 00.78 | 97.54 ± 00.96 |
YaleB | 12.89 ± 00.46 | 71.54 ± 00.81 | 70.90 ± 01.62 | 62.81 ± 01.46 | 74.58 ± 01.18 | 68.16 ± 01.94 | 73.39 ± 01.91 | 73.00 ± 01.22 | 60.91 ± 00.89 | 59.20 ± 01.03 | 71.38 ± 01.10 |
COIL100 | 75.52 ± 00.34 | 93.56 ± 00.33 | 88.98 ± 00.87 | 90.03 ± 00.94 | 80.46 ± 00.40 | 90.87 ± 00.43 | 79.07 ± 01.17 | 94.17 ± 00.35 | 93.20 ± 00.20 | 92.64 ± 00.31 | 94.41 ± 00.20 |
USPS | 59.10 ± 00.75 | - | - | 91.73 ± 02.05 | 80.74 ± 00.45 | 74.91 ± 03.33 | 86.14 ± 04.19 | 85.70 ± 00.98 | 91.18 ± 00.31 | 90.59 ± 00.24 | 91.34 ± 00.31 |
MNIST | 50.91 ± 01.12 | OM | OM | OM | OM | OM | OM | OM | 91.14 ± 00.07 | 89.41 ± 00.13 | 91.40 ± 00.10 |
Letters | 39.48 ± 00.49 | OM | OM | OM | OM | OM | OM | OM | 75.36 ± 00.17 | 70.23 ± 00.09 | 75.30 ± 00.16
Dataset | TSVD | SemiGNMF | CSNMF | CSCF | CLMF | PCPSNMF | S4NMF | HSSNMF | EAGR | BGSSL | FGLMF |
---|---|---|---|---|---|---|---|---|---|---|---|
COIL20 | 53.38 ± 02.67 | 85.40 ± 02.72 | 86.54 ± 02.75 | 82.08 ± 03.99 | 84.16 ± 01.69 | 83.09 ± 03.27 | 88.46 ± 02.89 | 88.92 ± 02.87 | 91.64 ± 01.37 | 92.50 ± 01.45 | 95.60 ± 01.66 |
YaleB | 00.29 ± 00.12 | 38.76 ± 02.51 | 47.09 ± 02.97 | 35.21 ± 02.53 | 58.79 ± 01.69 | 47.34 ± 03.02 | 55.76 ± 03.61 | 55.28 ± 02.19 | 37.06 ± 01.77 | 34.56 ± 01.81 | 52.96 ± 01.88 |
COIL100 | 43.31 ± 00.96 | 76.97 ± 00.89 | 63.84 ± 04.11 | 68.28 ± 04.10 | 59.45 ± 00.85 | 76.20 ± 01.31 | 53.23 ± 02.06 | 83.78 ± 01.23 | 81.09 ± 00.61 | 79.89 ± 01.02 | 85.75 ± 00.56 |
USPS | 50.64 ± 01.49 | - | - | 92.35 ± 03.12 | 81.78 ± 00.53 | 58.55 ± 07.33 | 83.63 ± 04.37 | 80.72 ± 02.27 | 93.02 ± 00.25 | 92.41 ± 00.24 | 93.24 ± 00.30 |
MNIST | 38.07 ± 01.51 | OM | OM | OM | OM | OM | OM | OM | 92.48 ± 00.06 | 90.19 ± 00.21 | 92.75 ± 00.10 |
Letters | 21.66 ± 00.44 | OM | OM | OM | OM | OM | OM | OM | 68.10 ± 00.26 | 59.32 ± 00.15 | 68.08 ± 00.24 |
Dataset | TSVD | SemiGNMF | CSNMF | CSCF | CLMF | PCPSNMF | S4NMF | HSSNMF | EAGR | BGSSL | FGLMF |
---|---|---|---|---|---|---|---|---|---|---|---|
COIL20 | 58.30 ± 03.61 | 88.04 ± 02.24 | 83.73 ± 04.51 | 85.26 ± 03.49 | 91.65 ± 01.09 | 86.43 ± 03.50 | 89.86 ± 03.31 | 89.58 ± 02.93 | 95.46 ± 01.00 | 95.53 ± 00.96 | 97.80 ± 00.81 |
YaleB | 08.62 ± 00.42 | 55.63 ± 03.10 | 57.44 ± 04.64 | 46.03 ± 02.21 | 78.84 ± 00.75 | 66.10 ± 02.74 | 68.04 ± 03.01 | 71.84 ± 02.19 | 58.39 ± 01.41 | 54.39 ± 01.37 | 72.34 ± 01.37 |
COIL100 | 46.71 ± 01.57 | 81.35 ± 00.85 | 71.94 ± 02.24 | 72.97 ± 02.43 | 76.76 ± 00.51 | 82.44 ± 01.28 | 63.22 ± 01.86 | 88.14 ± 00.97 | 86.80 ± 00.29 | 85.38 ± 00.92 | 90.42 ± 00.43 |
USPS | 61.56 ± 02.45 | - | - | 91.76 ± 05.61 | 90.66 ± 00.27 | 56.10 ± 08.06 | 78.75 ± 05.91 | 78.14 ± 06.16 | 95.98 ± 00.17 | 95.64 ± 00.18 | 96.06 ± 00.21 |
MNIST | 53.76 ± 02.38 | OM | OM | OM | OM | OM | OM | OM | 96.46 ± 00.03 | 95.33 ± 00.11 | 96.59 ± 00.05 |
Letters | 37.29 ± 00.93 | OM | OM | OM | OM | OM | OM | OM | 82.25 ± 00.17 | 74.73 ± 00.20 | 82.73 ± 00.15 |
Time consumption of the compared methods. TSVD through HSSNMF are the other learning methods; k-means, EAGR, BGSSL, and FGLMF form the anchor-based learning group, where "+" entries denote time in addition to the shared k-means anchor step.

Dataset | TSVD | SemiGNMF | CSNMF | CSCF | CLMF | PCPSNMF | S4NMF | HSSNMF | k-means | EAGR | BGSSL | FGLMF |
---|---|---|---|---|---|---|---|---|---|---|---|---|
COIL20 | 0.34 | 1.15 | 38.20 | 79.87 | 24.36 | 4.31 | 64.44 | 6.48 | 0.55 | +0.12 | +0.06 | +0.05 | |
YaleB | 0.69 | 2.67 | 105.26 | 273.92 | 53.71 | 13.23 | 234.31 | 19.82 | 2.13 | +0.23 | +0.17 | +0.15 | |
COIL100 | 3.75 | 12.52 | 1509.16 | 4287.62 | 421.90 | 150.15 | 2787.81 | 176.65 | 9.84 | +0.89 | +0.85 | +0.74 | |
USPS | 0.23 | - | - | 5522.62 | 328.54 | 225.21 | 3224.49 | 321.27 | 4.11 | +0.62 | +0.78 | +0.34 | |
MNIST | 3.69 | OM | OM | OM | OM | OM | OM | OM | 61.5 | +4.96 | +5.42 | +2.91 | |
Letters | 8.18 | OM | OM | OM | OM | OM | OM | OM | 129.77 | +14.49 | +13.43 | +7.04 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Du, Y.; Luo, W.; Wu, Z.; Zhou, N. Fast Global and Local Semi-Supervised Learning via Matrix Factorization. Mathematics 2024, 12, 3242. https://doi.org/10.3390/math12203242