Abstract
The problem of matrix approximation arises ubiquitously in recommendation systems, computer vision, and text mining. The prevailing assumption is that the partially observed matrix is low-rank or can be well approximated by a low-rank matrix. This assumption, however, is strict: it requires the matrix to be globally low-rank. In this paper, we propose a locally sensitive formulation of matrix approximation that relaxes the global low-rank assumption, representing the observed matrix as a weighted sum of low-rank matrices. We solve the resulting problem by nonconvex optimization, which estimates low-rank matrices more accurately than convex relaxation. Our experiments show improvements in prediction accuracy over classical approaches on recommendation tasks.
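The weighted-sum-of-local-low-rank-matrices idea can be sketched numerically. The snippet below is a minimal illustration under assumed choices, not the paper's algorithm: the anchor entries, the Gaussian kernel on index distance (a stand-in for whatever row/column similarity a real method would use), and the plain gradient-descent fitter are all hypothetical. Each anchor gets its own rank-r factorization whose residuals are kernel-weighted, and the local models are combined by a kernel-weighted average.

```python
import numpy as np

def local_low_rank_approx(M, mask, anchors, rank=2, width=2.0,
                          steps=200, lr=0.01, seed=0):
    """Approximate M (observed where mask == 1) as a kernel-weighted
    combination of local rank-`rank` factorizations, one per anchor.
    A sketch of the locally sensitive idea, not the paper's solver."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    rows = np.arange(m)[:, None]
    cols = np.arange(n)[None, :]
    num = np.zeros((m, n))
    den = np.zeros((m, n))
    for (ai, aj) in anchors:
        # Gaussian kernel on index distance to the anchor: entries near
        # the anchor dominate this local model's fit and its weight.
        K = np.exp(-((rows - ai) ** 2 + (cols - aj) ** 2) / (2 * width ** 2))
        U = 0.1 * rng.standard_normal((m, rank))
        V = 0.1 * rng.standard_normal((n, rank))
        for _ in range(steps):
            R = (U @ V.T - M) * mask * K          # kernel-weighted residual
            U, V = U - lr * R @ V, V - lr * R.T @ U
        num += K * (U @ V.T)                      # weight local prediction
        den += K
    return num / np.maximum(den, 1e-12)           # kernel-weighted average
```

Because each local model only needs to be accurate near its anchor, the combined estimate can track matrices that are low-rank locally but not globally.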
Acknowledgments
This work was supported by grants from the National Natural Science Foundation of China (Nos. 61472280, 61472173, 61572447, 61373098, 61672382, 61672203, 61402334, 61520106006, 31571364, U1611265, and 61532008) and by the China Postdoctoral Science Foundation (Grant No. 2016M601646).
Copyright information
© 2017 Springer International Publishing AG
About this paper
Cite this paper
Li, C.Y., Bao, W., Li, Z., Zhang, Y., Jiang, Y.L., Yuan, C.A.: Local Sensitive Low Rank Matrix Approximation via Nonconvex Optimization. In: Huang, D.S., Hussain, A., Han, K., Gromiha, M. (eds.) Intelligent Computing Methodologies. ICIC 2017. Lecture Notes in Computer Science, vol. 10363. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63315-2_67
DOI: https://doi.org/10.1007/978-3-319-63315-2_67
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-63314-5
Online ISBN: 978-3-319-63315-2