An efficient algorithm for Kernel two-dimensional principal component analysis

  • Original Article
  • Published:
Neural Computing and Applications

Abstract

Recently, a new approach called two-dimensional principal component analysis (2DPCA) has been proposed for face representation and recognition. The essence of 2DPCA is that it computes the eigenvectors of the so-called image covariance matrix without matrix-to-vector conversion. Kernel principal component analysis (KPCA) is a non-linear generalization of the popular principal component analysis obtained via the kernel trick. Similarly, kernelizing 2DPCA can help capture non-linear structure in the input data. However, standard K2DPCA suffers from a heavy computational burden because it operates on the image matrices directly. In this paper, we propose an efficient algorithm to speed up the training procedure of K2DPCA. Experimental results on face recognition show that the proposed algorithm achieves much higher computational efficiency and considerably reduces memory consumption compared with standard K2DPCA.
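
The 2DPCA step summarized above amounts to an eigen-decomposition of the image covariance matrix built directly from image matrices. The following is a minimal NumPy sketch, not the authors' implementation; the array shapes, variable names, and the use of numpy.linalg.eigh are illustrative assumptions.

```python
import numpy as np

def two_d_pca(images, num_components):
    """2DPCA sketch: eigenvectors of the image covariance matrix.

    images: array of shape (N, m, n) -- N training images kept as matrices,
            i.e. no matrix-to-vector conversion.
    Returns a projection matrix of shape (n, num_components).
    """
    images = np.asarray(images, dtype=float)
    mean_image = images.mean(axis=0)            # (m, n) mean image
    centered = images - mean_image              # center every image

    # Image covariance (scatter) matrix: average of A_i^T A_i, shape (n, n).
    G = np.einsum('kij,kil->jl', centered, centered) / len(images)

    # Eigen-decomposition of the symmetric matrix G; keep leading eigenvectors.
    eigvals, eigvecs = np.linalg.eigh(G)
    order = np.argsort(eigvals)[::-1][:num_components]
    return eigvecs[:, order]

# Usage (illustrative): project an m x n image A onto the learned directions.
# W = two_d_pca(train_images, num_components=8); features = A @ W
```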




Acknowledgments

This work was partly supported by the National Natural Science Foundation of China under grant 60503023, the Natural Science Foundation of Jiangsu Province under grant BK2005407, the Key Laboratory of Image Processing and Image Communication of Jiangsu Province under grant ZK205013, and the Program for New Century Excellent Talents in University (NCET).

Author information


Corresponding author

Correspondence to Ning Sun.

Appendix A: centering samples in K2DPCA

From the analysis in the paper, centering the samples is equivalent to centering the kernel matrix K. Let $K^{C}$ denote the centered kernel matrix:

$$ K^{C} = BB^{T} $$
(A.1)

where

$$ B = \begin{pmatrix} \bigl(\phi(x^{1}_{1}) - \frac{1}{N}\sum_{i=1}^{N}\phi(x^{1}_{i})\bigr)^{T} \\ \vdots \\ \bigl(\phi(x^{1}_{N}) - \frac{1}{N}\sum_{i=1}^{N}\phi(x^{1}_{i})\bigr)^{T} \\ \bigl(\phi(x^{2}_{1}) - \frac{1}{N}\sum_{i=1}^{N}\phi(x^{2}_{i})\bigr)^{T} \\ \vdots \\ \bigl(\phi(x^{s}_{N}) - \frac{1}{N}\sum_{i=1}^{N}\phi(x^{s}_{i})\bigr)^{T} \end{pmatrix} $$
(A.2)

Partition $K^{C}$ into blocks $K^{C}(j,l)$, $j,l = 1, \ldots, s$, and let $1_{N}$ denote the $N \times N$ matrix whose elements are all equal to $1/N$. The centered kernel matrix is then

$$ K^{C} (j,l) = K(j,l) - 1_{N} K(j,l) - K(j,l)1_{N} + 1_{N} K(j,l)1_{N}, \quad (j,l = 1, \ldots, s) $$
(A.3)
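
In code, (A.3) is a double centering applied to each $N \times N$ block of the training kernel matrix. The sketch below is a minimal illustration, not the authors' implementation; the function name and the assumption that K is stored as an sN x sN array of s x s blocks are assumptions made here.

```python
import numpy as np

def center_train_kernel(K, s, N):
    """Center an (s*N) x (s*N) training kernel matrix block-wise, as in (A.3).

    K is assumed to be laid out as s x s blocks K(j, l), each of size N x N.
    """
    one_N = np.full((N, N), 1.0 / N)   # the matrix 1_N, all entries 1/N
    Kc = np.empty_like(K, dtype=float)
    for j in range(s):
        for l in range(s):
            blk = K[j * N:(j + 1) * N, l * N:(l + 1) * N]
            Kc[j * N:(j + 1) * N, l * N:(l + 1) * N] = (
                blk - one_N @ blk - blk @ one_N + one_N @ blk @ one_N
            )
    return Kc
```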

For the test samples, we center the test kernel matrix $K(j)$ defined in (24). Here $\hat{1}_{N}$ is the $s \times N$ matrix and $1_{N}$ the $N \times N$ matrix whose elements are all equal to $1/N$. The centered test kernel matrix $K^{C}(j)$ can then be written as

$$ \begin{aligned} K^{C}(j) &= \begin{pmatrix} \bigl(\phi(t^{1}) - \frac{1}{N}\sum_{p=1}^{N}\phi(x^{1}_{p})\bigr)^{T} \\ \vdots \\ \bigl(\phi(t^{s}) - \frac{1}{N}\sum_{p=1}^{N}\phi(x^{s}_{p})\bigr)^{T} \end{pmatrix} \Bigl(\phi(x^{j}_{1}) - \frac{1}{N}\sum_{q=1}^{N}\phi(x^{j}_{q}),\; \ldots,\; \phi(x^{j}_{N}) - \frac{1}{N}\sum_{q=1}^{N}\phi(x^{j}_{q})\Bigr) \\ &= K(j) - K(j)1_{N} - \hat{1}_{N}\hat{K}(j) + \hat{1}_{N}\hat{K}(j)1_{N}, \end{aligned} $$
(A.4)

where $\hat{K}(j)$ is an $N \times N$ matrix whose element $(\hat{K}(j))_{ab}$ is

$$ (\hat{K}(j))_{{ab}} = \phi (x^{a}_{p})\phi (x^{j}_{b})^{T} $$
(A.5)
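
Once $K(j)$ and $\hat{K}(j)$ are available as Gram matrices, (A.4) can be evaluated without explicit feature maps. The sketch below is an illustrative assumption of how this might look in NumPy: K_j is taken to be the $s \times N$ test kernel matrix and Khat_j the $N \times N$ matrix of (A.5); the names are not from the paper.

```python
import numpy as np

def center_test_kernel(K_j, Khat_j):
    """Center the s x N test kernel matrix K(j) as in (A.4)."""
    s, N = K_j.shape
    one_N = np.full((N, N), 1.0 / N)     # 1_N: N x N, all entries 1/N
    one_hat = np.full((s, N), 1.0 / N)   # \hat{1}_N: s x N, all entries 1/N
    return K_j - K_j @ one_N - one_hat @ Khat_j + one_hat @ Khat_j @ one_N
```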


About this article

Cite this article

Sun, N., Wang, Hx., Ji, Zh. et al. An efficient algorithm for Kernel two-dimensional principal component analysis. Neural Comput & Applic 17, 59–64 (2008). https://doi.org/10.1007/s00521-007-0111-0
