
CN112966735A - Supervision multi-set correlation feature fusion method based on spectral reconstruction - Google Patents


Info

Publication number
CN112966735A
CN112966735A (application CN202110235178.4A)
Authority
CN
China
Prior art keywords
matrix
group
fractional
intra
class
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110235178.4A
Other languages
Chinese (zh)
Other versions
CN112966735B (en)
Inventor
袁运浩
朱莉
李云
强继朋
朱毅
李斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yangzhou University
Original Assignee
Yangzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yangzhou University
Publication of CN112966735A
Application granted
Publication of CN112966735B
Legal status: Active
Anticipated expiration legal-status Critical

Classifications

    • G06F18/253: Fusion techniques of extracted features
    • G06F17/16: Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G06F17/18: Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • G06F18/213: Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06V10/42: Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Artificial Intelligence (AREA)
  • Algebra (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Operations Research (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a supervised multi-set correlation feature fusion method based on spectral reconstruction, comprising: 1) defining the projection directions of the training sample sets; 2) computing the inter-group intra-class correlation matrices and auto-covariance matrices of the training samples; 3) performing singular value decomposition on the inter-group intra-class correlation matrices and eigenvalue decomposition on the auto-covariance matrices; 4) reconstructing the fractional-order inter-group intra-class correlation matrices and fractional-order auto-covariance matrices; 5) building the optimization model of FDMCCA; 6) solving for the eigenvector matrix to form the projection matrices; 7) fusing the dimension-reduced features; 8) selecting different numbers of images as training and test sets, and computing the recognition rate. The invention effectively handles the information fusion problem for multiple view data, while the introduction of fractional-order parameters weakens the influence of noise interference and limited training samples and improves the accuracy of system recognition.

Description

Supervision multi-set correlation feature fusion method based on spectral reconstruction
Technical Field
The invention relates to the field of pattern recognition, and in particular to a supervised multi-set correlation feature fusion method based on spectral reconstruction.
Background
Canonical correlation analysis (CCA) studies the linear correlation between two sets of data. CCA linearly projects two sets of random variables into a low-dimensional subspace in which their correlation is maximized. Researchers use CCA to simultaneously reduce the dimensionality of two sets of feature vectors (i.e., two views) to obtain two low-dimensional feature representations, which are then fused into discriminative features, thereby improving classification accuracy. Because the method is simple and effective, CCA is widely applied in blind source separation, computer vision, speech recognition, and other fields.
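For the two-view case described above, the following is a minimal NumPy sketch of CCA via whitening and an SVD of the whitened cross-covariance (a standard solution strategy; the function name and the small ridge term `reg` are illustrative assumptions, not part of the patent):

```python
import numpy as np

def cca(X, Y, d, reg=1e-6):
    """Minimal two-view CCA: project X (p x n) and Y (q x n) onto d
    maximally correlated direction pairs.  `reg` is a small ridge term
    added to the auto-covariances for numerical stability."""
    n = X.shape[1]
    X = X - X.mean(axis=1, keepdims=True)   # center each view
    Y = Y - Y.mean(axis=1, keepdims=True)
    Cxx = X @ X.T / n + reg * np.eye(X.shape[0])
    Cyy = Y @ Y.T / n + reg * np.eye(Y.shape[0])
    Cxy = X @ Y.T / n
    # whiten each view, then SVD the whitened cross-covariance
    Kx = np.linalg.inv(np.linalg.cholesky(Cxx))
    Ky = np.linalg.inv(np.linalg.cholesky(Cyy))
    U, s, Vt = np.linalg.svd(Kx @ Cxy @ Ky.T)
    Wx = Kx.T @ U[:, :d]        # canonical directions for X
    Wy = Ky.T @ Vt[:d, :].T     # canonical directions for Y
    return Wx, Wy, s[:d]        # s holds the canonical correlations
```

With this construction, `Wx.T @ Cxx @ Wx` is the identity and `Wx.T @ Cxy @ Wy` is diagonal with the canonical correlations on its diagonal.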
Canonical correlation analysis is an unsupervised linear learning method. In real applications, however, the dependency between two views often cannot be represented linearly, and when the relationship between the two views is nonlinear, CCA is no longer appropriate. Kernel canonical correlation analysis (KCCA) was proposed to address this: it is a nonlinear extension of CCA and works well on simple nonlinear problems. For more complex nonlinear problems, deep canonical correlation analysis (Deep CCA), which combines a deep neural network with CCA, can learn complex nonlinear relationships between the two views. From another perspective on nonlinear extension, the idea of locality can be incorporated into CCA, giving rise to locality preserving canonical correlation analysis (LPCCA), which finds the local manifold structure of each view and can be used to visualize the data.
Although CCA performs well on some pattern recognition problems, it is an unsupervised learning method and does not use class label information, which both wastes available supervision and degrades recognition. To address this, researchers proposed discriminant canonical correlation analysis (DCCA), which takes the inter-class and intra-class information of the samples into account. DCCA maximizes the correlation between features of samples from the same class and minimizes the correlation between features of samples from different classes, improving the accuracy of pattern classification.
The above methods all analyze the relationship between two views and are therefore limited when three or more views are available. Multiple-set canonical correlation analysis (MCCA) is a multi-view extension of CCA: it retains CCA's property of maximizing the correlation between views while removing the two-view restriction, improving recognition performance. Combining MCCA with DCCA, researchers proposed discriminant multiple-set canonical correlation analysis (DMCCA); experiments show that it performs well in face recognition, handwritten digit recognition, emotion recognition, and related tasks.
When noise interference is present or training samples are few, the auto-covariance and cross-covariance matrices in CCA deviate from their true values, degrading the final recognition. To address this, researchers combined the fractional-order idea with CCA, reconstructing the auto-covariance and cross-covariance matrices by introducing fractional-order parameters, and proposed fractional-order embedding canonical correlation analysis; this weakens the influence of the deviation and improves recognition performance.
In summary, traditional canonical correlation analysis mainly studies the correlation between two views; it is an unsupervised learning method, ignores class label information, and cannot directly process high-dimensional data from more than two views.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a supervised multi-set correlation feature fusion method based on spectral reconstruction (FDMCCA), which effectively handles the multi-view feature fusion problem, while the introduction of fractional-order parameters weakens the influence of noise interference and limited training samples and improves the accuracy of system recognition.
The purpose of the invention is realized as follows: a supervised multi-set correlation feature fusion method based on spectral reconstruction, comprising the following steps:

Step 1) Assume there are P groups of training samples, each group with zero mean and c classes:

$$X_i = [x_{11}^{(i)}, \ldots, x_{1n_1}^{(i)}, \ldots, x_{c1}^{(i)}, \ldots, x_{cn_c}^{(i)}] \in \mathbb{R}^{m_i \times n}, \quad i = 1, 2, \ldots, P,$$

where $x_{jk}^{(i)}$ denotes the k-th sample of the j-th class in the i-th group, $m_i$ is the feature dimension of the i-th data set, $n_j$ is the number of samples of the j-th class, and $n = n_1 + \cdots + n_c$. Define the projection direction of each training sample set as $\omega_i \in \mathbb{R}^{m_i}$, $i = 1, 2, \ldots, P$;
Step 2) Compute the inter-group intra-class correlation matrices of the training samples

$$\tilde{C}_{ij} = X_i A X_j^T \quad (i \neq j)$$

and the auto-covariance matrices

$$C_{ii} = X_i X_i^T,$$

where $A = \operatorname{diag}(\mathbf{1}_{n_1 \times n_1}, \ldots, \mathbf{1}_{n_c \times n_c})$ and $\mathbf{1}_{n_j \times n_j}$ represents a matrix in which every element is 1;
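Step 2) can be sketched in NumPy as follows. The sketch assumes the columns of each X_i are sorted by class and omits any normalizing constant, which only rescales the eigenvalues of the later generalized eigenproblem; the function name and calling convention are illustrative:

```python
import numpy as np

def within_class_matrices(Xs, class_sizes):
    """Inter-group intra-class correlation matrices C[i, j] = X_i A X_j^T
    (i != j) and auto-covariance matrices C[i, i] = X_i X_i^T, where A is
    block-diagonal with an all-ones n_j x n_j block per class.  Assumes
    the columns of every X_i are sorted by class."""
    n = sum(class_sizes)
    A = np.zeros((n, n))
    k = 0
    for nj in class_sizes:
        A[k:k + nj, k:k + nj] = 1.0   # all-ones block for this class
        k += nj
    C = {}
    for i, Xi in enumerate(Xs):
        for j, Xj in enumerate(Xs):
            C[i, j] = Xi @ Xi.T if i == j else Xi @ A @ Xj.T
    return C
```

Because A is symmetric, the off-diagonal blocks satisfy C[i, j] = C[j, i]^T, which is what makes the later block matrix E symmetric.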
Step 3) Perform singular value decomposition on each inter-group intra-class correlation matrix $\tilde{C}_{ij}$ obtained in step 2) to obtain its left and right singular vector matrices and singular value matrix, and perform eigenvalue decomposition on each auto-covariance matrix $C_{ii}$ to obtain its eigenvector matrix and eigenvalue matrix;
Step 4) Select suitable fractional-order parameters α and β, re-weight the singular value and eigenvalue matrices obtained in step 3), and construct the fractional-order inter-group intra-class correlation matrices $\tilde{C}_{ij}^{\alpha}$ and the fractional-order auto-covariance matrices $C_{ii}^{\beta}$;
Step 5) constructing an optimized model of FDMCCA as
Figure BDA00029596847600000310
Wherein
Figure BDA00029596847600000311
Introducing a Lagrange multiplier method to obtain a generalized characteristic value problem E omega which is mu F omega, calculating a projection direction omega, wherein mu is a characteristic value,
Figure BDA0002959684760000041
Step 6) Considering that the auto-covariance matrices may be singular, introduce a regularization parameter η on the basis of step 5) and establish the regularized optimization model

$$\max_{\omega_1, \ldots, \omega_P} \sum_{i=1}^{P} \sum_{j=1, j \neq i}^{P} \omega_i^T \tilde{C}_{ij}^{\alpha} \omega_j \quad \text{s.t.} \quad \sum_{i=1}^{P} \omega_i^T (C_{ii}^{\beta} + \eta I_{m_i}) \omega_i = 1.$$

Introducing the Lagrange multiplier method yields the generalized eigenvalue problem

$$E\omega = \mu (F + \eta I)\omega,$$

where $I_{m_i}$ is the identity matrix of size $m_i \times m_i$, $i = 1, 2, \ldots, P$;
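The regularized generalized eigenvalue problem of steps 5) and 6) can be assembled and solved as in the following NumPy/SciPy sketch. The function name, argument layout, and the block structure of E and F are illustrative assumptions; the off-diagonal blocks must satisfy `C_alpha[i][j] = C_alpha[j][i].T` so that E is symmetric:

```python
import numpy as np
from scipy.linalg import eigh, block_diag

def solve_fdmcca_eig(C_alpha, C_beta, dims, eta=1e-4):
    """Assemble E (off-diagonal blocks: fractional intra-class correlation
    matrices C_alpha[i][j]) and F (block-diagonal fractional auto-covariances
    C_beta[i]), then solve E w = mu (F + eta I) w.  Returns eigenvalues in
    descending order with their eigenvectors as columns."""
    P = len(dims)
    m = sum(dims)
    off = np.cumsum([0] + list(dims))       # block offsets
    E = np.zeros((m, m))
    for i in range(P):
        for j in range(P):
            if i != j:
                E[off[i]:off[i + 1], off[j]:off[j + 1]] = C_alpha[i][j]
    F = block_diag(*C_beta) + eta * np.eye(m)
    mu, W = eigh(E, F)                      # generalized symmetric problem
    order = np.argsort(mu)[::-1]            # sort eigenvalues descending
    return mu[order], W[:, order]
```

For P = 2 the spectrum of E is symmetric about zero (the eigenvalues are plus and minus the singular values of the single off-diagonal block), which gives a quick sanity check.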
Step 7) Solve for the eigenvectors corresponding to the first d largest eigenvalues of the generalized eigenvalue problem in step 6), thereby forming the projection matrix of each group of data: $W_i = [\omega_{i1}, \omega_{i2}, \ldots, \omega_{id}]$, $i = 1, 2, \ldots, P$, $d \leq \min\{m_1, \ldots, m_P\}$;
Step 8) Using the projection matrix $W_i$ of each group of data, compute the low-dimensional projections of each group of training and test samples, form the final fusion features for classification with a serial feature fusion strategy, and compute the recognition rate.
Further, the singular value decomposition of the inter-group intra-class correlation matrix $\tilde{C}_{ij}$ and the eigenvalue decomposition of the auto-covariance matrix $C_{ii}$ in step 3) comprise the following steps:

Step 3-1) Perform singular value decomposition on the inter-group intra-class correlation matrix $\tilde{C}_{ij}$:

$$\tilde{C}_{ij} = U_{ij} \Sigma_{ij} V_{ij}^T,$$

where $U_{ij}$ and $V_{ij}$ are the left and right singular vector matrices of $\tilde{C}_{ij}$, $\Sigma_{ij} = \operatorname{diag}(\sigma_1, \ldots, \sigma_{r_{ij}}, 0, \ldots, 0)$ is the diagonal matrix of singular values of $\tilde{C}_{ij}$, and $r_{ij} = \operatorname{rank}(\tilde{C}_{ij})$;

Step 3-2) Perform eigenvalue decomposition on the auto-covariance matrix $C_{ii}$:

$$C_{ii} = Q_i \Lambda_i Q_i^T,$$

where $Q_i$ is the eigenvector matrix of $C_{ii}$, $\Lambda_i = \operatorname{diag}(\lambda_1, \ldots, \lambda_{r_i}, 0, \ldots, 0)$ is the eigenvalue matrix of $C_{ii}$, and $r_i = \operatorname{rank}(C_{ii})$.
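The two decompositions in steps 3-1) and 3-2) map directly onto `numpy.linalg.svd` and `numpy.linalg.eigh`; a minimal check on illustrative data (the matrices here are random stand-ins, not patent data):

```python
import numpy as np

# SVD of a (possibly rectangular) intra-class correlation matrix C_ij,
# and EVD of a symmetric auto-covariance matrix C_ii.
rng = np.random.default_rng(0)
C_ij = rng.standard_normal((4, 3))
X = rng.standard_normal((4, 10))
C_ii = X @ X.T                      # symmetric positive semi-definite

U, s, Vt = np.linalg.svd(C_ij)      # C_ij = U diag(s) Vt
lam, Q = np.linalg.eigh(C_ii)       # C_ii = Q diag(lam) Q^T

# both factorizations reconstruct the original matrices
S = np.zeros_like(C_ij)
np.fill_diagonal(S, s)
assert np.allclose(U @ S @ Vt, C_ij)
assert np.allclose(Q @ np.diag(lam) @ Q.T, C_ii)
```

Note that `eigh` returns eigenvalues in ascending order, so selecting the largest ones requires reversing or sorting, as in step 7).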
Further, constructing the fractional-order inter-group intra-class correlation matrix $\tilde{C}_{ij}^{\alpha}$ and the fractional-order auto-covariance matrix $C_{ii}^{\beta}$ in step 4) comprises the following steps:

Step 4-1) Assume α is a fraction satisfying 0 ≤ α ≤ 1, and define the fractional-order inter-group intra-class correlation matrix $\tilde{C}_{ij}^{\alpha}$ as:

$$\tilde{C}_{ij}^{\alpha} = U_{ij} \Sigma_{ij}^{\alpha} V_{ij}^T,$$

where $\Sigma_{ij}^{\alpha} = \operatorname{diag}(\sigma_1^{\alpha}, \ldots, \sigma_{r_{ij}}^{\alpha}, 0, \ldots, 0)$, and $U_{ij}$, $V_{ij}$ and $r_{ij}$ are defined in step 3-1);

Step 4-2) Assume β is a fraction satisfying 0 ≤ β ≤ 1, and define the fractional-order auto-covariance matrix $C_{ii}^{\beta}$ as:

$$C_{ii}^{\beta} = Q_i \Lambda_i^{\beta} Q_i^T,$$

where $\Lambda_i^{\beta} = \operatorname{diag}(\lambda_1^{\beta}, \ldots, \lambda_{r_i}^{\beta}, 0, \ldots, 0)$, and $Q_i$ and $r_i$ are defined in step 3-2).
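The fractional-order matrices of steps 4-1) and 4-2) keep the singular vectors and eigenvectors unchanged and only raise the spectrum to the power α or β; a NumPy sketch (function names are illustrative, and setting the parameter to 1 recovers the original matrix):

```python
import numpy as np

def fractional_svd(C, alpha):
    """Spectral reconstruction C^alpha = U diag(s**alpha) V^T, 0 <= alpha <= 1:
    keep the singular vectors, shrink the singular-value spectrum."""
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    return U @ np.diag(s ** alpha) @ Vt

def fractional_evd(C, beta):
    """C^beta = Q diag(lam**beta) Q^T for a symmetric PSD matrix; tiny
    negative eigenvalues caused by round-off are clipped to zero first."""
    lam, Q = np.linalg.eigh(C)
    lam = np.clip(lam, 0.0, None)
    return Q @ np.diag(lam ** beta) @ Q.T
```

Because 0 ≤ α ≤ 1, large spectral components are shrunk more than small ones in relative terms, which is what dampens the covariance deviation caused by noise and small sample sizes.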
Compared with the prior art, the invention has the following beneficial effects: on the basis of canonical correlation analysis, it combines fractional-order embedding canonical correlation analysis (FECCA) with discriminant multiple-set canonical correlation analysis (DMCCA), making full use of class label information; it can handle the information fusion problem for more than two views and is therefore applicable to multi-view feature fusion; the introduction of fractional-order parameters reduces the influence of noise interference and limited training samples and improves the accuracy of face recognition; it retains a good recognition effect when the number of training samples is small; it performs both dimensionality reduction and multi-view feature fusion; and because class label information is exploited, its recognition performance is superior to that of comparable methods of the same class.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a line graph of the recognition rate of the invention and other methods as a function of dimension.
FIG. 3 is a graph of the recognition rate of the present invention at different numbers of training samples.
Detailed Description
As shown in FIG. 1, a supervised multi-set correlation feature fusion method based on spectral reconstruction comprises the following steps:
Step 1) Assume there are P groups of training samples, each group with zero mean and c classes:

$$X_i = [x_{11}^{(i)}, \ldots, x_{1n_1}^{(i)}, \ldots, x_{c1}^{(i)}, \ldots, x_{cn_c}^{(i)}] \in \mathbb{R}^{m_i \times n}, \quad i = 1, 2, \ldots, P,$$

where $x_{jk}^{(i)}$ denotes the k-th sample of the j-th class in the i-th group, $m_i$ is the feature dimension of the i-th data set, and $n_j$ is the number of samples of the j-th class. Define the projection direction of each training sample set as $\omega_i \in \mathbb{R}^{m_i}$, $i = 1, 2, \ldots, P$;
Step 2) calculating an intra-class correlation matrix of the interclass training samples
Figure BDA0002959684760000064
Sum auto-covariance matrix
Figure BDA0002959684760000065
Wherein
Figure BDA0002959684760000066
A matrix representing each element as 1;
Step 3) Perform singular value decomposition on each inter-group intra-class correlation matrix $\tilde{C}_{ij}$ obtained in step 2) to obtain its left and right singular vector matrices and singular value matrix, and perform eigenvalue decomposition on each auto-covariance matrix $C_{ii}$ to obtain its eigenvector matrix and eigenvalue matrix:

Step 3-1) Perform singular value decomposition on the inter-group intra-class correlation matrix $\tilde{C}_{ij}$:

$$\tilde{C}_{ij} = U_{ij} \Sigma_{ij} V_{ij}^T,$$

where $U_{ij}$ and $V_{ij}$ are the left and right singular vector matrices of $\tilde{C}_{ij}$, $\Sigma_{ij} = \operatorname{diag}(\sigma_1, \ldots, \sigma_{r_{ij}}, 0, \ldots, 0)$ is the diagonal matrix of singular values of $\tilde{C}_{ij}$, and $r_{ij} = \operatorname{rank}(\tilde{C}_{ij})$;

Step 3-2) Perform eigenvalue decomposition on the auto-covariance matrix $C_{ii}$:

$$C_{ii} = Q_i \Lambda_i Q_i^T,$$

where $Q_i$ is the eigenvector matrix of $C_{ii}$, $\Lambda_i = \operatorname{diag}(\lambda_1, \ldots, \lambda_{r_i}, 0, \ldots, 0)$ is the eigenvalue matrix of $C_{ii}$, and $r_i = \operatorname{rank}(C_{ii})$.
Step 4) Select suitable fractional-order parameters α and β, re-weight the singular value and eigenvalue matrices obtained in step 3), and construct the fractional-order inter-group intra-class correlation matrices $\tilde{C}_{ij}^{\alpha}$ and the fractional-order auto-covariance matrices $C_{ii}^{\beta}$:

Step 4-1) Assume α is a fraction satisfying 0 ≤ α ≤ 1, and define the fractional-order inter-group intra-class correlation matrix $\tilde{C}_{ij}^{\alpha}$ as:

$$\tilde{C}_{ij}^{\alpha} = U_{ij} \Sigma_{ij}^{\alpha} V_{ij}^T,$$

where $\Sigma_{ij}^{\alpha} = \operatorname{diag}(\sigma_1^{\alpha}, \ldots, \sigma_{r_{ij}}^{\alpha}, 0, \ldots, 0)$, and $U_{ij}$, $V_{ij}$ and $r_{ij}$ are defined in step 3-1);

Step 4-2) Assume β is a fraction satisfying 0 ≤ β ≤ 1, and define the fractional-order auto-covariance matrix $C_{ii}^{\beta}$ as:

$$C_{ii}^{\beta} = Q_i \Lambda_i^{\beta} Q_i^T,$$

where $\Lambda_i^{\beta} = \operatorname{diag}(\lambda_1^{\beta}, \ldots, \lambda_{r_i}^{\beta}, 0, \ldots, 0)$, and $Q_i$ and $r_i$ are defined in step 3-2).
Step 5) Construct the optimization model of FDMCCA:

$$\max_{\omega_1, \ldots, \omega_P} \sum_{i=1}^{P} \sum_{j=1, j \neq i}^{P} \omega_i^T \tilde{C}_{ij}^{\alpha} \omega_j \quad \text{s.t.} \quad \sum_{i=1}^{P} \omega_i^T C_{ii}^{\beta} \omega_i = 1,$$

where $\omega = [\omega_1^T, \omega_2^T, \ldots, \omega_P^T]^T$. Introducing the Lagrange multiplier method yields the generalized eigenvalue problem $E\omega = \mu F\omega$, from which the projection direction $\omega$ is computed, where $\mu$ is the eigenvalue;
Step 6) Considering that the auto-covariance matrices may be singular, introduce a regularization parameter η on the basis of step 5) and establish the regularized optimization model

$$\max_{\omega_1, \ldots, \omega_P} \sum_{i=1}^{P} \sum_{j=1, j \neq i}^{P} \omega_i^T \tilde{C}_{ij}^{\alpha} \omega_j \quad \text{s.t.} \quad \sum_{i=1}^{P} \omega_i^T (C_{ii}^{\beta} + \eta I_{m_i}) \omega_i = 1.$$

Introducing the Lagrange multiplier method yields the generalized eigenvalue problem

$$E\omega = \mu (F + \eta I)\omega,$$

where $I_{m_i}$ is the identity matrix of size $m_i \times m_i$, $i = 1, 2, \ldots, P$;
Step 7) Solve for the eigenvectors corresponding to the first d largest eigenvalues of the generalized eigenvalue problem in step 6), thereby forming the projection matrix of each group of data: $W_i = [\omega_{i1}, \omega_{i2}, \ldots, \omega_{id}]$, $i = 1, 2, \ldots, P$, $d \leq \min\{m_1, \ldots, m_P\}$;
Step 8) Using the projection matrix $W_i$ of each group of data, compute the low-dimensional projections of each group of training and test samples, form the final fusion features for classification with a serial feature fusion strategy, and compute the recognition rate.
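The projection, serial feature fusion, and recognition-rate computation of step 8) can be sketched as follows; the function name and the Euclidean nearest neighbour rule are assumptions consistent with the classifier used later in the embodiment:

```python
import numpy as np

def fuse_and_classify(Ws, train_views, test_views, train_labels, test_labels):
    """Project each view with its W_i, concatenate the d-dimensional
    projections (serial feature fusion), classify with 1-nearest-neighbour,
    and return the recognition rate."""
    Ztr = np.vstack([W.T @ X for W, X in zip(Ws, train_views)])  # (P*d, n_tr)
    Zte = np.vstack([W.T @ X for W, X in zip(Ws, test_views)])   # (P*d, n_te)
    # squared Euclidean distance between every test and training sample
    d2 = ((Zte.T[:, None, :] - Ztr.T[None, :, :]) ** 2).sum(-1)
    pred = np.asarray(train_labels)[d2.argmin(axis=1)]
    return float((pred == np.asarray(test_labels)).mean())
```

Serial fusion simply stacks the P projected views into one (P times d)-dimensional feature vector per sample before classification.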
The invention can be further illustrated by the following example. Take the CMU-PIE face database, which contains face images of 68 persons, each image of size 64 × 64. In this experiment, the first 10 images of each person are used as the training set and the following 14 images as the test set. The input face image data is read to form three different features: feature 1 is the original image data, feature 2 is the median-filtered image data, and feature 3 is the mean-filtered image data. The dimensionality of each feature is reduced using principal component analysis to form the final three groups of feature data.
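The three-view construction described in this example might be sketched as follows; the 3 × 3 filter window and the SVD-based PCA are assumptions, since the patent does not specify the filter size or the PCA implementation:

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def build_three_views(images, n_components):
    """From a stack of images (n, h, w), build three views: raw pixels,
    median-filtered, and mean-filtered; then reduce each view with PCA.
    Returns a list of (n_components, n) arrays, one per view."""
    feats = [images,
             median_filter(images, size=(1, 3, 3)),   # per-image 3x3 median
             uniform_filter(images, size=(1, 3, 3))]  # per-image 3x3 mean
    views = []
    for f in feats:
        Z = f.reshape(len(images), -1).astype(float)
        Z = Z - Z.mean(axis=0)                        # center features
        _, _, Vt = np.linalg.svd(Z, full_matrices=False)
        views.append((Z @ Vt[:n_components].T).T)     # (n_components, n)
    return views
```

The `size=(1, 3, 3)` argument applies the filter within each image only, never mixing pixels across the sample axis.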
Step 1) Construct three groups of zero-mean data $X_i$, $i = 1, 2, 3$, and define the projection direction of each training sample set as $\omega_i \in \mathbb{R}^{m_i}$, $i = 1, 2, 3$;
Step 2) FDMCCA aims to maximize the correlation of samples within a class and minimize the correlation of samples between classes. Compute the inter-group intra-class correlation matrices $\tilde{C}_{ij} = X_i A X_j^T$ ($i \neq j$) and the auto-covariance matrices $C_{ii} = X_i X_i^T$, where $A = \operatorname{diag}(\mathbf{1}_{n_1 \times n_1}, \ldots, \mathbf{1}_{n_c \times n_c})$ and $\mathbf{1}_{n_j \times n_j}$ represents a matrix in which every element is 1;
Step 3) Perform singular value decomposition on each inter-group intra-class correlation matrix $\tilde{C}_{ij}$ obtained in step 2) to obtain its left and right singular vector matrices and singular value matrix, and perform eigenvalue decomposition on each auto-covariance matrix $C_{ii}$ to obtain its eigenvector matrix and eigenvalue matrix:

Step 3-1) Perform singular value decomposition on the inter-group intra-class correlation matrix $\tilde{C}_{ij}$:

$$\tilde{C}_{ij} = U_{ij} \Sigma_{ij} V_{ij}^T,$$

where $U_{ij}$ and $V_{ij}$ are the left and right singular vector matrices of $\tilde{C}_{ij}$, $\Sigma_{ij} = \operatorname{diag}(\sigma_1, \ldots, \sigma_{r_{ij}}, 0, \ldots, 0)$ is the diagonal matrix of singular values of $\tilde{C}_{ij}$, and $r_{ij} = \operatorname{rank}(\tilde{C}_{ij})$;

Step 3-2) Perform eigenvalue decomposition on the auto-covariance matrix $C_{ii}$:

$$C_{ii} = Q_i \Lambda_i Q_i^T,$$

where $Q_i$ is the eigenvector matrix of $C_{ii}$, $\Lambda_i = \operatorname{diag}(\lambda_1, \ldots, \lambda_{r_i}, 0, \ldots, 0)$ is the eigenvalue matrix of $C_{ii}$, and $r_i = \operatorname{rank}(C_{ii})$.
Step 4) Let the fractional-order parameters α and β take values in {0.1, 0.2, …, 1}. Select suitable α and β, re-weight the singular value and eigenvalue matrices obtained in step 3), and construct the fractional-order inter-group intra-class correlation matrices $\tilde{C}_{ij}^{\alpha}$ and the fractional-order auto-covariance matrices $C_{ii}^{\beta}$:
Step 4-1) Assume α is a fraction satisfying 0 ≤ α ≤ 1, and define the fractional-order inter-group intra-class correlation matrix $\tilde{C}_{ij}^{\alpha}$ as:

$$\tilde{C}_{ij}^{\alpha} = U_{ij} \Sigma_{ij}^{\alpha} V_{ij}^T,$$

where $\Sigma_{ij}^{\alpha} = \operatorname{diag}(\sigma_1^{\alpha}, \ldots, \sigma_{r_{ij}}^{\alpha}, 0, \ldots, 0)$, and $U_{ij}$, $V_{ij}$ and $r_{ij}$ are defined in step 3-1);

Step 4-2) Assume β is a fraction satisfying 0 ≤ β ≤ 1, and define the fractional-order auto-covariance matrix $C_{ii}^{\beta}$ as:

$$C_{ii}^{\beta} = Q_i \Lambda_i^{\beta} Q_i^T,$$

where $\Lambda_i^{\beta} = \operatorname{diag}(\lambda_1^{\beta}, \ldots, \lambda_{r_i}^{\beta}, 0, \ldots, 0)$, and $Q_i$ and $r_i$ are defined in step 3-2).
Step 5) Construct the optimization model of FDMCCA:

$$\max_{\omega_1, \omega_2, \omega_3} \sum_{i=1}^{3} \sum_{j=1, j \neq i}^{3} \omega_i^T \tilde{C}_{ij}^{\alpha} \omega_j \quad \text{s.t.} \quad \sum_{i=1}^{3} \omega_i^T C_{ii}^{\beta} \omega_i = 1,$$

where $\omega = [\omega_1^T, \omega_2^T, \omega_3^T]^T$. Introducing the Lagrange multiplier method yields the generalized eigenvalue problem $E\omega = \mu F\omega$, from which the projection direction $\omega$ is then determined;
Step 6) Considering that the auto-covariance matrices may be singular, introduce a regularization parameter η on the basis of step 5), with η taking values in $\{10^{-5}, 10^{-4}, \ldots, 10\}$, and establish the regularized optimization model

$$\max_{\omega_1, \omega_2, \omega_3} \sum_{i=1}^{3} \sum_{j=1, j \neq i}^{3} \omega_i^T \tilde{C}_{ij}^{\alpha} \omega_j \quad \text{s.t.} \quad \sum_{i=1}^{3} \omega_i^T (C_{ii}^{\beta} + \eta I_{m_i}) \omega_i = 1.$$

Introducing the Lagrange multiplier method yields the generalized eigenvalue problem

$$E\omega = \mu (F + \eta I)\omega;$$
Step 7) Solve for the projection direction ω from the generalized eigenvalue problem in step 6). Specifically, solve for the eigenvectors corresponding to the first d largest eigenvalues, thereby forming the projection matrix of each group of data: $W_i = [\omega_{i1}, \omega_{i2}, \ldots, \omega_{id}]$, $i = 1, 2, 3$, $d \leq \min\{m_1, m_2, m_3\}$;
Step 8) Using the projection matrix $W_i$ of each group of data, compute the low-dimensional projections of each group of training and test samples, form the final fusion features with a serial feature fusion strategy, classify with a nearest neighbor classifier, and compute the recognition rate. The recognition results are shown in Table 1 and FIG. 2 (BASELINE refers to the classification result obtained by concatenating the three features in series). As can be seen from Table 1 and FIG. 2, the proposed FDMCCA method outperforms the other methods. This is because, compared with MCCA, CCA, and BASELINE, FDMCCA is a supervised learning method with prior information and can obtain a better recognition effect; and compared with DMCCA, FDMCCA introduces the fractional-order idea to correct the covariance deviation caused by noise interference and other factors, improving recognition accuracy.
TABLE 1 Recognition rate on the CMU-PIE dataset

Method                          Recognition rate (%)
MCCA                            84.09
CCA (feature 1 + feature 2)     71.43
CCA (feature 1 + feature 3)     74.03
CCA (feature 2 + feature 3)     76.30
BASELINE                        48.05
DMCCA                           79.22
FDMCCA                          86.04
To examine the influence of the number of training samples on the recognition rate, the fractional-order parameters α and β and the regularization parameter η are fixed, and different numbers of images are selected as the training and test sets; the recognition rates are shown in FIG. 3. As can be seen from FIG. 3, FDMCCA works better when there are fewer training samples.
In summary, the invention provides a supervised multi-set correlation feature fusion method based on spectral reconstruction (FDMCCA) by introducing the fractional-order embedding idea into the CCA framework. By introducing fractional-order parameters, the method corrects the deviation of the intra-class correlation and auto-covariance matrices caused by noise interference and limited training samples. At the same time, it makes full use of class label information, can solve the information fusion problem for more than two views, and has a wider application range and better recognition performance.
The present invention is not limited to the above-mentioned embodiments, and based on the technical solutions disclosed in the present invention, those skilled in the art can make some substitutions and modifications to some technical features without creative efforts according to the disclosed technical contents, and these substitutions and modifications are all within the protection scope of the present invention.

Claims (3)

1.一种基于谱重建的监督多集相关特征融合方法,其特征在于,包括以下步骤:1. a kind of supervising multi-set related feature fusion method based on spectrum reconstruction, is characterized in that, comprises the following steps: 步骤1)假定有P组训练样本,其每组样本的均值为0并且类别数目为c,如下:Step 1) Suppose there are P groups of training samples, the mean of each group of samples is 0 and the number of categories is c, as follows:
Figure FDA0002959684750000011
Figure FDA0002959684750000011
其中
Figure FDA0002959684750000012
表示第i组中第j类的第k个样本,mi代表第i组数据集的特征维数,nj表示第j类样本数,定义训练样本集的投影方向为
Figure FDA0002959684750000013
in
Figure FDA0002959684750000012
Represents the kth sample of the jth class in the ith group, m i represents the feature dimension of the ith group data set, nj represents the number of the jth class samples, and defines the projection direction of the training sample set as
Figure FDA0002959684750000013
步骤2)计算组间训练样本的类内相关矩阵
Figure FDA0002959684750000014
和自协方差矩阵
Figure FDA0002959684750000015
其中
Figure FDA0002959684750000016
Figure FDA0002959684750000017
表示每个元素均为1的矩阵;
Step 2) Calculate the intra-class correlation matrix of the training samples between groups
Figure FDA0002959684750000014
and the autocovariance matrix
Figure FDA0002959684750000015
in
Figure FDA0002959684750000016
Figure FDA0002959684750000017
Represents a matrix where each element is 1;
步骤3)对步骤2)得到的组间类内相关矩阵
Figure FDA0002959684750000018
做奇异值分解得到左右奇异向量矩阵和奇异值矩阵,自协方差矩阵Cii做特征值分解,得到特征向量矩阵和特征值矩阵;
Step 3) For the intra-group correlation matrix obtained in step 2)
Figure FDA0002959684750000018
Do singular value decomposition to obtain left and right singular vector matrix and singular value matrix, and do eigenvalue decomposition of self-covariance matrix C ii to obtain eigenvector matrix and eigenvalue matrix;
步骤4)选择合适的分数阶参数α和β,对步骤3)得到奇异值矩阵和特征值矩阵重新赋值,构建分数阶组间类内相关矩阵
Figure FDA0002959684750000019
和分数阶自协方差矩阵
Figure FDA00029596847500000110
Step 4) Select the appropriate fractional parameters α and β, reassign the singular value matrix and eigenvalue matrix obtained in step 3), and construct the fractional inter-group intra-class correlation matrix
Figure FDA0002959684750000019
and the fractional autocovariance matrix
Figure FDA00029596847500000110
步骤5)构建FDMCCA的最优化模型为
Figure FDA00029596847500000111
其中
Figure FDA00029596847500000112
引入拉格朗日乘子法,得到广义特征值问题Eω=μFω,求出投影方向ω,其中μ为特征值,
Step 5) Build the optimal model of FDMCCA as
Figure FDA00029596847500000111
in
Figure FDA00029596847500000112
By introducing the Lagrange multiplier method, the generalized eigenvalue problem Eω=μFω is obtained, and the projection direction ω is obtained, where μ is the eigenvalue,
Figure FDA0002959684750000021
Figure FDA0002959684750000021
Step 6) Considering that the auto-covariance matrix may be singular, introduce a regularization parameter η on the basis of step 5) and establish the regularized optimization model as

max Σ_{i≠j} ωi^T C̃ij^α ωj   s.t.   Σ_{i=1}^{P} ωi^T (Cii^β + ηI_mi) ωi = 1.

Introducing the Lagrange multiplier method yields the following generalized eigenvalue problem:

Eω = μ(F + ηI)ω,
where I = diag(I_m1, I_m2, …, I_mP) and I_mi is the identity matrix of size mi × mi, i = 1, 2, …, P;
Step 7) Solve the generalized eigenvalue problem in step 6) for the eigenvectors corresponding to the first d largest eigenvalues, thereby forming the projection matrix Wi = [ωi1, ωi2, …, ωid] of each group of data, i = 1, 2, …, P, d ≤ min{m1, …, mP};
Step 8) Using the projection matrix Wi of each group of data, compute the low-dimensional projections of each group of training samples and test samples respectively, then adopt the serial feature fusion strategy to form the final fused features used for classification, and compute the recognition rate.
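Steps 5) to 8) amount to assembling a block matrix E from the C̃ij^α blocks, a block-diagonal F from the Cii^β blocks, and solving the regularized problem Eω = μ(F + ηI)ω. A minimal NumPy sketch, with illustrative random data and helper names that are not taken from the patent (the Cholesky-whitening route is one standard way to solve the symmetric-definite problem):

```python
import numpy as np

def fdmcca_project(C_tilde, C_self, eta=1e-3, d=2):
    """Top-d directions of E w = mu (F + eta*I) w via Cholesky whitening.
    C_tilde[i][j] (i != j): fractional between-group within-class blocks of E;
    C_self[i]: fractional auto-covariance blocks of the block-diagonal F."""
    dims = [C.shape[0] for C in C_self]
    offs = np.cumsum([0] + dims)
    m, P = offs[-1], len(C_self)
    E, F = np.zeros((m, m)), np.zeros((m, m))
    for i in range(P):
        F[offs[i]:offs[i+1], offs[i]:offs[i+1]] = C_self[i]
        for j in range(P):
            if i != j:
                E[offs[i]:offs[i+1], offs[j]:offs[j+1]] = C_tilde[i][j]
    R = np.linalg.cholesky(F + eta * np.eye(m))       # F + eta*I = R R^T
    S = np.linalg.solve(R, np.linalg.solve(R, E).T)   # R^-1 E R^-T, symmetric
    mu, Y = np.linalg.eigh(S)                         # ascending eigenvalues
    W = np.linalg.solve(R.T, Y[:, ::-1][:, :d])       # back-transform top d
    return [W[offs[i]:offs[i+1], :] for i in range(P)]

rng = np.random.default_rng(0)
n, dims = 12, [5, 4, 6]                               # P = 3 groups
Xs = [rng.standard_normal((mi, n)) for mi in dims]
Cs = [X @ X.T / n for X in Xs]                        # stand-ins for Cii^beta
Ct = {i: {j: Xs[i] @ Xs[j].T / n for j in range(3) if j != i} for i in range(3)}
Ws = fdmcca_project(Ct, Cs, eta=1e-2, d=2)
# Step 8, serial fusion: stack each group's d-dimensional projections.
Z = np.vstack([Ws[i].T @ Xs[i] for i in range(3)])
```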
2. The supervised multi-set correlation feature fusion method based on spectral reconstruction according to claim 1, wherein performing singular value decomposition on the within-class correlation matrix C̃ij and eigenvalue decomposition on the auto-covariance matrix Cii in step 3) comprises the following steps:
Step 3-1) Perform singular value decomposition on the within-class correlation matrix C̃ij:

C̃ij = Uij Λij Vij^T,
where Uij and Vij are respectively the left and right singular vector matrices of C̃ij, Λij is the diagonal matrix formed by the singular values of C̃ij, and rij = rank(C̃ij);
Step 3-2) Perform eigenvalue decomposition on the auto-covariance matrix Cii:

Cii = Qi Σi Qi^T,
where Qi is the eigenvector matrix of Cii, Σi is the diagonal matrix formed by the eigenvalues of Cii, and ri = rank(Cii).
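A brief sketch of steps 3-1) and 3-2) using NumPy's standard decompositions; the random matrices here merely stand in for the actual C̃ij and Cii:

```python
import numpy as np

rng = np.random.default_rng(1)
Xi = rng.standard_normal((5, 8))
Xj = rng.standard_normal((4, 8))

C_ij = Xi @ Xj.T / 8     # stand-in for the within-class correlation matrix
C_ii = Xi @ Xi.T / 8     # auto-covariance matrix (symmetric PSD)

# Step 3-1: SVD, C_ij = U_ij diag(s) V_ij^T
U_ij, s, Vt_ij = np.linalg.svd(C_ij, full_matrices=False)

# Step 3-2: EVD, C_ii = Q_i diag(lam) Q_i^T (eigh applies since C_ii is symmetric)
lam, Q_i = np.linalg.eigh(C_ii)
```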
3. The supervised multi-set correlation feature fusion method based on spectral reconstruction according to claim 1 or 2, wherein constructing the fractional-order between-group within-class correlation matrix C̃ij^α and the fractional-order auto-covariance matrix Cii^β in step 4) comprises the following steps:
Step 4-1) Assuming that α is a fraction satisfying 0 ≤ α ≤ 1, define the fractional-order between-group within-class correlation matrix C̃ij^α as:

C̃ij^α = Uij Λij^α Vij^T,
where Λij^α = diag(λ1^α, λ2^α, …, λrij^α), with λ1, …, λrij the singular values of C̃ij, and Uij, Vij and rij are defined in step 3-1);
Step 4-2) Assuming that β is a fraction satisfying 0 ≤ β ≤ 1, define the fractional-order auto-covariance matrix Cii^β as:

Cii^β = Qi Σi^β Qi^T,
where Σi^β = diag(σ1^β, σ2^β, …, σri^β), with σ1, …, σri the eigenvalues of Cii, and Qi and ri are defined in step 3-2).
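The fractional-order spectral reconstruction of claim 3 keeps the singular vectors (or eigenvectors) and raises the spectrum to the power α (or β). A sketch under the assumptions that the auto-covariance input is real symmetric PSD and that rank truncation uses NumPy's numerical rank (function names are illustrative, not the patent's):

```python
import numpy as np

def fractional_corr(C, alpha):
    # Step 4-1: keep singular vectors, raise singular values to the power alpha.
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    r = np.linalg.matrix_rank(C)
    return U[:, :r] @ np.diag(s[:r] ** alpha) @ Vt[:r, :]

def fractional_cov(C, beta):
    # Step 4-2: keep eigenvectors, raise eigenvalues to the power beta.
    lam, Q = np.linalg.eigh(C)
    lam = np.clip(lam, 0.0, None)   # guard tiny negatives from round-off
    return Q @ np.diag(lam ** beta) @ Q.T

rng = np.random.default_rng(2)
C = rng.standard_normal((4, 3))     # stand-in for a cross-correlation matrix
Cs = C @ C.T                        # symmetric PSD stand-in for Cii
half = fractional_cov(Cs, 0.5)      # "matrix square root" special case
```

Note that α = 1 and β = 1 recover the original matrices, so the fractional parameters interpolate between the identity-spectrum and original-spectrum extremes.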
CN202110235178.4A 2020-11-20 2021-03-03 A supervised multi-set correlation feature fusion method based on spectral reconstruction Active CN112966735B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011307376 2020-11-20
CN2020113073769 2020-11-20

Publications (2)

Publication Number Publication Date
CN112966735A true CN112966735A (en) 2021-06-15
CN112966735B CN112966735B (en) 2023-09-12

Family

ID=76276287

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110235178.4A Active CN112966735B (en) 2020-11-20 2021-03-03 A supervised multi-set correlation feature fusion method based on spectral reconstruction

Country Status (1)

Country Link
CN (1) CN112966735B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113887509A (en) * 2021-10-25 2022-01-04 济南大学 A Fast Multimodal Video Face Recognition Method Based on Image Collection
CN114510966A (en) * 2022-01-14 2022-05-17 电子科技大学 End-to-end brain causal network construction method based on graph neural network

Citations (4)

Publication number Priority date Publication date Assignee Title
CN109450499A (en) * 2018-12-13 2019-03-08 电子科技大学 A kind of robust Beamforming Method estimated based on steering vector and spatial power
WO2020010602A1 (en) * 2018-07-13 2020-01-16 深圳大学 Face recognition and construction method and system based on non-linear non-negative matrix decomposition, and storage medium
US20200272422A1 (en) * 2017-10-13 2020-08-27 Nippon Telegraph And Telephone Corporation Synthetic data generation apparatus, method for the same, and program
CN111611963A (en) * 2020-05-29 2020-09-01 扬州大学 A Face Recognition Method Based on Nearest Neighbor Preserving Canonical Correlation Analysis


Non-Patent Citations (1)

Title
HUI Xiaofeng; LI Bingna: "Research on determining the optimal dimension of multivariate GARCH models based on random matrix theory", Operations Research and Management Science, no. 04 *




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant