
CN118038231B - Brain network construction and feature extraction method for fusing multidimensional information in small sample scene - Google Patents


Info

Publication number
CN118038231B
CN118038231B (application CN202410437716.1A)
Authority
CN
China
Prior art keywords
brain
matrix
feature
anchor point
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410437716.1A
Other languages
Chinese (zh)
Other versions
CN118038231A (en)
Inventor
李元
赵峰
任延德
毛宁
宋木松
陈小波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Yiying Intelligent Technology Co ltd
Affiliated Hospital of University of Qingdao
Yantai Yuhuangding Hospital
Shandong Technology and Business University
Original Assignee
Shandong Yiying Intelligent Technology Co ltd
Affiliated Hospital of University of Qingdao
Yantai Yuhuangding Hospital
Shandong Technology and Business University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Yiying Intelligent Technology Co ltd, Affiliated Hospital of University of Qingdao, Yantai Yuhuangding Hospital, Shandong Technology and Business University filed Critical Shandong Yiying Intelligent Technology Co ltd
Priority to CN202410437716.1A
Publication of CN118038231A
Application granted granted Critical
Publication of CN118038231B
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06V10/811 - Fusion of classification results from classifiers operating on different input data, e.g. multi-modal recognition
    • G06N3/042 - Knowledge-based neural networks; logical representations of neural networks
    • G06N3/0455 - Auto-encoder networks; encoder-decoder networks
    • G06N3/0464 - Convolutional networks [CNN, ConvNet]
    • G06N3/082 - Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G06V10/40 - Extraction of image or video features
    • G06V10/765 - Image or video recognition using rules for classification or partitioning the feature space
    • G06V10/766 - Image or video recognition using regression, e.g. by projecting features on hyperplanes
    • G06V10/803 - Fusion of input or preprocessed data at the sensor, preprocessing, feature extraction or classification level
    • G06V10/806 - Fusion of extracted features
    • G06V10/82 - Image or video recognition or understanding using neural networks


Abstract


The present invention belongs to the technical field of multimodal brain image analysis and specifically relates to a brain network construction and feature extraction method that fuses multidimensional information in a small-sample scenario. The steps include: preprocessing brain images of different modalities to obtain processed brain images, labeling the processed brain images of each independent subject, and standardizing them; for the standardized brain images, introducing an L1 regularization term, obtaining a regression coefficient matrix and binarizing it to obtain a binarized multi-feature fusion matrix; extracting features from the multi-feature fusion matrix with a graph convolutional neural network equipped with a self-attention pooling layer; and constructing a quadruple Siamese network learning framework to mine the similarity between input features and finally achieve classification. The invention extracts brain network features more effectively, makes the features more discriminative, and is better suited to the auxiliary diagnosis of mental illness.

Description

Brain network construction and feature extraction method for fusing multidimensional information in small sample scene
Technical Field
The invention belongs to the technical field of multimodal brain image analysis, is used for the auxiliary diagnosis of mental diseases, and particularly relates to a brain network construction and feature extraction method fusing multidimensional information in a small-sample scenario.
Background
Multimodal brain image analysis combines the advantages of different imaging techniques to provide a more comprehensive and thorough understanding of brain structure and function from multiple perspectives. This analytical method has significant advantages in neuroscience, clinical diagnosis, and research: 1. Multimodal brain images simultaneously provide multi-view information on structure, function, metabolism, and more, giving a more complete picture of the brain; the information provided by different imaging techniques is complementary and helps characterize brain disease and function more accurately. 2. Data precision and reliability are enhanced, and the limitations of any single modality are reduced; combining multiple data sources mitigates the limitations and biases of a single imaging technique and improves diagnostic accuracy, and in clinical diagnosis multimodal analysis improves the accuracy of disease identification and localization. 3. Better pathological understanding: as research into disease mechanisms deepens, fusing multi-view features can reveal the many facets of a disease, such as the joint structural and functional changes of Alzheimer's disease, thereby facilitating early diagnosis and prediction and the early identification of neurodegenerative diseases and other cerebral lesions. 4. Promotion of imaging technology: the need for multimodal brain image analysis drives advances and innovations in imaging technology, poses challenges in data fusion and processing, and spurs the development of related algorithms and software tools.
The advantage of multi-view brain image analysis is that it provides more comprehensive and deeper brain information than any single imaging technique, which is significant for both brain-science research and clinical application; as the technology develops and matures, the field is expected to play a still larger role. However, current multi-view brain image analysis still faces challenges and drawbacks: 1. Existing fusion techniques are immature: owing to the limitations of fusion algorithms, existing multi-view feature fusion algorithms may not fully exploit the complementary information among modalities, nor adequately capture the feature relationships between them, and some fusion methods rely heavily on prior knowledge and assumptions, which may limit the exploration of new findings. 2. Complexity of feature extraction: brain image data are typically high-dimensional, which increases the complexity of feature extraction and data processing, and identifying biologically or clinically significant features from large amounts of data remains challenging. 3. Difficulties of data integration and standardization: data generated by different imaging techniques differ significantly in spatial resolution, temporal resolution, signal contrast, and so on, making integration complex and typically leaving neuroimaging analysis in a small-sample state; meanwhile, neuroimaging features are highly similar to one another, and distinguishing these similar features is a key problem to be solved during classification.
In order to overcome these drawbacks, there is a need to develop more efficient data processing methods, to improve fusion and feature extraction algorithms, to find better classification methods, and to facilitate the assisted diagnosis of mental disorders.
Disclosure of Invention
In view of the defects in the prior art, the invention provides a brain network construction and feature extraction method fusing multidimensional information in a small-sample scenario, which better extracts brain network features, makes the features more discriminative, and is better suited to the auxiliary diagnosis of mental diseases.
In order to achieve the above purpose, the invention provides a brain network construction and feature extraction method fusing multidimensional information in a small sample scene, which comprises the following steps:
S1, preprocessing brain images of different modalities to obtain processed brain images, labeling the processed brain image of each independent subject, and performing standardization;
S2, introducing an L1 regularization term for the standardized brain images to obtain a regression coefficient matrix, and binarizing it to obtain a binarized multi-feature fusion matrix, completing the initial construction of a brain network;
S3, performing feature extraction on the multi-feature fusion matrix using the graph convolutional neural network SAGCN, which has a self-attention pooling layer;
S4, constructing a quadruple Siamese network learning framework QSN, whose input is the feature matrix extracted from the multi-feature fusion matrix; by mining the feature similarity between input features, QSN finally achieves classification, completing the verification of the constructed brain network's effectiveness.
In S1, each independent subject is denoted as X = [x_1, x_2, x_3, ..., x_q]^T ∈ R^{l×q}, where q is the number of brain regions and x_q is the sequence formed by the l different features of the q-th brain region; the brain-region mean of each feature is standardized using the whole-brain mean and standard deviation of that feature.
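As a concrete illustration of this normalization, the Python sketch below z-scores each feature of one subject against that feature's whole-brain mean and standard deviation; the array layout (rows as the l features, columns as the q brain regions), the function name, and the epsilon guard are illustrative assumptions rather than details from the patent.

```python
import numpy as np

def normalize_features(X: np.ndarray) -> np.ndarray:
    """Z-score each row (feature) of an l x q matrix against that
    feature's whole-brain mean and standard deviation, as described
    for S1. Layout and names are assumptions, not the patent's code."""
    mean = X.mean(axis=1, keepdims=True)  # whole-brain mean per feature
    std = X.std(axis=1, keepdims=True)    # whole-brain std per feature
    return (X - mean) / (std + 1e-8)      # epsilon guards zero division

# Example: one subject with l = 5 features over q = 90 brain regions
X = np.random.rand(5, 90)
X_norm = normalize_features(X)
print(X_norm.mean(axis=1).round(6))  # each feature now has ~0 mean
```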
In the step S2, the process of obtaining the multi-feature fusion matrix is as follows:
S11, for x_i of the i-th brain region, i = 1, 2, ..., q, take x_i in turn as the dependent variable y and all other q−1 brain regions as independent variables, denoted A = [x_1, x_2, ..., x_{i−1}, 0, x_{i+1}, ..., x_q]; an L1 regular term is introduced, and y is expressed linearly through y = Aw, where w is the regression coefficient matrix;
S12, the L1 regular term is expressed as follows:

min_w Σ_{i=1}^{s} (y_i - (Aw)_i)^2 + λ||w||_1   (1);

where s is a natural number, λ is the regularization parameter with λ > 0, and y_i ∈ R, R being the set of real numbers;
S13, the L1-regularized problem is solved with the proximal gradient descent method, and the optimization target is expressed as:

min_x f(x) + λ||x||_1   (2);

where x is the final optimization target and f(x) is the loss function;
S14, f(x) is differentiable, so there exists a constant L such that:

||∇f(x') - ∇f(x)||_2^2 ≤ L||x' - x||_2^2   (3);

so that, near x_k, f(x) is approximated by the quadratic expansion:

f̂(x) = f(x_k) + ⟨∇f(x_k), x - x_k⟩ + (L/2)||x - x_k||_2^2   (4);

i.e., x_k can be obtained from x_{k−1}, where x̂ is the estimate of the parameter within the iteration; if the constant L exists, then x̂ exists as well; x_k is the Taylor expansion point;
S15, x_{k+1} is then solved for, expressed as:

x_{k+1} = argmin_x (L/2)||x - (x_k - (1/L)∇f(x_k))||_2^2 + λ||x||_1   (5);

through formula (5), combined with y = Aw, the regression coefficient matrix w can be obtained; each coefficient that is not 0 is replaced by 1, yielding the binarized matrix, i.e., the multi-feature fusion matrix.
In the binarized matrix, brain regions with a cooperative relationship are highlighted as 1, and brain regions without a cooperative relationship are suppressed to 0.
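The following Python sketch illustrates S11 to S15 under stated assumptions: each brain region is regressed on all others with an L1 penalty, the regression is solved with a plain proximal gradient (ISTA) loop whose proximal step is the soft-thresholding operator behind formula (5), and the non-zero pattern of the coefficients is binarized into the fusion matrix. The step size 1/L, iteration count, and λ value are illustrative choices, not values taken from the patent.

```python
import numpy as np

def soft_threshold(v: np.ndarray, thresh: float) -> np.ndarray:
    # Proximal operator of the L1 norm (the closed form behind formula (5))
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

def lasso_ista(A: np.ndarray, y: np.ndarray, lam: float = 0.1,
               n_iter: int = 500) -> np.ndarray:
    """Solve min_w ||y - Aw||^2 + lam * ||w||_1 by proximal gradient
    descent; L = 2 * sigma_max(A)^2 is a Lipschitz constant of the
    gradient, giving the 1/L step size."""
    L = 2.0 * np.linalg.norm(A, ord=2) ** 2 + 1e-12
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ w - y)
        w = soft_threshold(w - grad / L, lam / L)
    return w

def fusion_matrix(X: np.ndarray, lam: float = 0.1) -> np.ndarray:
    """Build the binarized multi-feature fusion matrix from X in
    R^{l x q} (columns are brain regions): region i is the dependent
    variable, its own column is zeroed out as in A = [..., 0, ...],
    and non-zero coefficients become edges set to 1."""
    q = X.shape[1]
    W = np.zeros((q, q))
    for i in range(q):
        A = X.copy()
        A[:, i] = 0.0          # exclude region i from the regressors
        y = X[:, i]            # region i as the dependent variable
        W[:, i] = lasso_ista(A, y, lam)
    return (W != 0).astype(int)  # cooperative pairs -> 1, others -> 0

B = fusion_matrix(np.random.rand(6, 20))  # 6 features, 20 regions
print(B.shape, B.sum())
```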
In the step S3, the update formula for the node feature matrix and adjacency matrix used by SAGCN is:

Z_att = σ(D̃^{-1/2} B̃ D̃^{-1/2} Z Θ_att)   (6);

where D is the degree matrix; B is the adjacency matrix; Z is the node feature matrix; σ is the activation function; Θ_att ∈ R^{F×1} is the only parameter in the self-attention pooling layer, F meaning that each subject has F×1 features; the tilde indicates that the corresponding matrix has been normalized;

the self-attention score in the pooling layer is obtained by the graph convolution of formula (6), in which B and Z participate jointly, so that the score reflects both the features and the topology of the graph.
In the step S3, during the pooling operation, nodes are retained or discarded according to their self-attention scores, as:

j = top-rank(Z_att, ⌈kN⌉)   (7);

where j denotes the indexing operation, ⌈kN⌉ is the number of retained nodes determined by the self-attention scores (k is the retention proportion and N the number of nodes), the nodes are node features on the graph structure, and top-rank is the ranking function.
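A minimal sketch of the pooling described by formulas (6) and (7): the attention scores come from one graph convolution over the normalized adjacency and the node features, and only the highest-scoring nodes are retained. The self-loops, the tanh activation, and the gating of the kept features by their scores follow common self-attention-pooling conventions and are assumptions here, not details confirmed by the patent.

```python
import math
import torch

def sagpool(Z: torch.Tensor, B: torch.Tensor, theta: torch.Tensor,
            k: float = 0.5):
    """Self-attention pooling sketch: scores via one graph convolution
    over (B, Z) as in formula (6), then keep the ceil(k*N) nodes with
    the highest scores as in formula (7).
    Z: (N, F) node features, B: (N, N) adjacency, theta: (F, 1)."""
    N = Z.size(0)
    B_tilde = B + torch.eye(N)                 # add self-loops
    d_inv_sqrt = torch.diag(B_tilde.sum(dim=1).pow(-0.5))
    score = torch.tanh(d_inv_sqrt @ B_tilde @ d_inv_sqrt @ Z @ theta)
    idx = torch.topk(score.squeeze(-1), max(1, math.ceil(k * N))).indices
    Z_pool = Z[idx] * score[idx]               # gate kept features by score
    B_pool = B[idx][:, idx]                    # induced subgraph
    return Z_pool, B_pool, idx

Z = torch.rand(90, 16)                  # 90 brain regions, 16 features
B = (torch.rand(90, 90) > 0.8).float()
B = ((B + B.T) > 0).float()             # symmetrize the toy adjacency
theta = torch.rand(16, 1)               # the pooling layer's only parameter
Zp, Bp, idx = sagpool(Z, B, theta, k=0.5)
print(Zp.shape, Bp.shape)               # 45 nodes kept out of 90
```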
In the step S4, the process of constructing QSN is as follows:
S41, QSN comprises four identical SAGCNs and a loss function to be minimized; the input is the feature matrix extracted from the multi-feature fusion matrix, and the t-th QSN input is defined as (x_t⁺, x_t^{a1}, x_t⁻, x_t^{a2}), where x_t⁺ and x_t⁻ are the positive and negative samples, respectively, and x_t^{a1} and x_t^{a2} are anchor sample 1 and anchor sample 2, respectively; the positive sample and anchor sample 1 belong to the same category (i.e., anchor sample 1 is itself a positive sample), anchor sample 1 being an anchor within the positive samples; the negative sample and anchor sample 2 belong to the same category (i.e., anchor sample 2 is itself a negative sample), anchor sample 2 being an anchor within the negative samples;
S42, the loss function of QSN is:

L_t = max(d_t⁺ - d_t⁻ + α, 0)   (8);

where d denotes distance, d⁺ is the distance between the anchor sample and the positive sample, and d⁻ is the distance between the anchor sample and the negative sample; α is a hyperparameter used to distinguish whether the t-th discrimination is valid;
S43, after (x_t⁺, x_t^{a1}, x_t⁻, x_t^{a2}) are input to QSN, each of x_t⁺, x_t^{a1}, x_t⁻, and x_t^{a2} is computed through one SAGCN; the output result is then obtained by minimizing the loss function, and through the output result the feature similarity between the input features can be mined, finally achieving the purpose of classification and completing the task; this process is repeated for each QSN input, finally completing the construction of the brain network.
In the step S42, the debugging process of the hyperparameter α is expressed as follows:
(9);
(10);
where φ(·) denotes the convolution process of SAGCN.
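The sketch below shows one possible reading of the QSN loss in formula (8): d⁺ pairs the positive sample with anchor sample 1 and d⁻ pairs the negative sample with anchor sample 2, with all four inputs passing through a single shared encoder (a linear layer standing in for SAGCN). The Euclidean metric, the batch-mean reduction, and this particular anchor pairing are assumptions, since the patent text leaves them open.

```python
import torch
import torch.nn as nn

class QSNLoss(nn.Module):
    """Margin loss over (positive, anchor 1, negative, anchor 2)
    quadruples, a sketch of formula (8):
    L_t = max(d_t+ - d_t- + alpha, 0), averaged over the batch."""
    def __init__(self, alpha: float = 1.0):
        super().__init__()
        self.alpha = alpha

    def forward(self, f_pos, f_anc1, f_neg, f_anc2):
        d_pos = torch.norm(f_pos - f_anc1, dim=1)  # d+: anchor 1 vs positive
        d_neg = torch.norm(f_neg - f_anc2, dim=1)  # d-: anchor 2 vs negative
        return torch.clamp(d_pos - d_neg + self.alpha, min=0).mean()

# The four inputs share one encoder; nn.Linear stands in for SAGCN here.
encoder = nn.Linear(32, 8)
pos, anc1, neg, anc2 = torch.rand(4, 16, 32).unbind(0)  # batch of 16
loss = QSNLoss(alpha=1.0)(encoder(pos), encoder(anc1),
                          encoder(neg), encoder(anc2))
loss.backward()
print(float(loss))
```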
The algorithm according to the invention may be executed by an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; the algorithm is implemented when the processor executes the program.
The invention has the beneficial effects that:
The multi-feature fusion matrix established by the invention comes closer to the working essence of the brain: the regression coefficient matrix is used and binarized so that brain regions with a cooperative relationship are highlighted as 1 and brain regions without a cooperative relationship are suppressed to 0, better restoring the cooperative relationships among the brain regions participating in a task and revealing hidden features.
The invention adopts the graph convolutional neural network SAGCN with a self-attention pooling layer and uses the quadruple Siamese network learning framework QSN to distinguish similar features, so that node features and the graph topology are fully considered; at the same time, the quadruple Siamese network with its dual anchors separates near-identical features, adapts better to the small-sample nature of brain images and their high feature similarity, and finally achieves better classification and recognition, so the invention can be better applied to the auxiliary diagnosis of mental diseases.
Drawings
FIG. 1 is a schematic flowchart of the present invention;
Fig. 2 is a comparison chart of classification accuracy for patients with mild cognitive impairment according to an embodiment of the invention.
Detailed Description
Embodiments of the invention are further described below with reference to the accompanying drawings:
As shown in fig. 1, the method for constructing and extracting the characteristics of the brain network fusing multidimensional information in a small sample scene comprises the following steps:
S1, preprocessing brain images of different modalities to obtain processed brain images, labeling the processed brain images of each independent subject (each person whose images are acquired is one subject, i.e., one research object), and performing standardization;
Wherein different preprocessing methods are adopted for different brain images: MRI (magnetic resonance imaging) uses FreeSurfer (magnetic resonance data processing software), fMRI (functional magnetic resonance imaging) uses REST (a resting-state neuroimaging toolkit), and PET (positron emission tomography) uses MATLAB toolkits such as SPM (neuroimaging software); the related preprocessing and normalization processes are well known;
S2, introducing an L1 regularization term for the standardized brain images to obtain a regression coefficient matrix, and binarizing it to obtain a binarized multi-feature fusion matrix, completing the initial construction of a brain network;
S3, performing feature extraction on the multi-feature fusion matrix using the graph convolutional neural network SAGCN, which has a self-attention pooling layer;
S4, constructing a quadruple Siamese network learning framework QSN, whose input is the feature matrix extracted from the multi-feature fusion matrix; by mining the feature similarity between input features, QSN finally achieves classification, completing the verification of the constructed brain network's effectiveness.
In S1, each individual subject is denoted as X = [x_1, x_2, x_3, ..., x_q]^T ∈ R^{l×q}, where q is the number of brain regions and x_q is the sequence formed by the l different features of the q-th brain region (x_1 and so on, by analogy); the brain-region mean of each feature is standardized using the whole-brain mean and standard deviation of that feature. R^{l×q} is the l×q real space, and the superscript T denotes the transpose, as is known in the art.
S2, the process of obtaining the multi-feature fusion matrix is as follows:
S11, for x_i of the i-th brain region, i = 1, 2, ..., q, take x_i in turn as the dependent variable y and all other q−1 brain regions as independent variables, denoted A = [x_1, x_2, ..., x_{i−1}, 0, x_{i+1}, ..., x_q]; an L1 regular term is introduced, and y is expressed linearly through y = Aw, where w is the regression coefficient matrix;
S12, the L1 regular term is expressed as follows:

min_w Σ_{i=1}^{s} (y_i - (Aw)_i)^2 + λ||w||_1   (1);

where s is a natural number, λ is the regularization parameter with λ > 0, and y_i ∈ R, R being the set of real numbers;
S13, the L1-regularized problem is solved with the proximal gradient descent method, and the optimization target is expressed as:

min_x f(x) + λ||x||_1   (2);

where x is the final optimization target and f(x) is the loss function;
S14, f(x) is differentiable, so there exists a constant L such that:

||∇f(x') - ∇f(x)||_2^2 ≤ L||x' - x||_2^2   (3);

so that, near x_k, f(x) is approximated by the quadratic expansion:

f̂(x) = f(x_k) + ⟨∇f(x_k), x - x_k⟩ + (L/2)||x - x_k||_2^2   (4);

i.e., x_k can be obtained from x_{k−1}, where x̂ is the estimate of the parameter within the iteration; if the constant L exists, then x̂ exists as well; x_k is the Taylor expansion point;
S15, x_{k+1} is then solved for, expressed as:

x_{k+1} = argmin_x (L/2)||x - (x_k - (1/L)∇f(x_k))||_2^2 + λ||x||_1   (5);

through formula (5), combined with y = Aw, the regression coefficient matrix w can be obtained; each coefficient that is not 0 is replaced by 1, yielding the binarized matrix, i.e., the multi-feature fusion matrix.
In S3, the update formula for the node feature matrix and adjacency matrix used by SAGCN is:

Z_att = σ(D̃^{-1/2} B̃ D̃^{-1/2} Z Θ_att)   (6);

where D is the degree matrix; B is the adjacency matrix; Z is the node feature matrix; σ is the activation function; Θ_att ∈ R^{F×1} is the only parameter in the self-attention pooling layer, F meaning that each subject has F×1 features; the tilde indicates that the corresponding matrix has been normalized;

the self-attention score in the pooling layer is obtained by the graph convolution of formula (6), in which B and Z participate jointly, so that the score reflects both the features and the topology of the graph.
In S3, during the pooling operation, nodes are retained or discarded according to their self-attention scores, as:

j = top-rank(Z_att, ⌈kN⌉)   (7);

where j denotes the indexing operation, ⌈kN⌉ is the number of retained nodes determined by the self-attention scores (k is the retention proportion and N the number of nodes), the nodes are node features on the graph structure, and top-rank is the ranking function.
In S4, the process of constructing QSN is:
S41, QSN comprises four identical SAGCNs and a loss function to be minimized; the input is the feature matrix extracted from the multi-feature fusion matrix, and the t-th QSN input is defined as (x_t⁺, x_t^{a1}, x_t⁻, x_t^{a2}), where x_t⁺ and x_t⁻ are the positive and negative samples, respectively, and x_t^{a1} and x_t^{a2} are anchor sample 1 and anchor sample 2, respectively; the positive sample and anchor sample 1 belong to the same category, anchor sample 1 being an anchor within the positive samples; the negative sample and anchor sample 2 belong to the same category, anchor sample 2 being an anchor within the negative samples;
S42, the loss function of QSN is:

L_t = max(d_t⁺ - d_t⁻ + α, 0)   (8);

where d denotes distance, d⁺ is the distance between the anchor sample and the positive sample, and d⁻ is the distance between the anchor sample and the negative sample; α is a hyperparameter used to distinguish whether the t-th discrimination is valid;
the debugging process of the hyperparameter α is expressed as:
(9);
(10);
where φ(·) denotes the convolution process of SAGCN, and k is only an intermediate variable of the calculation process.
S43, after (x_t⁺, x_t^{a1}, x_t⁻, x_t^{a2}) are input to QSN, each of x_t⁺, x_t^{a1}, x_t⁻, and x_t^{a2} is computed through one SAGCN; the output result is then obtained by minimizing the loss function, and through the output result the feature similarity between the input features can be mined, finally achieving the purpose of classification and completing the task; this process is repeated for each QSN input, finally completing the construction of the brain network.
The verification process is as follows: tests were performed on patients with mild cognitive impairment and on patients with major depressive disorder, respectively.
1) Recognition of mild cognitive impairment patients: using two kinds of brain images of mild cognitive impairment patients, Tau protein and Aβ protein, the multi-feature fusion matrix was constructed and neural analysis was performed first; as shown in Table 1, the brain-region differences based on the multi-feature fusion matrix are very close to the clinical picture.
TABLE 1 Brain regions showing differences in the multi-feature fusion matrices of mild cognitive impairment patients compared with normal controls
Wherein SFG is the superior frontal gyrus; PoG is the postcentral gyrus; INS is the insula; CG is the cingulate gyrus; LOcC is the lateral occipital cortex; Hippocampus is the hippocampus; FuG is the fusiform gyrus; ITG is the inferior temporal gyrus; MFG is the middle frontal gyrus; BG is the basal ganglia; Tha is the thalamus; PhG is the parahippocampal gyrus; MTG is the middle temporal gyrus; IPL is the inferior parietal lobule; L is left; R is right. P-values indicate significant differences between the two compared brain regions; P < 0.05 indicates a significant difference between normal controls and patients (mild cognitive impairment patients).
Further, SAGCN was used to extract features, which were put into QSN for classification, achieving an accuracy of 95%; meanwhile, several articles on similar research were compared, and the accuracy of the invention is the best, as shown in Fig. 2. In Fig. 2, Francisco, Duan, Ruben, Kook Cho, and Mraco are the authors of similar articles in the field; the figure shows the accuracy obtained after classification by the method of each article.
2) Using fMRI images of patients with major depressive disorder, their ReHo and ALFF features were extracted and, together with the white-matter and gray-matter features of the MRI images, multi-feature fusion and classification were carried out according to the method; the recognition rate for depression reached 83%. The classification results with the original features and with a triple Siamese network are compared in Table 2.
Table 2 Comparison of classification results with the original features, a triple Siamese network, and QSN
Wherein AUC is the area under the ROC curve, used to evaluate the performance of a classification model.
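For reference, a small sketch of how AUC is typically computed from true labels and classifier scores (the numbers below are purely illustrative, not the study's data):

```python
from sklearn.metrics import roc_auc_score

# Hypothetical labels (1 = patient, 0 = control) and classifier scores;
# an AUC near 1.0 means well-separated classes, 0.5 means chance level.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_score = [0.91, 0.20, 0.75, 0.66, 0.41, 0.18, 0.88, 0.35]
print(roc_auc_score(y_true, y_score))
```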

Claims (3)

1. A brain network construction and feature extraction method fusing multidimensional information in a small-sample scenario, characterized by comprising the following steps:
S1, preprocessing brain images of different modalities to obtain processed brain images, labeling the processed brain image of each independent subject, and performing standardization;
S2, introducing an L1 regularization term for the standardized brain images to obtain a regression coefficient matrix, and binarizing it to obtain a binarized multi-feature fusion matrix, completing the initial construction of a brain network;
S3, performing feature extraction on the multi-feature fusion matrix using the graph convolutional neural network SAGCN, which has a self-attention pooling layer;
s4, constructing a Siamese network learning framework QSN, wherein the QSN input is a characteristic matrix extracted by a multi-characteristic fusion matrix, and the characteristic similarity between input characteristics is mined through QSN to finally achieve the aim of classification and finish the verification of the constructed brain network effect;
In the step S1, each independent subject is denoted as X = [x_1, x_2, x_3, ..., x_q]^T ∈ R^{l×q}, where q is the number of brain regions and x_q is the sequence formed by the l different features of the q-th brain region; the brain-region mean of each feature is standardized using the whole-brain mean and standard deviation of that feature;
In the step S2, the process of obtaining the multi-feature fusion matrix is as follows:
S11, for x_i of the i-th brain region, i = 1, 2, ..., q, take x_i in turn as the dependent variable y and all other q−1 brain regions as independent variables, denoted A = [x_1, x_2, ..., x_{i−1}, 0, x_{i+1}, ..., x_q]; an L1 regular term is introduced, and y is expressed linearly through y = Aw, where w is the regression coefficient matrix;
S12, the L1 regular term is expressed as follows:

min_w Σ_{i=1}^{s} (y_i - (Aw)_i)^2 + λ||w||_1   (1);

where s is a natural number, λ is the regularization parameter with λ > 0, and y_i ∈ R, R being the set of real numbers;
S13, the L1-regularized problem is solved with the proximal gradient descent method, and the optimization target is expressed as:

min_x f(x) + λ||x||_1   (2);

where x is the final optimization target and f(x) is the loss function;
S14, f(x) is differentiable, so there exists a constant L such that:

||∇f(x') - ∇f(x)||_2^2 ≤ L||x' - x||_2^2   (3);

so that, near x_k, f(x) is approximated by the quadratic expansion:

f̂(x) = f(x_k) + ⟨∇f(x_k), x - x_k⟩ + (L/2)||x - x_k||_2^2   (4);

i.e., x_k can be obtained from x_{k−1}, where x̂ is the estimate of the parameter within the iteration; if the constant L exists, then x̂ exists as well; x_k is the Taylor expansion point;
S15, x_{k+1} is then solved for, expressed as:

x_{k+1} = argmin_x (L/2)||x - (x_k - (1/L)∇f(x_k))||_2^2 + λ||x||_1   (5);

through formula (5), combined with y = Aw, the regression coefficient matrix w is obtained; each coefficient that is not 0 is replaced by 1 to obtain the binarized matrix, namely the multi-feature fusion matrix;
In the step S3, the update formula for the node feature matrix and adjacency matrix used by SAGCN is:

Z_att = σ(D̃^{-1/2} B̃ D̃^{-1/2} Z Θ_att)   (6);

where D is the degree matrix; B is the adjacency matrix; Z is the node feature matrix; σ is the activation function; Θ_att ∈ R^{F×1} is the only parameter in the self-attention pooling layer, F meaning that each subject has F×1 features; the tilde indicates that the corresponding matrix has been normalized;

the self-attention score in the pooling layer is obtained by the graph convolution of formula (6), in which B and Z participate jointly, so that the score reflects both the features and the topology of the graph;
In the step S3, during the pooling operation, nodes are retained or discarded according to their self-attention scores, as:

j = top-rank(Z_att, ⌈kN⌉)   (7);

where j denotes the indexing operation, ⌈kN⌉ is the number of retained nodes determined by the self-attention scores (k is the retention proportion and N the number of nodes), the nodes are node features on the graph structure, and top-rank is the ranking function.
2. The brain network construction and feature extraction method fusing multidimensional information in a small-sample scenario as recited in claim 1, wherein in the step S4 the process of constructing QSN is as follows:
S41, QSN comprises four identical SAGCNs and a loss function to be minimized; the input is the feature matrix extracted from the multi-feature fusion matrix, and the t-th QSN input is defined as (x_t⁺, x_t^{a1}, x_t⁻, x_t^{a2}), where x_t⁺ and x_t⁻ are the positive and negative samples, respectively, and x_t^{a1} and x_t^{a2} are anchor sample 1 and anchor sample 2, respectively; the positive sample and anchor sample 1 belong to the same category, anchor sample 1 being an anchor within the positive samples; the negative sample and anchor sample 2 belong to the same category, anchor sample 2 being an anchor within the negative samples;
S42, the loss function of QSN is:

L_t = max(d_t⁺ - d_t⁻ + α, 0)   (8);

where d denotes distance, d⁺ is the distance between the anchor sample and the positive sample, and d⁻ is the distance between the anchor sample and the negative sample; α is a hyperparameter used to distinguish whether the t-th discrimination is valid;
S43, after (x_t⁺, x_t^{a1}, x_t⁻, x_t^{a2}) are input to QSN, each of x_t⁺, x_t^{a1}, x_t⁻, and x_t^{a2} is computed through one SAGCN; the output result is then obtained by minimizing the loss function, and by mining the feature similarity between the input features through the output result, classification is finally achieved and the task completed.
3. The brain network construction and feature extraction method fusing multidimensional information in a small-sample scenario as recited in claim 2, wherein in the step S42 the debugging process of the hyperparameter α is expressed as follows:
(9);
(10);
where φ(·) denotes the convolution process of SAGCN.
CN202410437716.1A 2024-04-12 2024-04-12 Brain network construction and feature extraction method for fusing multidimensional information in small sample scene Active CN118038231B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410437716.1A CN118038231B (en) 2024-04-12 2024-04-12 Brain network construction and feature extraction method for fusing multidimensional information in small sample scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410437716.1A CN118038231B (en) 2024-04-12 2024-04-12 Brain network construction and feature extraction method for fusing multidimensional information in small sample scene

Publications (2)

Publication Number Publication Date
CN118038231A CN118038231A (en) 2024-05-14
CN118038231B true CN118038231B (en) 2024-06-18

Family

ID=90993602

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410437716.1A Active CN118038231B (en) 2024-04-12 2024-04-12 Brain network construction and feature extraction method for fusing multidimensional information in small sample scene

Country Status (1)

Country Link
CN (1) CN118038231B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118393368B (en) * 2024-06-28 2024-09-10 广汽埃安新能源汽车股份有限公司 Method and device for evaluating battery endurance, storage medium and equipment
CN118427579B (en) * 2024-07-05 2024-09-06 山东工商学院 A contextual semantic collaborative modeling approach for multimodal neural signals
CN119963563B (en) * 2025-04-11 2025-06-06 南京信息工程大学 A multimodal fusion method for breast cancer prognosis prediction based on deep learning

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105726026A (en) * 2016-01-28 2016-07-06 电子科技大学 Mild cognitive impairment disease classifying method based on brain network and brain structure information
EP3622423A1 (en) * 2017-05-12 2020-03-18 The Regents of The University of Michigan Individual and cohort pharmacological phenotype prediction platform
US11501429B2 (en) * 2017-07-19 2022-11-15 Altius Institute For Biomedical Sciences Methods of analyzing microscopy images using machine learning
CN112735570B (en) * 2021-01-09 2022-02-22 深圳先进技术研究院 Image-driven brain atlas construction method, device, equipment and storage medium
CN115393269A (en) * 2022-07-13 2022-11-25 中国科学院大学 A scalable multi-level graph neural network model based on multi-modal image data
CN115185736B (en) * 2022-09-09 2023-01-31 南京航空航天大学 Micro-service call chain abnormity detection method and device based on graph convolution neural network
CN116433646A (en) * 2023-04-24 2023-07-14 安徽师范大学 A brain network feature extraction method based on convolutional neural network and Transformer
CN117496321A (en) * 2023-11-15 2024-02-02 吉林大学 A target detection method based on the fusion of two modal images
CN117765530A (en) * 2024-01-11 2024-03-26 南京航空航天大学 Multi-mode brain network classification method, system, electronic equipment and medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Ibrahim Salim, A. Ben Hamza. Classification of Developmental and Brain Disorders via Graph Convolutional Aggregation. Springer Link, 2023-12-07. Full text. *
Zhang, Y., et al. Inter-subject Similarity Guided Brain Network Modeling for MCI Diagnosis. Machine Learning in Medical Imaging, 2018. Full text. *

Also Published As

Publication number Publication date
CN118038231A (en) 2024-05-14

Similar Documents

Publication Publication Date Title
CN118038231B (en) Brain network construction and feature extraction method for fusing multidimensional information in small sample scene
Basheera et al. A novel CNN based Alzheimer’s disease classification using hybrid enhanced ICA segmented gray matter of MRI
Basheera et al. Convolution neural network–based Alzheimer's disease classification using hybrid enhanced independent component analysis based segmented gray matter of T2 weighted magnetic resonance imaging with clinical valuation
Esmaeilzadeh et al. End-to-end Parkinson disease diagnosis using brain MR-images by 3D-CNN
Liu et al. Towards clinical diagnosis: Automated stroke lesion segmentation on multi-spectral MR image using convolutional neural network
CN107944490B (en) An Image Classification Method Based on Semi-Multimodal Fusion Feature Reduction Framework
EP2483863B1 (en) Method and apparatus for processing medical images
CN110522448A (en) A brain network classification method based on graph convolutional neural network
CN115393269A (en) A scalable multi-level graph neural network model based on multi-modal image data
CN111063442B (en) Brain disease process prediction method and system based on weak supervision multitask matrix completion
CN105726026A (en) Mild cognitive impairment disease classifying method based on brain network and brain structure information
CN113705670B (en) Brain image classification method and device based on magnetic resonance imaging and deep learning
Yang et al. Diagnosis of Parkinson’s disease based on 3D ResNet: The frontal lobe is crucial
WO2006052516A1 (en) System and method for a contiguous support vector machine
CN113052800B (en) Alzheimer disease image analysis method and device
Zhang et al. Transformer-based multimodal fusion for early diagnosis of Alzheimer's disease using structural MRI and PET
CN111938592A Missing Multimodal Representation Learning Algorithm for Alzheimer's Diagnosis
CN115496953B (en) Brain network classification method based on spatiotemporal graph convolution
CN112863664A (en) Alzheimer disease classification method based on multi-modal hypergraph convolutional neural network
Pallawi et al. Study of Alzheimer’s disease brain impairment and methods for its early diagnosis: a comprehensive survey
Klein et al. Early diagnosis of dementia based on intersubject whole-brain dissimilarities
CN119339943A An early warning system and method for Parkinson's disease based on graph neural network
Ma et al. Multi-view brain networks construction for alzheimer’s disease diagnosis
Arabi et al. High accuracy diagnosis for MRI imaging of Alzheimer’s disease using XGBoost
CN117036793B (en) A brain age assessment method and device based on multi-scale features of PET images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant