
CN118351406A - A globally optimized supervised multimodal image fusion method - Google Patents

A globally optimized supervised multimodal image fusion method

Info

Publication number
CN118351406A
CN118351406A (application number CN202410397870.0A)
Authority
CN
China
Prior art keywords
modality
matrix
correlation
mode
mixing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410397870.0A
Other languages
Chinese (zh)
Inventor
戚世乐
胡静娴
梁创
张道强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN202410397870.0A
Publication of CN118351406A
Legal status: Pending

Links

Classifications

    • A61B 5/0042: Features or image-related aspects of imaging apparatus, e.g. for MRI, optical tomography or impedance tomography apparatus, adapted for image acquisition of the brain
    • A61B 5/055: Detecting, measuring or recording for diagnosis involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • G06N 3/09: Supervised learning
    • G06V 10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/811: Fusion of classification results, the classifiers operating on different input data, e.g. multi-modal recognition
    • G06V 10/82: Image or video recognition or understanding using neural networks
    • A61B 2576/026: Medical imaging apparatus involving image processing or analysis specially adapted for the brain

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Public Health (AREA)
  • Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Data Mining & Analysis (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Neurology (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The present invention discloses a globally optimized supervised multimodal image fusion method. Brain image data of each modality are preprocessed and features are extracted; the extracted features of each modality are then preprocessed to obtain each modality's feature matrix; the independence of each modality's independent components is computed; and the optimal mixing matrices are solved for by jointly maximizing the independence of each modality's independent components, the correlation between the mixing matrices of the modalities, and the correlation between each modality's mixing matrix and the reference information. A globally optimal solution is obtained by maximizing the independence of the independent components while simultaneously maximizing the sum of squared correlations between the independent components of the modalities and between the independent components and the reference information. By introducing working memory scores as reference information, the method specifically identifies co-varying components that are correlated with clinical indicators and spatially relatively independent, thereby deeply exploring the interaction between cognitive ability and multimodal neuroimaging in mental illness.

Description

A globally optimized supervised multimodal image fusion method

Technical Field

The present invention relates to medical image analysis, and in particular to a globally optimized supervised multimodal image fusion method.

Background Art

In recent years, the rapid development of neuroimaging technology has enabled researchers to use a variety of imaging methods to collect neuroimages of the same subject under different modalities, providing information about the brain's anatomy and function in a multi-dimensional manner. For example, a common structural imaging technique is structural magnetic resonance imaging (sMRI), which can be used to evaluate changes in the local concentration or volume of gray matter and white matter at each voxel, reflecting the corresponding anatomical changes. In the field of functional imaging, functional magnetic resonance imaging (fMRI) is based on the blood-oxygen-level-dependent signal and reflects the activity of brain neurons in task or resting states. Each of these imaging modalities has its own advantages and limitations; no single imaging mode can capture complete information about both brain function and structure. Although multimodal imaging data of the same scanned subject coexist, in practice they are often analyzed separately, or the analysis results of the two magnetic resonance imaging modalities are merely compared or correlated, so the complementary information between modalities is not fully exploited. Multimodal data fusion analysis methods can integrate the complementary information across modalities, provide a deeper understanding of the pathological mechanisms associated with neuropsychiatric diseases, and thereby improve diagnostic and predictive accuracy.

Supervised fusion methods are more goal-oriented because they use reference information to guide the fusion analysis, making it possible to pinpoint specific components of interest within large, complex data sets and effectively improving the targeting of feature extraction. pICA-R (pICA with references) is a method that uses genes as reference information to guide multimodal fusion. However, the reference information in that method is limited to spatial component distributions, such as specific brain regions or specific loci, and cannot cover subject-level indicators such as the various behavioral scores of the subject sample. In addition, its fusion performance depends heavily on the accuracy of the genetic information. MCCAR+jICA (multi-set CCA with reference + joint ICA), while maintaining the basic performance of joint source separation, can further detect co-varying multimodal features that are significantly correlated with the reference information. However, this method optimizes its objective function in stages, so correlations extracted in the first stage may be lost in the second-stage analysis.

Summary of the Invention

Purpose of the invention: In view of the above shortcomings, the present invention provides a globally optimized supervised multimodal image fusion method with improved accuracy.

Technical solution: To solve the above problems, the present invention adopts a globally optimized supervised multimodal image fusion method comprising the following steps:

(1) Preprocess the brain image data of each modality and perform feature extraction to obtain the features of each modality;

(2) Preprocess the obtained features of each modality to obtain the preprocessed feature matrix of each modality;

(3) Compute the independence of the independent components of each modality from the feature matrix, the mixing matrix, and the weight matrix of each modality;

(4) Solve for the mixing matrices that maximize the independence of the independent components of each modality, the correlation between the mixing matrices of the modalities, and the correlation between each modality's mixing matrix and the reference information;

(5) From the mixing matrices obtained in step (4), compute the multimodal co-varying components and mixing matrices associated with the reference information.

Furthermore, in step (3) the independence of each modality's independent components is described by the information entropy function H(·); maximizing this independence is formulated as:

max H(Y_k) = −E[ln p_{Y_k}(Y_k)], Y_k = g(U_k), U_k = W_k · X_k + W_k0;

where E(·) is the expectation; p_{Y_k}(·) is the probability density function of Y_k; g(·) is the sigmoidal nonlinearity; Y_k and U_k are intermediate variables of the computation; W_k is the unmixing matrix corresponding to the mixing matrix A_k; A_k is the mixing matrix of modality k; W_k0 is the bias weight matrix of modality k; and X_k is the preprocessed feature matrix of modality k.

Furthermore, the problem in step (4) of maximizing the independence of each modality's independent components, the correlation between the mixing matrices of the modalities, and the correlation between each modality's mixing matrix and the reference information is formulated as:

max_{W_1, W_2} { H(Y_1) + H(Y_2) + α_1 · Σ_{i=1}^{m_1} Σ_{j=1}^{m_2} Corr²(A_{1i}, A_{2j}) + α_2 · Σ_h Σ_{i=1}^{m_1} Corr²(A_{1i}, ref_h) + α_3 · Σ_h Σ_{j=1}^{m_2} Corr²(A_{2j}, ref_h) };

where m_k (k = 1, 2) is the number of independent components of modality k; A_{1i} is the i-th column of the mixing matrix of modality 1; A_{2j} is the j-th column of the mixing matrix of modality 2; ref_h is the h-th column of the reference information matrix; Corr(A_{1i}, A_{2j}) denotes the correlation between A_{1i} and A_{2j}; Corr(A_{1i}, ref_h) denotes the correlation between A_{1i} and ref_h; Corr(A_{2j}, ref_h) denotes the correlation between A_{2j} and ref_h; and α_1, α_2, α_3 are regularization parameters.

Furthermore, the gradient descent algorithm iterates over the above formulation until convergence, yielding the optimal mixing matrices.

The iteration rule for each modality's unmixing matrix satisfies:

ΔW_k = λ_k · ( I + (1 − 2Y_k) · U_k^T ) · W_k;

where λ_k is the learning rate of the independent component analysis for modality k; I is the identity matrix; and T denotes the matrix transpose.

Furthermore, the step in step (4) of maximizing the correlation between the mixing matrices of the modalities and the correlation between each modality's mixing matrix and the reference information specifically comprises:

(41) Compute the correlations between the mixing matrices of the modalities and the reference information, and select the triplet A_{1i}, A_{2j}, ref_h with the largest correlation as the constrained terms;

(42) Take the partial derivatives of the sum of squared correlations with respect to A_{1i} and A_{2j}, set them equal to 0, and obtain the partial differential equations in A_{1i} and A_{2j};

(43) Set an initial point and iteratively update A_{1i} and A_{2j}, ensuring that the sum of squared correlations increases, until the sum of squared correlations converges; take the resulting A_{1i} and A_{2j} as the optimal solution of this iteration;

(44) Update A_1 and A_2 according to the optimal solution of step (43), and repeat steps (41), (42), and (43) to obtain the optimal solutions A_{1i} and A_{2j}.

Furthermore, in the iterative update rule for A_{1i} and A_{2j} in step (43), Std(·) denotes the standard deviation; Cov(·) the covariance; Var(·) the variance; Ā_{1i}, Ā_{2j}, and the mean of ref_h denote the averages of A_{1i}, A_{2j}, and ref_h, respectively; η_11, η_12, η_21, and η_22 are descent step sizes obtained by the gradient descent algorithm; and λ_c1 and λ_c2 are learning rates.

Furthermore, in step (1) the modality images include functional magnetic resonance imaging (fMRI) and structural magnetic resonance imaging (sMRI). The modality features extracted from fMRI include the fractional amplitude of low-frequency fluctuations; the modality features extracted from sMRI include gray matter volume. In step (2), preprocessing the obtained features of each modality includes: conversion to matrix form, centering, whitening, and dimensionality reduction via principal component analysis.

Beneficial effects: Compared with the prior art, the significant advantage of the present invention is that it maximizes the independence of the independent components while simultaneously maximizing the sum of squared correlations between the independent components of the modalities and between the independent components and the reference information, thereby obtaining a globally optimal solution. By introducing working memory scores as reference information, it specifically identifies co-varying components that are correlated with clinical indicators and spatially relatively independent. This allows in-depth exploration of the interaction between cognitive ability and multimodal neuroimaging in mental illness, providing strong support for understanding the pathological mechanisms of mental illness and formulating personalized treatment plans.

Brief Description of the Drawings

FIG. 1 is a schematic flow diagram of the multimodal brain image fusion method of the present invention;

FIG. 2(a) compares the absolute correlations between each modality's target component and the reference information as estimated by MCCA, jICA, MCCA+jICA, separate ICA, PICA, MCCAR, MCCAR+jICA, and the fusion method of the present invention against the true absolute correlations between each modality's target component and the reference information;

FIG. 2(b) compares the accuracy of the mixing matrices and independent components of MCCA, jICA, MCCA+jICA, separate ICA, PICA, MCCAR, MCCAR+jICA, and the fusion method of the present invention, averaged over the true absolute correlations between each modality's target component and the reference information;

FIG. 3(a) shows the accuracy of the independent components of MCCA, jICA, MCCA+jICA, separate ICA, PICA, MCCAR, MCCAR+jICA, and the supervised PICA method of the present invention, averaged over 14 noise levels;

FIG. 3(b) shows the accuracy of the mixing matrices of the MCCA, jICA, MCCA+jICA, separate ICA, PICA, MCCAR, MCCAR+jICA, and supervised PICA methods, averaged over 14 noise levels;

FIG. 4(a) is a spatial view of the multimodal joint co-varying components associated with working memory scores detected by the fusion method of the present invention;

FIG. 4(b) shows the between-group differences of each modality between the schizophrenia patient group and the healthy control group;

FIG. 4(c) is a fitted plot of the correlation between each modality and working memory score;

FIG. 4(d) is a fitted plot of the correlations between the modalities.

Detailed Description

As shown in FIG. 1, a globally optimized supervised multimodal image fusion method in this embodiment includes the following steps:

S1: Preprocess the brain image data of each modality and extract its features.

Each modality can come from brain imaging techniques such as resting-state functional MRI (fMRI), structural MRI (sMRI), and diffusion tensor imaging (dMRI). In this embodiment, two modalities, fMRI and sMRI, are used; in practical applications, any combination of modalities can be selected as needed.

Although this embodiment only takes multimodal data fusion for schizophrenia as an example, the protection scope of the present invention is not limited thereto. The data of each modality can cover multimodal data for all mental disorders, including but not limited to schizophrenia and depression, as well as multimodal data for other conditions such as bipolar disorder.

In this embodiment, the fractional amplitude of low-frequency fluctuations (fALFF) is extracted from the resting-state fMRI as its feature. The preprocessing and feature extraction for fMRI are as follows, using standard preprocessing based on statistical parametric mapping in the MATLAB 2019 environment: (1) head motion correction, adjusting the image at each time point to maintain accurate correspondence of brain structures during analysis; (2) slice-timing correction, adjusting for acquisition-time differences between slices of the fMRI images; (3) normalization to the Montreal Neurological Institute (MNI) standard space and resampling, preferably to 3×3×3 mm; (4) regressing out the 6 head motion parameters and the white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) signals; (5) spatial smoothing with an 8 mm full-width-at-half-maximum (FWHM) Gaussian kernel; (6) dividing the sum of the amplitude values in the 0.01 to 0.08 Hz low-frequency power range by the sum of the amplitudes over the entire detectable power spectrum to compute the fractional amplitude of low-frequency fluctuations (fALFF).
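As a minimal illustration of step (6), the fALFF of a single voxel time series is the ratio of the amplitude spectrum summed over the 0.01 to 0.08 Hz band to the sum over the whole detectable spectrum. The sketch below is the editor's NumPy assumption, not part of the patent's MATLAB/SPM pipeline; the function name and interface are hypothetical.

```python
import numpy as np

def falff(ts, tr, low=0.01, high=0.08):
    """Fractional amplitude of low-frequency fluctuations for one voxel
    time series `ts` sampled every `tr` seconds (illustrative only)."""
    ts = np.asarray(ts, dtype=float)
    ts = ts - ts.mean()                      # remove the DC offset
    amp = np.abs(np.fft.rfft(ts))            # single-sided amplitude spectrum
    freqs = np.fft.rfftfreq(ts.size, d=tr)   # frequencies in Hz
    band = (freqs >= low) & (freqs <= high)  # the 0.01-0.08 Hz band
    return amp[band].sum() / amp[1:].sum()   # exclude the 0 Hz bin
```

A pure 0.05 Hz sinusoid yields an fALFF near 1, while a signal concentrated above 0.08 Hz yields a value near 0.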

In this embodiment, gray matter volume (GMV) is extracted from the structural MRI as its feature. The preprocessing and feature extraction for sMRI are as follows: (1) using the unified segmentation method in SPM12, normalize the sMRI data to the MNI standard space and resample, preferably to 3×3×3 mm; (2) segment into gray matter, white matter, and cerebrospinal fluid, outputting the result as gray matter volume; (3) smooth the gray matter with an 8 mm full-width-at-half-maximum Gaussian kernel; (4) detect outlier subjects to ensure correct segmentation.

After the above preprocessing of each subject's fMRI and sMRI data, the features of each modality are converted into 3D matrices. For fMRI, the usable features include the fractional amplitude of low-frequency fluctuations; for sMRI, the usable modality features include gray matter volume.

S2: Preprocess the features of each modality, including conversion to matrix form, centering, whitening, and dimensionality reduction via principal component analysis.

The matrix conversion can proceed as follows: stretch the 3D feature matrix of each modality of each subject obtained in step S1 into a row vector (1×L), where L is the number of voxels; converting the 3D matrices of all subjects yields an N×L matrix X_k (k = 1, 2), where N is the number of subjects and k indexes the modality. Performing this conversion for both the fMRI and sMRI modalities yields the feature matrices of the two modalities.
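The reshaping described above can be sketched in a few lines of NumPy (the function name is the editor's; any array library would do):

```python
import numpy as np

def stack_subjects(volumes):
    """Flatten each subject's 3D feature volume into a 1 x L row vector
    (L = number of voxels) and stack all N subjects into an N x L matrix."""
    return np.stack([np.asarray(v).ravel() for v in volumes])

# e.g. 5 subjects, each with a 4 x 4 x 3 feature volume -> a 5 x 48 matrix X_k
X_k = stack_subjects([np.zeros((4, 4, 3)) for _ in range(5)])
```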

Next, the above feature matrices are centered and whitened to eliminate translation effects in the data, giving the features equal variance and mutual decorrelation and ensuring that the feature values of the two modalities lie in the same range.

Finally, principal component analysis is applied to the above feature matrices for dimensionality reduction, removing redundant information and irrelevant features and thereby lowering computational complexity and reducing noise interference. This helps retain the main directions of variation in the data and improves the generalization performance of the model.
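The centering, PCA reduction, and whitening chain can be sketched compactly with an SVD (an assumed NumPy implementation by the editor, not the patent's code; after the transform the retained components have unit variance and are mutually uncorrelated):

```python
import numpy as np

def center_whiten_reduce(X, n_components):
    """Center an N x L feature matrix, keep the leading principal
    directions, and whiten them (identity covariance on the output)."""
    Xc = X - X.mean(axis=0)                          # centering
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    # the left singular vectors, rescaled, are the whitened PCA scores
    return U[:, :n_components] * np.sqrt(X.shape[0] - 1)
```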

S3: Based on the dimensionality-reduced features of each modality obtained in step S2, add the reference information matrix and simultaneously maximize the independence of each modality's components, the correlation between the modalities, and the correlation between the components and the reference information, iterating with the gradient descent algorithm until convergence.

Specifically, the maximization is performed according to the following formula:

max_{W_1, W_2} { H(Y_1) + H(Y_2) + α_1 · Σ_{i=1}^{m_1} Σ_{j=1}^{m_2} Corr²(A_{1i}, A_{2j}) + α_2 · Σ_{i=1}^{m_1} Corr²(A_{1i}, ref_k) + α_3 · Σ_{j=1}^{m_2} Corr²(A_{2j}, ref_k) };    (1)

In formula (1), H(·) is the information entropy function that ensures the independence of each modality's independent components; m_k (k = 1, 2) is the number of mutually independent components of modality k; A_{1i} is the i-th column of the mixing matrix of modality 1; A_{2j} is the j-th column of the mixing matrix of modality 2; ref_k is the k-th column of the reference information matrix; Corr(A_{1i}, A_{2j}) denotes the correlation between A_{1i} and A_{2j}; Corr(A_{1i}, ref_k) and Corr(A_{2j}, ref_k) denote the correlations of A_{1i} and A_{2j} with ref_k, respectively; and α_1, α_2, α_3 are regularization parameters that control the weights among the correlation terms and ensure the convergence of the optimization procedure.
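The correlation part of formula (1) can be evaluated directly. The following sketch computes the α-weighted sum of squared column correlations (the entropy terms come from the ICA step and are omitted here; the function names are the editor's assumptions, not the patent's code):

```python
import numpy as np

def corr(a, b):
    """Pearson correlation between two 1-D vectors."""
    a = a - a.mean()
    b = b - b.mean()
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def constraint_term(A1, A2, ref, a1, a2, a3):
    """Squared-correlation part of objective (1): inter-modality column
    correlations plus each modality's correlations with the reference
    columns, weighted by the regularization parameters a1, a2, a3."""
    total = 0.0
    for i in range(A1.shape[1]):
        for j in range(A2.shape[1]):
            total += a1 * corr(A1[:, i], A2[:, j]) ** 2
    for h in range(ref.shape[1]):
        for i in range(A1.shape[1]):
            total += a2 * corr(A1[:, i], ref[:, h]) ** 2
        for j in range(A2.shape[1]):
            total += a3 * corr(A2[:, j], ref[:, h]) ** 2
    return total
```

When the two mixing-matrix columns and the reference column are identical, every correlation equals 1 and the term reduces to a1 + a2 + a3.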

Specifically, parallel independent component analysis is performed on each modality's features according to the following formula:

max H(Y_k) = −E[ln p_{Y_k}(Y_k)], Y_k = g(W_k · X_k + W_k0), k = 1, 2;

where p_{Y_1}(·) and p_{Y_2}(·) are the probability density functions of Y_1 and Y_2, respectively; E(·) is the expectation; H(·) is the entropy function; g(·) is the sigmoidal nonlinearity; W_k (k = 1, 2) is the inverse of the mixing matrix A_k, i.e., the unmixing matrix of A_k; W_k0 (k = 1, 2) is the bias weight matrix of modality k; and X_k (k = 1, 2) is the preprocessed feature matrix of modality k.

The widely used Infomax (information maximization) algorithm is adopted here to solve the ICA problem.
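A single Infomax step can be sketched as follows. This is the generic textbook natural-gradient update with a logistic nonlinearity, ΔW = λ(I + (1 − 2Y)Uᵀ)W, written as the editor's assumption; it is not necessarily the patent's exact implementation.

```python
import numpy as np

def infomax_step(W, W0, X, lr):
    """One natural-gradient Infomax update for one modality.
    W : m x m unmixing matrix, W0 : m x 1 bias, X : m x n whitened data
    (samples in columns), lr : learning rate lambda_k."""
    U = W @ X + W0                     # estimated sources U_k
    Y = 1.0 / (1.0 + np.exp(-U))       # logistic nonlinearity g(U_k)
    n = X.shape[1]
    dW = lr * (np.eye(W.shape[0]) + (1.0 - 2.0 * Y) @ U.T / n) @ W
    dW0 = lr * (1.0 - 2.0 * Y).mean(axis=1, keepdims=True)
    return W + dW, W0 + dW0
```

In practice this step is repeated over the data, with the learning rate annealed, until W converges.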

Solving the maximization problem of formula (1) can specifically include the following. Compute the correlations between the columns of A_1, A_2, and ref, and select the triplet A_{1i}, A_{2j}, ref_k with the largest correlation as the constrained terms. Take the partial derivatives of the sum of squared correlations with respect to A_{1i} and A_{2j}, set them equal to 0, and obtain the partial differential equations in A_{1i} and A_{2j}. Since the sum of squared correlations in the objective function is quadratic in A_{1i} and A_{2j}, its partial derivatives with respect to A_{1i} and A_{2j} are linear functions of A_{1i} and A_{2j}, so an approximate solution can be obtained. Set an initial point and iteratively update A_{1i} and A_{2j} according to formula (1) while ensuring that the sum of squared correlations increases; when the convergence criterion of the objective function is met, the iteration stops, and the current A_{1i} and A_{2j} are taken as the optimal solution of this iteration. Update A_1 and A_2 according to the optimal solution of each iteration and repeat the above steps until a stable solution is obtained, which is the optimal solution of formula (1).
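The triplet-selection step can be sketched as an exhaustive scan over column combinations, scoring each triple by its sum of pairwise squared correlations (an illustrative implementation by the editor; the exact scoring used in the patent is assumed, not quoted):

```python
import numpy as np

def strongest_triplet(A1, A2, ref):
    """Return the column indices (i, j, h) of the triple
    (A1[:, i], A2[:, j], ref[:, h]) with the largest sum of pairwise
    squared correlations, together with that score."""
    def corr(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    best, best_val = None, -np.inf
    for i in range(A1.shape[1]):
        for j in range(A2.shape[1]):
            for h in range(ref.shape[1]):
                val = (corr(A1[:, i], A2[:, j]) ** 2
                       + corr(A1[:, i], ref[:, h]) ** 2
                       + corr(A2[:, j], ref[:, h]) ** 2)
                if val > best_val:
                    best, best_val = (i, j, h), val
    return best, best_val
```

If one column of A1, one column of A2, and the reference column all carry the same signal, that triple scores the maximum of 3 and is selected as the constrained triplet.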

The reference information is used to guide the multimodal fusion and includes, but is not limited to, working memory scores, cognitive scores, symptom scores, and the genotype at a given locus. The reference information can be selected according to the actual research purpose.

S4: Step S3 yields the multimodal covarying components and mixing matrices that are significantly correlated with the reference information, thereby achieving globally optimized supervised multimodal brain image fusion.

Figure 2(a) compares, for each method, the estimated absolute correlation between each modality's target component and the reference information against the true absolute correlation between each modality's target component and the reference information.

In this embodiment, MCCA refers to multi-set canonical correlation analysis; jICA refers to joint independent component analysis; separate ICA refers to independent component analysis performed on each modality separately; PICA refers to parallel independent component analysis; MCCAR refers to supervised multivariate canonical correlation analysis; MCCAR+jICA refers to supervised multivariate canonical correlation analysis combined with joint independent component analysis; and supervised PICA refers to supervised parallel independent component analysis. In addition, true correlation refers to the ground-truth correlation.

As Figure 2(a) shows, the method proposed in this embodiment can still extract independent components related to the reference information even when the true correlation is small, and it remains robust in extracting independent components as the true correlation increases.

Figure 2(b) shows, for each method, the accuracy of the mixing matrix Ak and the independent components Sk, averaged over the absolute correlations between each modality's target component and the reference information.

Accuracy refers to the correlation between the mixing matrix and independent components estimated by each method and the true mixing matrix and components; the higher this correlation, the higher the accuracy.
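This notion of accuracy — the correlation between estimated and ground-truth components — could be computed as in the following sketch; matching each estimated component to its best-correlated true component (to handle the order and sign ambiguity of ICA) is an assumption about how the comparison is done:

```python
import numpy as np

def component_accuracy(S_est, S_true):
    """Accuracy of estimated components: the absolute correlation between
    each estimated component (row) and its best-matching ground-truth
    component.  Absolute value makes the score invariant to the sign
    ambiguity of ICA."""
    n = len(S_est)
    C = np.abs(np.corrcoef(np.vstack([S_est, S_true]))[:n, n:])
    return C.max(axis=1)  # best match per estimated component
```

The same function applies to columns of the mixing matrices after transposition.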

As Figure 2(b) shows, the proposed method achieves the best accuracy for both the mixing matrix Ak and the independent components Sk among the supervised multimodal fusion methods compared, which demonstrates the importance and necessity of the global optimization solver.

Figure 3(a) shows the accuracy of the independent components Sk for each method, averaged over 14 noise levels (peak signal-to-noise ratio PSNR = [-11, 10]); Figure 3(b) shows the corresponding accuracy of the mixing matrix Ak averaged over the same 14 noise levels. As Figures 3(a) and 3(b) show, the proposed method extracts independent components stably under different noise levels and achieves the best accuracy for both the mixing matrix Ak and the independent components Sk.
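As an illustration of how such a noise sweep can be set up, noise can be injected at a target PSNR using the standard definition PSNR = 10·log10(peak² / σ²); the Gaussian noise model and the helper name are assumptions, not details from the patent:

```python
import numpy as np

def add_noise_at_psnr(x, psnr_db, rng):
    """Add Gaussian noise to signal x at a target peak signal-to-noise
    ratio in dB, using PSNR = 10 * log10(peak^2 / noise_variance)."""
    peak = np.max(np.abs(x))
    noise_var = peak ** 2 / (10.0 ** (psnr_db / 10.0))
    return x + rng.standard_normal(x.shape) * np.sqrt(noise_var)
```

Sweeping `psnr_db` over the reported range and averaging the resulting accuracies reproduces the kind of evaluation shown in Figure 3.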

Table 1 compares each method's ability to extract independent components significantly correlated with the reference at a peak signal-to-noise ratio PSNR = 7. Here, real corr is the true correlation; A~A* is the accuracy of the mixing matrix extracted by the method proposed in this example; and S~S* is the accuracy of the independent components extracted by the method proposed in this example.

Table 1. Comparison of each method's ability to extract independent components significantly correlated with the reference information

The tests of the method proposed in this example are described in detail below.

Generation of simulated data in this example: the simTB toolbox was used to simulate two data sets with the same dimensions as fMRI and sMRI data, each containing six brain networks with different spatial distributions, together with two random mixing matrices.
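simTB itself is a MATLAB toolbox, so the following is only an analogue: a numpy sketch that mimics the X = A·S generative structure of the simulation (sparse "network" maps mixed by a random subject-wise matrix). The dimensions and the Laplacian source distribution are illustrative assumptions:

```python
import numpy as np

def simulate_modality(n_subjects, n_components, n_voxels, rng):
    """Toy analogue of a simTB-style simulation: super-Gaussian spatial
    'network' maps S and a random subject-wise mixing matrix A, giving
    the observed data matrix X = A @ S."""
    S = rng.laplace(size=(n_components, n_voxels))    # component maps
    A = rng.standard_normal((n_subjects, n_components))  # mixing matrix
    return A @ S, A, S

rng = np.random.default_rng(0)
X1, A1, S1 = simulate_modality(294, 6, 10000, rng)  # fMRI-like modality
X2, A2, S2 = simulate_modality(294, 6, 8000, rng)   # sMRI-like modality
```

Ground-truth A and S from such a simulation are what the accuracy comparisons in Figures 2 and 3 are measured against.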

Source of real data in this example: the Function Biomedical Informatics Research Network (FBIRN) Phase III study data set, which contains 294 subjects (147 schizophrenia patients and 147 demographically matched healthy controls); the working memory scores of all subjects were assessed with the CMINDS cognitive assessment system. These data were used to test the method proposed in this embodiment, with the working memory score serving as the reference information.

Figures 4(a), 4(b), 4(c), and 4(d) show the results of this embodiment on the FBIRN data set, where IC denotes an independent component. Figure 4(a) is a spatial view of the multimodal covarying component related to the working memory score detected by the proposed method; the brain regions involved and their activation levels are largely consistent with existing studies. Figure 4(b) shows the group differences for each modality; Figure 4(c) shows the fitted correlation between each modality and the working memory score; Figure 4(d) shows the fitted correlations between modalities. In summary, the proposed method can separate independent components that show group differences, are correlated across modalities, and are significantly correlated with the working memory score.

Claims (9)

1. A globally optimized supervised multimodal image fusion method, characterized by comprising the following steps:
(1) preprocessing the brain image data of each modality and performing feature extraction to obtain the features of each modality;
(2) preprocessing the obtained features of each modality to obtain the preprocessed feature matrix of each modality;
(3) computing the independence of the independent components of each modality from the feature matrix, the mixing matrix, and the weight matrix of each modality;
(4) solving for the mixing matrices that maximize the independence of the independent components of each modality, the correlation between the mixing matrices of the modalities, and the correlation between the mixing matrices of the modalities and the reference information;
(5) computing, from the mixing matrices obtained in step (4), the multimodal covarying components and mixing matrices related to the reference information.

2. The supervised multimodal image fusion method according to claim 1, characterized in that in step (3) the independence of the independent components of each modality is described by the information entropy function H(·), and the independence is maximized by maximizing the entropy H(Yk), where Yk is a nonlinear (sigmoidal) transform of Uk and:

Uk = Wk·Xk + Wk0;

where E(·) is the expectation; pYk is the probability density function; Yk and Uk are intermediate variables of the computation; Wk is the unmixing matrix of the mixing matrix Ak; Ak is the mixing matrix of modality k; Wk0 is the bias weight matrix of modality k; and Xk is the preprocessed feature matrix of modality k.

3. The supervised multimodal image fusion method according to claim 2, characterized in that in step (4) the objective maximizing the independence of the independent components of each modality, the correlation between the mixing matrices of the modalities, and the correlation between the mixing matrices and the reference information is:

max { H(Y1) + H(Y2) + α1·Corr²(A1i, A2j) + α2·Corr²(A1i, refh) + α3·Corr²(A2j, refh) };

where mk (k = 1, 2) is the number of independent components of modality k; A1i is the i-th column of the mixing matrix of modality 1; A2j is the j-th column of the mixing matrix of modality 2; refh is the h-th column of the reference information matrix; Corr(A1i, A2j) is the correlation between A1i and A2j; Corr(A1i, refh) is the correlation between A1i and refh; Corr(A2j, refh) is the correlation between A2j and refh; and α1, α2, α3 are regularization parameters.

4. The supervised multimodal image fusion method according to claim 3, characterized in that a gradient descent algorithm is applied iteratively to the solution formula until convergence, giving the optimal mixing matrices; the iteration rule for the unmixing matrix of each modality satisfies:

Wk ← Wk + λk·(I + (1 − 2Yk)·Uk^T)·Wk;

where λk is the learning rate of the independent component analysis for modality k; I is the identity matrix; and T denotes the matrix transpose.

5. The supervised multimodal image fusion method according to claim 4, characterized in that in step (4) maximizing the correlation between the mixing matrices of the modalities and the correlation between the mixing matrices and the reference information specifically comprises:
(41) computing the correlations among the mixing matrices of the modalities and the reference information, and selecting the triplet A1i, A2j, and refh with the largest correlation as the constrained terms;
(42) taking the partial derivatives of the sum of squared correlations with respect to A1i and A2j, setting them to zero, and obtaining equations in A1i and A2j;
(43) setting an initial point, iteratively updating A1i and A2j while ensuring that the sum of squared correlations increases, until the sum of squared correlations converges, and taking the current A1i and A2j as the optimal solution of this iteration;
(44) updating A1 and A2 with the optimal solution of step (43), and repeating steps (41), (42), and (43) to obtain the optimal A1i and A2j.

6. The supervised multimodal image fusion method according to claim 5, characterized in that the iterative update of A1i and A2j in step (43) satisfies update rules in which: Std(·) is the standard deviation; Cov(·) is the covariance; var(·) is the variance; the overbars denote the means of A1i, A2j, and refh; η11, η12, η21, and η22 are descent step sizes obtained by the gradient descent algorithm; and λc1 and λc2 are learning rates.

7. The supervised multimodal image fusion method according to claim 1, characterized in that in step (1) the modality images include functional magnetic resonance images (fMRI) and structural magnetic resonance images (sMRI).

8. The supervised multimodal image fusion method according to claim 7, characterized in that the modality features extracted from the fMRI include the fractional amplitude of low-frequency fluctuations, and the modality features extracted from the sMRI include gray matter volume.

9. The supervised multimodal image fusion method according to claim 1, characterized in that in step (2) preprocessing the obtained modality features comprises: matrixization, centering, whitening, and dimensionality reduction using principal component analysis.
CN202410397870.0A 2024-04-03 2024-04-03 A globally optimized supervised multimodal image fusion method Pending CN118351406A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410397870.0A CN118351406A (en) 2024-04-03 2024-04-03 A globally optimized supervised multimodal image fusion method


Publications (1)

Publication Number Publication Date
CN118351406A true CN118351406A (en) 2024-07-16

Family

ID=91823747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410397870.0A Pending CN118351406A (en) 2024-04-03 2024-04-03 A globally optimized supervised multimodal image fusion method

Country Status (1)

Country Link
CN (1) CN118351406A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119339873A (en) * 2024-12-16 2025-01-21 北京凯普顿医药科技开发有限公司 A method and device for analyzing nervous system images



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination