CN107450730B - A method and system for slow eye movement recognition based on convolutional hybrid model - Google Patents
A method and system for slow eye movement recognition based on convolutional hybrid model
- Publication number
- CN107450730B, CN201710695419.7A, CN201710695419A
- Authority
- CN
- China
- Prior art keywords
- eye movement
- independent
- frequency point
- signal
- components
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/02—Preprocessing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/22—Source localisation; Inverse modelling
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Complex Calculations (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a slow eye movement recognition method and system based on a convolutional hybrid model, belonging to the technical field of electrooculography. The method comprises: performing blind source separation on the eye movement data at each frequency point with a complex-valued ICA algorithm to obtain the frequency-domain independent component of each independent source signal at the corresponding frequency point; performing scale compensation on the independent components at each frequency point to restore the true proportion of each independent component in the observed components; sorting and adjusting the compensated independent components with a constrained DOA algorithm; applying the inverse short-time Fourier transform to the scale-compensated and sorted independent components at each frequency point to obtain the complete time-domain signals of the multi-channel independent sources; and performing wavelet decomposition on the complete time-domain signals of the multi-channel independent sources and comparing and analyzing the decomposition results against the criteria for slow eye movements, a segment that matches all slow-eye-movement features being identified as a slow eye movement. The invention performs wavelet analysis on the multi-channel EOG signals in the time domain and, free from interference by other source signals, can quickly extract slow eye movements from the EOG signals.
Description
Technical Field

The invention relates to the technical field of electrooculography, and in particular to a slow eye movement recognition method and system based on a convolutional hybrid model.

Background

The visual system is the most important channel through which humans acquire external information. Early in the history of experimental psychology, psychologists began to notice the psychological significance of eye movement characteristics and their regularities, and using eye movement techniques to explore human information processing mechanisms under various conditions has become a research hotspot in contemporary psychology. Eye movement, i.e., the motion of the eyeball, is closely related to internal information processing mechanisms and is under both exogenous and endogenous control; in most cases, eye movements are guided by a task or purpose.

Electrooculography (EOG), as a low-cost technique for measuring eye movement signals, is not only more accurate than traditional video-based methods; its measurement equipment is also lightweight, convenient for long-term recording, and easier to build into wearable designs. The use of EOG for eye movement signal acquisition therefore has broad application prospects.

According to their characteristics, eye movement signals are mainly divided into three categories: fixations, saccades, and pursuit eye movements. Fixations are often accompanied by three forms of extremely subtle eye movements: spontaneous high-frequency microtremor, slow eye movements, and microsaccades. Slow eye movements are the eye movements that occur during the transition from wakefulness to sleep. Because subjects tire easily during EOG data acquisition, slow eye movements are present in the collected eye movement data.

Slow eye movements carry a large amount of useful information and are widely applicable in many fields such as traffic psychology and clinical medicine; for example, they can be used to detect driver fatigue and to support the study and treatment of complex clinical cases. In practice, however, slow eye movements are usually intertwined with other signals and are difficult to extract on their own. At present, slow eye movement recognition based on linear regression mainly relies on the ratio between the number of detected slow eye movements and the number found by visual inspection. This indicator does not characterize the performance of the algorithm, the recognition results are unsatisfactory, and the approach falls short of practical use.
Summary of the Invention

The purpose of the present invention is to provide a slow eye movement recognition method and system based on a convolutional hybrid model, so as to improve the accuracy of slow eye movement recognition.

To achieve the above purpose, in a first aspect, the present invention provides a slow eye movement recognition method based on a convolutional hybrid model, comprising:

S1. In the frequency domain, performing blind source separation on the eye movement data at each frequency point using a complex-valued ICA algorithm to obtain the frequency-domain independent component of each independent source signal at the corresponding frequency point;

S2. Performing scale compensation on the independent components at each frequency point to restore the true proportion of each independent component in the observed components;

S3. Sorting and adjusting the compensated independent components using a constrained DOA algorithm, so that the independent sources at each frequency point are arranged by direction angle from small to large;

S4. Applying the inverse short-time Fourier transform to the scale-compensated and sorted independent components at each frequency point to obtain the complete time-domain signals of the multi-channel independent sources;

S5. Performing wavelet decomposition on the multi-channel independent sources in the time domain to obtain the wavelet coefficients at each level;

S6. Comparing and analyzing the wavelet coefficients at each level against the criteria for slow eye movements; a segment that matches all slow-eye-movement features is identified as a slow eye movement.
In step S5, the mother wavelet used for the wavelet decomposition is db4, and the number of decomposition levels is ten.

The features of a slow eye movement include: the frequency of the eye movement signal is below 1 Hz, the initial movement velocity of the eye movement signal is approximately zero, and no artifact signals appear in the EOG signal.

Step S2 specifically comprises:

obtaining the mixing matrix at each frequency point from the separation matrix at that frequency point in the complex-valued ICA algorithm, the separation matrix and the mixing matrix being inverses of each other;

compensating the independent components at each frequency point with the coefficients of the mixing matrix to obtain the compensated independent components at each frequency point.

Step S3 specifically comprises:

a. initializing an angle for each independent source;

b. estimating the direction of each source by applying the Root-MUSIC algorithm to the different rows of the separation matrix at each frequency point, the rows of the separation matrix corresponding to different independent sources;

c. defining ε(y, θ) as the measure of closeness between the direction angle of each independent source and its initialization angle, and judging, during the iteration, whether the angle of each independent source equals its initialization angle;

d. if they are equal, executing step e; otherwise, executing step f;

e. setting ε(y_j, θ_j) to 0 and constructing the direction angle matrix T to compute the adjustment matrix Q;

f. setting ε(y_j, θ_j) to 1 and returning to the iteration to recompute the separation matrix W.

Before step S1, the method further comprises:

collecting multi-channel EOG data to obtain eye movement data in the time domain;

applying band-pass filtering and mean removal to the time-domain eye movement data to obtain processed eye movement data;

applying the short-time Fourier transform to the processed eye movement data to transform it from the time domain into the frequency domain, obtaining eye movement data in the frequency domain.
In a second aspect, the present invention provides a slow eye movement recognition system based on a convolutional hybrid model, comprising a blind source separation module, a scale compensation module, a sorting module, a recovery module, a wavelet decomposition module, and a slow eye movement recognition module connected in sequence;

the blind source separation module is configured to perform, in the frequency domain, blind source separation on the eye movement data at each frequency point using a complex-valued ICA algorithm, obtain the frequency-domain independent component of each independent source signal at the corresponding frequency point, and transmit the frequency-domain independent components to the scale compensation module;

the scale compensation module is configured to perform scale compensation on the independent components at each frequency point, restore the true proportion of each independent component in the observed components, and transmit the compensated independent components to the sorting module;

the sorting module is configured to sort and adjust the compensated independent components using a constrained DOA algorithm, so that the independent sources at each frequency point are arranged by direction angle from small to large;

the recovery module is configured to apply the inverse short-time Fourier transform to the scale-compensated and sorted independent components at each frequency point, obtain the complete time-domain signals of the multi-channel independent sources, and transmit them to the wavelet decomposition module;

the wavelet decomposition module is configured to perform wavelet decomposition on the complete time-domain signals of the multi-channel independent sources, obtain the wavelet coefficients at each level, and transmit the decomposition results to the slow eye movement recognition module;

the slow eye movement recognition module is configured to compare and analyze the wavelet coefficients at each level against the criteria for slow eye movements; a segment that matches all slow-eye-movement features is identified as a slow eye movement.
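For illustration only, the chain of modules in the second aspect can be read as a simple processing pipeline. The Python sketch below is an interpretation of that structure rather than the patented implementation; every function it wires together is a placeholder standing in for the corresponding module described above.

```python
def recognize_slow_eye_movements(eog, fs,
                                 blind_source_separation,
                                 scale_compensation,
                                 doa_sorting,
                                 inverse_stft,
                                 wavelet_decomposition,
                                 slow_eye_movement_check):
    """Chain the modules in the order described above: blind source separation ->
    scale compensation -> constrained-DOA sorting -> inverse STFT ->
    wavelet decomposition -> slow eye movement recognition."""
    components = blind_source_separation(eog, fs)    # frequency-domain independent components
    components = scale_compensation(components)      # restore the true proportions
    components = doa_sorting(components)             # fix the ordering at every frequency point
    sources = inverse_stft(components, fs)           # complete time-domain source signals
    coefficients = wavelet_decomposition(sources)    # db4, ten levels
    return slow_eye_movement_check(coefficients)     # compare against the slow-eye-movement criteria
```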
Compared with the prior art, the present invention has the following technical effects: the invention performs blind source separation on the frequency-domain observations with a complex-valued ICA algorithm, resolves the scale indeterminacy inherent in ICA by restoring the true proportion of each independent component in the observed components, and resolves the permutation ambiguity inherent in ICA with a constrained DOA algorithm, so that the individual independent sources are separated and converted back into time-domain data. Because the time-domain signals obtained after the inverse Fourier transform do not interfere with one another, wavelet analysis can be performed on the multi-channel EOG signals in the time domain and slow-eye-movement analysis carried out on the wavelet coefficients of each level; free from interference by other source signals, the method achieves high accuracy with little computation and can quickly extract slow eye movements from the EOG signals.
Brief Description of the Drawings

The specific embodiments of the present invention are described in detail below with reference to the accompanying drawings:

FIG. 1 is a schematic diagram of the distribution of the electrodes on the subject's face during EOG signal acquisition in the present invention;

FIG. 2 is a schematic flowchart of a slow eye movement recognition method based on a convolutional hybrid model in the present invention;

FIG. 3 is a flowchart of the robust saccade recognition algorithm for multi-channel EOG signals in the present invention;

FIG. 4 is a basic schematic diagram of blind source separation (BSS) in the present invention;

FIG. 5 shows the time-frequency-domain waveforms at six adjacent frequency points in the present invention;

FIG. 6 shows the EOG waveforms before and after separation by convolutive ICA in the present invention;

FIG. 7 compares the separation results of the linear ICA model and the convolutive ICA model in the present invention;

FIG. 8 shows the average recognition rates obtained with the different methods of the present invention;

FIG. 9 shows the results of the slow eye movement experiment in the present invention;

FIG. 10 is a schematic structural diagram of a slow eye movement recognition system based on a convolutional hybrid model in the present invention.

The present invention is further described below through specific embodiments in conjunction with the accompanying drawings.

Detailed Description

To further illustrate the features of the present invention, please refer to the following detailed description and the accompanying drawings. The drawings are for reference and illustration only and are not intended to limit the scope of protection of the present invention.
It should first be noted that, in the present invention, before the EOG signals are recognized, the EOG signals are acquired as follows:

As shown in FIG. 1, electrodes are used to acquire the subject's EOG signals; Ag/AgCl electrodes are used for the electrooculographic acquisition. To obtain eye movement information in the up, down, left, and right directions as well as additional spatial position information, a total of nine electrodes are used in this acquisition. Electrodes V1 and V2 are placed 1.5 cm above and 1.5 cm below the subject's left (or right) eyeball to acquire the vertical EOG signal; electrodes H1 and H2 are placed 1.5 cm to the left of the subject's left eye and 1.5 cm to the right of the right eye, respectively, to acquire the horizontal EOG signal; electrodes Fp1 and Fp2 are placed on the forehead to enhance the spatial information; reference electrodes C1 and C2 are placed on the left and right mastoids, respectively, and the ground electrode D is located at the center of the top of the head.

During the experimental acquisition, the subject sits in front of the screen facing it. The word "prepare" appears on the screen together with a "beep" alert; one second later the subject sees a red arrow cue on the screen (an up, down, left, or right arrow), which remains on the screen for six seconds. During this period, the experiment requires the subject to rotate the eyes in the direction indicated by the arrow upon seeing it and to return the gaze to the center point after seeing the observation point; the subject must not blink during this process. Afterwards there is a short two-second break during which the subject may blink and relax.
As shown in FIG. 2 and FIG. 3, the present invention discloses a slow eye movement recognition method based on a convolutional hybrid model, comprising the following steps S1 to S6:

S1. In the frequency domain, perform blind source separation on the eye movement data at each frequency point using a complex-valued ICA algorithm, obtaining the frequency-domain independent component of each independent source signal at the corresponding frequency point.

It should be noted that, in this embodiment, the acquired multi-channel time-domain EOG data are band-pass filtered and mean-removed, the cut-off frequencies of the band-pass filter being 0.01 Hz and 8 Hz; a sliding window with a window length of 256 and a window shift of 128 is then used to apply the short-time Fourier transform (STFT) to the processed eye movement data, transforming the time-domain eye movement data into frequency-domain eye movement data.
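For illustration only, the preprocessing and time-frequency transform described above can be sketched in Python as follows. The 0.01 Hz to 8 Hz passband, the window length of 256, and the window shift of 128 come from this embodiment, while the Butterworth filter design, the assumed sampling rate fs, and the function and variable names are assumptions, not part of the patent.

```python
import numpy as np
from scipy import signal

def preprocess_and_stft(eog, fs=256.0):
    """Band-pass filter (0.01 Hz to 8 Hz), remove the mean, then STFT each channel.

    eog: array of shape (channels, samples).
    """
    # Zero-phase band-pass filtering; a 4th-order Butterworth design is assumed here.
    sos = signal.butter(4, [0.01, 8.0], btype="bandpass", fs=fs, output="sos")
    filtered = signal.sosfiltfilt(sos, eog, axis=-1)
    # Mean removal per channel.
    filtered -= filtered.mean(axis=-1, keepdims=True)
    # STFT with window length 256 and window shift 128 (i.e. 50% overlap).
    freqs, times, X = signal.stft(filtered, fs=fs, nperseg=256, noverlap=128, axis=-1)
    # X has shape (channels, frequency points, time windows).
    return freqs, times, X
```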
In this embodiment, band-pass filtering and mean removal of the time-domain eye movement signals remove interference such as baseline drift, electromyographic (EMG), electrocardiographic (ECG), and electroencephalographic (EEG) signals, reducing the disturbance of the different noise signals to the original multi-channel eye movement data and thereby improving the recognition accuracy.

As shown in FIG. 4, the blind source separation of the frequency-domain eye movement data proceeds as follows:

1) From the multi-channel observations X_i (i = 1, 2, ..., N), compute the covariance matrix R_x of the observations, R_x = E{(X - m_x)(X - m_x)^T}, where X denotes the observations, m_x the mean of the observations, (·)^T the transpose operation, and E{·} the expectation operation. After obtaining the covariance matrix R_x of the observations, the observations must be whitened so that the mixing matrix is orthogonalized; the whitening matrix V is computed as follows:

Decompose the covariance matrix R_x as R_x = E D E^T, where E is the matrix formed by the normalized orthogonal eigenvectors of R_x, and D = diag(λ_1, λ_2, ..., λ_N) is the diagonal matrix formed by the corresponding eigenvalues.

The resulting whitening matrix takes the form V = D^(-1/2) E^T.

2) Using the whitening matrix, whiten the observations via Z(t) = V X(t), compute the fourth-order cumulants of the whitening process, and form the set N = {λ, N_r | 1 ≤ r ≤ M}, computing no more than M significant features, where λ denotes the eigenvectors, N_r the dimension of the observed data, M the number of sources, and r an integer not exceeding the number of sources;

3) Jointly diagonalize the set N = {λ, N_r | 1 ≤ r ≤ M} with a unitary matrix U, and compute the mixing matrix A via A = W × U;

4) Since the mixing matrix A and the separation matrix W are inverses of each other, W = A^(-1); with the separation matrix W, blind source separation can be performed on the observations at each frequency point.
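For illustration only, the whitening step 1) above can be sketched in Python as follows; the full complex-valued ICA update (steps 2) to 4)) is not reproduced here, the Hermitian transpose is used in place of the plain transpose because the frequency-domain data are complex, and the function and variable names are assumptions.

```python
import numpy as np

def whiten_frequency_bin(X_f):
    """Whiten the complex observations X_f (channels x windows) of one frequency point.

    Returns the whitening matrix V and the whitened data Z with identity covariance,
    following R_x = E D E^H and V = D^(-1/2) E^H.
    """
    X_c = X_f - X_f.mean(axis=1, keepdims=True)   # remove the mean
    R_x = (X_c @ X_c.conj().T) / X_c.shape[1]     # sample covariance matrix
    eigvals, E = np.linalg.eigh(R_x)              # R_x = E D E^H
    V = np.diag(eigvals ** -0.5) @ E.conj().T     # whitening matrix
    Z = V @ X_c                                   # whitened observations
    return V, Z
```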
S2. Perform scale compensation on the independent components at each frequency point to restore the true proportion of each independent component in the observed components.

Specifically, the scale compensation of the independent components proceeds as follows:

the mixing matrix at each frequency point is obtained from the separation matrix at that frequency point in the complex-valued ICA algorithm, the separation matrix and the mixing matrix being inverses of each other;

the independent components at each frequency point are compensated with the coefficients of the mixing matrix, giving the scale-compensated independent components at each frequency point.

Specifically, taking the two-dimensional ICA problem as an example, let the observed signals be x_1, x_2 and the sources be s_1, s_2; the observed signals can then be written as:

x_1 = a_11 s_1 + a_12 s_2 = v_11 + v_12,

x_2 = a_21 s_1 + a_22 s_2 = v_21 + v_22.

Here v_ij = a_ij s_j denotes the true component of independent source s_j in the observed signal x_i, i.e., the projection of s_j onto x_i. Since v_11 and v_21 both originate from s_1 and differ from s_1 only in amplitude, and the same holds for the relationship of v_12 and v_22 to s_2, if W(f_k) is the estimated separation matrix at a given frequency point, the mixing matrix at that frequency point is A(f_k) = W^(-1)(f_k). The resulting mixing matrix coefficients can then be used to compensate the independent components at each frequency point, namely:

v_ij(f_k, τ) = A_ij(f_k) Y_j(f_k, τ),

where Y_j(f_k, τ) denotes the separated independent component of the j-th channel before scale compensation, and v_ij(f_k, τ) denotes, after scale compensation, the true component of the j-th independent component in the i-th observed signal. According to the above analysis, after the independent components at a frequency point f_k are scale-compensated with the above formula, one frequency-domain independent component yields N compensated outputs; these N compensated results then undergo the subsequent processing, such as resolving the permutation ambiguity, combining the different frequency points, and the inverse transform, giving N clean signals originating from the same independent source.

In practical applications, one of the N signals from the same independent source may be selected as the output, or the N signals from the same independent source may be averaged and then output.
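For illustration only, this scale compensation can be sketched in Python as follows for a single frequency point; W_f denotes the estimated separation matrix at that frequency point, Y_f the separated components (sources by time windows), and these names are assumptions.

```python
import numpy as np

def scale_compensate(W_f, Y_f):
    """Restore the true proportions: v_ij(f,t) = A_ij(f) * Y_j(f,t), with A = W^(-1)."""
    A_f = np.linalg.inv(W_f)                      # mixing matrix of this frequency point
    n_obs, n_src = A_f.shape
    # v[i, j, :] is the j-th independent component as it appears in observation i.
    v = np.empty((n_obs, n_src, Y_f.shape[1]), dtype=complex)
    for i in range(n_obs):
        for j in range(n_src):
            v[i, j, :] = A_f[i, j] * Y_f[j, :]
    return v
```

As stated above, one of the compensated copies of each source can then be selected as the output, or the copies can be averaged.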
S3. Process the compensated independent components with the constrained DOA algorithm, so that the independent sources at each frequency point are arranged by direction angle from small to large.

Specifically, the sorting of the compensated independent components proceeds as follows:

a. initializing an angle for each independent source;

b. estimating the direction of each source by applying the Root-MUSIC algorithm to the different rows of the separation matrix at each frequency point, the rows of the separation matrix corresponding to different independent sources;

c. defining ε(y, θ) as the measure of closeness between the direction angle of each independent source and its initialization angle, and judging, during the iteration, whether the angle of each independent source equals its initialization angle;

d. if they are equal, executing step e; otherwise, executing step f;

e. setting ε(y_j, θ_j) to 0 and constructing the direction angle matrix T to compute the adjustment matrix Q;

f. setting ε(y_j, θ_j) to 1 and returning to the iteration to recompute the separation matrix W.

It should be noted that an angle θ_j is initialized for each independent source s_j. Owing to the uncertainty of the source positions, the angle of the i-th independent source is set smaller than that of the (i+1)-th source, and the initialized angles r(θ) are taken as a constraint. Assuming that the separation at each frequency point f succeeds, the rows of the separation matrix W correspond to different independent sources, so applying the Root-MUSIC algorithm to the different rows at each frequency point yields an estimate of each source direction.

Here, to effectively evaluate the ability of the constrained DOA algorithm to detect and correct ordering errors, the closeness between the angle obtained at each frequency point and the initialization angle is measured by ε(y_j, θ_j), where y_j is the estimate of each source direction. The two angles are compared during the iteration; if they differ, i.e., ε(y_j, θ_j) = 1, the procedure returns to the iteration and recomputes the separation matrix W.
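For illustration only, a generic Root-MUSIC direction-of-arrival estimator for a uniform linear array is sketched below in Python; it is not taken from the patent (which applies Root-MUSIC to the rows of the separation matrix), the half-wavelength element spacing is an assumption, and the sign convention of the returned angles depends on the assumed array geometry.

```python
import numpy as np

def root_music(R, n_sources, spacing_wavelengths=0.5):
    """Estimate DOA angles (degrees) from an M x M covariance matrix R of a uniform linear array."""
    M = R.shape[0]
    eigvals, eigvecs = np.linalg.eigh(R)             # eigenvalues in ascending order
    E_n = eigvecs[:, :M - n_sources]                 # noise subspace
    C = E_n @ E_n.conj().T
    # Coefficients of the Root-MUSIC polynomial: sums along the diagonals of C.
    coeffs = np.array([np.trace(C, offset=k) for k in range(M - 1, -M, -1)])
    roots = np.roots(coeffs)
    roots = roots[np.abs(roots) < 1.0]               # roots come in pairs z and 1/conj(z)
    # Keep the n_sources roots closest to the unit circle.
    roots = roots[np.argsort(1.0 - np.abs(roots))[:n_sources]]
    sines = np.angle(roots) / (2.0 * np.pi * spacing_wavelengths)
    angles = np.degrees(np.arcsin(np.clip(sines, -1.0, 1.0)))
    return np.sort(angles)
```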
If the two angles are the same, i.e., ε(y_j, θ_j) = 0, a direction angle matrix T must be constructed to compute the adjustment matrix Q.

The independent sources at each frequency point f are taken in order of increasing angle, and the direction angle matrix T is set according to this ordering.

In the direction angle matrix T, the diagonal reflects the angular ordering. After amplitude compensation of the signals obtained from blind source separation, the estimate y of the source signals S is obtained:

y = PΛS = PV,

where P is a permutation matrix, Λ is a diagonal matrix, and S denotes the source signals. For ease of analysis, substituting X = AS into y = WX gives y = WAS = DS, where W is the separation matrix, X the observations, A the mixing matrix, and S the source signals. From the indeterminacy of ICA, each row and each column of the matrix D must contain exactly one non-zero element, so D can be written as D = PΛ, where P is a permutation matrix and Λ a diagonal matrix; P and Λ introduce, respectively, the permutation and amplitude indeterminacies of the ICA output. This embodiment therefore defines an adjustment matrix Q that adjusts the matrix P, thereby resolving the permutation indeterminacy of ICA.

Further, the adjustment matrix Q is computed from the direction angle matrix T as:

Q = TP^(-1).

If the permutation matrix P is identical to the direction angle matrix T, the independent sources at each frequency point are already arranged in order of increasing angle, and no readjustment is needed;

if the permutation matrix P differs from the direction angle matrix T, P is left-multiplied by the adjustment matrix Q to obtain a new permutation matrix P′;

processing the permutation matrix P via P′ = QP = TP^(-1)P = T yields the new permutation matrix P′, and the direction angles of the sources at each frequency point are then recovered via y = P′ΛS = P′V. The independent sources obtained at each frequency point f are now arranged by direction angle from small to large, which resolves the permutation ambiguity of ICA.
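For illustration only, the net effect of this adjustment, re-ordering the components of every frequency point by their estimated direction angles, can be sketched in Python as below; this is an interpretation of the procedure rather than the exact patented computation with T, P, and Q, and the inputs and names are assumptions.

```python
import numpy as np

def align_permutations(components, angles):
    """Sort the independent components of every frequency point by ascending DOA angle.

    components: list over frequency points, each an array of shape (sources, windows).
    angles:     list over frequency points, each an array of shape (sources,).
    Returns the re-ordered components, so that the same source occupies the same row
    at every frequency point.
    """
    aligned = []
    for Y_f, theta_f in zip(components, angles):
        order = np.argsort(theta_f)        # ascending direction angles
        aligned.append(Y_f[order, :])      # apply the permutation to the rows
    return aligned
```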
It should be noted that the permutation ambiguity of the ICA output is an inherent limitation of the ICA algorithm. Time-frequency-domain blind deconvolution involves the blind separation of multiple frequency points in different windows; if the ICA separation results of the individual frequency points are not matched so that the frequency-domain independent components belonging to the same source are grouped together, sub-band signals from different sources are wrongly stitched together, which greatly degrades the final separation, scrambles the signals recovered in the time domain, and in turn affects the recognition of the EOG signals. By sorting the amplitude-compensated independent components of each frequency point with the constrained DOA algorithm so that the independent sources at every frequency point f are arranged by direction angle from small to large, the permutation ambiguity at each frequency point is effectively resolved, the quality of the blind source separation is improved, and the recognition rate benefits.

S4. Apply the inverse short-time Fourier transform to the scale-compensated and sorted independent components at each frequency point to obtain the complete time-domain signals of the multi-channel independent sources.

It should be noted that the inverse short-time Fourier transform is performed once the components corresponding to the different sources at each frequency point are correctly ordered and their amplitudes have been restored; the resulting time-domain signals are then re-segmented and combined to obtain the estimates of the source signals.

The inverse short-time Fourier transform proceeds as follows:

the inverse transform is applied column by column to the time-frequency matrix already obtained, yielding the time signals at the different window positions; these time signals are then concatenated in order of increasing window position to obtain the complete time signal of each source.

In this procedure, the time signals in adjacent windows partially overlap; the overlap length is defined when the original observations are windowed and framed at the outset and equals half the frame length. The overlapping data of adjacent windows are generally handled by adding the second half of the preceding window to the first half of the following window and dividing by two to take the average.
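For illustration only, the reconstruction back to the time domain can be sketched with SciPy's inverse STFT, which performs the overlap-add of adjacent windows internally; the window length of 256 and shift of 128 mirror the values used earlier, while the sampling rate and the names are assumptions.

```python
from scipy import signal

def reconstruct_time_sources(S_f, fs=256.0):
    """Inverse STFT of the aligned, scale-compensated components.

    S_f: array of shape (sources, frequency points, time windows), matching the
         layout produced by scipy.signal.stft with nperseg=256 and noverlap=128.
    Returns an array of shape (sources, samples) with the time-domain source estimates.
    """
    _, sources_time = signal.istft(S_f, fs=fs, nperseg=256, noverlap=128,
                                   time_axis=-1, freq_axis=-2)
    return sources_time
```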
S5. Perform wavelet decomposition on the complete time-domain signals of the multi-channel independent sources to obtain the wavelet coefficients at each level.

Specifically, the wavelet decomposition formula is as follows:

[c, l] = wavedec(Y, N, wname),

where c is the wavelet decomposition vector, l contains the lengths of the components from the highest to the lowest level, Y is the variable to be decomposed, N is the number of decomposition levels, and wname is the mother wavelet. In this embodiment, the multi-channel EOG data are decomposed into ten wavelet levels, and the mother wavelet of the decomposition is db4.
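The formula above uses MATLAB's wavedec syntax; for illustration only, an equivalent sketch in Python using the PyWavelets package is given below, with the function name an assumption. It returns the level-10 approximation coefficients followed by the detail coefficients of levels 10 down to 1.

```python
import pywt

def decompose_source(y, wavelet="db4", level=10):
    """Ten-level wavelet decomposition of one time-domain source signal with the db4 mother wavelet."""
    coeffs = pywt.wavedec(y, wavelet, level=level)
    return coeffs   # [cA10, cD10, cD9, ..., cD1]
```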
S6. Compare and analyze the wavelet coefficients at each level against the criteria for slow eye movements; a segment that matches all slow-eye-movement features is identified as a slow eye movement.

Specifically, the features of a slow eye movement are: (1) a slow sinusoidal deflection appears in the signal and lasts longer than one second, i.e., the signal frequency is below 1 Hz; (2) the initial movement velocity of the signal is close to zero (in this embodiment, an initial movement velocity below 0.000001 is regarded as close to zero); (3) no artifact signals such as blinks, EEG, or EMG appear in the EOG waveform. When an eye movement signal satisfies all three conditions simultaneously, a slow eye movement is considered to have occurred in the signal. The comparison and judgment process of this embodiment is explained with reference to FIG. 9; FIG. 9B-(a) shows a segment of slow eye movement extracted from the time-domain signal:

The first row of FIG. 9B-(a) is a segment of waveform cut from the EOG signal returned to the time domain after frequency-domain blind separation; rows two to six below it are the wavelet coefficients obtained after wavelet decomposition, being in turn the sixth-level wavelet, the seventh-level wavelet, and so on up to the tenth-level wavelet (marked in the figure). As can be seen from the figure, when FIG. 9B-(a) is decomposed to the tenth-level wavelet coefficients, the signal frequency is already below 1 Hz, the initial velocity is 0, and no artifact signals appear in the EOG waveform, so the segment can be judged to be a slow eye movement. For D6 and D7 in FIG. 9B-(a) it is easy to see that the signal frequency is not below 1 Hz and a small number of artifact signals appear, so they are not slow eye movements; the initial velocity of D8 is not 0 and the signal frequency of D9 is above 1 Hz, so they are not slow eye movements either. In summary, when judging slow eye movements the wavelet coefficients of each level must be compared and analyzed against the criteria for slow eye movements, and only a waveform that satisfies all the features simultaneously is identified as a slow eye movement.
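For illustration only, such a rule-based check on a signal reconstructed from one wavelet level can be sketched in Python as below; the dominant-frequency test via the FFT, the first-difference approximation of the initial velocity, the amplitude threshold used as an artifact test, and all names and threshold values other than the 1 Hz and 0.000001 limits quoted above are assumptions.

```python
import numpy as np

def is_slow_eye_movement(coeff, fs, amp_artifact_threshold=100.0):
    """Check the three slow-eye-movement criteria on one reconstructed coefficient sequence.

    coeff: 1-D time-domain signal reconstructed from one wavelet level.
    fs:    sampling rate of that signal in Hz.
    """
    # (1) Dominant frequency below 1 Hz.
    spectrum = np.abs(np.fft.rfft(coeff - coeff.mean()))
    freqs = np.fft.rfftfreq(coeff.size, d=1.0 / fs)
    dominant_freq = freqs[np.argmax(spectrum[1:]) + 1]     # skip the DC bin
    # (2) Initial velocity approximately zero (first difference at the start).
    initial_velocity = abs(coeff[1] - coeff[0])
    # (3) No large-amplitude artifacts such as blinks.
    no_artifacts = np.max(np.abs(coeff)) < amp_artifact_threshold
    return (dominant_freq < 1.0) and (initial_velocity < 1e-6) and no_artifacts
```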
Because the time-domain signals obtained after the inverse Fourier transform do not interfere with one another, wavelet analysis is performed on the multi-channel EOG signals in the time domain and slow-eye-movement analysis is carried out separately on the wavelet coefficients of each level; free from interference by other source signals, the method achieves high accuracy with little computation and can quickly extract slow eye movements from the EOG signals.

It should be noted that FIG. 5 shows the time-frequency-domain waveforms at six adjacent frequency points of the two independent sources obtained after blind source separation of a two-channel EOG signal with the complex-valued ICA algorithm. The horizontal axis indicates the position of the sliding window and the vertical axis the amplitude of the signal. The two waveform plots, FIGS. 5-(a) and 5-(b), show that the third and fifth channels, counted from top to bottom, exhibit the permutation ambiguity problem.

FIG. 6 shows the EOG waveforms of the multi-channel eye movement data before and after separation by convolutive ICA. The horizontal axis indicates the sample index and the vertical axis the amplitude of the signal. Comparing FIGS. 6-(a) and 6-(b) shows that, after separation by convolutive ICA, the blink artifact source signal has been separated out.

As shown in FIG. 7, FIGS. 7(a) and 7(b) show the saccadic EOG waveforms after separation by linear ICA and by convolutive ICA, respectively, where the horizontal axis indicates the sample index and the vertical axis the signal amplitude. FIGS. 7(c) and 7(d) show, respectively, the time-domain and frequency-domain waveforms of a segment of the saccadic EOG signal taken from the second channel of FIGS. 7(a) and 7(b). In the time-domain plots the horizontal axis is the sample index and the vertical axis the signal amplitude; in the frequency-domain plots the horizontal axis is the frequency and the vertical axis the signal amplitude. The two figures clearly show that after linear ICA separation the artifact signal is not separated cleanly and a blink signal is still present, and that the saccadic EOG signal separated by linear ICA has a wider frequency band than the one separated by convolutive ICA. In this embodiment, therefore, the convolutive ICA algorithm is preferably used for the blind source separation of the eye movement data.

FIG. 8 shows the average recognition rates of the slow eye movement signals under different algorithms. The horizontal axis indicates the subject index and the vertical axis the average recognition rate. The average recognition rate obtained with the convolutive ICA method is 97.254%, which is 4.854%, 7.168%, and 2.64% higher than with the band-pass filtering method, the wavelet denoising method, and the linear ICA method, respectively.

As shown in FIG. 9, FIGS. 9A(a) and 9A(b) are the time-domain EOG waveforms after blind separation by convolutive ICA and by linear ICA, respectively. Wavelet decomposition of each channel shows that two waveform segments in the fourth channel contain slow eye movements (indicated by the arrows); the results are shown in FIGS. 9B(a) and 9B(b). The two plots in FIG. 9B show that slow eye movements appear when the decomposition reaches the tenth level. To compare with the separation results of linear ICA, wavelet decomposition and analysis were performed on each channel waveform separated by linear ICA at the positions where slow eye movements appear in the convolutively separated waveforms; the experimental results are shown in FIGS. 9C and 9D. As the four plots show, no slow eye movements appear in the waveforms separated by linear ICA.
In addition, as shown in FIG. 10, this embodiment also discloses a slow eye movement recognition system based on a convolutional hybrid model, comprising a blind source separation module 10, a scale compensation module 20, a sorting module 30, a recovery module 40, a wavelet decomposition module 50, and a slow eye movement recognition module 60 connected in sequence;

the blind source separation module 10 is configured to perform, in the frequency domain, blind source separation on the eye movement data at each frequency point using a complex-valued ICA algorithm, obtain the frequency-domain independent component of each independent source signal at the corresponding frequency point, and transmit the frequency-domain independent components to the scale compensation module 20;

the scale compensation module 20 is configured to perform scale compensation on the independent components at each frequency point, restore the true proportion of each independent component in the observed components, and transmit the compensated independent components to the sorting module 30;

the sorting module 30 is configured to sort the compensated independent components using the constrained DOA algorithm, so that the independent sources at each frequency point are arranged by direction angle from small to large;

the recovery module 40 is configured to apply the inverse short-time Fourier transform to the scale-compensated and sorted independent components at each frequency point, obtain the complete time-domain signals of the multi-channel independent sources, and transmit them to the wavelet decomposition module 50;

the wavelet decomposition module 50 is configured to perform wavelet decomposition on the complete time-domain signals of the multi-channel independent sources, obtain the wavelet coefficients at each level, and transmit the decomposition results to the slow eye movement recognition module 60;

the slow eye movement recognition module 60 is configured to compare and analyze the wavelet coefficients at each level against the criteria for slow eye movements; a segment that matches all slow-eye-movement features is identified as a slow eye movement.

It should be noted that the slow eye movement recognition system based on a convolutional hybrid model disclosed in this embodiment has the same or corresponding technical features and technical effects as the method disclosed in the above embodiment, which are not repeated here.

The above are only preferred embodiments of the present invention and are not intended to limit the present invention; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710695419.7A CN107450730B (en) | 2017-08-15 | 2017-08-15 | A method and system for slow eye movement recognition based on convolutional hybrid model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107450730A CN107450730A (en) | 2017-12-08 |
CN107450730B true CN107450730B (en) | 2020-02-21 |
Family
ID=60492006
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710695419.7A Active CN107450730B (en) | 2017-08-15 | 2017-08-15 | A method and system for slow eye movement recognition based on convolutional hybrid model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107450730B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112036467B (en) * | 2020-08-27 | 2024-01-12 | 北京鹰瞳科技发展股份有限公司 | Abnormal heart sound identification method and device based on multi-scale attention neural network |
CN118262403B (en) * | 2024-03-27 | 2024-11-19 | 北京极溯光学科技有限公司 | Eye movement data processing method, device, equipment and readable storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102125429A (en) * | 2011-03-18 | 2011-07-20 | 上海交通大学 | Alertness detection system based on electro-oculogram signal |
CN106163391A (en) * | 2014-01-27 | 2016-11-23 | 因泰利临床有限责任公司 | System for multiphase sleep management, method for the operation thereof, device for sleep analysis, method for classifying a current sleep phase, and use of the system and the device in multiphase sleep management |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100292545A1 (en) * | 2009-05-14 | 2010-11-18 | Advanced Brain Monitoring, Inc. | Interactive psychophysiological profiler method and system |
- 2017-08-15: application CN201710695419.7A filed in China (CN); patent CN107450730B granted, legal status Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102125429A (en) * | 2011-03-18 | 2011-07-20 | 上海交通大学 | Alertness detection system based on electro-oculogram signal |
CN106163391A (en) * | 2014-01-27 | 2016-11-23 | 因泰利临床有限责任公司 | System for multiphase sleep management, method for the operation thereof, device for sleep analysis, method for classifying a current sleep phase, and use of the system and the device in multiphase sleep management |
Non-Patent Citations (3)
Title |
---|
"An Algorithm for Reading Activity Recognition"; Rui OuYang et al.; IEEE; 2015-12-04 *
"Research on Blink Removal Algorithms in EOG-Based Reading Activity Recognition" (基于EOG的阅读行为识别中眨眼信号去除算法研究); Zhang Beibei; Journal of Signal Processing (信号处理); Feb. 2017; Vol. 33, No. 2; pp. 236-244 *
"Fatigue Detection from EOG Signals Based on Convolutional Neural Networks" (基于卷积神经网络的眼电信号疲劳检测); Zhu Xuemin; China Master's Theses Full-text Database, Medicine & Health Sciences; 2016-07-15; No. 7, 2016; p. E080-4; abstract and pp. 15-16, 27 *
Also Published As
Publication number | Publication date |
---|---|
CN107450730A (en) | 2017-12-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |