
CN105677043A - Two-stage self-adaptive training method for motor imagery brain-computer interface - Google Patents

Two-stage self-adaptive training method for motor imagery brain-computer interface

Info

Publication number
CN105677043A
CN105677043A
Authority
CN
China
Prior art keywords
trial
matrix
mental imagery
computer interface
brain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610107996.5A
Other languages
Chinese (zh)
Other versions
CN105677043B (en)
Inventor
黄志华
文宇坤
黄炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou University
Original Assignee
Fuzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou University filed Critical Fuzhou University
Priority to CN201610107996.5A priority Critical patent/CN105677043B/en
Publication of CN105677043A publication Critical patent/CN105677043A/en
Application granted
Publication of CN105677043B publication Critical patent/CN105677043B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Neurology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Dermatology (AREA)
  • Biomedical Technology (AREA)
  • Neurosurgery (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention relates to a two-stage adaptive training method for a motor imagery brain-computer interface, comprising a single-trust stage followed by a mutual-trust stage. Single trust means that the system trusts the data and updates the classifier with it: the system first performs preliminary training on several trials carried out by the user to obtain an initial workable classifier, and then continuously updates that classifier by incremental learning. Mutual trust means that the system trusts the data and the classifier simultaneously, letting the two adapt to each other: the system first recognizes with the classifier obtained in the single-trust stage; after the user has carried out a certain number of trials, it screens the data according to the feedback results, combined with the support-vector selection of an SVM, continuously updates the classifier by incremental learning, and recognizes and gives feedback on subsequent trials with the new classifier, repeating this process until training ends. The method provided by the invention strengthens the mutual adaptation between classifier and user, takes little time, and achieves high accuracy.

Description

Two-Stage Adaptive Training Method for a Motor Imagery Brain-Computer Interface

Technical Field

The invention addresses the mutual-adaptation problem in the human-machine training process of brain-computer interfaces, and provides an adaptive training method for motor-imagery brain-computer interfaces.

Background

A brain-computer interface (BCI) user and the BCI system must adapt to each other. During human-machine training, the BCI system continuously collects data and adapts to the user's characteristics through machine learning, while the user, by observing the system's feedback, actively adjusts his or her brain activity to suit the way the BCI works. The mutual adaptation between the BCI user and the BCI system is a dynamic process. Collecting training data without regard to user feedback and performing machine learning only on a static training set is inconsistent with this characteristic and does not help to improve BCI performance.

Summary of the Invention

In view of this, the object of the present invention is to provide a training method for mutual human-machine adaptation for motor-imagery brain-computer interfaces. The invention divides the human-machine training process of the motor imagery BCI into two stages: a single-trust stage followed by a mutual-trust stage. In the single-trust stage, the BCI system trusts the data and continuously updates the classifier with new data by incremental learning. In the mutual-trust stage, the BCI system trusts both the data and the classifier, letting the two adapt to each other.

The invention is implemented as follows. A two-stage adaptive training method for a motor imagery brain-computer interface comprises the following steps:

Step S1 (single-trust stage): the motor-imagery BCI system trusts the data and updates the classifier with it. Initially, the user carries out several trials of motor imagery while the system collects samples online, obtains a preliminary transfer matrix G with the LDA/QR algorithm, and computes the class-center vector set C to form an initial classifier. The user continues motor imagery; the system recognizes the user's motor imagery online with the classifier, gives feedback, and acquires new samples. After each completed trial, the system updates G with the ILDA/QR algorithm, recomputes C to form a new classifier, and uses it for recognition and feedback on the next trial, until this stage ends.

Step S2 (mutual-trust stage): the motor-imagery BCI system trusts both the data and the classifier, letting the two adapt to each other. Initially, the system recognizes the user's motor imagery online with the classifier obtained at the end of step S1 and gives feedback. After every fixed number of user trials, the system screens the new samples with the mutual-trust selection method, updates the transfer matrix G with the LDA/QR or ILDA/QR algorithm, and computes the class-center vector set C to form a new classifier for subsequent recognition and feedback, until the human-machine training ends.

Further, step S1 specifically comprises the following steps:

Step S11: the user carries out several trials of motor imagery; the system collects signals online, intercepts a signal segment at fixed time intervals, and converts it by feature extraction into an m-dimensional feature vector, denoted x.

Step S12: construct the data matrix A from all feature vectors obtained in step S11, and construct the matrix E from the number of classes k.

Step S13: execute the LDA/QR algorithm to obtain the optimal transfer matrix G, compute the class-center vector set C, and form the initial classifier.

Step S14: the user carries out one trial of motor imagery; the system collects EEG signals online, intercepts a signal segment at fixed time intervals and converts it by feature extraction into a feature vector x, classifies x with the classifier, and moves the controlled object according to the result as feedback to the user.

Step S15: execute the ILDA/QR algorithm to update the transfer matrix G, then update the data matrix A and compute the class-center vector set C to form a new classifier.

Step S16: if the stage-ending condition is met, this stage ends; otherwise, use the new classifier for recognition and feedback on the next trial and return to step S14.

Further, step S2 specifically comprises the following steps:

Step S21: set the flag to 0 and construct a container for storing the k classes of motor imagery samples.

Step S22: the user carries out one trial of motor imagery; the system collects EEG signals online, intercepts a signal segment at fixed time intervals and converts it by feature extraction into a feature vector x, classifies x with the classifier, and moves the controlled object according to the result as feedback to the user.

Step S23: when the current trial ends, if the controlled object hit the target, store all samples of the trial in the container and record the time the trial took to hit the target.

Step S24: if the number of trials has not reached the adaptive-update condition, return to step S22; otherwise, the system screens the new samples with the mutual-trust selection method.

Step S25: if the flag equals 0, construct the data matrix A, construct the matrix E from the number of classes k, execute the LDA/QR algorithm to obtain a new transfer matrix G, and set the flag to 1; otherwise, execute the ILDA/QR algorithm to update the transfer matrix G and update the data matrix A.

Step S26: compute the new class-center vector set C from the transfer matrix G and the data matrix A, form a new classifier, and empty the container.

Step S27: if the human-machine training is finished, it ends; otherwise, return to step S22.

Further, the mutual-trust selection method of step S24 comprises the following steps:

Step S41: screen trials by the time taken to hit the target. The times of all trials in the container form a preliminary set; find the minimum time; every other time generates a virtual time at its mirror position about the minimum time; all virtual times are added to the preliminary set to form a reference set; compute the standard deviation of the reference set; and select the trials whose time is less than the sum of the minimum time and the standard deviation.

Step S42: screen samples by the support-vector method. All samples of the trials selected in step S41 are added to a candidate sample set; an SVM is trained on the candidate set, and the samples identified as support vectors are selected.

Here, a trial denotes one experimental unit of the human-machine training process of the motor-imagery BCI. At the start of a trial, the BCI system gives a target at random; the user performs motor imagery, trying to move the controlled object toward the target, while the system decides the actual movement direction by recognizing the type of motor imagery. The trial ends when the controlled object hits the target or times out.
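In pure Python, the trial screening of step S41 might look like the following sketch; the choice of the population standard deviation is an assumption, as the patent does not say which estimator is meant:

```python
import statistics

def select_fast_trials(hit_times):
    """Step S41: keep trials whose time-to-target is below
    (minimum time + std of a mirrored reference set)."""
    t_min = min(hit_times)
    # every other time generates a virtual time mirrored about t_min
    virtual = [2 * t_min - t for t in hit_times if t != t_min]
    reference = list(hit_times) + virtual
    # population std is an assumption; the patent just says "standard deviation"
    sigma = statistics.pstdev(reference)
    # return the indices of the selected trials
    return [i for i, t in enumerate(hit_times) if t < t_min + sigma]
```

Mirroring about the minimum makes slow outliers inflate the spread symmetrically, so the threshold min + sigma tightens automatically when the user performs consistently.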

Further, the method for constructing or updating the data matrix A in steps S12 and S25 is:

Data matrix A = [x_1, x_2, …, x_n] = [A_1, …, A_k] ∈ R^(m×n), where x_i ∈ R^(m×1), i = 1, …, n, is an m-dimensional sample point and A contains n sample points in total. m is determined by the feature-extraction method and is fixed while the system runs; n grows dynamically during operation. Each sample block matrix A_i is the set of all sample points of the i-th class; there are k classes in total, and n_i, i = 1, …, k, is the number of samples of the i-th class, with n_1 + … + n_k = n.

Further, the specific method of constructing the matrix E from the number of classes k in steps S12 and S25 is:

Matrix E = diag(e_1, …, e_k) ∈ R^(n×k), the block-diagonal class-indicator matrix, where e_i = (1, 1, …, 1)^T ∈ R^(n_i), i = 1, …, k.
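A minimal sketch of how A and E could be assembled from class-grouped samples, using plain nested lists (illustrative only, not the patent's implementation):

```python
def build_A_and_E(samples_by_class):
    """samples_by_class: list of k lists; each inner list holds the
    m-dimensional samples (as lists) of one class.
    Returns A as an m x n matrix whose columns are the samples in class
    order, and the n x k block indicator E with e_i = (1,...,1)^T."""
    k = len(samples_by_class)
    cols = [x for cls in samples_by_class for x in cls]   # class-ordered samples
    n, m = len(cols), len(cols[0])
    A = [[cols[j][i] for j in range(n)] for i in range(m)]  # m x n
    E = [[0] * k for _ in range(n)]
    j = 0
    for i, cls in enumerate(samples_by_class):
        for _ in cls:
            E[j][i] = 1   # row j belongs to class i
            j += 1
    return A, E
```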

Further, the input of the LDA/QR algorithm is the data matrix A ∈ R^(m×n) and E ∈ R^(n×k), and the output is the transfer matrix G ∈ R^(m×k).

It specifically comprises the following steps:

Step S31: compute the economy QR decomposition of A, A = QR, Q ∈ R^(m×n), R ∈ R^(n×n), where Q is column-orthogonal and R is nonsingular.

Step S32: solve the lower-triangular linear system R^T H = E for H.

Step S33: compute G = QH.
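The three steps above can be sketched in pure Python; a real system would use a LAPACK-backed QR, but a modified Gram–Schmidt version shows the structure (assuming A has full column rank, so that R is nonsingular as the algorithm requires):

```python
import math

def lda_qr(A, E):
    """LDA/QR (steps S31-S33): economy QR of A, solve R^T H = E by
    forward substitution, return G = Q H.  Small-matrix sketch only."""
    m, n = len(A), len(A[0])
    k = len(E[0])
    # --- S31: economy QR via modified Gram-Schmidt ---
    Q = [[0.0] * n for _ in range(m)]
    R = [[0.0] * n for _ in range(n)]
    V = [[A[i][j] for i in range(m)] for j in range(n)]  # columns of A
    for j in range(n):
        v = V[j]
        for i in range(j):
            R[i][j] = sum(Q[r][i] * v[r] for r in range(m))
            v = [v[r] - R[i][j] * Q[r][i] for r in range(m)]
        R[j][j] = math.sqrt(sum(x * x for x in v))  # nonzero if A full rank
        for r in range(m):
            Q[r][j] = v[r] / R[j][j]
    # --- S32: R^T H = E; R^T is lower triangular -> forward substitution ---
    H = [[0.0] * k for _ in range(n)]
    for c in range(k):
        for i in range(n):
            s = sum(R[p][i] * H[p][c] for p in range(i))  # (R^T)[i][p] = R[p][i]
            H[i][c] = (E[i][c] - s) / R[i][i]
    # --- S33: G = Q H ---
    return [[sum(Q[i][p] * H[p][c] for p in range(n)) for c in range(k)]
            for i in range(m)]
```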

Further, the input of the ILDA/QR algorithm is the column-orthogonal matrix Q, the optimal transfer matrix G, and a newly arrived sample x with its class label l; the output is the updated column-orthogonal matrix Q and transfer matrix G.

It specifically comprises the following steps:

Step S41: compute r = Q^T x and update Q = [Q, (x − Qr)/α].

Step S42: compute r = −G^T x, set r(l) = r(l) + 1, r = r/α, and update G = G + Q(:, n+1) r^T.
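A sketch of this update in pure Python. Here α is taken to be the 2-norm of the residual x − Qr, which is the value that keeps Q column-orthogonal; the patent text visible here uses α without defining it, so this is an assumption:

```python
import math

def ilda_qr_update(Q, G, x, label):
    """ILDA/QR incremental update (steps S41-S42).
    Q: m x n column-orthogonal matrix (list of rows),
    G: m x k transfer matrix, x: new m-vector, label: 0-based class index.
    alpha = ||x - Q Q^T x||_2 (assumed; not defined in the visible text)."""
    m, n = len(Q), len(Q[0])
    k = len(G[0])
    # S41: r = Q^T x; residual z = x - Q r; append normalized residual column
    r = [sum(Q[i][j] * x[i] for i in range(m)) for j in range(n)]
    z = [x[i] - sum(Q[i][j] * r[j] for j in range(n)) for i in range(m)]
    alpha = math.sqrt(sum(v * v for v in z))
    q_new = [v / alpha for v in z]
    Q = [Q[i] + [q_new[i]] for i in range(m)]          # Q = [Q, z/alpha]
    # S42: s = -G^T x; s(l) += 1; s /= alpha; G += Q(:, n+1) s^T
    s = [-sum(G[i][c] * x[i] for i in range(m)) for c in range(k)]
    s[label] += 1.0
    s = [v / alpha for v in s]
    G = [[G[i][c] + q_new[i] * s[c] for c in range(k)] for i in range(m)]
    return Q, G
```

Because the update touches only one new column and a rank-one correction of G, its cost is linear in the existing matrix sizes, which is what makes the per-trial online update feasible.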

Further, the class-center vector set C is computed as follows: given the data matrix A of known classes and the transfer matrix G, compute P = A^T G = [p_1; p_2; …; p_n] = [P_1; P_2; …; P_k] ∈ R^(n×k), where p_j ∈ R^(1×k), j = 1, …, n, is the projection of the j-th sample point of A, and P_i, i = 1, …, k, is the set of all p_j belonging to the i-th class. Compute the center of each class, C_i = (1/n_i) Σ_{p_j ∈ P_i} p_j ∈ R^(1×k), to obtain the class-center vector set C = [C_1; …; C_k].

Further, the classifier classifies as follows: for a sample x to be decided, compute d_i = ‖x^T G − C_i‖_2, i = 1, …, k, select the smallest d_i, and assign x to the corresponding class.
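The center computation and the nearest-center decision rule can be sketched together; a plain-Python illustration with 0-based class indices, assuming the samples in A are ordered class by class:

```python
def class_centers(A, G, class_sizes):
    """P = A^T G; average the rows of P class-by-class to get C."""
    m, n = len(A), len(A[0])
    k = len(G[0])
    P = [[sum(A[i][j] * G[i][c] for i in range(m)) for c in range(k)]
         for j in range(n)]                      # n x k projected samples
    C, start = [], 0
    for ni in class_sizes:                       # class_sizes = [n_1, ..., n_k]
        rows = P[start:start + ni]
        C.append([sum(r[c] for r in rows) / ni for c in range(k)])
        start += ni
    return C

def classify(x, G, C):
    """Assign x to the class whose center is nearest to x^T G."""
    p = [sum(x[i] * G[i][c] for i in range(len(x))) for c in range(len(C[0]))]
    d = [sum((p[c] - Ci[c]) ** 2 for c in range(len(p))) for Ci in C]
    return d.index(min(d))       # 0-based index of the smallest d_i
```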

Before using a BCI system, the user must go through a human-machine training process so that the two adapt to each other. During this process the user's state is not static: while observing the system's feedback, the user continuously adjusts his or her own state to suit the way the BCI works, so the quality of the data set collected by the system varies with the user's state. It is therefore very important that the system can automatically select high-quality data during human-machine training. Compared with the prior art, the invention accordingly has the following advantages:

1. The invention lets the BCI system automatically select samples during human-machine training, so that the user's good states are highlighted in the new sample set.

2. The invention adopts incremental learning: when new samples arrive, the BCI system quickly absorbs the information they carry, without retraining from scratch.

3. The invention adopts concise and efficient algorithms: the BCI system can update itself online during human-machine training, so that the mutual adaptation between system and user is completed dynamically during the training process.

In summary, the method provided by the invention strengthens the mutual adaptability of the BCI system and the user during human-machine training.

Description of the Drawings

Fig. 1 is a schematic diagram of the overall flow of the invention.

Fig. 2 is a schematic flow chart of step S1 of the invention.

Fig. 3 is a schematic flow chart of step S2 of the invention.

Fig. 4 is a schematic diagram of one motor-imagery trial of the invention.

Detailed Description

The invention is further described below with reference to the drawings and an embodiment.

This embodiment provides a two-stage adaptive training method for a motor imagery brain-computer interface which, as shown in Fig. 1, comprises the following steps:

Step S1 (single-trust stage): the motor-imagery BCI system trusts the data and updates the classifier with it. Initially, the user carries out several trials of motor imagery while the system collects samples online, obtains a preliminary transfer matrix G with the LDA/QR algorithm, and computes the class-center vector set C to form an initial classifier. The user continues motor imagery; the system recognizes the user's motor imagery online with the classifier, gives feedback, and acquires new samples. After each completed trial, the system updates G with the ILDA/QR algorithm, recomputes C to form a new classifier, and uses it for recognition and feedback on the next trial, until this stage ends.

Step S2 (mutual-trust stage): the motor-imagery BCI system trusts both the data and the classifier, letting the two adapt to each other. Initially, the system recognizes the user's motor imagery online with the classifier obtained at the end of step S1 and gives feedback. After every fixed number of user trials, the system screens the new samples with the mutual-trust selection method, updates the transfer matrix G with the LDA/QR or ILDA/QR algorithm, and computes the class-center vector set C to form a new classifier for subsequent recognition and feedback, until the human-machine training ends.

In this embodiment, as shown in Fig. 2, step S1 specifically comprises the following steps:

Step S11: the user carries out several trials of motor imagery; the system collects signals online, intercepts a signal segment at fixed time intervals, and converts it by feature extraction into an m-dimensional feature vector, denoted x.

Step S12: construct the data matrix A from all feature vectors obtained in step S11, and construct the matrix E from the number of classes k.

Step S13: execute the LDA/QR algorithm to obtain the optimal transfer matrix G, compute the class-center vector set C, and form the initial classifier.

Step S14: the user carries out one trial of motor imagery; the system collects EEG signals online, intercepts a signal segment at fixed time intervals and converts it by feature extraction into a feature vector x, classifies x with the classifier, and moves the controlled object according to the result as feedback to the user.

Step S15: execute the ILDA/QR algorithm to update the transfer matrix G, then update the data matrix A and compute the class-center vector set C to form a new classifier.

Step S16: if the stage-ending condition is met, this stage ends; otherwise, use the new classifier for recognition and feedback on the next trial and return to step S14.

In this embodiment, as shown in Fig. 3, step S2 specifically comprises the following steps:

Step S21: set the flag to 0 and construct a container for storing the k classes of motor imagery samples.

Step S22: the user carries out one trial of motor imagery; the system collects EEG signals online, intercepts a signal segment at fixed time intervals and converts it by feature extraction into a feature vector x, classifies x with the classifier, and moves the controlled object according to the result as feedback to the user.

Step S23: when the current trial ends, if the controlled object hit the target, store all samples of the trial in the container and record the time the trial took to hit the target.

Step S24: if the number of trials has not reached the adaptive-update condition, return to step S22; otherwise, the system screens the new samples with the mutual-trust selection method.

Step S25: if the flag equals 0, construct the data matrix A, construct the matrix E from the number of classes k, execute the LDA/QR algorithm to obtain a new transfer matrix G, and set the flag to 1; otherwise, execute the ILDA/QR algorithm to update the transfer matrix G and update the data matrix A.

Step S26: compute the new class-center vector set C from the transfer matrix G and the data matrix A, form a new classifier, and empty the container.

Step S27: if the human-machine training is finished, it ends; otherwise, return to step S22.

In this embodiment, the mutual-trust selection method of step S24 comprises the following steps:

Step S41: screen trials by the time taken to hit the target. The times of all trials in the container form a preliminary set; find the minimum time; every other time generates a virtual time at its mirror position about the minimum time; all virtual times are added to the preliminary set to form a reference set; compute the standard deviation of the reference set; and select the trials whose time is less than the sum of the minimum time and the standard deviation.

Step S42: screen samples by the support-vector method. All samples of the trials selected in step S41 are added to a candidate sample set; an SVM is trained on the candidate set, and the samples identified as support vectors are selected.

Here, a trial denotes one experimental unit of the human-machine training process of the motor-imagery BCI. At the start of a trial, the BCI system gives a target at random; the user performs motor imagery, trying to move the controlled object toward the target, while the system decides the actual movement direction by recognizing the type of motor imagery. The trial ends when the controlled object hits the target or times out.

In this embodiment, the method for constructing or updating the data matrix A in steps S12 and S25 is:

Data matrix A = [x_1, x_2, …, x_n] = [A_1, …, A_k] ∈ R^(m×n), where x_i ∈ R^(m×1), i = 1, …, n, is an m-dimensional sample point and A contains n sample points in total. m is determined by the feature-extraction method and is fixed while the system runs; n grows dynamically during operation. Each sample block matrix A_i is the set of all sample points of the i-th class; there are k classes in total, and n_i, i = 1, …, k, is the number of samples of the i-th class, with n_1 + … + n_k = n.

In this embodiment, the specific method of constructing the matrix E from the number of classes k in steps S12 and S25 is:

Matrix E = diag(e_1, …, e_k) ∈ R^(n×k), the block-diagonal class-indicator matrix, where e_i = (1, 1, …, 1)^T ∈ R^(n_i), i = 1, …, k.

In this embodiment, the input of the LDA/QR algorithm is the data matrix A ∈ R^(m×n) and E ∈ R^(n×k), and the output is the transfer matrix G ∈ R^(m×k).

It specifically comprises the following steps:

Step S31: compute the economy QR decomposition of A, A = QR, Q ∈ R^(m×n), R ∈ R^(n×n), where Q is column-orthogonal and R is nonsingular.

Step S32: solve the lower-triangular linear system R^T H = E for H.

Step S33: compute G = QH.

In this embodiment, the input of the ILDA/QR algorithm is the column-orthogonal matrix Q, the optimal transfer matrix G, and a newly arrived sample x with its class label l; the output is the updated column-orthogonal matrix Q and transfer matrix G.

It specifically comprises the following steps:

Step S41: compute r = Q^T x and update Q = [Q, (x − Qr)/α].

Step S42: compute r = −G^T x, set r(l) = r(l) + 1, r = r/α, and update G = G + Q(:, n+1) r^T.

In this embodiment, the class-center vector set C is computed as follows: given the data matrix A of known classes and the transfer matrix G, compute P = A^T G = [p_1; p_2; …; p_n] = [P_1; P_2; …; P_k] ∈ R^(n×k), where p_j ∈ R^(1×k), j = 1, …, n, is the projection of the j-th sample point of A, and P_i, i = 1, …, k, is the set of all p_j belonging to the i-th class. Compute the center of each class, C_i = (1/n_i) Σ_{p_j ∈ P_i} p_j ∈ R^(1×k), to obtain the class-center vector set C = [C_1; …; C_k].

In this embodiment, the classifier classifies as follows: for a sample x to be decided, compute d_i = ‖x^T G − C_i‖_2, i = 1, …, k, select the smallest d_i, and assign x to the corresponding class.

In this embodiment, the user's motor imagery experiment consists of multiple runs, each run consists of multiple trials, and each trial consists of multiple overlapping fixed-length windows (Windowslength). As shown in Figure 4, a trial is one experimental unit of the human-machine training process of the motor imagery brain-computer interface: it starts when a randomly given target appears and ends when the controlled object hits the target or a timeout occurs. Specifically: at 0 s, a target board appears on the screen, representing the target the user is expected to hit, and the user may begin imagining the corresponding movement. At 2 s, the controlled object, represented by a ball, appears at the center of the screen; from 2 s to 10 s, the ball moves according to the classifier's recognition of the user's imagery. If the ball hits the target board before 10 s, the trial ends early and the rest phase begins. At 10 s, whether or not the ball has hit the target board, the ball stops moving, the trial ends, and the rest phase begins. During the rest phase, which lasts 2 s, both the ball and the target board disappear, after which a new trial begins. In this embodiment, as shown in Figure 4, a 2 s data segment is taken as the fixed window length (Windowslength), a window is extracted every 0.1 s, and each extracted window is fed into the system as the online signal data segment x.
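The overlapping windowing described above can be sketched as a simple index generator (the sampling rate `fs` is an assumption; the patent does not specify one):

```python
def window_indices(fs, win_s=2.0, step_s=0.1, trial_s=10.0):
    """Start/end sample indices of the overlapping fixed-length windows:
    a 2 s window is taken every 0.1 s within one trial of at most 10 s."""
    win = int(win_s * fs)
    step = int(step_s * fs)
    n = int(trial_s * fs)
    return [(s, s + win) for s in range(0, n - win + 1, step)]
```

For example, at fs = 250 Hz a 10 s trial yields 81 windows of 500 samples each, starting every 25 samples.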

The above descriptions are only preferred embodiments of the present invention; all equivalent changes and modifications made within the scope of the patent claims of the present invention shall fall within the scope of the present invention.

Claims (10)

1. A two-stage adaptive training method for a motor imagery brain-computer interface, characterized by comprising the following steps:
Step S1: This step is the single-trust phase, in which the motor imagery brain-computer interface system trusts the data and uses the data to update the classifier. Initially, the user performs the motor imagery of multiple trials and the system acquires samples online; the LDA/QR algorithm is applied to obtain a preliminary transfer matrix G, and the class center vector set C is computed, forming a preliminary classifier. The user continues the motor imagery; the system recognizes the user's motor imagery online with the classifier and feeds the result back to the user, obtaining new samples at the same time. Whenever a trial is completed, the system applies the ILDA/QR algorithm to update the transfer matrix G, computes the class center vector set C, and forms a new classifier that is used for the recognition and feedback of the next trial, until this phase ends;
Step S2: This step is the mutual-trust phase, in which the motor imagery brain-computer interface system trusts both the data and the classifier, allowing the two to adapt to each other. Initially, the system recognizes the user's motor imagery online with the classifier finally obtained in step S1 and feeds the result back to the user. After the user performs a certain number of trials, the system screens the new samples with the mutual-trust optimization method, applies the LDA/QR algorithm or the ILDA/QR algorithm to update the transfer matrix G, computes the class center vector set C, and forms a new classifier that is used for subsequent recognition and feedback, until the human-machine training ends.
2. The two-stage adaptive training method for a motor imagery brain-computer interface according to claim 1, characterized in that step S1 specifically comprises the following steps:
Step S11: The user performs the motor imagery of multiple trials; the system acquires the signal online, intercepts a signal segment at every interval, and converts it into an m-dimensional feature vector, denoted x, through feature extraction;
Step S12: Construct the data matrix A from all feature vectors obtained in step S11, and construct the matrix E according to the number of classes k;
Step S13: Execute the LDA/QR algorithm to obtain the optimal transfer matrix G, compute the class center vector set C, and form a preliminary classifier;
Step S14: The user performs the motor imagery of one trial; the system acquires the EEG signal online, intercepts a signal segment at every interval and converts it into a feature vector x through feature extraction, recognizes x with the classifier, and moves the controlled object according to the recognition result as feedback to the user;
Step S15: Execute the ILDA/QR algorithm to update the transfer matrix G, then update the data matrix A, compute the class center vector set C, and form a new classifier;
Step S16: If the end condition of this phase is met, this phase ends; otherwise, the new classifier is used for the recognition and feedback of the next trial, and the method returns to step S14.
3. The two-stage adaptive training method for a motor imagery brain-computer interface according to claim 1, characterized in that step S2 specifically comprises the following steps:
Step S21: Set the flag to 0, and construct a container for storing the k types of motor imagery samples;
Step S22: The user performs the motor imagery of one trial; the system acquires the EEG signal online, intercepts a signal segment at every interval and converts it into a feature vector x through feature extraction, recognizes x with the classifier, and moves the controlled object according to the recognition result as feedback to the user;
Step S23: When the current trial ends, if the controlled object has hit the target, store all samples of this trial in the container and record the time this trial took to hit the target;
Step S24: If the number of trials has not reached the adaptive update condition, return to step S22; otherwise, the system screens the new samples with the mutual-trust optimization method;
Step S25: If the flag equals 0, construct the data matrix A, construct the matrix E according to the number of classes k, execute the LDA/QR algorithm to obtain a new transfer matrix G, and set the flag to 1; otherwise, execute the ILDA/QR algorithm to update the transfer matrix G and update the data matrix A;
Step S26: Compute the new class center vector set C from the transfer matrix G and the data matrix A, form a new classifier, and empty the container;
Step S27: If the end condition is met, the human-machine training ends; otherwise, return to step S22.
4. The two-stage adaptive training method for a motor imagery brain-computer interface according to claim 3, characterized in that the mutual-trust optimization method in step S24 comprises the following steps:
Step S41: Screen trials according to the time taken to hit the target. The times of all trials in the container form a preliminary set; find the minimum time, and for every other time generate a virtual time at its symmetric position with the minimum time as the axis of symmetry; all virtual times join the preliminary set to form a reference set. Compute the standard deviation of the reference set, and select the trials whose time is less than the sum of the minimum time and the standard deviation;
Step S42: Screen samples with the support vector method. All samples corresponding to the trials selected in step S41 are added to a candidate sample set; an SVM is trained on the candidate sample set, and the samples identified as support vectors are selected;
Wherein a trial represents one experimental unit of the human-machine training process of the motor imagery brain-computer interface: when a trial starts, the brain-computer interface system randomly gives a target; the user performs motor imagery to move the controlled object toward the target, and the brain-computer interface system determines the direction in which the controlled object actually moves by recognizing the type of motor imagery; the trial ends when the controlled object hits the target or a timeout occurs.
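The time-based screening of step S41 can be sketched as follows (an illustrative sketch of our reading of the mirroring rule; the subsequent SVM support-vector step of S42 is omitted here):

```python
import numpy as np

def screen_trials(hit_times):
    """Step S41 sketch: mirror every hit time about the minimum to build
    the reference set, then keep trials with time < min + std."""
    t = np.asarray(hit_times, dtype=float)
    t_min = t.min()
    virtual = 2.0 * t_min - t                  # symmetric position about t_min
    reference = np.concatenate([t, virtual])   # preliminary set + virtual times
    threshold = t_min + reference.std()
    return np.flatnonzero(t < threshold)       # indices of selected trials
```

For example, with hit times of 3.0 s, 3.5 s, and 9.0 s, the slow 9.0 s trial falls above the threshold and is discarded.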
5. The two-stage adaptive training method for a motor imagery brain-computer interface according to claim 2 or 3, characterized in that the method of constructing or updating the data matrix A in steps S12 and S25 is:
The data matrix A = [x_1, x_2, …, x_n] = [A_1, …, A_k] ∈ R^(m×n), where x_i ∈ R^(m×1), i = 1, …, n, is an m-dimensional sample point and A contains n sample points in total; m is determined by the feature extraction method and is fixed during system operation, while n increases dynamically; each sample block matrix A_i ∈ R^(m×n_i) is the set of all sample points of the i-th class, there are k classes in total, and n_i, i = 1, …, k, is the number of samples of the i-th class, with n_1 + n_2 + … + n_k = n.
6. The two-stage adaptive training method for a motor imagery brain-computer interface according to claim 2 or 3, characterized in that the matrix E in steps S12 and S25 is constructed according to the number of classes k as follows:
The matrix E = diag(e_1, e_2, …, e_k) ∈ R^(n×k) is the block-diagonal class-indicator matrix, where e_i = [1 1 … 1]^T ∈ R^(n_i), i = 1, …, k.
7. The two-stage adaptive training method for a motor imagery brain-computer interface according to any one of claims 1 to 3, characterized in that the input of the LDA/QR algorithm is the data matrix A ∈ R^(m×n) and E ∈ R^(n×k), and the output is the transfer matrix G ∈ R^(m×k);
The algorithm specifically comprises the following steps:
Step S31: Compute the economic QR decomposition of A: A = QR, with Q ∈ R^(m×n) column-orthogonal and R ∈ R^(n×n) non-singular;
Step S32: Solve the lower triangular linear system R^T H = E for H;
Step S33: Compute G = QH.
8. The two-stage adaptive training method for a motor imagery brain-computer interface according to any one of claims 1 to 3, characterized in that the input of the ILDA/QR algorithm is the column-orthogonal matrix Q, the optimal transfer matrix G, and a new sample x together with its class label l, and the output is the updated column-orthogonal matrix Q and transfer matrix G;
The algorithm specifically comprises the following steps:
Step S41: Compute r = Q^T x and α = √(x^T x − r^T r), then update Q = [Q, (x − Qr)/α];
Step S42: Compute r = −G^T x, set r(l) = r(l) + 1, r = r/α, and update G = G + Q(:, n+1) r^T.
9. The two-stage adaptive training method for a motor imagery brain-computer interface according to any one of claims 1 to 3, characterized in that
the class center vector set C is computed as follows: given the data matrix A of known classes and the transfer matrix G, compute P = A^T G = [p_1; p_2; …; p_n] = [P_1; P_2; …; P_k] ∈ R^(n×k), where p_j ∈ R^(1×k), j = 1, …, n, is the projection of the j-th sample point of A, and P_i, i = 1, …, k, denotes the set of all p_j belonging to the i-th class; the center of each class is C_i = (1/n_i) Σ_{p_j ∈ P_i} p_j ∈ R^(1×k), and the class center vector set is C = [C_1; …; C_k].
10. The two-stage adaptive training method for a motor imagery brain-computer interface according to any one of claims 1 to 3, characterized in that the classifier classifies as follows: for a sample x to be classified, compute d_i = ||x^T G − C_i||_2, i = 1, …, k, select the smallest d_i, and assign x to the corresponding class.
CN201610107996.5A 2016-02-26 2016-02-26 The two stages adaptive training method of Mental imagery brain-computer interface Expired - Fee Related CN105677043B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610107996.5A CN105677043B (en) 2016-02-26 2016-02-26 The two stages adaptive training method of Mental imagery brain-computer interface

Publications (2)

Publication Number Publication Date
CN105677043A true CN105677043A (en) 2016-06-15
CN105677043B CN105677043B (en) 2018-12-25

Family

ID=56305206

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610107996.5A Expired - Fee Related CN105677043B (en) 2016-02-26 2016-02-26 The two stages adaptive training method of Mental imagery brain-computer interface

Country Status (1)

Country Link
CN (1) CN105677043B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106371590A (en) * 2016-08-29 2017-02-01 华南理工大学 High-performance motor imagery online brain-computer interface system based on OpenVIBE
CN110123313A (en) * 2019-04-17 2019-08-16 中国科学院深圳先进技术研究院 A kind of self-training brain machine interface system and related training method
CN113180695A (en) * 2021-04-20 2021-07-30 西安交通大学 Brain-computer interface signal classification method, system, device and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007138598A2 (en) * 2006-06-01 2007-12-06 Tylerton International Inc. Brain stimulation and rehabilitation
WO2008097200A1 (en) * 2007-02-09 2008-08-14 Agency For Science, Technology And Research A system and method for classifying brain signals in a bci system
CN102629156A (en) * 2012-03-06 2012-08-08 上海大学 Method for achieving motor imagery brain computer interface based on Matlab and digital signal processor (DSP)
CN103429145A (en) * 2010-03-31 2013-12-04 新加坡科技研究局 A method and system for motor rehabilitation
CN103488297A (en) * 2013-09-30 2014-01-01 华南理工大学 Online semi-supervising character input system and method based on brain-computer interface
CN104182042A (en) * 2014-08-14 2014-12-03 华中科技大学 BCI (brain-computer interface) method for multi-modal signals

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
佟晓丽: "基于想象左右手运动的在线自适应BCI系统研究", 《中国优秀硕士学位论文全文数据库 信息科技辑》 *
刘培奇等: "基于LDA主题模型的标签传递算法", 《计算机应用》 *
吴秀清: "基于QR分解和支持向量的伪逆LDA", 《聊城大学学报(自然科学版)》 *
张锦涛: "P300脑机接口的在线半监督学习算法与系统研究", 《中国优秀硕士学位论文全文数据库 医药卫生科技辑》 *
黄志华: "脑机接口的MapReduce计算模型", 《福州大学学报(自然科学版)》 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106371590A (en) * 2016-08-29 2017-02-01 华南理工大学 High-performance motor imagery online brain-computer interface system based on OpenVIBE
CN106371590B (en) * 2016-08-29 2019-06-18 华南理工大学 High-performance online brain-computer interface system for motor imagery based on OpenVIBE
CN110123313A (en) * 2019-04-17 2019-08-16 中国科学院深圳先进技术研究院 A kind of self-training brain machine interface system and related training method
CN110123313B (en) * 2019-04-17 2022-02-08 中国科学院深圳先进技术研究院 Self-training brain-computer interface system and related training method
CN113180695A (en) * 2021-04-20 2021-07-30 西安交通大学 Brain-computer interface signal classification method, system, device and storage medium
CN113180695B (en) * 2021-04-20 2024-04-05 西安交通大学 Brain-computer interface signal classification method, system, equipment and storage medium

Also Published As

Publication number Publication date
CN105677043B (en) 2018-12-25


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20181225

CF01 Termination of patent right due to non-payment of annual fee