CN112764544B - A method of combining an eye tracker and asynchronous motor imagery technology to realize a virtual mouse - Google Patents
A method of combining an eye tracker and asynchronous motor imagery technology to realize a virtual mouse
- Publication number
- CN112764544B (application CN202110115741.4A)
- Authority
- CN
- China
- Prior art keywords
- user
- coordinates
- eye
- mouse
- hand
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
Abstract
The present invention provides a method of combining an eye tracker and asynchronous motor imagery technology to realize a virtual mouse, comprising the following steps. Step S1, realizing the mouse-movement function from eye-movement signals: the eye tracker collects the user's visual trajectory in real time to obtain the coordinates of the user's fixation points; the fixation-point coordinates are processed, and the pointer position of the virtual mouse is associated with the user's visual trajectory, thereby realizing the movement function of the virtual mouse. Step S2, realizing the left- and right-button click functions through motor imagery: the user imagines the actions associated with the left and right buttons, and the click functions are realized by decoding the EEG signals of the motor area. By combining the eye tracker with asynchronous motor imagery for target selection, the virtual mouse lets the user remotely control a computer with only the eyes and brain, without taking any physical action.
Description
Technical Field
The invention relates to the technical fields of brain science and cognitive science, and in particular to a method of combining an eye tracker and asynchronous motor imagery technology to realize a virtual mouse.
Background Art
A brain-computer interface (BCI) is an interaction system built between humans and machines. BCI signal acquisition is generally divided into three types: non-invasive, semi-invasive, and invasive. Of these, the non-invasive type is likely to be the first applied in everyday life because it causes no harm to the human body. According to whether the EEG signal is elicited by external stimulation, BCI technology can be divided into evoked and spontaneous paradigms. Common evoked BCI technologies include the steady-state visual evoked potential (SSVEP) and the event-related potential (ERP). In SSVEP, when a subject gazes at a visual stimulus flickering black and white at a fixed frequency, harmonics of that frequency are evoked in the subject's EEG, from which the subject's intention can be identified. The ERP is a long-latency evoked potential that mainly reflects neurophysiological changes in the brain during cognition. By latency, the main components of the classical ERP are P1, N1, P2, N2, and P3, where P or N denotes a positive or negative deflection and 1, 2, or 3 denotes a peak appearing about 100 ms, 200 ms, or 300 ms after the visual stimulus. The most classic application is the P300 Speller proposed by Farwell and Donchin in 1988, whose stimulation paradigm consists mainly of randomly flashing target and non-target stimuli. About 300 ms after a low-probability target stimulus appears, a positive peak, the P300 signal, can be observed in the subject's EEG; by detecting this signal, the subject's intention can be identified. Evoked BCI technology is usually constrained by its stimulation paradigm; moreover, both SSVEP and ERP require prolonged flickering stimulation of the subject's eyes, which easily causes visual fatigue and does not match human operating habits. This patent therefore mainly adopts a spontaneous BCI technology: motor imagery (MI).
MI technology does not depend on external stimuli; it is an endogenous, spontaneous EEG signal. The subject only needs to imagine performing a specific motor task, without actually moving, for task-specific waveforms to be detectable over different brain regions, from which the subject's intention can be identified. When a subject imagines moving a limb, changes are induced in the sensorimotor rhythms (SMRs) of the sensorimotor cortex. The SMRs elicited by motor imagery of different body parts have distinct spatio-temporal distributions, so the corresponding EEG (electroencephalogram) patterns are highly distinguishable. Pattern-recognition methods can therefore decode the MI patterns in the EEG and translate them into control commands for external devices. However, MI is limited by the number of classes it can separate; at present the classes that can be distinguished well are the two hands, the two feet, and the tongue — that is, a single classification can output at most five categories, making it difficult to output enough commands for complex tasks directly. Evoked BCI technology achieves larger numbers of selectable targets by laying the tasks out on a display, whereas a spontaneous BCI is more flexible and better matches human operating habits. This prompted us to consider how to build a bridge between MI's limited output classes and an arbitrary task layout.
The eye tracker is an important instrument in basic psychological research. It is typically used to record the characteristics of a person's eye-movement trajectory while processing visual information and is widely used in studies of attention, visual perception, reading, and related fields. Many laptops now ship with eye trackers as a selling point; for example, the Alienware 17 R5 carries a Tobii eye tracker. However, because an eye tracker can only follow the gaze trajectory and offers no confirmation mechanism, its practical use in daily life is very limited; it is mostly used for games or professional psychological analysis.
Summary of the Invention
The present invention provides a method of combining an eye tracker and asynchronous motor imagery technology to realize a virtual mouse. Its purpose is to solve the technical problems identified in the background art: target selection with evoked BCIs is passive and easily causes visual fatigue in subjects, the practical uses of the eye tracker are limited, and eye tracking has not previously been combined with asynchronous motor imagery in a virtual mouse.
To achieve the above purpose, the present invention provides a method of combining an eye tracker and asynchronous motor imagery technology to realize a virtual mouse, comprising the following steps:
Step S1, realizing the mouse-movement function from eye-movement signals: collect the user's visual trajectory in real time with the eye tracker, obtain the coordinates of the user's fixation points, process those coordinates, and associate the pointer position of the virtual mouse with the user's visual trajectory, thereby realizing the movement function of the virtual mouse;
Step S2, realizing the left- and right-button click functions through motor imagery: the user imagines the actions associated with the left and right buttons, and the click functions are realized by decoding the EEG signals of the motor area.
Preferably, step S2 specifically includes the following steps:
Step S21, when the user wants to click the left mouse button, they imagine grasping or clicking with the left hand; when the user wants to click the right mouse button, they imagine grasping or clicking with the right hand;
Step S22, collecting EEG signals from the motor area of the user's brain in real time with an EEG acquisition device;
Step S23, decoding the user's EEG to distinguish left-hand from right-hand movement intention, thereby controlling the left- and right-button functions of the mouse.
Preferably, step S1 is specifically:
Step S11, obtaining the user's raw fixation-point coordinates: after calibrating the user's eye-movement information, obtain the raw fixation-point coordinates through the Python software development kit;
Step S12, acquiring the coordinates of the user's fixation points as the eyes move;
Step S13, removing blink and saccade coordinates from the fixation-point data collected by the eye tracker to obtain the sampling-point coordinates of the user's gaze target;
Step S14, writing the eye-movement trajectory coordinates, represented by the user's sampling points, into a buffer;
Step S15, associating the eye-movement trajectory coordinates in the buffer with the pointer position of the virtual mouse.
Preferably, step S13 is specifically:
Step 131, removing the coordinates of blinks, during which the eye tracker cannot capture the eyes: such samples are marked (-1, -1), so removing points with coordinates less than 0 removes the blink samples;
Step 132, removing the saccade coordinates captured by the eye tracker: from the eye tracker's sampling rate, the distance between two consecutive samples gives the eye-movement speed; using a velocity-threshold identification algorithm, samples whose speed exceeds a given threshold are marked as saccade samples and removed, retaining the sampling-point coordinates of the user's gaze target.
Preferably, in step S23, after feature extraction is performed on the collected EEG signals, a classifier divides the EEG into left-button click signals and right-button click signals, realizing the selection function of the virtual mouse.
Preferably, step S23 specifically includes the following steps:
Step S231, acquiring an EEG segment of at least 2000 ms;
Step S232, in sliding-window fashion, continuously acquiring EEG data in slides of 100 ms or 200 ms and feeding the windowed data to the classifier;
Step S233, EEG decoding: the intercepted EEG signal is processed with a BCI algorithm; after feature extraction by the feature-extraction algorithm, sample scores are obtained by stepwise discriminant analysis. The classifiers are organized into three pairs — left-hand MI vs. right-hand MI, left-hand MI vs. idle, and right-hand MI vs. idle — and the three classifiers vote to give a final result of left hand, right hand, or idle. If the result is the left-hand state, proceed to steps S234 and S235 in turn; if idle, proceed to step S236; if the right-hand state, then when the virtual right-button function is a non-menu command, proceed to step S235 for de-jitter processing and execute the right-hand state instruction, and when the virtual right-button function is a menu command, proceed to steps S234 and S235 in turn, so as to execute the menu command at the target pointed to by the virtual mouse;
Step S234, reading the fixation-point buffer coordinates for the window judged to be the left-hand or right-hand state;
Step S235, applying a software de-jitter method to re-judge the classification associated with the fixation-point buffer coordinates: when a segment of EEG data is judged non-idle, it is compared with the result of the following sliding window; if both give the same left-hand or right-hand state, that state is confirmed and step S237 follows; otherwise it is treated as an accidental trigger, re-judged as idle, and step S236 follows;
Step S236, once the idle state is determined, the user is considered to have no operation to perform or not to have found a target option; the virtual mouse does nothing, thereby realizing asynchronous operation;
Step S237, the fixation-point buffer coordinates are displayed in the result-feedback area at the corresponding position on the display, and the user confirms whether the selection is the desired target; if the buffer coordinates fall in an invalid area of the display, the classifier is considered to have misjudged the user's intention, the result is re-judged as idle, and step S236 follows.
Preferably, the feature-extraction algorithms in step S233 include the empirical mode decomposition algorithm and the common spatial pattern algorithm.
Preferably, in step S233, when the virtual right-button function is a non-menu command, the right-hand state instruction is specifically an instruction to delete the user's most recent selection.
Preferably, in step S235, if two consecutive sliding windows both classify as the left-hand state, the 12 fixation-point coordinates recorded between the two classifications are averaged to obtain the user's intended fixation point.
Preferably, step S23 further includes the following step:
Step S238, pausing for 500 ms without processing data before entering the next cycle of EEG processing.
The present invention achieves the following beneficial effects. Combining the eye tracker with asynchronous motor imagery to realize target selection with a virtual mouse lets the user remotely control a computer with only the eyes and brain, without taking any physical action. With the present invention, all functions of a mouse can be realized without any movement by the user. Combined with the computer's built-in soft keyboard, users can remotely operate every function of the computer directly with their eyes and brain, including web search, writing documents, and playing games. Assistive devices for patients with movement disorders are thus no longer limited to spelling a fixed set of characters, which is of great practical significance.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of a preferred embodiment of the method of combining an eye tracker and asynchronous motor imagery technology to realize a virtual mouse according to the present invention;
Fig. 2 is a schematic diagram of the option background interface of the display in a preferred embodiment of the method;
Fig. 3 is a schematic diagram of the pixel coordinates of the valid option regions of the display in a preferred embodiment of the method.
Detailed Description of the Embodiments
To make the technical problems to be solved, the technical solutions, and the advantages of the present invention clearer, a detailed description is given below with reference to the accompanying drawings and specific embodiments.
Aiming at the existing problems, the present invention provides a method of combining an eye tracker and asynchronous motor imagery technology to realize a virtual mouse. In an embodiment of the invention, the eye tracker collects the user's visual trajectory in real time, and Python (a computer programming language) is used to associate the mouse pointer position with the user's visual trajectory, realizing the movement and selection of the virtual mouse. The two motor-imagery tasks, left hand and right hand, serve as the left and right mouse buttons respectively.
Eye movement and motor imagery are two different modalities that realize different functions. The eye-movement signal mainly realizes the movement function of the mouse: wherever the eyes look, the pointer moves. Motor imagery mainly realizes the left- and right-button functions.
For example, to open a file:
Method 1: Step 1, the user stares at the file, and the eye tracker aligns the mouse pointer with it; Step 2, the user imagines a left-hand movement twice in succession; Step 3, EEG decoding registers two left clicks — a double click — and the file opens.
Method 2: Step 1 is the same as above; Step 2, the user imagines a right-hand movement, EEG decoding recognizes the intention, and a right click on the file opens the context menu; Step 3, the user moves their gaze to the "Open" item, and the eye tracker aligns the pointer with it; Step 4, the user imagines a left-hand movement, selecting the "Open" function.
To realize the slider function commonly used when browsing web pages: Step 1, the user aligns their gaze with the slider; Step 2, the user imagines a left-hand movement, equivalent to holding the left button down on the slider; Step 3, moving the eyes up and down or left and right drags the slider, realizing the slider function. A minimal dispatch of these interactions is sketched below.
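To make the flows above concrete, here is a minimal sketch in Python, assuming the pymouse module the embodiment already uses; `dispatch`, its `mi_state` labels, and `gaze_xy` are hypothetical names standing in for the classifier output and gaze buffer described below.

```python
from pymouse import PyMouse

mouse = PyMouse()

def dispatch(mi_state, gaze_xy):
    """Map one decoded MI state plus the current fixation point to a mouse action."""
    x, y = int(gaze_xy[0]), int(gaze_xy[1])
    if mi_state == "left":       # imagined left-hand movement -> left click
        mouse.click(x, y, 1)
    elif mi_state == "right":    # imagined right-hand movement -> right click
        mouse.click(x, y, 2)
    # "idle" -> no action; doing nothing is what makes the interface asynchronous
```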
Fig. 1 is a schematic diagram of a preferred embodiment of the method of combining an eye tracker and asynchronous motor imagery technology to realize a virtual mouse according to the present invention. Fig. 2 is a schematic diagram of the option background interface of the display; Fig. 3 is a schematic diagram of the pixel coordinates of the valid option regions of the display.
The method includes the following steps:
Step S1, realizing the mouse-movement function from eye-movement signals: collect the user's visual trajectory in real time with the eye tracker, obtain the coordinates of the user's fixation points, process those coordinates, and associate the pointer position of the virtual mouse with the user's visual trajectory, thereby realizing the movement function of the virtual mouse;
Step S2, realizing the left- and right-button click functions through motor imagery: the user imagines the actions associated with the left and right buttons, and the click functions are realized by decoding the EEG signals of the motor area.
In step S1, a Tobii Pro eye tracker (a commercial brand) with a sampling rate of 60 Hz is used. After the user's eye-movement information has been calibrated, the user's raw fixation-point coordinates can be obtained through the Python SDK (Software Development Kit). Because the selection function requires the subject to fixate on the target, a velocity-threshold identification algorithm marks sampling points whose eye-movement speed exceeds a given threshold as saccade samples and removes them, keeping only the samples in which the user is fixating on the target. A gaze-acquisition sketch follows.
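A minimal acquisition sketch, assuming the Tobii Pro Python SDK (`tobii_research`); the subscription constant and the gaze-data field name follow that SDK's documented interface but should be verified against the installed version.

```python
import tobii_research as tr

raw_samples = []

def on_gaze(gaze_data):
    # Gaze point in normalized display coordinates [0, 1]; the SDK reports NaN
    # when the eye is lost, which this embodiment re-marks as (-1, -1).
    x, y = gaze_data["left_gaze_point_on_display_area"]
    raw_samples.append((x, y) if x == x and y == y else (-1.0, -1.0))

tracker = tr.find_all_eyetrackers()[0]  # first connected eye tracker
tracker.subscribe_to(tr.EYETRACKER_GAZE_DATA, on_gaze, as_dictionary=True)
```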
In this embodiment, the raw eye-movement coordinates collected by the eye tracker include blink, saccade, and fixation coordinates. During a blink the eye tracker cannot capture the eyes and marks the coordinates as (-1, -1), so removing points with coordinates less than 0 removes the blink samples. Saccades have no practical meaning in this embodiment. The eye tracker samples at a fixed 60 Hz, about one point every 17 ms, so the distance between two consecutive samples represents the eye-movement speed. Computing each sample's Euclidean distance from the previous sample and removing samples above a given threshold eliminates the samples taken during saccades. In this way the embodiment keeps only the user's fixation points, which are then synchronized with the mouse position through Python's pymouse (the Python mouse module). This filtering step can be sketched as follows.
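A sketch of the filtering under the stated parameters (60 Hz sampling, blinks marked (-1, -1)); the distance threshold is an illustrative assumption, since the patent does not give its value.

```python
import math

SACCADE_DIST_PX = 40  # assumed threshold; at ~17 ms per sample, inter-sample
                      # distance is a direct proxy for eye-movement speed

def fixation_points(samples):
    """Drop blink markers and saccade samples, keeping only fixation points."""
    kept, prev = [], None
    for x, y in samples:
        if x < 0 or y < 0:            # blink: the tracker reported (-1, -1)
            prev = None
            continue
        if prev is not None and math.hypot(x - prev[0], y - prev[1]) > SACCADE_DIST_PX:
            prev = (x, y)             # saccade sample: discard but keep tracking
            continue
        kept.append((x, y))           # fixation sample: retain
        prev = (x, y)
    return kept
```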
Step S1 is specifically:
Step S11, obtaining the user's raw fixation-point coordinates: after calibrating the user's eye-movement information, obtain the raw fixation-point coordinates through the Python software development kit;
Step S12, acquiring the coordinates of the user's fixation points as the eyes move;
Step S13, removing blink and saccade coordinates from the fixation-point data collected by the eye tracker to obtain the sampling-point coordinates of the user's gaze target; specifically:
Step 131, removing the coordinates of blinks, during which the eye tracker cannot capture the eyes: such samples are marked (-1, -1), so removing points with coordinates less than 0 removes the blink samples;
Step 132, removing the saccade coordinates captured by the eye tracker: from the eye tracker's sampling rate, the distance between two consecutive samples gives the eye-movement speed; using a velocity-threshold identification algorithm, samples whose speed exceeds a given threshold are marked as saccade samples and removed, retaining the sampling-point coordinates of the user's gaze target;
Step S14, writing the eye-movement trajectory coordinates, represented by the user's sampling points, into a buffer;
Step S15, associating the eye-movement trajectory coordinates in the buffer with the pointer position of the virtual mouse, as in the sketch following these steps.
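A sketch of steps S14 and S15, again assuming pymouse; the 12-sample buffer length matches the averaging described later (about 200 ms of fixations at 60 Hz).

```python
from collections import deque
from pymouse import PyMouse

mouse = PyMouse()
gaze_buffer = deque(maxlen=12)   # ~200 ms of fixation samples at 60 Hz

def on_fixation(x, y):
    """Steps S14/S15: buffer each retained fixation point and move the pointer."""
    gaze_buffer.append((x, y))
    mouse.move(int(x), int(y))   # the virtual pointer follows the gaze
```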
Step S2 specifically includes the following steps:
Step S21, when the user wants to click the left mouse button, they imagine grasping or clicking with the left hand; when the user wants to click the right mouse button, they imagine grasping or clicking with the right hand;
Step S22, collecting EEG signals from the motor area of the user's brain in real time with an EEG acquisition device;
Step S23, decoding the user's EEG to distinguish left-hand from right-hand movement intention, thereby controlling the left- and right-button functions of the mouse.
In step S23, after feature extraction is performed on the collected EEG signals, a classifier divides the EEG into left-button click signals and right-button click signals, realizing the selection function of the virtual mouse.
On the EEG side, realizing the asynchronous function means that although the task is left/right-hand MI classification, the idle state must also be recognized — every classification is a three-class task. Generally, to guarantee a reasonable classification accuracy, each MI classification requires an EEG segment of at least 2000 ms. At the same time, to keep the system as close to real time as possible, the EEG is processed with a sliding window: keeping the total data length at 2000 ms, the window slides backward by 100 ms at a time, and the windowed data is fed to the classifier. The slide length mainly depends on the processing speed of the algorithm. To improve reliability and reduce misjudgments, the software de-jitter (debouncing) method commonly used for keyboards on microcontrollers is adopted: when a segment of EEG data is judged non-idle, that state is not confirmed immediately; instead it is compared with the result of the following sliding window, and only if the state repeats is it confirmed — otherwise it is treated as an accidental trigger and re-judged as idle. A sketch of this loop follows.
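A sketch of the sliding-window loop with the de-jitter step, assuming a `read_eeg` callable returning the most recent 2000 ms of motor-area EEG and a `classify` callable returning 'left', 'right', or 'idle' — both hypothetical stand-ins for the acquisition and decoding chain detailed below.

```python
import time

WINDOW_S, SLIDE_S = 2.0, 0.1   # 2000 ms analysis window, 100 ms slide

def decoded_states(read_eeg, classify):
    """Yield debounced MI states: a non-idle result must repeat on the next window."""
    pending = None
    while True:
        state = classify(read_eeg(WINDOW_S))
        if state != "idle" and state == pending:
            yield state        # the same non-idle state twice in a row: confirmed
            pending = None
        else:                  # first occurrence or a mismatch: hold as tentative
            pending = state if state != "idle" else None
        time.sleep(SLIDE_S)
```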
Step S23 specifically includes the following steps:
Step S231, acquiring an EEG segment of at least 2000 ms;
Step S232, in sliding-window fashion, continuously acquiring EEG data in slides of 100 ms or 200 ms and feeding the windowed data to the classifier;
Step S233, EEG decoding: the intercepted EEG signal is processed with a BCI (Brain-Computer Interface) algorithm. After feature extraction (the feature-extraction algorithms include the empirical mode decomposition (EMD) algorithm and the common spatial pattern (CSP) algorithm), sample scores are obtained by stepwise linear discriminant analysis (SWLDA). The classifiers are organized into three pairs — left-hand MI vs. right-hand MI, left-hand MI vs. idle, and right-hand MI vs. idle — and the three classifiers vote to give a final result of left hand, right hand, or idle (a sketch of the vote follows this step list). If the result is the left-hand state, proceed to steps S234 and S235 in turn; if idle, proceed to step S236; if the right-hand state, then when the virtual right-button function is a non-menu command, proceed to step S235 for de-jitter processing and execute the right-hand state instruction, and when the virtual right-button function is a menu command, proceed to steps S234 and S235 in turn, so as to execute the menu command at the target pointed to by the virtual mouse.
When the virtual right-button function is a non-menu command, the right-hand state instruction is specifically an instruction to delete the user's most recent selection.
Step S234, reading the fixation-point buffer coordinates for the window judged to be the left-hand or right-hand state;
Step S235, applying the software de-jitter method to re-judge the classification associated with the fixation-point buffer coordinates: when a segment of EEG data is judged non-idle, it is compared with the result of the following sliding window; if both give the same left-hand or right-hand state, that state is confirmed and step S237 follows; otherwise it is treated as an accidental trigger, re-judged as idle, and step S236 follows.
If two consecutive sliding windows both classify as the left-hand state, the 12 fixation-point coordinates recorded between the two classifications are averaged to obtain the user's intended fixation point.
Step S236, once the idle state is determined, the user is considered to have no operation to perform or not to have found a target option; the virtual mouse does nothing, thereby realizing asynchronous operation.
Step S237, the fixation-point buffer coordinates are displayed in the result-feedback area at the corresponding position on the display, and the user confirms whether the selection is the desired target; if the buffer coordinates fall in an invalid area of the display, the classifier is considered to have misjudged the user's intention, the result is re-judged as idle, and step S236 follows.
Step S238, pausing for 500 ms without processing data before entering the next cycle of EEG processing.
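A sketch of the three-classifier vote in step S233; `features` stands in for the EMD/CSP feature extraction, and the three classifier callables for the trained SWLDA models — all hypothetical, since the patent gives no implementations.

```python
from collections import Counter

def vote(window, features, clf_lr, clf_li, clf_ri):
    """Step S233 voting among left-vs-right, left-vs-idle, right-vs-idle classifiers.
    Each callable returns 'left', 'right', or 'idle'; the majority label wins."""
    f = features(window)
    votes = Counter((clf_lr(f), clf_li(f), clf_ri(f)))
    label, count = votes.most_common(1)[0]
    return label if count >= 2 else "idle"   # a 1-1-1 split is safest read as idle
```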
The user presses a function key of the virtual mouse through asynchronous MI; the current fixation-point buffer coordinates are recorded and compared with the target regions, and the target whose region contains the fixation coordinates is the one the user selected. MI, which by itself can realize only two functions, can thus be extended to any number of functions with eye movement as the intermediary; and eye tracking, which by itself has no confirmation mechanism, gains practical usefulness.
In this embodiment, the computer detects the user's EEG with an EEG acquisition device and intercepts a 2000 ms window of the signal every 100 ms, then processes the intercepted signal with the BCI algorithm. After feature extraction by the empirical mode decomposition and common spatial pattern algorithms, sample scores are obtained by stepwise discriminant analysis. The classifiers are organized as a left-hand vs. right-hand MI classifier, a left-hand MI vs. idle classifier, and a right-hand MI vs. idle classifier, and the three classifiers vote to give a final result of left hand, right hand, or idle. If the result is idle, the subject is considered to have no operation to perform or not to have found a target option, and the computer does nothing, realizing asynchronous operation. When the result is the left-hand state, the fixation-point coordinates are read from the eye-movement buffer and compared with the data obtained after the window slides 100 ms; if both classifications are left-hand, the 12 fixation-point coordinates from the two classification periods are averaged to obtain the user's intended fixation point. The intended fixation point is then compared with the target option regions of Fig. 3: if it falls within a target region, that option is taken as the user's intended choice and the result is shown in the feedback area to help the user confirm it; if it falls in the invalid area, the classifier is considered to have misjudged the user's intention and the result is re-judged as idle. A selection sketch follows.
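A sketch of that selection step: the buffered fixation points are averaged and hit-tested against the option regions of Fig. 3. The region bounds here are illustrative placeholders, not the patent's actual pixel coordinates.

```python
# Hypothetical option regions as (x_min, y_min, x_max, y_max) pixel boxes.
TARGETS = {"option_1": (100, 100, 300, 200),
           "option_2": (400, 100, 600, 200)}

def select_target(gaze_buffer):
    """Average the buffered fixation points and hit-test the option regions."""
    xs, ys = zip(*gaze_buffer)
    gx, gy = sum(xs) / len(xs), sum(ys) / len(ys)   # intended fixation point
    for name, (x0, y0, x1, y1) in TARGETS.items():
        if x0 <= gx <= x1 and y0 <= gy <= y1:
            return name    # shown in the feedback area for user confirmation
    return None            # invalid area: re-judge the result as idle
```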
In this embodiment the right mouse button has little use, so the case where the MI classification result is the right hand is repurposed to delete the user's most recent selection. Note that de-jitter processing is needed every time the classification result is non-idle, to guard against accidental triggers by the user and misjudgments by the classifier.
Through step S235, the pymouse package can keep the left or right mouse button pressed whenever a recognition result is non-idle, until the next recognition completes, as sketched below.
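A sketch of this press-and-hold behavior, once more assuming pymouse; `states` is the debounced state stream from the earlier sliding-window sketch and `gaze` a hypothetical callable returning the current pointer position.

```python
from pymouse import PyMouse

mouse = PyMouse()

def hold_while_nonidle(states, gaze):
    """Hold a button down while consecutive decodes stay non-idle (drag/scroll)."""
    held = None
    for state in states:
        x, y = gaze()
        if held is None and state in ("left", "right"):
            held = 1 if state == "left" else 2
            mouse.press(x, y, held)     # button down at the current gaze point
        elif held is not None and state == "idle":
            mouse.release(x, y, held)   # decoding ended: button up
            held = None
        elif held is not None:
            mouse.move(x, y)            # dragging: the pointer follows the eyes
```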
When MI is continuously decoded as the left hand, combining it with eye movement allows the page to slide left and right or up and down — the scroll-wheel function of the mouse. Thus the combination of eye movement and asynchronous motor imagery can realize all the functions of a mouse, overcoming the limitation that assistive devices for patients with movement disorders could previously only spell a fixed set of characters. Combined with the computer's built-in soft keyboard, or with the P300 Speller enabled, patients with movement disorders can use every function of the computer, which is of great practical significance.
When the user presses a function key of the virtual mouse through asynchronous MI, the coordinates of the current fixation point are recorded and compared with the target regions; the target whose region contains the fixation coordinates is the one the user selected. MI, which by itself can realize only two functions, can thus be extended to any number of functions with eye movement as the intermediary; and eye tracking, which by itself has no confirmation mechanism, gains practical usefulness.
Each time a task is successfully selected or a deletion is successfully triggered, 500 ms of data is left unprocessed — partly to give the user time to find the next target, and partly to prevent the user from being too slow to react, which would cause repeated deletions or repeated selections of the same target.
The present invention achieves the following beneficial effects. Combining the eye tracker with asynchronous motor imagery to realize target selection with a virtual mouse lets the user remotely control a computer with only the eyes and brain, without taking any physical action. With the present invention, all functions of a mouse can be realized without any movement by the user. Combined with the computer's built-in soft keyboard, users can remotely operate every function of the computer directly with their eyes and brain, including web search, writing documents, and playing games. Assistive devices for patients with movement disorders are thus no longer limited to spelling a fixed set of characters, which is of great practical significance.
The above are preferred embodiments of the present invention. It should be pointed out that those of ordinary skill in the art may make several improvements and refinements without departing from the principles of the present invention, and such improvements and refinements shall also fall within the protection scope of the present invention.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110115741.4A CN112764544B (en) | 2021-01-28 | 2021-01-28 | A method of combining an eye tracker and asynchronous motor imagery technology to realize a virtual mouse |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110115741.4A CN112764544B (en) | 2021-01-28 | 2021-01-28 | A method of combining an eye tracker and asynchronous motor imagery technology to realize a virtual mouse |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112764544A CN112764544A (en) | 2021-05-07 |
CN112764544B true CN112764544B (en) | 2022-04-22 |
Family
ID=75706303
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110115741.4A Active CN112764544B (en) | 2021-01-28 | 2021-01-28 | A method of combining eye tracker and asynchronous motor imaging technology to realize virtual mouse |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112764544B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115993886A (en) * | 2021-10-20 | 2023-04-21 | 北京七鑫易维信息技术有限公司 | Control method, device, equipment and storage medium for virtual image |
CN115509355A (en) * | 2022-09-23 | 2022-12-23 | 中国矿业大学 | MI-BCI interaction control system and method under integrated vision |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111158471A (en) * | 2019-12-18 | 2020-05-15 | 浙江大学 | A human-computer interaction method based on eye movement and brain-computer interface technology |
CN111580674A (en) * | 2020-05-20 | 2020-08-25 | 北京师范大学珠海分校 | A method for realizing eye-controlled mouse and method for realizing keyboard input by identifying eye-movement trajectory |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9075453B2 (en) * | 2011-12-29 | 2015-07-07 | Khalifa University of Science, Technology & Research (KUSTAR) | Human eye controlled computer mouse interface |
JP7532249B2 (en) * | 2017-08-23 | 2024-08-13 | ニューラブル インコーポレイテッド | Brain-computer interface with high-speed eye tracking |
KR20200098524A (en) * | 2017-11-13 | 2020-08-20 | 뉴레이블 인크. | Brain-computer interface with adaptation for high speed, accuracy and intuitive user interaction |
- 2021-01-28: CN application CN202110115741.4A filed, granted as patent CN112764544B (active)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111158471A (en) * | 2019-12-18 | 2020-05-15 | 浙江大学 | A human-computer interaction method based on eye movement and brain-computer interface technology |
CN111580674A (en) * | 2020-05-20 | 2020-08-25 | 北京师范大学珠海分校 | A method for realizing eye-controlled mouse and method for realizing keyboard input by identifying eye-movement trajectory |
Also Published As
Publication number | Publication date |
---|---|
CN112764544A (en) | 2021-05-07 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||