CN106371565A - Control method based on eye movement and device applied by control method - Google Patents
- Publication number
- CN106371565A CN106371565A CN201510568143.7A CN201510568143A CN106371565A CN 106371565 A CN106371565 A CN 106371565A CN 201510568143 A CN201510568143 A CN 201510568143A CN 106371565 A CN106371565 A CN 106371565A
- Authority
- CN
- China
- Prior art keywords
- eye
- view pattern
- positions
- line
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Landscapes
- Eye Examination Apparatus (AREA)
- User Interface Of Digital Computer (AREA)
- Position Input By Displaying (AREA)
Abstract
The present invention provides a control method based on eye movements and a device to which it is applied. The control method is suitable for a device and includes the following steps. An image sequence is acquired through an imaging unit. The image sequence is analyzed to obtain eye image information of the user's eye region in each image of the sequence. Based on the eye image information, the movement trajectory of the user's line of sight while gazing at a view pattern is detected. Whether the line-of-sight trajectory conforms to a preset trajectory relative to the view pattern is determined to generate a judgment result, and a corresponding operation is performed on the device according to that result. The invention can use the line-of-sight trajectory as a signal trigger, and can be widely applied in fields such as security systems and eye-controlled computers.
Description
Technical Field
The present invention relates to a control method and a device to which it is applied, and in particular to a control method based on eye movements and a device to which it is applied.
Background
In terms of current technology, eye-tracking techniques can be divided mainly into two types: invasive and non-invasive. Invasive eye-tracking techniques typically place a search coil in the eye or use an electrooculogram to sense where the eye is looking. Non-invasive eye-tracking techniques are mainly vision-based and can be implemented in several ways, such as free-head or head-mounted configurations.
With the development of technology, eye tracking has been widely applied in fields such as neuroscience and computer science, and is commonly found in security systems (for example, eye-movement locks) and eye-controlled computers. Eye-tracking technology can track the movement of the eyeball to obtain the point at which the gaze lands, and can accordingly control a security system or computer device to realize eye-control functions, or trigger control of such devices by gazing.
Summary of the Invention
The present invention provides a control method based on eye movements and a device to which it is applied, which allow a user to gaze at and move the gaze point over a preset view pattern; by detecting the movement trajectory of the user's line of sight, the method determines whether the device should perform a corresponding operation. In this way, the device can be operated through the user's line-of-sight trajectory.
The present invention proposes a control method based on eye movements. The control method includes the following steps. An image sequence is acquired through an imaging unit. The image sequence is analyzed to obtain eye image information of the user's eye region in each image of the sequence. Based on the eye image information, the line-of-sight trajectory of the user gazing at a view pattern is detected. Whether the line-of-sight trajectory conforms to a preset trajectory relative to the view pattern is determined to generate a judgment result. Then, according to the judgment result, a corresponding operation is performed on a device.
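The claimed steps can be sketched as a minimal pipeline. All function names, the per-image data format, and the exact-match rule below are illustrative assumptions for exposition; the patent does not disclose source code.

```python
# Hypothetical sketch of the claimed control flow; the gaze-detection and
# matching logic is stubbed out with simple placeholders.

def detect_gaze_track(eye_info_per_image):
    """Placeholder for the steps that analyze each image's eye information
    and map it to a gaze target on the view pattern."""
    return [info["gaze_target"] for info in eye_info_per_image]

def track_matches(track, preset_track):
    """Placeholder judgment: the visited targets must match the preset order."""
    return track == preset_track

def control_by_eye_movement(eye_info_per_image, preset_track):
    track = detect_gaze_track(eye_info_per_image)   # detect line-of-sight trajectory
    if track_matches(track, preset_track):          # compare with preset trajectory
        return "perform operation"                  # e.g. unlock the device
    return "no operation"

frames = [{"gaze_target": "A"}, {"gaze_target": "B"}, {"gaze_target": "C"}]
print(control_by_eye_movement(frames, ["A", "B", "C"]))  # perform operation
```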
In an embodiment of the present invention, the view pattern includes a plurality of identification patterns, and the preset trajectory is a geometric connection formed by sequentially linking at least one of the identification patterns.
In an embodiment of the present invention, the step of detecting the line-of-sight trajectory of the user gazing at the view pattern based on the eye image information includes: analyzing the eye image information to determine whether the user gazes at a preset position in the view pattern, or gazes at the preset position for longer than a preset time; and, when the user gazes at the preset position in the view pattern or gazes at it for longer than the preset time, detecting the line-of-sight trajectory of the user gazing at the view pattern.
In an embodiment of the present invention, the step of detecting the line-of-sight trajectory of the user gazing at the view pattern based on the eye image information includes: detecting an eye target within the eye region based on the eye image information; performing an eye-tracking action on the eye target to sequentially obtain a plurality of first positions of the eye target in the image sequence; and mapping the first positions to a plurality of second positions according to the coordinate system corresponding to the view pattern, so as to obtain the line-of-sight trajectory of the user gazing at the view pattern.
In an embodiment of the present invention, the eye target includes a pupil and a first glint (reflective point), and the step of performing the eye-tracking action on the eye target to sequentially obtain the first positions corresponding to the image sequence includes: locating the pupil and the first glint in the image sequence to perform the eye-tracking action, and sequentially obtaining the first positions of the eye target corresponding to the image sequence. Furthermore, the step of mapping the first positions to the second positions according to the coordinate system corresponding to the view pattern includes: calculating a distance parameter from the pupil and the first glint; calculating an angle parameter from the vector of the first glint relative to the pupil; analyzing the distribution relationship of the distance parameter and the angle parameter to convert the first positions into the second positions in the coordinate system corresponding to the view pattern; and taking the second positions as the line-of-sight trajectory of the user gazing at the view pattern.
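The single-glint features described above can be illustrated as follows. This is a minimal sketch under the assumption that pupil and glint centers are given as 2-D image coordinates; the mapping of these features into view-pattern coordinates is not reproduced here.

```python
import math

# Hypothetical sketch: a distance parameter between the pupil center and the
# first glint, and an angle parameter from the glint's vector relative to the
# pupil, as recited in the embodiment above.

def gaze_features(pupil, glint):
    """Return (distance, angle_degrees) of the glint relative to the pupil."""
    dx, dy = glint[0] - pupil[0], glint[1] - pupil[1]
    distance = math.hypot(dx, dy)             # distance parameter
    angle = math.degrees(math.atan2(dy, dx))  # angle parameter
    return distance, angle

d, a = gaze_features((100.0, 80.0), (103.0, 84.0))
print(round(d, 1), round(a, 2))  # 5.0 53.13
```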
In an embodiment of the present invention, the eye target further includes a second glint, and the step of mapping the first positions to the second positions according to the coordinate system corresponding to the view pattern includes: calculating an area parameter from the pupil, the first glint, and the second glint; calculating an angle parameter from at least one of a first line connecting the first glint and the pupil and a second line connecting the second glint and the pupil; analyzing the distribution relationship of the area parameter and the angle parameter to convert the first positions into the second positions in the coordinate system corresponding to the view pattern; and taking the second positions as the line-of-sight trajectory of the user gazing at the view pattern.
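The two-glint variant can be sketched similarly. The use of the triangle area for the area parameter is an assumed formulation consistent with the three points named above, not the patent's exact computation.

```python
import math

# Hypothetical sketch: an area parameter from the triangle formed by the
# pupil and the two glints, and an angle parameter from one pupil-glint line.

def area_parameter(pupil, g1, g2):
    """Area of the triangle (pupil, glint1, glint2) via the shoelace formula."""
    return abs((g1[0] - pupil[0]) * (g2[1] - pupil[1])
               - (g2[0] - pupil[0]) * (g1[1] - pupil[1])) / 2.0

def angle_parameter(pupil, glint):
    """Angle (degrees) of the line connecting the pupil and one glint."""
    return math.degrees(math.atan2(glint[1] - pupil[1], glint[0] - pupil[0]))

print(area_parameter((0, 0), (4, 0), (0, 3)))     # 6.0
print(round(angle_parameter((0, 0), (0, 3)), 1))  # 90.0
```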
In an embodiment of the present invention, the step of performing the eye-tracking action on the eye target to sequentially obtain the first positions corresponding to the image sequence further includes: after obtaining one of the first positions, adjusting the eye region to detect the eye target again, and performing the eye-tracking action accordingly.
In an embodiment of the present invention, the step of performing the corresponding operation on the device according to the judgment result includes: when the line-of-sight trajectory conforms to the preset trajectory, the device releases the locked state of a lock; and when the line-of-sight trajectory does not conform to the preset trajectory, the device maintains the locked state of the lock.
The present invention further proposes a device controlled based on eye movements, which includes an imaging unit, a storage unit, and a processing unit. The imaging unit acquires an image sequence. The storage unit records a plurality of program modules and further includes a database. The processing unit is coupled to the imaging unit and the storage unit to access and execute the modules recorded in the storage unit. The program modules include an image analysis module, a line-of-sight detection module, a judgment module, and a control module. The image analysis module analyzes the image sequence to obtain eye image information of the user's eye region in each image of the sequence. The line-of-sight detection module detects, based on the eye image information, the line-of-sight trajectory of the user gazing at a view pattern. The judgment module determines whether the line-of-sight trajectory conforms to a preset trajectory relative to the view pattern to generate a judgment result. The control module causes the device to perform a corresponding operation according to the judgment result.
In an embodiment of the present invention, the view pattern includes a plurality of identification patterns, and the preset trajectory is a geometric connection formed by sequentially linking at least one of the identification patterns.
In an embodiment of the present invention, the line-of-sight detection module analyzes the eye image information to determine whether the user gazes at a preset position in the view pattern, or gazes at the preset position for longer than a preset time. When the user gazes at the preset position in the view pattern, or gazes at it for longer than the preset time, the line-of-sight detection module detects the line-of-sight trajectory of the user gazing at the view pattern.
In an embodiment of the present invention, the line-of-sight detection module includes an eye detection module, an eye-tracking module, and a mapping conversion module. The eye detection module detects an eye target within the eye region based on the eye image information. The eye-tracking module performs an eye-tracking action on the eye target to sequentially obtain a plurality of first positions of the eye target in the image sequence. The mapping conversion module maps the first positions to a plurality of second positions according to the coordinate system corresponding to the view pattern, so as to obtain the line-of-sight trajectory of the user gazing at the view pattern.
In an embodiment of the present invention, the eye target includes a pupil and a first glint, and the eye-tracking module locates the pupil and the first glint in the image sequence to perform the eye-tracking action and sequentially obtain the first positions of the eye target corresponding to the image sequence. The mapping conversion module calculates a distance parameter from the pupil and the first glint, calculates an angle parameter from the vector of the first glint relative to the pupil, and analyzes the distribution relationship of the distance parameter and the angle parameter to convert the first positions into the second positions in the coordinate system corresponding to the view pattern, taking the second positions as the line-of-sight trajectory of the user gazing at the view pattern.
In an embodiment of the present invention, the eye target further includes a second glint, and the mapping conversion module calculates an area parameter from the pupil, the first glint, and the second glint, calculates an angle parameter from at least one of a first line connecting the first glint and the pupil and a second line connecting the second glint and the pupil, and analyzes the distribution relationship of the area parameter and the angle parameter to convert the first positions into the second positions in the coordinate system corresponding to the view pattern, taking the second positions as the line-of-sight trajectory of the user gazing at the view pattern.
In an embodiment of the present invention, after obtaining one of the first positions, the eye-tracking module further adjusts the eye region so that the eye detection module detects the eye target again, and the eye-tracking module performs the eye-tracking action accordingly.
In an embodiment of the present invention, when the line-of-sight trajectory conforms to the preset trajectory, the control module controls the device to release the locked state of a lock; and when the line-of-sight trajectory does not conform to the preset trajectory, the control module controls the device to maintain the locked state of the lock.
Based on the above, the control method based on eye movements and the device to which it is applied proposed in the embodiments of the present invention can detect the line-of-sight trajectory of a user gazing at a view pattern and, upon determining that the trajectory conforms to a preset trajectory relative to the view pattern, cause the device to perform a corresponding operation. In this way, the embodiments of the present invention can use the line-of-sight trajectory as a signal trigger, and can be widely applied in fields such as security systems and eye-controlled computers.
To make the above features and advantages of the present application more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.
Brief Description of the Drawings
FIG. 1 is a functional block diagram of a control device based on eye movements according to an embodiment of the present invention;
FIG. 2A and FIG. 2B are schematic diagrams of a control method based on eye movements according to an embodiment of the present invention;
FIG. 3 is a flowchart of the steps of a control method based on eye movements according to an embodiment of the present invention;
FIG. 4A is a flowchart of the steps of a control method based on eye movements according to another embodiment of the present invention;
FIG. 4B is a functional block diagram of a line-of-sight detection module according to an embodiment of the present invention;
FIG. 4C is a schematic diagram of an eye region according to an embodiment of the present invention;
FIG. 5A is a flowchart of the steps of eye tracking according to an embodiment of the present invention;
FIG. 5B is a schematic diagram of eye tracking according to an embodiment of the present invention;
FIG. 6A to FIG. 6G are schematic diagrams of coordinate-system mapping conversion according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of an access control system controlled based on eye movements according to an embodiment of the present invention;
FIG. 8A and FIG. 8B are schematic diagrams of a handheld eye-controlled eyepiece device controlled based on eye movements according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a control method based on eye movements according to another embodiment of the present invention.
Description of reference signs:
100: device;
110, 710, 810, 910: imaging unit;
120, 720, 920: processing unit;
130: storage unit;
131: image analysis module;
132: line-of-sight detection module;
1321: eye detection module;
1322: eye-tracking module;
1323: mapping conversion module;
133: judgment module;
134: control module;
135: database;
200: view pattern;
210: identification pattern;
220: geometric connection;
230: preset trajectory;
410, 510, 610: eye target;
512: pupil center point;
514, 614, 616: glint (reflective point);
602: vector;
612: pupil;
630: coordinate point;
640: sector;
650: rectangle;
660: coordinate system;
700: access control system;
730: lock;
740: prompt unit;
750: door body;
760: cover;
800: handheld eye-controlled eyepiece device;
820: display unit;
830: housing;
840: mirror;
850: light source;
860: safe;
900: eye control device;
ER: eye region;
H: horizontal line;
IS: image sequence;
L: distance;
θ: angle;
1–18: regions;
S310–S350, S410–S480, S510–S540: steps.
Detailed Description
Current eye-control technology mostly detects where the user's gaze lands, and can control a security system or eye-controlled computer only according to the gazed position on a screen. The control method based on eye movements and the device to which it is applied in the embodiments of the present invention can instead use the line-of-sight trajectory as a signal trigger, thereby realizing eye control in the form of trajectories. The embodiments of the present invention can thus provide a more secure password input method and enable more convenient and more intuitive eye-movement control. The control method based on eye movements proposed in the embodiments of the present invention, and the device applying it, are described in detail below.
FIG. 1 is a functional block diagram of a device controlled based on eye movements according to an embodiment of the present invention. The device 100 controlled based on eye movements includes an imaging unit 110, a processing unit 120, and a storage unit 130. The device 100 may be, for example, a safe, an access control system, or another type of security system/device that verifies a user's qualification to decide whether to grant the user a specific authority, or it may be an electronic device such as a computer with an eye-control function; the present invention does not limit the specific type of the device 100.
In this embodiment, the imaging unit 110 can obtain an image sequence IS along a specific direction (depending on the position/angle at which the imaging unit 110 is arranged); that is, the imaging unit 110 continuously captures multiple images and provides them to the processing unit 120.
The processing unit 120 is coupled to the imaging unit 110 and is, for example, a central processing unit (CPU), a graphics processing unit (GPU), or another programmable microprocessor.
The storage unit 130 is, for example, any type of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory, a similar element, or a combination of the above. The storage unit 130 includes a database 135 and records a plurality of program modules. The processing unit 120 is coupled to the storage unit 130 to access and execute these program modules, which include an image analysis module 131, a line-of-sight detection module 132, a judgment module 133, and a control module 134. The image analysis module 131 performs image processing and analysis on the image sequence IS captured by the imaging unit 110, so as to obtain eye image information of the user's eye region in each image of the sequence IS. The line-of-sight detection module 132 can then detect the user's eye movements according to the obtained eye image information, and the judgment module 133 determines whether the line-of-sight trajectory of the user gazing at the view pattern conforms to a preset trajectory. The "view pattern", the "preset trajectory", and the correspondence between them are illustrated with the examples of FIG. 2A and FIG. 2B. As shown in FIG. 2A, the view pattern 200 includes a plurality of identification patterns 210, which may be arranged in an M×N matrix; in this embodiment, both M and N are 4, but the invention is not limited thereto. It should be noted that the preset trajectory may be a geometric connection formed by sequentially linking at least one of the identification patterns 210. For example, the geometric connection 220 shown in FIG. 2A may be composed of a plurality of identification patterns 210, while FIG. 2B shows a preset trajectory 230 stored in the database 135. As can be seen, the geometric connection 220 corresponds to the preset trajectory 230.
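One way to realize this comparison, assuming the 4×4 arrangement of FIG. 2A, is to collapse the gaze trajectory into the ordered sequence of grid cells it visits and compare that sequence with the stored preset. The data representation below is an illustrative assumption, not the patent's disclosed encoding:

```python
# Hypothetical sketch: map gaze points in view-pattern coordinates to cells
# of an M x N grid of identification patterns, then compare the visited-cell
# order with a stored preset trajectory.

GRID_ROWS, GRID_COLS = 4, 4  # M x N identification patterns (M = N = 4 here)

def cell_of(point, width, height):
    """Map a gaze point in view-pattern coordinates to a grid cell index."""
    x, y = point
    col = min(int(x / (width / GRID_COLS)), GRID_COLS - 1)
    row = min(int(y / (height / GRID_ROWS)), GRID_ROWS - 1)
    return row * GRID_COLS + col

def track_to_cells(track, width=400, height=400):
    """Collapse a gaze track into the ordered, de-duplicated cells it visits."""
    cells = []
    for p in track:
        c = cell_of(p, width, height)
        if not cells or cells[-1] != c:
            cells.append(c)
    return cells

preset = [0, 1, 5, 6]  # stored geometric connection, as an ordered cell list
gaze = [(30, 40), (130, 50), (150, 150), (260, 160)]
print(track_to_cells(gaze) == preset)  # True: the gaze follows the preset
```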
The control module 134 then decides whether to perform the corresponding operation on the device 100 according to the judgment result of whether the line-of-sight trajectory conforms to the preset trajectory. For example, when the device 100 is a security device and the preset trajectory is set to unlock a lock in the security device, the control module 134 can switch the lock between a locked state and an unlocked state according to whether the line-of-sight trajectory conforms to the preset trajectory.
The control method based on eye movements proposed in the embodiments of the present invention is described below. FIG. 3 is a flowchart of the steps of a control method based on eye movements according to an embodiment of the present invention. The method of this embodiment is applicable to the above-described device 100; the concept of control based on eye movements is explained here with the components of the device 100 in FIG. 1 and the flowchart of FIG. 3.
In step S310, the device 100 acquires an image sequence IS through the imaging unit 110. In step S320, the image analysis module 131 analyzes the image sequence IS to obtain eye image information of the user's eye region in each image of the sequence IS. In step S330, the line-of-sight detection module 132 detects, based on the eye image information, the line-of-sight trajectory of the user gazing at the view pattern.
It should be noted that, in some embodiments, the line-of-sight detection module 132 may first analyze the eye image information to determine whether the user gazes at a preset position in the view pattern, or gazes at that position for longer than a preset time. When the user gazes at the preset position in the view pattern, or has gazed at a specific preset position for longer than the preset time, the line-of-sight detection module 132 correspondingly starts detecting the line-of-sight trajectory of the user gazing at the view pattern. This way of deciding when to start detecting the user's line-of-sight trajectory is only exemplary; the user may also trigger the detection externally, for example by pressing a physical button or by clicking on an operation page to enter a trajectory-detection mode, and the present invention is not limited thereto.
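The dwell-time trigger described above can be sketched as follows. The sample format, region test, and threshold are illustrative assumptions; the patent does not specify them.

```python
# Hypothetical sketch: start trajectory detection only after the gaze stays
# continuously inside a preset region for a preset time.

DWELL_SECONDS = 1.0  # assumed preset time

def dwell_triggered(samples, in_region, min_dwell=DWELL_SECONDS):
    """samples: list of (timestamp_seconds, (x, y)) gaze points.
    Returns True once the gaze remains continuously inside the region
    for at least min_dwell seconds."""
    start = None
    for t, point in samples:
        if in_region(point):
            if start is None:
                start = t
            if t - start >= min_dwell:
                return True
        else:
            start = None  # gaze left the region; reset the dwell timer
    return False

inside_target = lambda p: 0 <= p[0] <= 100 and 0 <= p[1] <= 100
samples = [(0.0, (10, 10)), (0.5, (20, 15)), (1.1, (30, 30))]
print(dwell_triggered(samples, inside_target))  # True
```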
Continuing with the step flow of FIG. 3, in step S340 the judgment module 133 judges whether the gaze movement trajectory conforms to a preset trajectory relative to the view pattern to generate a judgment result, and in step S350 the control module 134 performs a corresponding operation on the device 100 according to the judgment result. In detail, please refer again to the examples of FIG. 2A and FIG. 2B. When the line-of-sight detection module 132 detects the gaze movement trajectory of the user looking at the view pattern, and the trajectory is the geometric connection 220 shown in FIG. 2A, the judgment module 133 compares it against the preset trajectories recorded in the database 135 to judge whether the gaze movement trajectory conforms to one of them. Thereby, the control module 134 can perform different operations on the device 100 according to the judgment result of whether the gaze movement trajectory conforms to the preset trajectory 230 in FIG. 2B.
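The patent does not prescribe how the trajectory comparison of step S340 is implemented; a minimal sketch is to reduce the gaze trajectory to the ordered sequence of view-pattern nodes it passes through and compare that sequence against the stored preset. All identifiers, the 3×3 node grid, and the matching radius below are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch (not from the patent): a gaze trajectory becomes the ordered
# list of view-pattern nodes it visits; matching the preset trajectory is then
# a simple sequence comparison.

def quantize_to_nodes(points, nodes, radius=0.5):
    """Map gaze points to the nearest node within `radius`, dropping
    consecutive repeats, yielding a node sequence like connection 220."""
    seq = []
    for (px, py) in points:
        best = min(nodes, key=lambda n: (n[0] - px) ** 2 + (n[1] - py) ** 2)
        if (best[0] - px) ** 2 + (best[1] - py) ** 2 <= radius ** 2:
            idx = nodes.index(best)
            if not seq or seq[-1] != idx:
                seq.append(idx)
    return seq

def matches_preset(trajectory, preset):
    """Step S340: does the detected trajectory conform to the preset one?"""
    return trajectory == preset

# 3x3 grid of view-pattern nodes, numbered 0..8 row by row
NODES = [(x, y) for y in range(3) for x in range(3)]

gaze = [(0.1, 0.0), (1.0, 0.1), (2.1, 0.0), (2.0, 1.1)]  # noisy fixations
print(matches_preset(quantize_to_nodes(gaze, NODES), [0, 1, 2, 5]))  # True
```

A real system would also tolerate missed or extra fixations, for example by dynamic-time-warping rather than exact sequence equality.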
In addition, the device 100 of this embodiment may optionally be provided with a prompt unit. The prompt unit can indicate whether the device 100 has currently determined that the user's gaze movement trajectory conforms to the preset trajectory and is performing the corresponding operation. Here, the prompt unit may use a text display, an indicator light, a voice prompt, or any other feasible prompting manner, so that the user can recognize the current state of the device 100; the invention is not limited in this respect.
Next, the detailed flow of the above control based on eye movement is further described with the embodiments of FIG. 4A to FIG. 4C, in which FIG. 4A is a flowchart of the steps of an eye-movement-based control method according to another embodiment of the invention, FIG. 4B is a detailed block diagram of a line-of-sight detection module for eye-movement-based control according to an embodiment of the invention, and FIG. 4C is a schematic diagram of an eye region according to an embodiment of the invention. Please refer to the step flow of FIG. 4A, together with the elements of the device 100 in FIG. 1 and of the line-of-sight detection module 132 in FIG. 4B. In this embodiment, the device 100 is a security device, and the eye-movement-based control method can be used to control a lock of the device 100. It should be noted that the eye-movement-based control method can also be applied to any type of device; the invention is not limited thereto.
In step S410, the device 100 captures the image sequence IS through the imaging unit 110, and in step S420 the image analysis module 131 analyzes the image sequence IS to obtain eye image information of the user's eye region in each image of the image sequence IS. The detailed description here is similar to that of the embodiment of FIG. 3, so please refer to the foregoing.
As shown in FIG. 4B, the line-of-sight detection module 132 of this embodiment includes an eye detection module 1321, an eye tracking module 1322, and a mapping conversion module 1323. Thereby, in step S430 the eye detection module 1321 detects an eye target within the eye region based on the eye image information. In step S440 the eye tracking module 1322 performs an eye tracking action according to the eye target, so as to sequentially obtain a plurality of first positions of the eye target in the image sequence. And in step S450 the mapping conversion module 1323 maps the plurality of first positions into a plurality of second positions according to the coordinate system corresponding to the view pattern, so as to obtain the gaze movement trajectory of the user looking at the view pattern. In other words, this embodiment can use the eye detection module 1321, the eye tracking module 1322, and the mapping conversion module 1323 to analyze the continuous motion of the user's eye target in the image sequence, and use the conversion between the coordinate systems respectively corresponding to the image sequence and the view pattern to obtain the gaze movement trajectory of the user looking at the view pattern.
After the user's gaze movement trajectory is obtained, the judgment module 133 can compare the gaze movement trajectory with the preset trajectory of the view pattern, and the control module 134 performs a corresponding operation through the device 100 according to the judgment result, wherein the preset trajectory can be set as the preset unlock password of the lock of the device 100. Therefore, in step S460 the judgment module 133 judges whether the gaze movement trajectory conforms to the preset trajectory relative to the view pattern. With the lock of the device 100 in the locked state, if the judgment of step S460 is yes, the control module 134 releases the locked state of the lock (i.e., the lock is switched to the unlocked state); conversely, if the judgment is no, the control module 134 keeps the lock locked, and may further issue a warning sound or warning text through the prompt unit to remind the user. From another point of view, if the lock is already in the unlocked state, it remains unlocked regardless of the judgment result of step S460.
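The branching described for step S460 amounts to a small state rule, which can be sketched as follows (the function name and boolean encoding are ours, purely for illustration):

```python
# Illustrative sketch of the S460 branch: the lock changes state only when it
# is locked AND the trajectory matches; an already-unlocked lock stays unlocked.

def next_lock_state(locked, trajectory_matches):
    """Return the lock state after step S460: True = locked, False = unlocked."""
    if locked and trajectory_matches:
        return False   # judgment yes while locked: release the lock
    if locked:
        return True    # judgment no while locked: stay locked (and warn)
    return False       # already unlocked: remains unlocked either way

print(next_lock_state(True, True))   # False (lock released)
```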
It is worth mentioning that, in an embodiment, the user can also define the preset unlock password by himself while the lock is in the unlocked state, by making a set of continuous eye movements and using the gaze movement trajectory corresponding to those continuous eye movements. Of course, the preset unlock password may also be predefined by the designer and stored in the database 135; the invention is not limited in this respect.
It is also worth mentioning that, in an embodiment, the device 100 may further include an identification module which, after the eye target is detected (step S430), first identifies biometric information of the user's eye target so as to confirm the user's identity. The biometric information is, for example, iris or retina information. Only if the user's identity is correct does the identification module allow the device 100 to further execute the subsequent steps S440–S480; if not, the identification module stops or refuses the device 100 from performing the subsequent eye-movement identification steps.
The above embodiment is described concretely below with the schematic diagram of FIG. 4C. In this embodiment, after the eye detection module 1321 finds the eye target 410 within the eye region ER, the eye detection module 1321 can further perform image processing on the eye image information to obtain image information that clearly indicates the boundary of the eye target 410. Thereby, the eye tracking module 1322 can track the eye target 410 according to that image information, so as to obtain the user's gaze movement trajectory. For example, by adjusting a gain value and an offset value, the eye detection module 1321 can adjust the contrast of the eye image information and obtain an enhanced image. Next, the eye detection module 1321 can sequentially apply denoising, edge sharpening, binarization, and edge sharpening again to this enhanced image, so as to obtain the image of the eye target 410 shown in FIG. 4C.
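The gain/offset contrast adjustment and the binarization step of the chain above can be sketched with NumPy alone; the concrete gain, offset, and threshold values below are illustrative assumptions, since the patent fixes no parameters (the denoising and edge-sharpening passes are omitted here for brevity):

```python
import numpy as np

# Sketch of the enhancement chain: contrast stretch by gain/offset, then
# binarize so that dark (pupil) pixels become 1 and bright pixels become 0.

def enhance(img, gain=1.8, offset=-40):
    """Contrast stretch by gain/offset, clipped back to the 8-bit range."""
    return np.clip(img.astype(np.int32) * gain + offset, 0, 255).astype(np.uint8)

def binarize(img, threshold=60):
    """Dark pixels (pupil) -> 1, bright pixels -> 0."""
    return (img < threshold).astype(np.uint8)

eye = np.array([[200, 190, 185],
                [195,  20,  25],   # dark pupil pixels in the middle
                [190,  30, 180]], dtype=np.uint8)
mask = binarize(enhance(eye))
print(mask)
```

After stretching, bright eyeball pixels saturate at 255 and dark pupil pixels are pushed toward 0, so the binarization threshold separates the pupil blob cleanly.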
After obtaining the eye target 410, the eye tracking module 1322 performs an eye tracking action according to the eye target 410, so as to obtain the precise position of the user's eye target 410 in each image of the image sequence. In an embodiment, the eye tracking module 1322 can obtain the precise positioning of the eye target 410 through the relative positional relationship between the pupil and a reflective point in the eye target 410. The reflective point is, for example, a glint (Purkinje image), which can be formed by a fixed light source emitting a beam that is reflected after illuminating the human eye. Depending on the position of the fixed light source, the reflective point appears at a corresponding specific position of the human eye in the image, so it can serve as a reference feature for locating the eye target 410. Moreover, using the reflective point as a positioning reference feature also prevents the eye tracking module 1322 from suffering extraneous noise interference, caused by insufficient light at the reference point, when tracking the eye target. It should be noted that, in the following embodiments, the pupil position is represented by the position of the center point of the pupil, and depending on the number of reflective points used as positioning reference features (for example, in the example of FIG. 5B the eye target 510 includes one reflective point 514, while in the example of FIG. 6E the eye target 610 includes two reflective points 614 and 616), eye tracking can be implemented in different ways.
The eye tracking procedure is first described with the step flow shown in FIG. 5A. In step S510, the eye tracking module 1322 calculates the black-white contrast changes within the eye region to obtain the pupil position in the image sequence. In step S520, the eye tracking module 1322 obtains the reference position of the reflective point. And in step S530, the eye tracking module 1322 adjusts and obtains the first position of the eye target in the image sequence according to the relative relationship between the pupil position and the reference position of the reflective point.
In detail, FIG. 5B shows an example in which the eye target 510 includes a pupil (corresponding to the pupil center point 512) and one reflective point 514, and the eye target 510 corresponds to one image of the image sequence. The eye detection module 1321 can use statistical information of the pupil to obtain the eye target 510; the statistical information is, for example, at least one of, or a combination of, statistics relating to the area, length, width, aspect ratio, shape, and black-white contrast of the eyeball and/or pupil. After the eye detection module 1321 obtains the eye target 510, the eye tracking module 1322 can calculate the black-white contrast changes within the eye region. Using the property that the edge between the pupil and the eyeball (here, the iris in the structure of the human eye, i.e., the black, blue, brown, or other non-white region of the eye) corresponds to where the extremum of the black-white contrast change occurs (generally, the pupil appears as the darker image relative to the neighboring eyeball), the eye tracking module 1322 can obtain the approximate position of the pupil center point 512. As for the reflective point 514, the eye tracking module 1322 can obtain its reference position in a similar manner, and may further take the condition of being the white contrast region closest to the pupil center as a basis for judging whether a region is the reflective point 514. Afterwards, the eye tracking module 1322 performs edge detection on the pupil, using the reference position of the reflective point 514 as the base point of the edge detection, so as to obtain an accurate pupil edge from which the position of the pupil center point 512 is estimated, thereby correcting the position of the pupil center point 512 and achieving precise pupil positioning. The edge detection can locate edges where pixels change sharply from white (corresponding to the eyeball) to black (corresponding to the pupil) or from black to white. In addition, the eye tracking module 1322 can finely adjust the reference position of the reflective point 514, so as to provide a more accurate reference position as the base point for pupil positioning; here, the eye tracking module 1322 can take the center of the reflective point as the base point for its edge detection, and adjust the reference position of the reflective point following a flow similar to the edge detection performed on the pupil. Thereby, by repeatedly and continuously executing the above flow, the eye tracking module 1322 can sequentially obtain the positions of the eye target in the image sequence. These positions correspond to the gaze movement trajectory of the user looking at the view pattern.
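The coarse, contrast-based pupil localization described above can be sketched as a threshold-and-centroid computation; the glint-anchored edge refinement that the text adds on top of it is omitted here, and the threshold value is an illustrative assumption:

```python
import numpy as np

# Rough sketch of contrast-based pupil localization: threshold the eye region
# to isolate the dark pupil blob, then take the blob centroid as the
# approximate pupil center (the patent then refines this with edge detection
# anchored at the glint's reference position).

def approx_pupil_center(gray, threshold=50):
    """Centroid (x, y) of pixels darker than `threshold`, or None if empty."""
    ys, xs = np.nonzero(gray < threshold)   # dark (pupil) pixels
    if len(xs) == 0:
        return None
    return float(xs.mean()), float(ys.mean())

gray = np.full((5, 5), 200, dtype=np.uint8)  # bright eyeball background
gray[1:4, 2:4] = 10                          # dark 3x2 pupil blob
print(approx_pupil_center(gray))             # (2.5, 2.0)
```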
It is worth mentioning that the flowchart of FIG. 5A further includes step S540. In step S540, after the eye tracking module 1322 obtains the first position of the eye target 510, it further adjusts the range of the eye region, so that the eye tracking module 1322 can perform the next eye tracking action accurately and efficiently. For example, in an embodiment, the eye tracking module 1322 can adjust the range of the eye region according to the maximum range over which the human eye can move. In addition, in some embodiments, the eye tracking module 1322 can also adaptively filter out noise in the eye target before performing the eye tracking action, so as to improve the accuracy of locating and tracking the eye target 510. The manner of adjusting the eye region and of filtering out noise can be appropriately designed as required; the invention is not limited in this respect.
A specific embodiment of obtaining the gaze movement trajectory of the user looking at the view pattern in step S450 is described below with the schematic diagrams of FIG. 5B and FIG. 6A to FIG. 6D. In this embodiment, the eye tracking module 1322 can use the implementations of steps S510 and S520 of the foregoing embodiment to obtain the positions of the pupil center point 512 and the reflective point 514. Next, the mapping conversion module 1323 can use the distance L between the pupil center point 512 and the reflective point 514, together with the angle θ defined between a reference line passing through the pupil center point 512 (i.e., the horizontal axis in FIG. 6A) and the vector 602 of the reflective point 514 relative to the pupil center point 512, and, via the relationship between the distance L and the angle θ, transfer the user's pupil displacement positions in the image sequence (i.e., corresponding to the first positions described above) into the coordinate system of the view pattern, thereby obtaining the corresponding gaze movement trajectory. As embodiments that use the parameters of distance L and angle θ for coordinate conversion, a grouping correspondence method and an interpolation correspondence method are provided below.
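The two parameters used throughout the following methods reduce to a distance and an angle between two points; a minimal sketch (coordinates and names are illustrative, and the angle simply follows the image coordinate convention):

```python
import math

# Compute (L, theta) from the pupil center and the glint position: the
# distance between them, and the angle of the glint-relative-to-pupil vector
# against the horizontal reference line through the pupil center.

def distance_and_angle(pupil, glint):
    """Return (L, theta_degrees) for the pupil-to-glint vector."""
    dx, dy = glint[0] - pupil[0], glint[1] - pupil[1]
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))

L, theta = distance_and_angle(pupil=(100.0, 80.0), glint=(103.0, 84.0))
print(round(L, 2), round(theta, 2))   # 5.0 53.13
```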
The grouping correspondence method is first described with the embodiments of FIG. 6B and FIG. 6C. In this embodiment, the grouping correspondence method may include a training phase and an application phase, in which the mapping conversion module 1323 obtains the coefficients required for coordinate conversion through the training phase, and can accordingly convert the current eye target into the coordinate system of the view pattern during the application phase. In detail, during the training phase, the mapping conversion module 1323 can display the regions 1–16 of FIG. 6B one group at a time for the user to look at (FIG. 6B shows the situation in which region 1 is displayed to the user), so that the mapping conversion module 1323 can locate and obtain the distance and angle between the user's pupil center and reflective point corresponding to each group of regions, and plot the distance-angle distribution of each group of regions of FIG. 6B on the distance-angle diagram of FIG. 6C. Thereby, during the application phase of the grouping correspondence method, the mapping conversion module 1323 can analyze the distance L and angle θ corresponding to the pupil and the reflective point in the eye target currently obtained by the imaging unit 110, and obtain a coordinate point on the distance-angle diagram, for example the coordinate point 630 of FIG. 6C. Next, the mapping conversion module 1323 can use a minimum objective function to find the region closest to the coordinate point 630, so as to locate the current eye target in the coordinate system of the view pattern and thereby complete the coordinate conversion. Formula (1) of the minimum objective function is as follows:
Min Obj. = W1·|Dis_t − Dis_i| + W2·|Angle_t − Angle_i|, (i = 1~16) …… (1)
Here, Dis_i and Angle_i are the target distance value and angle value of region i, while Dis_t and Angle_t are the distance value and angle value corresponding to the current eye target. The distance and angle values can be computed from the x-axis and y-axis coordinates of the coordinate points, and W1 and W2 are assigned weights. It can be seen that the number of regions in FIG. 6B determines how precisely the mapping conversion module 1323 converts coordinates with the grouping correspondence method: the more regions the grouping correspondence method of this embodiment includes, the more accurate the converted coordinates the mapping conversion module 1323 can obtain.
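Formula (1) transcribes directly into code: pick the trained region whose (distance, angle) pair minimizes the weighted L1 objective against the current eye target. The weights and trained values below are made up for illustration:

```python
# Direct transcription of formula (1): the region id i minimizing
# W1*|Dis_t - Dis_i| + W2*|Angle_t - Angle_i| over the trained regions.

def nearest_region(dis_t, angle_t, regions, w1=1.0, w2=1.0):
    """regions: {region_id: (Dis_i, Angle_i)} from the training phase."""
    return min(regions,
               key=lambda i: w1 * abs(dis_t - regions[i][0])
                           + w2 * abs(angle_t - regions[i][1]))

# Three of the sixteen trained regions, with illustrative (distance, angle):
trained = {1: (4.0, 10.0), 2: (5.0, 50.0), 3: (6.5, 90.0)}
print(nearest_region(5.1, 47.0, trained))   # 2
```

In practice W1 and W2 would be tuned so that distance and angle contribute comparably despite their different units.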
On the other hand, the embodiment of FIG. 6D illustrates the interpolation correspondence method. In this embodiment, the mapping conversion module 1323 can use an un-warping operation to convert the sector 640 drawn by the coordinate points on the distance-angle diagram into a more regular rectangle 650, and map the position of the rectangle 650 into the coordinate system 660 in which the view pattern lies. This embodiment may likewise include a training phase and an application phase, and the implementation of the training phase may be similar to that of the foregoing embodiment. The difference is that the mapping conversion module 1323 of this embodiment further uses the un-warping operation to obtain a normalized distribution map, and can apply an affine transform technique to perform moving calibration on the normalized distribution map. Formulas (2) and (3) for obtaining the normalized distance-angle distribution through the un-warping operation are as follows:
X_T = a0 + a1·X + a2·Y + a3·X·Y + a4·X² + a5·Y² …… (2)

Y_T = b0 + b1·X + b2·Y + b3·X·Y + b4·X² + b5·Y² …… (3)

The un-warping operation maps the original, un-warped sampling points (X, Y) = {(x1, y1) … (xn, yn)} to the target points (X_T, Y_T) = {(x_T1, y_T1) … (x_Tn, y_Tn)}, where X and Y represent the distance L and angle θ described above, n is the number of samples, and a0~a5 and b0~b5 are the un-warping conversion coefficients. Thereby, the mapping conversion module 1323 can obtain the optimal solution of the coefficients a0~a5 and b0~b5 through an inverse-matrix operation. In this way, the mapping conversion module 1323 can use these coefficients to un-warp currently unknown coordinate points.
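The coefficient fit described above can be sketched as a least-squares solve. The patent names six coefficients per axis but does not spell out the polynomial basis; the second-order basis [1, X, Y, X·Y, X², Y²] used below is our assumption, chosen because it is the common choice for this kind of un-warping:

```python
import numpy as np

# Least-squares fit of the un-warping coefficients a0..a5 / b0..b5, assuming
# the basis [1, X, Y, X*Y, X^2, Y^2] (an assumption; the patent only names
# six coefficients per axis). With >= 6 sample pairs the fit is determined.

def fit_unwarp(samples, targets):
    """samples, targets: (n, 2) arrays of (X, Y) and (X_T, Y_T) pairs."""
    X, Y = samples[:, 0], samples[:, 1]
    A = np.column_stack([np.ones_like(X), X, Y, X * Y, X**2, Y**2])
    coeff_a, *_ = np.linalg.lstsq(A, targets[:, 0], rcond=None)
    coeff_b, *_ = np.linalg.lstsq(A, targets[:, 1], rcond=None)
    return coeff_a, coeff_b

def unwarp(point, coeff_a, coeff_b):
    x, y = point
    basis = np.array([1.0, x, y, x * y, x * x, y * y])
    return float(basis @ coeff_a), float(basis @ coeff_b)

# Synthetic check: generate targets from known coefficients and recover them.
rng = np.random.default_rng(0)
samples = rng.uniform(0, 10, size=(12, 2))
true_a = np.array([1.0, 0.5, -0.2, 0.03, 0.01, -0.02])
true_b = np.array([-2.0, 0.1, 0.7, -0.01, 0.02, 0.005])
basis = np.column_stack([np.ones(12), samples[:, 0], samples[:, 1],
                         samples[:, 0] * samples[:, 1],
                         samples[:, 0]**2, samples[:, 1]**2])
targets = np.column_stack([basis @ true_a, basis @ true_b])
ca, cb = fit_unwarp(samples, targets)
print(np.allclose(ca, true_a) and np.allclose(cb, true_b))   # True
```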
On the other hand, formula (4), with which the mapping conversion module 1323 performs moving calibration using the affine technique, is as follows:
x' = a·x + b·y + c, y' = d·x + e·y + f …… (4)

Here, x' and y' are the new coordinates after moving calibration, and a~f are the affine conversion coefficients. Thereby, the mapping conversion module 1323 can obtain the affine conversion coefficients a~f through an inverse-matrix computation, or by inputting three pairs of coordinate points at any three corners of the display unit before calibration together with the three corresponding pairs of coordinate points after calibration. In this way, the mapping conversion module 1323 can apply moving calibration to currently unknown coordinate points, so as to eliminate the influence of factors such as image scaling, translation, and rotation on the coordinate conversion. Compared with the embodiment of the grouping correspondence method presented in FIG. 6B and FIG. 6C, the mapping conversion module 1323 can complete the coordinate conversion with fewer samples by using the interpolation correspondence method.
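Solving the six affine coefficients from three pairs of corner coordinates is a pair of 3×3 linear solves, which can be sketched as follows (the corner coordinates are illustrative):

```python
import numpy as np

# Recover formula (4)'s six affine coefficients a..f from three pairs of
# corner coordinates (before/after calibration) via an inverse-matrix solve.

def fit_affine(src, dst):
    """src, dst: lists of three non-collinear (x, y) pairs. Returns
    (a, b, c, d, e, f) with x' = a*x + b*y + c and y' = d*x + e*y + f."""
    A = np.array([[x, y, 1.0] for (x, y) in src])
    abc = np.linalg.solve(A, np.array([p[0] for p in dst]))
    def_ = np.linalg.solve(A, np.array([p[1] for p in dst]))
    return (*abc, *def_)

# Example: a pure translation by (+5, -3) is recovered exactly.
src = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dst = [(5.0, -3.0), (15.0, -3.0), (5.0, 7.0)]
a, b, c, d, e, f = fit_affine(src, dst)
print(round(a, 6), round(c, 6), round(f, 6))   # 1.0 5.0 -3.0
```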
It should be noted that, in an embodiment, the mapping conversion module 1323 converts the first positions, in the form of a continuous motion, into the gaze movement trajectory of the user looking at the view pattern only after the eye tracking module 1322 has sequentially obtained the plurality of first positions of the eye target in the image sequence. In another embodiment, the mapping conversion module 1323 can instead operate in synchronization with the eye tracking module 1322 obtaining each first position of the eye target, converting the coordinates of the first positions one by one, thereby obtaining the user's gaze movement trajectory. Those applying this embodiment can decide how the gaze movement trajectory is obtained according to their design requirements; the invention is not limited in this respect.
On the other hand, the example of FIG. 6E illustrates the case in which the eye target 610 includes a pupil 612 and two reflective points 614 and 616. Similar to the foregoing embodiments, the eye tracking module 1322 can use the black-white contrast changes of the image (for example, by binarizing the image of the eye target 610) to find the pupil 612 and the reflective points 614 and 616 in the eye target 610, and take their individual center points to represent the respective positions of the pupil 612 and the reflective points 614 and 616. The mapping conversion module 1323 can calculate the area of the triangle formed by the pupil 612 and the reflective points 614 and 616 when they are not collinear, and can adaptively correct errors that may be caused by movement of the user's head. In detail, in the area algorithm that takes this error into account, after the area of the triangle formed by the pupil 612 and the reflective points 614 and 616 is calculated, the computed area is further normalized by a normalization factor. For example, the normalization factor may be the square of the distance between the reflective points 614 and 616 divided by 2, and the mapping conversion module 1323 divides the previously computed triangle area by this normalization factor, thereby performing the error correction and obtaining the area parameter.
On the other hand, the mapping conversion module 1323 also calculates angle parameters from the pupil 612 and the reflective points 614 and 616. In an embodiment, the mapping conversion module 1323 can obtain a first included angle α1 between a horizontal line H passing transversely through the center point of the pupil 612 and the line connecting the pupil 612 and the reflective point 614. Likewise, the mapping conversion module 1323 can obtain a second included angle α2 between the horizontal line H and the line connecting the pupil 612 and the reflective point 616. In other words, the angle parameter may be the first included angle α1, the second included angle α2, or the difference between the second included angle α2 and the first included angle α1 (i.e., α2 − α1, the angle, centered at the pupil 612, between the line connecting the pupil 612 and the reflective point 614 and the line connecting the pupil 612 and the reflective point 616).
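The two-glint parameters described above (the normalized triangle area, and the angles against the horizontal line H) can be sketched directly; the coordinates below are illustrative:

```python
import math

# Sketch of the two-glint parameters: the triangle area formed by the pupil
# and the two glints, normalized by d(g1, g2)^2 / 2 as described above, plus
# the angles alpha1/alpha2 against the horizontal line H through the pupil.

def area_parameter(pupil, g1, g2):
    """Triangle area |cross| / 2, divided by the factor d(g1, g2)^2 / 2."""
    ax, ay = g1[0] - pupil[0], g1[1] - pupil[1]
    bx, by = g2[0] - pupil[0], g2[1] - pupil[1]
    tri = abs(ax * by - ay * bx) / 2.0
    d2 = (g2[0] - g1[0]) ** 2 + (g2[1] - g1[1]) ** 2
    return tri / (d2 / 2.0)

def angle_parameters(pupil, g1, g2):
    """Return (alpha1, alpha2, alpha2 - alpha1) in degrees."""
    a1 = math.degrees(math.atan2(g1[1] - pupil[1], g1[0] - pupil[0]))
    a2 = math.degrees(math.atan2(g2[1] - pupil[1], g2[0] - pupil[0]))
    return a1, a2, a2 - a1

pupil, g1, g2 = (0.0, 0.0), (4.0, 3.0), (-4.0, 3.0)
print(round(area_parameter(pupil, g1, g2), 3))   # 0.375
```

Because the normalization factor scales with the squared glint distance, the area parameter stays stable when the whole eye image grows or shrinks as the head moves toward or away from the camera.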
Therefore, on first use, the mapping conversion module 1323 can perform calibration through the calibration regions 1–18 or calibration points shown in FIG. 6F, so as to obtain calibration coordinates on the angle-area plane. Similar to the training phase of the foregoing embodiment, the distribution of the areas and angles formed by the pupil 612 and the reflective points 614 and 616 as the user looks at each group of regions is shown in FIG. 6G, and the mapping conversion module 1323 maps these original coordinates to the coordinates corresponding to the calibration regions 1–18, so as to obtain the coefficients required for coordinate conversion (for example, by the affine transform method). In this way, after entering the application phase, the mapping conversion module 1323 can complete the coordinate conversion for the pupil 612 and the reflective points 614 and 616 through the coefficients used for coordinate conversion.
The embodiments of FIG. 7 to FIG. 9 are practical application examples of the above eye-movement-based control method. FIG. 7 is taken as the first example, in which FIG. 7 is a schematic diagram of an access control system controlled based on eye movement according to an embodiment of the invention.
The access control system 700 includes an imaging unit 710, a processing unit 720, a lock 730, a prompt unit 740, and a door body 750. The lock 730 is disposed on the door body 750 and is used to control the opening and closing of the door body 750. In this embodiment, the prompt unit 740 is, for example, an indicator light display device, and uses light signals to indicate whether the current configured state of the lock 730 is the locked state or the unlocked state, but the embodiments of the invention do not limit the form of the prompt unit.
In this embodiment, the imaging unit 710 is disposed, for example, on the door body 750 and is covered by a cover body 760, with only an image acquisition area exposed in the imaging direction of the imaging unit 710, so that the user can align the eye region with this image acquisition area. The imaging unit 710 can thus capture the image sequence of the user's eye region while preventing others from peeking, thereby improving the security of the access control system 700. Here, whether the cover body 760 is provided can likewise be chosen by the designer according to design requirements; the invention is not limited thereto.
Based on the architecture of the access control system 700, the processing unit 720 can read the program modules recorded in the storage unit described in the previous embodiments and, following the steps of the embodiments of FIG. 2 to FIG. 6D, detect the eye movements made by the user from the image sequence captured by the imaging unit 710 and determine whether the user's gaze trajectory matches a preset trajectory. Based on this determination, the processing unit 720 decides whether to issue a corresponding control signal that releases the lock 730 from its locked state, so that the user can open the door 750 and enter the area behind it when his or her gaze trajectory matches the preset trajectory.
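The trajectory comparison above can be sketched as follows, assuming gaze samples have already been quantized to region indices of the view pattern. The collapse-then-compare scheme is an illustrative assumption; the patent does not prescribe a particular matching algorithm:

```python
def collapse(regions):
    """Drop consecutive duplicate region indices, e.g. [1,1,2,2,3] -> [1,2,3].

    While the gaze dwells on one region, many identical samples arrive;
    only transitions between regions define the trajectory.
    """
    out = []
    for r in regions:
        if not out or out[-1] != r:
            out.append(r)
    return out

def trajectory_matches(observed, preset):
    """True when the observed gaze trajectory equals the preset trajectory."""
    return collapse(observed) == list(preset)
```

In the door-lock scenario, `preset` would be the stored unlock trajectory and `observed` the sequence of regions the detected gaze points fall into; a match would trigger the control signal to the lock 730.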
FIG. 8A and FIG. 8B show another example of the invention: a schematic diagram of a handheld eye-controlled eyepiece device 800 operated by eye movements according to an embodiment of the invention. The handheld eye-controlled eyepiece device 800 includes a processing unit, an imaging unit 810, a display unit 820, a housing 830, a mirror 840, and a light source 850, and it connects to a security device (for example, a safe 860) to authenticate the user. In this embodiment, the device 800 can connect to the safe 860 through a wireless interface such as short-distance wireless communication, radio-frequency identification (RFID), Bluetooth, or Wi-Fi. The imaging unit 810 and the light source 850 are both disposed inside the housing 830, adjacent to the window of the device 800. The light source 850 can be turned on when the imaging unit 810 captures the user's eye image, or when the user approaches, so as to provide sufficient brightness; the invention is not limited in this respect. The display unit 820 is disposed inside the housing 830 and displays a screen with password-input information. The mirror 840 reflects the screen displayed by the display unit 820 toward the window of the device 800, so that the user can view the displayed content through the window. The password-input information is, for example, text such as "Please enter the password" prompting the user to begin eye-movement input, or text indicating whether the entered password is correct; the invention is not limited in this respect.
Therefore, when the user wants to use the safe 860, the user can hold the handheld eye-controlled eyepiece device 800 against the eye and perform eye movements. Similarly, based on the architecture of the device 800, the imaging unit 810 photographs the user's eye to capture an image sequence, and the processing unit reads the program modules recorded in the storage unit described in the previous embodiments. Following the steps of the embodiments of FIG. 2 to FIG. 6D, it detects the eye movements made by the user, obtains the gaze trajectory traced while the user looks at the view pattern shown on the display unit 820, and compares that trajectory, as the input password for the safe 860, against a preset security password in trajectory form. When the security password matches the input password, the processing unit generates a verification-success message and sends it to the safe 860, which then opens its lock. The other operational details of this embodiment are similar to those of the embodiments above, so please refer to the foregoing description.
FIG. 9 is a schematic diagram of an eye control device 900 operated by eye movements according to yet another embodiment of the invention. The eye control device 900 includes an imaging unit 910 and a processing unit. In this embodiment, the eye tracking module provides another way of tracking the eye target. Specifically, the imaging unit 910 may have a lens whose direction and angle can be rotatably adjusted so that the lens looks up at the user's face. For example, the lens of the imaging unit 910 can be aimed at the user's face at an elevation angle of 45 degrees to improve the recognizability of the user's nostrils in the image, which helps capture a nostril image that serves as a reference feature for locating the eye target.
In detail, the processing unit can exploit the fact that the nostril regions are noticeably darker than the surrounding regions, and take the intersection of the longest horizontal axis and the longest vertical axis of each nostril region as that nostril's center point. Next, the eye tracking module computes the distance D between the two nostril center points to determine a starting-point coordinate (s1, t1), and then computes a reference-point coordinate (s2, t2) from that distance and the starting point, where s2 = s1 + k1 × D and t2 = t1 + k2 × D, with k1 = 1.6-1.8, k2 = 1.6-1.8, and preferably k1 = k2. The values of k1 and k2 can be obtained from statistical results, and the reference point (s2, t2) obtained through these relations falls close to the center of one eye target in the face image. In this way, using the nostril features together with the statistical eye features, the eye tracking module can locate the user's eye target precisely.
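The reference-point computation above can be sketched as follows. The patent derives the starting point (s1, t1) from the spacing of the nostril centers without fixing a formula, so taking their midpoint is an assumption here, and k1 = k2 = 1.7 is simply the middle of the stated 1.6-1.8 range:

```python
import math

def eye_reference_point(nostril_left, nostril_right, k1=1.7, k2=1.7):
    """Estimate the eye-target reference point (s2, t2) from nostril centers.

    Implements s2 = s1 + k1*D and t2 = t1 + k2*D, where D is the distance
    between the two nostril center points and (s1, t1) is assumed to be
    their midpoint.
    """
    s1 = (nostril_left[0] + nostril_right[0]) / 2.0
    t1 = (nostril_left[1] + nostril_right[1]) / 2.0
    D = math.hypot(nostril_right[0] - nostril_left[0],
                   nostril_right[1] - nostril_left[1])
    return (s1 + k1 * D, t1 + k2 * D)
```

Whether the offset lands on the left or right eye depends on the image coordinate convention, which the text does not spell out; in practice the statistically fitted k1 and k2 would absorb that choice.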
Thus, based on the hardware architecture of the eye control device 900 above, the processing unit can similarly read the program modules recorded in the storage unit described in the previous embodiments and, following the steps of the embodiments of FIG. 2 to FIG. 6D, detect the eye movements made by the user and obtain the corresponding gaze trajectory while the user looks at the view pattern, in order to determine whether it matches the preset trajectory. The eye control device 900 can then perform the corresponding operation according to the user's gaze trajectory, for example unlocking a computer screen from the detected gaze trajectory or operating the cursor of a computer window.
To sum up, the eye-movement-based control method and the device applying it proposed in the embodiments of the invention can detect the gaze trajectory traced while the user looks at a view pattern, and when that trajectory is determined to match a trajectory preset relative to the view pattern, the device performs the corresponding operation. In this way, the embodiments of the invention can use the gaze trajectory as a signal trigger and can be widely applied in fields such as security systems and eye-controlled computers.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or substitute equivalents for some or all of their technical features, and that such modifications or substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the invention.
Claims (16)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW104124060A TWI533234B (en) | 2015-07-24 | 2015-07-24 | Control method based on eye's motion and apparatus using the same |
TW104124060 | 2015-07-24 |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106371565A true CN106371565A (en) | 2017-02-01 |
Family
ID=56509269
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510568143.7A Withdrawn CN106371565A (en) | 2015-07-24 | 2015-09-09 | Control method based on eye movement and device applied by control method |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106371565A (en) |
TW (1) | TWI533234B (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106933349A * | 2017-02-06 | 2017-07-07 | 歌尔科技有限公司 | Unlocking method, device and virtual reality device for virtual reality device |
TWI730227B * | 2017-04-20 | 2021-06-11 | 大陸商上海耕岩智能科技有限公司 | Method and device for eye tracking operation |
CN110069960A * | 2018-01-22 | 2019-07-30 | 北京亮亮视野科技有限公司 | Filming control method, system and intelligent glasses based on sight motion profile |
CN109144250A * | 2018-07-24 | 2019-01-04 | 北京七鑫易维信息技术有限公司 | A kind of method, apparatus, equipment and storage medium that position is adjusted |
CN111563241A * | 2019-02-14 | 2020-08-21 | 南宁富桂精密工业有限公司 | Device unlocking method and electronic device using same |
CN111563241B * | 2019-02-14 | 2023-07-18 | 南宁富联富桂精密工业有限公司 | Device unlocking method and electronic device using same |
CN111951454A * | 2020-10-16 | 2020-11-17 | 兰和科技(深圳)有限公司 | Fingerprint biological identification unlocking device of intelligent access control and judgment method thereof |
CN111951454B * | 2020-10-16 | 2021-01-05 | 兰和科技(深圳)有限公司 | Fingerprint biological identification unlocking device of intelligent access control and judgment method thereof |
CN113253846A * | 2021-06-02 | 2021-08-13 | 樊天放 | HID (human interface device) interactive system and method based on gaze deflection trend |
CN113253846B * | 2021-06-02 | 2024-04-12 | 樊天放 | HID interaction system and method based on gaze deflection trend |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101807110A (en) * | 2009-02-17 | 2010-08-18 | 由田新技股份有限公司 | Pupil positioning method and system |
TW201035813A (en) * | 2009-03-27 | 2010-10-01 | Utechzone Co Ltd | Pupil tracking method and system, and correction method and correction module for pupil tracking |
CN103699210A (en) * | 2012-09-27 | 2014-04-02 | 北京三星通信技术研究有限公司 | Mobile terminal and control method thereof |
CN103902029A (en) * | 2012-12-26 | 2014-07-02 | 腾讯数码(天津)有限公司 | Mobile terminal and unlocking method thereof |
- 2015-07-24 TW TW104124060A patent/TWI533234B/en active
- 2015-09-09 CN CN201510568143.7A patent/CN106371565A/en not_active Withdrawn
Also Published As
Publication number | Publication date |
---|---|
TW201705038A (en) | 2017-02-01 |
TWI533234B (en) | 2016-05-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12223760B2 (en) | Systems and methods for performing fingerprint based user authentication using imagery captured using mobile devices | |
US11874910B2 (en) | Facial recognition authentication system including path parameters | |
CN106371565A (en) | Control method based on eye movement and device applied by control method | |
US10425814B2 (en) | Control of wireless communication device capability in a mobile device with a biometric key | |
US10042994B2 (en) | Validation of the right to access an object | |
US10038691B2 (en) | Authorization of a financial transaction | |
KR20160068884A (en) | Iris biometric recognition module and access control assembly | |
US20230073410A1 (en) | Facial recognition and/or authentication system with monitored and/or controlled camera cycling | |
JP6452236B2 (en) | Eyeball identification device and eyeball identification method | |
US11507646B1 (en) | User authentication using video analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | ||
Application publication date: 20170201 |