CN116482854A - Eye tracking using self-mixing interferometry - Google Patents
Eye tracking using self-mixing interferometry
- Publication number: CN116482854A
- Application number: CN202310073547.3A
- Authority: CN (China)
- Prior art keywords: eye, SMI, sensors, tracking device, processor
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/0093—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
Abstract
This disclosure relates to "Eye Tracking Using Self-Mixing Interferometry". An eye tracking device includes a head-mounted frame, an optical sensor subsystem mounted to the head-mounted frame, and a processor. The optical sensor subsystem includes a set of one or more SMI sensors. The processor is configured to operate the optical sensor subsystem to cause the set of one or more SMI sensors to emit a set of one or more light beams toward an eye of a user; receive a set of one or more SMI signals from the set of one or more SMI sensors; and use the set of one or more SMI signals to track movement of the eye.
Description
This application is a divisional application of Chinese patent application 202211169329.1, filed September 22, 2022 and titled "Eye Tracking Using Self-Mixing Interferometry".
Cross-Reference to Related Applications
This application is a non-provisional of, and claims the benefit under 35 U.S.C. 119(e) of, U.S. Provisional Patent Application No. 63/247,188, filed September 22, 2021, the contents of which are incorporated herein by reference.
Technical Field
The described embodiments relate generally to optical sensing and, more specifically, to tracking eye movement using optical sensors.
Background
Eye monitoring technology can be used to improve near-eye displays (e.g., head-mounted displays (HMDs)), augmented reality (AR) systems, virtual reality (VR) systems, and the like. For example, gaze vector tracking, also referred to as gaze position tracking, can be used as an input for foveated rendering of a display or for human-computer interaction. Traditional eye monitoring techniques are camera-based or video-based and rely on active illumination of the eye, eye image acquisition, and extraction of eye features such as the pupil center and corneal glints. The power consumption, form factor, computational cost, and latency of such eye monitoring techniques can be a significant burden for more user-friendly, next-generation HMD, AR, and VR systems (e.g., systems that are lighter weight, battery-powered, and more fully featured).
Summary
Embodiments of the systems, devices, methods, and apparatuses described in this disclosure use one or more self-mixing interferometry (SMI) sensors to track eye movement. In some embodiments, the SMI sensors may be used alone, or in combination with a camera, to determine a gaze vector or eye position.
In a first aspect, the present disclosure describes an eye tracking device. The eye tracking device can include a head-mounted frame, an optical sensor subsystem mounted to the head-mounted frame, and a processor. The optical sensor subsystem can include a set of one or more SMI sensors. The processor can be configured to operate the optical sensor subsystem to cause the set of one or more SMI sensors to emit a set of one or more light beams toward an eye of a user; receive a set of one or more SMI signals from the set of one or more SMI sensors; and use the set of one or more SMI signals to track movement of the eye.
In a second aspect, the present disclosure describes another eye tracking device. The eye tracking device can include a set of one or more SMI sensors, a camera, and a processor. The processor can be configured to cause the camera to acquire a set of images of a user's eye at a first frequency; cause the set of one or more SMI sensors to emit a set of one or more light beams toward the user's eye; sample, at a second frequency greater than the first frequency, a set of one or more SMI signals generated by the set of one or more SMI sensors; use at least one image of the set of images to determine a gaze vector of the eye; and use the set of one or more SMI signals to track movement of the eye.
In a third aspect, the present disclosure describes a method of tracking the movement of an eye. The method can include operating an optical sensor subsystem such that a set of one or more SMI sensors in the optical sensor subsystem emits a set of one or more light beams toward an eye of a user; receiving a set of one or more SMI signals from the set of one or more SMI sensors; and tracking movement of the eye using the set of one or more SMI signals.
In addition to the exemplary aspects and embodiments described above, further aspects and embodiments will become apparent by reference to the drawings and by study of the following description.
Brief Description of the Drawings
The present disclosure will be readily understood from the following detailed description taken in conjunction with the accompanying drawings, in which like reference numerals designate like structural elements, and in which:
FIG. 1 shows an exemplary block diagram of an eye tracking device;
FIG. 2A shows an exemplary graph of the tracked angular velocity of an eye;
FIG. 2B shows an exemplary graph of the tracked gaze of an eye;
FIG. 3A shows a first exemplary eye tracking device, in which an optical sensing subsystem and a processor are mounted to a pair of glasses;
FIG. 3B shows a second exemplary eye tracking device, in which an optical sensing subsystem and a processor are mounted to a VR headset;
FIG. 4A shows a side view of an exemplary set of SMI sensors that may be mounted to a head-mounted frame and configured to emit light toward an eye;
FIG. 4B shows a front view of the eye and the set of SMI sensors shown in FIG. 4A;
FIG. 5 shows an exemplary front view of a first alternative set of SMI sensors that may be mounted to a head-mounted frame and configured to emit light toward an eye;
FIG. 6 shows an exemplary front view of a second alternative set of SMI sensors that may be mounted to a head-mounted frame and configured to emit light toward an eye;
FIG. 7 shows an exemplary side view of a third alternative set of SMI sensors that may be mounted to a head-mounted frame and configured to emit light toward an eye;
FIG. 8A shows an exemplary use of an SMI sensor in combination with a beam splitter;
FIG. 8B shows an exemplary use of an SMI sensor in combination with a beam-steering component;
FIG. 9A shows a first exemplary integration of an SMI sensor with a display subsystem;
FIG. 9B shows a second exemplary integration of an SMI sensor with a display subsystem;
FIG. 10 shows an exemplary set of components that may be included in an optical sensor subsystem of an eye tracking device;
FIG. 11A shows a first exemplary method for tracking eye movement using a set of one or more SMI sensors;
FIG. 11B shows a second exemplary method for tracking eye movement using a set of one or more SMI sensors in combination with a camera or other sensor; and
FIGS. 12A and 12B show how a set of one or more SMI sensors can be used to map one or more surfaces or structures of an eye.
The use of cross-hatching or shading in the accompanying figures is generally provided to clarify the boundaries between adjacent elements and also to facilitate legibility of the figures. Accordingly, neither the presence nor the absence of cross-hatching or shading conveys or indicates any preference or requirement for particular materials, material properties, element proportions, element dimensions, commonalities of similarly illustrated elements, or any other characteristic, attribute, or property of any element illustrated in the figures.
Additionally, it should be understood that the proportions and dimensions (either relative or absolute) of the various features and elements (and collections and groupings thereof), and the boundaries, separations, and positional relationships presented between them, are provided in the accompanying figures merely to facilitate an understanding of the various embodiments described herein and, accordingly, may not necessarily be presented or illustrated to scale, and are not intended to indicate any preference or requirement for an illustrated embodiment to the exclusion of embodiments described with reference thereto.
Detailed Description
Reference will now be made in detail to representative embodiments illustrated in the accompanying drawings. It should be understood that the following description is not intended to limit the embodiments to one preferred embodiment. To the contrary, it is intended to cover alternatives, modifications, and equivalents as can be included within the spirit and scope of the described embodiments as defined by the appended claims.
Most eye tracking systems are camera-based or image-based and cannot track eye movement fast enough, or with high enough precision, using a reasonable amount of power. Lower-power, lower-cost systems with higher precision and sensitivity are needed.
Fast and accurate detection and classification of eye movements (such as distinguishing between smooth pursuit, saccades, fixations, nystagmus, and blinks) can be mission-critical, but can also be challenging for camera-based, video-based, or photodetector-based tracking systems, especially under tight power budgets. For example, fixations can be as short as tens of microseconds and as subtle as <0.25 degrees/second (deg/s) or <0.5 deg/s. Detecting such eye movements would seem to require high-resolution, high-frame-rate image acquisition and processing systems.
Previously, low-power imaging-based eye odometry has been proposed, in which reduced-resolution, higher-frame-rate image capture is fused with higher-resolution, lower-frame-rate image capture for overall power savings (e.g., compared with systems that only acquire higher-resolution images at a higher frame rate). As an alternative, photodiode-array-based eye trackers with lower latency and lower power consumption have been proposed, but they have not yet proven useful in higher-resolution, higher-precision applications. Photodiode-array-based eye trackers are therefore better suited to binary sensing applications, such as waking a mission-critical video-based eye tracker, or wake-up in near-eye display (or HMD) systems that rely on coarse gaze-zone detection or gaze-movement thresholds.
The following description relates to the use of SMI sensors, alone or in combination with other sensors (such as cameras or other image-based sensors), to track eye movement. For purposes of this description, an SMI sensor is considered to include a light emitter (e.g., a laser light source) and an SMI signal detector (e.g., a photodetector, such as a photodiode, or an electrical detector, such as a circuit that measures the junction voltage or drive current of the light emitter). Each SMI sensor in a set of one or more SMI sensors may emit one or more fixed or scanned light beams toward one or more structures of a user's eye (e.g., toward one or more of the eye's iris, sclera, pupil, lens, limbus, eyelid, and so on). The SMI sensors can be operated in accordance with operational safety regulations so as not to harm the user's eyes.
After emitting a light beam, an SMI sensor may receive a retroreflected portion of the emitted light back into its resonant cavity. For good-quality retroreflection, it can be useful to focus the beam emitted by the SMI sensor on the iris or sclera (or another diffusing structure) of the eye, rather than on the cornea or pupil. The phase of the retroreflected portion of the emitted light mixes with the phase of the light generated within the resonant cavity and can produce an SMI signal that can be amplified and detected. The amplified SMI signal, and in some cases multiple SMI signals, can be analyzed. The rotational movement of the eye can be retrieved and reconstructed by phase tracking of the Doppler frequencies from multiple orientations and positions on the user's eye.
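As an illustrative aside (not part of the original application), the Doppler relationship underlying this phase tracking can be sketched as follows. For a retroreflected beam, the beat frequency is f_D = 2·v_los/λ, where v_los is the line-of-sight velocity of the illuminated eye surface. The 940 nm wavelength and the function names below are assumptions chosen for the example, not values from this disclosure.

```python
# Illustrative sketch (assumed names and wavelength, not from the patent):
# relating an SMI beat (Doppler) frequency to line-of-sight surface velocity.

def doppler_to_velocity(f_doppler_hz: float, wavelength_m: float) -> float:
    """Line-of-sight velocity (m/s) implied by an SMI Doppler frequency."""
    return f_doppler_hz * wavelength_m / 2.0

def velocity_to_doppler(v_los_m_s: float, wavelength_m: float) -> float:
    """Doppler beat frequency (Hz) produced by a surface moving at v_los."""
    return 2.0 * v_los_m_s / wavelength_m

# Example: a 940 nm beam observing a scleral surface moving at 10 mm/s
# along the beam produces a beat frequency of roughly 21.3 kHz.
f_beat = velocity_to_doppler(0.010, 940e-9)
```

At typical near-infrared wavelengths, even mm/s-scale surface motion therefore produces kHz-range beat frequencies, which modest electronics can sample.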
SMI sensors can be used at high sampling rates to classify and quantify user gaze behaviors, such as smooth pursuit, saccades, fixations, nystagmus, and blinks, to facilitate efficient, high-fidelity rendering of digital content (numbers, text, or images) on near-eye display (or HMD) systems.
In some embodiments, SMI sensor data or determinations can be fused with absolute gaze-direction sensing information (e.g., gaze vector sensing or gaze position sensing) acquired from a lower-sampling-rate gaze imaging system (e.g., a camera), thereby achieving absolute gaze tracking at much higher speed and with good accuracy. In contrast to imaging systems, which may include a million or more pixels, SMI sensor data may be obtained from just one or a few (e.g., two or three) SMI sensors. This can enable the SMI sensors to generate SMI signals (or SMI sensor data) at much lower power consumption than imaging systems.
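One way such fusion could work is sketched below under assumed data formats (the function name and sample layout are illustrative, not from this disclosure): high-rate SMI angular-velocity samples are integrated between camera frames, and the absolute gaze estimate is re-anchored whenever a new camera-derived gaze sample arrives.

```python
# Hypothetical fusion sketch: a low-rate camera supplies absolute gaze
# samples that re-anchor a high-rate gaze estimate integrated from SMI
# angular-velocity samples between camera frames.

def fuse_gaze(camera_samples, smi_samples):
    """camera_samples: list of (t, absolute_gaze_deg), sorted by time.
    smi_samples: list of (t, angular_velocity_deg_s), sorted by time.
    Returns a list of (t, fused_gaze_deg)."""
    fused = []
    cam_iter = iter(camera_samples)
    anchor_t, gaze = next(cam_iter)          # first absolute anchor
    next_cam = next(cam_iter, None)
    prev_t = anchor_t
    for t, omega in smi_samples:
        if t < anchor_t:
            continue                          # no anchor yet
        # Re-anchor whenever a newer absolute camera sample is available.
        while next_cam is not None and next_cam[0] <= t:
            prev_t, gaze = next_cam
            next_cam = next(cam_iter, None)
        gaze += omega * (t - prev_t)          # integrate SMI velocity
        prev_t = t
        fused.append((t, gaze))
    return fused
```

The camera correction bounds the drift that pure integration of the SMI velocity samples would otherwise accumulate.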
When the wavelength of the light emitted by an SMI sensor is modulated, the SMI signal obtained from the sensor can be used for absolute ranging of surface, interface, and volume structures of the user's eye, with a resolution on the order of about 100 μm. Such absolute distance measurements can provide an anchor for tracking the displacement of eye contours during eye rotation.
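The wavelength-modulated ranging described here follows the usual swept-frequency (FMCW-style) relation: sweeping the optical frequency at a rate dν/dt converts the round-trip delay into a beat frequency f_beat = (2L/c)·(dν/dt). The sketch below is illustrative only; the names and the example sweep rate are assumptions, not values from this disclosure.

```python
# Hedged sketch of FMCW-style absolute ranging with a wavelength-modulated
# SMI sensor: L = c * f_beat / (2 * dnu_dt), where dnu_dt is the optical
# frequency sweep rate in Hz/s.

C = 299_792_458.0  # speed of light, m/s

def beat_to_distance(f_beat_hz: float, dnu_dt_hz_per_s: float) -> float:
    """Absolute target distance (m) implied by a measured beat frequency."""
    return C * f_beat_hz / (2.0 * dnu_dt_hz_per_s)

def distance_to_beat(distance_m: float, dnu_dt_hz_per_s: float) -> float:
    """Beat frequency (Hz) produced by a target at the given distance."""
    return 2.0 * distance_m * dnu_dt_hz_per_s / C

# Example: an assumed sweep of 100 GHz over 1 ms (dnu_dt = 1e14 Hz/s)
# and a target 15 mm away give a beat frequency of roughly 10 kHz.
f_beat = distance_to_beat(0.015, 1e14)
```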
When the wavelength of the light emitted by one or more SMI sensors is modulated, and when the beam emitted by at least one SMI sensor is scanned and/or multiple beams are emitted, a displacement and/or velocity map (also referred to as a Doppler cloud) and/or a distance map (also referred to as a depth cloud) can be obtained or constructed. Additionally or alternatively, a Doppler cloud can also be obtained or constructed when the wavelength of the emitted light is not modulated. A Doppler cloud can have natively high resolution, for example at the μm or sub-μm level for single-frame displacement, or down to the mm/s or deg/s level for velocity. Single or multiple frames of a Doppler cloud can be considered a differential depth cloud. Additionally or alternatively, measurements from single or multiple frames of a Doppler cloud can be processed in real time to match predefined and/or locally calibrated differential maps or libraries, and to extract eye-tracking information and/or position information (also referred to as pose information). Locally calibrated differential maps or libraries can be obtained using, but not limited to, cameras, depth clouds, and so on. Using Doppler clouds alone, or fused with depth clouds or other sensing modalities (e.g., eye camera images, motion sensors, etc.), can provide an accurate and efficient way of tracking eye movement or position information.
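A minimal sketch of the library-matching idea follows (the data layout and function name are assumptions for illustration, not the patent's method): each library entry holds the per-beam values expected for a known pose, and a measured frame is assigned the pose of the entry with the smallest squared residual.

```python
# Hypothetical sketch: matching one Doppler-cloud frame against a locally
# calibrated library of per-pose reference frames by least-squares residual.

def match_pose(frame, library):
    """frame: tuple of per-beam measurements;
    library: dict mapping pose label -> reference tuple of same length.
    Returns the pose label with the smallest squared residual."""
    def residual(ref):
        return sum((f - r) ** 2 for f, r in zip(frame, ref))
    return min(library, key=lambda pose: residual(library[pose]))
```

A real system would likely interpolate between library entries and fuse the result with other modalities, as the paragraph above suggests; this sketch only shows the nearest-entry lookup.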
These and other systems, devices, methods, and apparatuses are described with reference to FIGS. 1-12B. However, those skilled in the art will readily appreciate that the detailed description given herein with respect to these figures is for explanatory purposes only and should not be construed as limiting.
Directional terminology, such as "top", "bottom", "upper", "lower", "front", "back", "over", "under", "above", "below", "left", "right", and so on, is used with reference to the orientation of some of the components in some of the figures described below. Because components in various embodiments can be positioned in a number of different orientations, directional terminology is used for purposes of illustration only and is in no way limiting. The directional terminology is intended to be construed broadly, and therefore should not be interpreted to preclude components being oriented in different ways. Also, as used herein, the phrase "at least one of" preceding a series of items, with the term "and" or "or" to separate any of the items, modifies the list as a whole, rather than each member of the list. The phrase "at least one of" does not require selection of at least one of each item listed; rather, the phrase allows a meaning that includes at least one of any one of the items, and/or at least one of any combination of the items, and/or at least one of each of the items. By way of example, the phrases "at least one of A, B, and C" or "at least one of A, B, or C" each refer to only A, only B, or only C; any combination of A, B, and C; and/or at least one of each of A, B, and C. Similarly, it may be appreciated that an order of elements presented for a conjunctive or disjunctive list provided herein should not be construed as limiting the disclosure to only that order.
FIG. 1 shows an exemplary block diagram of an eye tracking device 100. The eye tracking device 100 may include a head-mounted frame 102, an optical sensor subsystem 104 mounted to the head-mounted frame 102, and a processor 106. In some embodiments, the eye tracking device 100 may also include one or more of a display subsystem 108, a communication subsystem 110 (e.g., a wireless and/or wired communication subsystem), and a power distribution subsystem 112. The processor 106, display subsystem 108, communication subsystem 110, and power distribution subsystem 112 may be partly or wholly mounted to the head-mounted frame 102; or housed in a case or electronic device (e.g., a phone, or a wearable device such as a watch) that is in radio-frequency (i.e., wireless) or electrical (e.g., wired) communication with one or more components mounted to the head-mounted frame 102; or distributed between the head-mounted frame 102 and such a case or electronic device. The optical sensor subsystem 104, processor 106, display subsystem 108, communication subsystem 110, and/or power distribution subsystem 112 may communicate over one or more buses 116, or over the air (i.e., wirelessly), using one or more communication protocols.
The head-mounted frame 102 may take the form of a pair of glasses, a set of goggles, an augmented reality (AR) headset, a virtual reality (VR) headset, or another form of head-mounted frame.
The optical sensor subsystem 104 may include a set of one or more SMI sensors 114. Each SMI sensor in the set of one or more SMI sensors may include a light emitter and a light detector. The light emitter may include one or more of a vertical-cavity surface-emitting laser (VCSEL), an edge-emitting laser (EEL), a vertical external-cavity surface-emitting laser (VECSEL), a quantum-dot laser (QDL), a quantum cascade laser (QCL), or a light-emitting diode (LED) (e.g., an organic LED (OLED), a resonant-cavity LED (RC-LED), a micro-LED (mLED), a superluminescent LED (SLED), or an edge-emitting LED), among others. In some cases, the light detector (or photodetector) may be positioned laterally adjacent to the light emitter (e.g., mounted or formed on the substrate on which the light emitter is mounted or formed). In other cases, the light detector may be stacked above or below the light emitter. For example, the light emitter may be a VCSEL, HCSEL, or EEL having a primary emission and a secondary emission, and the light detector may be formed epitaxially in the same epitaxial stack as the light emitter, such that the light detector receives some or all of the secondary emission. In these latter embodiments, the light emitter and light detector may be formed similarly (e.g., both the light emitter and the light detector may include a multiple-quantum-well (MQW) structure, but the light emitter may be forward-biased and the light detector (e.g., a resonant-cavity photodetector (RCPD)) may be reverse-biased). Alternatively, the light detector may be formed on a substrate, and the light emitter may be separately formed and mounted to the substrate, or positioned relative to the light detector, such that the light emitter's secondary light emission strikes the light detector. Alternatively, the light emitter may be formed on the substrate, and the light detector may be separately formed and mounted to the substrate, or positioned relative to the light emitter, such that the light emitter's secondary light emission strikes the light detector.
In some embodiments, the optical sensor subsystem 104 may include a set of fixed or movable optical components (e.g., one or more lenses, gratings, filters, beam splitters, beam-steering components, etc.). The optical sensor subsystem 104 may also include an image sensor (e.g., a camera that includes an image sensor).
The processor 106 may include any electronic device capable of processing, receiving, or transmitting data or instructions, whether such data or instructions are in the form of software or firmware or are otherwise encoded. For example, the processor 106 may include a microprocessor, a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a controller, or a combination of such devices. As used herein, the term "processor" is intended to encompass a single processor or processing unit, multiple processors, multiple processing units, or other suitably configured computing element or elements.
In some embodiments, the components of the eye tracking device 100 may be controlled by multiple processors. For example, select components of the eye tracking device 100 (e.g., the optical sensor subsystem 104) may be controlled by a first processor, and other components of the eye tracking device 100 (e.g., the display subsystem 108 and/or the communication subsystem 110) may be controlled by a second processor, where the first and second processors may or may not be in communication with each other.
In some embodiments, the display subsystem 108 may include a display having one or more light-emitting elements including, for example, LEDs, OLEDs, a liquid-crystal display (LCD), an electroluminescent (EL) display, or other types of display elements.
The communication subsystem 110 may enable the eye tracking device 100 to transmit data to, or receive data from, a user or another electronic device. The communication subsystem 110 may include a touch-sensing input surface, a crown, one or more microphones or speakers, or a wired or wireless (e.g., radio-frequency (RF) or optical) communication interface configured to transmit electronic, RF, or optical signals. Examples of wireless and wired communication interfaces include, but are not limited to, cellular and Wi-Fi communication interfaces, among others.
The power distribution subsystem 112 may be implemented with any collection of power sources and/or conductors capable of delivering energy to the eye tracking device 100 or its components. In some cases, the power distribution subsystem 112 may include one or more batteries or rechargeable batteries. Additionally or alternatively, the power distribution subsystem 112 may include a power connector or power cord that can be used to connect the eye tracking device 100, or components thereof, to a remote power source, such as a wall outlet, a remote battery pack, or an electronic device to which the eye tracking device 100 is tethered.
The processor 106 may be configured to operate the optical sensor subsystem 104. Operating the optical sensor subsystem 104 may include causing the power distribution subsystem 112 to power the optical sensor subsystem 104, providing control signals to the set of one or more SMI sensors 114, and/or providing control signals that electrically, electromechanically, or otherwise focus or adjust the optical components of the optical sensor subsystem 104. Operating the optical sensor subsystem 104 may cause the set of one or more SMI sensors 114 to emit a set of one or more light beams 118 toward an eye 120 of a user. The processor 106 may also be configured to receive a set of one or more SMI signals from the set of one or more SMI sensors 114, and to track rotational movement of the eye 120 using the set of one or more SMI signals.
在一些情况下,光学传感器子系统104、处理器106、显示子系统108、通信子系统110和/或配电子系统112可以通过一个或多个总线通信,所述总线通常被描绘为总线116。In some cases, optical sensor subsystem 104 , processor 106 , display subsystem 108 , communication subsystem 110 , and/or distribution subsystem 112 may communicate via one or more buses, generally depicted as bus 116 .
在一些情况下,跟踪眼睛120的旋转移动可以包括估计眼睛120的角速度(或凝视移动)。在一些情况下,可以在三个正交方向中的每个正交方向上(例如,在x、y和z方向上)跟踪角速度。在不同方向上跟踪角速度可能需要扫描由所述一组一个或多个SMI传感器114发射的一个或多个光束,分离由所述一组一个或多个SMI传感器114中的一个或多个SMI传感器发射的一个或多个光束,或者配置所述一组一个或多个SMI传感器114以在不同方向上(并且优选地在不同的正交方向上)发射两个或更多个光束。In some cases, tracking the rotational movement of eye 120 may include estimating the angular velocity (or gaze movement) of eye 120. In some cases, angular velocity may be tracked in each of three orthogonal directions (e.g., in the x, y, and z directions). Tracking angular velocity in different directions may require scanning the one or more light beams emitted by the set of one or more SMI sensors 114, splitting the one or more light beams emitted by one or more SMI sensors in the set of one or more SMI sensors 114, or configuring the set of one or more SMI sensors 114 to emit two or more light beams in different directions (and preferably in different orthogonal directions).
跟踪眼睛120的旋转移动还可以包括跟踪眼睛120的凝视(或凝视位置)。Tracking the rotational movement of the eye 120 may also include tracking the gaze (or gaze position) of the eye 120 .
在一些实施方案中,处理器106可以被配置为对用户的旋转眼睛移动进行分类。举例来说,旋转眼睛移动可以被分类为平滑追踪、扫视、注视、眼球震颤或眨眼中的至少一者。然后,处理器106可以使显示子系统108响应于旋转眼睛移动的分类而调整显示器上的一个或多个图像的渲染。在一些实施方案中,处理器106可以被进一步配置为量化所分类的旋转眼睛移动(例如,用户眨眼有多快或有多少,用户眼睛移动的角速度是多少,或者用户眼睛120的角速度多久或多快变化),并且可以响应于量化的旋转眼睛移动进一步调整一个或多个图像的渲染。In some embodiments, the processor 106 may be configured to classify the user's rotational eye movement. For example, rotational eye movement may be classified as at least one of smooth pursuit, saccade, fixation, nystagmus, or blink. Processor 106 may then cause display subsystem 108 to adjust the rendering of one or more images on the display in response to the classification of the rotational eye movement. In some embodiments, the processor 106 may be further configured to quantify the classified rotational eye movement (e.g., how fast or how much the user blinks, what is the angular velocity of the user's eye movement, or how often or how quickly the angular velocity of the user's eye 120 changes), and may further adjust the rendering of the one or more images in response to the quantified rotational eye movement.
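The classification described above can be illustrated with a minimal sketch. The velocity thresholds, function names, and the separate eyelid signal below are illustrative assumptions, not values from this disclosure; a practical classifier would also detect nystagmus from oscillation in the velocity signal rather than from a single sample.

```python
import math

# Illustrative thresholds in deg/s (assumed values, for sketch only).
SACCADE_MIN = 100.0
PURSUIT_MIN = 1.0

def classify_movement(angular_velocity_dps, eyelid_closed=False):
    """Classify one sample of rotational eye movement.

    angular_velocity_dps: (wx, wy) angular velocity in deg/s.
    eyelid_closed: flag from a hypothetical separate lid signal.
    """
    speed = math.hypot(*angular_velocity_dps)
    if eyelid_closed:
        return "blink"
    if speed >= SACCADE_MIN:
        return "saccade"
    if speed >= PURSUIT_MIN:
        return "smooth_pursuit"
    return "fixation"

print(classify_movement((250.0, 30.0)))  # fast jump -> saccade
print(classify_movement((0.2, 0.1)))     # nearly still -> fixation
```

A quantifying step, as described above, could then report the measured speed alongside the label.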
在一些实施方案中,处理器106可以被配置为响应于眼睛120的移动而使显示子系统108改变显示器的状态。改变显示器的状态可以包括改变所显示的内容,但也可以或替代地包括将显示器从低功率或关闭状态转变为较高功率或打开状态,或者替代地,将显示器从较高功率或打开状态转变为低功率或关闭状态。In some embodiments, processor 106 may be configured to cause display subsystem 108 to change the state of the display in response to movement of eye 120 . Changing the state of the display may include changing what is displayed, but may also or alternatively include transitioning the display from a low power or off state to a higher power or on state, or alternatively, transitioning the display from a higher power or on state to a low power or off state.
在一些实施方案中,显示器可以包括安装在基板上的显示像素阵列,并且所述一组一个或多个SMI传感器114可以包括安装在基板上、邻近显示像素阵列或在显示像素阵列内的至少一个SMI传感器。在其他实施方案中,可以将SMI传感器114与显示器分开提供。In some embodiments, the display may include an array of display pixels mounted on a substrate, and the set of one or more SMI sensors 114 may include at least one SMI sensor mounted on the substrate, adjacent to, or within the array of display pixels. In other embodiments, the SMI sensor 114 may be provided separately from the display.
图2A示出了眼睛的被跟踪角速度(或凝视移动)的示例性图200。举例来说,图200示出了在仅两个正交方向上跟踪角速度。当眼睛的外表面或另一结构被建模为二维对象时,只能在两个维度上跟踪角速度。替代地,可以在三个维度上更准确地跟踪角速度。FIG. 2A shows an exemplary graph 200 of tracked angular velocity (or gaze movement) of an eye. For example, graph 200 shows tracking angular velocity in only two orthogonal directions. When the outer surface of the eye or another structure is modeled as a two-dimensional object, angular velocity can only be tracked in two dimensions. Alternatively, angular velocity can be more accurately tracked in three dimensions.
图2B示出了眼睛的被跟踪凝视(或凝视向量或凝视位置)的示例性图210。举例来说,图210示出了仅在两个正交方向上跟踪凝视。当眼睛的外表面或另一结构被建模为二维对象时,只能在两个维度上跟踪凝视。替代地,可以在三个维度上更准确地跟踪凝视。FIG. 2B shows an exemplary graph 210 of a tracked gaze (or gaze vector or gaze location) of an eye. For example, diagram 210 shows that gaze is tracked in only two orthogonal directions. Gaze can only be tracked in two dimensions when the outer surface of the eye or another structure is modeled as a two-dimensional object. Alternatively, gaze can be more accurately tracked in three dimensions.
当设备或应用需要检测和/或分类细微的眼睛移动(诸如平滑追踪、扫视、注视、眼球震颤和眨眼)时,基于SMI的传感可能是特别有用的,因为所有这些眼睛移动都可以通过跟踪眼睛的角速度来检测和分类,并且当涉及检测眼睛的角速度的变化时,基于SMI的传感通常比基于视频或基于光电检测器的传感更快、更准确并且更节能(例如,消耗更少的功率)。此外,通过凝视位置的初始和周期性(低频)校准,如图2B所示的高频凝视位置向量跟踪可以从凝视速度向量跟踪的集成获得,如图2A所示。SMI-based sensing may be particularly useful when a device or application needs to detect and/or classify subtle eye movements such as smooth pursuit, saccades, fixations, nystagmus, and blinks. All of these eye movements can be detected and classified by tracking the angular velocity of the eye, and when it comes to detecting changes in the eye's angular velocity, SMI-based sensing is generally faster, more accurate, and more energy-efficient (e.g., consumes less power) than video-based or photodetector-based sensing. Furthermore, with initial and periodic (low-frequency) calibration of gaze position, high-frequency gaze position vector tracking as shown in FIG. 2B can be obtained by integrating the gaze velocity vector tracking shown in FIG. 2A.
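The integration of gaze velocity into gaze position, with periodic low-frequency recalibration, can be sketched as follows. The function name, units (gaze angles in degrees), fixed sample interval, and calibration format are assumptions for illustration, not the disclosed implementation.

```python
def track_gaze(velocity_samples, dt, calibrations):
    """Integrate angular-velocity samples into gaze angles, snapping
    to an absolute fix whenever a low-rate calibration arrives.

    velocity_samples: (wx, wy) pairs in deg/s, sampled every dt seconds.
    calibrations: dict mapping sample index -> absolute (gx, gy) fix.
    Returns the gaze trajectory as (gx, gy) angles in degrees.
    """
    gaze = [0.0, 0.0]
    path = []
    for i, (wx, wy) in enumerate(velocity_samples):
        if i in calibrations:      # periodic low-frequency calibration
            gaze = list(calibrations[i])
        gaze[0] += wx * dt         # Euler integration of gaze velocity
        gaze[1] += wy * dt
        path.append(tuple(gaze))
    return path

# Ten samples drifting right at 10 deg/s, recalibrated at sample 5.
traj = track_gaze([(10.0, 0.0)] * 10, dt=0.001,
                  calibrations={0: (0.0, 0.0), 5: (0.05, 0.0)})
```

The calibration entries stand in for the low-frequency absolute gaze fixes; between them, only the high-rate SMI velocity samples are needed.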
图3A示出了第一示例性眼睛跟踪设备,其中光学传感子系统302和处理器304安装到一副眼镜300。举例来说,眼镜300(例如,一种类型的头戴式框架)被示出为包括第一镜框308和第二镜框310、将第一镜框308连接到第二镜框310的鼻架312、连接到第一镜框308的第一镜腿314和连接到第二镜框310的第二镜腿316。在一些实施方案中,眼镜300可以包括平视显示器或用作AR眼镜。FIG. 3A shows a first exemplary eye-tracking device in which an optical sensing subsystem 302 and a processor 304 are mounted to a pair of eyeglasses 300. For example, eyeglasses 300 (e.g., a type of head-mounted frame) are shown including a first frame 308 and a second frame 310, a bridge 312 connecting the first frame 308 to the second frame 310, a first temple 314 connected to the first frame 308, and a second temple 316 connected to the second frame 310. In some embodiments, eyeglasses 300 may include a heads-up display or function as AR glasses.
第一镜框308和第二镜框310中的每一者可以固持相应镜片,诸如第一镜片318或第二镜片320。镜片318、320可以或可以不放大、聚焦或以其他方式改变穿过镜片318、320的光。例如,镜片318、320可以矫正用户的视力、阻挡明亮或有害的光,或者简单地提供物理屏障,光可以在没有调整或最小调整的情况下穿过所述物理屏障。在一些实施方案中,第一镜片318和第二镜片320可以由玻璃或塑料形成。在一些实施方案中,第一镜片318和/或第二镜片320可以用作显示器(例如,无源显示屏),文本、数字和/或图像由显示子系统322投影在所述显示器上,所述显示子系统322也可以安装到所述一副眼镜。替代地,第一镜框308和/或第二镜框310可以固持透明或半透明显示器(例如,发光二极管(LED)、有机LED(OLED)或可由显示子系统322操作以显示文本、数字和/或图像的其他发光元件)。Each of the first frame 308 and the second frame 310 may hold a respective lens, such as the first lens 318 or the second lens 320. The lenses 318, 320 may or may not magnify, focus, or otherwise alter the light passing through them. For example, the lenses 318, 320 may correct the user's vision, block bright or harmful light, or simply provide a physical barrier through which light may pass with no or minimal adjustment. In some embodiments, first lens 318 and second lens 320 may be formed of glass or plastic. In some embodiments, first lens 318 and/or second lens 320 may serve as a display (e.g., a passive display screen) on which text, numbers, and/or images are projected by display subsystem 322, which may also be mounted to the pair of eyeglasses. Alternatively, first frame 308 and/or second frame 310 may hold a transparent or translucent display (e.g., a light emitting diode (LED), organic LED (OLED), or other light-emitting element operable by display subsystem 322 to display text, numbers, and/or images).
作为另一示例,光学传感子系统302可以如参考图1以及图5至图9B中的一者或多者所描述而配置,和/或处理器304可以被配置为如参考图1以及图10至图12B所描述而操作。光学感测子系统302的一个或多个部件(例如,SMI传感器、光学部件、相机等)可以安装在附接到第一镜框308、第二镜框310、鼻架312、第一镜腿314、第二镜腿316、第一镜片318或第二镜片320的一个或多个基板上,或者可以直接安装到这些部件中的一者或多者。类似地,处理器304、显示子系统322、通信子系统324和/或配电子系统326可以安装到这些部件中的一者或多者。在一些实施方案中,光学传感子系统302、处理器304、显示子系统322、通信子系统324和/或配电子系统326中的一部分或全部可以安装在眼镜300的一个或多个部件内,安装在无线或电连接到安装到眼镜300(例如,在用户的电话或可穿戴设备中)的一个或多个部件的设备内,或分布在眼镜300与无线或电连接到眼镜300的一个或多个部件的设备之间。As another example, optical sensing subsystem 302 may be configured as described with reference to FIG. 1 and one or more of FIGS. 5-9B, and/or processor 304 may be configured to operate as described with reference to FIGS. 1 and 10-12B. One or more components of optical sensing subsystem 302 (e.g., SMI sensors, optics, cameras, etc.) may be mounted on one or more substrates attached to first frame 308, second frame 310, bridge 312, first temple 314, second temple 316, first lens 318, or second lens 320, or may be mounted directly to one or more of these components. Similarly, processor 304, display subsystem 322, communication subsystem 324, and/or distribution subsystem 326 may be mounted to one or more of these components. In some embodiments, some or all of optical sensing subsystem 302, processor 304, display subsystem 322, communication subsystem 324, and/or distribution subsystem 326 may be mounted within one or more components of eyeglasses 300, mounted within a device that is wirelessly or electrically connected to one or more components mounted to eyeglasses 300 (e.g., in a user's phone or wearable device), or distributed between eyeglasses 300 and a device that is wirelessly or electrically connected to one or more components of eyeglasses 300.
处理器304、显示子系统322、通信子系统324和配电子系统326可以如参考图1所述进一步配置或操作。Processor 304 , display subsystem 322 , communication subsystem 324 , and distribution subsystem 326 may be further configured or operative as described with reference to FIG. 1 .
图3B示出了第二示例性眼睛跟踪设备,其中光学传感子系统352和处理器354安装到虚拟现实(VR)头显350。举例来说,VR头显(一种类型的头戴式框架)被示出为包括可以通过带356附接到用户的头部的VR模块358。FIG. 3B shows a second exemplary eye-tracking device in which an optical sensing subsystem 352 and a processor 354 are mounted to a virtual reality (VR) headset 350 . For example, a VR headset (one type of head-mounted frame) is shown including a VR module 358 that can be attached to a user's head by a strap 356 .
VR模块358可以包括显示子系统360。显示子系统360可以包括用于显示文本、数字和/或图像的显示器。VR module 358 may include display subsystem 360 . Display subsystem 360 may include a display for displaying text, numbers and/or images.
作为示例,光学传感子系统352可以如参考图1以及图5至图9B中的一者或多者所描述而配置,和/或处理器354可以被配置为如参考图1以及图10至图12B所描述而操作。光学感测子系统352的一个或多个部件(例如,SMI传感器、光学部件、相机等)可以安装在附接到VR模块358的一个或多个基板上,或者可以直接安装到VR模块358的外壳。类似地,处理器354、显示子系统360、通信子系统362和/或配电子系统364可以安装到VR模块358。在一些实施方案中,光学传感子系统352、处理器354、显示子系统360、通信子系统362和/或配电子系统364中的一部分或全部可以安装在无线或电连接到VR模块358的设备(例如,用户的电话或可穿戴设备)内,或分布在VR模块358与无线或电连接到VR模块358的设备之间。As an example, optical sensing subsystem 352 may be configured as described with reference to FIG. 1 and one or more of FIGS. 5-9B, and/or processor 354 may be configured to operate as described with reference to FIGS. 1 and 10-12B. One or more components of optical sensing subsystem 352 (e.g., SMI sensors, optics, cameras, etc.) may be mounted on one or more substrates attached to VR module 358, or may be mounted directly to the housing of VR module 358. Similarly, processor 354, display subsystem 360, communication subsystem 362, and/or distribution subsystem 364 may be mounted to VR module 358. In some embodiments, some or all of optical sensing subsystem 352, processor 354, display subsystem 360, communication subsystem 362, and/or distribution subsystem 364 may be installed within a device that is wirelessly or electrically connected to VR module 358 (e.g., in a user's phone or wearable device), or distributed between VR module 358 and a device wirelessly or electrically connected to VR module 358.
处理器354、显示子系统360、通信子系统362和配电子系统364可以如参考图1所述进一步配置或操作。Processor 354 , display subsystem 360 , communication subsystem 362 , and distribution subsystem 364 may be further configured or operative as described with reference to FIG. 1 .
图4A和图4B示出了可以安装到头戴式框架(诸如参考图1至图3B描述的头戴式框架中的一个头戴式框架)的一组示例性SMI传感器400、402。所述一组SMI传感器400、402可以形成光学传感器子系统的一部分,诸如参考图1至图3B或本文其他地方描述的光学子系统中的一个光学子系统。所述一组SMI传感器400、402可以被配置为朝向眼睛404发射光。图4A示出了眼睛404和一组SMI传感器400、402的侧视图,并且图4B示出了眼睛404和一组SMI传感器组400、402的正视图。FIGS. 4A and 4B illustrate a set of exemplary SMI sensors 400 , 402 that may be mounted to a head-mounted frame, such as one of the head-mounted frames described with reference to FIGS. 1-3B . The set of SMI sensors 400, 402 may form part of an optical sensor subsystem, such as one of the optical subsystems described with reference to Figures 1 to 3B or elsewhere herein. The set of SMI sensors 400 , 402 may be configured to emit light toward an eye 404 . FIG. 4A shows a side view of an eye 404 and a set of SMI sensors 400 , 402 , and FIG. 4B shows a front view of an eye 404 and a set of SMI sensors 400 , 402 .
举例来说,所述一组SMI传感器400、402被示出为包括图4A和图4B中的两个SMI传感器(例如,第一SMI传感器400和第二SMI传感器402)。在其他实施方案中,所述一组SMI传感器400、402可包括更多或更少的SMI传感器。第一SMI传感器400和第二SMI传感器402可以朝向眼睛404发射相应光束。在一些实施方案中,光束可以在不同方向上定向,所述不同方向可以是或可以不是正交方向。当光束在不同方向上定向时,接收由SMI传感器400、402生成的SMI信号的处理器可以在两个维度上跟踪眼睛404的移动(例如,处理器可以在两个维度上跟踪眼睛404的角速度)。光束可以在相同或不同位置处撞击眼睛404。举例来说,光束被示出为在相同位置撞击眼睛404。By way of example, the set of SMI sensors 400, 402 is shown to include the two SMI sensors in FIGS. 4A and 4B (eg, first SMI sensor 400 and second SMI sensor 402). In other embodiments, the set of SMI sensors 400, 402 may include more or fewer SMI sensors. The first SMI sensor 400 and the second SMI sensor 402 may emit respective light beams toward the eye 404 . In some embodiments, the light beams can be directed in different directions, which may or may not be orthogonal directions. A processor receiving the SMI signals generated by the SMI sensors 400, 402 may track the movement of the eye 404 in two dimensions (eg, the processor may track the angular velocity of the eye 404 in two dimensions) as the light beams are oriented in different directions. The light beams may strike the eye 404 at the same or different locations. For example, the light beam is shown hitting the eye 404 at the same location.
在一些实施方案中,由SMI传感器400、402发射的光可以由光学器件406或408(例如,一个或多个镜片或光束转向部件或其他光学部件)引导或过滤。In some embodiments, light emitted by the SMI sensors 400, 402 may be directed or filtered by optics 406 or 408 (eg, one or more mirrors or beam steering components or other optical components).
图5示出了可以安装到头戴式框架(诸如参考图1至图3B描述的头戴式框架中的一个头戴式框架)的一组替代性SMI传感器500、502、504的正视图。所述一组SMI传感器500、502、504可以形成光学传感器子系统的一部分,诸如参考图1至图3B或本文其他地方描述的光学子系统中的一个光学子系统。所述一组SMI传感器500、502、504可以被配置为朝向眼睛506发射光。FIG. 5 shows a front view of an alternative set of SMI sensors 500 , 502 , 504 that may be mounted to a head-mounted frame, such as one of the headsets described with reference to FIGS. 1-3B . The set of SMI sensors 500, 502, 504 may form part of an optical sensor subsystem, such as one of the optical subsystems described with reference to Figures 1 to 3B or elsewhere herein. The set of SMI sensors 500 , 502 , 504 may be configured to emit light toward an eye 506 .
与参考图4A和图4B描述的所述一组SMI传感器相比,图5所示的所述一组SMI传感器500、502、504包括三个SMI传感器(例如,第一SMI传感器500、第二SMI传感器502和第三SMI传感器504)。第一SMI传感器500、第二SMI传感器502和第三SMI传感器504可以朝向眼睛506发射相应光束。在一些实施方案中,光束可以在不同方向上定向,所述不同方向可以是或可以不是正交方向。当光束在不同方向上定向时,接收由SMI传感器500、502、504生成的SMI信号的处理器可以在三个维度上跟踪眼睛506的移动(例如,处理器可以在三个维度上跟踪眼睛506的角速度)。光束可以在相同或不同位置处撞击眼睛506。举例来说,光束被示出为在相同位置撞击眼睛506。In contrast to the set of SMI sensors described with reference to FIGS. 4A and 4B , the set of SMI sensors 500 , 502 , 504 shown in FIG. 5 includes three SMI sensors (eg, a first SMI sensor 500 , a second SMI sensor 502 , and a third SMI sensor 504 ). First SMI sensor 500 , second SMI sensor 502 , and third SMI sensor 504 may emit respective light beams toward eye 506 . In some embodiments, the light beams can be directed in different directions, which may or may not be orthogonal directions. A processor receiving the SMI signals generated by the SMI sensors 500, 502, 504 may track the movement of the eye 506 in three dimensions (e.g., the processor may track the angular velocity of the eye 506 in three dimensions) as the light beam is directed in different directions. The light beams may strike the eye 506 at the same or different locations. For example, the light beam is shown hitting the eye 506 at the same location.
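The three-beam arrangement lends itself to a short sketch: each SMI channel measures only the component of surface velocity along its own beam, and stacking the beam unit vectors gives a linear system whose least-squares solution recovers the full three-dimensional velocity. The beam geometry and function names below are assumptions for illustration, not the geometry shown in the figures.

```python
import numpy as np

# Unit vectors for three beam directions (an assumed, illustrative
# geometry); the rows must be non-coplanar for a unique solution.
beams = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [1.0, 1.0, 1.0],
])
beams /= np.linalg.norm(beams, axis=1, keepdims=True)

def solve_velocity(projected_speeds):
    """Recover a 3D surface velocity from per-beam Doppler speeds,
    where projected_speeds[i] = dot(v_true, beams[i])."""
    v, *_ = np.linalg.lstsq(beams, np.asarray(projected_speeds),
                            rcond=None)
    return v

v_true = np.array([0.01, -0.02, 0.005])  # m/s, arbitrary test value
measured = beams @ v_true                # what each SMI channel reports
recovered = solve_velocity(measured)     # matches v_true
```

With only two beams (as in FIGS. 4A and 4B) the same system is underdetermined in 3D, which is why only two-dimensional tracking is possible there.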
图6示出了可以安装到头戴式框架(诸如参考图1至图3B描述的头戴式框架中的一个头戴式框架)的第二组替代性SMI传感器600、602、604的示例性正视图。所述一组SMI传感器600、602、604可以形成光学传感器子系统的一部分,诸如参考图1至图3B或本文其他地方描述的光学子系统中的一个光学子系统。所述一组SMI传感器600、602、604可以被配置为朝向眼睛发射光。FIG. 6 shows an exemplary front view of a second set of alternative SMI sensors 600 , 602 , 604 that may be mounted to a head-mounted frame, such as one of the headsets described with reference to FIGS. 1-3B . The set of SMI sensors 600, 602, 604 may form part of an optical sensor subsystem, such as one of the optical subsystems described with reference to Figures 1 to 3B or elsewhere herein. The set of SMI sensors 600, 602, 604 may be configured to emit light toward the eye.
图6所示的所述一组SMI传感器600、602、604包括三个SMI传感器(例如,第一SMI传感器600、第二SMI传感器602和第三SMI传感器604)。第一SMI传感器600、第二SMI传感器602和第三SMI传感器604可以朝向眼睛606发射相应光束。在一些实施方案中,光束可以在不同方向上定向,所述不同方向可以是或可以不是正交方向。当光束在不同方向上定向时,接收由SMI传感器600、602、604生成的SMI信号的处理器可以在三个维度上跟踪眼睛606的移动(例如,处理器可以在三个维度上跟踪眼睛606的角速度)。与参考图5描述的所述一组SMI传感器相比,光束中的两个光束在相同位置处(例如,在第一位置处)撞击眼睛606,并且光束中的一个光束在不同位置处(例如,在不同于第一位置的第二位置处)撞击眼睛606。替代地,所有光束可以被定向成在相同位置处撞击眼睛606,或者所有光束可以被定向成在不同位置处撞击眼睛606。The set of SMI sensors 600, 602, 604 shown in FIG. 6 includes three SMI sensors (eg, a first SMI sensor 600, a second SMI sensor 602, and a third SMI sensor 604). The first SMI sensor 600 , the second SMI sensor 602 and the third SMI sensor 604 may emit respective light beams toward the eye 606 . In some embodiments, the light beams can be directed in different directions, which may or may not be orthogonal directions. A processor receiving the SMI signals generated by the SMI sensors 600, 602, 604 may track the movement of the eye 606 in three dimensions (e.g., the processor may track the angular velocity of the eye 606 in three dimensions) as the light beam is directed in different directions. Compared to the set of SMI sensors described with reference to FIG. 5 , two of the beams hit the eye 606 at the same location (e.g., at a first location) and one of the beams hits the eye 606 at a different location (e.g., at a second location different from the first location). Alternatively, all beams may be directed to strike the eye 606 at the same location, or all beams may be directed to strike the eye 606 at different locations.
包括所述一组SMI传感器600、602、604的光学传感器子系统还可包括相机608。类似于所述一组SMI传感器600、602、604,相机608可以安装到头戴式框架。相机608可以定位和/或定向成获取眼睛606的图像。图像可以是眼睛606的一部分或全部的图像。被配置为操作光学传感器子系统的处理器可以被配置为使用由相机608获取的图像和由一组SMI传感器600、602、604生成的SMI信号跟踪眼睛606的旋转移动。例如,处理器可以使用相机608以第一频率获取眼睛的一组图像。处理器还可以以大于第一频率的第二频率对所述一组一个或多个SMI信号进行采样。可以以时间重叠的方式或在不同时间并行地获取/采样图像和SMI信号样本。在一些情况下,处理器可以获取一个或多个图像;分析图像以确定眼睛606相对于SMI传感器600、602、604和/或由SMI传感器600、602、604发射的光的束的位置;并且如有必要,对光学传感器子系统进行调整以确保光束撞击眼睛606的期望结构。对光学传感器子系统的调整可以包括例如以下中的一者或多者:调整光束转向部件以使一个或多个光束转向、寻址并使得SMI传感器600、602、604的特定子集发射光,等等(参见例如图7和图8B的描述)。替代地(例如作为依赖于相机608的替代方案),SMI传感器600、602、604可以用于在用户移动其眼睛606时执行范围测量,并且可以将范围测量映射到眼睛模型以确定SMI传感器600、602、604是否聚焦在期望的眼睛结构上。眼睛模型可以是通用眼睛模型,或者是当光学传感器子系统在训练和眼睛模型生成模式下操作时针对特定用户生成的眼睛模型。The optical sensor subsystem including the set of SMI sensors 600 , 602 , 604 may also include a camera 608 . Similar to the set of SMI sensors 600, 602, 604, a camera 608 may be mounted to the head mounted frame. Camera 608 may be positioned and/or oriented to capture images of eye 606 . The image may be of a portion or all of the eye 606 . A processor configured to operate the optical sensor subsystem may be configured to track the rotational movement of the eye 606 using images acquired by the camera 608 and SMI signals generated by the set of SMI sensors 600 , 602 , 604 . For example, the processor may use the camera 608 to acquire a set of images of the eye at a first frequency. The processor may also sample the set of one or more SMI signals at a second frequency greater than the first frequency. Image and SMI signal samples may be acquired/sampled in time overlapping fashion or in parallel at different times. In some cases, the processor may acquire one or more images; analyze the images to determine the position of the eye 606 relative to the SMI sensors 600, 602, 604 and/or beams of light emitted by the SMI sensors 600, 602, 604; and, if necessary, make adjustments to the optical sensor subsystem to ensure that the beams hit the desired structure of the eye 606. 
Adjustments to the optical sensor subsystem may include, for example, one or more of: adjusting beam steering components to steer one or more beams, addressing and causing a specific subset of SMI sensors 600, 602, 604 to emit light, etc. (see, for example, the description of FIGS. 7 and 8B ). Alternatively (e.g., as an alternative to relying on the camera 608), the SMI sensors 600, 602, 604 may be used to perform range measurements as the user moves their eyes 606, and the range measurements may be mapped to an eye model to determine whether the SMI sensors 600, 602, 604 are focusing on the desired eye structure. The eye model may be a generic eye model, or a user-specific generated eye model when the optical sensor subsystem operates in the training and eye model generation modes.
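The range-to-model comparison mentioned above can be sketched with a simple spherical eye model. The geometry, tolerance, and function names below are illustrative assumptions; a realistic eye model would also account for corneal curvature and per-user parameters learned in the training mode.

```python
import math

def expected_range(sensor_pos, beam_dir, eye_center, eye_radius):
    """Distance from a sensor to a sphere modeling the eye surface,
    along a unit beam direction; returns None if the beam misses."""
    ox = [s - c for s, c in zip(sensor_pos, eye_center)]
    b = sum(d * o for d, o in zip(beam_dir, ox))
    c = sum(o * o for o in ox) - eye_radius ** 2
    disc = b * b - c
    if disc < 0:
        return None                  # beam misses the sphere
    return -b - math.sqrt(disc)      # near intersection

def on_target(measured_range, expected, tol=1e-3):
    """Accept a measurement that agrees with the model within tol."""
    return expected is not None and abs(measured_range - expected) <= tol

# Sensor 30 mm in front of a 12 mm-radius eye, aimed straight at it.
r = expected_range((0.0, 0.0, 0.030), (0.0, 0.0, -1.0),
                   (0.0, 0.0, 0.0), 0.012)
print(on_target(0.018, r))   # an 18 mm measurement matches the model
```

A measured range that falls outside the tolerance would indicate that the beam is not hitting the intended eye structure, prompting the beam-steering or sensor-addressing adjustments described above.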
包括所述一组SMI传感器600、602、604的光学传感器子系统还可包括一个或多个光发射器610、612,所述一个或多个光发射器能够照射眼睛606以用于拍摄眼睛606的图像(或用于其他目的)。光发射器610、612可以采用LED、激光器或显示器的其他发光元件等的形式。光发射器610、612可以发射可见光或非可见光(例如,IR光),这取决于相机608被配置为感测的光的类型。光发射器610、612可以用于提供泛光、扫描或光斑照明。The optical sensor subsystem including the set of SMI sensors 600, 602, 604 may also include one or more light emitters 610, 612 capable of illuminating the eye 606 for taking an image of the eye 606 (or for other purposes). The light emitters 610, 612 may take the form of LEDs, lasers, or other light emitting elements of a display, or the like. The light emitters 610, 612 may emit visible light or non-visible light (eg, IR light), depending on the type of light that the camera 608 is configured to sense. The light emitters 610, 612 may be used to provide flood, sweep or spot illumination.
在一些实施方案中,处理器可以使用由一组SMI传感器600、602、604生成的SMI信号来确定或跟踪眼睛606的凝视向量。在一些实施方案中,处理器可以使用利用相机608获取的一个或多个图像来确定或跟踪凝视向量。在一些实施方案中,处理器可以使用利用相机608获取的一个或多个图像与SMI信号组合来跟踪凝视向量。例如,可以使用一个或多个图像来确定凝视向量,然后可以使用SMI信号来跟踪眼睛606的移动并更新凝视向量(或者换句话说,确定凝视向量的移动)。In some embodiments, the processor may determine or track the gaze vector of the eye 606 using the SMI signals generated by the set of SMI sensors 600 , 602 , 604 . In some embodiments, the processor may use one or more images acquired with the camera 608 to determine or track a gaze vector. In some embodiments, the processor may use one or more images acquired using the camera 608 in combination with the SMI signal to track the gaze vector. For example, one or more images may be used to determine a gaze vector, and then the SMI signal may be used to track eye 606 movement and update the gaze vector (or in other words, determine movement of the gaze vector).
图7示出了可以安装到头戴式框架(诸如参考图1至图3B描述的头戴式框架中的一个头戴式框架)的第三组替代性SMI传感器700、702、704的示例性侧视图。所述一组SMI传感器700、702、704可以形成光学传感器子系统的一部分,诸如参考图1至图3B或本文其他地方描述的光学子系统中的一个光学子系统。所述一组SMI传感器700、702、704可以被配置为朝向眼睛706发射光。FIG. 7 shows an exemplary side view of a third set of alternative SMI sensors 700 , 702 , 704 that may be mounted to a headset frame, such as one of the headset frames described with reference to FIGS. 1-3B . The set of SMI sensors 700, 702, 704 may form part of an optical sensor subsystem, such as one of the optical subsystems described with reference to Figures 1 to 3B or elsewhere herein. The set of SMI sensors 700 , 702 , 704 may be configured to emit light toward an eye 706 .
图7所示的所述一组SMI传感器700、702、704包括三个SMI传感器(例如,第一SMI传感器700、第二SMI传感器702和第三SMI传感器704)。第一SMI传感器700、第二SMI传感器702和第三SMI传感器704可以朝向眼睛706发射相应光束,类似于光束可以如何由参考图4A和图4B(或图5或图6)描述的所述一组SMI传感器发射。然而,并非所有SMI传感器700、702、704可以同时发射光束。例如,第二SMI传感器702和第三SMI传感器704可以是SMI传感器的可寻址阵列的一部分,所述阵列在一些情况下可以包括多于两个SMI传感器。SMI传感器阵列可以耦合到可以用于寻址SMI传感器阵列中的不同SMI传感器或不同SMI传感器子集的电路系统。The set of SMI sensors 700, 702, 704 shown in FIG. 7 includes three SMI sensors (eg, a first SMI sensor 700, a second SMI sensor 702, and a third SMI sensor 704). First SMI sensor 700, second SMI sensor 702, and third SMI sensor 704 may emit respective beams toward eye 706, similar to how beams may be emitted by the set of SMI sensors described with reference to FIGS. 4A and 4B (or FIG. 5 or 6). However, not all SMI sensors 700, 702, 704 can emit beams at the same time. For example, second SMI sensor 702 and third SMI sensor 704 may be part of an addressable array of SMI sensors, which in some cases may include more than two SMI sensors. The SMI sensor array can be coupled to circuitry that can be used to address different SMI sensors or different subsets of SMI sensors in the SMI sensor array.
在一些情况下,由第二SMI传感器702和第三SMI传感器704发射的光束(并且在一些情况下,由SMI传感器阵列中的其他SMI传感器发射的光束)可以被导向共享镜片、镜片组、镜片阵列或一个或多个其他光学部件708。In some cases, light beams emitted by second SMI sensor 702 and third SMI sensor 704 (and in some cases, light beams emitted by other SMI sensors in the SMI sensor array) may be directed to a shared lens, lens set, lens array, or one or more other optical components 708.
在一些情况下,包括所述一组SMI传感器的所述光学传感器子系统还可包括相机710,所述相机710可以与参考图6所描述的相机类似地使用。In some cases, the optical sensor subsystem including the set of SMI sensors may also include a camera 710 that may be used similarly to the cameras described with reference to FIG. 6 .
在一些情况下,处理器可以出于不同目的选择性地操作(例如,激活和去激活)SMI传感器阵列(或不同SMI传感器子集)中的不同SMI传感器702、704。例如,处理器可以操作(或使用)电路系统以激活具有特定的一个或多个焦点的特定SMI传感器或特定SMI传感器子集。在一些情况下,处理器可以使相机710获取眼睛706的一个或多个图像。然后,处理器可以分析图像以确定眼睛706的凝视向量,并且可以激活聚焦在眼睛706的特定结构或区域(或多个结构或区域)上的一个或多个SMI传感器702、704(在SMI传感器阵列中)。处理器还可以激活其他SMI传感器,诸如SMI传感器700。In some cases, the processor may selectively operate (eg, activate and deactivate) different SMI sensors 702, 704 in an array of SMI sensors (or different subsets of SMI sensors) for different purposes. For example, a processor may operate (or use) circuitry to activate a particular SMI sensor or a particular subset of SMI sensors with a particular focal point(s). In some cases, processor may cause camera 710 to acquire one or more images of eye 706 . The processor may then analyze the image to determine the gaze vector of the eye 706, and may activate one or more SMI sensors 702, 704 (in an array of SMI sensors) focused on a particular structure or region (or structures or regions) of the eye 706. The processor can also activate other SMI sensors, such as SMI sensor 700 .
图8A示出了SMI传感器800与分束器802组合的示例性使用。SMI传感器800可以是参考图1至图7描述的任何SMI传感器,或下文描述的任何SMI传感器。分束器802可以定位成分离由SMI传感器发射的光束804。光束可以被分成多个光束806、808、810(例如,两个、三个或更多个光束)。在一些情况下,分束器802可以与重定向或转向多个光束806、808、810的一个或多个镜片或光束转向部件相关联。在一些情况下,多个光束806、808、810可以朝向眼睛上(或眼睛中)的共享焦点重定向。FIG. 8A shows an exemplary use of an SMI sensor 800 in combination with a beam splitter 802 . SMI sensor 800 may be any of the SMI sensors described with reference to FIGS. 1-7 , or any of the SMI sensors described below. Beam splitter 802 may be positioned to split beam 804 emitted by the SMI sensor. The light beam may be split into multiple light beams 806, 808, 810 (eg, two, three or more light beams). In some cases, beam splitter 802 may be associated with one or more optics or beam steering components that redirect or turn multiple light beams 806 , 808 , 810 . In some cases, multiple light beams 806, 808, 810 may be redirected toward a shared focal point on (or in) the eye.
图8B示出了SMI传感器850与光束转向部件852组合的示例性使用。SMI传感器850可以是参考图1至图7描述的任何SMI传感器,或下文描述的任何SMI传感器。光束转向部件852可以定位成使由SMI传感器850发射的光束854转向。处理器可以被配置为操作光束转向部件852并使光束854转向到眼睛的不同结构或区域。在一些实施方案中,光束转向部件852可以包括光束聚焦部件或镜片定位机构,可以调整所述光束聚焦部件或镜片定位机构以改变光束沿着其轴线的焦点。在一些实施方案中,光束转向部件852可以用光束聚焦部件代替。FIG. 8B shows an exemplary use of an SMI sensor 850 in combination with a beam steering component 852 . The SMI sensor 850 may be any of the SMI sensors described with reference to FIGS. 1-7 , or any of the SMI sensors described below. Beam steering component 852 may be positioned to steer beam 854 emitted by SMI sensor 850 . The processor may be configured to operate the beam steering component 852 and steer the beam 854 to different structures or regions of the eye. In some embodiments, the beam steering component 852 can include a beam focusing component or lens positioning mechanism that can be adjusted to change the focus of the beam along its axis. In some embodiments, the beam steering component 852 can be replaced with a beam focusing component.
图9A示出了SMI传感器900与显示子系统的第一示例性集成。SMI传感器900可以是参考图1至图7描述的任何SMI传感器,或下文描述的任何SMI传感器。在一些示例中,显示子系统可以是参考图1、图3A或图3B描述的显示子系统。FIG. 9A shows a first exemplary integration of an SMI sensor 900 with a display subsystem. SMI sensor 900 may be any of the SMI sensors described with reference to FIGS. 1-7 , or any of the SMI sensors described below. In some examples, the display subsystem may be the display subsystem described with reference to FIG. 1 , FIG. 3A , or FIG. 3B .
显示子系统可以包括安装或形成在基板908上的显示像素902、904、906的阵列。举例来说,显示子系统被示出为包括蓝色像素902、绿色像素904和红色像素906,但是显示子系统可以包括每个蓝色像素、绿色像素和红色像素的多个实例。在一些情况下,显示像素902、904、906可以包括LED或其他类型的发光元件。The display subsystem may include an array of display pixels 902 , 904 , 906 mounted or formed on a substrate 908 . For example, the display subsystem is shown as including blue pixel 902, green pixel 904, and red pixel 906, although the display subsystem may include multiple instances of each blue, green, and red pixel. In some cases, display pixels 902, 904, 906 may include LEDs or other types of light emitting elements.
SMI传感器900可以安装在基板908上。举例来说,SMI传感器900被示出为安装在基板908上,邻近显示像素902、904、906的阵列。替代地,SMI传感器900可以在显示像素902、904、906的阵列内(即,在显示像素之间)安装在基板908上。在一些实施方案中,可以将多于一个SMI传感器900安装在基板908上,其中每个SMI传感器900被定位成邻近显示像素902、904、906的阵列或在所述显示像素的阵列内。在一些实施方案中,SMI传感器900可以发射IR光。在其他实施方案中,SMI传感器900可以发射可见光、紫外光或其他类型的光。在一些实施方案中,显示像素902、904、906中的一个或多个显示像素可以作为SMI传感器操作。SMI sensor 900 may be mounted on substrate 908 . For example, an SMI sensor 900 is shown mounted on a substrate 908 adjacent to an array of display pixels 902 , 904 , 906 . Alternatively, SMI sensor 900 may be mounted on substrate 908 within the array of display pixels 902, 904, 906 (ie, between display pixels). In some implementations, more than one SMI sensor 900 may be mounted on the substrate 908, with each SMI sensor 900 positioned adjacent to or within an array of display pixels 902, 904, 906. In some embodiments, the SMI sensor 900 can emit IR light. In other embodiments, SMI sensor 900 may emit visible light, ultraviolet light, or other types of light. In some implementations, one or more of display pixels 902, 904, 906 may operate as an SMI sensor.
共享波导910可以定位成将由显示像素902、904、906和SMI传感器900发射的光束导向光束转向部件912,诸如可由微电子机械系统(MEMS)移动的一组一个或多个反射镜。在替代实施方案中,一组波导(例如,一组光纤)可以用于将由显示像素902、904、906和SMI传感器900发射的光导向光束转向部件912。处理器可以操作显示像素902、904、906和光束转向部件912,以在显示器上渲染文本、数字或图像。在一些情况下,显示器可以包括一副眼镜的一个或多个镜片,或在VR头显内的显示器。Shared waveguide 910 may be positioned to direct beams emitted by display pixels 902, 904, 906 and SMI sensor 900 to beam steering component 912, such as a set of one or more mirrors movable by a microelectromechanical system (MEMS). In an alternative embodiment, a set of waveguides (eg, a set of optical fibers) may be used to direct light emitted by display pixels 902 , 904 , 906 and SMI sensor 900 to beam steering component 912 . The processor may operate display pixels 902, 904, 906 and beam steering component 912 to render text, numbers or images on the display. In some cases, the display may include one or more lenses of a pair of glasses, or a display within a VR headset.
共享波导910(或一组波导)可以接收由SMI传感器900发射的光的返回部分,诸如从眼睛反射或散射的光的一部分,并且将发射光的返回部分导向SMI传感器900的谐振腔并使之进入谐振腔。Shared waveguide 910 (or set of waveguides) may receive a return portion of light emitted by SMI sensor 900, such as a portion of light reflected or scattered from an eye, and direct the return portion of the emitted light toward and into a resonant cavity of SMI sensor 900.
图9B示出了SMI传感器950与显示子系统的第二示例性集成。SMI传感器950可以是参考图1至图7描述的任何SMI传感器,或下文描述的任何SMI传感器。在一些示例中,显示子系统可以是参考图1、图3A或图3B描述的显示子系统。FIG. 9B shows a second exemplary integration of the SMI sensor 950 with the display subsystem. SMI sensor 950 may be any of the SMI sensors described with reference to FIGS. 1-7 , or any of the SMI sensors described below. In some examples, the display subsystem may be the display subsystem described with reference to FIG. 1 , FIG. 3A , or FIG. 3B .
The display subsystem may include an array of display pixels 952, 954, 956 and the SMI sensor 950, mounted or formed on a substrate 958. The display pixels 952, 954, 956 and the SMI sensor 950 may be configured as described with reference to FIG. 9A.
A waveguide 960 (or set of waveguides) may be positioned to direct the beams emitted by the display pixels 952, 954, 956 and the SMI sensor 950 toward a further shared waveguide 962; or the distal portion of the waveguide 960 (or the distal portions of the set of waveguides) may be bent, and light may be out-coupled from the further shared waveguide 962, or out-coupled from the distal portion of the waveguide 960 or set of waveguides. A processor may operate the display pixels 952, 954, 956 to project text, numbers, or images on a display.
A returned portion of the light emitted by the SMI sensor 950, such as a portion of the light reflected or scattered from an eye, may pass through the waveguides 962, 960 and be redirected toward, and into, the resonant cavity of the SMI sensor 950.
FIG. 10 shows an exemplary set of components 1000 that may be included in the optical sensor subsystem of an eye tracking device. The set of components 1000 is generally divided among a subset of optical or optoelectronic components 1002, a subset of analog components 1004, a subset of digital components 1006, and a subset of system components 1008 (e.g., a processor and, in some cases, other control components).
The subset of optical or optoelectronic components 1002 may include a laser diode 1010, or another optical emitter having a resonant cavity. The components 1002 may also include a photodetector 1012 (e.g., a photodiode). The photodetector 1012 may be integrated into the same epitaxial stack as the laser diode 1010 (e.g., above, below, or adjacent to the laser diode 1010), or may be formed as a separate component stacked with, or positioned adjacent to, the laser diode 1010. Alternatively, the photodetector 1012 may be replaced or supplemented by circuitry that measures the junction voltage or drive current of the laser diode 1010 and generates an SMI signal electronically (i.e., without a photosensitive element). The laser diode 1010, in combination with the photodetector 1012 or the alternative circuitry for generating an SMI signal, may be referred to as an SMI sensor.
The subset of optical or optoelectronic components 1002 may also include module-level optics 1014 integrated with the laser diode 1010 and/or the photodetector 1012, and/or system-level optics 1016. The module-level optics 1014 and/or the system-level optics 1016 may include, for example, lenses, beam splitters, beam steering components, and so on. The module-level optics 1014 and/or the system-level optics 1016 may determine where emitted light and returned light (e.g., light scattered from the eye) are directed.
The subset of analog components 1004 may include a digital-to-analog converter (DAC) 1018 and a current regulator 1020 for converting a drive current into the analog domain and providing it to the laser diode 1010. The components 1004 may also include components 1022 for ensuring that the laser diode 1010 operates in accordance with operational safety specifications. The components 1004 may also include a transimpedance amplifier (TIA) and/or other amplifiers 1024 for amplifying the SMI signal generated by the photodetector 1012, and an analog-to-digital converter (ADC) for converting the amplified SMI signal into the digital domain. The components 1004 may also include components for correcting the SMI signal as it is amplified or otherwise processed.
In some cases, the subset of analog components 1004 may interface with (e.g., be multiplexed with) more than one subset of optical or optoelectronic components 1002. For example, the components 1004 may interface with two, three, or more SMI sensors.
The subset of digital components 1006 may include a scheduler 1026 for scheduling (e.g., correlating) the provision of drive current to the laser diode 1010 with the provision, to the system components 1008, of digitized photocurrent obtained from the photodetector 1012. The components 1006 may also include a drive current waveform generator 1028 that provides a digital drive current to the DAC 1018. The components 1006 may also include a digital processing chain for processing the amplified and digitized output of the photodetector 1012. The digital processing chain may include, for example, a time-domain signal preprocessing circuit 1030, a fast Fourier transform (FFT) engine 1032, a frequency-domain signal preprocessing circuit 1034, and a distance and/or velocity estimator 1036. In some cases, some or all of the components 1006 may be instantiated by a set of one or more processors.
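As an illustration, the digital processing chain just described (time-domain preprocessing, FFT, frequency-domain preprocessing, and a distance estimator) can be sketched in a few lines of Python. This is a simplified model only; the function names, sample rate, and chirp slope used below are illustrative assumptions, not values taken from this disclosure.

```python
import numpy as np

def smi_beat_frequency(samples, fs):
    """Estimate the dominant beat frequency (Hz) in a digitized SMI signal.

    Mirrors the chain 1030-1036: time-domain preprocessing (DC removal,
    windowing), an FFT, frequency-domain preprocessing (DC-bin rejection),
    and a peak pick that feeds a distance/velocity estimator.
    """
    x = np.asarray(samples, dtype=float)
    x = x - x.mean()                    # time-domain preprocessing: remove DC
    x = x * np.hanning(len(x))          # window to limit spectral leakage
    spectrum = np.abs(np.fft.rfft(x))   # FFT engine
    spectrum[0] = 0.0                   # frequency-domain preprocessing
    peak_bin = int(np.argmax(spectrum))
    return peak_bin * fs / len(x)       # bin index -> frequency in Hz

def fmcw_range(beat_hz, chirp_slope_hz_per_s, c=3.0e8):
    """Distance estimator for a triangular FMCW sweep: R = c * f_b / (2 * S)."""
    return c * beat_hz / (2.0 * chirp_slope_hz_per_s)
```

For instance, under the assumed numbers, `fmcw_range(1.0e6, 1.0e12)` evaluates to 150.0 meters of one-way range; a real eye-relief geometry would imply a far smaller beat frequency or a much steeper sweep.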
The subset of system components 1008 may include, for example, a system-level scheduler 1038 that may schedule when an SMI sensor (or SMI sensors) or other components are used to track the position (e.g., a gaze vector) or movement (e.g., an angular velocity) of an eye. The components 1008 may also include other sensors, such as a camera 1040 or an inertial measurement unit (IMU) 1042. In some cases, the components 1008 (or a processor thereof) may track the position or movement of the eye using one or more SMI signals acquired from one or more subsets of the components 1002, one or more images acquired from the camera 1040, and/or other measurements (e.g., measurements acquired by the IMU 1042). The components 1008 may also include a sensor fusion application 1044 and various other applications 1046.
Various types of drive current may be used to drive the laser diode 1010. For example, the laser diode 1010 may be driven with a DC current (e.g., in a DC drive mode) for the purpose of performing a Doppler analysis on the SMI signal generated by the photodetector 1012. Alternatively, the laser diode 1010 may be driven in a frequency-modulated continuous-wave (FMCW) mode (e.g., with a triangular-wave drive current) when performing ranging. Alternatively, the laser diode 1010 may be driven in a harmonic drive mode (e.g., with an IQ-modulated drive current) when determining a relative displacement of the eye.
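The drive modes above can be illustrated by synthesizing one period of a triangular (FMCW-style) and a sinusoidal (harmonic-style) drive-current waveform, of the kind the drive current waveform generator 1028 might produce digitally before the DAC 1018. The bias and modulation currents below are purely illustrative assumptions, not values from this disclosure; a real design would be constrained by the laser's threshold current and the safety components 1022.

```python
import math

def triangular_drive(n_samples, i_bias_ma=1.5, i_mod_ma=0.3):
    """One period of a triangular FMCW-style drive-current waveform (mA).

    Ramps linearly up for the first half period, then linearly back down.
    Current values are hypothetical placeholders.
    """
    half = n_samples // 2
    up = [i_bias_ma + i_mod_ma * (k / half) for k in range(half)]
    down = [i_bias_ma + i_mod_ma * (1.0 - k / half) for k in range(n_samples - half)]
    return up + down

def sinusoidal_drive(n_samples, i_bias_ma=1.5, i_mod_ma=0.3):
    """One period of a sinusoidal drive current, as in a harmonic drive mode."""
    return [i_bias_ma + i_mod_ma * math.sin(2.0 * math.pi * k / n_samples)
            for k in range(n_samples)]
```

A DC drive mode, by contrast, would simply hold `i_bias_ma` constant over the sample window.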
FIG. 11A illustrates a first exemplary method 1100 for tracking eye movement using a set of one or more SMI sensors 1104. The method 1100 includes operating an optical sensor subsystem 1102 such that the set of one or more SMI sensors 1104 emits a set of one or more beams of light toward an eye of a user. The optical sensor subsystem 1102 and the SMI sensors 1104 may be configured similarly to any of the optical sensor subsystems and SMI sensors described herein.
At 1106, the method 1100 may include receiving a set of one or more SMI signals from the set of one or more SMI sensors 1104.
At 1108, the method 1100 may include tracking rotational movement of the eye using the set of one or more SMI signals. The operations at 1108 may include, for example, estimating the linear and angular velocities of the eye using the SMI signals and Doppler interferometry (at 1110). The operations at 1108 may also include estimating a range (or distance) to the eye (at 1112), or estimating a surface quality (e.g., a surface texture) of the eye (at 1114). The estimated range or surface quality may be used not only to estimate the rotational movement of the eye, but also to determine (at 1116) the position of the eye, or the structure of the eye on which the set of one or more SMI sensors 1104 is focused.
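The velocity estimation at 1110 rests on the standard self-mixing Doppler relationship f_D = 2v/λ, so the line-of-sight velocity is v = f_D·λ/2. A minimal sketch, assuming an 850 nm IR emitter and an eye modeled as a sphere of roughly 12 mm radius (both values are assumptions for illustration, not values from this disclosure):

```python
def doppler_velocity(beat_hz, wavelength_m=850e-9):
    """Line-of-sight surface velocity (m/s) implied by an SMI Doppler beat.

    v = f_D * wavelength / 2; 850 nm is an assumed IR VCSEL wavelength.
    """
    return beat_hz * wavelength_m / 2.0

def angular_velocity(linear_velocity_m_s, eye_radius_m=0.012):
    """Angular velocity (rad/s) for a tangential surface velocity, with the
    eye modeled as a sphere of an assumed ~12 mm radius."""
    return linear_velocity_m_s / eye_radius_m
```

For example, a 1 MHz Doppler beat at 850 nm corresponds to about 0.425 m/s of surface velocity, or roughly 35 rad/s of eye rotation under the assumed radius.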
At 1118, the method 1100 may include using the output of operation 1108 to determine gaze movement.
At 1120, the method 1100 may include using the output of operation 1108 to identify a gaze wake event (e.g., the user opening their eyes, the user looking in a particular direction, or the user performing a particular series of eye movements). In some cases, the operations at 1120, or other operations, may include identifying a gaze sleep event (e.g., the user closing their eyes, the user looking in a particular direction, or the user performing a particular series of eye movements), a blink event, or other events.
At 1122, the method 1100 may include performing an operation in response to identifying a particular type of event (e.g., powering a head-mounted display or other device on or off, answering a call, activating an application, adjusting a volume, etc.).
At 1124, the method 1100 may include performing Doppler odometry to determine a change in the position of the eye's gaze vector. In some cases, the Doppler odometry may be performed using an extended Kalman filter (EKF).
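A drastically simplified sketch of the idea behind Doppler odometry is to dead-reckon a single gaze angle by integrating Doppler-derived angular rates over time. The EKF contemplated here would additionally propagate uncertainty and fuse range measurements; the sketch below shows only the integration step, with all names and units chosen for illustration:

```python
def integrate_gaze(angle0_rad, rate_samples_rad_s, dt):
    """Dead-reckon a gaze angle (rad) by integrating Doppler-derived angular
    rates sampled every `dt` seconds.

    This is only the prediction half of a filter; an EKF would also carry a
    covariance and correct the estimate with independent measurements.
    """
    angle = angle0_rad
    for rate in rate_samples_rad_s:
        angle += rate * dt   # one Euler integration step per SMI sample
    return angle
```

Pure integration drifts over time, which is one motivation for fusing in the absolute measurements described with reference to FIG. 11B.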
At 1126, the method 1100 may include updating a gaze-to-head-mounted-display (gaze-HMD) vector (i.e., determining how the eye's gaze vector intersects a display, or how the eye's gaze vector moves relative to the display).
在1128,方法1100可以包括使HMD(或另一显示器)的显示子系统调整显示器上的文本、数字或图像的渲染。在一些情况下,调整可以响应于对眼睛的移动进行分类(例如,分类为平滑追踪、扫视、注视、眼球震颤或眨眼)。At 1128, method 1100 can include causing a display subsystem of the HMD (or another display) to adjust rendering of text, numbers, or images on the display. In some cases, the adjustment may be in response to classifying eye movement (eg, classifying as smooth pursuit, saccade, fixation, nystagmus, or blink).
FIG. 11B illustrates a second exemplary method 1150 for tracking the movement of an eye using the set of one or more SMI sensors 1104 and operations described with reference to FIG. 11A, in combination with a camera 1152 or other sensors (e.g., an IMU 1154, an outward-facing camera (OFC) 1156, etc.). The camera 1152 may be configured similarly to the other cameras described herein.
At 1158, the method 1150 may include acquiring a set of one or more images of the eye using the camera 1152. In some embodiments, the camera 1152 may acquire a set of images at a first frequency, and the SMI signals generated by the SMI sensors 1104 may be sampled at a second frequency (e.g., a second frequency synchronized with the first frequency). The frequencies may be the same or different, but in some embodiments the second frequency may be greater than the first frequency. In this manner, the camera 1152, which typically consumes more power and produces a greater volume of data, may be used at a lower frequency to determine the eye's position or generate gaze vector data, and to ensure that the SMI sensors are properly focused and producing good data; and the SMI sensors 1104, which typically consume less power, may be used more or less continuously, and at a higher frequency, to track the movement of the eye (or gaze vector) between the image captures made by the camera 1152. In some embodiments, the images acquired by the camera 1152 may be used to generate, update, or customize an eye model, which may be used to direct or focus the beams emitted by the SMI sensors.
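The dual-rate scheme just described, low-frequency absolute fixes from the camera plus high-frequency relative updates from the SMI sensors, can be sketched as follows. The sensor-access callables and all rates below are illustrative assumptions, not part of this disclosure:

```python
def track_gaze(camera_angle_fn, smi_rate_fn, duration_s,
               cam_hz=50.0, smi_hz=1000.0):
    """Dual-rate gaze tracking sketch.

    `camera_angle_fn(t)` returns an absolute gaze angle (rad) from an image,
    sampled at the lower camera rate; `smi_rate_fn(t)` returns the Doppler-
    derived angular rate (rad/s), sampled at the higher SMI rate. Between
    camera frames, the angle is dead-reckoned from the SMI rate.
    """
    dt = 1.0 / smi_hz
    cam_period = int(smi_hz / cam_hz)   # SMI samples per camera frame
    angle = camera_angle_fn(0.0)
    trace = []
    for k in range(int(duration_s * smi_hz)):
        t = k * dt
        if k % cam_period == 0:
            angle = camera_angle_fn(t)      # low-rate absolute fix
        else:
            angle += smi_rate_fn(t) * dt    # high-rate relative update
        trace.append(angle)
    return trace
```

With a camera reporting a linearly moving gaze and an SMI channel reporting the matching rate, the dead-reckoned trace stays on the true trajectory between camera fixes.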
At 1160, the method 1150 may include estimating a gaze vector (or position) of the eye based on the images acquired by the camera 1152.
At 1162, the method 1150 may include determining or updating a head-to-HMD (head-HMD) vector. In other words, the method 1150 may determine how the user's head is positioned relative to the display.
At 1164, the method 1150 may include performing visual-Doppler odometry to determine a change in the position of the eye's gaze vector. In some cases, the visual-Doppler odometry may be performed using an extended Kalman filter (EKF). In contrast to the Doppler odometry performed in the method 1100, the visual-Doppler odometry performed at 1164 may fuse the SMI-based movement (or position) analysis with an image-based position (or gaze vector) analysis of the eye.
At 1126, the method 1150 may include updating the gaze-to-head-mounted-display (gaze-HMD) vector (i.e., determining how the eye's gaze vector intersects the display, or how the eye's gaze vector moves relative to the display).
At 1166, the method 1150 may optionally use the output of the IMU 1154 or the outward-facing camera 1156 (i.e., a camera focused on the environment around the user rather than on the user's eyes) to perform inertial odometry, video odometry, or video-inertial odometry. The video-inertial odometry may then be used, at 1168, to determine or update an HMD-to-world (HMD-world) vector.
At 1170, the method 1150 may include determining or updating a gaze-world vector. Such a vector may be used, for example, to augment the user's reality via a pair of glasses.
At 1128, the method 1150 may include causing the display subsystem of the HMD (or another display) to adjust the rendering of text, numbers, or images on the display (e.g., in an AR or VR environment). In some cases, the adjustment may be responsive to classifying the movement of the eye (e.g., as a smooth pursuit, saccade, fixation, nystagmus, or blink).
FIGS. 12A and 12B illustrate how a set of one or more SMI sensors may be used to map one or more surfaces or structures of an eye 1200. By way of example, FIGS. 12A and 12B show a single SMI sensor 1202. In some examples, the SMI sensor 1202 may be replaced with multiple SMI sensors (e.g., multiple discrete SMI sensors or an array of SMI sensors). The SMI sensor may be any of the SMI sensors described with reference to FIGS. 1-11B.
FIGS. 12A and 12B each show two side views of the same eye 1200. The first side view in each figure (i.e., side views 1204 and 1206) shows a cross-section of the eye 1200, and the second side view in each figure (i.e., side views 1208 and 1210) shows a computer-generated model of various eye structures identified by a processor after analyzing the SMI signal generated by the SMI sensor, or multiple SMI signals generated by a plurality of different SMI sensors. FIG. 12A shows the eye 1200 in a first position, and FIG. 12B shows the eye 1200 in a second position.
In the case of a single SMI sensor 1202, the SMI sensor 1202 may be mounted to a head-mounted frame by means of a MEMS or other structure 1212 that enables the beam 1214 emitted by the SMI sensor 1202 to be scanned across the eye 1200; or the beam 1214 emitted by the SMI sensor 1202 may be received by a set of one or more optical elements that can be adjusted to scan the beam 1214 across the eye 1200. Alternatively, the beam 1214 may be split using a beam splitter, and multiple beams may impinge on the eye 1200 simultaneously or sequentially. Alternatively, the SMI sensor 1202 may be mounted to the head-mounted frame in a fixed position, and the user may be asked to move their eye 1200 to different positions while the SMI sensor 1202 emits its beam.
A processor (such as any of the processors described herein) may receive the SMI signals generated by the set of one or more SMI sensors, and use the SMI signals to determine a set of ranges to a set of points on or in the eye. The ranges may include absolute ranges or relative ranges. The processor may then use the set of ranges to generate a map of at least one structure of the eye. The map may be a two-dimensional (2D) map or a three-dimensional (3D) map.
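In the simplest 2D case, converting a set of ranges into a map of eye-surface points is a polar-to-Cartesian conversion over the scan angles. A minimal sketch under that assumption (a 3D map would use two scan angles per beam); the function and its geometry are illustrative, not taken from this disclosure:

```python
import math

def map_points(beam_angles_rad, ranges_m, origin=(0.0, 0.0)):
    """Build a 2D map of eye-surface points from scan angles and SMI ranges.

    Each (angle, range) pair is converted to an (x, y) point relative to the
    sensor `origin`. Relative ranges would yield a map up to an offset.
    """
    ox, oy = origin
    return [(ox + r * math.cos(a), oy + r * math.sin(a))
            for a, r in zip(beam_angles_rad, ranges_m)]
```

A processor could then fit known eye geometry (e.g., corneal curvature) to such a point set to identify structures and boundaries, as described next.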
In some embodiments, the processor may be configured to use the map to identify a structure of the eye, or a boundary between a first structure of the eye and a second structure of the eye. The identified structures may include, for example, one or more of the iris 1218, the sclera 1220, the pupil 1222, the lens 1224, the limbus 1226, an eyelid, and so on.
In some embodiments, the processor may operate the optical sensor subsystem (e.g., a MEMS, one or more optical elements, a beam splitter, etc.) to direct one or more beams toward an identified structure of the eye. In some cases, the structure may be more diffuse than another structure of the eye.
In some embodiments, the processor may be configured to use the map to determine a gaze vector 1216 of the eye. In some embodiments, the processor may also or alternatively be configured to obtain or construct a Doppler cloud using the set of one or more SMI signals. The one or more SMI signals may correspond to projecting or emitting multiple beams of the set of one or more beams simultaneously or sequentially, and/or scanning at least one beam of the set of one or more beams. The Doppler cloud may be obtained or constructed with or without VCSEL wavelength modulation. The processor may also or alternatively obtain or construct a depth cloud. The depth cloud may be obtained or constructed only with VCSEL wavelength modulation.
By way of example, a Doppler cloud and/or a depth cloud may be obtained or constructed when the wavelength of the light emitted by one or more SMI sensors is modulated, and when the beam emitted by at least one SMI sensor is scanned and/or multiple beams are emitted. Additionally or alternatively, a Doppler cloud may be obtained or constructed when the wavelength of the light emitted by the one or more SMI sensors is not modulated. As described earlier, single or multiple frames of a Doppler cloud may be regarded as a differential depth cloud. Using measurements from single or multiple frames of the Doppler cloud, processed in real time, predefined and/or locally calibrated differential maps or libraries may be matched, and/or eye tracking or position information may be extracted. As described herein, a locally calibrated differential map or library may be obtained using, but not limited to, a camera, a depth cloud, and so on. In addition, using a Doppler cloud alone, or fused with a depth cloud or other sensing modalities (e.g., eye camera images, motion sensors, etc.), can provide an accurate and efficient way of tracking eye movement or position information.
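The relationship between a Doppler cloud and a differential depth cloud can be sketched directly: over one frame interval, each point's line-of-sight depth advances by its measured Doppler velocity times the interval. A minimal, hypothetical illustration (names and units are assumptions for this sketch only):

```python
def apply_doppler_frame(depth_cloud_m, los_velocities_m_s, dt):
    """Treat one Doppler-cloud frame as a differential depth cloud.

    Each point's line-of-sight depth (m) is advanced by its measured
    line-of-sight velocity (m/s) over the frame interval `dt` (s).
    Accumulating frames this way yields relative depth; fusing with a
    modulated-wavelength depth cloud would anchor the absolute scale.
    """
    return [d + v * dt for d, v in zip(depth_cloud_m, los_velocities_m_s)]
```

Repeatedly applying frames in this way is a relative (dead-reckoned) update, which is why the text pairs it with predefined or locally calibrated differential maps for absolute registration.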
The foregoing description, for purposes of explanation, used specific nomenclature to provide a thorough understanding of the described embodiments. It will be apparent, however, to one skilled in the art, after reading this description, that the specific details are not required in order to practice the described embodiments. Thus, the foregoing descriptions of the specific embodiments described herein are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed. It will be apparent to one of ordinary skill in the art, after reading this description, that many modifications and variations are possible in view of the above teachings.
As described above, one aspect of the present technology may be the gathering and use of data available from various sources, including biometric data (e.g., the surface quality of a user's skin or fingerprint). The present disclosure contemplates that, in some instances, this gathered data may include personal information data that uniquely identifies, or can be used to identify, locate, or contact, a specific person. Such personal information data may include, for example, biometric data (e.g., fingerprint data) and data linked thereto (e.g., demographic data, location-based data, telephone numbers, email addresses, home addresses, data or records relating to a user's health or level of fitness (e.g., vital signs measurements, medication information, exercise information), date of birth, or any other identifying or personal information).
The present disclosure recognizes that the use of such personal information data in the present technology can be used to the benefit of users. For example, the personal information data may be used to authenticate a user to access their device, or to gather performance metrics for a user's interaction with an augmented or virtual world. Further, other uses of personal information data that benefit the user are also contemplated by the present disclosure. For instance, health and fitness data may be used to provide insights into a user's general wellness, or may be used as positive feedback to individuals using technology to pursue wellness goals.
The present disclosure contemplates that the entities responsible for collecting, analyzing, disclosing, transferring, storing, or otherwise using such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining the privacy and security of personal information data. Such policies should be easily accessible by users, and should be updated as the collection and/or use of data changes. Personal information from users should be collected for the legitimate and reasonable uses of the entity, and not shared or sold outside of those legitimate uses. Further, such collection/sharing should occur after receiving the informed consent of the users. Additionally, such entities should consider taking any needed steps for safeguarding and securing access to such personal information data, and for ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. In addition, policies and practices should be adapted to the particular types of personal information data being collected and/or accessed, and to applicable laws and standards, including jurisdiction-specific considerations. For instance, in the United States, the collection of, or access to, certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA); whereas health data in other countries may be subject to other regulations and policies, and should be handled accordingly. Hence, different privacy practices should be maintained for different types of personal data in each country.
Notwithstanding the foregoing, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, in the case of advertisement delivery services, the present technology can be configured to allow users to select to "opt in" or "opt out" of participation in the collection of personal information data during registration for services, or at any time thereafter. In another example, users can choose not to provide data to targeted content delivery services. In yet another example, users can choose to limit the length of time the data is maintained, or entirely prohibit the development of a baseline profile for the user. In addition to providing "opt in" and "opt out" options, the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon downloading an app that their personal information data will be accessed, and then reminded again just before the personal information data is accessed by the app.
Further, it is the intent of the present disclosure that personal information data should be managed and handled in a way that minimizes the risks of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting data once it is no longer needed. In addition, and when applicable, including in certain health-related applications, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, when appropriate, by removing specific identifiers (e.g., date of birth, etc.), controlling the amount or specificity of the data stored (e.g., collecting location data at a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or by other methods.
Therefore, although the present disclosure broadly covers the use of personal information data to implement one or more of the various disclosed embodiments, the present disclosure also contemplates that the various embodiments can be implemented without the need to access such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. For example, content can be selected and delivered to users by inferring preferences based on non-personal information data, or a bare minimum amount of personal information, such as the content being requested by the device associated with a user, other non-personal information available to the content delivery services, or publicly available information.
Claims (27)
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US63/247,188 | 2021-09-22 | ||
US202217947874A | 2022-09-19 | 2022-09-19 | |
US17/947,874 | 2022-09-19 | ||
CN202211169329.1 | 2022-09-22 |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211169329.1 Division | 2021-09-22 | 2022-09-22 |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116482854A true CN116482854A (en) | 2023-07-25 |
Family
ID=87245285
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310073547.3A Pending CN116482854A (en) | 2021-09-22 | 2022-09-22 | Eye tracking using self-mixing interferometry |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116482854A (en) |
2022-09-22: CN application CN202310073547.3A (published as CN116482854A, en), status: active, Pending
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140071400A1 (en) * | 2012-09-11 | 2014-03-13 | Augmented Vision, Inc. | Compact eye imaging and eye tracking apparatus |
US20160109961A1 (en) * | 2013-06-20 | 2016-04-21 | Uday Parshionikar | Systems, methods, apparatuses, computer readable medium for controlling electronic devices |
US20180129279A1 (en) * | 2015-04-08 | 2018-05-10 | Controlrad Systems Inc. | Devices And Methods For Monitoring Gaze |
CN109715047A (en) * | 2016-09-07 | 2019-05-03 | 威尔乌集团 | Sensor fusion system and method for eye movement tracking application |
RU2017143204A3 (en) * | 2017-12-11 | 2019-06-11 | ||
US10698483B1 (en) * | 2019-04-22 | 2020-06-30 | Facebook Technologies, Llc | Eye-tracking systems, head-mounted displays including the same, and related methods |
CN112462932A (en) * | 2019-09-06 | 2021-03-09 | 苹果公司 | Gesture input system with wearable or handheld device based on self-mixing interferometry |
GB202107238D0 (en) * | 2021-05-20 | 2021-07-07 | Ams Int Ag | Eye movement determination |
Non-Patent Citations (2)
Title |
---|
MEYER, J.: "A novel eye-tracking sensor for AR glasses based on laser self-mixing showing exceptional robustness against illumination", ETRA 2020 Short Papers: ACM Symposium on Eye Tracking Research & Applications, 31 December 2020 (2020-12-31), pages 1 - 5 * |
姜婷婷; 吴茜; 徐亚苹; 王瑶璇: "Application of eye-tracking technology in information behavior research abroad", Journal of the China Society for Scientific and Technical Information (《情报学报》), no. 02, 24 February 2020 (2020-02-24), pages 97 - 110 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20220155860A1 (en) | Controlling an eye tracking camera according to eye movement velocity | |
US10852817B1 (en) | Eye tracking combiner having multiple perspectives | |
CN104603673B (en) | Head-mounted system and the method for being calculated using head-mounted system and rendering digital image stream | |
US10353460B2 (en) | Eye and head tracking device | |
JP6106684B2 (en) | System and method for high resolution gaze detection | |
JP6144681B2 (en) | Head mounted display with iris scan profiling function | |
Meyer et al. | A novel camera-free eye tracking sensor for augmented reality based on laser scanning | |
US20170117005A1 (en) | Wearable emotion detection and feedback system | |
EP3252566B1 (en) | Face and eye tracking and facial animation using facial sensors within a head-mounted display | |
US10698483B1 (en) | Eye-tracking systems, head-mounted displays including the same, and related methods | |
KR101383235B1 (en) | Apparatus for inputting coordinate using eye tracking and method thereof | |
US20160077337A1 (en) | Managing Information Display | |
WO2018076202A1 (en) | Head-mounted display device that can perform eye tracking, and eye tracking method | |
US20230333371A1 (en) | Eye Tracking Using Self-Mixing Interferometry | |
CN103458770A (en) | Optical measuring device and method for capturing at least one parameter of at least one eye wherein an illumination characteristic is adjustable | |
WO2015116475A1 (en) | Radial selection by vestibulo-ocular reflex fixation | |
Topal et al. | A low-computational approach on gaze estimation with eye touch system | |
US20200064627A1 (en) | Illumination assembly with in-field micro devices | |
US11435820B1 (en) | Gaze detection pipeline in an artificial reality system | |
EP3614193A1 (en) | Illumination assembly with in-field micro devices | |
US20160292517A1 (en) | Method for Monitoring the Visual Behavior of a Person | |
US20240219715A1 (en) | Head-Mounted Devices With Dual Gaze Tracking Systems | |
Meyer | Towards energy efficient mobile eye tracking for AR glasses through optical sensor technology | |
Meyer et al. | A highly integrated ambient light robust eye-tracking sensor for retinal projection ar glasses based on laser feedback interferometry | |
CN116482854A (en) | Eye tracking using self-mixing interferometry |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||