
CN118918139A - Eyeball tracking processing method, device, equipment and storage medium - Google Patents


Info

Publication number
CN118918139A
CN118918139A
Authority
CN
China
Prior art keywords
output mode
eye tracking
eye
wavelength
light source
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310512179.8A
Other languages
Chinese (zh)
Inventor
Name withheld at inventor's request
Current Assignee
Beijing 7Invensun Technology Co Ltd
Original Assignee
Beijing 7Invensun Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing 7Invensun Technology Co Ltd
Priority to CN202310512179.8A
Priority to PCT/CN2024/091311 (WO2024230660A1)
Publication of CN118918139A
Legal status: Pending


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/246 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/10 — Image acquisition
    • G06V 10/16 — Image acquisition using multiple overlapping images; image stitching
    • G06V 10/96 — Management of image or video recognition tasks
    • G06V 40/18 — Eye characteristics, e.g. of the iris
    • G06V 40/193 — Preprocessing; feature extraction
    • G06T 2207/30041 — Eye; retina; ophthalmic (indexing scheme for biomedical image processing)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The present invention discloses an eye tracking processing method, apparatus, device, and storage medium. The method includes: determining the current eye-tracking-related scenario; determining a camera output mode and a light source wavelength according to that scenario, where the camera output mode is at least one of a full-pixel output mode, a region-of-interest (ROI) output mode, a downsampling output mode, and a combined ROI-and-downsampling output mode, and the light source wavelength includes a first wavelength and a second wavelength; and processing the data of the eye-tracking-related scenario based on the camera output mode and the light source wavelength. By selecting the camera output mode and/or light source wavelength appropriate to each eye tracking scenario, the method not only improves eye tracking accuracy but also increases its adaptability.

Description

Eye tracking processing method, apparatus, device, and storage medium

Technical Field

Embodiments of the present invention relate to the field of eye tracking technology, and in particular to an eye tracking processing method, apparatus, device, and storage medium.

Background Art

In existing eye tracking technology, a single fixed infrared light source wavelength and a single camera output mode are used when capturing images of the user's eyes. This approach cannot accommodate the many usage scenarios that arise during eye tracking: it degrades tracking accuracy and limits the applicability of eye tracking.

Summary of the Invention

Embodiments of the present invention provide an eye tracking processing method, apparatus, device, and storage medium that select the camera output mode and/or light source wavelength appropriate to each eye tracking scenario, which not only improves eye tracking accuracy but also increases its adaptability.

In a first aspect, an embodiment of the present invention provides an eye tracking processing method, comprising:

determining the current eye-tracking-related scenario, where the scenario includes at least one of: a wearable-eye-tracking-device wearing scenario, a biometric registration scenario, an eye tracking calibration scenario, a first eye tracking scenario, a second eye tracking scenario, and a third eye tracking scenario;

determining a camera output mode and a light source wavelength according to the eye-tracking-related scenario, where the camera output mode includes at least one of: a full-pixel output mode, a region-of-interest (ROI) output mode, a downsampling output mode, and a combined ROI-and-downsampling output mode, and the light source wavelength includes a first wavelength and a second wavelength; and

processing the data of the eye-tracking-related scenario based on the camera output mode and the light source wavelength.

In a second aspect, an embodiment of the present invention further provides an eye tracking processing apparatus, comprising:

a scenario determination module, configured to determine the current eye-tracking-related scenario, where the scenario includes at least one of: a wearable-eye-tracking-device wearing scenario, a biometric registration scenario, an eye tracking calibration scenario, a first eye tracking scenario, a second eye tracking scenario, and a third eye tracking scenario;

a camera-output-mode and light-source-wavelength determination module, configured to determine the camera output mode and light source wavelength according to the eye-tracking-related scenario, where the camera output mode includes at least one of: a full-pixel output mode, a region-of-interest (ROI) output mode, a downsampling output mode, and a combined ROI-and-downsampling output mode, and the light source wavelength includes a first wavelength and a second wavelength; and

a data processing module, configured to process the data of the eye-tracking-related scenario based on the camera output mode and the light source wavelength.

In a third aspect, an embodiment of the present invention further provides an electronic device, comprising:

at least one processor; and

a memory communicatively connected to the at least one processor, wherein

the memory stores a computer program executable by the at least one processor, the computer program being executed by the at least one processor to enable the at least one processor to perform the eye tracking processing method of any one of claims 1-8.

In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing computer instructions that, when executed by a processor, cause the processor to implement the eye tracking processing method described in the embodiments of the present invention.

An embodiment of the present invention discloses an eye tracking processing method, apparatus, device, and storage medium: determine the current eye-tracking-related scenario (at least one of a wearable-eye-tracking-device wearing scenario, a biometric registration scenario, an eye tracking calibration scenario, and first, second, and third eye tracking scenarios); determine the camera output mode and light source wavelength according to that scenario (the camera output mode being at least one of a full-pixel output mode, a region-of-interest (ROI) output mode, a downsampling output mode, and a combined ROI-and-downsampling output mode, and the light source wavelength including a first wavelength and a second wavelength); and process the data of the scenario based on the camera output mode and light source wavelength. By selecting the camera output mode and/or light source wavelength appropriate to each eye tracking scenario, the method not only improves eye tracking accuracy but also increases its adaptability.

Brief Description of the Drawings

FIG. 1 is a flowchart of an eye tracking processing method in Embodiment 1 of the present invention;

FIG. 2 is a flowchart of data processing in the wearable-eye-tracking-device wearing scenario in Embodiment 1 of the present invention;

FIG. 3 is a flowchart of data processing in the biometric registration scenario in Embodiment 1 of the present invention;

FIG. 4 is a flowchart of data processing in the eye tracking calibration scenario in Embodiment 1 of the present invention;

FIG. 5 is a flowchart of data processing in the first eye tracking scenario in Embodiment 1 of the present invention;

FIG. 6 is a flowchart of data processing in the second eye tracking scenario in Embodiment 1 of the present invention;

FIG. 7 is a flowchart of data processing in the third eye tracking scenario in Embodiment 1 of the present invention;

FIG. 8 is a schematic structural diagram of an eye tracking processing apparatus in Embodiment 2 of the present invention;

FIG. 9 is a schematic structural diagram of an electronic device in Embodiment 3 of the present invention.

Detailed Description

The present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the complete structure.

Embodiment 1

FIG. 1 is a flowchart of an eye tracking processing method provided in Embodiment 1 of the present invention. This embodiment is applicable to processing data during eye tracking. The method may be executed by an eye tracking processing apparatus, which may be implemented in software and/or hardware and, optionally, on an electronic device such as a mobile terminal, a PC, or a server. The method includes the following steps:

S110: determine the current eye-tracking-related scenario.

The eye-tracking-related scenario includes at least one of: a wearable-eye-tracking-device wearing scenario, a biometric registration scenario, an eye tracking calibration scenario, a first eye tracking scenario, a second eye tracking scenario, and a third eye tracking scenario.

In this embodiment, the eye-tracking-related scenario can be selected according to the user's actual needs. A first-time user proceeds through the device wearing scenario, the biometric registration scenario, and the eye tracking calibration scenario in sequence; once calibration is complete, the user can select the first, second, or third eye tracking scenario for tracking. In each scenario, the corresponding camera output mode and light source wavelength are selected to meet that scenario's data processing requirements.

The wearing scenario can be understood as adjusting the eye tracking device while putting it on. The biometric registration scenario is the process of storing the user's biometric features in a database. The calibration scenario is the process of obtaining the user's calibration coefficients. The first eye tracking scenario is ordinary-accuracy tracking, the second is higher-accuracy tracking, and the third is low-power, low-accuracy tracking.

S120: determine the camera output mode and light source wavelength according to the eye-tracking-related scenario.

The camera output mode includes at least one of: a full-pixel output mode, a region-of-interest (ROI) output mode, a downsampling output mode, and a combined ROI-and-downsampling output mode; the light source wavelength includes a first wavelength and a second wavelength.

The full-pixel output mode outputs every pixel the camera captures, yielding the camera's highest-resolution image, e.g. 15 megapixels. The region-of-interest (ROI) output mode outlines the area of the captured image that needs processing (as a rectangle, circle, ellipse, irregular polygon, etc.), called the region of interest, and outputs only the image within that region; its output is smaller than that of the full-pixel mode, e.g. 1.6 megapixels. The downsampling output mode is either a binning mode or a skipping mode. Binning adds together the charge sensed by adjacent photosites and reads the sum out as a single pixel; it is divided into horizontal and vertical binning, where horizontal binning reads out the summed charge of adjacent rows and vertical binning reads out the summed charge of adjacent columns. Skipping reads out only designated pixels. The downsampled output is likewise smaller than the full-pixel output, e.g. 1.6 megapixels. Finally, the combined ROI-and-downsampling output mode applies both modes at once for a compounded effect; its output has still fewer pixels and a smaller data volume, and is generally used for low-power, low-accuracy eye tracking.

In this embodiment, the first and second wavelengths lie in the 800 nm-1000 nm range and may be equal or different; either satisfies the technical requirements of eye tracking, biometric recognition, and similar tasks. Preferably, however, the first wavelength is shorter than the second: for example, the first wavelength may be 810 nm or 850 nm and the second may be 940 nm, although other values are possible.

Specifically, the camera output mode and light source wavelength may be determined from the scenario as follows. For the wearable-eye-tracking-device wearing scenario, the output mode is the downsampling output mode and the wavelength is the second wavelength (940 nm). For the biometric registration scenario, the output mode is the full-pixel or ROI output mode and the wavelength is the first wavelength (810 nm or 850 nm). For the calibration scenario, the output mode is the ROI or full-pixel output mode and the wavelength is the first wavelength. For the first eye tracking scenario, the output modes are the ROI and downsampling output modes, and the wavelengths are the first and second wavelengths. For the second eye tracking scenario, the output mode is the ROI output mode, and the wavelengths are the first and second wavelengths. For the third eye tracking scenario, the output modes are the ROI and combined ROI-and-downsampling output modes, and the wavelength is either the first wavelength alone or the first and second wavelengths together.
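As a minimal sketch, the scenario-to-configuration correspondence just described can be written as a lookup table; the enum names and the use of 850 nm / 940 nm as the concrete first/second wavelengths are assumptions for illustration:

```python
from enum import Enum, auto

class Scene(Enum):
    DEVICE_WEARING = auto()          # wearable-device wearing scenario
    BIOMETRIC_REGISTRATION = auto()
    CALIBRATION = auto()
    TRACKING_FIRST = auto()          # ordinary accuracy
    TRACKING_SECOND = auto()         # higher accuracy
    TRACKING_THIRD = auto()          # low power, low accuracy

# (camera output modes, light source wavelengths in nm) per scenario,
# following the correspondences stated in the text; the third tracking
# scenario may alternatively use [850] alone.
SCENE_CONFIG = {
    Scene.DEVICE_WEARING:         (["downsampling"], [940]),
    Scene.BIOMETRIC_REGISTRATION: (["full_pixel", "roi"], [850]),
    Scene.CALIBRATION:            (["roi", "full_pixel"], [850]),
    Scene.TRACKING_FIRST:         (["roi", "downsampling"], [850, 940]),
    Scene.TRACKING_SECOND:        (["roi"], [850, 940]),
    Scene.TRACKING_THIRD:         (["roi", "roi_downsampling"], [850, 940]),
}

def configure(scene: Scene):
    """S120: look up the output mode(s) and wavelength(s) for a scenario."""
    return SCENE_CONFIG[scene]

print(configure(Scene.DEVICE_WEARING))  # (['downsampling'], [940])
```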

S130: process the data of the eye-tracking-related scenario based on the camera output mode and light source wavelength.

In this embodiment, different eye-tracking-related scenarios use different combinations of camera output mode and light source wavelength, and the data of each scenario is processed based on the determined mode and wavelength so as to realize that scenario's function.
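One hedged way to structure S130 is a per-scenario handler registry, so that each scenario's processing routine receives the mode and wavelength chosen in S120; the registry pattern, handler names, and the stubbed return value are illustrative, not from the patent:

```python
from typing import Callable, Dict, List

# Hypothetical registry: one processing routine per scenario name.
HANDLERS: Dict[str, Callable[[List[str], List[int]], str]] = {}

def handler(scene: str):
    """Decorator that registers a scenario-processing routine."""
    def wrap(fn):
        HANDLERS[scene] = fn
        return fn
    return wrap

@handler("wearing")
def process_wearing(modes: List[str], wavelengths_nm: List[int]) -> str:
    # S130 for the wearing scenario: capture a downsampled frame under the
    # second wavelength and derive adjustment guidance (stubbed here).
    return f"capture with {modes[0]} @ {wavelengths_nm[0]} nm"

def process(scene: str, modes: List[str], wavelengths_nm: List[int]) -> str:
    """Dispatch the scenario's data processing to its registered handler."""
    return HANDLERS[scene](modes, wavelengths_nm)

print(process("wearing", ["downsampling"], [940]))
# capture with downsampling @ 940 nm
```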

Optionally, if the scenario is the wearable-eye-tracking-device wearing scenario, the data may be processed as follows: capture a first eye image of the user using the downsampling output mode and the second-wavelength light source, then adjust the position and/or interpupillary distance (IPD) of the wearable eye tracking device based on the first eye image.

Capturing the first eye image with the downsampling output mode and second-wavelength light source means: the second-wavelength light source is switched on to illuminate the user's eyes, the camera captures an image of the eyes, and the image is output in the downsampling mode, yielding the first eye image. Position adjustment of the device may include up-down, left-right, or other directional adjustment. IPD adjustment may include adjusting the horizontal spacing of the device's eyepieces, i.e. symmetrically moving both eyepieces toward or away from their common centerline, either automatically or with a manual knob.

During the wearing procedure, if the wearable eye tracking device needs adjustment, the required eye-image resolution is relatively low. Choosing the 940 nm second-wavelength light source and the downsampling output mode therefore both satisfies the wearing procedure and saves power.

Specifically, adjusting the device's position and/or IPD based on the eye image may proceed as follows: determine first eye-feature data from the first eye image; determine first adjustment information and second adjustment information from the first eye-feature data; guide the user to adjust the device's position according to the first adjustment information, and adjust the device's IPD according to the second adjustment information.

The first eye-feature data includes at least one of: pupil position, pupil shape, eyelid position, and eye-corner position; it may also include gaze direction. The gaze direction here is only approximate, judged from the pupil's position and shape relative to the upper and lower eyelids and the inner and outer eye corners. Position and IPD adjustment during wearing are most accurate while the user is looking straight ahead at the screen, so after roughly estimating the gaze direction, if the user is not looking straight ahead, a first prompt (voice or text) asks the user to look straight at the screen. This step prepares for the subsequent position adjustment.

For better eye tracking and user experience, the wearable device may define an optimal wearing-position range: when the user's eyes fall within this range, the optimal wearing position is satisfied. The range need not be shown on screen; the system can judge from the first eye-feature data whether the eye position meets the requirement and prompt the user by voice or text to adjust (this prompt being the first adjustment information). In another embodiment, the optimal range is displayed on the device screen (for example, two circles representing the left and right eyes, or a rectangular frame) together with the user's live eye image, and the user adjusts the device position according to the on-screen image; in that case the first adjustment information is the image information shown on screen.

Specifically, after the first eye-feature data is obtained, the system judges whether the user's eyes fall within the optimal wearing-position range and whether the device's IPD matches the user's IPD. If not, the first and second adjustment information are generated: the first guides the user to reposition the device, and the second adjusts the device's IPD, until the eyes fall within the optimal range and the device's IPD matches the user's, i.e. the wearing requirements are met. By way of example, FIG. 2 shows the data processing flow for the wearing scenario: the user puts on the wearable eye tracking device and powers it on; the 940 nm light source is turned on to illuminate the eyes, the eye tracking camera photographs them, and eye images are output in the downsampling mode; eye-feature data is determined from the images, and the user is guided to adjust the device's position and/or IPD accordingly.
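The position and IPD checks described above might look like the following sketch; the wearing-position box, the tolerance, and the guidance strings are hypothetical values, since the patent gives no concrete numbers:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EyeFeatures:
    pupil_x: float  # pupil centre in image coordinates (pixels)
    pupil_y: float

# Hypothetical optimal wearing-position box (x_min, x_max, y_min, y_max)
# and IPD tolerance; illustrative numbers only.
BOX = (200.0, 360.0, 120.0, 240.0)
IPD_TOLERANCE_MM = 2.0

def position_hint(eye: EyeFeatures) -> Optional[str]:
    """First adjustment information: guidance while the pupil centre lies
    outside the optimal wearing-position box (direction words arbitrary)."""
    x_min, x_max, y_min, y_max = BOX
    if eye.pupil_x < x_min:
        return "shift headset left"
    if eye.pupil_x > x_max:
        return "shift headset right"
    if eye.pupil_y < y_min:
        return "shift headset up"
    if eye.pupil_y > y_max:
        return "shift headset down"
    return None  # within the optimal wearing range

def ipd_correction_mm(device_ipd_mm: float, user_ipd_mm: float) -> float:
    """Second adjustment information: signed eyepiece-spacing correction,
    zero when the mismatch is within tolerance."""
    delta = user_ipd_mm - device_ipd_mm
    return delta if abs(delta) > IPD_TOLERANCE_MM else 0.0
```

In a real device the hints would drive on-screen or voice prompts, and the IPD correction would drive the automatic or knob-based eyepiece adjustment.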

Note that position adjustment and IPD adjustment may form a single unified procedure (position first, then IPD) or two independent procedures.

Optionally, if the scenario is the biometric registration scenario, the data may be processed as follows: using the full-pixel or ROI output mode and the first-wavelength light source, capture eye images while the user gazes in different directions, obtaining multiple second eye images; extract a local biometric feature from each second eye image; stitch the local features together to obtain the target biometric feature; and store the target biometric feature.

其中，生物特征包括如下至少一项：虹膜特征、眼底特征、视网膜血管特征、眼纹巩膜上的血管特征、眼睛形状。在生物特征注册场景中，要提取高精度的生物特征，因此对眼部图像的精度要求较高，环境亮度也有更高要求，故此时光源波长选择810/850nm，这个波段的红外灯亮度会高于940nm的红外灯，相机出图模式选择全像素出图模式或者ROI出图模式。具体的，在可穿戴眼球追踪设备的显示界面(UI)上的不同方向显示引导点，以引导用户转动眼球以注视不同方向，并采集用户注视引导点时的眼部图像，从而获得多张第二眼部图像，然后分别从多张第二眼部图像中提取用户的局部生物特征，最后将多个局部生物特征进行拼接，获得目标生物特征，并将目标生物特征进行存储，从而完成生物特征的注册。Among them, the biometric features include at least one of the following: iris features, fundus features, retinal vascular features, vascular features on the sclera (eye veins), and eye shape. In the biometric registration scenario, high-precision biometric features need to be extracted, so the accuracy of the eye image and the ambient brightness are both subject to higher requirements; therefore, the light source wavelength is selected as 810/850nm, since an infrared light in this band is brighter than a 940nm infrared light, and the camera output mode is selected as the full pixel output mode or the ROI output mode. Specifically, guide points are displayed in different directions on the display interface (UI) of the wearable eye tracking device to guide the user to rotate the eyeballs to look in different directions, and the eye images when the user gazes at the guide points are collected to obtain multiple second eye images; then the user's local biometric features are extracted from the multiple second eye images respectively; finally, the multiple local biometric features are spliced to obtain the target biometric features, and the target biometric features are stored to complete the biometric registration.

示例性的,图3是生物特征注册场景下的数据处理过程,如图3所示,该过程为:用户完成可穿戴眼球追踪设备的佩戴调节,且符合佩戴要求;打开810/850nm光源照射眼部,并启动眼球追踪相机拍摄用户眼部,并以全像素出图模式输出眼部图像;通过UI界面引导用户转动眼球以注视不同方向,并采集用户注视不同方向时的眼部图像;对不同方向的眼部数据进行局部生物特征的提取,将多个局部生物特征进行拼接,获得目标生物特征;将目标生物特征进行存储,完成生物特征的注册。Exemplarily, FIG3 is a data processing process in a biometric registration scenario. As shown in FIG3 , the process is: the user completes the wearing adjustment of the wearable eye tracking device and meets the wearing requirements; turns on the 810/850nm light source to illuminate the eyes, starts the eye tracking camera to shoot the user's eyes, and outputs the eye image in full-pixel output mode; guides the user to rotate the eyeballs to look in different directions through the UI interface, and collects eye images when the user looks in different directions; extracts local biometric features of eye data in different directions, splices multiple local biometric features, and obtains target biometric features; stores the target biometric features to complete the registration of biometric features.
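上述“多方向采集—局部特征提取—拼接”的注册流程可用如下示意表示（capture、extract 为假设性接口，拼接方式仅作说明）。The registration flow of multi-direction capture, local feature extraction, and splicing can be sketched as follows (capture and extract are hypothetical interfaces, and the splicing here is only illustrative):

```python
def register_biometrics(capture, extract, directions):
    """生物特征注册：采集多方向眼部图像并拼接局部生物特征。
    Biometric registration: collect eye images for each gaze direction and
    splice the local biometric features into the target biometric feature.

    capture(direction) -> eye image taken while the user gazes that way
                          (810/850nm light source, full-pixel or ROI mode)
    extract(image)     -> local biometric feature of that image
    """
    local_features = []
    for d in directions:            # UI 依次显示引导点，引导用户注视不同方向
        image = capture(d)          # 获得一张第二眼部图像
        local_features.append(extract(image))
    # 将多个局部生物特征进行拼接，获得目标生物特征（此处以元组拼接示意）
    target = tuple(local_features)
    return target
```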

可选的,若眼球追踪相关场景为眼球追踪校准场景,则基于相机出图模式及光源波长对眼球追踪相关场景下的数据进行处理的方式可以是:基于ROI出图模式或者全像素出图模式及第一波长的光源采集用户注视设定校准点的第三眼部图像;基于第三眼部图像确定校准系数。Optionally, if the eye tracking related scene is an eye tracking calibration scene, the method of processing the data in the eye tracking related scene based on the camera output mode and the wavelength of the light source can be: based on the ROI output mode or the full-pixel output mode and the light source of the first wavelength, a third eye image of the user gazing at the set calibration point is captured; and the calibration coefficient is determined based on the third eye image.

其中，基于ROI出图模式及第一波长的光源采集用户注视设定校准点的第三眼部图像可以理解为：控制第一波长的光源照射用户眼部，然后控制相机采集用户眼部图像，并按照ROI出图模式或者全像素出图模式输出眼部图像，从而获得第三眼部图像。本实施例中，若采用ROI出图模式，在对用户进行眼球追踪校准时，无需对整个眼部图像进行分析，只需要对眼部图像中感兴趣区域进行分析即可，因此采用ROI出图模式和810/850nm光源，不仅不会影响校准精度，还可以减少计算量。Among them, collecting the third eye image of the user gazing at the set calibration point based on the ROI output mode and the light source of the first wavelength can be understood as: controlling the light source of the first wavelength to illuminate the user's eyes, then controlling the camera to capture the user's eye image, and outputting the eye image in the ROI output mode or the full pixel output mode, so as to obtain the third eye image. In this embodiment, if the ROI output mode is adopted, when performing eye tracking calibration for the user, it is not necessary to analyze the entire eye image; only the region of interest in the eye image needs to be analyzed. Therefore, using the ROI output mode and the 810/850nm light source not only does not affect the calibration accuracy, but also reduces the amount of calculation.

具体的,基于第三眼部图像确定校准系数的过程可以是:根据第三眼部图像确定第三眼部特征数据;基于第三眼部特征数据确定校准系数;相应的,在基于第三眼部图像确定校准系数之后,还包括如下步骤:将校准系数和生物特征绑定后存储。Specifically, the process of determining the calibration coefficient based on the third eye image may be: determining third eye feature data according to the third eye image; determining the calibration coefficient based on the third eye feature data; accordingly, after determining the calibration coefficient based on the third eye image, it also includes the following step: binding the calibration coefficient and biometric features and storing them.

其中,第三眼部特征数据包括如下至少一项:眼底数据、瞳孔位置、瞳孔形状、虹膜位置、虹膜形状、眼皮位置、眼角位置、光斑位置。本实施例中,在可穿戴眼球追踪设备的显示界面显示设定校准点,然后基于ROI出图模式或者全像素出图模式及第一波长的光源采集用户注视设定校准点的第三眼部图像,从第三眼部图像中提取眼底数据、瞳孔位置、瞳孔形状、虹膜位置、虹膜形状、眼皮位置、眼角位置、光斑位置等眼部特征数据,最后基于眼部特征数据确定各设定校准点对应的校准系数,将校准系数和生物特征绑定后存储。Among them, the third eye feature data includes at least one of the following: fundus data, pupil position, pupil shape, iris position, iris shape, eyelid position, eye corner position, and light spot position. In this embodiment, the calibration point is set on the display interface of the wearable eye tracking device, and then the third eye image of the user gazing at the set calibration point is collected based on the ROI output mode or the full pixel output mode and the light source of the first wavelength, and the fundus data, pupil position, pupil shape, iris position, iris shape, eyelid position, eye corner position, light spot position and other eye feature data are extracted from the third eye image, and finally the calibration coefficient corresponding to each set calibration point is determined based on the eye feature data, and the calibration coefficient is bound to the biometric feature and stored.

示例性的,图4是本实施例中眼球追踪校准场景下的数据处理流程,如图4所示,该流程为:用户完成可穿戴眼球追踪设备的佩戴调节,打开810/850nm光源照射眼部,并启动眼球追踪相机拍摄用户眼部,并以ROI出图模式输出眼部图像;显示设定校准点采集用户注视设定校准点的眼部图像,并提取该眼部图像中眼底数据、瞳孔位置、瞳孔形状、虹膜位置、虹膜形状、眼皮位置、眼角位置、光斑位置等眼部特征数据;基于眼部特征数据确定各设定校准点对应的校准系数;将校准系数和生物特征绑定后存储,从而完成校准。Exemplarily, FIG4 is a data processing flow in the eye tracking calibration scenario in this embodiment. As shown in FIG4 , the flow is: the user completes the wearing adjustment of the wearable eye tracking device, turns on the 810/850nm light source to illuminate the eyes, and starts the eye tracking camera to shoot the user's eyes, and outputs the eye image in ROI output mode; the set calibration point is displayed to collect the eye image of the user looking at the set calibration point, and the fundus data, pupil position, pupil shape, iris position, iris shape, eyelid position, eye corner position, light spot position and other eye feature data in the eye image are extracted; the calibration coefficient corresponding to each set calibration point is determined based on the eye feature data; the calibration coefficient is bound to the biometric feature and stored to complete the calibration.
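逐校准点确定校准系数并与生物特征绑定存储的过程可用如下示意表示（此处以简单的偏移模型作为校准系数的说明性假设，本方法并不限定校准系数的具体形式）。Determining a calibration coefficient per set calibration point and storing it bound to the biometric feature can be sketched as follows (a simple offset model is used here as an illustrative assumption; the method does not limit the form of the calibration coefficient):

```python
def calibrate(points, measure_gaze):
    """对每个设定校准点确定校准系数。
    points: {point_id: (x, y)} of the set calibration points on the display.
    measure_gaze(point_id) -> raw (x, y) gaze estimate derived from the
    third eye feature data while the user gazes at that point.
    Returns {point_id: (dx, dy)} offset coefficients.
    """
    coeffs = {}
    for pid, (tx, ty) in points.items():
        gx, gy = measure_gaze(pid)
        # 每个设定校准点对应一个校准系数（此处为真值与估计值之差）
        coeffs[pid] = (round(tx - gx, 6), round(ty - gy, 6))
    return coeffs

def store_bound(db, biometric_id, coeffs):
    """将校准系数和生物特征绑定后存储。
    Bind the calibration coefficients to the biometric feature and store."""
    db[biometric_id] = coeffs
    return db
```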

可选的,若眼球追踪相关场景为第一眼球追踪场景,该场景为普通精度眼球追踪场景,则基于相机出图模式及光源波长对眼球追踪相关场景下的数据进行处理的过程可以是:基于ROI出图模式及第一波长的光源采集用户的第四眼部图像;根据第四眼部图像确定用户的生物特征;根据生物特征获取预存的用户的校准系数;将相机出图模式切换为降采样出图模式,将光源波长切换为第二波长;基于降采样出图模式及第二波长的光源采集用户的第五眼部图像;基于第五眼部图像和校准系数对用户进行眼球追踪。Optionally, if the eye tracking related scene is the first eye tracking scene, which is a normal precision eye tracking scene, the process of processing the data in the eye tracking related scene based on the camera output mode and the light source wavelength may be: based on the ROI output mode and the light source of the first wavelength, collecting the fourth eye image of the user; determining the user's biometric characteristics based on the fourth eye image; obtaining the pre-stored user's calibration coefficient based on the biometric characteristics; switching the camera output mode to the downsampling output mode, and switching the light source wavelength to the second wavelength; based on the downsampling output mode and the light source of the second wavelength, collecting the user's fifth eye image; and performing eye tracking on the user based on the fifth eye image and the calibration coefficient.

本实施例中，在进行眼球追踪时，首先要获取到用户的生物特征，以根据生物特征从数据库获取预存的用户的校准系数，此处对眼部图像的精度要求较高，因此此时选择ROI出图模式及第一波长的光源进行眼部图像的采集。在获得校准系数后，基于校准系数进行眼球追踪时，若对眼球追踪的精度要求不高，此处可以将相机出图模式切换为降采样出图模式，将光源波长切换为第二波长，以进行眼球追踪。In this embodiment, when performing eye tracking, the user's biometric features must first be obtained, so that the user's pre-stored calibration coefficients can be retrieved from the database according to the biometric features; here the accuracy of the eye image is required to be high, so the ROI output mode and the light source of the first wavelength are selected to collect the eye image. After the calibration coefficients are obtained, when performing eye tracking based on the calibration coefficients, if high eye tracking accuracy is not required, the camera output mode can be switched to the downsampling output mode and the light source wavelength can be switched to the second wavelength for eye tracking.

其中,基于第五眼部图像和校准系数对用户进行眼球追踪的方式可以参见现有的注视点或者注视方向估计算法,此处不做限定。Among them, the method of tracking the user's eyes based on the fifth eye image and the calibration coefficient can refer to the existing gaze point or gaze direction estimation algorithm, which is not limited here.

示例性的,图5是本实施例中第一眼球追踪场景下的数据处理过程,该过程包括:用户完成可穿戴眼球追踪设备的佩戴调节;打开810/850nm光源照射眼部,并启动眼球追踪相机拍摄用户眼部,并以ROI出图模式输出眼部图像;根据眼部图像确定用户的生物特征,根据生物特征获取预存的用户的校准系数;将相机出图模式切换为降采样出图模式,将光源波长切换为940nm,再次采集用户的眼部图像;基于该眼部图像和校准系数进行眼球追踪。Exemplarily, FIG5 is a data processing process in the first eye tracking scenario in this embodiment, which includes: the user completes the wearing adjustment of the wearable eye tracking device; turns on the 810/850nm light source to illuminate the eyes, starts the eye tracking camera to shoot the user's eyes, and outputs the eye image in ROI output mode; determines the user's biometrics based on the eye image, and obtains the pre-stored user's calibration coefficient based on the biometrics; switches the camera output mode to the downsampling output mode, switches the light source wavelength to 940nm, and collects the user's eye image again; performs eye tracking based on the eye image and the calibration coefficient.

需要注意的是,如果第一波长和第二波长相等,则不需要切换光源。It should be noted that if the first wavelength and the second wavelength are equal, there is no need to switch the light source.
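第一眼球追踪场景的两阶段控制流程（先高精度识别、后低数据量追踪，波长相等时不切换光源）可用如下示意表示（设备控制接口的名称均为假设）。The two-phase control flow of the first eye tracking scenario (high-precision identification first, then low-data-volume tracking, with no light source switch when the wavelengths are equal) can be sketched as follows (the device-control interface names are assumptions):

```python
def run_normal_precision_tracking(device, coeff_db, first_wl=850, second_wl=940):
    """第一眼球追踪场景（普通精度）的模式与波长切换示意。
    Phase 1: ROI output mode + first wavelength to identify the user;
    Phase 2: downsampling output mode + second wavelength for tracking."""
    # 阶段一：高精度出图以确定生物特征并查询预存校准系数
    device.set_mode('roi')
    device.set_wavelength(first_wl)
    biometric = device.identify()       # 根据第四眼部图像确定生物特征
    coeffs = coeff_db[biometric]        # 根据生物特征获取预存的校准系数

    # 阶段二：切换为降采样出图模式；若第一波长与第二波长相等则无需切换光源
    device.set_mode('downsample')
    if second_wl != first_wl:
        device.set_wavelength(second_wl)
    return coeffs
```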

可选的,若眼球追踪相关场景为第二眼球追踪场景,该场景为高精度眼球追踪场景,基于相机出图模式及光源波长对眼球追踪相关场景下的数据进行处理的过程可以是:基于ROI出图模式及第一波长的光源采集用户的第六眼部图像;根据第六眼部图像确定用户的生物特征;根据生物特征获取预存的用户的校准系数;将光源波长切换为第二波长,基于ROI出图模式及第二波长的光源采集用户的第七眼部图像;基于第七眼部图像和校准系数对用户进行眼球追踪。Optionally, if the eye tracking related scene is the second eye tracking scene, which is a high-precision eye tracking scene, the process of processing the data in the eye tracking related scene based on the camera output mode and the light source wavelength may be: based on the ROI output mode and the light source of the first wavelength, a sixth eye image of the user is collected; based on the sixth eye image, the user's biometric characteristics are determined; based on the biometric characteristics, a pre-stored calibration coefficient of the user is obtained; the light source wavelength is switched to the second wavelength, and based on the ROI output mode and the light source of the second wavelength, a seventh eye image of the user is collected; and eye tracking of the user is performed based on the seventh eye image and the calibration coefficient.

本实施例中,在进行眼球追踪时,首先要获取到用户的生物特征,以根据生物特征从数据库获取预存的用户的校准系数,此处对眼部图像的精度要求较高,因此此时选择ROI出图模式及第一波长的光源进行眼部图像的采集。在获得校准系数后,基于校准系数进行眼球追踪时,若对眼球追踪的精度要求较高,则继续使用ROI出图模式并使用第二波长的光源进行眼部图像的采集,并根据再次采集的眼部图像和校准系数进行眼球追踪。In this embodiment, when performing eye tracking, the user's biometrics must first be obtained to obtain the user's pre-stored calibration coefficients from the database based on the biometrics. Here, the accuracy of the eye image is required to be high, so the ROI output mode and the first wavelength light source are selected to collect the eye image. After obtaining the calibration coefficient, when performing eye tracking based on the calibration coefficient, if the accuracy of eye tracking is required to be high, the ROI output mode is continued to be used and the second wavelength light source is used to collect the eye image, and eye tracking is performed based on the re-collected eye image and the calibration coefficient.

示例性的,图6是本实施例中第二眼球追踪场景下的数据处理过程,该过程包括:用户完成可穿戴眼球追踪设备的佩戴调节;打开810/850nm光源照射眼部,并启动眼球追踪相机拍摄用户眼部,并以ROI出图模式输出眼部图像;根据眼部图像确定用户的生物特征,根据生物特征获取预存的用户的校准系数;将光源波长切换为940nm,再次采集用户的眼部图像;基于该眼部图像和校准系数进行眼球追踪。需要注意的是,如果第一波长和第二波长相等,则不需要切换光源。Exemplarily, FIG6 is the data processing process in the second eye tracking scenario in this embodiment, which includes: the user completes the wearing adjustment of the wearable eye tracking device; turns on the 810/850nm light source to illuminate the eyes, and starts the eye tracking camera to shoot the user's eyes, and outputs the eye image in ROI output mode; determines the user's biological characteristics based on the eye image, and obtains the pre-stored user's calibration coefficient based on the biological characteristics; switches the light source wavelength to 940nm, and collects the user's eye image again; performs eye tracking based on the eye image and the calibration coefficient. It should be noted that if the first wavelength and the second wavelength are equal, there is no need to switch the light source.

可选的,若眼球追踪相关场景为第三眼球追踪场景,该场景为低功耗低精度眼球追踪场景,则基于相机出图模式及光源波长对眼球追踪相关场景下的数据进行处理的过程可以是:基于ROI出图模式及第一波长的光源采集用户的第八眼部图像;根据第八眼部图像确定用户的生物特征;根据生物特征获取预存的用户的校准系数;将相机出图模式切换为ROI叠加降采样出图模式,将光源波长切换为第二波长或者依然采用第一波长;基于ROI叠加降采样出图模式及所述第一波长或所述第二波长的光源采集用户的第九眼部图像;基于所述第九眼部图像和所述校准系数对用户进行眼球追踪。Optionally, if the eye tracking related scenario is a third eye tracking scenario, which is a low-power and low-precision eye tracking scenario, the process of processing the data in the eye tracking related scenario based on the camera output mode and the light source wavelength may be: based on the ROI output mode and the light source of the first wavelength, collecting the eighth eye image of the user; determining the user's biometric characteristics based on the eighth eye image; obtaining the pre-stored user's calibration coefficient based on the biometric characteristics; switching the camera output mode to the ROI overlay downsampling output mode, switching the light source wavelength to the second wavelength or still using the first wavelength; collecting the user's ninth eye image based on the ROI overlay downsampling output mode and the light source of the first wavelength or the second wavelength; and performing eye tracking on the user based on the ninth eye image and the calibration coefficient.

本实施例中,在进行眼球追踪时,首先要获取到用户的生物特征,以根据生物特征从数据库获取预存的用户的校准系数,此处对眼部图像的精度要求较高,因此此时选择ROI出图模式及第一波长的光源进行眼部图像的采集。在获得校准系数后,基于校准系数进行眼球追踪时,因为该模式为低功耗低精度的眼球追踪模式,此处可以将相机出图模式切换为ROI叠加降采样出图模式,将光源波长保持不变或者切换为第二波长,以进行眼球追踪,由于输出的眼部图像像素更低,传输的数据也更少,故可以节省功耗。In this embodiment, when performing eye tracking, the user's biometrics must first be obtained to obtain the pre-stored user calibration coefficients from the database based on the biometrics. Here, the accuracy requirements for the eye image are relatively high, so the ROI output mode and the first wavelength light source are selected to collect the eye image. After obtaining the calibration coefficient, when performing eye tracking based on the calibration coefficient, because this mode is a low-power and low-precision eye tracking mode, the camera output mode can be switched to the ROI superimposed downsampling output mode, and the wavelength of the light source remains unchanged or switched to the second wavelength for eye tracking. Since the output eye image has lower pixels and less data is transmitted, power consumption can be saved.

其中,基于第九眼部图像和校准系数对用户进行眼球追踪的方式可以参见现有的注视点或者注视方向估计算法,此处不做限定。Among them, the method of tracking the user's eyes based on the ninth eye image and the calibration coefficient can refer to the existing gaze point or gaze direction estimation algorithm, which is not limited here.

示例性的,图7是本实施例中第三眼球追踪场景下的数据处理过程,该过程包括:用户完成可穿戴眼球追踪设备的佩戴调节;打开810/850nm光源照射眼部,并启动眼球追踪相机拍摄用户眼部,并以ROI出图模式输出眼部图像;根据眼部图像确定用户的生物特征,根据生物特征获取预存的用户的校准系数;将相机出图模式切换为ROI叠加降采样出图模式,光源波长保持不变或者将光源波长切换为940nm,采集用户的眼部图像;基于该眼部图像和校准系数进行眼球追踪。Exemplarily, FIG7 is a data processing process in the third eye tracking scenario in this embodiment, which includes: the user completes the wearing adjustment of the wearable eye tracking device; turns on the 810/850nm light source to illuminate the eyes, starts the eye tracking camera to shoot the user's eyes, and outputs the eye image in ROI output mode; determines the user's biometrics based on the eye image, and obtains the pre-stored user's calibration coefficient based on the biometrics; switches the camera output mode to the ROI superposition downsampling output mode, keeps the light source wavelength unchanged or switches the light source wavelength to 940nm, and collects the user's eye image; performs eye tracking based on the eye image and the calibration coefficient.

需要注意的是,如果第一波长和第二波长相等,则不需要切换光源。It should be noted that if the first wavelength and the second wavelength are equal, there is no need to switch the light source.
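第三眼球追踪场景“像素更低、传输数据更少、故可节省功耗”的结论可以按输出像素数量化示意如下（分辨率与降采样因子均为说明性假设）。The conclusion of the third eye tracking scenario, that fewer output pixels mean less transmitted data and lower power consumption, can be quantified per frame as follows (the resolution and downsampling factor are illustrative assumptions):

```python
def output_pixels(width, height, mode, roi=None, factor=2):
    """按相机出图模式估算单帧输出像素数。
    Estimate the per-frame output pixel count for each camera output mode.
    roi = (roi_width, roi_height); factor is the downsampling factor.
    """
    if mode == 'full':                    # 全像素出图模式
        return width * height
    if mode == 'roi':                     # ROI出图模式
        rw, rh = roi
        return rw * rh
    if mode == 'downsample':              # 降采样出图模式
        return (width // factor) * (height // factor)
    if mode == 'roi+downsample':          # ROI叠加降采样出图模式
        rw, rh = roi
        return (rw // factor) * (rh // factor)
    raise ValueError(mode)
```

以640x480相机和320x240的ROI为例，ROI叠加降采样出图模式的单帧像素数仅为全像素出图模式的1/16。For a 640x480 camera with a 320x240 ROI, the ROI-plus-downsampling output mode yields only 1/16 of the pixels of the full pixel output mode per frame.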

本实施例的技术方案,确定当前的眼球追踪相关场景;其中,眼球追踪相关场景包括如下至少一种:可穿戴眼球追踪设备佩戴场景、生物特征注册场景、眼球追踪校准场景、第一眼球追踪场景、第二眼球追踪场景及第三眼球追踪场景;根据眼球追踪相关场景确定相机出图模式及光源波长;其中,相机出图模式包括:全像素出图模式、感兴趣区域ROI出图模式、降采样出图模式、ROI叠加降采样出图模式;光源波长包括第一波长和第二波长;基于相机出图模式及光源波长对眼球追踪相关场景下的数据进行处理。本发明实施例提供的眼球追踪的处理方法,在不同的眼球追踪场景下选择对应的相机出图模式和/或光源波长,不仅可以提高眼球追踪的精度,还可以增加眼球追踪的适应性。The technical solution of this embodiment determines the current eye tracking related scene; wherein the eye tracking related scene includes at least one of the following: a wearable eye tracking device wearing scene, a biometric registration scene, an eye tracking calibration scene, a first eye tracking scene, a second eye tracking scene, and a third eye tracking scene; the camera output mode and the light source wavelength are determined according to the eye tracking related scene; wherein the camera output mode includes: a full pixel output mode, a region of interest ROI output mode, a downsampling output mode, and a ROI superposition downsampling output mode; the light source wavelength includes a first wavelength and a second wavelength; and the data in the eye tracking related scene is processed based on the camera output mode and the light source wavelength. The eye tracking processing method provided by the embodiment of the present invention selects the corresponding camera output mode and/or light source wavelength in different eye tracking scenes, which can not only improve the accuracy of eye tracking, but also increase the adaptability of eye tracking.
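上文各场景与相机出图模式、光源波长的对应关系可汇总为如下查表示意（波长取850/940nm为示例，第一波长亦可为810nm；注册与校准场景亦可选用括注的另一出图模式）。The correspondence between the scenarios above and the camera output mode and light source wavelength can be summarized as the following lookup sketch (850/940nm are example wavelengths, the first wavelength may also be 810nm; the registration and calibration scenarios may alternatively use the other output mode noted in the comments):

```python
# 场景 -> (相机出图模式, 光源波长/nm)；scene -> (camera output mode, wavelength)
SCENE_CONFIG = {
    'wearing':       ('downsample',     940),  # 可穿戴眼球追踪设备佩戴场景
    'registration':  ('full',           850),  # 生物特征注册场景（或 'roi'）
    'calibration':   ('roi',            850),  # 眼球追踪校准场景（或 'full'）
    'tracking_norm': ('downsample',     940),  # 第一眼球追踪场景（普通精度）
    'tracking_high': ('roi',            940),  # 第二眼球追踪场景（高精度）
    'tracking_low':  ('roi+downsample', 940),  # 第三眼球追踪场景（低功耗低精度）
}

def select_config(scene):
    """根据眼球追踪相关场景确定相机出图模式及光源波长。
    Determine the camera output mode and light source wavelength from the
    eye tracking related scenario."""
    return SCENE_CONFIG[scene]
```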

实施例二Embodiment 2

图8是本发明实施例二提供的一种眼球追踪的处理装置的结构示意图,如图8所示,该装置,包括:FIG8 is a schematic diagram of the structure of an eye tracking processing device provided in Embodiment 2 of the present invention. As shown in FIG8 , the device includes:

眼球追踪相关场景确定模块810,用于确定当前的眼球追踪相关场景;其中,眼球追踪相关场景包括如下至少一种:可穿戴眼球追踪设备佩戴场景、生物特征注册场景、眼球追踪校准场景、第一眼球追踪场景、第二眼球追踪场景及第三眼球追踪场景;The eye tracking related scene determination module 810 is used to determine the current eye tracking related scene; wherein the eye tracking related scene includes at least one of the following: a wearable eye tracking device wearing scene, a biometric registration scene, an eye tracking calibration scene, a first eye tracking scene, a second eye tracking scene, and a third eye tracking scene;

出图模式及光源波长确定模块820,用于根据眼球追踪相关场景确定相机出图模式及光源波长;其中,相机出图模式包括如下至少一种:全像素出图模式、感兴趣区域ROI出图模式、降采样出图模式、ROI叠加降采样出图模式;光源波长包括第一波长和第二波长;The image output mode and light source wavelength determination module 820 is used to determine the camera image output mode and light source wavelength according to the eye tracking related scene; wherein the camera image output mode includes at least one of the following: full pixel image output mode, region of interest ROI image output mode, downsampling image output mode, ROI superposition downsampling image output mode; the light source wavelength includes a first wavelength and a second wavelength;

数据处理模块830,用于基于相机出图模式及光源波长对眼球追踪相关场景下的数据进行处理。The data processing module 830 is used to process the data in the eye tracking related scenes based on the camera output mode and the light source wavelength.

可选的,出图模式及光源波长确定模块820,还用于:Optionally, the image output mode and light source wavelength determination module 820 is further used to:

若眼球追踪相关场景为可穿戴眼球追踪设备佩戴场景,则确定的相机出图模式为降采样出图模式,确定的光源波长为第二波长;If the eye tracking related scene is a wearable eye tracking device wearing scene, the determined camera image output mode is a downsampling image output mode, and the determined light source wavelength is the second wavelength;

可选的,数据处理模块830,还用于:Optionally, the data processing module 830 is further configured to:

基于降采样出图模式及第二波长的光源采集用户的第一眼部图像;Collecting a first eye image of the user based on a downsampling image output mode and a light source of a second wavelength;

基于第一眼部图像对可穿戴眼球追踪设备的位置和/或瞳距进行调节。The position and/or pupil distance of the wearable eye tracking device are adjusted based on the first eye image.

可选的,数据处理模块830,还用于:Optionally, the data processing module 830 is further configured to:

根据第一眼部图像确定第一眼部特征数据;其中,第一眼部特征数据包括如下至少一项:瞳孔位置、瞳孔形状、眼皮位置、眼角位置;Determine first eye feature data according to the first eye image; wherein the first eye feature data includes at least one of the following: pupil position, pupil shape, eyelid position, and eye corner position;

根据第一眼部特征数据确定第一调节信息和/或第二调节信息;Determine first adjustment information and/or second adjustment information according to the first eye feature data;

根据第一调节信息引导用户对可穿戴眼球追踪设备进行位置调节,和/或根据第二调节信息对可穿戴眼球追踪设备的瞳距进行调整。The user is guided to adjust the position of the wearable eye tracking device according to the first adjustment information, and/or the pupil distance of the wearable eye tracking device is adjusted according to the second adjustment information.

可选的,出图模式及光源波长确定模块820,还用于:Optionally, the image output mode and light source wavelength determination module 820 is further used to:

若眼球追踪相关场景为生物特征注册场景,则确定的相机出图模式为全像素出图模式或者ROI出图模式,确定的光源波长为第一波长;If the eye tracking related scene is a biometric registration scene, the determined camera output mode is a full pixel output mode or a ROI output mode, and the determined light source wavelength is a first wavelength;

可选的,数据处理模块830,还用于:Optionally, the data processing module 830 is further configured to:

基于全像素出图模式或者ROI出图模式及第一波长的光源采集用户注视不同方向时的眼部图像,获得多张第二眼部图像;Based on the full pixel output mode or the ROI output mode and the light source of the first wavelength, the eye images of the user when gazing in different directions are collected to obtain a plurality of second eye images;

分别提取多张第二眼部图像中的局部生物特征;respectively extracting local biological features from a plurality of second eye images;

将多个局部生物特征进行拼接,获得目标生物特征;Splicing multiple local biometric features to obtain target biometric features;

将目标生物特征进行存储;其中,生物特征包括如下至少一项:虹膜特征、眼底特征、视网膜血管特征、眼纹巩膜上的血管特征、眼睛形状。The target biometric features are stored; wherein the biometric features include at least one of the following: iris features, fundus features, retinal vascular features, vascular features on eye patterns and sclera, and eye shape.

可选的,出图模式及光源波长确定模块820,还用于:Optionally, the image output mode and light source wavelength determination module 820 is further used to:

若眼球追踪相关场景为眼球追踪校准场景,则确定的相机出图模式为ROI出图模式或者全像素出图模式,确定的光源波长为第一波长;If the eye tracking related scene is an eye tracking calibration scene, the determined camera image output mode is the ROI image output mode or the full pixel image output mode, and the determined light source wavelength is the first wavelength;

可选的,数据处理模块830,还用于:Optionally, the data processing module 830 is further configured to:

基于ROI出图模式或者全像素出图模式及第一波长的光源采集用户注视设定校准点的第三眼部图像;Based on the ROI output mode or the full pixel output mode and the light source of the first wavelength, a third eye image of the user gazing at the set calibration point is collected;

基于第三眼部图像确定校准系数。Calibration coefficients are determined based on the third eye image.

可选的,数据处理模块830,还用于:Optionally, the data processing module 830 is further configured to:

根据第三眼部图像确定第三眼部特征数据;其中,第三眼部特征数据包括如下至少一项:眼底数据、瞳孔位置、瞳孔形状、虹膜位置、虹膜形状、眼皮位置、眼角位置、光斑位置;Determine third eye feature data according to the third eye image; wherein the third eye feature data includes at least one of the following: fundus data, pupil position, pupil shape, iris position, iris shape, eyelid position, eye corner position, and light spot position;

基于第三眼部特征数据确定校准系数;determining a calibration coefficient based on the third eye feature data;

将校准系数和生物特征绑定后存储。The calibration coefficients are bound to the biometric features and stored.

可选的,出图模式及光源波长确定模块820,还用于:Optionally, the image output mode and light source wavelength determination module 820 is further used to:

若眼球追踪相关场景为第一眼球追踪场景,则确定的相机出图模式为ROI出图模式,确定的光源波长为第一波长;If the eye tracking related scene is the first eye tracking scene, the determined camera output mode is the ROI output mode, and the determined light source wavelength is the first wavelength;

可选的,数据处理模块830,还用于:Optionally, the data processing module 830 is further configured to:

基于ROI出图模式及第一波长的光源采集用户的第四眼部图像;Collecting a fourth eye image of the user based on the ROI output mode and the light source of the first wavelength;

根据第四眼部图像确定用户的生物特征;determining a biometric characteristic of the user based on the fourth eye image;

根据生物特征获取预存的用户的校准系数;Obtaining a pre-stored user calibration coefficient based on biometrics;

将相机出图模式切换为降采样出图模式,将光源波长切换为第二波长;Switch the camera output mode to downsampling output mode, and switch the light source wavelength to the second wavelength;

基于降采样出图模式及第二波长的光源采集用户的第五眼部图像;Collecting a fifth eye image of the user based on the downsampling image output mode and the light source of the second wavelength;

基于第五眼部图像和校准系数对用户进行眼球追踪。Eye tracking of the user is performed based on the fifth eye image and the calibration coefficients.

可选的,出图模式及光源波长确定模块820,还用于:Optionally, the image output mode and light source wavelength determination module 820 is further used to:

若眼球追踪相关场景为第二眼球追踪场景,则确定的相机出图模式为ROI出图模式,确定的光源波长为第一波长;If the eye tracking related scene is the second eye tracking scene, the determined camera output mode is the ROI output mode, and the determined light source wavelength is the first wavelength;

可选的,数据处理模块830,还用于:Optionally, the data processing module 830 is further configured to:

基于ROI出图模式及第一波长的光源采集用户的第六眼部图像;Collecting a sixth eye image of the user based on the ROI output mode and the light source of the first wavelength;

根据第六眼部图像确定用户的生物特征;determining a biometric characteristic of the user based on the sixth ocular image;

根据生物特征获取预存的用户的校准系数;Obtaining a pre-stored user calibration coefficient based on biometrics;

基于ROI出图模式及第二波长的光源采集用户的第七眼部图像;Collecting a seventh eye image of the user based on the ROI output mode and the light source of the second wavelength;

基于第七眼部图像和校准系数对用户进行眼球追踪。Eye tracking of the user is performed based on the seventh eye image and the calibration coefficients.

可选的,出图模式及光源波长确定模块820,还用于:Optionally, the image output mode and light source wavelength determination module 820 is further used to:

若眼球追踪相关场景为第三眼球追踪场景,则确定的相机出图模式为ROI出图模式和ROI叠加降采样出图模式,确定的光源波长为所述第一波长或者所述第一波长和所述第二波长;If the eye tracking related scene is the third eye tracking scene, the determined camera output mode is the ROI output mode and the ROI superposition downsampling output mode, and the determined light source wavelength is the first wavelength or the first wavelength and the second wavelength;

可选的,数据处理模块830,还用于:Optionally, the data processing module 830 is further configured to:

基于ROI出图模式及第一波长的光源采集用户的第八眼部图像;Collecting an eighth eye image of the user based on the ROI output mode and the light source of the first wavelength;

根据第八眼部图像确定所述用户的生物特征;determining a biometric characteristic of the user based on an eighth eye image;

根据生物特征获取预存的用户的校准系数;Obtaining a pre-stored user calibration coefficient based on biometrics;

基于ROI叠加降采样出图模式及第一波长或第二波长的光源采集用户的第九眼部图像;Collecting a ninth eye image of the user based on the ROI superposition downsampling output mode and the light source of the first wavelength or the second wavelength;

基于所述第九眼部图像和所述校准系数对用户进行眼球追踪。Eye tracking of the user is performed based on the ninth eye image and the calibration coefficient.

上述装置可执行本发明前述所有实施例所提供的方法,具备执行上述方法相应的功能模块和有益效果。未在本实施例中详尽描述的技术细节,可参见本发明前述所有实施例所提供的方法。The above device can execute the methods provided by all the above embodiments of the present invention, and has the corresponding functional modules and beneficial effects of executing the above methods. For technical details not described in detail in this embodiment, please refer to the methods provided by all the above embodiments of the present invention.

实施例三Embodiment 3

图9示出了可以用来实施本发明的实施例的电子设备10的结构示意图。电子设备旨在表示各种形式的数字计算机,诸如,膝上型计算机、台式计算机、工作台、个人数字助理、服务器、刀片式服务器、大型计算机、和其它适合的计算机。电子设备还可以表示各种形式的移动装置,诸如,个人数字处理、蜂窝电话、智能电话、可穿戴设备(如头盔、XR眼镜、手表等)和其它类似的计算装置。本文所示的部件、它们的连接和关系、以及它们的功能仅仅作为示例,并且不意在限制本文中描述的和/或者要求的本发明的实现。FIG9 shows a block diagram of an electronic device 10 that can be used to implement an embodiment of the present invention. The electronic device is intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices (such as helmets, XR glasses, watches, etc.) and other similar computing devices. The components shown herein, their connections and relationships, and their functions are merely examples and are not intended to limit the implementation of the present invention described and/or required herein.

如图9所示,电子设备10包括至少一个处理器11,以及与至少一个处理器11通信连接的存储器,如只读存储器(ROM)12、随机访问存储器(RAM)13等,其中,存储器存储有可被至少一个处理器执行的计算机程序,处理器11可以根据存储在只读存储器(ROM)12中的计算机程序或者从存储单元18加载到随机访问存储器(RAM)13中的计算机程序,来执行各种适当的动作和处理。在RAM 13中,还可存储电子设备10操作所需的各种程序和数据。处理器11、ROM 12以及RAM 13通过总线14彼此相连。输入/输出(I/O)接口15也连接至总线14。As shown in FIG9 , the electronic device 10 includes at least one processor 11, and a memory connected to the at least one processor 11 in communication, such as a read-only memory (ROM) 12, a random access memory (RAM) 13, etc., wherein the memory stores a computer program that can be executed by at least one processor, and the processor 11 can perform various appropriate actions and processes according to the computer program stored in the read-only memory (ROM) 12 or the computer program loaded from the storage unit 18 to the random access memory (RAM) 13. In the RAM 13, various programs and data required for the operation of the electronic device 10 can also be stored. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.

电子设备10中的多个部件连接至I/O接口15,包括:输入单元16,例如键盘、鼠标等;输出单元17,例如各种类型的显示器、扬声器等;存储单元18,例如磁盘、光盘等;以及通信单元19,例如网卡、调制解调器、无线通信收发机等。通信单元19允许电子设备10通过诸如因特网的计算机网络和/或各种电信网络与其他设备交换信息/数据。A number of components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16, such as a keyboard, a mouse, etc.; an output unit 17, such as various types of displays, speakers, etc.; a storage unit 18, such as a disk, an optical disk, etc.; and a communication unit 19, such as a network card, a modem, a wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.

处理器11可以是各种具有处理和计算能力的通用和/或专用处理组件。处理器11的一些示例包括但不限于中央处理单元(CPU)、图形处理单元(GPU)、各种专用的人工智能(AI)计算芯片、各种运行机器学习模型算法的处理器、数字信号处理器(DSP)、以及任何适当的处理器、控制器、微控制器等。处理器11执行上文所描述的各个方法和处理,例如眼球追踪的处理方法。The processor 11 may be a variety of general and/or special processing components with processing and computing capabilities. Some examples of the processor 11 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various special artificial intelligence (AI) computing chips, various processors running machine learning model algorithms, a digital signal processor (DSP), and any appropriate processor, controller, microcontroller, etc. The processor 11 executes the various methods and processes described above, such as the processing method of eye tracking.

In some embodiments, the eye tracking processing method may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the eye tracking processing method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the eye tracking processing method in any other appropriate manner (e.g., by means of firmware).
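The core of the method described above is a dispatch from the current eye-tracking-related scenario to a camera image output mode and light source wavelength. The following is a minimal, hypothetical sketch of that dispatch; all class names, and the concrete wavelength values (850 nm / 940 nm), are illustrative assumptions not stated in this document, and scenarios that permit alternative modes are shown with only one representative choice.

```python
# Hypothetical sketch of the scenario -> (camera output modes, light source
# wavelengths) dispatch. Names and wavelength values are assumptions.
from enum import Enum, auto

class Scenario(Enum):
    WEARING = auto()                  # wearable-device wearing scenario
    BIOMETRIC_REGISTRATION = auto()
    CALIBRATION = auto()
    TRACKING_1 = auto()
    TRACKING_2 = auto()
    TRACKING_3 = auto()

class OutputMode(Enum):
    FULL_PIXEL = auto()
    ROI = auto()                      # region-of-interest readout
    DOWNSAMPLED = auto()
    ROI_DOWNSAMPLED = auto()          # ROI combined with downsampling

WAVELENGTH_1 = 850  # nm, placeholder for the "first wavelength"
WAVELENGTH_2 = 940  # nm, placeholder for the "second wavelength"

def select_config(scenario):
    """Return ([output modes], [wavelengths]) for a given scenario."""
    table = {
        Scenario.WEARING: ([OutputMode.DOWNSAMPLED], [WAVELENGTH_2]),
        Scenario.BIOMETRIC_REGISTRATION: ([OutputMode.FULL_PIXEL],
                                          [WAVELENGTH_1]),
        Scenario.CALIBRATION: ([OutputMode.ROI], [WAVELENGTH_1]),
        Scenario.TRACKING_1: ([OutputMode.ROI, OutputMode.DOWNSAMPLED],
                              [WAVELENGTH_1, WAVELENGTH_2]),
        Scenario.TRACKING_2: ([OutputMode.ROI],
                              [WAVELENGTH_1, WAVELENGTH_2]),
        Scenario.TRACKING_3: ([OutputMode.ROI, OutputMode.ROI_DOWNSAMPLED],
                              [WAVELENGTH_1, WAVELENGTH_2]),
    }
    return table[scenario]
```

A lookup table keeps the scenario-to-configuration mapping declarative, so adding a new scenario only means adding one table entry.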

Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor capable of receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.

A computer program for implementing the methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus, such that the computer program, when executed by the processor, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. A computer program may execute entirely on a machine, partly on a machine, partly on a machine and partly on a remote machine as a stand-alone software package, or entirely on a remote machine or server.

In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. A computer-readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer-readable storage medium may be a machine-readable signal medium. More specific examples of a machine-readable storage medium include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

To provide interaction with a user, the systems and techniques described herein may be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which the user can provide input to the electronic device. Other kinds of devices may also be used to provide interaction with the user; for example, the feedback provided to the user may be any form of sensory feedback (e.g., visual, auditory, or tactile feedback), and input from the user may be received in any form (including acoustic, speech, or tactile input).

The systems and techniques described herein may be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer with a graphical user interface or a web browser through which the user can interact with implementations of the systems and techniques described herein), or in a computing system that includes any combination of such back-end, middleware, or front-end components. The components of the system may be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), a blockchain network, and the Internet.

A computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The client-server relationship arises from computer programs running on the respective computers and having a client-server relationship with each other. The server may be a cloud server, also known as a cloud computing server or cloud host, which is a host product in the cloud computing service system that addresses the drawbacks of difficult management and weak business scalability found in traditional physical hosts and virtual private server (VPS) services.

It should be understood that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solution of the present invention can be achieved; no limitation is imposed herein.

The specific implementations described above do not limit the protection scope of the present invention. Those skilled in the art should understand that various modifications, combinations, sub-combinations, and substitutions may be made according to design requirements and other factors. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (12)

1. An eye tracking processing method, comprising:
determining a current eye-tracking-related scenario, wherein the eye-tracking-related scenario comprises at least one of the following: a wearable eye tracking device wearing scenario, a biometric registration scenario, an eye tracking calibration scenario, a first eye tracking scenario, a second eye tracking scenario, and a third eye tracking scenario;
determining a camera image output mode and a light source wavelength according to the eye-tracking-related scenario, wherein the camera image output mode comprises at least one of the following: a full-pixel output mode, a region-of-interest (ROI) output mode, a downsampling output mode, and an ROI-plus-downsampling output mode, and the light source wavelength comprises a first wavelength and a second wavelength; and
processing data in the eye-tracking-related scenario based on the camera image output mode and the light source wavelength.

2. The method according to claim 1, wherein determining the camera image output mode and the light source wavelength according to the eye-tracking-related scenario comprises:
if the eye-tracking-related scenario is the wearable eye tracking device wearing scenario, determining the camera image output mode to be the downsampling output mode and the light source wavelength to be the second wavelength;
and correspondingly, processing the data in the eye-tracking-related scenario based on the camera image output mode and the light source wavelength comprises:
collecting a first eye image of a user based on the downsampling output mode and the light source of the second wavelength; and
adjusting a position and/or an interpupillary distance of the wearable eye tracking device based on the first eye image.

3. The method according to claim 2, wherein adjusting the position and/or the interpupillary distance of the wearable eye tracking device based on the eye image comprises:
determining first eye feature data according to the first eye image, wherein the first eye feature data comprises at least one of the following: a pupil position, a pupil shape, an eyelid position, and an eye corner position;
determining first adjustment information and/or second adjustment information according to the first eye feature data; and
guiding the user to adjust the position of the wearable eye tracking device according to the first adjustment information, and/or adjusting the interpupillary distance of the wearable eye tracking device according to the second adjustment information.

4. The method according to claim 1, wherein determining the camera image output mode and the light source wavelength according to the eye-tracking-related scenario comprises:
if the eye-tracking-related scenario is the biometric registration scenario, determining the camera image output mode to be the full-pixel output mode or the ROI output mode, and the light source wavelength to be the first wavelength;
and correspondingly, processing the data in the eye-tracking-related scenario based on the camera image output mode and the light source wavelength comprises:
collecting eye images of the user gazing in different directions based on the full-pixel output mode or the ROI output mode and the light source of the first wavelength, to obtain a plurality of second eye images;
extracting local biometric features from the plurality of second eye images respectively;
stitching the plurality of local biometric features to obtain a target biometric feature; and
storing the target biometric feature, wherein the biometric features comprise at least one of the following: an iris feature, a fundus feature, a retinal blood vessel feature, a scleral blood vessel (eye print) feature, and an eye shape.

5. The method according to claim 1, wherein determining the camera image output mode and the light source wavelength according to the eye-tracking-related scenario comprises:
if the eye-tracking-related scenario is the eye tracking calibration scenario, determining the camera image output mode to be the ROI output mode or the full-pixel output mode, and the light source wavelength to be the first wavelength;
and correspondingly, processing the data in the eye-tracking-related scenario based on the camera image output mode and the light source wavelength comprises:
collecting a third eye image of the user gazing at a set calibration point based on the ROI output mode or the full-pixel output mode and the light source of the first wavelength; and
determining a calibration coefficient based on the third eye image.

6. The method according to claim 5, wherein determining the calibration coefficient based on the third eye image comprises:
determining third eye feature data according to the third eye image, wherein the third eye feature data comprises at least one of the following: fundus data, a pupil position, a pupil shape, an iris position, an iris shape, an eyelid position, an eye corner position, and a light spot position; and
determining the calibration coefficient based on the third eye feature data;
and correspondingly, after determining the calibration coefficient based on the third eye image, the method further comprises:
binding the calibration coefficient to the biometric feature and storing them.

7. The method according to claim 1, wherein determining the camera image output mode and the light source wavelength according to the eye-tracking-related scenario comprises:
if the eye-tracking-related scenario is the first eye tracking scenario, determining the camera image output modes to be the ROI output mode and the downsampling output mode, and the light source wavelengths to be the first wavelength and the second wavelength;
and correspondingly, processing the data in the eye-tracking-related scenario based on the camera image output mode and the light source wavelength comprises:
collecting a fourth eye image of the user based on the ROI output mode and the light source of the first wavelength;
determining a biometric feature of the user according to the fourth eye image;
acquiring a pre-stored calibration coefficient of the user according to the biometric feature;
switching the camera image output mode to the downsampling output mode, and switching the light source wavelength to the second wavelength;
collecting a fifth eye image of the user based on the downsampling output mode and the light source of the second wavelength; and
performing eye tracking on the user based on the fifth eye image and the calibration coefficient.

8. The method according to claim 1, wherein determining the camera image output mode and the light source wavelength according to the eye-tracking-related scenario comprises:
if the eye-tracking-related scenario is the second eye tracking scenario, determining the camera image output mode to be the ROI output mode, and the light source wavelengths to be the first wavelength and the second wavelength;
and correspondingly, processing the data in the eye-tracking-related scenario based on the camera image output mode and the light source wavelength comprises:
collecting a sixth eye image of the user based on the ROI output mode and the light source of the first wavelength;
determining a biometric feature of the user according to the sixth eye image;
acquiring a pre-stored calibration coefficient of the user according to the biometric feature;
switching the light source wavelength to the second wavelength;
collecting a seventh eye image of the user based on the ROI output mode and the light source of the second wavelength; and
performing eye tracking on the user based on the seventh eye image and the calibration coefficient.

9. The method according to claim 1, wherein determining the camera image output mode and the light source wavelength according to the eye-tracking-related scenario comprises:
if the eye-tracking-related scenario is the third eye tracking scenario, determining the camera image output modes to be the ROI output mode and the ROI-plus-downsampling output mode, and the light source wavelength to be the first wavelength, or the first wavelength and the second wavelength;
and correspondingly, processing the data in the eye-tracking-related scenario based on the camera image output mode and the light source wavelength comprises:
collecting an eighth eye image of the user based on the ROI output mode and the light source of the first wavelength;
determining a biometric feature of the user according to the eighth eye image;
acquiring a pre-stored calibration coefficient of the user according to the biometric feature;
switching the camera image output mode to the ROI-plus-downsampling output mode, and switching the light source wavelength to the second wavelength or continuing to use the first wavelength;
collecting a ninth eye image of the user based on the ROI-plus-downsampling output mode and the light source of the first wavelength or the second wavelength; and
performing eye tracking on the user based on the ninth eye image and the calibration coefficient.

10. An eye tracking processing apparatus, comprising:
an eye-tracking-related scenario determination module, configured to determine a current eye-tracking-related scenario, wherein the eye-tracking-related scenario comprises at least one of the following: a wearable eye tracking device wearing scenario, a biometric registration scenario, an eye tracking calibration scenario, a first eye tracking scenario, and a second eye tracking scenario;
an image output mode and light source wavelength determination module, configured to determine a camera image output mode and a light source wavelength according to the eye-tracking-related scenario, wherein the camera image output mode comprises at least one of the following: a full-pixel output mode, a region-of-interest (ROI) output mode, a downsampling output mode, and an ROI-plus-downsampling output mode, and the light source wavelength comprises a first wavelength and a second wavelength; and
a data processing module, configured to process data in the eye-tracking-related scenario based on the camera image output mode and the light source wavelength.

11. An electronic device, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor, wherein
the memory stores a computer program executable by the at least one processor, and the computer program is executed by the at least one processor to enable the at least one processor to perform the eye tracking processing method according to any one of claims 1-8.

12. A computer-readable storage medium storing computer instructions, wherein the computer instructions, when executed, cause a processor to implement the eye tracking processing method according to any one of claims 1-8.
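The first eye tracking scenario in claim 7 describes a two-phase flow: a detailed ROI capture under the first wavelength identifies the user and retrieves their calibration, then the camera switches to a cheaper downsampled stream under the second wavelength for tracking. The sketch below illustrates that sequencing only; the `Camera` and `Light` classes, the wavelength values, and the stub biometric/gaze functions passed in are all hypothetical stand-ins, not the patented implementation.

```python
# Hypothetical, simplified sketch of claim 7's two-phase flow.
# All classes and values are illustrative assumptions.

class Camera:
    def __init__(self):
        self.mode = None
    def set_mode(self, mode):
        self.mode = mode
    def capture(self):
        # Stand-in: a real driver would return pixel data for the current mode.
        return {"mode": self.mode}

class Light:
    def __init__(self):
        self.wavelength = None
    def set_wavelength(self, nm):
        self.wavelength = nm

def track_first_scenario(camera, light, calibration_store,
                         extract_biometric, estimate_gaze):
    # Phase 1: identify the user from a detailed ROI image under wavelength 1.
    camera.set_mode("roi")
    light.set_wavelength(850)        # "first wavelength", placeholder value
    fourth_image = camera.capture()
    user_id = extract_biometric(fourth_image)
    calibration = calibration_store[user_id]   # pre-stored per-user coefficients

    # Phase 2: switch to a cheaper downsampled stream under wavelength 2
    # and estimate gaze with the user's calibration coefficients.
    camera.set_mode("downsampled")
    light.set_wavelength(940)        # "second wavelength", placeholder value
    fifth_image = camera.capture()
    return estimate_gaze(fifth_image, calibration)
```

A usage sketch with trivial stubs: `track_first_scenario(Camera(), Light(), {"u1": 2.5}, lambda img: "u1", lambda img, c: c)` returns the looked-up calibration value, after leaving the camera in downsampled mode and the light at the second wavelength.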
CN202310512179.8A 2023-05-08 2023-05-08 Eyeball tracking processing method, device, equipment and storage medium Pending CN118918139A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202310512179.8A CN118918139A (en) 2023-05-08 2023-05-08 Eyeball tracking processing method, device, equipment and storage medium
PCT/CN2024/091311 WO2024230660A1 (en) 2023-05-08 2024-05-07 Processing method and apparatus for eyeball tracking, device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310512179.8A CN118918139A (en) 2023-05-08 2023-05-08 Eyeball tracking processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN118918139A true CN118918139A (en) 2024-11-08

Family

ID=93298412

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310512179.8A Pending CN118918139A (en) 2023-05-08 2023-05-08 Eyeball tracking processing method, device, equipment and storage medium

Country Status (2)

Country Link
CN (1) CN118918139A (en)
WO (1) WO2024230660A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105411525B (en) * 2015-11-10 2017-05-31 广州河谷互动医疗科技有限公司 A kind of fundus photograph image intelligent obtains identifying system
JP2017213191A (en) * 2016-05-31 2017-12-07 富士通株式会社 Gaze detection device, gaze detection method, and gaze detection program
CN108828771A (en) * 2018-06-12 2018-11-16 北京七鑫易维信息技术有限公司 Parameter regulation means, device, wearable device and the storage medium of wearable device
CN110427101A (en) * 2019-07-08 2019-11-08 北京七鑫易维信息技术有限公司 Calibration method, device, equipment and the storage medium of eyeball tracking
CN112164043A (en) * 2020-09-23 2021-01-01 苏州大学 Method and system for splicing multiple fundus images

Also Published As

Publication number Publication date
WO2024230660A1 (en) 2024-11-14

Similar Documents

Publication Publication Date Title
CN111598818B (en) Training method and device for face fusion model and electronic equipment
US11250241B2 (en) Face image processing methods and apparatuses, and electronic devices
US20220066553A1 (en) Eye image selection
CN109086726B (en) Local image identification method and system based on AR intelligent glasses
CN104657648B (en) Head-mounted display device and login method thereof
RU2672502C1 (en) Device and method for forming cornea image
Plopski et al. Corneal-imaging calibration for optical see-through head-mounted displays
US20190222830A1 (en) Display systems and methods for determining registration between a display and a user's eyes
US10488925B2 (en) Display control device, control method thereof, and display control system
US9911214B2 (en) Display control method and display control apparatus
US11972042B2 (en) Variable intensity distributions for gaze detection assembly
CN109032351B (en) Fixation point function determination method, fixation point determination device and terminal equipment
CN106526857B (en) Focus adjustment method and device
CN112069480B (en) Display method, device, storage medium and wearable device
WO2020215960A1 (en) Method and device for determining area of gaze, and wearable device
CN113628239B (en) Display optimization method, related device and computer program product
CN111601373A (en) Backlight brightness control method and device, mobile terminal and storage medium
CN118918139A (en) Eyeball tracking processing method, device, equipment and storage medium
WO2024120179A1 (en) Glasses diopter identification method and apparatus, electronic device and storage medium
US10083675B2 (en) Display control method and display control apparatus
CN118113140A (en) Sight tracking method and device, eye control equipment and storage medium
CN112562066B (en) Image reconstruction method and device and electronic equipment
CN115883816A (en) Display method and device, head-mounted display equipment and storage medium
CN104216151A (en) Automatic focusing device and method of LCD
CN114816065A (en) Screen backlight adjusting method, virtual reality device and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination