CN114973392A - Human eye motion tracking system and method - Google Patents
- Publication number
- CN114973392A (application CN202210672708.6A)
- Authority
- CN
- China
- Prior art keywords
- camera
- image
- user
- eyeball
- pupil
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/19—Sensors therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/193—Preprocessing; Feature extraction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
Abstract
The invention discloses a human eye movement tracking system and method, wherein the system comprises: a first camera used for shooting a first face image of a user; a second camera used for shooting a screen image of a display screen, the display screen being used for displaying a self-defined stimulation picture; an infrared illuminating lamp used for emitting infrared rays to the user's eyeballs; and a signal processing unit used for sending an instruction for displaying the stimulation picture to the display screen and determining the initial sight direction of the user according to the first face image, the stimulation picture and the position of the infrared illuminating lamp; setting a camera positioned in the initial sight direction as an assisting camera, and shooting a second face image of the user with the assisting camera; and stitching a target eyeball image of the user from the first face image and the second face image, and acquiring a target sight direction according to the target eyeball image and the screen image. By applying the embodiments of the invention, the accuracy of the determined user sight line can be improved.
Description
Technical Field
The invention relates to the technical field of visual image processing, in particular to a human eye motion tracking system and a human eye motion tracking method.
Background
Infrared eye tracking is a technique that tracks a user's visual focus using infrared light. Its basic principle is as follows: a beam of near-infrared light is directed at the user's face; the user's eyeball reflects the near-infrared light, forming a light spot at the reflection point, which the camera captures when shooting an eye image; meanwhile, the pupil center position is obtained by image processing. The corneal reflection point is then used as a base point for the relative position between the eye camera and the eyeball, and the gaze vector coordinates can be obtained from the offset of the image-processed pupil center position relative to the corneal reflection point, thereby determining the human eye fixation point.
However, human eyes differ individually in shape, size, structure and facial posture; the projection position in the camera reference frame of a point on the eye's spherical surface is nonlinearly related to the eye rotation angle; and a model error exists between the estimated gaze direction and the real gaze direction. The technical problem is therefore that eye movement data captured by a single camera are not accurate enough.
Disclosure of Invention
The invention aims to provide a human eye movement tracking system and a human eye movement tracking method so as to improve the accuracy of the determined user sight line data.
The invention solves the technical problems through the following technical scheme:
the invention provides a human eye motion tracking system, which comprises: a first camera, a second camera, an assisting camera, an infrared illuminating lamp, a signal processing unit and a display screen, wherein
the first camera is used for shooting a first face image of a user;
the second camera is used for shooting a screen image of the display screen, and the display screen is used for displaying a self-defined stimulation picture;
the infrared illuminating lamp is used for emitting infrared rays to the user's eyeballs;
the signal processing unit is used for sending an instruction for displaying the stimulation picture to the display screen and determining the initial sight direction of the user according to the first face image, the stimulation picture and the position of the infrared illuminating lamp; setting a camera positioned in the initial sight direction as the assisting camera, and shooting a second face image of the user with the assisting camera; and stitching a target eyeball image of the user from the first face image and the second face image, and acquiring a target sight direction according to the target eyeball image and the screen image.
Optionally, one or a combination of the first camera, the second camera and the infrared illuminating lamp is installed on the wearable device.
Optionally, the signal processing unit is further configured to obtain eye movement parameters of the user, where the eye movement parameters include: the number of blinks, the pupil diameter, the eyeball center coordinates and the motion track of the eyeball center.
The present invention also provides a human eye motion tracking method applied to a signal processing unit in the system according to any one of claims 1 to 3, the method comprising:
sending an instruction for displaying a stimulation picture to a display screen, and acquiring a first face image of a user shot by a first camera; identifying the pupil center position and the area of the reflected light spot from the first face image by using a pre-trained eyeball detection model;
calculating the initial sight direction of the user by using a sight mapping model according to the offset of the pupil center position relative to the spot center position;
setting a camera positioned in the initial sight direction as an assisting camera, and shooting a second face image of the user with the assisting camera;
and stitching the first face image and the second face image into a target eyeball image according to the positions of the first camera and the assisting camera, and acquiring a target sight direction according to the target eyeball image and the screen image.
Optionally, the identifying of the pupil center position from the first face image by using the pre-trained eyeball detection model includes:
identifying an eye image area from the first face image, and cutting the eye image out of the user's face image according to the contour line of the eye image area;
carrying out graying, Gaussian filtering and binarization processing on the eye image in sequence to obtain a preprocessed image;
recognizing a pupil region from the preprocessed image by using the pre-trained eyeball detection model;
screening out a target area from the pupil region according to a preset binarization threshold;
detecting the pupil edge from the target area by using an edge detection algorithm, and fitting the pupil edge to an ellipse to obtain a fitted target area;
excluding abnormal pupils from the target area according to preset pupil constraint conditions;
and taking the center point of the target area, after abnormal pupils are excluded, as the pupil center position.
Optionally, the setting of the camera positioned in the initial sight direction as an assisting camera includes:
generating a virtual ray taking the user's eyeball as the starting point and the direction corresponding to the initial sight line as its direction, and taking cameras positioned on the virtual ray as candidate cameras;
and for each candidate camera, taking a candidate camera capable of shooting the user's eyeball as an assisting camera.
Optionally, the setting of the camera positioned in the initial sight direction as an assisting camera includes:
generating a first virtual cone taking the user's eyeball as the starting point, the direction corresponding to the initial sight line as the central axis and a set angle as the cone angle, and taking cameras positioned within the range of the first virtual cone as candidate cameras;
and for each candidate camera, taking a candidate camera capable of shooting the user's eyeball as an assisting camera.
Optionally, the stitching of the first face image and the second face image into the target eyeball image according to the respective positions of the first camera and the assisting camera includes:
dividing the eyeball area evenly among the cameras according to the distribution positions of the first camera and the assisting cameras;
calculating the size of a cutting cone angle according to the corresponding eyeball area, the distance of the first camera from the eyeball and the distance of the assisting camera from the eyeball;
for the first camera and each assisting camera, constructing a second virtual cone with the camera's optical axis direction as the central axis and the cutting cone angle as the cone angle, taking the area enclosed by the cut line of the second virtual cone on the shot eyeball image as the cutting area, and cutting a corresponding sub-image out of the shot eyeball image according to the cutting area;
and converting the pixel points of the sub-images into the same coordinate system according to the coordinates of the corresponding cameras, stitching the sub-images into a complete eyeball image according to the eyeball areas corresponding to the sub-images, and taking the complete eyeball image as the target eyeball image.
Optionally, the acquisition process of the sight mapping model includes:
identifying the display position of the stimulation picture from the screen image shot by the second camera;
acquiring the sight direction of the user according to the pupil center position and the display position of the stimulation picture; and correcting the sight mapping model by using the position of the infrared illuminating lamp, the center position of the reflected light spot and the offset of the pupil center position relative to the reflected light spot.
Compared with the prior art, the invention has the following advantages:
according to the method, the initial sight direction of the user is determined through the first camera, and the assisting camera is then determined based on that initial sight direction, so that the eyeball image it shoots is more accurate; meanwhile, the first camera is at a closer distance. Stitching the shooting results of the first camera and the assisting camera into the target eyeball image therefore yields a total distortion smaller than that of an image shot by a single camera, so the accuracy of the target sight line determined from the target eyeball image is higher.
Moreover, the plurality of cameras each shoot a sub-image within a small range around their own optical axis direction, and these sub-images are then stitched into the target eyeball image; since the distortion of an image shot by a camera is smallest along its optical axis, the accuracy of eye tracking is further improved.
In addition, a plurality of cameras are mutually matched, so that errors and external interference existing in shooting of a single camera can be eliminated.
Drawings
Fig. 1 is a schematic structural diagram of a human eye motion tracking system according to an embodiment of the present invention;
fig. 2 is a schematic diagram of relative positions of a reflected light spot and a pupil in the eye movement tracking method according to the embodiment of the present invention;
fig. 3 is a schematic diagram illustrating a principle of pupil center position identification in a human eye movement tracking method according to an embodiment of the present invention.
Detailed Description
The following examples are given for the detailed implementation and the specific operation procedures, but the scope of the present invention is not limited to the following examples.
Example 1
Embodiment 1 of the present invention provides a human eye motion tracking system. Fig. 1 is a schematic structural diagram of the human eye motion tracking system provided in the embodiment of the present invention. As shown in fig. 1, the system includes: a first camera 3, a second camera 6, an assisting camera, an infrared illuminating lamp 2, a signal processing unit 5 and a display screen 0. The first camera is arranged on a base 1 and is used for collecting eye images of the wearer. The second camera is arranged on the base and is used for collecting visual field images of the stimulation picture in front of the wearer. The signal processing unit is connected to the first camera, the second camera, the infrared illuminating lamp and the display screen 0 respectively; the display screen 0 is directly in front of the field of view, and self-defined information can be projected on the screen, wherein,
the first camera is used for shooting a first face image of a user; the first face image includes pixel contents corresponding to the user's eye 4.
The second camera is used for shooting a screen image of the display screen 0, and the display screen 0 is used for displaying a self-defined stimulation picture;
the infrared illuminating lamp is used for emitting infrared rays to eyeballs of a user;
the signal processing unit 5 is used for sending an instruction for displaying the stimulation picture to the display screen 0 and determining the initial sight direction of the user according to the first face image, the stimulation picture and the position of the infrared illuminating lamp 2; setting the camera positioned in the first sight line direction as an assistant camera, and shooting a second face image of the user by using the assistant camera; and splicing a target eyeball image of the user according to the first surface image and the second surface image of the user, and acquiring a target sight direction according to the target eyeball image and the screen image. The signal processing unit comprises an image processing and communication module. The image processing is used for extracting human eye motion information and tracking a visual axis. And the communication module is used for matching and analyzing the tracking result of the human eyes and the self-defined information. The unit can upload real-time processing data to a register, a server, an upper computer or a memory.
As shown in fig. 1, the base 1 may be a conventional pair of spectacles or a stand in the form of a similar spectacle frame. The first camera 3 may be an iris camera, whose function is to image the human eye region. The first camera 3 is arranged on the frame-shaped base, installed in front of and obliquely below the human eye, with a shooting angle chosen so that the entire eye and face region is captured. In embodiments not shown, it may also be fixed to the left of, the right of, or above the frame. The first camera 3 is in communication connection with the signal processing unit 5 and transmits the first face image to it; the specific connection may be through a data line or wireless.
The infrared illuminating lamp 2 is used to supplement light for the camera and to provide a reference for the visual axis direction; the infrared lamp must meet human eye safety requirements. It can be placed at any position on the base frame, provided only that its light is incident on the human eye. The spatial relative position between the infrared illuminating lamp 2 and the first camera 3 can be determined according to actual requirements. In practical applications, 2 or more infrared illuminating lamps can be used.
The display screen 0 is arranged directly in front of the human eye movement monitoring device and is likewise connected with the signal processing unit 5. Information sent by the signal processing unit can be displayed on the screen, for example a cross frame displayed in the center of the screen; the connection between the display screen 0 and the signal processing unit 5 can be wired or wireless. The screen can display a cross frame, a dot and the like, at any position on the screen. To adapt to the shape of the user's eyes and improve viewing comfort, the display screen may be a spherical screen.
The signal processing unit 5 determines the initial sight direction of the user by using the pupil-corneal reflection method according to the first face image, the stimulation picture and the position of the infrared illuminating lamp 2; the specific determination process is prior art and is not described again here.
Then, a camera positioned in the extension direction of the initial sight line is set as the assisting camera, and a second face image of the user is shot with the assisting camera. A first eyeball area is identified from the first face image and a second eyeball area is identified from the second face image. A corresponding sub-image is then cut out of the first eyeball area and another corresponding sub-image out of the second eyeball area; the sub-images are stitched into the target eyeball image using an iris feature point matching method, and the target sight direction is acquired according to the target eyeball image and the screen image.
According to the invention, the detection and tracking of eyeball movement information, including the visual axis, can be realized through the first camera and the self-defined information on the display screen 0; the required hardware is simple and low-cost.
Further, in a specific implementation of embodiment 1 of the present invention, the signal processing unit is further configured to obtain eye movement parameters of the user, where the eye movement parameters include: the number of blinks, the pupil diameter, the eyeball center coordinates and the motion track of the eyeball center.
Example 2
Embodiment 2 of the present invention is described below taking sight line tracking as an example. In practical applications, the embodiment of the invention can also be used to count data such as the number of blinks.
Based on embodiment 1 of the present invention, embodiment 2 of the present invention provides a human eye motion tracking method, which is applied to a signal processing unit in the system described in embodiment 1, and the method includes:
s101: sending an instruction for displaying a stimulation picture to a display screen, and acquiring a first facial image of a user shot by a first camera; and identifying the pupil center position and the area of the reflected light spot from the eye detection model trained in advance.
Specifically, fig. 2 is a schematic diagram of the relative positions of the reflected light spot and the pupil in the eye movement tracking method according to the embodiment of the present invention. As shown in fig. 2, the signal processing unit sends an instruction to display the cross frame to the display screen; the display screen displays the cross frame after receiving the instruction, and when the user looks at the cross frame, the eyeball 201 rotates. The infrared illuminating lamp emits infrared rays, the eyeball reflects them to form a reflected light spot, and the first camera shoots a first face image of the user containing the user's eyeball area, namely the user's pupil 203 and the reflected light spot 205.
Fig. 3 is a schematic diagram illustrating the principle of pupil center position identification in the human eye movement tracking method according to the embodiment of the present invention. As shown in fig. 3, the signal processing unit may identify an eye image region from the first face image by using a preset recognition algorithm, such as a neural network algorithm. Because the eye image region identified at this point is only a rough region and not accurate enough, an edge detection algorithm is then used to identify the corresponding contour lines from the eye image region, and the eye image is cut out of the user's face image according to the contour line of the eye image area;
then, sequentially carrying out graying, Gaussian filtering and binarization processing on the eye image by using a preset algorithm to obtain a preprocessed image;
recognizing a pupil region from the preprocessed image by using the pre-trained eyeball detection model;
screening out a target area from the pupil area according to a preset binarization threshold;
the pupil edge is detected from the target region using an edge detection algorithm. Because the first camera is not arranged at the cross line position of the screen, a certain included angle exists between the shooting direction of the first camera and the visual axis direction of a user, and therefore when the first camera shoots the pupils, the pupils are in an elliptical shape; therefore, the pupil edge can be fitted to an ellipse to obtain a fitted target area;
according to a preset pupil constraint condition, if the pupil diameter exceeds a preset range, the model is explained to be capable of effectively identifying the pupil, or the ratio of the long axis and the short axis of the ellipse fitted by the shape of the pupil exceeds a set range, the fitting is not accurate enough, and the shot first face image cannot be used, so that the purpose of eliminating the abnormal pupil in the target area is achieved;
and taking the central point of the target area excluding the abnormal pupil as the pupil central position.
Similarly, a neural network recognition algorithm can be used to identify the area of the reflected light spot.
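As a concrete illustration of the pipeline above, the following Python/OpenCV sketch strings the steps together: graying, Gaussian filtering, binarization, contour (edge) detection, ellipse fitting and constraint screening. It is a minimal sketch under assumed parameters — the binarization threshold, diameter range and axis-ratio limit are illustrative, not values specified by the invention — and plain dark-region thresholding stands in for the pre-trained eyeball detection model.

```python
import cv2
import numpy as np

def pupil_center(eye_image_bgr,
                 binar_thresh=40,            # assumed binarization threshold
                 diameter_range=(10, 120),   # assumed valid pupil diameter (px)
                 max_axis_ratio=2.0):        # assumed max major/minor axis ratio
    # Graying, Gaussian filtering and binarization, in that order
    gray = cv2.cvtColor(eye_image_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (7, 7), 0)
    # The pupil is dark under infrared light, so inverted thresholding
    # yields pupil candidate regions
    _, binary = cv2.threshold(blurred, binar_thresh, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    for contour in sorted(contours, key=cv2.contourArea, reverse=True):
        if len(contour) < 5:                 # fitEllipse needs >= 5 points
            continue
        (cx, cy), axes, _angle = cv2.fitEllipse(contour)
        minor, major = sorted(axes)
        # Exclude abnormal pupils with the preset constraint conditions
        if not diameter_range[0] <= major <= diameter_range[1]:
            continue
        if minor == 0 or major / minor > max_axis_ratio:
            continue
        return (cx, cy)                      # pupil center position
    return None                              # no valid pupil in this frame
```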
S102: and calculating the initial sight direction of the user by using a sight mapping model according to the offset of the pupil center position relative to the spot center position.
For example, the initial gaze direction may be estimated using the pupil-corneal reflection method, i.e., a two-dimensional gaze estimation method.
In practical application, the display position of the stimulation picture can be identified from the screen image shot by the second camera; the user's sight direction is acquired according to the pupil center position and the display position of the stimulation picture; and the sight mapping model is corrected by using the position of the infrared illuminating lamp, the center position of the reflected light spot and the offset of the pupil center position relative to the reflected light spot.
In the sight mapping model correction, the two-dimensional eye movement features extracted from the eye image are input as the independent variables of a mapping function, and the dependent variable of the function is the determined gaze direction or gaze point.
In order to obtain the line-of-sight mapping function, each user needs to be calibrated online. The line-of-sight mapping function model is expressed by the following equations:
Px = f(Vx, Vy)
Py = g(Vx, Vy)
where (Px, Py) are the coordinates of the gaze landing point, expressed in the coordinates of the stimulation picture acquired by the second camera, and (Vx, Vy) is the pupil-reflected-spot offset vector. With reference to fig. 3, the signal processing unit obtains the reflected-spot vector on the pupil from the first face image shot by the first camera. The point of attention of the eyeball on the display screen can be determined through sight estimation, and the information relation between the eyeball and the display screen is judged through the point of attention. For example, using a 9-point, 6-parameter model, the gaze mapping model may be expressed as
Px = a0 + a1·Vx + a2·Vy + a3·Vx·Vy + a4·Vx² + a5·Vy²
Py = b0 + b1·Vx + b2·Vy + b3·Vx·Vy + b4·Vx² + b5·Vy²
where Vx is the horizontal component of the offset vector of the pupil center relative to the center of the reflected light spot; Vy is the vertical component of that offset vector; and a0–a5, b0–b5 are correction coefficients, which are the targets of the model correction and are determined during the correction process.
Then, 18 equations are obtained from the 9 first face images, polynomial regression is performed on the coefficients a0–a5 and b0–b5, and solving for these coefficients determines the mapping model. The correction process of the mapping model is prior art and is not described again here.
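Under the 9-point, 6-parameter model above, the calibration reduces to two least-squares fits. The following sketch (variable names and array shapes are illustrative assumptions) solves the 18 equations for a0–a5 and b0–b5 with numpy and then maps a new pupil-glint offset vector to a screen point, as in step S102:

```python
import numpy as np

def fit_gaze_mapping(V, P):
    # V: 9x2 array of offset vectors (Vx, Vy) measured while the user fixates
    # the 9 calibration points; P: 9x2 array of those points' screen coordinates
    Vx, Vy = V[:, 0], V[:, 1]
    # One design-matrix row [1, Vx, Vy, Vx*Vy, Vx^2, Vy^2] per calibration point
    A = np.column_stack([np.ones_like(Vx), Vx, Vy, Vx * Vy, Vx**2, Vy**2])
    a, *_ = np.linalg.lstsq(A, P[:, 0], rcond=None)   # a0..a5 for Px
    b, *_ = np.linalg.lstsq(A, P[:, 1], rcond=None)   # b0..b5 for Py
    return a, b

def map_gaze(v, a, b):
    # Map one offset vector (Vx, Vy) to the gaze landing point (Px, Py)
    vx, vy = v
    row = np.array([1.0, vx, vy, vx * vy, vx**2, vy**2])
    return float(row @ a), float(row @ b)
```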
S103: setting a camera positioned in the initial sight direction as an assisting camera, and shooting a second face image of the user with the assisting camera.
In practical application, the two-dimensional gaze estimation method can be used to estimate the user's point of attention on the display screen; a virtual ray is constructed with the user's eyeball as the starting point and the point of attention as a point on the ray, and the initial sight direction is thereby obtained.
Cameras located on this virtual ray are taken as candidate cameras; a candidate camera may nevertheless be unable to shoot the user's eyeball, for example when it is occluded. In order to avoid this situation, for each candidate camera, only a candidate camera capable of capturing the user's eyeball is used as an assisting camera.
In another specific implementation of this step, in order to increase the number of assisting cameras, a first virtual cone may be generated with the user's eyeball as the starting point, the direction corresponding to the initial sight line as the central axis, and a set angle as the cone angle. The first virtual cone is axially symmetric about the central axis and extends along it. Cameras located within the range of the first virtual cone are taken as candidate cameras; and for each candidate camera, a candidate camera capable of shooting the user's eyeball is used as an assisting camera.
Furthermore, the signal processing unit also transmits a shooting assistance request to the candidate cameras, the request containing the first coordinate of the first camera. A candidate camera receiving the request calculates its optimal shooting range according to its own second coordinate and the optimal shooting distance pre-calibrated by the manufacturer, and responds to the request when the first coordinate falls within that optimal shooting range. The candidate cameras that respond to the shooting assistance request are used as cameras for shooting the user's face image. By applying this embodiment of the invention, assisting cameras with a better shooting effect can be screened out, further improving the accuracy of sight tracking.
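A sketch of how the cone test and the shooting assistance request of S103 could be combined is given below. The cone half-angle, the camera record fields (position, optimal_range) and the can_see_eyeball visibility check are all assumptions introduced for illustration, not elements fixed by the invention:

```python
import numpy as np

def select_assisting_cameras(eye_pos, gaze_dir, cameras,
                             first_cam_pos, cone_angle_deg=15.0):
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    assisting = []
    for cam in cameras:
        to_cam = cam["position"] - eye_pos
        dist = np.linalg.norm(to_cam)
        # Candidate test: the camera lies inside the first virtual cone whose
        # central axis is the initial sight direction
        cos_angle = float(to_cam @ gaze_dir) / dist
        if cos_angle < np.cos(np.radians(cone_angle_deg)):
            continue
        # Shooting assistance request: the candidate responds only if the
        # first camera's coordinate falls within its optimal shooting range
        if np.linalg.norm(first_cam_pos - cam["position"]) > cam["optimal_range"]:
            continue
        if cam["can_see_eyeball"]():   # placeholder visibility check
            assisting.append(cam)
    return assisting
```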
S104: stitching the first face image and the second face image into a target eyeball image according to the positions of the first camera and the assisting camera, and acquiring the target sight direction according to the target eyeball image and the screen image.
Specifically, the eyeball area is divided evenly among the cameras according to the distribution positions of the first camera and the assisting cameras. For example, if the first camera is on the left and the assisting cameras are in the middle and on the right, the three cameras each correspond to one third of the eyeball area: the left third is obtained from the first face image shot by the first camera, the middle third from the middle assisting camera, and the right third from the right camera.
In practical applications, the pattern formed by connecting the projection points of the cameras' distribution positions on a vertical plane can be a triangle, quadrangle, pentagon, hexagon and so on, and each camera's corresponding eyeball area then lies in the corresponding direction on the eyeball.
In order to describe the technical process for achieving the above effect briefly, the embodiment of the present invention is illustrated with one first camera and one assisting camera. The first camera is on the left relative to the assisting camera, so the eyeball area corresponding to the first camera is the left half and that of the assisting camera is the right half; that is, half of the eyeball image is obtained from the first face image shot by the first camera and half from the second face image shot by the assisting camera. The cone angle corresponding to the first camera is therefore calculated by taking half of the eyeball image diameter in the first face image as the arc length and the distance from the first camera to the eyeball as the radius; the second virtual cone is then obtained by taking the line from the first camera to the eyeball, i.e. the optical axis direction of the first camera, as the central axis. The second virtual cone of the assisting camera is acquired in a similar way.
The area enclosed by the cut line of the second virtual cone on the shot eyeball image is taken as the cutting area, and a corresponding sub-image is cut out of the shot eyeball image according to the cutting area.
The pixel points of the sub-images are converted into the same coordinate system according to the coordinates of their corresponding cameras; the sub-image corresponding to the first camera and the sub-image corresponding to the assisting camera are scaled to the same size and placed on the left and right respectively, stitched into a complete eyeball image by overlapping feature points, and the complete eyeball image is taken as the target eyeball image.
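The two-camera walkthrough above can be sketched as follows. The small-angle relation angle = arc length / radius follows the arc-length construction in the text; OpenCV's generic feature-based stitcher stands in for the feature-point overlap method, and all parameter names are illustrative:

```python
import cv2

def cut_cone_angle(eyeball_diameter, share, camera_to_eye_dist):
    # 'share' is the camera's assigned fraction of the eyeball (0.5 when one
    # first camera and one assisting camera split the eyeball evenly)
    arc_length = eyeball_diameter * share
    return arc_length / camera_to_eye_dist    # cutting cone angle in radians

def stitch_target_eyeball(left_sub, right_sub):
    # Scale both sub-images to the same height, then stitch them into the
    # complete (target) eyeball image by matching overlapping feature points
    h = min(left_sub.shape[0], right_sub.shape[0])
    def scale(im):
        return cv2.resize(im, (int(im.shape[1] * h / im.shape[0]), h))
    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
    status, target = stitcher.stitch([scale(left_sub), scale(right_sub)])
    if status != cv2.Stitcher_OK:
        raise RuntimeError("feature-point stitching failed")
    return target
```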
Further, during stitching, because the target eyeball image is stitched from the sub-images of multiple cameras, even if the gray value of the pupil in one sub-image is lower than a set value, a complete pupil image can still be fitted from the other sub-images, and the offset vector of the pupil center relative to the reflected light spot is then obtained. If a single camera were used to obtain the pupil center and the pupil's reflectivity to infrared rays happened to be low at that angle, sight tracking could carry a large error under that condition; the embodiment of the invention therefore has a high fault tolerance.
Then, the offset vector of the pupil center relative to the reflected light spot, acquired from the target eyeball image, is input into the corrected sight mapping model to obtain the user's target point of attention, and the line connecting the user's eyeball and the target point of attention is taken as the user's target sight line.
In practical application, the obtaining of the target sight line direction according to the target eyeball image and the screen image may be the prior art.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.
Claims (9)
1. A human eye motion tracking system, the system comprising: a first camera, a second camera, an assisting camera, an infrared illuminating lamp, a signal processing unit and a display screen, wherein
the first camera is used for shooting a first face image of a user;
the second camera is used for shooting a screen image of the display screen, and the display screen is used for displaying a self-defined stimulation picture;
the infrared illuminating lamp is used for emitting infrared rays to the user's eyeballs;
the signal processing unit is used for sending an instruction for displaying the stimulation picture to the display screen and determining the initial sight direction of the user according to the first face image, the stimulation picture and the position of the infrared illuminating lamp; setting a camera positioned in the initial sight direction as the assisting camera, and shooting a second face image of the user with the assisting camera; and stitching a target eyeball image of the user from the first face image and the second face image, and acquiring a target sight direction according to the target eyeball image and the screen image.
2. The human eye motion tracking system of claim 1, wherein one or a combination of the first camera, the second camera, and the infrared illumination lamp is mounted on a wearable device.
3. The human eye motion tracking system of claim 1, wherein the signal processing unit is further configured to obtain eye movement parameters of the user, wherein the eye movement parameters comprise: the number of blinks, the pupil diameter, the eyeball center coordinates and the motion track of the eyeball center.
4. A human eye motion tracking method applied to a signal processing unit in the system according to any one of claims 1 to 3, the method comprising:
sending an instruction for displaying a stimulation picture to a display screen, and acquiring a first face image of a user shot by a first camera; identifying the pupil center position and the area of the reflected light spot from the first face image by using a pre-trained eyeball detection model;
calculating the initial sight direction of the user by using a sight mapping model according to the offset of the pupil center position relative to the spot center position;
setting a camera positioned in the initial sight direction as an assisting camera, and shooting a second face image of the user with the assisting camera;
and stitching the first face image and the second face image into a target eyeball image according to the positions of the first camera and the assisting camera, and acquiring a target sight direction according to the target eyeball image and the screen image.
5. The human eye motion tracking method according to claim 4, wherein the identifying of the pupil center position from the first face image by using the pre-trained eyeball detection model comprises:
identifying an eye image area from the first face image, and cutting the eye image out of the user's face image according to the contour line of the eye image area;
carrying out graying, Gaussian filtering and binarization processing on the eye image in sequence to obtain a preprocessed image;
recognizing a pupil region from the preprocessed image by using the pre-trained eyeball detection model;
screening out a target area from the pupil region according to a preset binarization threshold;
detecting the pupil edge from the target area by using an edge detection algorithm, and fitting the pupil edge to an ellipse to obtain a fitted target area;
excluding abnormal pupils from the target area according to preset pupil constraint conditions;
and taking the center point of the target area, after abnormal pupils are excluded, as the pupil center position.
6. The human eye motion tracking method according to claim 4, wherein the setting of the camera positioned in the initial sight direction as an assisting camera comprises:
generating a virtual ray taking the user's eyeball as the starting point and the direction corresponding to the initial sight line as its direction, and taking cameras positioned on the virtual ray as candidate cameras;
and for each candidate camera, taking a candidate camera capable of shooting the user's eyeball as an assisting camera.
7. The human eye motion tracking method according to claim 4, wherein the setting of the camera positioned in the initial sight direction as an assisting camera comprises:
generating a first virtual cone taking the user's eyeball as the starting point, the direction corresponding to the initial sight line as the central axis and a set angle as the cone angle, and taking cameras positioned within the range of the first virtual cone as candidate cameras;
and for each candidate camera, taking a candidate camera capable of shooting the user's eyeball as an assisting camera.
8. The human eye motion tracking method according to claim 4, wherein the stitching of the first face image and the second face image into the target eyeball image according to the respective positions of the first camera and the assisting camera comprises:
dividing the eyeball area evenly among the cameras according to the distribution positions of the first camera and the assisting cameras;
calculating the size of a cutting cone angle according to the corresponding eyeball area, the distance of the first camera from the eyeball and the distance of the assisting camera from the eyeball;
for the first camera and each assisting camera, constructing a second virtual cone with the camera's optical axis direction as the central axis and the cutting cone angle as the cone angle, taking the area enclosed by the cut line of the second virtual cone on the shot eyeball image as the cutting area, and cutting a corresponding sub-image out of the shot eyeball image according to the cutting area;
and converting the pixel points of the sub-images into the same coordinate system according to the coordinates of the corresponding cameras, stitching the sub-images into a complete eyeball image according to the eyeball areas corresponding to the sub-images, and taking the complete eyeball image as the target eyeball image.
9. The human eye motion tracking method according to claim 4, wherein the acquisition process of the sight mapping model comprises:
identifying the display position of the stimulation picture from the screen image shot by the second camera;
acquiring the sight direction of the user according to the pupil center position and the display position of the stimulation picture; and correcting the sight mapping model by using the position of the infrared illuminating lamp, the center position of the reflected light spot and the offset of the pupil center position relative to the reflected light spot.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210672708.6A CN114973392A (en) | 2022-06-15 | 2022-06-15 | Human eye motion tracking system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114973392A (en) | 2022-08-30
Family
ID=82963720
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210672708.6A Pending CN114973392A (en) | 2022-06-15 | 2022-06-15 | Human eye motion tracking system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114973392A (en) |
- 2022-06-15 CN CN202210672708.6A patent/CN114973392A/en active Pending
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116095407A (en) * | 2022-12-22 | 2023-05-09 | 深圳创维-Rgb电子有限公司 | User interface control method and related device |
CN117707330A (en) * | 2023-05-19 | 2024-03-15 | 荣耀终端有限公司 | Electronic equipment and eye movement tracking method |
CN116646079A (en) * | 2023-07-26 | 2023-08-25 | 武汉大学人民医院(湖北省人民医院) | Method and device for auxiliary diagnosis of ophthalmic diseases |
CN116646079B (en) * | 2023-07-26 | 2023-10-10 | 武汉大学人民医院(湖北省人民医院) | A method and device for auxiliary diagnosis of ophthalmic diseases |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |