
CN117617889A - A strabismus degree detection system based on covering test - Google Patents


Info

Publication number
CN117617889A
Authority
CN
China
Prior art keywords
pupil
point
calibration
center
strabismus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311448823.6A
Other languages
Chinese (zh)
Inventor
王荃
梁厚成
宋文辉
郭勇
吴兵兵
党若琛
胡炳樑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Purui Eye Hospital Co ltd
XiAn Institute of Optics and Precision Mechanics of CAS
Original Assignee
XiAn Institute of Optics and Precision Mechanics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by XiAn Institute of Optics and Precision Mechanics of CAS filed Critical XiAn Institute of Optics and Precision Mechanics of CAS
Priority to CN202311448823.6A
Publication of CN117617889A


Classifications

    • A61B 3/085: Apparatus for testing the eyes; subjective types for testing binocular or stereoscopic vision, e.g. strabismus; for testing strabismus
    • A61B 3/0008: Apparatus for testing the eyes provided with illuminating means
    • A61B 3/005: Operational features characterised by display arrangements; constructional features of the display
    • A61B 3/0075: Apparatus for testing the eyes provided with adjusting devices, e.g. operated by control lever
    • A61B 3/0083: Apparatus for testing the eyes provided with means for patient positioning
    • A61B 3/14: Objective types; arrangements specially adapted for eye photography
    • G08B 21/24: Status alarms; reminder alarms, e.g. anti-loss alarms
    • G08B 3/10: Audible signalling systems using electric transmission; using electromagnetic transmission

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Biophysics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Emergency Management (AREA)
  • Electromagnetism (AREA)
  • Business, Economics & Management (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention discloses a strabismus degree detection system based on the cover test, comprising a headrest, a calibration point, a display, a camera, a light source, a voice announcer and a control device. The headrest faces the display, and the planes of the headrest and the display are parallel to each other. The calibration point is mounted directly above the headrest, in the middle of a crossbeam carried by two symmetrically arranged uprights. The camera and the light source are mounted on the display in a common plane parallel to the plane of the display, and the field of view of the camera covers the headrest. The display, the camera and the voice announcer are all connected to the controller. The invention is not affected by the kappa angle, can measure the degree of strabismus accurately, and overcomes the drawbacks of conventional strabismus measurement, which is subjective and requires a professional physician to participate in the measurement.

Description

A strabismus degree detection system based on the cover test

Technical Field

The invention belongs to the technical field of ophthalmic disease detection, and specifically relates to a strabismus degree detection system based on the cover test.

Background Art

When a person with normal vision fixates on a target, both eyes fixate on it simultaneously. In a patient with strabismus, however, usually only one eye can be aligned with the target while the other eye deviates. This inability of the two eyes to fixate on the same target simultaneously is medically termed strabismus. Strabismus seriously affects the patient's vision and also hinders the patient's life, work and mental well-being. If strabismus is not detected and treated promptly in childhood, it can lead to loss of directional discrimination ability or amblyopia in adulthood.

At present, the common clinical examinations for strabismus include the corneal light reflex test, the synoptophore and the cover test. In the corneal light reflex test the examiner sits opposite the patient, holds a penlight at the patient's eye level and asks the patient to fixate on its light source. The examiner then observes, from directly in front, the position of the reflection of the light source on the cornea to judge whether the eye position deviates. The corneal light reflex test cannot measure the degree of strabismus accurately and only gives a rough estimate: if the reflection lies at the pupil margin the deviation is about 10° to 15°, if it lies midway between the corneal limbus and the pupil margin it is about 25° to 30°, and if it lies at the corneal limbus it is about 45°. The method also cannot eliminate the influence of the kappa angle. The synoptophore is a large instrument combining optics, mechanics and electronics. When measuring the angle of strabismus with a synoptophore, the tube is first fixed at 0° and the subject views a slide through it; the doctor alternately switches the slide illumination on and off while observing the movement of the eyes, and adjusts the tube on one side until neither eye moves when fixating on the slide alone, the angle indicated by the tube arm then being the degree of strabismus. However, the synoptophore is expensive, and measuring the degree of strabismus with it requires a professional physician. The cover test can distinguish manifest from latent strabismus and is simple, easy to perform and highly repeatable. It comprises the unilateral cover test, the cover-uncover test and the alternating cover test: the unilateral cover test detects manifest strabismus, the cover-uncover test detects latent strabismus, and the alternating cover test detects both types. However, the cover test depends on the subjective judgement of the physician and cannot quantify the degree of strabismus.

It can be seen that there is an urgent need for an intelligent detection technique that can measure the degree of strabismus objectively and accurately.

Summary of the Invention

The purpose of the present invention is to provide a strabismus degree detection system and detection method based on the cover test, so as to solve the problem that conventional strabismus measurement techniques are subjective and require a professional physician to participate in the measurement.

To achieve the above purpose, the present invention adopts the following technical solution:

A strabismus degree detection system based on the cover test comprises a headrest, a calibration point, a display, a camera, a light source, a voice announcer and a control device. The headrest faces the display, and the planes of the headrest and the display are parallel to each other. The calibration point is mounted directly above the headrest on a bracket consisting of two symmetrically arranged uprights and a crossbeam on the uprights, the calibration point being fixed in the middle of the crossbeam. The camera and the light source are mounted on the display in a common plane parallel to the plane of the display, and the field of view of the camera covers the headrest. The display, the camera and the voice announcer are all connected to the controller.

Further, the calibration point is a black circular dot.

Further, the controller comprises a system calibration module, a pupil and reflection point positioning module, a calibration module and a cover detection module, wherein:

the system calibration module implements the following procedure: step 11, establish a world coordinate system with the lower left corner of the display as the origin, the height of the display being SH, its width SW and its resolution RW × RH; step 12, control the camera to acquire images; step 13, from the acquired image, obtain the pixel-to-distance conversion ratio "ratio" of the headrest plane, and manually measure the coordinates (Xb, Yb, Zb) of the calibration point centre in the world coordinate system;

the pupil and reflection point positioning module acquires images of the eyes and, from the acquired images, obtains the pixel coordinates of the pupil centres of both eyes, the coordinates of the pupil centres in the world coordinate system, and the pixel coordinates of the centres of the reflection points within the pupils;

the calibration module calls the pupil and reflection point positioning module to calibrate the person being measured and calculates the Hirschberg ratio;

the cover detection module calls the pupil and reflection point positioning module and controls the voice announcer to instruct the person being measured so that the degree of strabismus is measured with the cover test.

Further, step 13 comprises the following operations:

convert the acquired image to a grayscale image and denoise it using Gaussian filtering or another method; apply adaptive threshold segmentation or another thresholding method to the denoised image; use a morphological opening operation to remove isolated points, burrs and the like from the thresholded image; detect the edges of the morphologically processed image using Canny edge detection or another method; detect the calibration point in the resulting image using the Hough transform or another circle detector, obtaining the pixel coordinates (xb, yb) of the calibration point and its pixel radius rb in the image; the pixel-to-distance conversion ratio of the headrest plane is computed as ratio = R/rb, where R is the true radius of the calibration point; the coordinates (Xb, Yb, Zb) of the calibration point centre in the world coordinate system are obtained with a tape measure or another distance-measuring tool.

Further, the pupil and reflection point positioning module operates as follows:

step 21, control the camera to acquire eye images continuously at 60 fps, directly acquiring images that contain both eyes, and then flip the images horizontally;

step 22, from the image obtained in step 21, first locate the positions of both eyes with an eye detection method, then apply pupil localization within each of the two regions to obtain the pixel coordinates of the two pupil centres, and compute the coordinates of the pupil centres in the world coordinate system established by the system calibration module;

step 23, take the left pupil centre (xL, yL) as the centre of a circular region whose radius is about 1.5 times the left pupil radius RL, and within this region find the pixel coordinates (xg, yg) of the centre of the reflection point in the left pupil; the pixel coordinates of the centre of the reflection point in the right pupil are obtained in the same way.

Further, step 22 proceeds as follows:

(I) the eye detection method uses any one of the following, based on multiple acquired eye images: (a) manually determine the positions of the two eyes; (b) collect and label multiple positive and negative samples and train a Haar-cascade classifier, the trained classifier giving the positions of the two eyes in an image; (c) train an eye detector by deep learning;

(II) pupil localization uses a segmentation network model from deep learning: the input image yields a segmentation probability map, threshold segmentation produces a binary image, connected-component analysis is performed on the binary image, the centroid of the largest blob is taken as the pixel coordinates (xL, yL) of the left pupil centre, and the long side of the bounding rectangle of the largest blob is taken as the left pupil diameter 2·RL, RL being the pupil radius;

(III) from the world coordinate system established in step 11 and the coordinates (Xb, Yb, Zb) of the calibration point measured in step 13, compute the coordinates (XL, YL, ZL) of the left pupil centre in the world coordinate system, with ZL = Zb, where rb is the pixel radius of the calibration point 3 in the image, R is the true radius of the calibration point (cm), and (xb, yb) are the pixel coordinates of the calibration point centre;

(IV) similarly, perform steps (II) and (III) for the right eye to obtain the pixel coordinates (xR, yR) of the right pupil centre, the pupil radius RR, and the coordinates (XR, YR, ZR) of the right pupil centre in the world coordinate system.

Further, the calibration module operates as follows:

Step 31, initialize nine fixation points with known coordinates: P1(X1, Y1, 0), P2(X2, Y2, 0), …, P9(X9, Y9, 0). For a fixation point P0 with pixel coordinates (x0, y0) and world coordinates (X0, Y0, 0), X0 = x0·SW/RW and Y0 = (RH − y0)·SW/RW; the offset gaze angles from the fixation point P0 to the fixation points P1, P2, …, P9 are computed respectively, SW being the display width (cm), SH the display height (cm) and RW × RH the display resolution.

Step 32, display the fixation point P0(X0, Y0, 0) on the display, control the voice announcer to prompt the person to be measured to place the head on the headrest, and remind the subject to fixate on the point P0; then display on the display, one after another, at least two fixation points with known positions as calibration points, and control the voice announcer to remind the subject to fixate on these calibration points in turn, the system calling the pupil and reflection point positioning module during this process to locate the pupil and the reflection point within it; each calibration point is displayed for 5 s, and the current calibration point is considered stable when the subject has fixated on it for at least 2 s; the mean and standard deviation of the pupil centre position over the 2 s are computed, and if all pupil centre coordinates within the 2 s deviate from the mean by less than 1.5 standard deviations the fixation is considered stable; when the current fixation point is stable, record the pixel coordinates of the pupil centre and of the reflection point in all image frames of the final 2 s of each fixation point's display.

Step 33, from the pixel coordinates of the pupil centre and of the reflection point recorded in the final 2 s of each fixation point's display, discard the data corresponding to closed eyes.

From the remaining pupil-centre pixel coordinates compute the mean and variance Dxp of the abscissa and the mean and variance Dyp of the ordinate; if the abscissa of any pupil centre exceeds the threshold derived from the abscissa mean and Dxp, or its ordinate exceeds the threshold derived from the ordinate mean and Dyp, the calibration fails, otherwise it succeeds. Similarly, compute the mean and variance Dxg of the abscissa and the mean and variance Dyg of the ordinate of the reflection-point centre pixel coordinates; if the abscissa of any reflection-point centre exceeds the threshold derived from the abscissa mean and Dxg, or its ordinate exceeds the threshold derived from the ordinate mean and Dyg, the calibration fails, otherwise it succeeds.

After a failed calibration, return to step 31, i.e. re-enter the calibration module; if all nine fixation points are calibrated successfully, proceed to step 34.

Step 34, for the fixation points P0 to P9, compute the horizontal pixel distances d0 to d9 from the pupil centre to the centre of the Purkinje spot, and compute the offsets t1 to t9 of the horizontal pixel distances at fixation points P1 to P9 relative to point P0, where ti = di − d0 (i = 1, 2, …, 9).

Step 35, having obtained (t1, A1), (t2, A2), …, (t9, A9), fit a straight line to them by least squares, A = H0·t + b0, obtaining the Hirschberg ratio H0 and the intercept b0 of the fitted line.

Further, in the cover detection module, the cover test method is selected from the unilateral cover test, the cover-uncover test and the alternating cover test.

Further, the cover detection module uses the cover-uncover method, implemented as follows:

Step 41, control the voice announcer to prompt the person being measured to begin the cover test. After the test starts, the system calls the pupil and reflection point positioning module, continuously acquires images and obtains the positions of both eyes and the pixel coordinates of the pupil and reflection point centres; if both eyes cannot be detected, re-enter step 41 and control the voice announcer to prompt the subject to adjust position until both eyes are detected; if both eyes and the centres of the pupils and reflection points are detected, proceed to step 42.

Step 42, monitor the state of both eyes: at the start of monitoring the eyes are in the initial state s1, then they enter the ready state s2, and the voice announcer is controlled to prompt the subject to cover the right eye; the system calls the pupil and reflection point positioning module to detect the state of the right eye, and if the pupil is undetected for fewer than fifty consecutive frames the right eye is in the blink state s0; when the right eye is in the blink state and its pupil is detected again, the right eye returns to the ready state s2; when the pupil remains undetected for fifty consecutive frames or more, the right eye enters the covered state s3, and the procedure proceeds to step 43.

Step 43, control the voice announcer to prompt the subject to remove the occluder from the right eye and cover the left eye; the system records the moment the occluder leaves the right eye, the first frame in which both the reflection point and the pupil can be detected being the key frame, the right eye being then in the transient state s4, and the system computes and stores the difference p0 between the pupil pixel coordinates and the reflection point pixel coordinates in that frame; after the right eye has been in the transient state s4 for more than 2 s, compute the mean and variance of the pupil pixel coordinate components over 60 frames, and if the variances of the coordinate components are all smaller than the set variance threshold Td the eye enters the steady state s5; compute the difference p1 between the pupil centre pixel coordinates and the reflection point centre pixel coordinates over the 60 frames at entry into the steady state; control the voice announcer to inform the subject that the cover test is finished, and proceed to step 44.

Step 44, compute the degree of strabismus: the offset of the right eye is t = p1 − p0, and the degree of strabismus of the right eye is A = H0·t + b0; similarly, by covering the left eye first and then the right eye, the offset and degree of strabismus of the left eye are obtained.

Compared with the prior art, the present invention has the following technical effects:

1. The present invention is not affected by the kappa angle and can measure the degree of strabismus accurately. The Hirschberg ratio H is obtained by calibrating the subject, the offsets T1 and T2 from the pupil centre to the reflection point centre during the cover test are then measured by image algorithms, and the degree of strabismus is computed as A0 = H(T1 − T2). Measuring the offsets in this way eliminates the error due to the kappa angle. Moreover, in the calibration stage the present invention computes the Hirschberg ratio H for each subject rather than using an average value; the mean value of H is 12.5 and the variation between subjects is about ±20% of the mean, so measuring the Hirschberg ratio for each subject makes the final result more accurate.

2. The apparatus used by the method consists of a camera, an infrared light source, a computer host and a computer display, and no specialized optical instrument is required.

3. The present invention defines the eye states, including the initial state, ready state, blink state, covered state, transient state and steady state, and automatically saves the image information in the transient and steady states for computing the degree of strabismus. The invention therefore does not require the participation of a professional physician and can measure the degree of strabismus intelligently. In addition, the invention interacts with the user through a voice announcer, prompting the subject through the cover test by voice, automatically detecting the occlusion state of the subject's eyes, and giving the strabismus detection result after the cover test.

4. The present invention uses the unilateral cover test to determine whether the subject has manifest strabismus and, if so, gives the degree of strabismus; otherwise the alternating cover test is performed to determine whether the subject has latent strabismus and, if so, gives the degree of strabismus; otherwise the subject is normal. The invention can therefore determine whether strabismus is manifest or latent and also measure the degree of strabismus accurately.

Brief Description of the Drawings

Figure 1 is a schematic structural diagram of the strabismus detection system of the present invention.

Figure 2 is a diagram of the eye states during operation of the strabismus detection system of the present invention.

Figure 3 is a schematic diagram of the eye-detection rectangles.

Figure 4 is a schematic diagram of pupil and reflection point detection.

The reference numerals in the figures are as follows:

1. headrest; 2. eye plane; 3. calibration point; 4. eye; 5. world coordinate system; 6. fixation point; 7. display; 8. camera; 9. infrared light source.

Detailed Description of the Embodiments

The basic principle of the present invention is that the near-infrared light emitted by a near-infrared light source forms a high-brightness reflection point, the Purkinje spot, on the cornea of the user's eye. When the head is held still, the horizontal offset (mm) of the Purkinje spot from the pupil centre is linearly related to the gaze offset angle (°), and the slope of this relation is called the Hirschberg ratio. The camera is first calibrated, the Hirschberg ratio H is then calibrated, and the subject undergoes a cover-uncover experiment. During the experiment the camera acquires eye images, an algorithm identifies the image frame frame1 at the instant the strabismic eye is uncovered and the image frame frame2 after the uncovered eye has stabilized, image processing yields the horizontal offsets T1 and T2 of the two frames, and the degree of strabismus is computed from H as A0 = H(T1 − T2).

Principle of the invention: the offset of the displacement from the Purkinje spot to the pupil centre is proportional to the offset of the gaze angle. Intelligent detection of the degree of strabismus is accomplished through image processing algorithms and judgement of the eye state.

The strabismus degree detection system based on the cover test of the present invention comprises a headrest 1, a calibration point 3, a display 7, a camera 8, a light source 9, a voice announcer and a control device. The headrest 1 faces the display 7, the planes of the headrest 1 and the display 7 are parallel to each other, and the distance from the display 7 to the plane of the headrest 1 is 30 cm to 60 cm. The calibration point 3 is mounted directly above the headrest 1 on a bracket consisting of two symmetrically arranged uprights and a crossbeam on the uprights, the calibration point 3 being fixed in the middle of the crossbeam; the distance between the two uprights is 18 to 25 cm, allowing an adult facing the display 7 to place the head between them. The camera 8 and the light source 9 are mounted on the display in a common plane parallel to the plane of the display 7, and the field of view of the camera 8 covers the headrest 1, ensuring that images of the eyes of the person being measured can be captured. The distance between the camera 8 and the infrared light source 9 is no more than 15 cm. The display 7, the camera 8 and the voice announcer are all connected to the controller.

Preferably, the calibration point 3 is a black circular dot.

Specifically, the controller comprises a system calibration module, a pupil and reflection point positioning module, a calibration module and a cover detection module, wherein:

The system calibration module implements the following procedure:

Step 11: establish a world coordinate system with the lower left corner of the display 7 as the origin. The height of the display is SH, its width is SW, and its resolution is RW × RH. For example, a 24-inch display has height SH = 29.9 cm, width SW = 53.15 cm and a resolution of 1920 × 1080.

Step 12: control the camera 8 to acquire images (the image acquisition format is set to YUY2 and the acquisition rate to 60 fps).

Step 13: from the acquired image, obtain the pixel-to-distance conversion ratio "ratio" (mm per pixel) of the plane of the headrest 1, and manually measure the coordinates (Xb, Yb, Zb) of the calibration point centre in the world coordinate system.

Specifically, step 13 comprises the following operations: convert the acquired image to a grayscale image and denoise it using Gaussian filtering or another method; apply adaptive threshold segmentation or another thresholding method to the denoised image; use a morphological opening operation to remove isolated points, burrs and the like from the thresholded image; detect the edges of the morphologically processed image using Canny edge detection or another method; detect the calibration point 3 in the resulting image using the Hough transform or another circle detector, obtaining the pixel coordinates (xb, yb) of the calibration point 3 and its pixel radius rb in the image; compute the pixel-to-distance conversion ratio of the plane of the headrest 1 as ratio = R/rb, where R is the true radius of the calibration point. The coordinates (Xb, Yb, Zb) of the calibration point centre in the world coordinate system can be obtained with a tape measure or another distance-measuring tool.

瞳孔与反光点定位模块,用于实现如下流程:Pupil and reflective point positioning module is used to implement the following processes:

步骤21,控制摄像头8持续采集人眼图像,采集速率为60fps,直接采集到包含人双眼的图像,然后对图像进行水平翻转。水平翻转是为了使得图像中的左眼对应患者的左眼,图像中的右眼对应患者的右眼,方便观察。Step 21, control the camera 8 to continuously collect human eye images at a collection rate of 60fps, directly collect images containing human eyes, and then flip the images horizontally. The purpose of horizontal flipping is to make the left eye in the image correspond to the patient's left eye, and the right eye in the image correspond to the patient's right eye for easy observation.

步骤22,根据步骤21得到的图像,首先通过人眼检测方法定位双眼位置(通过两个矩形框标出,如图3、图4所示),然后分别在两个区域内使用瞳孔定位技术得到两个瞳孔中心的像素坐标。并且计算瞳孔中心在系统标定模块中所建立的世界坐标系下的坐标。Step 22, based on the image obtained in step 21, first locate the position of the eyes through the human eye detection method (marked by two rectangular boxes, as shown in Figure 3 and Figure 4), and then use the pupil positioning technology in the two areas to obtain Pixel coordinates of the centers of the two pupils. And calculate the coordinates of the pupil center in the world coordinate system established in the system calibration module.

(一)人眼检测方法可采用以下任一种方式:根据采集到的多个人眼图像,(1)人工确定双眼位置。由于在系统固定后,不同被试人员的头部都被固定在头托上,在采集到的图像中,两只人眼的位置相对与整幅图像的位置大体相同,因此可以人工选定两个长宽和位置都固定的矩形框框出两只眼睛的区域。(2)采集并标记多个正样例(包含人眼的图片)和负样例(不包含人眼的图片),训练Haar-cascade分类器,通过训练好的分类器能够得到图像的双眼位置。(3)通过深度学习的方法训练进行人眼检测。例如使用yolo模型,采集并通过lableimg软件标记若干正样本和负样本,然后生成yolo格式的txt样本文件。下载预训练模型用来加速模型训练,训练好后保存模型参数,通过训练好的分类器能够得到图像的双眼位置。(1) The human eye detection method can adopt any of the following methods: (1) Manually determine the position of the eyes based on multiple collected human eye images. Since after the system is fixed, the heads of different subjects are fixed on the headrest. In the collected images, the positions of the two human eyes are roughly the same relative to the positions of the entire image. Therefore, the two human eyes can be manually selected. A rectangular frame with fixed length, width and position defines the area of the two eyes. (2) Collect and label multiple positive samples (including pictures of human eyes) and negative samples (pictures that do not include human eyes), train the Haar-cascade classifier, and obtain the binocular positions of the image through the trained classifier . (3) Human eye detection is performed through deep learning method training. For example, using the yolo model, collect and label several positive and negative samples through labelimg software, and then generate a txt sample file in yolo format. Download the pre-trained model to speed up model training. After training, save the model parameters, and use the trained classifier to obtain the binocular positions of the image.
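As an illustration of option (b), a Haar-cascade eye detector might be used as sketched below; the stock OpenCV cascade file and the detection parameters are assumptions made here for the sketch, whereas the invention trains its own classifier from labelled samples.

```python
import cv2

# Option (b): Haar-cascade eye detection. The stock OpenCV cascade is used
# here for illustration; the patent trains its own cascade from labelled
# positive/negative samples.
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_eye_boxes(gray_frame):
    """Return up to two (x, y, w, h) rectangles, left-most first."""
    boxes = eye_cascade.detectMultiScale(gray_frame, scaleFactor=1.1,
                                         minNeighbors=5, minSize=(40, 40))
    return sorted(boxes, key=lambda b: b[0])[:2]   # keep the two eye regions
```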

(II) Pupil localization uses a segmentation network model from deep learning, for example U-Net. The acquired images containing pupils are labelled, with the pupil region as foreground and the other regions as background. The network is trained with binary cross-entropy loss and an ellipse-fitting error loss as the loss terms, and the model parameters are saved after training. In use, the model parameters are loaded, the input image yields a segmentation probability map, and threshold segmentation produces a binary image. Connected-component analysis is performed on the binary image; the centroid of the largest blob is taken as the pixel coordinates (xL, yL) of the left pupil centre, and the long side of the bounding rectangle of the largest blob is taken as the left pupil diameter 2·RL (RL being the pupil radius).
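The post-processing described in (II), namely thresholding the probability map, keeping the largest connected blob and reading off its centroid and bounding box, can be sketched as follows; the segmentation network itself is not shown, and the 0.5 threshold is an assumed value.

```python
import cv2
import numpy as np

def pupil_from_probability_map(prob_map, threshold=0.5):
    """Threshold the segmentation probability map, keep the largest
    connected blob, take its centroid as the pupil centre and half of the
    long side of its bounding box as the pupil radius."""
    binary = (prob_map > threshold).astype(np.uint8)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary,
                                                                   connectivity=8)
    if n < 2:                                        # label 0 is the background
        return None
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    x_l, y_l = centroids[largest]                    # pupil centre (pixels)
    w = stats[largest, cv2.CC_STAT_WIDTH]
    h = stats[largest, cv2.CC_STAT_HEIGHT]
    radius = max(w, h) / 2.0                         # R_L from 2*R_L = long side
    return (x_l, y_l), radius
```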

(III) From the world coordinate system established in step 11 and the coordinates (Xb, Yb, Zb) of the calibration point measured in step 13, compute the coordinates (XL, YL, ZL) of the left pupil centre in the world coordinate system, with ZL = Zb, where rb is the pixel radius of the calibration point 3 in the image, R is the true radius of the calibration point (cm), and (xb, yb) are the pixel coordinates of the calibration point.
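The explicit expressions for XL and YL were carried by a formula image that is not reproduced in this text. The sketch below is therefore an assumed reconstruction: it offsets the pupil centre from the measured calibration-point position using the pixel-to-distance ratio R/rb and a flipped image y axis, which is consistent with the quantities the paragraph lists but is not guaranteed to be the exact formula of the invention.

```python
def pupil_world_coordinates(pupil_px, calib_px, calib_world, r_b, R_cm):
    """Assumed reconstruction of step (III): map the pupil centre from image
    pixels to world coordinates using the calibration point. The exact
    formula is not preserved in the text; this version offsets from the
    calibration point with ratio = R / r_b and flips the image y axis
    (image y grows downward, world Y grows upward)."""
    x_L, y_L = pupil_px                      # pupil centre in pixels
    x_b, y_b = calib_px                      # calibration-point centre in pixels
    X_b, Y_b, Z_b = calib_world              # calibration-point centre (cm)
    ratio = R_cm / r_b                       # cm per pixel in the headrest plane
    X_L = X_b + (x_L - x_b) * ratio
    Y_L = Y_b - (y_L - y_b) * ratio
    Z_L = Z_b                                # given explicitly in the text
    return X_L, Y_L, Z_L
```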

(IV) Similarly, perform steps (II) and (III) for the right eye to obtain the pixel coordinates (xR, yR) of the right pupil centre, the pupil radius RR, and the coordinates (XR, YR, ZR) of the right pupil centre in the world coordinate system.

Step 23: take the left pupil centre (xL, yL) as the centre of a circular region whose radius is about 1.5 times the left pupil radius RL, and search this region for the centre of the reflection point in the left pupil. The region is first thresholded; since the reflection point is very bright, the segmentation threshold is set to about 220 (image pixel values range from 0 to 255). Connected-component analysis is performed on the resulting binary image, and the centroid of the largest blob is taken as the pixel coordinates (xg, yg) of the centre of the reflection point in the left pupil. The pixel coordinates of the centre of the reflection point in the right pupil are obtained in the same way.
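Step 23 can be sketched as follows; the circular mask radius of 1.5·RL and the brightness threshold of about 220 follow the text, while the connectivity setting and the specific OpenCV calls are implementation assumptions.

```python
import cv2
import numpy as np

def find_glint(gray_eye, pupil_center, pupil_radius, thresh=220):
    """Search a circular region of radius ~1.5*R_L around the pupil centre
    for the corneal reflection and return the centroid of the largest
    bright blob."""
    mask = np.zeros_like(gray_eye)
    cv2.circle(mask, (int(pupil_center[0]), int(pupil_center[1])),
               int(1.5 * pupil_radius), 255, -1)
    roi = cv2.bitwise_and(gray_eye, gray_eye, mask=mask)
    _, bright = cv2.threshold(roi, thresh, 255, cv2.THRESH_BINARY)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(bright,
                                                                   connectivity=8)
    if n < 2:
        return None                          # no reflection point found
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return tuple(centroids[largest])         # (x_g, y_g) in pixels
```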

The calibration module calibrates the person being measured and calculates the Hirschberg ratio.

Step 31: initialize nine fixation points with known coordinates: P1(X1, Y1, 0), P2(X2, Y2, 0), …, P9(X9, Y9, 0). For a fixation point P0 with pixel coordinates (x0, y0) and world coordinates (X0, Y0, 0), X0 = x0·SW/RW and Y0 = (RH − y0)·SW/RW. The offset gaze angles from the fixation point P0 to the fixation points P1, P2, …, P9 are computed respectively; here SW and SH are the display width and height in cm and RW × RH is the display resolution.
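A sketch of the step-31 conversions is given below. The pixel-to-world mapping follows the formulas in the text (which implicitly assume square pixels, since SW/RW equals SH/RH for the 24-inch example); the expression for the offset gaze angle Ai is not preserved in the text, so the version shown, which takes the horizontal angle subtended at the pupil between P0 and Pi, is an assumption.

```python
import math

def pixel_to_world(px, py, SW, SH, RW, RH):
    """Step 31 mapping as written in the text: X = px*SW/RW,
    Y = (RH - py)*SW/RW, which assumes square pixels (SW/RW == SH/RH)."""
    X = px * SW / RW
    Y = (RH - py) * SW / RW
    return X, Y, 0.0

def offset_gaze_angle(P0, Pi, eye_world):
    """Assumed form of the offset angle A_i: the horizontal angle subtended
    at the pupil between the reference point P0 and the fixation point Pi
    (the original expression is not preserved in the text)."""
    X0, _, _ = P0
    Xi, _, _ = Pi
    XL, _, ZL = eye_world                      # pupil centre, world coords (cm)
    a0 = math.atan2(X0 - XL, ZL)               # horizontal direction to P0
    ai = math.atan2(Xi - XL, ZL)               # horizontal direction to Pi
    return math.degrees(ai - a0)               # offset angle A_i in degrees
```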

Step 32: display the fixation point P0(X0, Y0, 0) on the display, control the voice announcer to prompt the person being measured to place the head on the headrest 1, and remind the subject to fixate on the point P0. Then display on the screen, one after another, at least two fixation points with known positions as calibration points, and control the voice announcer to remind the subject to fixate on these calibration points in turn; during this process the system calls the pupil and reflection point positioning module to locate the pupil and the reflection point within it (i.e. step 2 is repeated). Each calibration point is displayed for 5 s, and the current calibration point is considered stable when the subject has fixated on it for at least 2 s. It is assumed by default that the subject is observing the calibration point; the mean and standard deviation of the pupil centre position over the 2 s are computed, and if all pupil centre coordinates within the 2 s deviate from the mean by less than 1.5 standard deviations the fixation is considered stable. When the current fixation point is stable, the pixel coordinates of the pupil centre and of the reflection point in all image frames of the final 2 s of that fixation point's display are recorded. If no pupil centre is detected in an eye image, the eye is considered closed and the pupil pixel coordinates are recorded as (−1, −1) for easy removal; likewise, if no reflection point is detected, its coordinates are recorded as (−1, −1) for easy removal.
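The fixation-stability test of step 32 can be sketched as below, reading the 1.5× criterion as a bound on the deviation of each sample from the 2 s mean and dropping the (−1, −1) frames recorded for closed eyes; this reading is an assumption where the original wording is ambiguous.

```python
import numpy as np

def gaze_is_stable(pupil_xy_2s):
    """Over the last 2 s of samples, treat the fixation as stable when every
    pupil-centre coordinate lies within 1.5 standard deviations of the mean;
    frames recorded as (-1, -1) (eye closed or no detection) are dropped."""
    pts = np.asarray(pupil_xy_2s, dtype=float)
    pts = pts[(pts[:, 0] >= 0) & (pts[:, 1] >= 0)]     # drop (-1, -1) frames
    if len(pts) == 0:
        return False
    mean = pts.mean(axis=0)
    std = pts.std(axis=0)
    return bool(np.all(np.abs(pts - mean) <= 1.5 * std))
```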

Step 33: from the pixel coordinates of the pupil centre and of the reflection point recorded in the final 2 s of each fixation point's display, discard the data corresponding to closed eyes (i.e. coordinates equal to (−1, −1)).

From the remaining pupil-centre pixel coordinates compute the mean and variance Dxp of the abscissa and the mean and variance Dyp of the ordinate. If the abscissa of any pupil centre exceeds the threshold derived from the abscissa mean and Dxp, or its ordinate exceeds the threshold derived from the ordinate mean and Dyp, the calibration fails; otherwise it succeeds. Similarly, compute the mean and variance Dxg of the abscissa and the mean and variance Dyg of the ordinate of the reflection-point centre pixel coordinates. If the abscissa of any reflection-point centre exceeds the threshold derived from the abscissa mean and Dxg, or its ordinate exceeds the threshold derived from the ordinate mean and Dyg, the calibration fails; otherwise it succeeds.

After a failed calibration, return to step 31, i.e. re-enter the calibration module. If all nine fixation points are calibrated successfully, proceed to step 34.

Step 34: for the fixation points P0 to P9, compute the horizontal pixel distances d0 to d9 from the pupil centre to the centre of the Purkinje spot, and compute the offsets t1 to t9 of the horizontal pixel distances at fixation points P1 to P9 relative to point P0, where ti = di − d0 (i = 1, 2, …, 9).

Step 35: having obtained (t1, A1), (t2, A2), …, (t9, A9), fit a straight line to them by least squares, A = H0·t + b0, obtaining the Hirschberg ratio H0 and the intercept b0 of the fitted line.
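Steps 34 and 35 reduce to a difference of horizontal pixel distances and a one-dimensional least-squares fit, as sketched below; the explicit expression for di is not preserved in the text, so taking it as the signed horizontal distance between pupil centre and Purkinje-spot centre is an assumption.

```python
import numpy as np

def horizontal_offset(pupil_x, glint_x):
    """Assumed form of d_i: signed horizontal pixel distance from the pupil
    centre to the Purkinje-spot centre (the original formula was carried by
    an image and is not preserved here)."""
    return pupil_x - glint_x

def fit_hirschberg(offsets_t, angles_A):
    """Step 35: least-squares straight-line fit A = H0 * t + b0 over the
    (t_i, A_i) pairs; returns the Hirschberg ratio H0 and the intercept b0."""
    t = np.asarray(offsets_t, dtype=float)
    A = np.asarray(angles_A, dtype=float)
    H0, b0 = np.polyfit(t, A, 1)              # slope and intercept
    return H0, b0
```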

The cover detection module implements the following procedure:

The voice announcer is controlled to prompt the person being measured to measure the degree of strabismus with a cover test; the cover test may be the unilateral cover test, the cover-uncover test or the alternating cover test. The cover-uncover method is taken as an example below.

Step 41: control the voice announcer to prompt the person being measured to begin the cover test. After the test starts, the system calls the pupil and reflection point positioning module, continuously acquires images and obtains the pixel coordinates of the pupil and reflection point centres of both eyes. If both eyes cannot be detected, re-enter step 41 and control the voice announcer to prompt the subject to adjust position until both eyes are detected; if the pupils of both eyes and the reflection point centres are detected, proceed to step 42.

Step 42: monitor the state of both eyes. As shown in Figure 2, the present invention defines the eye states as the initial state (s1), ready state (s2), covered state (s3), transient state (s4), steady state (s5) and blink state (s0). At the start of monitoring the eyes are in the initial state (s1); then (generally 2 s after the initial state) the eyes are in the ready state (s2), and the voice announcer is controlled to prompt the subject to cover the right eye. The system calls the pupil and reflection point positioning module to detect the state of the right eye: if the pupil is undetected for fewer than fifty consecutive frames, the right eye is in the blink state (s0); when the right eye is in the blink state and its pupil is detected again, the right eye returns to the ready state (s2); when the pupil remains undetected for fifty consecutive frames or more, the right eye enters the covered state (s3), and the procedure proceeds to step 43.
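The per-eye transitions of step 42 can be sketched as a small state machine, as below; the state names, constants and update loop are written out for illustration only and do not add behaviour beyond what the text describes.

```python
# Eye states used during the cover test (Fig. 2).
S_BLINK, S_INIT, S_READY, S_COVERED, S_TRANSIENT, S_STEADY = range(6)

BLINK_LIMIT = 50   # frames: < 50 missing frames = blink, >= 50 = covered

class EyeStateMonitor:
    """Minimal sketch of the step-42 transitions for one eye: READY to
    BLINK when the pupil is briefly lost, back to READY when it reappears,
    and to COVERED once the pupil has been missing for 50 consecutive
    frames."""
    def __init__(self):
        self.state = S_INIT
        self.missing = 0

    def start(self):
        self.state = S_READY

    def update(self, pupil_detected):
        if self.state not in (S_READY, S_BLINK):
            return self.state
        if pupil_detected:
            self.missing = 0
            self.state = S_READY
        else:
            self.missing += 1
            self.state = S_COVERED if self.missing >= BLINK_LIMIT else S_BLINK
        return self.state
```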

Step 43: control the voice announcer to prompt the subject to remove the occluder from the right eye and cover the left eye. The system records the moment the occluder leaves the right eye: the first frame in which both the reflection point and the pupil can be detected is the key frame, the right eye being then in the transient state (s4), and the system computes and stores the difference p0 between the pupil pixel coordinates and the reflection point pixel coordinates in that frame. After the right eye has been in the transient state (s4) for more than 2 s, the mean and variance of the pupil pixel coordinate components over 60 frames are computed; if the variances of the coordinate components are all smaller than the set variance threshold Td, the eye enters the steady state (s5). The difference p1 between the pupil centre pixel coordinates and the reflection point centre pixel coordinates over the 60 frames at entry into the steady state is computed. The voice announcer is controlled to inform the subject that the cover test is finished, and the procedure proceeds to step 44.

Step 44: compute the degree of strabismus. The offset of the right eye is t = p1 − p0, and the degree of strabismus of the right eye is A = H0·t + b0.

Similarly, the left eye may be covered first and the right eye afterwards, giving the offset and degree of strabismus of the left eye.
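Step 44 then combines the cover-test measurements with the calibration result of step 35, as in the following sketch, which treats p0 and p1 as the same kind of horizontal pupil-to-glint differences that were used during calibration.

```python
def strabismus_degree(p0, p1, H0, b0):
    """Step 44: the offset t between the steady-state difference p1 and the
    key-frame difference p0 is mapped to a strabismus degree with the line
    A = H0 * t + b0 fitted during calibration."""
    t = p1 - p0
    return H0 * t + b0

# Usage: with H0 and b0 from fit_hirschberg(), a right-eye measurement
# (p0, p1) gives the right-eye strabismus degree; repeating the procedure
# with the left eye covered first gives the left-eye value.
```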

Claims (9)

1.一种基于遮盖测试的斜视度数检测系统,其特征在于,包括头托(1)、定标点(3)、显示器(7)、摄像头(8)、光源(9)、语音播报器和控制装置;其中,所述头托(1)与显示器(7)相对设置且头托(1)与显示器(7)两者所在平面相互平行;所述定标点(3)通过由两个设置对称的立柱和立柱上的横梁组成的支架安装在头托(1)正上方,定标点(3)安装在横梁中部;所述摄像头(8)、光源(9)分别安装在显示器上且两者位于同一平面内,该平面平行于显示器(7)所在平面,所述摄像头(8)的视场覆盖头托(1);所述显示器(7)、摄像头(8)和语音播报器均连接控制器。1. A strabismus degree detection system based on covering test, characterized by including a headrest (1), calibration point (3), display (7), camera (8), light source (9), voice announcer and Control device; wherein, the headrest (1) and the display (7) are arranged oppositely and the planes of the headrest (1) and the display (7) are parallel to each other; the calibration point (3) is set by two A bracket composed of a symmetrical column and a beam on the column is installed directly above the headrest (1), and the calibration point (3) is installed in the middle of the beam; the camera (8) and the light source (9) are respectively installed on the display and both are located in the same plane, which plane is parallel to the plane where the display (7) is located, and the field of view of the camera (8) covers the headrest (1); the display (7), camera (8) and voice announcer are all connected controller. 2.如权利要求1所述的基于遮盖测试的斜视度数检测系统,其特征在于,所述定标点(3)为黑色圆形点。2. The strabismus degree detection system based on covering test according to claim 1, characterized in that the calibration point (3) is a black circular point. 3.如权利要求1所述的基于遮盖测试的斜视度数检测系统,其特征在于,所述控制器包括系统定标模块、瞳孔与反光点定位模块、校准模块和遮盖检测模块,其中:3. The strabismus detection system based on covering test as claimed in claim 1, characterized in that the controller includes a system calibration module, a pupil and reflective point positioning module, a calibration module and an covering detection module, wherein: 系统定标模块,用于实现如下流程:步骤11,以显示器(7)的左下角为原点建立世界坐标系,显示器(7)的高为SH,宽为SW,分辨率为RWxRH;步骤12,控制摄像头(8)进行图像采集;步骤13,根据采集到的图像,得到头托(1)的平面像素到距离的转换比例ratio,且手动测量定标点中心在世界坐标系下的坐标(Xb,Yb,Zb);The system calibration module is used to implement the following process: Step 11, establish a world coordinate system with the lower left corner of the display (7) as the origin, the height of the display (7) is SH, the width is SW, and the resolution is RWxRH; Step 12, Control the camera (8) to collect images; step 13, according to the collected images, obtain the conversion ratio ratio from the plane pixels of the headrest (1) to the distance, and manually measure the coordinates of the calibration point center in the world coordinate system (X b ,Y b ,Z b ); 瞳孔与反光点定位模块,用于采集人眼图像,并根据采集的人眼图像得到人的双眼的瞳孔中心的像素坐标、瞳孔中心在世界坐标系下的坐标、瞳孔中的反光点中心的像素坐标;The pupil and reflective point positioning module is used to collect human eye images, and obtain the pixel coordinates of the pupil centers of both eyes, the coordinates of the pupil center in the world coordinate system, and the pixels of the reflective point centers in the pupils based on the collected human eye images. coordinate; 校准模块,用于调用瞳孔与反光点定位模块,为被测人员进行校准,计算得到赫斯博格比值;The calibration module is used to call the pupil and reflective point positioning module to calibrate the person being measured and calculate the Hesburgh ratio; 遮盖检测模块,用于调用瞳孔与反光点定位模块,控制语音播报器提示被测人员采用遮盖测试方法测量斜视度数。The cover detection module is used to call the pupil and reflective point positioning module, and control the voice announcer to prompt the person being tested to use the cover test method to measure the degree of strabismus. 4.如权利要求3所述的基于遮盖测试的斜视度数检测系统,其特征在于,所述步骤13包括如下操作:4. 
The strabismus degree detection system based on covering test as claimed in claim 3, characterized in that the step 13 includes the following operations: 将采集到的图像转换为灰度图,使用高斯滤波或其他方法进行图像去噪;使用自适应阈值分割或其他方法对去噪后的图像进行阈值分割;使用形态学开运算去除阈值分割后的图像中的孤立点、毛刺等得到形态学运算后的图像;使用canny边缘检测或其他方法检测出形态学运算后的图像边缘;使用霍夫变换或其他圆形检测器检测出以上处理得到的图像中的定标点(3),得到定标点(3)的像素坐标(xb,yb)以及定标点(3)在图像上的像素半径rb;计算得到头托(1)的平面像素到距离的转换比例计算公式为ratio=R/rb,其中,R为定标点的真实半径;定标点中心在世界坐标系下的坐标(Xb,Yb,Zb)通过卷尺或其他测距工具获得。Convert the collected image into a grayscale image, use Gaussian filtering or other methods to denoise the image; use adaptive threshold segmentation or other methods to threshold segment the denoised image; use morphological opening operations to remove the noise after threshold segmentation Isolated points, burrs, etc. in the image are obtained from the image after morphological operation; use canny edge detection or other methods to detect the edges of the image after morphological operation; use Hough transform or other circular detectors to detect the image obtained by the above processing Calibration point (3) in, get the pixel coordinates (x b , y b ) of the calibration point (3) and the pixel radius rb of the calibration point (3) on the image; calculate the plane of the headrest (1) The calculation formula for the conversion ratio from pixels to distance is ratio=R/r b , where R is the real radius of the calibration point; the coordinates of the center of the calibration point in the world coordinate system (X b , Y b , Z b ) are measured using a tape measure or other ranging tools. 5.如权利要求4所述的基于遮盖测试的斜视度数检测系统,其特征在于,所述瞳孔与反光点定位模块的实现流程如下:5. The strabismus detection system based on covering test as claimed in claim 4, characterized in that the implementation process of the pupil and reflective point positioning module is as follows: 步骤21,控制摄像头(8)持续采集人眼图像,采集速率为60fps,直接采集到包含人双眼的图像,然后对图像进行水平翻转;Step 21, control the camera (8) to continuously collect human eye images at a collection rate of 60fps, directly collect images containing human eyes, and then flip the images horizontally; 步骤22,根据步骤21得到的图像,首先通过人眼检测方法定位双眼位置,然后分别在两个区域内使用瞳孔定位技术得到两个瞳孔中心的像素坐标;并且计算瞳孔中心在系统标定模块中所建立的世界坐标系下的坐标;Step 22: Based on the image obtained in step 21, first locate the position of both eyes through the human eye detection method, and then use pupil positioning technology in the two areas to obtain the pixel coordinates of the two pupil centers; and calculate the position of the pupil center in the system calibration module. Coordinates in the established world coordinate system; 步骤23,以左眼瞳孔中心(xL,yL)为圆心,以左眼瞳孔半径RL的1.5倍左右为半径作圆形区域,在该区域内寻找左眼瞳孔中的反光点中心的像素坐标(xg,yg);同理,得到右眼瞳孔中的反光点中心的像素坐标。Step 23: Take the left eye pupil center (x L , y L ) as the center of the circle, and use about 1.5 times the left eye pupil radius R L as the radius to make a circular area. Find the center of the reflective point in the left eye pupil in this area. Pixel coordinates (x g , y g ); similarly, obtain the pixel coordinates of the center of the reflective point in the pupil of the right eye. 6.如权利要求5所述的基于遮盖测试的斜视度数检测系统,其特征在于,所述步骤22的具体流程如下:6. 
6. The strabismus degree detection system based on the cover test according to claim 5, characterized in that the specific flow of step 22 is as follows:

(1) the eye detection method uses any one of the following, based on multiple captured eye images: (a) determining the positions of both eyes manually; (b) collecting and labelling multiple positive and negative samples and training a Haar-cascade classifier, the trained classifier yielding the positions of both eyes in an image; (c) training a deep-learning model to perform eye detection;

(2) pupil position detection uses a deep-learning segmentation network model: the input image yields a segmentation probability map, a binary image is then obtained by thresholding, connected-component analysis is carried out on the binary image, the centroid of the largest blob is taken as the pixel coordinates (xL, yL) of the left pupil centre, and the long side of the bounding rectangle of the largest blob is taken as the left pupil diameter 2·RL, RL being the pupil radius;

(3) from the world coordinate system established in step 11 and the coordinates (Xb, Yb, Zb) of the calibration point in that coordinate system measured in step 12, the coordinates (XL, YL, ZL) of the left pupil centre in the world coordinate system are computed, with ZL = Zb; here rb is the pixel radius of the calibration point (3) in the image, R is the real radius (cm) of the calibration point, and (xb, yb) are the pixel coordinates of the calibration-point centre;

(4) in the same way, steps (2) and (3) are carried out for the right eye, yielding the pixel coordinates (xR, yR) of the right pupil centre, the pupil radius RR, and the coordinates (XR, YR, ZR) of the right pupil centre in the world coordinate system.
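The post-processing in claim 6(2), thresholding the segmentation probability map and keeping the largest connected component, can be sketched as follows. The segmentation network itself is not reproduced, and the 0.5 threshold is an assumed value:

```python
import cv2
import numpy as np

def pupil_from_probability_map(prob_map: np.ndarray, threshold: float = 0.5):
    """Turn a segmentation probability map into a pupil centre and radius.

    Claim 6(2): threshold -> binary image -> connected components ->
    centroid of the largest blob as the pupil centre, long side of its
    bounding box as the pupil diameter.
    """
    binary = (prob_map >= threshold).astype(np.uint8)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)
    if n < 2:                                   # label 0 is background; no blob found
        return None
    areas = stats[1:, cv2.CC_STAT_AREA]         # areas of the foreground components
    k = 1 + int(np.argmax(areas))               # index of the largest blob
    cx, cy = centroids[k]                       # pupil centre (x_L, y_L) in pixels
    w = stats[k, cv2.CC_STAT_WIDTH]
    h = stats[k, cv2.CC_STAT_HEIGHT]
    radius = max(w, h) / 2.0                    # R_L from the bounding-box long side
    return float(cx), float(cy), float(radius)
```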
7. The strabismus degree detection system based on the cover test according to claim 6, characterized in that the calibration module is implemented as follows:

step 31, initialize nine gaze points with known coordinates: P1(X1, Y1, 0), P2(X2, Y2, 0), …, P9(X9, Y9, 0); the gaze point P0 has pixel coordinates (x0, y0) and world coordinates (X0, Y0, 0), with X0 = x0·SW/RW and Y0 = (RH − y0)·SW/RW, where SW is the display width (cm) and RW, RH are the horizontal and vertical resolution of the display defined in step 11; the offset gaze angles A1, A2, …, A9 from the gaze point P0 to the gaze points P1, P2, …, P9 are then computed;

step 32, display the gaze point P0(X0, Y0, 0) on the display, and control the voice announcer to instruct the subject to place the head on the headrest (1) and to fixate on the point P0; then display on the display (7), one after another, at least two gaze points with known positions as calibration points, and control the voice announcer to instruct the subject to fixate on these calibration points in turn; during this process the system calls the pupil and reflective point localization module to locate the pupil and the reflective point inside it; each calibration point is displayed for 5 s, and the current calibration point is regarded as stable once the subject has fixated on it for at least 2 s; the mean and standard deviation of the pupil-centre position over those 2 s are computed, and if all pupil-centre coordinates within the 2 s deviate from the mean by less than 1.5 standard deviations the fixation is regarded as stable; when the current gaze point is stable, record the pixel coordinates of the pupil centres and of the reflective points in all image frames of the last 2 s for which that gaze point is displayed;

step 33, for the pupil-centre and reflective-point pixel coordinates recorded during the last 2 s of each gaze point, discard the data of frames in which the eye is closed; over the remaining pupil-centre pixel coordinates compute the mean and the variance Dxp of the abscissa and the mean and the variance Dyp of the ordinate; if the abscissa or the ordinate of any pupil centre exceeds the corresponding bound derived from its mean and Dxp or Dyp, the calibration fails, otherwise the calibration succeeds; in the same way compute the mean and the variance Dxg of the abscissa and the mean and the variance Dyg of the ordinate of the reflective-point centre pixel coordinates; if the abscissa or the ordinate of any reflective-point centre exceeds the corresponding bound, the calibration fails, otherwise the calibration succeeds; after a failed calibration, return to step 31, i.e. re-enter the calibration module; if all nine gaze points are calibrated successfully, go to step 34;

step 34, compute, for the gaze points P0 to P9, the horizontal pixel distances d0 to d9 from the pupil centre to the centre of the Purkinje spot, and compute the offsets t1 to t9 of the distances at P1 to P9 relative to that at P0, where ti = di − d0 (i = 1, 2, …, 9);

step 35, having obtained (t1, A1), (t2, A2), …, (t9, A9), fit a straight line A = H0·t + b0 to them by the least-squares method, yielding the Hirschberg ratio H0 and the intercept b0 of the fitted line.
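Step 35 of claim 7 is an ordinary least-squares line fit. A minimal sketch with NumPy, assuming the gaze angles A1–A9 have already been obtained from the known screen geometry (the exact angle expression is not reproduced in the text above):

```python
import numpy as np

def fit_hirschberg_ratio(distances_px, angles, d0):
    """Least-squares fit A = H0*t + b0 from the calibration data of claim 7.

    distances_px: horizontal pupil-to-Purkinje-spot distances d1..d9 (pixels)
    angles:       corresponding offset gaze angles A1..A9
    d0:           distance at the reference gaze point P0
    """
    t = np.asarray(distances_px, dtype=float) - float(d0)   # t_i = d_i - d_0
    A = np.asarray(angles, dtype=float)
    H0, b0 = np.polyfit(t, A, deg=1)                         # slope = Hirschberg ratio, intercept = b0
    return float(H0), float(b0)
```

For example, fit_hirschberg_ratio([12.0, 14.5, 9.8], [5.0, 7.5, 2.8], 10.0) would return the slope and intercept of the fitted line for three calibration points; the claim uses nine.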
8. The strabismus degree detection system based on the cover test according to claim 1, characterized in that, in the cover detection module, the cover test method is selected from the unilateral cover test, the cover–uncover test and the alternate cover test.
9. The strabismus degree detection system based on the cover test according to claim 8, characterized in that the cover detection module uses the cover–uncover method, implemented as follows:

step 41, control the voice announcer to prompt the subject to begin the cover test; once the test has started, the system calls the pupil and reflective point localization module, continuously captures images and obtains the positions of both eyes and the pixel coordinates of the pupil and reflective-point centres; if both eyes cannot be detected at this stage, re-enter step 41 and control the voice announcer to prompt the subject to adjust position until both eyes are detected; once both eyes and the pupil and reflective-point centres are detected, go to step 42;

step 42, monitor the state of both eyes: when monitoring starts the eyes are in the initial state s1 and then in the ready state s2, and the voice announcer is controlled to prompt the subject to cover the right eye; the system calls the pupil and reflective point localization module to monitor the right eye; if the pupil is undetected for fewer than fifty consecutive frames, the right eye is in the blink state s0; when the right eye is in the blink state and its pupil is detected again, the right eye returns to the ready state s2; when the pupil is undetected for fifty or more consecutive frames, the right eye enters the occluded state s3, and the process goes to step 43;

step 43, control the voice announcer to prompt the subject to remove the occluder from the right eye and to cover the left eye; the system records the moment the occluder is removed from the right eye, and the first frame in which the reflective point and the pupil can both be detected is taken as the key frame; at this moment the right eye is in the transient state s4, and the system computes and stores the difference p0 between the pupil pixel coordinates and the reflective-point pixel coordinates of that frame; once the right eye has been in the transient state s4 for more than 2 s, the mean and variance of the pupil pixel-coordinate components over 60 frames are computed, and if the variances of the coordinate components are all smaller than the set variance threshold Td the eye enters the steady state s5; the difference p1 between the pupil-centre pixel coordinates and the reflective-point centre pixel coordinates over the 60 frames at the time the steady state is entered is computed; the voice announcer is controlled to inform the subject that the cover test is finished, and the process goes to step 44;

step 44, compute the degree of strabismus: compute the right-eye offset t = p1 − p0 and from it the right-eye strabismus degree A = H0·t + b0; in the same way, cover the left eye first and then the right eye, and compute the offset and strabismus degree of the left eye.
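Steps 43 and 44 of claim 9 reduce to a variance test over 60 frames and the linear mapping A = H0·t + b0 fitted in claim 7. A minimal sketch, with the interpretation that p0 and p1 are horizontal pupil-to-glint differences in pixels (an assumption, since the claim only calls them "differences"):

```python
import numpy as np

def is_steady(pupil_xy_frames, var_threshold: float) -> bool:
    """Steady-state test of claim 9, step 43: the eye is steady (state s5) when
    the per-axis variance of the pupil centre over the last 60 frames is below
    the threshold T_d."""
    pts = np.asarray(pupil_xy_frames, dtype=float)      # expected shape (60, 2)
    return bool(np.all(pts.var(axis=0) < var_threshold))

def strabismus_degree(p0: float, p1: float, H0: float, b0: float) -> float:
    """Claim 9, step 44: offset t = p1 - p0 between the steady-state and
    key-frame pupil-to-glint differences, mapped to a strabismus degree by the
    fitted line A = H0*t + b0."""
    t = p1 - p0
    return H0 * t + b0
```

In use, is_steady would gate the transition from the transient state s4 to the steady state s5, after which strabismus_degree is evaluated once per eye.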
CN202311448823.6A 2023-11-02 2023-11-02 A strabismus degree detection system based on covering test Pending CN117617889A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311448823.6A CN117617889A (en) 2023-11-02 2023-11-02 A strabismus degree detection system based on covering test

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311448823.6A CN117617889A (en) 2023-11-02 2023-11-02 A strabismus degree detection system based on covering test

Publications (1)

Publication Number Publication Date
CN117617889A true CN117617889A (en) 2024-03-01

Family

ID=90036720

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311448823.6A Pending CN117617889A (en) 2023-11-02 2023-11-02 A strabismus degree detection system based on covering test

Country Status (1)

Country Link
CN (1) CN117617889A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118319232A (en) * 2024-05-14 2024-07-12 波克医疗科技(上海)有限公司 Automatic alignment acquisition system based on eyeball position detection
CN118319232B (en) * 2024-05-14 2025-06-17 波克医疗科技(上海)有限公司 An automatic alignment acquisition system based on eye position detection
CN119745316A (en) * 2025-03-05 2025-04-04 上海联影智元医疗科技有限公司 Strabismus detection method, strabismus detection device and wearable device

Similar Documents

Publication Publication Date Title
WO2021135557A1 (en) Artificial intelligence multi-mode imaging analysis apparatus
CN110279391B (en) Eyesight detection algorithm for portable infrared camera
CN117617889A (en) A strabismus degree detection system based on covering test
US9408535B2 (en) Photorefraction ocular screening device and methods
US9004687B2 (en) Eye tracking headset and system for neuropsychological testing including the detection of brain damage
CN113288044B (en) Dynamic vision testing system and method
CN109684915A (en) Pupil tracking image processing method
CN109634431B (en) Medium-free floating projection visual tracking interaction system
CN105520713A (en) Binocular pupil light reflex measuring equipment
CN114931353B (en) Convenient and fast contrast sensitivity detection system
KR20220053209A (en) Apparatus and Method for Measuring The Angel Of Strabismus for Supporting Diagnosis Of Strabismus
CN115670370B (en) Retina imaging method and device for removing vitreous opacity spots of fundus image
CN115590462A (en) Vision detection method and device based on camera
CN109700423B (en) Intelligent vision detection method and device capable of automatically sensing distance
CN113850772B (en) Glaucoma screening method, system, device and storage medium based on AS-OCT video
CN115414002A (en) Eye detection method based on video stream and strabismus screening system
CN101108120A (en) Eye movement test analysis method
EP4574017A1 (en) Quantitative evaluation method for conjunctival congestion, apparatus, and storage medium
CN117530654A (en) A real-time binocular pupil inspection system and detection method
CN116115179A (en) Eye movement examination apparatus
CN113011286B (en) Strabismus discrimination method and system based on video-based deep neural network regression model
CN114569056A (en) Eyeball detection and vision simulation device and eyeball detection and vision simulation method
CN115137292B (en) Intelligent corneal topograph
JPH04279143A (en) Eyeball motion inspector
JPH11309116A (en) Optometer device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240705

Address after: 710119, No. 17, information Avenue, new industrial park, hi tech Zone, Shaanxi, Xi'an

Applicant after: XI'AN INSTITUTE OF OPTICS AND PRECISION MECHANICS OF CAS

Country or region after: China

Applicant after: Xi'an Purui Eye Hospital Co.,Ltd.

Address before: 710119, No. 17, information Avenue, new industrial park, hi tech Zone, Shaanxi, Xi'an

Applicant before: XI'AN INSTITUTE OF OPTICS AND PRECISION MECHANICS OF CAS

Country or region before: China
