WO2021042504A1 - A retina detection system based on virtual reality technology - Google Patents
A retina detection system based on virtual reality technology
- Publication number
- WO2021042504A1 (PCT/CN2019/116196)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- subject
- field
- head
- module
- information
- Prior art date
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/113—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining or recording eye movement
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
- A61B5/1113—Local tracking of patients, e.g. in a hospital or private home
- A61B5/1114—Tracking parts of the body
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/68—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
- A61B5/6801—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
- A61B5/6802—Sensor mounted on worn items
- A61B5/6803—Head-worn items, e.g. helmets, masks, headphones or goggles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/011—Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
Definitions
- the present invention relates to the technical field of retina detection, and in particular to a retina detection system based on virtual reality technology, which can provide multiple human-machine-interactive, virtual-reality-based visual field detection modes.
- the health of the retina is directly related to the health of people's vision.
- many visual diseases manifest as damage to, or functional decline of, the retina, including but not limited to macular degeneration, glaucoma, retinal detachment, and color blindness or color weakness, which seriously affect occupational and daily-life function.
- the etiology and pathogenesis of retinal damage or functional decline are diverse; in most cases the cause cannot be identified, and the condition deteriorates gradually, making it difficult to detect at the initial stage.
- at the early stage, however, many of these conditions are still reversible or controllable.
- the diagnosis and treatment of retina-related diseases therefore follow the principle of "the sooner the better": timely and effective early diagnosis and regular follow-up diagnosis help improve the condition and the quality of life.
- the existing retinal health detection method is based on dated mechanical fundus scanning, which is cumbersome and of limited accuracy: it requires the subject to keep the eyeballs still and relies on the personal skill of experienced physicians, consuming considerable manpower, material, and financial resources. This is a great burden for families with retinal health problems and for social medical resources. There is therefore an urgent need for low-cost, high-efficiency, objectively quantifiable auxiliary methods of visual field measurement, which reduce dependence on physicians, provide objective quantitative indicators for assessing retinal health, and detect retinopathy in a more timely and accurate manner.
- some subjects, such as young children and the elderly, find it difficult to follow instructions, keep focusing on a fixed flat-panel display or spherical retinal detector, and respond quickly to changes; their fixation point may drift away from the visual stimulus, making the data collected by the sensor invalid. Retinal health information collected with ordinary fixed planar or spherical visual stimulus materials therefore cannot exclude human and environmental interference outside the display device.
- the purpose of the present invention is to provide a retina detection system based on virtual reality technology.
- the present invention is achieved through the following technical solutions.
- a retinal detection system based on virtual reality technology includes: a display subsystem, a head movement tracking subsystem, an eye tracking subsystem, a controller subsystem, and a head-mounted display bracket; the display subsystem, the head movement tracking subsystem, and the eye tracking subsystem are built into the head-mounted display bracket; wherein:
- the display subsystem is used to display to the subject a three-dimensional image without a visual field boundary, and includes:
- a display module, which is used to display a stereoscopic image with depth of field to the subject;
- a lens module, located between the eyes of the subject and the display module, used to magnify and map the light projected by the display module onto the eyes of the subject, so that the three-dimensional image displayed by the display module occupies the subject's entire field of view;
- the controller subsystem is used to control and obtain the subject's visual field boundary and the test light spot information;
- the head movement tracking subsystem is used to detect head movement information and eliminate the influence of head movement on visual detection;
- the eye tracking subsystem is used to detect the gaze point information of the subject.
- the display module adopts a built-in dual-screen display whose two screens simulate the viewing angle of the human eye onto a real scene; spherical coordinates are established based on the field of view of each eye, with the apex of the spherical surface at the center of that eye's field of view. The display module provides a preliminary measurement mode and an accurate measurement mode, wherein:
- in the preliminary measurement mode, the polar axis through the apex forms the 0° meridian, and further meridians through the apex of the sphere are added counterclockwise from the 0° meridian at intervals of N°, so that the meridians divide the field of view into multiple regions;
- the display module moves a light spot along each meridian from the sphere apex toward the edge of the field of view, proceeding counterclockwise from the 0° meridian, and projects the spot to the eyes of the subject; when the subject finds that the light spot disappears, the client side of the controller subsystem is activated to record the visual field boundary in that direction. The server side of the controller subsystem is also used to control the built-in dual-screen display to add meridian scans with light spots of different brightness and color at intervals of M°, so as to obtain the subject's test light spot information and preliminarily outline the subject's visually insensitive area, where M° is less than N°.
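The dynamic meridian scan described above can be sketched in code. This is an illustrative sketch only, not taken from the patent: the helper names (`meridian_angles`, `spot_path`, `boundary_eccentricity`) and the 90° field radius are assumptions for the example, and the subject's button presses are represented by pre-recorded (eccentricity, seen) pairs.

```python
def meridian_angles(n_deg=30):
    """Angles of the meridians through the field-of-view apex, counted
    counterclockwise from the 0-degree meridian (N = 30 gives 12 sectors)."""
    return [k * n_deg for k in range(360 // n_deg)]

def spot_path(meridian_deg, fov_radius_deg=90.0, step_deg=1.0):
    """Positions (eccentricity, meridian) of a light spot moving outward
    along one meridian, in degrees of visual angle from the apex
    (the 90-degree field radius is an assumed value)."""
    steps = int(fov_radius_deg / step_deg) + 1
    return [(i * step_deg, meridian_deg) for i in range(steps)]

def boundary_eccentricity(responses):
    """responses: (eccentricity_deg, seen) pairs ordered from the apex
    outward; returns the eccentricity at which the spot first disappears,
    i.e. the visual field boundary on that meridian (None if always seen)."""
    for ecc, seen in responses:
        if not seen:
            return ecc
    return None
```

Repeating `boundary_eccentricity` over every angle returned by `meridian_angles` yields the per-direction boundary that the client side records.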
- in the precise measurement mode, horizontal and vertical coordinate lines are established with the center of the field of view as the origin, dividing the field of view into four quadrant areas, and a number of random points are evenly distributed in each quadrant area;
- the subject makes a corresponding choice according to whether each light spot is observed in the field of view, and the visually insensitive area of the subject is thereby measured accurately.
- the precise measurement mode is executed on the basis of the preliminary measurement mode.
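The quadrant division and point-by-point flashing of the precise (static) mode might look like the following sketch. The function names and the stand-in `seen_fn` response callback are hypothetical; a real system would drive the dual-screen display and collect the subject's button presses instead.

```python
import random

def quadrant(x, y):
    """Quadrant index (1-4) of a test point relative to the field-of-view
    origin, as divided by the horizontal and vertical coordinate lines."""
    if x >= 0 and y >= 0:
        return 1
    if x < 0 and y >= 0:
        return 2
    if x < 0 and y < 0:
        return 3
    return 4

def static_test(points, seen_fn, seed=0):
    """Flash the matrix points one by one in random order and record whether
    the subject reports seeing each one; seen_fn stands in for the subject's
    response (a fixed seed keeps the illustration reproducible)."""
    order = list(points)
    random.Random(seed).shuffle(order)
    return {p: bool(seen_fn(p)) for p in order}
```

Points that come back `False` mark candidate visually insensitive positions inside their quadrant.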
- the N° adopts 30°, and accordingly, the field of view will be divided into 12 regions; the M° adopts 5°.
- the brightness and color of the light spots are adjustable.
- the areas of the subject’s visual field that are less sensitive to light are outlined based on the results of the dynamic visual field range measurement.
- the static field of view measurement uses light spots arranged in a matrix to blink one by one.
- the matrix is divided into four quadrants, with a plurality of light spots pre-distributed in each quadrant area; the number and positions of the light spots do not affect the claims of this patent.
- the subject responds to the flickering of each light spot. Such a test produces more specific results, but the test time is long and the test process is uncomfortable. Concentrating the measurement on areas with weak sensitivity shortens the test time while maintaining test accuracy.
- the lens module includes two lenses, corresponding respectively to the two screens of the built-in dual-screen display, and each lens is provided with a circular prism array.
- the head movement tracking subsystem includes:
- an accelerometer module, used for gravity monitoring to determine whether the head-mounted display bracket is upright, and at the same time to detect the acceleration of the subject's head on each axis;
- the gyroscope module is used to track the rotation angular velocity and angle change of the subject's head.
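As one illustration of the accelerometer's gravity-monitoring role, a single stationary reading can be compared against the vertical axis to decide whether the bracket is upright. This is a hedged sketch: the choice of y as the vertical axis and the 10° tolerance are assumptions for the example, not values specified by the patent.

```python
import math

def is_upright(ax, ay, az, tol_deg=10.0):
    """Decide from one stationary accelerometer reading (in g) whether the
    head-mounted bracket is upright: at rest the sensor measures gravity,
    so the tilt is the angle between the reading and the vertical (y) axis."""
    norm = math.sqrt(ax * ax + ay * ay + az * az)
    if norm == 0.0:
        return False  # free fall or invalid reading
    tilt = math.degrees(math.acos(max(-1.0, min(1.0, ay / norm))))
    return tilt <= tol_deg
```

An upright, stationary headset reads roughly (0, 1, 0) g, while a 45° tilt such as (0.7, 0.7, 0) g fails the check.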
- the retina detection system further includes an analysis and evaluation subsystem
- the analysis and evaluation subsystem includes:
- a head motion compensation module, which obtains the subject's head movement pattern based on the head movement information;
- the visual attention point tracking module continuously tracks a specific area of the retina by compensating for eye movement according to the point of gaze information, and extracts its visual attention mode;
- a retina detection and evaluation module, which accurately locates the detection position on the retina through the head movement pattern and the visual attention pattern, and evaluates the visual field areas where vision may be impaired;
- the light sensitivity and color sensitivity of the subject in different visual field areas are obtained according to the test light spot information, the subject's visual sensitivity is measured, and the range of the visible and invisible areas of the retina is determined; the test result of the subject's retinal health status is then output.
- the gaze point information includes: gaze position information, gaze sequence information, and gaze duration information of the subject on the stereoscopic image.
- the head movement information includes: head movement speed information, displacement information and rotation direction information of the subject.
- the test light spot information includes: the position of the light spot, the brightness of the light spot, and the color of the light spot seen by the subject in the spherical coordinate space.
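Combining the three kinds of information above, the sketch below shows one simple way a test spot's position could be referred to the retina by subtracting the gaze direction and the head rotation. The function and its angular parameterization are illustrative assumptions; this is a small-angle approximation, and a full implementation would compose 3-D rotations rather than subtract angles.

```python
def retinal_offset(spot_az, spot_el, gaze_az, gaze_el,
                   head_yaw=0.0, head_pitch=0.0):
    """Approximate position of a test spot relative to the fovea, in degrees:
    the spot's direction minus the gaze direction, after removing the head
    rotation reported by the head movement tracking subsystem."""
    return (spot_az - (gaze_az + head_yaw),
            spot_el - (gaze_el + head_pitch))
```

A spot that lands at (0, 0) falls on the current fixation point regardless of how the head has turned, which is the effect the compensation is meant to achieve.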
- the retinal detection system based on virtual reality technology combines dynamic meridian light spots and static matrix light spots to provide multiple visual field detection modes, and can perform the test with a three-dimensional image that has no visual field boundary.
- the display module is used to display a three-dimensional image (light spot) with depth of field to the subject;
- the lens module is located between the subject's eyes and the display module, and is used to magnify and map the light spot projected by the display module into the subject's eyes, so that the stereoscopic image displayed by the display module occupies the subject's entire field of view;
- the controller subsystem is used to control and obtain the boundary of the subject's field of view and the test light spot information;
- the head movement tracking subsystem is used to detect head movement information to eliminate the influence of head movement on visual detection; the eye tracking subsystem is used to detect the gaze point information of the subject.
- the invention can objectively and accurately track and evaluate the detection results of the health status of each area of the subject's retina; it can quantify features, and is accurate and efficient.
- the present invention has the following beneficial effects:
- the retinal detection system based on virtual reality technology proposed by the present invention adopts a head-mounted display bracket in which the three-dimensional image is generated by the display module of the display subsystem. The test process is more comfortable than traditional visual field detection methods, which helps reduce the impact of discomfort on test accuracy and improves the subject's cooperation during the retinal health check. On the other hand, the visual stimulus material in the virtual reality environment is more three-dimensional and has more depth of field than planar stimulus material, and can simulate the visual range of an actual scene, which helps improve the validity of the collected data.
- the present invention proposes a retinal detection system based on virtual reality technology. Since the eye tracking subsystem is embedded in the head-mounted display bracket, there is no relative displacement between the eye tracking subsystem and the subject's eyes; the eye tracking subsystem follows head movement synchronously and will not lose focus on the subject's eyes when the head moves substantially.
- the head movement tracking subsystem is embedded in the head-mounted display bracket, which can follow the head movement without relative displacement, thereby improving the accuracy of head movement detection.
- the retinal detection system based on virtual reality technology proposed by the present invention can objectively and accurately track and evaluate the detection results of the health status of each region of the subject's retina according to two indicators, visual attention point information and head movement information; it can quantify features accurately and efficiently.
- the invention combines dynamic and static visual field measurement methods to shorten the measurement time while maintaining the accuracy of static measurement.
- FIG. 1 is a structural block diagram of a retina detection system based on virtual reality technology in an embodiment of the present invention;
- FIG. 2 is a structural block diagram of a head movement tracking subsystem in an embodiment of the present invention;
- FIG. 3 is a structural block diagram of an analysis and evaluation subsystem in an embodiment of the present invention;
- FIG. 4 is a schematic diagram of the structure of a head-mounted display in an embodiment of the present invention;
- FIG. 5 is an image diagram of a monocular field of view tested by the meridian dynamic method in an embodiment of the present invention;
- FIG. 6 is an image diagram of a monocular field of view tested by the matrix static method in an embodiment of the present invention;
- FIG. 7 is an image diagram of the monocular field of view outlined by the initial screening of the meridian dynamic method in an embodiment of the present invention;
- FIG. 8 is an image diagram of an accurate monocular field of view drawn by the meridian dynamic preliminary screening combined with the static matrix method in an embodiment of the present invention.
- 1 is the display subsystem, 11 is the lens module, and 12 is the display module;
- 3 is the head movement tracking subsystem, 31 is the accelerometer module, and 32 is the gyroscope module;
- 4 is the analysis and evaluation subsystem, 41 is the head tracking module, 42 is the eye tracking module, and 43 is the retina detection and evaluation module;
- 5 is a head-mounted display stand
- 61 is the origin of the visual area, 62 is the meridian, 63 is the dynamic light spot, 64 is the starting point of the dynamic light spot moving along the meridian, 65 is the movement direction of the light spot, and 66 is the boundary point where the light spot enters the subject's visually sensitive area, which is also the end point of the light spot on that meridian;
- 71 is the center point or origin of the visual area
- 72 is the X-axis that divides the visual area
- 73 is the Y-axis that divides the visual area
- 75 is the static test spot
- 81, 82, 83, 84 are the boundary points where the light point enters the subject's visual area, and 85 is the visually insensitive area of the subject outlined by these boundary points;
- 91 is a static light spot, and 92 is a visually insensitive area outlined by a dynamic meridian;
- the embodiment of the present invention proposes a retina detection system based on virtual reality technology, as shown in FIG. 1, FIG. 4, and FIG. 5, including: a display subsystem 1, an eye tracking subsystem 2, a head movement tracking subsystem 3, and a head-mounted display bracket 5; the display subsystem 1, the eye tracking subsystem 2, and the head movement tracking subsystem 3 are all built into the head-mounted display bracket 5.
- the display subsystem 1 is used to display to the subject a three-dimensional image without a visual field boundary, and includes: a display module 12, an embedded dual-screen display with sufficient pixel density and refresh rate, used to display a three-dimensional image with depth of field to the subject; and a lens module 11, located between the subject's eyes and the display module 12, used to magnify and map the light projected by the display module 12 onto the subject's eyes, so that the three-dimensional image displayed by the display module 12 occupies the subject's entire field of view. The eye tracking subsystem 2 is used to detect the gaze point information of the subject; the head movement tracking subsystem 3 is used to detect head movement information and eliminate the influence of head movement on vision detection.
- the spherical coordinates are established based on the field of view of each eye.
- the apex 61 of the spherical surface is the center of the human eye's field of view.
- the horizontal rightward meridian through the apex is the 0° meridian 62.
- starting from the 0° meridian 62 in the field of view, the display module 12 adds a meridian through the apex counterclockwise every N° (N° may be 30°), dividing the field of view into multiple areas (12 areas accordingly), and projects to the subject's eyes a light spot 63 moving on a dark background along each meridian from the edge toward the center of the field of view.
- when the subject finds that the light spot 63 disappears, he or she presses the client of the controller subsystem to record the visual field boundary in that direction; the brightness and color of the light spot 63 are controllable.
- after the dynamic measurement, the client of the controller subsystem outlines the visually insensitive area; static measurement is then used to focus on the insensitive area, while the visually sensitive area is only roughly measured.
- the measurement result will be calibrated based on the eye movement and head movement tracking data.
- the final test result is similar to the static field of view measurement result.
- the server end of the controller subsystem is also used to control the built-in dual-screen display to add meridian scans with light spots of different brightness and color at intervals of M°, so as to obtain the test spot information of the subject, where M° is less than N°; further, M° may be 5°.
- the areas of the subject’s field of view that are less sensitive to light are outlined according to the results of the dynamic field of view range measurement, as shown in Figure 7.
- the static field of view measurement uses light spots arranged in a matrix that flicker one by one, as shown in Figure 6; the matrix is divided into four quadrants, and multiple light spots are pre-distributed in each quadrant area. The number and positions of the light spots do not affect the claims of this patent. The subject responds to the flickering of each light spot. Such a test produces more specific results, but the test time is long and the test process is uncomfortable. Concentrating the measurement on areas with weak sensitivity shortens the test time while maintaining test accuracy.
- the head-mounted display stand 5 of the present invention has a built-in display subsystem 1, an eye tracking subsystem 2 and a head tracking subsystem 3.
- the pixel density of the display module 12 in the display subsystem 1 needs to be greater than 400 ppi and its refresh rate at least 60 Hz; the display module 12 is embedded in the front end of the head-mounted display bracket 5, with its built-in dual-screen display facing the eyes of the subject.
- in a conventional visual field test, the head of the subject needs to stay in a fixed position for 10 to 20 minutes, and the subject needs to react quickly to flashing light spots during the test.
- in such a test environment the testee is prone to discomfort, which reduces the test accuracy rate, because the visual field test result depends on the testee's responses: if the testee cannot respond accurately, the accuracy of the test result decreases. With the virtual reality display used in the present invention, the subject can move his or her head during the test, which reduces discomfort.
- the test method of the present invention also reduces the testee's discomfort to a certain extent: dynamic measurement is more comfortable than static measurement, but its measurement accuracy is lower.
- the present invention therefore combines static and dynamic measurement: first, dynamic measurement outlines the problematic areas in the field of view, and then static measurement focuses on the areas with visual defects, reducing the portion that needs static measurement while maintaining test accuracy.
- the lens module 11 magnifies the light emitted by the display module 12 and projects it onto the human eye, thereby eliminating the frame of the display module 12's dual-screen display from the subject's view, so that the subject is more immersed in the environment presented by the display subsystem 1; the subject's feedback behavior to different images is then more realistic, which increases the accuracy of the retinal health status judgment;
- the eye tracking subsystem 2 is a device capable of tracking and measuring eyeball position and eye movement information, and is embedded in the head-mounted display bracket 5.
- the eye tracking subsystem 2 can illuminate the eye with near-infrared light to generate an image of the pupil, and then capture the generated image through a camera.
- the eye tracking subsystem 2 can also realize eye tracking by recognizing characteristics of the eyeball, such as the pupil shape, the iris edge and boundary, and the corneal reflection of a close point light source. Since the eye tracking subsystem 2 in the embodiment of the present invention is embedded in the head-mounted display bracket 5, it always moves in synchronization with the subject's head; this solves the problem of existing fixed-position eye tracking devices, which lose focus on the subject's eyes when the subject's head moves significantly.
- the two screens of the display module 12 simulate the viewing angle of the human eye to the real scene, and respectively project images of the same scene and different angles to the eyes of the subject.
- the lens module 11 comprises one lens on each of the left and right sides of the head-mounted display bracket 5, and each lens is provided with a circular prism array.
- the circular prism array enables the lens module 11 to achieve the same effect as a large curved lens, spreading the light from the display module 12 across the human eye so that the visual stimulus material presented by the dual-screen display occupies the subject's entire field of view.
- the position of the circular prism array can be fine-tuned according to the actual situation of the user (such as myopia, hyperopia, eye distance, etc.).
- the retinal detection system based on virtual reality technology further includes: a head movement tracking subsystem 3, also built into the head-mounted display bracket 5, used to detect the head movement information of the subject; and an analysis and evaluation subsystem 4, used to collect the gaze point information and head movement information from the eye tracking subsystem 2 and the head movement tracking subsystem 3. According to the collected gaze point information and head movement information, the subject's visual attention pattern and head movement pattern are obtained to determine the detection result of the retinal health status.
- the head movement tracking subsystem 3 includes: an accelerometer module 31, which is used for gravity monitoring, so as to determine whether the head-mounted display stand 5 is upright
- the accelerometer module 31 is also used to detect the acceleration of the subject's head on each axis;
- the gyroscope module 32 is used to track the rotational angular velocity and angle changes of the subject's head.
- the accelerometer module 31 uses the inertial force on a sensing element to measure the direction and magnitude of its acceleration on the x, y, and z axes.
- a two-axis (x and y) acceleration sensor can also be used; when the device is stationary and upright, the x-axis acceleration reads 0 g and the y-axis acceleration reads 1 g.
- the gyroscope module 32 tracks the rotation angular velocity or angle change of the head mounted display support 5 along the x, y, and z three axes to provide more accurate object rotation information for the analysis and evaluation subsystem 4.
- the module can calculate the angular velocity by measuring the angle between the vertical axis of the gyro rotor and the device in the three-dimensional coordinate system, and judge the movement state of the subject’s head in three-dimensional space through the angle and angular velocity.
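A minimal sketch of this angle tracking, assuming uniformly sampled angular-velocity readings and plain Euler integration (an assumption for illustration; real head trackers fuse the gyroscope with the accelerometer to limit drift):

```python
def integrate_gyro(samples, dt):
    """Track head rotation by Euler-integrating gyroscope angular-velocity
    samples (deg/s about the x, y, z axes) over a fixed time step dt in
    seconds; returns the accumulated (x, y, z) angles after each sample."""
    angles = (0.0, 0.0, 0.0)
    history = []
    for sample in samples:
        angles = tuple(a + w * dt for a, w in zip(angles, sample))
        history.append(angles)
    return history
```

For example, two samples of 90 deg/s about x at dt = 0.5 s accumulate to a 90° rotation, giving the analysis and evaluation subsystem 4 both the angular velocity and the angle change it needs.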
- the analysis and evaluation subsystem 4 includes: a visual attention point tracking module 42, which obtains the subject's visual attention pattern based on the gaze point information; a head motion compensation module 41, which obtains the subject's head movement pattern from the head movement information based on a head tracking algorithm; and a retina detection and evaluation module 43, which compensates according to the subject's visual attention points and head movement.
- in the dynamic meridian test, based on the display module's measurement of the subject's visual field boundary in the 12 directions, the areas where vision may be impaired are evaluated, and finer meridian scans with light spots 63 of different colors at 5° intervals are added, so as to accurately measure the range of the visible and invisible areas of the subject's retina, including the sensitivity of the visible area and the sensitivity to different colors, and to output the test results of the subject's retinal health. In the static light spot test, the entire field of view is divided into four quadrants 74 by the X-axis 72 and the Y-axis 73.
- the display module will randomly display the light spots 75 arranged in a matrix one by one.
- the head motion compensation module 41 uses the speed, position, and direction information of the head motion over time to obtain the head motion pattern of the subject.
- the input information of this module comes from the accelerometer module 31 and the gyroscope module 32: the direction and magnitude of the head-mounted display's acceleration on each axis, and the rotation angular velocity and angle change along the x, y, and z axes.
- the signal input of the head movement compensation module can also be derived from the displacement and rotation angle of the head-mounted display as measured by an infrared detection component arranged in the environment of the virtual-reality retina detection system based on visual attention patterns and head movement patterns.
- the visual attention point tracking module 42 uses the eye tracking subsystem 2 to obtain the spatial location and temporal distribution of the subject's gaze points and saccades, obtains information about the subject's attended points and unattended areas, and extracts the subject's visual attention pattern.
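Separating gaze samples into fixations and saccades is commonly done with a dispersion-threshold method; the sketch below is one possible I-DT-style implementation and is not taken from the patent (the thresholds and the (x, y) degree units are assumptions for the example).

```python
def _dispersion(window):
    """Bounding-box dispersion of a window of (x, y) gaze samples."""
    xs, ys = zip(*window)
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(gaze, max_disp=1.0, min_len=3):
    """Dispersion-threshold (I-DT style) fixation detection on (x, y) gaze
    samples in degrees: a run of at least min_len samples whose dispersion
    stays within max_disp counts as one fixation.  Returns a list of
    (centroid_x, centroid_y, n_samples) tuples."""
    fixations, i = [], 0
    while i + min_len <= len(gaze):
        j = i + min_len
        if _dispersion(gaze[i:j]) <= max_disp:
            # grow the window while it stays compact
            while j < len(gaze) and _dispersion(gaze[i:j + 1]) <= max_disp:
                j += 1
            xs, ys = zip(*gaze[i:j])
            fixations.append((sum(xs) / len(xs), sum(ys) / len(ys), j - i))
            i = j
        else:
            i += 1
    return fixations
```

The centroids and sample counts correspond to the gaze position and gaze duration information described above, while the skipped samples between fixations mark saccades.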
- the retina detection and evaluation module 43 uses the characteristics of, and the correspondence between, the visual attention pattern and the head movement pattern on the same timeline, obtained from the head motion compensation module 41 and the visual attention point tracking module 42; a retinal health status detection and evaluation algorithm compares these characteristics and relationships against those of individuals with retinal defects and of typically developing individuals, evaluates the health status of the subject's retina, and outputs the retinal health status detection result.
- the module can run on, but is not limited to, a personal computer or server.
- the detection results can be shown on, but are not limited to, various display devices such as the screen of a personal computer or an additional LED display.
- the head movement information includes: head movement speed information, displacement information, and rotation direction information of the subject.
- the above embodiment of the present invention provides a retina detection system based on virtual reality technology, which detects the physiological health status of each part of the retina, including indicators such as imaging range, light sensitivity, and color vision.
- the system consists of a display subsystem, a head/eye movement tracking subsystem, an analysis and evaluation subsystem, and a controller subsystem.
- the display subsystem and the head/eye movement tracking subsystem are built into the head-mounted display, and the analysis and evaluation subsystem runs on the computer connected to the head-mounted display.
- the client of the controller subsystem is operated manually by the subject and connected wirelessly to the computer via Bluetooth.
- the display subsystem includes: a display module, an embedded dual-screen display with sufficient pixel density and refresh rate, used to display to the subject's eyes a stereoscopic image with depth of field; and a lens module, located between the subject's eyes and the display module, used to magnify and map the light projected by the display module into the subject's eyes, so that the stereoscopic image displayed by the display module occupies the subject's entire field of view.
- Head movement tracking subsystem: used to detect head movement and eliminate its influence on visual detection.
- Eye tracking subsystem: used to detect eye movement and the subject's gaze point information.
- the invention increases the range and accuracy of retinal detection by eliminating the visual field border and enhancing depth of field, and the eye tracking subsystem always moves synchronously with the head, ensuring precise positioning of the detection position on the retina while the eyes move.
- Analysis and evaluation subsystem: used to integrate and process the sensor information and controller input, and to output the physiological health status of the subject's retina.
- Controller subsystem: the subject provides selective input according to visual perception, reflecting the physiological state of different positions of the subject's retina.
Abstract
A retina detection system based on virtual reality technology, comprising: a display module (12) for displaying to the subject a stereoscopic image (light spots) with depth of field; a lens module (11), located between the subject's eyes and the display module (12), for magnifying and mapping the light spots projected by the display module (12) into the subject's eyes, so that the stereoscopic image displayed by the display module (12) occupies the subject's entire field of view; a controller subsystem (10) for controlling the acquisition of the subject's visual field boundary and test light spot information; a head movement tracking subsystem (3) for detecting head movement information and eliminating the influence of head movement on visual detection; and an eye tracking subsystem (2) for detecting the subject's gaze point information. The system can objectively and accurately track and evaluate the health status of each region of the subject's retina, quantify features, and is accurate and efficient.
Description
The present invention relates to the technical field of retina detection, and in particular to a retina detection system, especially a retina detection system based on virtual reality technology, which can provide a variety of human-computer interactive visual field range detection modes based on virtual reality technology.
Retinal health is directly related to a person's visual health; many visual diseases manifest as damage to or functional degeneration of the retina, including but not limited to macular degeneration, glaucoma, retinal detachment, and color blindness or color weakness, which seriously affect occupational and daily life functions. The causes and pathogenesis of retinal damage or functional degeneration are diverse, and in most cases the cause cannot be identified. A basic characteristic of these diseases is that they are difficult to notice in the early stages and gradually worsen; once discovered, the condition is often already severe and can no longer be reversed or controlled.
The diagnosis and treatment of retina-related diseases follow the principle of "the earlier the better"; timely and effective early diagnosis and regular follow-up help improve the condition and the quality of life. However, the existing means of retinal health detection is outdated mechanical fundus scanning, which is cumbersome and imprecise, requires the subject to keep the eyeball still, and depends on the personal skill of experienced physicians, consuming considerable manpower, material, and financial resources. This is a great challenge both for families with retinal health problems and for social medical resources. Therefore, there is an urgent need for a low-cost, efficient, and objectively quantifiable auxiliary means of visual field measurement. To reduce dependence on physicians, more objective quantitative indicators should be adopted to assist in detecting retinal health status and to discover retinal lesions more promptly and accurately.
A considerable proportion of retinal health test subjects, such as young children and the elderly, find it difficult to follow instructions to gaze continuously and attentively at a fixed flat-panel display or spherical perimeter and to react quickly to changes. As a result, their gaze point distribution may exceed the defined range of the visual stimulus material, invalidating the data collected by the sensors. Therefore, retinal health information collected through ordinary fixed flat or spherical visual stimulus materials cannot exclude human and environmental interference outside the display device.
Meanwhile, an ordinary flat-panel eye-tracking sensor must calibrate the user's line-of-sight focus; large head movements affect both the calibration result and the matching accuracy of gaze point positions during the actual test. These two factors interact and degrade the quality of the collected gaze point data.
No description or report of technology similar to the present invention has been found, and no similar domestic or foreign materials have been collected.
Summary of the Invention
In view of the above deficiencies in the prior art, the object of the present invention is to provide a retina detection system based on virtual reality technology.
The present invention is achieved through the following technical solutions.
A retina detection system based on virtual reality technology comprises: a display subsystem, a head movement tracking subsystem, an eye tracking subsystem, a controller subsystem, and a head-mounted display bracket; the display subsystem, head movement tracking subsystem, and eye tracking subsystem are built into the head-mounted display bracket; wherein:
The display subsystem is used to display to the subject a stereoscopic image without a visual field border, and comprises:
- a display module, used to display to the subject a stereoscopic image with depth of field;
- a lens module, located between the subject's eyes and the display module, used to magnify and map the light projected by the display module into the subject's eyes, so that the stereoscopic image displayed by the display module occupies the subject's entire field of view;
The controller subsystem is used to control the acquisition of the subject's visual field boundary and test light spot information;
The head movement tracking subsystem is used to detect head movement information and eliminate the influence of head movement on visual detection;
The eye tracking subsystem is used to detect the subject's gaze point information.
Preferably, the display module adopts an embedded dual-screen display, whose two screens simulate the viewing angles of human eyes on a real scene; spherical coordinates are established based on the visual field range of each eye, with the spherical vertex at the center of the eye's visual field. The display module provides a preliminary measurement mode and a precise measurement mode, wherein:
The preliminary measurement mode is as follows: the polar axis passes through the vertex to form the 0° meridian; within the visual field, a meridian passing through the spherical vertex is added counterclockwise every N° starting from the 0° meridian, and the multiple meridians divide the visual field into multiple regions. Against a dark background, the display module projects to the subject's eyes a light spot that moves along each meridian in turn, counterclockwise from the 0° meridian, from the spherical vertex toward the edge of the visual field; when the subject notices that the light spot has disappeared, the client of the controller subsystem is triggered, and the visual field boundary in that direction is recorded. The server of the controller subsystem is also used to control the embedded dual-screen display to add scans with light spots of different brightness and color along meridians at M° intervals, obtaining the subject's test light spot information and thereby preliminarily outlining the subject's visually insensitive regions; M° is smaller than N°.
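The preliminary (dynamic meridian) mode described above amounts to the following procedure. This is a sketch under assumptions: `sees_spot` is a hypothetical stand-in for the subject pressing the controller client, and the step sizes are illustrative.

```python
def scan_meridians(sees_spot, n_step_deg=30, ecc_step_deg=1, max_ecc_deg=90):
    """For each meridian, move a spot outward from the vertex until the
    subject reports it has disappeared; record that eccentricity as the
    visual field boundary in that direction."""
    boundaries = {}
    for meridian in range(0, 360, n_step_deg):
        boundary = max_ecc_deg
        for ecc in range(0, max_ecc_deg + 1, ecc_step_deg):
            if not sees_spot(meridian, ecc):
                boundary = ecc   # spot vanished: field edge on this meridian
                break
        boundaries[meridian] = boundary
    return boundaries
```

With N° = 30°, the scan yields one boundary per each of the 12 meridian directions.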
The precise measurement mode is as follows: with the center of the visual field as the origin, horizontal and vertical coordinate lines are established, dividing the visual field into four quadrant regions; several light spots that can be lit randomly are evenly distributed in each quadrant, and the subject makes a corresponding choice according to whether a lit spot is observed within the visual field, thereby precisely measuring the subject's visually insensitive regions.
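A minimal sketch of the precise (static quadrant) mode, assuming hypothetical `spots` positions centred on the field origin and a `subject_sees` response callback standing in for the controller input:

```python
import random

def static_quadrant_test(spots, subject_sees, rng=None):
    """Light matrix spots one by one in random order and collect the
    spots the subject fails to see, grouped by quadrant."""
    rng = rng or random.Random(0)
    order = list(spots)
    rng.shuffle(order)          # random one-by-one presentation
    def quadrant(p):
        x, y = p
        return (1 if x >= 0 and y >= 0 else
                2 if x < 0 and y >= 0 else
                3 if x < 0 else 4)
    missed = {1: [], 2: [], 3: [], 4: []}
    for spot in order:
        if not subject_sees(spot):
            missed[quadrant(spot)].append(spot)
    return missed
```

The returned per-quadrant misses are one way to represent the precisely measured insensitive regions.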
Preferably, the precise measurement mode is performed on the basis of the preliminary measurement mode.
Preferably, N° is 30°, so that the visual field is divided into 12 regions, and M° is 5°.
Preferably, the brightness and color of the light spots are adjustable.
After the dynamic retina test is completed, the regions of the subject's visual field with weak light sensitivity are outlined according to the results of the dynamic visual field measurement.
After the weak regions of the subject's visual field have been outlined, the system uses a static visual field test to measure the regions of weaker visual sensitivity intensively, while the more sensitive regions of the visual field are measured coarsely, as shown in Figure 8. The static visual field measurement flashes light spots arranged in a matrix one by one; the matrix is divided into four quadrants, with multiple light spots pre-distributed in each quadrant (the number and preset positions of the spots do not affect the claims of this patent). The subject reacts to the flashing of the spots. Such a test produces more specific results, but takes longer and is uncomfortable; intensively measuring only the regions of weak sensitivity shortens the test time while maintaining test accuracy.
Preferably, the lens module comprises two lenses, corresponding respectively to the two screens of the embedded dual-screen display, each lens being provided with a circular prism array.
Preferably, the head movement tracking subsystem comprises:
an accelerometer module, used for gravity monitoring to determine whether the head-mounted display bracket is upright, and to detect the acceleration of the subject's head along each axis;
a gyroscope module, used to track the rotational angular velocity and angle changes of the subject's head.
Preferably, the retina detection system further comprises an analysis and evaluation subsystem;
The analysis and evaluation subsystem comprises:
- a head motion compensation module, which obtains the subject's head movement pattern from the head movement information;
- a visual attention point tracking module, which, based on the gaze point information, continuously tracks specific regions of the retina by compensating for eye movement and extracts the subject's visual attention pattern;
- a retinal detection and evaluation module, which accurately locates the detection position on the retina through the head movement pattern and visual attention pattern; based on the measurement of the subject's visual field boundary, it evaluates the visual field regions where vision may be impaired, and, based on the test light spot information, obtains the subject's light sensitivity and color sensitivity in different visual field regions, measures the extent of the visible and invisible regions of the subject's retina, and outputs the subject's retinal health detection results.
Preferably, the gaze point information includes: the subject's gaze position information, gaze order information, and gaze duration information on the stereoscopic image.
Preferably, the head movement information includes: the subject's head movement speed information, displacement information, and rotation direction information.
Preferably, the test light spot information includes: the position of the light spots seen by the subject in the spherical coordinate space, and the brightness and color of the light spots.
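One plausible way to map the recorded spherical coordinates of a test light spot (meridian angle and eccentricity from the field centre) to a 3-D viewing direction is sketched below; the convention that the visual axis points along -z is an assumption of this example, not specified by the patent.

```python
import math

def spot_direction(meridian_deg, ecc_deg):
    """Unit direction of a spot given its meridian angle (counterclockwise
    from the horizontal-right 0° meridian) and its eccentricity from the
    field centre; the visual axis points along -z."""
    m = math.radians(meridian_deg)
    e = math.radians(ecc_deg)
    return (math.sin(e) * math.cos(m),
            math.sin(e) * math.sin(m),
            -math.cos(e))
```

Zero eccentricity maps to the visual axis itself; 90° eccentricity on the 90° meridian maps to straight up.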
The retina detection system based on virtual reality technology provided by the present invention combines dynamic meridian light spots with static matrix light spots to provide multiple visual field detection modes, and can be used to test stereoscopic images without a visual field border. In this system, the display module displays to the subject a stereoscopic image (light spots) with depth of field; the lens module, located between the subject's eyes and the display module, magnifies and maps the light spots projected by the display module into the subject's eyes so that the stereoscopic image occupies the subject's entire field of view; the controller subsystem controls the acquisition of the subject's visual field boundary and test light spot information; the head movement tracking subsystem detects head movement information and eliminates the influence of head movement on visual detection; and the eye tracking subsystem detects the subject's gaze point information. The present invention can objectively and accurately track and evaluate the health status of each region of the subject's retina, quantify features, and is accurate and efficient.
Compared with the prior art, the present invention has the following beneficial effects:
1. The retina detection system based on virtual reality technology proposed by the present invention adopts a head-mounted display bracket; the stereoscopic image generated by the display module of the display subsystem makes the test process more comfortable than traditional visual field detection and moderately priced, which helps reduce the influence of discomfort on test accuracy and improves cooperation with retinal health examinations. On the other hand, the visual stimulus material in the virtual reality environment is more stereoscopic and has greater depth of field than flat stimulus material, and can simulate the visual range of the real scenes a person encounters, helping to improve the validity of the collected data.
2. Since the eye tracking subsystem is embedded inside the head-mounted display bracket, there is no relative displacement between the eye tracking subsystem and the subject's eyes; the eye tracking subsystem therefore follows head movement synchronously and will not lose focus on the subject's eyes due to large head movements.
3. The head movement tracking subsystem is embedded in the head-mounted display bracket and follows head movement without relative displacement, improving the accuracy of head movement detection.
4. Based on the two indicators of visual attention point information and head movement information, the system objectively and accurately tracks and evaluates the health status of each region of the subject's retina, quantifies features, and is accurate and efficient.
5. The present invention combines dynamic and static visual field measurement, shortening the measurement time while maintaining the accuracy of static measurement.
Other features, objects, and advantages of the present invention will become more apparent by reading the detailed description of non-limiting embodiments with reference to the following drawings:
Figure 1 is a structural block diagram of the retina detection system based on virtual reality technology in an embodiment of the present invention;
Figure 2 is a structural block diagram of the head movement tracking subsystem in an embodiment of the present invention;
Figure 3 is a structural block diagram of the analysis and evaluation subsystem in an embodiment of the present invention;
Figure 4 is a schematic structural diagram of the head-mounted display in an embodiment of the present invention;
Figure 5 is an image of the monocular visual field range tested by the dynamic meridian method in an embodiment of the present invention;
Figure 6 is an image of the monocular visual field range tested by the static matrix method in an embodiment of the present invention;
Figure 7 is an image of the monocular visual field range preliminarily outlined by the dynamic meridian method in an embodiment of the present invention;
Figure 8 is a precise monocular visual field range image drawn by combining dynamic meridian pre-screening with the static matrix method in an embodiment of the present invention;
In the figures:
1 is the display subsystem, 11 is the lens module, 12 is the display module;
2 is the eye tracking subsystem;
3 is the head movement tracking subsystem, 31 is the accelerometer module, 32 is the gyroscope module;
4 is the analysis and evaluation subsystem, 41 is the head movement tracking module, 42 is the eye movement tracking module, 43 is the retinal detection and evaluation module;
5 is the head-mounted display bracket;
61 is the origin of the visual region, 62 is a meridian, 63 is a dynamic light spot, 64 is the starting point of the dynamic light spot's movement along the meridian, 65 is the direction of movement of the light spot, 66 is the boundary point where the light spot enters the subject's visually sensitive region, which is also the end point of the light spot's movement along that meridian;
71 is the center point (origin) of the visual region, 72 is the X-axis dividing the visual region, 73 is the Y-axis dividing the visual region, 75 is a static test light spot;
81, 82, 83, and 84 are boundary points where light spots enter the subject's visual region, and 85 is the subject's visually insensitive region outlined by these boundary points;
91 is a static light spot, and 92 is the visually insensitive region outlined by the dynamic meridians;
10 is the controller subsystem.
The present invention will be described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art further understand the present invention, but do not limit it in any way. It should be noted that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the present invention, all of which fall within the protection scope of the present invention.
An embodiment of the present invention provides a retina detection system based on virtual reality technology. Referring to Figures 1, 4, and 5, it comprises: a display subsystem 1, an eye tracking subsystem 2, a head movement tracking subsystem 3, and a head-mounted display bracket 5; the display subsystem 1, eye tracking subsystem 2, and head movement tracking subsystem 3 are all built into the head-mounted display bracket 5. The display subsystem 1 is used to display to the subject a stereoscopic image without a visual field border and comprises: a display module 12, an embedded dual-screen display with sufficient pixel density and refresh rate, used to display to the subject a stereoscopic image with depth of field; and a lens module 11, located between the subject's eyes and the display module 12, used to magnify and map the light projected by the display module 12 into the subject's eyes so that the stereoscopic image occupies the subject's entire field of view. The eye tracking subsystem 2 detects the subject's gaze point information; the head movement tracking subsystem 3 detects head movement information and eliminates the influence of head movement on visual detection. Spherical coordinates are established based on the visual field range of each eye, with the spherical vertex 61 at the center of the eye's visual field; the horizontal rightward meridian passing through the vertex is the 0° meridian 62. Within the visual field, starting from the 0° meridian 62, the display module 12 adds a meridian through the vertex counterclockwise every N° (which may be 30°), dividing the visual field into multiple regions (correspondingly, 12 regions), and projects to the subject's eyes, against a dark background, a light spot 63 that moves along each meridian in turn from the edge toward the center of the visual field; when the subject notices that the light spot 63 has disappeared, the client of the controller subsystem is pressed and the visual field boundary in that direction is recorded. The brightness and color of the light spot 63 are controllable. After the dynamic measurement, the visually insensitive regions are outlined; static measurement then measures the insensitive regions intensively, while the relatively sensitive regions are measured coarsely. The measurement results are calibrated according to the eye tracking and head tracking data, and the final test result is similar to a static visual field measurement result. The server of the controller subsystem is also used to control the embedded dual-screen display to add scans with light spots of different brightness and color along meridians at M° intervals, obtaining the subject's test light spot information; M° is smaller than N°. Further, M° may be 5°.
After the dynamic retina test is completed, the regions of the subject's visual field with weak light sensitivity are outlined according to the results of the dynamic visual field measurement, as shown in Figure 7.
After the weak regions of the subject's visual field have been outlined, the system uses a static visual field test to measure the regions of weaker visual sensitivity intensively, while the more sensitive regions are measured coarsely, as shown in Figure 8. The static visual field measurement flashes light spots arranged in a matrix one by one, as shown in Figure 6; the matrix is divided into four quadrants, with multiple light spots pre-distributed in each quadrant (the number and preset positions of the spots do not affect the claims of this patent). The subject reacts to the flashing of the spots. Such a test produces more specific results but takes longer and is uncomfortable; intensively measuring only the regions of weak sensitivity shortens the test time while maintaining accuracy.
Specifically, the head-mounted display bracket 5 of the present invention has the display subsystem 1, eye tracking subsystem 2, and head tracking subsystem 3 built in. The pixel density of the display module 12 in the display subsystem 1 must be greater than 400 ppi and the refresh rate at least 60 Hz; the display module 12 is embedded at the front of the head-mounted display bracket 5, with its dual-screen display facing the subject's eyes. In a traditional static visual field measurement, the subject's head must stay in a fixed position for 10 to 20 minutes, and the subject must react quickly to briefly flashed light spots. In such a test environment the subject easily becomes uncomfortable, which lowers test accuracy, because the visual field test result depends on the subject's responses; if the subject cannot respond accurately, the accuracy of the result decreases. With the virtual reality display used by the present invention, the subject can move the head during the test, reducing discomfort. The test method of the present invention also reduces the subject's discomfort to a certain extent: dynamic measurement is more comfortable than static measurement but less precise, so the present invention combines the two. Dynamic measurement first outlines the problematic regions of the visual field, and static measurement then intensively measures the visually defective regions, reducing the portion that requires static measurement while maintaining test accuracy.
The lens module 11 magnifies the light emitted by the display module 12 and projects it onto the eyes, thereby eliminating the borders of the dual-screen display of the display module 12 from the subject's view, so that the subject is more immersed in the environment created by the display subsystem 1; the subject's feedback behavior to different images is more natural, increasing the accuracy of the retinal health judgment.
The eye tracking subsystem 2 is a device capable of tracking and measuring eyeball position and eye movement information, and is embedded in the head-mounted display bracket 5. In this embodiment, the eye tracking subsystem 2 may use near-infrared light to generate an image of what the pupil sees, and then capture the generated image with a camera. The eye tracking subsystem 2 may also implement eye tracking by identifying features of the eyeball, such as the pupil shape, the heterochromatic edge of the iris, the iris boundary, and the corneal reflection of a nearby directed light source. Because the eye tracking subsystem 2 in this embodiment is embedded in the head-mounted display bracket 5, it always moves synchronously with the subject's head, solving the problem of existing fixed-position eye tracking devices, which lose focus on the subject's eyes once the subject's head moves substantially.
Further, on the basis of the above embodiment, the two screens of the display module 12 simulate the viewing angles of human eyes on a real scene and project to the subject's eyes images of the same scene from different angles.
Further, on the basis of the above embodiment, referring to Figure 4, one lens of the lens module 11 is provided on each of the left and right sides of the head-mounted display bracket 5, and each lens is provided with a circular prism array.
Specifically, the circular prism array gives the lens module 11 the same effect as a large curved lens, scattering the light from the display module 12 into the eyes so that the visual stimulus material presented by the dual-screen display occupies the subject's entire visual field. The position of the circular prism array can be fine-tuned according to the user's actual situation (such as myopia, hyperopia, and interpupillary distance).
Further, on the basis of the above embodiment, referring to Figures 1 and 4, the retina detection system based on virtual reality technology further comprises: the head movement tracking subsystem 3, also built into the head-mounted display bracket 5, used to detect the subject's head movement information; and the analysis and evaluation subsystem 4, used to collect gaze point information and head movement information from the eye tracking subsystem 2 and the head movement tracking subsystem 3, and to derive the subject's visual attention pattern and head movement pattern from the collected information, thereby determining the retinal health detection results.
Further, on the basis of the above embodiment, as shown in Figure 2, the head movement tracking subsystem 3 comprises: an accelerometer module 31, used for gravity monitoring to determine whether the head-mounted display bracket 5 is upright, and also to detect the acceleration of the subject's head along each axis; and a gyroscope module 32, used to track the rotational angular velocity and angle changes of the subject's head.
Specifically, in this embodiment, the accelerometer module 31 uses the inertial force of the sensing device to measure the direction and magnitude of acceleration along the x, y, and z axes. In other embodiments, a two-axis (x, y) acceleration sensor may also be used, where the x-axis acceleration is 0 g and the y-axis acceleration is 1 g.
The gyroscope module 32 tracks the rotational angular velocity or angle changes of the head-mounted display bracket 5 about the x, y, and z axes, providing more precise rotation information to the analysis and evaluation subsystem 4. The module can calculate the angular velocity by measuring the angle between the vertical axis of the gyroscope rotor in the three-dimensional coordinate system and the device, and use the angle and angular velocity to determine the motion state of the subject's head in three-dimensional space.
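A minimal dead-reckoning sketch of how the gyroscope's angular-velocity samples can be accumulated into head angles; the sample format and fixed time step are assumptions of this example, and a production system would also fuse accelerometer data and correct for drift.

```python
def integrate_gyro(samples, dt):
    """Integrate gyroscope angular-velocity samples (deg/s about the
    x, y, z axes), taken at a fixed time step dt (seconds), into the
    accumulated (pitch, yaw, roll) angles in degrees."""
    angles = [0.0, 0.0, 0.0]
    for wx, wy, wz in samples:
        angles[0] += wx * dt  # rotation about x
        angles[1] += wy * dt  # rotation about y
        angles[2] += wz * dt  # rotation about z
    return tuple(angles)
```

For example, one second of samples at a constant 90 deg/s about the y axis accumulates to roughly a 90° yaw.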
Further, on the basis of the above embodiment, as shown in Figure 3, the analysis and evaluation subsystem 4 comprises: the visual attention point tracking module 42, which derives the subject's visual attention pattern from the gaze point information; the head motion compensation module 41, which derives the subject's head movement pattern from the head movement information using a head tracking algorithm; and the retinal detection and evaluation module 43, which works from the subject's visual attention points and the head motion compensation. In the dynamic meridian test, based on the display module's measurement of the subject's visual field boundary in 12 directions, the module evaluates the regions where vision may be impaired and adds finer scans with light spots 63 of different colors along meridians at 5° intervals, thereby precisely measuring the extent of the visible and invisible regions of the subject's retina, including the light sensitivity of the visible regions and their sensitivity to different colors, and outputs the subject's retinal health detection results. In the static light spot test, the entire visual field is divided by the X-axis 72 and the Y-axis 73 into four quadrants 74; the display module randomly displays the matrix-arranged light spots 75 one by one, the subject responds whether each spot is seen or not seen, and the system draws the subject's visual field map from those responses. In the combined dynamic-static test, the dynamic test draws a rough map of the blind regions of the visual field (Figure 7); the system then displays static light spots to measure the visual blind regions precisely, flashing the spots randomly one by one, while assigning more spots to the insensitive regions of the dynamically outlined map. The visually healthy regions are also assigned several spots but are not measured intensively (Figure 8). Eye movement and head movement detection also play a role during the test to ensure accuracy. After the static test, the system combines head and eye tracking to produce the final result, whose map will be similar to that of a traditional mechanical static measurement.
Specifically, the head motion compensation module 41, based on a head tracking algorithm, uses the speed, position, and direction information of head movement over time to derive the subject's head movement pattern. In this embodiment, the input information of this module comes from the multi-axis acceleration direction and magnitude of the head-mounted display measured by the accelerometer module 31, and the rotational angular velocity and angle changes about the x, y, and z axes measured by the gyroscope module 32. The signal input of the head motion compensation module may also come from the displacement and rotation angle of the head-mounted display measured by infrared detection components arranged in the environment where the retina detection system operates.
The visual attention point tracking module 42 uses the position and time distribution of the subject's gaze points and saccades obtained by the eye tracking subsystem 2, derives the subject's points of attention and non-attention region information based on the visual attention mode, and extracts the subject's visual attention pattern.
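Separating gaze points (fixations) from saccades in raw eye-tracking samples is commonly done with a dispersion-threshold detector; the sketch below is a generic I-DT-style example, not the patent's algorithm, and its thresholds are illustrative.

```python
def detect_fixations(gaze, max_dispersion=1.0, min_samples=5):
    """Minimal dispersion-threshold fixation detector.  `gaze` is a list
    of (x, y) samples; a window whose bounding-box dispersion stays under
    `max_dispersion` for at least `min_samples` points is reported as one
    fixation, represented by its centroid."""
    def dispersion(w):
        xs, ys = [p[0] for p in w], [p[1] for p in w]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))
    fixations, i, n = [], 0, len(gaze)
    while i < n:
        j = i + min_samples
        if j > n:
            break
        if dispersion(gaze[i:j]) <= max_dispersion:
            # grow the window while the samples stay tightly clustered
            while j < n and dispersion(gaze[i:j + 1]) <= max_dispersion:
                j += 1
            xs = [p[0] for p in gaze[i:j]]
            ys = [p[1] for p in gaze[i:j]]
            fixations.append((sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j
        else:
            i += 1
    return fixations
```

Two stable clusters of samples yield two fixation centroids, with the intervening jump treated as a saccade.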
The retinal detection and evaluation module 43 uses the characteristics and correspondence of the visual attention pattern and the head movement pattern on the same timeline, obtained from the head motion compensation module 41 and the visual attention point tracking module 42, and compares these characteristics and relationships, through a retinal health detection and evaluation algorithm, with the characteristics of individuals with retinal defects and of normally developed individuals, to evaluate the subject's retinal health status and to output and display the detection results. The module can run on, but is not limited to, a personal computer or server. The detection results can be shown on, but are not limited to, various display devices such as the screen of a personal computer or an additional LED display.
Further, on the basis of the above embodiment, the head movement information includes: the subject's head movement speed information, displacement information, and rotation direction information.
The above embodiment of the present invention provides a retina detection system based on virtual reality technology that detects the physiological health status of each part of the retina, including indicators such as imaging range, light sensitivity, and color vision. It consists of a display subsystem, a head/eye movement tracking subsystem, an analysis and evaluation subsystem, and a controller subsystem; the display subsystem and the head/eye movement tracking subsystem are built into the head-mounted display, the analysis and evaluation subsystem runs on the computer connected to the head-mounted display, and the client of the controller subsystem is operated manually by the subject and connected wirelessly to the computer via Bluetooth. The display subsystem includes: a display module, an embedded dual-screen display with sufficient pixel density and refresh rate, used to display to the subject's eyes a stereoscopic image with depth of field; and a lens module, located between the subject's eyes and the display module, used to magnify and map the light projected by the display module into the subject's eyes so that the stereoscopic image occupies the subject's entire field of view. The head movement tracking subsystem detects head movement and eliminates its influence on visual detection. The eye tracking subsystem detects eye movement and the subject's gaze point information. By eliminating the visual field border and enhancing depth of field, the present invention increases the range and accuracy of retinal detection; and since the eye tracking subsystem always moves synchronously with the head, the detection position on the retina is precisely located while the eyes move. The analysis and evaluation subsystem integrates and processes the sensor information and controller input, and outputs the physiological health status of the subject's retina. The controller subsystem allows the subject to provide selective input according to visual perception, reflecting the physiological state of different positions of the subject's retina.
Specific embodiments of the present invention have been described above. It should be understood that the present invention is not limited to the above specific embodiments; those skilled in the art can make various variations or modifications within the scope of the claims, which do not affect the substance of the present invention.
Claims (12)
- A retina detection system based on virtual reality technology, characterized by comprising: a display subsystem, a head movement tracking subsystem, an eye tracking subsystem, a controller subsystem, and a head-mounted display bracket, the display subsystem, head movement tracking subsystem, and eye tracking subsystem being built into the head-mounted display bracket; wherein: the display subsystem is used to display to the subject a stereoscopic image without a visual field border and comprises: - a display module, used to display to the subject a stereoscopic image with depth of field; - a lens module, located between the subject's eyes and the display module, used to magnify and map the light projected by the display module into the subject's eyes so that the stereoscopic image displayed by the display module occupies the subject's entire field of view; the controller subsystem is used to control the acquisition of the subject's visual field boundary and test light spot information; the head movement tracking subsystem is used to detect head movement information and eliminate the influence of head movement on visual detection; the eye tracking subsystem is used to detect the subject's gaze point information.
- The retina detection system based on virtual reality technology according to claim 1, characterized in that the display module adopts an embedded dual-screen display, whose two screens simulate the viewing angles of human eyes on a real scene; spherical coordinates are established based on the visual field range of each eye, with the spherical vertex at the center of the eye's visual field.
- The retina detection system based on virtual reality technology according to claim 2, characterized in that the display module provides a preliminary measurement mode, which is as follows: the polar axis passes through the vertex to form the 0° meridian; within the visual field, a meridian passing through the spherical vertex is added counterclockwise every N° starting from the 0° meridian, the multiple meridians dividing the visual field into multiple regions; against a dark background, the display module projects to the subject's eyes a light spot that moves along each meridian in turn, counterclockwise from the 0° meridian, from the spherical vertex toward the edge of the visual field; when the subject notices that the light spot has disappeared, the client of the controller subsystem is triggered and the visual field boundary in that direction is recorded; the server of the controller subsystem is also used to control the embedded dual-screen display to add scans with light spots of different brightness and color along meridians at M° intervals, obtaining the subject's test light spot information and thereby preliminarily outlining the subject's visually insensitive regions, where M° is smaller than N°.
- The retina detection system based on virtual reality technology according to claim 3, characterized in that the display module further provides a precise measurement mode, which is as follows: with the center of the visual field as the origin, horizontal and vertical coordinate lines are established, dividing the visual field into four quadrant regions; several light spots that can be lit randomly are evenly distributed in each quadrant, and the subject makes a corresponding choice according to whether a lit spot is observed within the visual field, thereby precisely measuring the subject's visually insensitive regions; the precise measurement mode is performed on the basis of the preliminary measurement mode.
- The retina detection system based on virtual reality technology according to claim 4, characterized in that N° is 30°, so that the visual field is divided into 12 regions, and M° is 5°.
- The retina detection system based on virtual reality technology according to claim 4, characterized in that the brightness and color of the light spots are adjustable.
- The retina detection system based on virtual reality technology according to claim 1, characterized in that the lens module comprises two lenses, corresponding respectively to the two screens of the embedded dual-screen display, each lens being provided with a circular prism array.
- The retina detection system based on virtual reality technology according to claim 1, characterized in that the head movement tracking subsystem comprises: an accelerometer module, used for gravity monitoring to determine whether the head-mounted display bracket is upright, and to detect the acceleration of the subject's head along each axis; and a gyroscope module, used to track the rotational angular velocity and angle changes of the subject's head.
- The retina detection system based on virtual reality technology according to any one of claims 1-8, characterized by further comprising an analysis and evaluation subsystem, which comprises: - a head motion compensation module, which obtains the subject's head movement pattern from the head movement information; - a visual attention point tracking module, which, based on the gaze point information, continuously tracks specific regions of the retina by compensating for eye movement and extracts the subject's visual attention pattern; - a retinal detection and evaluation module, which accurately locates the detection position on the retina through the head movement pattern and visual attention pattern, evaluates the visual field regions where vision may be impaired based on the measurement of the subject's visual field boundary, obtains the subject's light sensitivity and color sensitivity in different visual field regions from the test light spot information, measures the extent of the visible and invisible regions of the subject's retina, and outputs the subject's retinal health detection results.
- The retina detection system based on virtual reality technology according to any one of claims 1-8, characterized in that the gaze point information includes: the subject's gaze position information, gaze order information, and gaze duration information on the stereoscopic image.
- The retina detection system based on virtual reality technology according to any one of claims 1-8, characterized in that the head movement information includes: the subject's head movement speed information, displacement information, and rotation direction information.
- The retina detection system based on virtual reality technology according to any one of claims 1-8, characterized in that the test light spot information includes: the position of the light spots seen by the subject in the spherical coordinate space, and the brightness and color of the light spots.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910840530.XA CN110537895A (zh) | 2019-09-06 | 2019-09-06 | Retina detection system based on virtual reality technology |
CN201910840530.X | 2019-09-06 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021042504A1 true WO2021042504A1 (zh) | 2021-03-11 |
Family
ID=68712630
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/116196 WO2021042504A1 (zh) | 2019-09-06 | 2019-11-07 | Retina detection system based on virtual reality technology |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110537895A (zh) |
WO (1) | WO2021042504A1 (zh) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110812146B (zh) * | 2019-11-20 | 2022-02-22 | 精准视光(北京)医疗技术有限公司 | Multi-region visual function adjustment method and device, and virtual reality head-mounted display device |
CN111134693B (zh) * | 2019-12-09 | 2021-08-31 | 上海交通大学 | Auxiliary detection method, system, and terminal for children with autism based on virtual reality technology |
CN112754421B (zh) * | 2021-01-19 | 2024-09-24 | 上海佰翊医疗科技有限公司 | Eyeball protrusion measuring device |
CA3205671A1 (en) * | 2021-02-22 | 2022-08-25 | Niccolo Maschio | Tracking of retinal traction through digital image correlation |
CN113647900A (zh) * | 2021-06-28 | 2021-11-16 | 中山大学中山眼科中心 | Self-service visual field detection method based on a personal terminal |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3914090B2 (ja) * | 2002-05-07 | 2007-05-16 | 株式会社ニューオプト | Eye movement analysis system and eye imaging device |
CN106037626A (zh) * | 2016-07-12 | 2016-10-26 | 吴越 | Head-mounted visual field examination instrument |
CN107169309A (zh) * | 2017-07-26 | 2017-09-15 | 北京为凡医疗信息技术有限公司 | Visual field detection method, system, and detection device based on head-mounted detection equipment |
US20180008141A1 (en) * | 2014-07-08 | 2018-01-11 | Krueger Wesley W O | Systems and methods for using virtual reality, augmented reality, and/or a synthetic 3-dimensional information for the measurement of human ocular performance |
CN108209857A (zh) * | 2013-09-03 | 2018-06-29 | 托比股份公司 | Portable eye tracking device |
WO2018163166A2 (en) * | 2017-03-05 | 2018-09-13 | Virtuoptica Ltd. | Eye examination method and apparatus therefor |
CN109645955A (zh) * | 2019-01-31 | 2019-04-19 | 北京大学第三医院(北京大学第三临床医学院) | Multifunctional visual function detection device and method based on VR and eye tracking |
CN109758107A (zh) * | 2019-02-14 | 2019-05-17 | 郑州诚优成电子科技有限公司 | VR visual function examination device |
CN109846456A (zh) * | 2019-03-06 | 2019-06-07 | 西安爱特眼动信息科技有限公司 | Visual field examination device based on a head-mounted display device |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102265310A (zh) * | 2008-10-28 | 2011-11-30 | 俄勒冈健康科学大学 | Method and apparatus for visual field monitoring |
CN109310315B (zh) * | 2016-06-09 | 2022-02-22 | Qd激光公司 | Visual field and visual acuity examination system, device, method, storage medium, and server device |
CN208319187U (zh) * | 2017-09-06 | 2019-01-04 | 福州东南眼科医院(金山新院)有限公司 | Computerized perimeter for ophthalmology |
CN109717828A (zh) * | 2018-10-24 | 2019-05-07 | 中国医学科学院生物医学工程研究所 | Visual field examination device and detection method |
-
2019
- 2019-09-06 CN CN201910840530.XA patent/CN110537895A/zh active Pending
- 2019-11-07 WO PCT/CN2019/116196 patent/WO2021042504A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN110537895A (zh) | 2019-12-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021042504A1 (zh) | Retina detection system based on virtual reality technology | |
US10610093B2 | Method and system for automatic eyesight diagnosis | |
US6659611B2 | System and method for eye gaze tracking using corneal image mapping | |
EP2467053B1 | Apparatus and method for automatically determining a strabismus angle | |
JP5498375B2 (ja) | Visual field examination system, method for driving a visual field examination device, computer program, information medium or computer-readable medium, and processor | |
CA2750287A1 | Gaze detection in a see-through, near-eye, mixed reality display | |
CN106821301B (zh) | Computer-based device and method for detecting eye movement distance and binocular movement consistency deviation | |
JP2018099174A (ja) | Pupil detection device and pupil detection method | |
CN110881981A (zh) | Alzheimer's disease auxiliary detection system based on virtual reality technology | |
Nagamatsu et al. | Calibration-free gaze tracking using a binocular 3D eye model | |
CN113080836A (zh) | Visual detection and visual training device for non-central gaze | |
CN115590462A (zh) | Camera-based vision detection method and device | |
JP6747172B2 (ja) | Diagnosis support device, diagnosis support method, and computer program | |
CN111528786A (zh) | Strabismus compensatory head position detection system and method | |
CN111134693B (zh) | Auxiliary detection method, system, and terminal for children with autism based on virtual reality technology | |
Miller et al. | Videographic Hirschberg measurement of simulated strabismic deviations. | |
CN106725280A (zh) | Strabismus degree measuring device | |
Lin | An eye behavior measuring device for VR system | |
JP6496917B2 (ja) | Gaze measurement device and gaze measurement method | |
CN108742510B (zh) | Strabismus degree and horizontal torsion angle detector suitable for young children | |
MP et al. | Method for increasing the accuracy of tracking the center of attention of the gaze | |
CN208319187U (zh) | Computerized perimeter for ophthalmology | |
JP7548637B2 (ja) | System and method for quantifying ocular dominance | |
Shin et al. | A novel computerized visual acuity test for children | |
CN116413049A (zh) | Driver head motion simulation device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19944213 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19944213 Country of ref document: EP Kind code of ref document: A1 |