CN118975773A - Vision detection image display method, vision detection device and storage medium - Google Patents
- Publication number
- CN118975773A (application CN202411046040.XA)
- Authority
- CN
- China
- Prior art keywords
- display
- vision
- target
- determining
- gaze point
- Prior art date
- 2024-07-31
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A—HUMAN NECESSITIES » A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE » A61B—DIAGNOSIS; SURGERY; IDENTIFICATION » A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/0041—Operational features thereof characterised by display arrangements
- A61B3/0066—Operational features thereof with identification means for the apparatus
- A61B3/0075—Apparatus for testing the eyes provided with adjusting devices, e.g. operated by control lever
- A61B3/0285—Phoropters (subjective types, i.e. testing apparatus requiring the active assistance of the patient, for testing visual acuity; for determination of refraction)
- A61B3/032—Devices for presenting test symbols or characters, e.g. test chart projectors
Landscapes
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Biophysics (AREA)
- Ophthalmology & Optometry (AREA)
- Engineering & Computer Science (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Physics & Mathematics (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Eye Examination Apparatus (AREA)
Abstract
The application discloses a method for displaying a vision test image, a vision testing device, and a storage medium. Before each test character is displayed, a target vision grade is determined according to the current test progress, and the display size of the test character is determined according to that grade. After the display size and the display orientation are determined, only one test character is displayed on the screen at a time, so that the test is performed with a single, unambiguous character and the user does not need to be guided by highlighting. External variables in the vision test are thereby reduced, making the test result more accurate.
Description
Technical Field
The present application relates to the field of image data processing, and in particular to a method for displaying a vision test image, a vision testing device, and a storage medium.
Background
In order to improve the convenience of vision testing, schemes have emerged that implement vision testing on virtual reality (VR) or augmented reality (AR) devices. In such schemes, a virtual visual acuity chart is generally displayed in the interface, and the position the subject should observe is indicated by changing the background color or by highlighting, so that the subject's visual acuity can be determined from what the subject reports seeing at that position.
In such schemes, because the virtual eye chart must remind the observer of the viewing position by highlighting it, the optical environment at that position is altered no matter which highlighting method is used. This affects the vision test and increases the error of the test result.
Disclosure of Invention
The application mainly aims to provide a method for displaying a vision test image, a vision testing device, and a storage medium, so as to solve the technical problem in the related art that highlighting introduces confounding factors into the vision test and makes the test result inaccurate.
To achieve the above object, an embodiment of the present application provides a method for displaying a vision test image, including:
determining a target vision grade, and determining the display size of a test character according to the target vision grade;
determining a display orientation corresponding to the test character;
and controlling the head-mounted display device to display one test character on a first display screen according to the display size and the display orientation.
In an embodiment, after the step of controlling the head-mounted display device to display one test character on the first display screen according to the display size and the display orientation, the method further includes:
determining a target display area according to the display position of the test character on the first display screen;
determining the gaze point position according to the gaze point detection result;
and determining a sub-detection result corresponding to the target vision grade according to the positional relationship between the gaze point position and the target display area.
In an embodiment, after the step of determining the sub-detection result corresponding to the target vision grade according to the positional relationship between the gaze point position and the target display area, the method further includes:
updating the display orientation of the test character displayed on the first display screen, and executing the following steps again:
controlling the head-mounted display device to display one test character on the first display screen according to the display size and the display orientation;
determining the target display area according to the display position of the test character on the first display screen;
determining the gaze point position according to the gaze point detection result;
and determining the sub-detection result corresponding to the target vision grade according to the positional relationship between the gaze point position and the target display area.
In an embodiment, after the step of determining the sub-detection result corresponding to the target vision grade according to the positional relationship between the gaze point position and the target display area, the method further includes:
and if the number of updates of the display orientation is greater than or equal to a preset number, determining a final detection result corresponding to the target vision grade according to a plurality of sub-detection results.
In an embodiment, after the step of controlling the head-mounted display device to display one test character on the first display screen according to the display size and the display orientation, the method further includes:
dynamically updating the display orientation of the test character displayed on the first display screen;
determining, according to the gaze point detection result, the position change trajectory of the gaze point while the display orientation is being dynamically updated;
and determining a detection result corresponding to the target vision grade according to the display orientation changes and the position change trajectory.
In an embodiment, the method further comprises:
and displaying a black picture on a second display screen of the head-mounted device while one test character is displayed on the first display screen.
In an embodiment, the method further comprises:
generating a vision test result, and determining the user identity according to biometric features acquired by the head-mounted display device;
and storing the user identity and the vision test result in association with each other.
In an embodiment, after the step of storing the user identity and the vision test result in association with each other, the method further includes:
when a physical examination report generation instruction is received, updating the vision test result into a physical examination report template;
and outputting the updated physical examination report template.
The embodiment of the application also provides a vision testing device, including: a memory, a processor, and a computer program stored on the memory and executable on the processor, the computer program being configured to implement the steps of the method for displaying a vision test image as described above.
The embodiment of the application also provides a storage medium, which is a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps of the method for displaying a vision test image are implemented.
The embodiment of the application discloses a method for displaying a vision test image. Before each test character is displayed, a target vision grade is determined according to the current test progress, and the display size of the test character is determined according to that grade. After the display size and the display orientation are determined, only one test character is displayed on the screen at a time, so that the test is performed with a single character and the user does not need to be guided by highlighting. External variables in the vision test are thereby reduced, making the test result more accurate.
Drawings
FIG. 1 is a schematic view of an eye chart display effect in the related art;
FIG. 2 is a flow chart of an embodiment of a method for displaying a vision test image according to an embodiment of the present application;
FIG. 3 is a schematic diagram of display orientations of a test character according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a display effect of a head-mounted device according to an embodiment of the present application;
FIG. 5 is a flow chart of another embodiment of a method for displaying a vision test image according to an embodiment of the present application;
FIG. 6 is a flow chart of yet another embodiment of a method for displaying a vision test image according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a vision testing system according to an embodiment of the present application.
The objects, functional features and advantages of the present application are further described below with reference to the accompanying drawings and in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
In order to improve the convenience of vision testing, schemes have emerged that implement vision testing on virtual reality (VR) or augmented reality (AR) devices. In such schemes, a virtual visual acuity chart is generally displayed in the interface, and the position the subject should observe is indicated by changing the background color or by highlighting, so that the subject's visual acuity can be determined from what the subject reports seeing at that position. For example, referring to FIG. 1, an eye chart as shown in FIG. 1 may be displayed on a display screen of the headset; during the vision test, a test character at a different position in the chart is then highlighted according to the test progress, to prompt the user to observe that position.
In such schemes, because the virtual eye chart must be highlighted to alert the observer to the viewing position, the optical environment at that position is altered no matter which highlighting method is used. Different people differ in their sensitivity to different colors and to different light. Thus, although the optical environment at the viewing position is the same for every tester, individuals respond differently to the same optical change, so an external variable is introduced into the vision test. In this way the vision test result is negatively affected, and its error increases.
To overcome the above drawbacks of the related art, an embodiment of the present application proposes a scheme for controlling a headset, such as a VR or AR device, to display a vision test image. A target vision grade is determined according to the test progress, the size of the test character is determined from that grade, and then only one test character is displayed on the screen at a time, in the determined size and display orientation, so the user never needs to be guided to observe by highlighting. This preserves the convenience of headset-based testing while reducing the influence of extraneous factors, thereby improving the accuracy of the detection result of the headset-based vision testing scheme.
For ease of understanding, the technical solutions in the embodiments of the present application are described below with reference to the accompanying drawings.
Referring to FIG. 2, in an alternative embodiment, the method for displaying a vision test image includes the following steps S10 to S30:
S10: determining a target vision grade, and determining the display size of the test character according to the target vision grade;
In this embodiment, the target vision grade is the vision grade corresponding to the test character about to be displayed at the current point in the test. The vision grades can be defined using a common quantitative vision notation such as the 5-point recording method or the decimal recording method.
It will be appreciated that the basic principle of vision testing is to determine the observer's result from the vision grade whose corresponding test characters the observer can accurately recognize at a fixed viewing distance. On a visual acuity chart, the test characters for different vision grades have different sizes: the lower the vision grade, the larger the test character. Based on this principle, a target vision grade is determined first, the display size corresponding to that grade is determined, and the head-mounted device is controlled to display a corresponding test character at that size, to test whether the user reaches the target vision grade.
Optionally, the target vision grade may be determined automatically by software according to preset rules, or the control side may receive an externally input vision grade through a peripheral device and set it as the target vision grade. For example, as one alternative, the vision grades may be set as the target vision grade one by one, in order from high to low or from low to high, according to the program settings.
For example, after the user triggers the vision test procedure, the lowest vision grade may first be set as the target vision grade and tested; if the test passes, the next vision grade in low-to-high order becomes the target vision grade. Alternatively, the highest vision grade may first be set as the target vision grade and tested; if the test fails, the next vision grade in high-to-low order becomes the target vision grade. The vision grade the user reaches is thus determined step by step, which allows the user to perform the vision test alone, without the help of a second person. Optionally, the method provided in this embodiment may be implemented in the headset itself, or in a server or a control terminal; if implemented in a server or control terminal, that device can communicate with the head-mounted device in a wired or wireless manner to transmit the instructions.
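Illustratively, the low-to-high progression described above can be sketched as follows (Python is used for illustration only; the grade list and the show_and_test callback are assumptions, not part of the application):

```python
# A minimal sketch (assumption) of the low-to-high grade progression.
VISION_GRADES = [4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, 5.0]

def run_low_to_high(show_and_test):
    """show_and_test(grade) -> True if the user passes a trial at that grade."""
    best = None
    for grade in VISION_GRADES:        # from the lowest grade upward
        if show_and_test(grade):
            best = grade               # passed: continue with the next grade
        else:
            break                      # first failure ends the progression
    return best                        # highest grade passed, or None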
In another example, the method provided in this embodiment may run on a server or a control terminal, which receives a user's control instruction through a peripheral device and determines the target vision grade from that instruction. Such an embodiment may be applied to a physical examination scenario, allowing a doctor to conduct remote or near-field vision tests based on the headset.
In addition, as one way of determining the display size corresponding to the target vision grade, the mapping between each vision grade and its display size may be stored in advance; after the target vision grade is determined, the corresponding display size can then be read directly from this mapping. For example, the mapping may be stored as a table, so that once the target vision grade is determined, the corresponding display size is found by a table lookup.
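As a hedged illustration of such a mapping, the sketch below derives a display size from the standard 5-point-notation relation (the stroke of a grade-L optotype subtends 10^(5−L) arcminutes, and a standard E is five strokes tall); the virtual viewing distance and pixel density are assumed values, not taken from the application:

```python
import math

VIRTUAL_DISTANCE_M = 5.0    # simulated viewing distance (assumed)
PIXELS_PER_METER = 3800.0   # virtual-screen pixel density (assumed)

def optotype_height_px(grade: float) -> int:
    """Display height of a test character for a 5-point-notation grade."""
    stroke_arcmin = 10 ** (5.0 - grade)      # stroke width in arcminutes
    total_arcmin = 5 * stroke_arcmin         # a standard E is 5 strokes tall
    angle_rad = math.radians(total_arcmin / 60.0)
    height_m = 2 * VIRTUAL_DISTANCE_M * math.tan(angle_rad / 2)
    return round(height_m * PIXELS_PER_METER)

# A precomputed table, matching the table-lookup approach described above.
SIZE_TABLE = {g / 10: optotype_height_px(g / 10) for g in range(40, 51)}
```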
S20: determining a display orientation corresponding to the test character;
The display orientation is the orientation in which the character is displayed relative to the display; it controls how much the test character is rotated.
Optionally, under one orientation definition scheme, the display orientation of the test character may take four values: up, down, left and right. For example, if the test character is an E or a C, its opening can be made to face the upper, lower, left or right edge of the display device according to the display orientation. It will be appreciated that the edges mentioned here are not limited to the physical outline of the display screen; they merely indicate directions.
Under another definition scheme, a display orientation may be defined by a rotation angle of the test character. Referring to FIG. 3, any orientation may be taken as a reference orientation, and the other display orientations defined as rotations about the center of the reference orientation by an arbitrary angle α in (0°, 360°), giving many more display orientations to choose from.
In this embodiment, the display orientation may be determined randomly or from an external input. In the random case, a direction can be drawn at random under the four-direction definition scheme, or an angle can be drawn by a random algorithm. In the external-input case, a display orientation adjustment interface may be shown on the control terminal, the display orientation entered by the user is received through this interface, and it is taken as the final display orientation.
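For instance, both random determination schemes can be sketched as follows (the function names are illustrative assumptions):

```python
import random

CARDINAL_ORIENTATIONS = ("up", "down", "left", "right")

def random_cardinal_orientation() -> str:
    """Randomly pick one of the four opening directions."""
    return random.choice(CARDINAL_ORIENTATIONS)

def random_angle_orientation() -> float:
    """Randomly pick a rotation angle about the character center, in degrees."""
    return random.uniform(0.0, 360.0)
```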
S30: controlling the head-mounted display device to display one test character on the first display screen according to the display size and the display orientation.
The head-mounted device is provided with separate display screens for the left and right eyes. In a vision test, each eye is generally tested on its own so that monocular vision can be determined. Thus, after the display size and the display orientation are determined, the first display screen, i.e. the screen in the headset used to display the test character, can be determined according to the current test procedure: during a left-eye test the screen corresponding to the left eye is the first display screen, and during a right-eye test the screen corresponding to the right eye is the first display screen.
After the first display screen is determined, the head-mounted device is controlled to display one test character on it, with the background set to white, so as to reproduce as closely as possible the white-background, black-character appearance of a standard visual acuity chart.
For example, referring to FIG. 4, if the determined target vision grade is 4.0 (under the 5-point recording method) and the display orientation is "right", then during a left-eye test the screen corresponding to the left eye can be controlled to display an E with its opening facing right. Optionally, in some embodiments, the second display screen may display a black picture, to prevent content shown to the eye not under test from interfering with the vision test result.
Optionally, if the same vision grade needs to be tested several times, only the display orientation of the test character needs to be updated before testing again.
In the technical scheme disclosed in this embodiment, before each test character is displayed, a target vision grade is determined according to the current test progress, and the display size of the test character is determined according to that grade; further, after the display size and the display orientation are determined, only one test character is displayed on the screen at a time, so that the test is performed with a single character and the user does not need to be guided by highlighting. External variables in the vision test are thereby reduced, making the test result more accurate.
Referring to FIG. 5, in another alternative embodiment, the method for displaying a vision test image further includes the following steps S40 to S60:
S40: determining a target display area according to the display position of the test character on the first display screen;
In related headset-based detection schemes, after the eye chart is displayed and the user is guided to observe a test character, whether the user can see that character is determined by capturing the user's voice or the user's gestures. Voice input loses sensitivity and accuracy in noisy environments, so the user's judgment cannot be received reliably; a gesture scheme, on the other hand, requires an additional motion-sensing unit, which raises cost.
This embodiment therefore determines whether the user can see the test character based on gaze point detection. When the vision test procedure starts, the user can be prompted, by text or by voice, to place the gaze point in a preset area of the test character once it is displayed. For example, the prompt may be "focus your gaze on the opening of the test character".
Optionally, the size and shape of the target display area may be set in advance. After the test character is displayed on the first display screen, one target display area is selected on the first display screen according to a preset rule and the display position of the test character. For example, referring to FIG. 4, when the test character is an E and the preset rule is to prompt the user to fixate the opening area after the character is displayed, a rectangular target display area of preset size can be selected on the side the opening faces.
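A minimal sketch of this rectangle selection, assuming a top-left pixel coordinate system and an illustrative tolerance margin, is:

```python
def target_area(char_x, char_y, char_size, orientation, margin_ratio=0.2):
    """Return (x, y, w, h) of the target display area on the open side of an E."""
    m = int(char_size * margin_ratio)   # tolerance around the opening (assumed)
    strip = char_size // 3              # the open third of the optotype
    if orientation == "right":
        return (char_x + char_size - strip - m, char_y - m, strip + 2 * m, char_size + 2 * m)
    if orientation == "left":
        return (char_x - m, char_y - m, strip + 2 * m, char_size + 2 * m)
    if orientation == "up":
        return (char_x - m, char_y - m, char_size + 2 * m, strip + 2 * m)
    if orientation == "down":
        return (char_x - m, char_y + char_size - strip - m, char_size + 2 * m, strip + 2 * m)
    raise ValueError(f"unknown orientation: {orientation}")
```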
S50: determining the position of the gaze point according to the gaze point detection result;
VR and AR glasses can determine the position of the user's gaze point on the screen by eye tracking. Eye tracking follows the movement of the eyeball by measuring the eye's gaze point position, or the movement of the eyeball relative to the head. In VR/AR devices, eye tracking helps the system establish the user's visual focus.
Optionally, eye tracking in VR/AR glasses typically relies on video/image capture: a camera photographs the user's eye, and an image processing algorithm extracts characteristic parameters to determine the eyeball position.
For example, based on the physiology of the eyeball, the gaze direction can be approximated by the line between the cornea center (usually determined from corneal reflections) and the pupil center. The curvature of the cornea refracts light, however, which directly affects measurement accuracy. To address this, an infrared light source and a glint (Purkinje image) can be used. One or more infrared light sources may be mounted at specific locations on the VR/AR glasses, so that infrared light is emitted toward the user's cornea. When the infrared light strikes the cornea, a bright reflection spot, the Purkinje image, forms on its outer surface; because the center of the cornea is its highest point, the reflection is strongest and the glint clearest when the light falls exactly on the corneal center. A camera on the VR/AR glasses then captures an eye image containing the cornea, pupil and glint. Image processing algorithms identify and locate the pupil center and the glint, and from the (known) position of the infrared light source and the glint position obtained from the image, the center of corneal curvature is calculated geometrically. Once the gaze direction is determined, it can be mapped onto the virtual screen in the VR/AR environment to obtain the user's gaze point position.
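Real pupil-corneal-reflection trackers calibrate considerably richer models than this; as a heavily simplified, assumed sketch, the pupil-to-glint vector can be mapped to screen coordinates with a calibrated affine fit:

```python
import numpy as np

def fit_gaze_map(pupil_glint_vecs, screen_points):
    """Least-squares affine map from pupil-glint vectors to screen (x, y).

    pupil_glint_vecs: calibration samples of (pupil - glint), shape (n, 2).
    screen_points: on-screen targets fixated during calibration, shape (n, 2).
    """
    v = np.asarray(pupil_glint_vecs, dtype=float)
    a = np.hstack([v, np.ones((len(v), 1))])          # affine design matrix
    coeffs, *_ = np.linalg.lstsq(a, np.asarray(screen_points, float), rcond=None)
    return coeffs                                     # shape (3, 2)

def gaze_point(coeffs, pupil_center, glint_center):
    """Map one measured pupil/glint pair to a gaze point on the screen."""
    vec = np.asarray(pupil_center, float) - np.asarray(glint_center, float)
    return np.append(vec, 1.0) @ coeffs               # (x, y) in screen pixels
```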
S60: determining a sub-detection result corresponding to the target vision grade according to the positional relationship between the gaze point position and the target display area.
Optionally, during the vision test, after a test character is displayed, the user may be prompted to keep the gaze point on the opening of the test character. Therefore, once the target display area and the gaze point position are determined, judging whether the gaze point lies within the target display area is equivalent to judging whether the user can see the test character corresponding to the target vision grade. In this way, the sub-detection result can be determined using the gaze point detection function built into the head-mounted device.
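The decision itself then reduces to a point-in-rectangle test, for example (the tuple layouts follow the assumed target_area() sketch above):

```python
def sub_result(gaze_xy, area) -> bool:
    """True if the detected gaze point lies inside the target display area."""
    gx, gy = gaze_xy
    x, y, w, h = area
    return x <= gx <= x + w and y <= gy <= y + h
```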
Optionally, in one embodiment, for the same target vision grade, a final detection result based on several sub-detection results is more reliable than a single one. Therefore, after one sub-detection result is determined, the display orientation of the test character on the first display screen can be updated and steps S30 to S60 executed again, so that at least two sub-detection results are generated for the target vision grade, and whether the user reaches that grade is decided from those results.
Optionally, before the display orientation is updated, the number of updates already made for the current target vision grade can be checked. If the number of display orientation updates is smaller than a preset number, the display orientation is updated and a test character of the size corresponding to the target vision grade, but in a different orientation, is displayed again to retest the user. The preset numbers associated with different target vision grades can be the same or different; optionally, the preset number associated with a higher vision grade can be set greater than or equal to that of a lower vision grade. For example, grade 5.0 may be associated with 5 trials, 4.0 with 4, 3.0 with 3, 2.0 with 2, and 1.0 with 1.
If the number of display orientation updates is greater than or equal to the preset number, the final detection result corresponding to the target vision grade is determined from the several sub-detection results. For example, it may be required that every sub-detection result passes before the user's vision is judged to reach the target vision grade; otherwise, the grade is judged not reached.
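A sketch of this repeat-and-aggregate rule, using the example trial counts above (run_trial is an assumed callback that displays one freshly oriented character and returns its sub-result):

```python
PRESET_TRIALS = {5.0: 5, 4.0: 4, 3.0: 3, 2.0: 2, 1.0: 1}   # example counts

def final_result(grade, run_trial) -> bool:
    """A grade is reached only if every sub-detection result passes."""
    for _ in range(PRESET_TRIALS.get(grade, 3)):   # default count is assumed
        if not run_trial(grade):
            return False
    return True
```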
As an alternative embodiment, testing the same target vision grade several times by displaying several test characters in different orientations requires the user to complete several separate test tasks, which makes the test lengthy. To address this, after the test character is displayed on the first display screen, its display orientation can be updated dynamically; the position change trajectory of the gaze point during the dynamic update is then determined from the gaze point detection result, and if the gaze trajectory matches the orientation changes, the user's vision is judged to have reached the target vision grade; otherwise, it is judged not to have reached that grade.
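One assumed way to express this trajectory matching is to check, at each orientation step, whether the gaze offset from the character center points toward the current opening (screen coordinates with y growing downward are assumed, as in the earlier sketches):

```python
DIRECTION_OF = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def trajectory_matches(orientations, gaze_samples, center, min_ratio=0.8):
    """orientations[i] is the orientation shown when gaze_samples[i] was taken."""
    if not orientations:
        return False
    cx, cy = center
    hits = 0
    for ori, (gx, gy) in zip(orientations, gaze_samples):
        dx, dy = DIRECTION_OF[ori]
        if (gx - cx) * dx + (gy - cy) * dy > 0:   # gaze moved toward the opening
            hits += 1
    return hits / len(orientations) >= min_ratio  # tolerance ratio is assumed
```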
In this embodiment, whether the user passes the test for the target vision grade can be determined from the gaze point position. Compared with obtaining the user's observation by voice, this avoids recognition errors caused by external audio interference, improving both the efficiency and the accuracy of the vision test. Compared with gesture recognition, it spares the user from making large limb movements in public places. Moreover, because the observation is confirmed with the headset's built-in gaze point detection function, no additional motion-sensing device is needed, which reduces the production cost of the equipment.
In daily life, vision testing is required in many situations. For a driver's license examination, for example, the relevant authority must determine from the applicant's hospital physical examination report whether their vision meets the requirements. Likewise, when recruiting for positions such as pilots and flight attendants, candidates' vision must be tested to determine whether it meets the job requirements. Current practice requires the person concerned to visit a hospital or another designated testing institution for a vision test, after which the result is fed back to the relevant authority, making the vision testing procedure very cumbersome.
Therefore, to simplify the procedure and make vision testing more convenient, this embodiment further provides a method for displaying a vision test image implemented on a vision testing system.
Referring to FIG. 6, in yet another alternative embodiment, the method for displaying a vision test image further includes the following steps S70 to S80:
S70: generating a vision test result, and determining the user identity according to biometric features acquired by the head-mounted display device;
S80: storing the user identity and the vision test result in association with each other.
In this embodiment, the vision test result for the user may be determined from the detection results for the individual target vision grades, and may include a left-eye result and/or a right-eye result.
This embodiment provides a vision testing system comprising a headset side, a server side and a user side. The headset side displays the test characters, the server side processes and relays data, and the user side controls the headset to perform the vision test. The three sides communicate over the Internet.
In an alternative embodiment, the user side may be configured as a supervision and audit terminal. After the vision test procedure starts, the headset side can acquire the subject's biometric features with its on-board biometric acquisition device. The biometric features include, but are not limited to, one or more of bone echo features, iris features and facial features. The acquired features are then sent to the user side to determine the identity of the subject under test.
After the vision test procedure finishes, the vision test result can be determined and sent to the user side, so that the user side stores the vision test result in association with the identity information.
During the vision test, the user side or the server determines the target vision grade and sends it to the headset, which executes the subsequent procedure; alternatively, the headset may determine the target vision grade itself.
Referring to FIG. 7, in another alternative embodiment, the user side is configured as a data viewing terminal; after the server controls the headset to perform the vision test, the vision test result is stored in association with the user information. Optionally, a timestamp recording when the vision test result was received can be attached during storage, and the timeliness of the result judged from that timestamp, so that the user side can later view the corresponding detection result.
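As an assumed sketch of this associated storage, the server can key each vision test result by user identity and stamp it on receipt; the file-based store and field names are illustrative only:

```python
import json
import time

def store_result(user_id: str, result: dict, path: str = "vision_results.json"):
    """Associate the vision test result with the user identity, with a timestamp."""
    try:
        with open(path, encoding="utf-8") as f:
            db = json.load(f)
    except FileNotFoundError:
        db = {}
    db[user_id] = {"result": result, "received_at": time.time()}
    with open(path, "w", encoding="utf-8") as f:
        json.dump(db, f)

def is_fresh(record: dict, max_age_days: int = 180) -> bool:
    """Judge the timeliness of a stored result from its receipt timestamp."""
    return time.time() - record["received_at"] < max_age_days * 86400
```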
Illustratively, the head-mounted device is provided with a high-definition display screen, sensors (for eye tracking), and user input devices (such as buttons or a touch screen). After receiving a service instruction from the server, it can display characters of different sizes and orientations as instructed, and record the user's recognition of the displayed characters, for example the observation the user enters by adjusting the gaze point position, based on the collected eye movement data. The user's recognition results and the sensor data are then transmitted to the server wirelessly (e.g. over Wi-Fi or Bluetooth).
The server handles large volumes of data and concurrent requests. It receives data from the head-mounted device, including the user's recognition results and either vision test results or sensor data. If sensor data is received, it is analyzed with a preset algorithm for quantities such as recognition accuracy and response time, from which the vision grade is calculated. The raw data and the calculated results are then stored in a database for later analysis and queries; the server can also be configured to send processed data (e.g. vision test results) to the user side.
The user side (a doctor's terminal) is a device supporting Web or mobile applications. A user can log into the system with an account and password, review a particular subject's vision test results, including historical data and trend analysis, and generate vision test reports containing diagnostic comments and advice based on the results. Reports can also be exported to PDF or other formats for convenient printing and sharing.
Optionally, during data interaction the head-mounted device communicates with the server wirelessly, sending the user's recognition results and sensor data and receiving instructions and configuration information from the server. The server communicates with the user side through a Web API or the mobile application interface, sending vision test results and reports and receiving the user side's requests and data.
During data transmission, the HTTPS protocol can be used for encrypted transport to keep the data secure. The user's sensitive information is likewise stored encrypted, to prevent data leakage.
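A minimal upload sketch over HTTPS might look as follows; the endpoint URL and payload fields are hypothetical, not part of the application:

```python
import requests

def upload_result(user_id: str, result: dict) -> dict:
    """Post a vision test result to the server over HTTPS (encrypted in transit)."""
    resp = requests.post(
        "https://example.com/api/vision-results",   # hypothetical endpoint
        json={"user_id": user_id, "result": result},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```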
Optionally, the system may also provide an appointment management function that lets doctors book subjects online for vision tests; an intelligent recommendation function that suggests a suitable spectacle prescription or treatment plan from the vision test result and the user's historical data; a data analysis function that statistically analyzes the vision test data of large numbers of users, supporting ophthalmological research and policy-making; and multi-language interfaces and report generation, meeting the needs of different countries and regions.
The present application provides a vision testing device, including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the method for displaying a vision test image in the first embodiment.
By adopting the method for displaying a vision test image of the above embodiment, the vision testing device provided by the application can solve the technical problem that highlighting introduces confounding factors into the vision test and makes the detection result inaccurate. Its beneficial effects are the same as those of the method for displaying a vision test image provided by the above embodiment, and the other technical features of the device are the same as those disclosed by the method of that embodiment, which are not repeated here.
It is to be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the description of the above embodiments, particular features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples.
The present application provides a computer-readable storage medium having computer-readable program instructions (i.e., a computer program) stored thereon for performing the method for displaying a vision test image in the above embodiments.
The computer-readable storage medium provided by the present application may be, for example, a USB flash drive, or any electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this embodiment, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system or device. Program code embodied on a computer-readable storage medium may be transmitted using any appropriate medium, including but not limited to wire, fiber-optic cable, radio frequency (RF), or any suitable combination of the foregoing.
The above computer-readable storage medium may be built into the vision testing device, or may exist separately without being assembled into the vision testing device.
The computer-readable storage medium carries one or more programs that, when executed by the vision testing device, cause the vision testing device to: determine a target vision grade and the corresponding display size of the test character, determine a display orientation, and control the head-mounted display device to display one test character on the first display screen according to the display size and the display orientation, as described in the embodiments above.
Computer program code for carrying out the operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, it may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments of the present application may be implemented in software or in hardware. The name of a module does not, in some cases, limit the module itself.
The readable storage medium provided by the application is a computer-readable storage medium storing computer-readable program instructions (i.e., a computer program) for executing the above method for displaying a vision test image, and it can solve the technical problem that highlighting introduces confounding factors into the vision test and makes the detection result inaccurate. Its beneficial effects are the same as those of the method for displaying a vision test image provided by the above embodiment, and are not repeated here.
An embodiment of the application provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the method for displaying a vision test image as described above.
The computer program product provided by the application can solve the technical problem that highlighting introduces confounding factors into the vision test and makes the detection result inaccurate. Its beneficial effects are the same as those of the method for displaying a vision test image provided by the above embodiment, and are not repeated here.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element introduced by the phrase "comprising a/an …" does not exclude the presence of other identical elements in the process, method, article, or system that comprises it.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by hardware, although in many cases the former is the preferred implementation.
The foregoing description is only of the preferred embodiments of the present application, and is not intended to limit the scope of the application, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202411046040.XA | 2024-07-31 | 2024-07-31 | Vision detection image display method, vision detection device and storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN118975773A | 2024-11-19 |
Family
ID=93446671
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202411046040.XA | Vision detection image display method, vision detection device and storage medium | 2024-07-31 | 2024-07-31 |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN118975773A (en) |
Similar Documents
| Publication | Title |
|---|---|
| US20230055308A1 | Digital visual acuity eye examination for remote physician assessment |
| US11659990B2 | Shape discrimination vision assessment and tracking system |
| JP4786119B2 | Optometry system, optometry apparatus, program and recording medium thereof, and standardization method |
| US20200073476A1 | Systems and methods for determining defects in visual field of a user |
| US20220160223A1 | Methods and Systems for Evaluating Vision Acuity and/or Conducting Visual Field Tests in a Head-Mounted Vision Device |
| CN115944266A | Visual function determination method and device based on eye movement tracking technology |
| KR102208508B1 | Systems and methods for performing complex ophthalmic tratment |
| CN118975773A | Vision detection image display method, vision detection device and storage medium |
| WO2022115860A1 | Methods and systems for evaluating vision acuity and/or conducting visual field tests in a head-mounted vision device |
| JP6019721B2 | Objective displacement measuring apparatus and objective displacement measuring method |
| US20230181029A1 | Method and device for determining at least one astigmatic effect of at least one eye |
| US20210386287A1 | Determining refraction using eccentricity in a vision screening system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |