
WO2022247482A1 - Virtual display device and virtual display method - Google Patents

Virtual display device and virtual display method

Info

Publication number
WO2022247482A1
WO2022247482A1 (PCT/CN2022/085632)
Authority
WO
WIPO (PCT)
Prior art keywords
user
virtual
depth
optical
image
Prior art date
Application number
PCT/CN2022/085632
Other languages
French (fr)
Chinese (zh)
Inventor
朱帅帅
毛春静
熊宇辰
王实现
杨林林
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2022247482A1

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B27/0172Head mounted characterised by optical features
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/366Image reproducers using viewer tracking
    • H04N13/383Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B2027/0178Eyeglass type

Definitions

  • the embodiments of the present application relate to the field of artificial intelligence, and in particular to a virtual display device and a virtual display method.
  • the virtual display device further includes: a first eye-tracking module, which is used to perform eye tracking on the first human eye under the control of the processor.
  • the processor is used to control the first eye-tracking module to perform eye tracking on the gaze point of the first human eye, and when the vergence depth of the gaze point of the first human eye differs, the first optical display module is configured to adjust the imaging-plane depth accordingly. Based on this solution, a specific scheme for determining the user's vergence depth is provided.
  • the virtual display device can track the user's eyes through the eye-tracking module, so as to obtain some data of the user's eyes during observation.
  • the third object displayed at the first virtual location may be the same as the third object displayed at the second virtual location.
  • the object displayed at the first virtual position may be different from the object displayed at the second virtual position.
  • the virtual display device can also adjust the matching between the accommodation depth and the vergence depth in the corresponding scene according to the solution in the foregoing example, so that the VAC problem is avoided while the ciliary muscles are exercised.
  • the virtual display device displays the third object at the first virtual position and the second virtual position respectively when the currently displayed scene is a preset scene.
  • the preset scene includes at least one of the following scenes: an advertisement playing scene, and a display resource loading scene.
  • the virtual display device can display objects of different depths to the user in some specific scenes. These specific scenes may be idle display scenes during the user's use of the virtual display function. For example, when an advertisement is played before the display content is presented to the user, the virtual display device can, in combination with the advertisement content, display images to the user's eyes at different depths, thereby avoiding visual fatigue.
  • the virtual display device can display an astigmatism measurement chart to the user and determine the user's astigmatism axis according to the marker (such as the corresponding number) of the direction that the user's feedback indicates cannot be seen clearly, and then determine the degree of astigmatism of the user's eye from the astigmatism axis.
  • the virtual display device stores correspondences between different user characteristics and corresponding virtual display information.
  • the processor is used to search the correspondences for a matching entry according to the user characteristics of the current user and, if a matching entry exists, determine that the virtual display information corresponding to the current user is the virtual display information stored in the matching entry.
  • the virtual display information includes the degree of myopia and/or the degree of astigmatism of the corresponding user.
  • the virtual display method further includes: the processor controls a first eye-tracking module in the virtual display device to perform eye-tracking on the first human eye.
  • the first optical display module is configured to adjust the depth of the imaging surface to be different.
  • the virtual display device further includes: a second optical display module, which is used to display images to the second human eye under the control of the processor, where the second human eye is the one of the user's eyes that is different from the first human eye.
  • the method further includes: the first optical display module realizes the following functions under the control of the processor: displaying the first object, where the imaging-plane depth when displaying the first object is a third depth, the depth of the first object in the virtual three-dimensional environment is the second depth, and the first depth is similar to or the same as the third depth.
  • the optical power at which the first optical display module presents the virtual image to the first human eye is the third optical power.
  • the third optical power corresponds to a third visual score, which is the visual score corresponding to the third number of pixels.
  • the processor is also used to acquire the user characteristics of the current user before controlling the first optical display module to display the first image, and the processor is specifically used to control the first optical display module to display a first image corresponding to the user characteristics of the current user.
  • the optical power of the first optical display module is matched with the degree of myopia and/or degree of astigmatism of the first human eye of the current user, and the degree of myopia and/or degree of astigmatism of the first human eye of the current user is indicated by the virtual display information corresponding to the user characteristics of the current user.
  • FIG. 4 is a schematic diagram of an imaging mechanism of VR glasses
  • FIG. 20 is a schematic diagram of an imaging mechanism provided by an embodiment of the present application.
  • FIG. 21 is a schematic diagram of an imaging mechanism provided by an embodiment of the present application.
  • Fig. 23 is a schematic diagram of an input mode of recognition feedback provided by an embodiment of the present application.
  • Fig. 31 is a schematic diagram of an astigmatism detection chart provided by an embodiment of the present application.
  • an electronic device that provides a virtual display function through AR, VR or MR technology is described by taking VR glasses with a VR display function as an example.
  • FIG. 4 is a schematic diagram of the VR glasses.
  • the VR glasses may include two display screens, such as a display screen 1 and a display screen 2.
  • each display screen has a display function.
  • Each display screen can be used to display corresponding content to one eye (such as left eye or right eye) of the user through a corresponding eyepiece.
  • on the display screen 1, the corresponding left-eye image in the virtual scene can be displayed.
  • the light of the left-eye image can pass through the eyepiece 1 and converge at the left eye, so that the left eye can see the left-eye image.
  • on the display screen 2, the corresponding right-eye image in the virtual scene can be displayed.
  • the light of the right-eye image can pass through the eyepiece 2 and converge at the right eye, so that the right eye can see the right-eye image.
  • the audio module is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal.
  • the audio module can also be used to encode and decode audio signals.
  • the audio module may be set in the processor 110 , or some functional modules of the audio module may be set in the processor 110 .
  • the optical display module may include an eyepiece 601 , a zoom module 602 , and a display screen 603 .
  • the zoom module and/or the display screen may also be integrated on one or more components, instead of being set independently.
  • the zoom component can be integrated with the eyepiece in the same component (eg, eyepiece lens set).
  • the eyepiece can be integrated in the zoom module.
  • the VR glasses may also include other components (such as an eye-tracking system) for realizing eye-tracking of the user.
  • the eye-tracking system can determine the position of the user's fixation point (or determine the direction of the user's line of sight) through methods such as video eye diagram method, photodiode response method, or pupil-corneal reflection method, thereby realizing user's eye-tracking.
  • the virtual display device that executes the virtual display method is taken to be the VR glasses with the composition shown in FIG. 9, and the implementation flow of the virtual display method in this example is described on that basis.
  • the VR glasses can use the X and Y coordinates (such as (X1, Y1)) of the intersection point of the projections of the user's two lines of sight on the XOY plane as the projection coordinates of the user's current gaze point on the XOY plane.
  • the VR glasses can also determine the height of the fixation point in the virtual three-dimensional environment according to the Z coordinate of the left-eye line of sight at the (X1, Y1) position (such as Z_L) and the Z coordinate of the right-eye line of sight at the (X1, Y1) position (such as Z_R).
  • the Z coordinate of the gaze point can then be determined from Z_L and Z_R (for example, as their mean). In this way, the coordinates of the gaze point in the virtual 3D environment can be determined as (X1, Y1, Z), where Z is the determined Z coordinate (a code sketch of this gaze-point computation is given after this list).
  • the fixation point corresponding to the dominant eye may be used as the fixation point during binocular observation.
  • the virtual object 1 that is being observed at close range is used as an example.
  • the VR glasses can blur the distant view according to the depth of the virtual object 1 .
  • the user can see an image as shown in (a) in FIG. 13 .
  • the VR glasses can blur the near view, so as to show the user the image shown in (b) in Figure 13, thereby achieving the effect of protruding the distant view.
  • the virtual display device provided by the embodiment of the present application can flexibly and clearly display near objects and distant objects to the user without VAC because the zoom module can be controlled to adjust the depth of the virtual image plane.
  • the virtual display device can also provide the user with functions of relaxing and exercising the ciliary muscle.
  • the user can sequentially judge, from left to right, the opening direction of the corresponding letter with the first human eye, so that the VR glasses can judge more accurately, based on this, whether the user can see clearly at that visual score (such as the detection image corresponding to a given visual score).
  • FIG. 23 takes the user inputting the opening direction through the remote control as an example.
  • the user may input the first recognition feedback indicating the upward direction of the opening by touching a button corresponding to "up" on the remote control (button 2301 shown in (a) in FIG. 23 ).
  • the user may input an upward sliding operation on the touch screen of the remote control (such as the screen 2302 shown in (b) in FIG. 23 ), so as to input the first recognition feedback indicating that the direction of the opening is upward.
  • the VR glasses can receive the user's recognition, made with the first human eye, of the opening direction of the current first detection image.
  • the VR glasses may guide the user to recognize the first detection image and input the first recognition feedback before receiving the user's first recognition feedback.
  • the VR glasses can guide the user to recognize the first detection image and input the first recognition feedback through voice prompts, text prompts in the displayed virtual scene, or other methods (such as vibration, etc.).
  • the VR glasses can determine whether the subsequent action is performed according to whether the first recognition feedback is consistent with the actual opening direction of the first detection image.
  • if the first recognition feedback is inconsistent with the actual opening direction of the first detection image, it indicates that the first human eye cannot clearly see the first detection image currently having the first visual score. The VR glasses can then continue to execute the following S1804.
  • if the first recognition feedback is consistent with the actual opening direction of the first detection image, it indicates that the first human eye can clearly see the first detection image currently having the first visual score. The VR glasses can then continue to perform the following S1805.
  • the VR glasses can control the number of detection images displayed on the display screen, so that while the recognition ability of the user's eyes is being measured, the process is made more engaging, thereby improving the user experience.
  • the VR glasses may need to control the zoom module to reduce the virtual image distance so that the virtual image distance can match the degree of myopia of the user.
  • the VR glasses can also display the detected images to the user in gradually increasing sizes.
  • the VR glasses can also use a middle size as the starting point and use the dichotomy (binary-search) method to show the user detection images that are larger and smaller than the starting middle size, thereby determining the user's eyesight condition and adjusting the position of the virtual image plane accordingly.
  • the VR glasses can control the zoom module to adjust the virtual image plane to the -6D position, and fine-tune it between -6D and -6.5D to obtain, as the tuning result, the position at which the imaging clarity of red light and green light on the retina is consistent.
  • the VR glasses can control the zoom module to adjust the virtual image plane to the -5D position, and then fine-tune it between -5D and -6D to obtain, as the tuning result, the position at which red light and green light image with the same clarity on the retina.
  • the virtual display device can adjust the axial position of the incident light according to the degree of astigmatism of the human eye so as to avoid unclear imaging caused by astigmatism.
  • the virtual display device provided in the embodiments of the present application can also be used to measure the degree of astigmatism.
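
As a concrete illustration of the gaze-point determination sketched in the list above (the intersection of the two eyes' lines of sight, with the Z coordinate derived from both sight lines), the following Python sketch estimates the 3D fixation point from two sight-line rays. It is a minimal, hedged example rather than the patented implementation: the ray representation, the midpoint-of-closest-approach choice, and the function name are assumptions made for illustration.

```python
import numpy as np

def gaze_point_from_sight_lines(left_origin, left_dir, right_origin, right_dir):
    """Estimate the 3D gaze (fixation) point from the two eyes' lines of sight.

    Each line of sight is a ray: origin at the eye position, direction vector
    pointing into the virtual scene. Because the two rays rarely intersect
    exactly, the gaze point is taken as the midpoint of the shortest segment
    connecting them (an illustrative choice).
    """
    p1, d1 = np.asarray(left_origin, float), np.asarray(left_dir, float)
    p2, d2 = np.asarray(right_origin, float), np.asarray(right_dir, float)
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)

    # Closest points p1 + t1*d1 and p2 + t2*d2 between the two sight lines.
    r = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ r, d2 @ r
    denom = a * c - b * b
    if abs(denom) < 1e-9:                 # (nearly) parallel sight lines
        t1, t2 = 0.0, e / c
    else:
        t1 = (b * e - c * d) / denom
        t2 = (a * e - b * d) / denom
    return (p1 + t1 * d1 + p2 + t2 * d2) / 2.0

# Example: eyes 63 mm apart, both sight lines converging about 1 m ahead.
point = gaze_point_from_sight_lines([-0.0315, 0, 0], [0.0315, 0, 1.0],
                                    [0.0315, 0, 0], [-0.0315, 0, 1.0])
```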

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)

Abstract

Disclosed in embodiments of the present application are a virtual display device and a virtual display method, capable of avoiding problems such as vergence-accommodation conflict caused in a process of providing a virtual display function and thus avoiding visual fatigue of human eyes caused in the process of providing the virtual display function to a user. The specific solution is: a first optical display module is configured to implement, under the control of a processor, the following functions: displaying a first object, the vergence depth of the first object being a first vergence depth, and an imaging surface depth of the first optical display module being a first depth; and displaying a second object, the vergence depth of the second object being a second vergence depth, and the imaging surface depth of the first optical display module being a second depth. When the first vergence depth and the second vergence depth are different, the optical display module makes adjustment to make the first depth and the second depth different from each other.

Description

Virtual display device and virtual display method
This application claims priority to the Chinese patent application No. 202110587591.7, filed with the State Intellectual Property Office on May 27, 2021 and entitled "Virtual Display Device and Virtual Display Method", the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of the present application relate to the field of artificial intelligence, and in particular to a virtual display device and a virtual display method.
Background
Display technologies based on augmented reality (AR), virtual reality (VR) or mixed reality (MR) can provide users with a display solution that is close to the experience of a real scene, and are therefore attracting wide attention.
However, current solutions for providing AR, VR or MR displays (that is, displays based on AR, VR or MR technologies) are not yet perfect, which impairs the user experience.
Summary
The virtual display device and the virtual display method provided by the embodiments of the present application can avoid problems such as vergence-accommodation conflict (VAC) that arise in the process of providing a virtual display function, thereby avoiding the visual fatigue of human eyes caused while the virtual display function is provided to a user. Through this solution, the optical power used when presenting images to the user can also be adjusted according to the ability of the user's eyes to recognize graphics, so that even if the user's eyes suffer from problems such as myopia or astigmatism, the user can still use the virtual display function provided by the virtual display device with the naked eye.
In order to achieve the above purpose, the embodiments of the present application adopt the following technical solutions:
In a first aspect, a virtual display device is provided. The virtual display device is used to provide a user with a display function of a virtual three-dimensional environment. The virtual display device includes:
a processor and a first optical display module. The first optical display module is used to display images to a first human eye under the control of the processor, where the first human eye is either one of the user's two eyes. The first optical display module is used to realize the following functions under the control of the processor: displaying a first object, where the vergence depth of the first object is a first vergence depth and the imaging-plane depth of the first optical display module is a first depth; and displaying a second object, where the vergence depth of the second object is a second vergence depth and the imaging-plane depth of the first optical display module is a second depth. The first object and the second object are included in the virtual three-dimensional environment. When the first vergence depth and the second vergence depth are different, the optical display module adjusts the first depth to be different from the second depth.
Based on this solution, an example of a virtual display device is provided. In this example, when displaying an image to the user's eye, the virtual display device can adjust the virtual image plane of the display to a depth that is close to or the same as the vergence depth at which the user observes the virtual object. In this way, in the process of providing the virtual display function to the user, the accommodation (focusing) depth can be matched to the vergence depth (that is, the depth of the first virtual image plane is close to or the same as the first vergence depth), thereby avoiding the VAC problem caused by the difference between the accommodation depth and the vergence depth. It should be noted that the virtual three-dimensional environment in this example may refer to an entirely virtual environment constructed by the virtual display device from display data. In some other embodiments, the virtual three-dimensional environment may also include objects from the real environment. For example, the virtual display device may collect information about objects in the real environment in real time and present the real environment corresponding to this information to the user through the display screen. That is to say, in this scenario, the virtual three-dimensional environment may include some objects from the real environment. In some implementations of this scenario, the virtual three-dimensional environment may also include some fictitious objects. It should be noted that, in this example, the virtual display device may determine that the depth of the first virtual image plane is close to the first vergence depth when the difference between them does not exceed 5%. Of course, the above 5% threshold can be adjusted flexibly, for example to 6% or 8%.
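
As an illustration of the depth-matching behaviour described above, the following minimal Python sketch checks whether the imaging-plane depth is "close to" the vergence depth using the 5% criterion mentioned in this example and, if not, retargets the imaging plane. The `zoom_module.set_imaging_depth()` interface and the function names are hypothetical stand-ins, not the device's actual API.

```python
def depths_match(imaging_depth_m: float, vergence_depth_m: float,
                 tolerance: float = 0.05) -> bool:
    """True if the imaging-plane depth is 'close to' the vergence depth.

    Follows the criterion in this example: the relative difference between the
    two depths does not exceed 5% (the threshold is configurable, e.g. 6% or 8%).
    """
    return abs(imaging_depth_m - vergence_depth_m) <= tolerance * vergence_depth_m

def update_imaging_plane(zoom_module, vergence_depth_m: float,
                         current_imaging_depth_m: float) -> float:
    """Retarget the imaging plane whenever it no longer matches the vergence depth.

    `zoom_module.set_imaging_depth()` is a hypothetical control interface; the
    real device adjusts optical power, as described in the later designs.
    """
    if not depths_match(current_imaging_depth_m, vergence_depth_m):
        zoom_module.set_imaging_depth(vergence_depth_m)
        return vergence_depth_m
    return current_imaging_depth_m
```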
In a possible design, when the first vergence depth is greater than the second vergence depth, the first depth is greater than the second depth; when the first vergence depth is smaller than the second vergence depth, the first depth is smaller than the second depth. Based on this solution, this example provides the variation relationship between the vergence depth and the accommodation depth (such as the first depth and the second depth). In this example, the smaller the vergence depth of the first object to be displayed, the smaller the corresponding accommodation depth; similarly, the larger the vergence depth of the object to be displayed, the larger the corresponding accommodation depth. In this way, the accommodation depth can be made close to or matched with the vergence depth, thereby alleviating or solving the VAC problem.
In a possible design, the virtual display device further includes a first eye-tracking module, which is used to perform eye tracking on the first human eye under the control of the processor. The processor is used to control the first eye-tracking module to track the gaze point of the first human eye, and when the vergence depth of the gaze point of the first human eye differs, the first optical display module is configured to adjust the imaging-plane depth accordingly. Based on this solution, a specific scheme for determining the user's vergence depth is provided. In this example, the virtual display device can perform eye tracking on the user's eyes through the eye-tracking module, so as to obtain data about the user's eyes during observation, such as the gaze point and the lines of sight of the eyes. From these data, the virtual display device can determine the angle between the two eyes' lines of sight when the user observes an object in the current virtual three-dimensional environment, and thereby obtain the current vergence depth.
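
The vergence depth mentioned above can be derived from the angle between the two eyes' lines of sight. The following sketch assumes the simplest symmetric-fixation geometry (an isosceles triangle formed by the two pupils and the fixation point); the inter-pupillary distance in the example is an illustrative value, not one taken from the patent.

```python
import math

def convergence_depth(ipd_m: float, vergence_angle_rad: float) -> float:
    """Estimate the vergence (convergence) depth from the inter-pupillary
    distance and the angle between the two eyes' lines of sight.

    Assumes symmetric fixation straight ahead, so the two pupils and the
    fixation point form an isosceles triangle:
        depth = (IPD / 2) / tan(vergence_angle / 2)
    """
    return (ipd_m / 2.0) / math.tan(vergence_angle_rad / 2.0)

# Example (illustrative numbers): a 63 mm IPD and a 3.6 degree vergence angle
# put the fixation point at roughly 1 metre.
depth_m = convergence_depth(0.063, math.radians(3.6))
```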
In a possible design, when the user's diopter differs, the first optical display module is configured to adjust the imaging-plane depth accordingly. Based on this solution, another implementation of this example is provided. In this implementation, if users have different diopters, the first optical display module can provide each user with an imaging depth corresponding to that diopter, so that even if the user's eyes suffer from refractive errors, the user can still use the virtual display function provided by the virtual display device with the naked eye. In some embodiments, the user's diopter may include at least one of the following: a degree of myopia, a degree of hyperopia, and a degree of astigmatism.
In a possible design, the virtual display device further includes a second optical display module, which is used to display images to a second human eye under the control of the processor, where the second human eye is the one of the user's two eyes that is different from the first human eye. The first optical display module is used to realize the following functions under the control of the processor: displaying the first object, where the imaging-plane depth when displaying the first object is a third depth, the depth of the first object in the virtual three-dimensional environment is the second depth, and the first depth is close to or the same as the third depth. Based on this solution, a display scheme for the other of the user's two eyes is provided. In this example, the virtual display device can, through the second optical display module, provide the user's second eye with a display whose accommodation depth matches the vergence depth. It should be noted that, since the user's vergence depth is produced when observing an object with both eyes, the solution provided in the first aspect implies that the corresponding vergence depth can be the same while the virtual display device displays images to the user's two eyes.
In a possible design, the first optical module includes a first zoom module whose optical power is adjustable. The first zoom module is used to, under the control of the processor, adjust its optical power to a first optical power when displaying the first object, so that the first optical module with the first optical power can image at the first depth; or the first zoom module is used to, under the control of the processor, adjust its optical power to a second optical power when displaying the second object, so that the first optical module with the second optical power can image at the second depth. Based on this solution, a specific example of adjusting the depth of the virtual image plane is provided. In this example, the optical module responsible for presenting images to the user's eye (such as the first optical module corresponding to the first human eye) may include a zoom module capable of adjusting optical power. By adjusting the optical power of the zoom module, the optical power of the virtual display device can be adjusted. It can be understood that different optical powers correspond to different depths of the virtual image plane; therefore, adjusting the optical power realizes adjustment of the accommodation depth. For example, the optical power can be adjusted to a certain value so that the depth of the virtual image plane matches the vergence depth, thereby solving the VAC problem.
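
The correspondence between optical power and virtual-image-plane depth can be illustrated with a simplified vergence relation: light from a virtual image at distance d metres reaches the eye with a vergence of about -1/d diopters. The sketch below uses that relation to pick a power offset for a target depth; the actual mapping depends on the concrete optical design of the eyepiece and zoom module, and `set_power_offset()` is a hypothetical interface.

```python
def required_power_offset(target_depth_m: float) -> float:
    """Focal-power offset (in diopters) that places the virtual image plane at
    the target depth, relative to imaging at infinity.

    Simplified relation: light from a virtual image at distance d metres has a
    vergence of about -1/d diopters at the eye. The true mapping depends on the
    eyepiece focal length and screen position, so treat this as illustrative.
    """
    return -1.0 / target_depth_m

def set_zoom_for_object(zoom_module, vergence_depth_m: float) -> None:
    """Drive the adjustable-power zoom module so the imaging plane sits at
    approximately the object's vergence depth.

    `set_power_offset()` is a hypothetical control interface.
    """
    zoom_module.set_power_offset(required_power_offset(vergence_depth_m))
```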
In a possible design, the first optical display module is further used to display a third object at a first virtual position and a second virtual position in the virtual three-dimensional environment under the control of the processor, where the first virtual position and the second virtual position are at different distances from the user's eyes in the three-dimensional environment. Based on this solution, another usage mechanism of the virtual display device provided in the embodiments of the present application is given. In this example, the virtual display device can present objects at different depths to the user. In this way, when the user's eyes observe objects at different depths, the ciliary muscles are naturally driven to contract and relax, which exercises the ciliary muscles, helps avoid visual fatigue and thus helps prevent myopia. In some embodiments, the third object displayed at the first virtual position may be the same as the third object displayed at the second virtual position; in other embodiments, the object displayed at the first virtual position may be different from the object displayed at the second virtual position. In the process of displaying images at different depths to the user's eyes, the virtual display device can also, according to the solution in the foregoing example, adjust the matching between the accommodation depth and the vergence depth in the corresponding scene, so that the VAC problem is avoided while the ciliary muscles are exercised.
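
A minimal sketch of the ciliary-muscle exercise described above is given below: the same object is presented alternately at a near and a far virtual position. `display.show_at_depth()` is a hypothetical stand-in for the first optical display module, and the depth and timing values are illustrative; per the preceding design, the imaging-plane depth would also be adjusted to track each position so the exercise itself does not introduce VAC.

```python
import itertools
import time

def ciliary_exercise(display, obj, near_depth_m=0.5, far_depth_m=5.0,
                     dwell_s=3.0, cycles=5):
    """Alternately present the same object at a near and a far virtual position.

    `display.show_at_depth(obj, depth)` is a hypothetical interface standing in
    for the first optical display module.
    """
    for depth in itertools.islice(itertools.cycle((near_depth_m, far_depth_m)),
                                  2 * cycles):
        display.show_at_depth(obj, depth)
        time.sleep(dwell_s)
```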
In a possible design, the virtual display device displays the third object at the first virtual position and the second virtual position respectively when the currently displayed scene is a preset scene, where the preset scene includes at least one of the following scenes: an advertisement playback scene and a display-resource loading scene. Based on this solution, examples of possible scenes for ciliary-muscle exercise are provided. In this example, the virtual display device can present objects at different depths to the user in some specific scenes. These specific scenes may be idle display scenes during the user's use of the virtual display function. For example, when an advertisement is played before the display content is presented to the user, the virtual display device can, in combination with the advertisement content, present images to the user's eyes at different depths, thereby avoiding visual fatigue. As another example, during the loading of display resources such as videos or images, the virtual display device can present images at different depths to the user in combination with the content of the display resource to be displayed (or other preset content), thereby exercising the ciliary muscles and avoiding visual fatigue. In some embodiments of the present application, when presenting different options to the user, the virtual display device may also display the individual controls at different depths, so that the user sees objects at different depths while judging the options and making a choice, thereby avoiding visual fatigue.
In a possible design, the first optical display module is further used to present a first detection image to the first human eye under the control of the processor. When the first detection image is displayed, the number of pixels used to display the opening of the first detection image is a first number. The processor is further used to receive first recognition feedback, which is an indication input by the user when observing the first detection image with the first human eye and which indicates whether the user can recognize the opening direction of the first detection image. Based on this solution, an example scheme for judging the ability of the user's eye to recognize images is provided. It can be understood that the virtual display device can display a detection image in the virtual three-dimensional environment and guide the user to observe it, thereby determining the user's image recognition ability from the feedback input by the user. In some examples, the user's image recognition ability may include the degree of myopia of the user's eye. In some embodiments, the detection image is taken as a letter provided with an opening. In order to judge the degree of myopia of the eye accurately, the virtual display device can determine, according to the visual score corresponding to each pixel of the display screen, the number of pixels corresponding to the visual score to be presented to the user, and thereby determine the size of the letter to be displayed. In this way, from whether the user can recognize the opening direction of a letter image whose opening occupies the above number of pixels, the degree of myopia of the eye can be judged accurately. It can be understood that the smaller the number of opening pixels the user can recognize, the better the user's eyesight and the lower the corresponding degree of myopia.
In a possible design, when the first number of pixels is used to display the opening of the first detection image to the user, the visual score at which the first human eye observes the opening of the first detection image is a first visual score. When the first recognition feedback indicates that the user cannot recognize the opening direction of the first detection image, the processor is used to determine the degree of myopia of the first human eye as the degree of myopia corresponding to the first visual score. Based on this solution, a specific example of judging the user's degree of myopia is provided. It can be understood that, when measuring the degree of myopia, the size of the smallest letter that can be seen clearly corresponds to the user's degree of myopia, and different letter sizes correspond to different visual scores for an eye at a fixed relative position. Therefore, in this example, the virtual display device can determine the user's degree of myopia according to the visual score corresponding to the smallest-pixel opening that the user can recognize. For example, when the user determines that the opening direction of the current letter cannot be recognized, the user can input feedback indicating this. Based on this feedback, the virtual display device knows that the user cannot recognize an opening of the current number of pixels; since that opening corresponds to a certain visual score, the virtual display device can determine the user's degree of myopia with reference to that visual score. It should be noted that, in this example, the first recognition feedback may be input to the virtual display device by the user through a remote control, voice, gestures, or the like. In some other embodiments, the virtual display device may also determine that the user cannot recognize the opening direction of the letter of the current size when the user fails to input correct recognition feedback within a preset duration. In addition, in some embodiments, in order to measure the degree of myopia of the user's eye more accurately, after the user inputs feedback that the current image cannot be recognized, the virtual display device may present another detection image of a size close to or the same as the current one for the user to recognize. If the user repeatedly fails to recognize detection images of the current size, the virtual display device can accurately determine that the user cannot recognize the current detection image.
In a possible design, when the first recognition feedback indicates that the user can recognize the opening direction of the first detection image, the first optical display module is further used to present a second detection image to the first human eye under the control of the processor. When the second detection image is displayed, the number of pixels used to display the opening of the second detection image is a second number, and the second number is smaller than the first number. The processor is further used to receive second recognition feedback, which is an indication input by the user when observing the second detection image with the first human eye and which indicates whether the user can recognize the opening direction of the second detection image. Based on this solution, another example of determining the user's ability to recognize graphics is provided. In this example, when the user can recognize the first detection image, the virtual display device can present to the user another detection image (such as the second detection image) of a size different from that of the first detection image. In some embodiments, the second detection image may be smaller than the first detection image. In combination with the foregoing solution, the virtual display device can display a second detection image whose opening has the number of pixels corresponding to a smaller visual score. In this way, the eye is asked to recognize a letter image with a smaller opening, which allows the virtual display device to measure the recognition ability of the eye more accurately.
In a possible design, when the second number of pixels is used to display the opening of the second detection image to the user, the visual score at which the first human eye observes the opening of the second detection image is a second visual score. When the second recognition feedback indicates that the user cannot recognize the opening direction of the second detection image, the processor is used to determine the degree of myopia of the first human eye as the degree of myopia corresponding to the second visual score. Based on this solution, an example scheme for determining the degree of myopia from the second detection image is provided. In this example, the virtual display device can determine the user's degree of myopia from the visual score corresponding to the smallest detection image that the user can recognize.
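
The detection-image flow in the preceding designs can be illustrated with a short sketch that (a) converts a target visual score into the pixel width of the optotype opening and (b) walks through progressively smaller openings until the user's recognition feedback fails. The per-pixel angular size, the acuity ladder and the `show_and_ask()` callback are assumptions made for illustration, and the gap-subtends-1/V-arcminutes rule is the standard optotype convention rather than a value taken from the patent.

```python
import random

ARCMIN_PER_PIXEL = 0.75   # assumed angular size of one display pixel seen from the eye

def gap_pixels_for_acuity(decimal_acuity: float) -> int:
    """Pixel width of the optotype opening ("gap") for a given decimal visual acuity.

    Standard optotype convention (an assumption here): decimal acuity V corresponds
    to a gap subtending 1/V arcminutes, so the pixel count is that angle divided by
    the per-pixel angular size.
    """
    gap_arcmin = 1.0 / decimal_acuity
    return max(1, round(gap_arcmin / ARCMIN_PER_PIXEL))

def measure_acuity(show_and_ask, acuities=(0.1, 0.2, 0.3, 0.5, 0.8, 1.0, 1.2, 1.5)):
    """Walk through increasingly demanding acuity levels until recognition fails.

    `show_and_ask(gap_px, direction)` is a hypothetical callback: it displays a
    detection image whose opening is `gap_px` pixels wide and faces `direction`,
    then returns the direction reported by the user. Returns the highest acuity
    the user recognized correctly, or None if even the largest image failed.
    """
    best = None
    for acuity in acuities:
        direction = random.choice(["up", "down", "left", "right"])
        answer = show_and_ask(gap_pixels_for_acuity(acuity), direction)
        if answer != direction:
            break                  # user cannot recognize this size; stop here
        best = acuity
    return best
```

The embodiments also mention a dichotomy (binary-search) strategy over detection-image sizes; the linear walk above could equally be replaced by a binary search over the same acuity ladder.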
In a possible design, the first optical display module is used to, before presenting the first detection image to the first human eye, adjust its optical power to an initial optical power under the control of the processor, so that the first optical display module images at the farthest distance. Based on this solution, an example of the initial setting used while the virtual display device presents detection images to the user is provided. In this example, the virtual display device can adjust the optical power so that the virtual image visible to the eye is located at the farthest distance; the light incident on the eye is then close to parallel light. It can be understood that an eye without myopia can accurately converge parallel light onto the retina. Therefore, based on the solution provided in this example, the virtual display device can provide a normal display for eyes with normal vision, while also enabling the measurement of the degree of myopia for myopic eyes.
In a possible design, when the opening of a third detection image presented by the first optical display module to the first human eye occupies a third number of pixels, and the received recognition feedback indicates that the user cannot recognize the opening direction of this detection image, the optical power at which the first optical display module presents the virtual image to the first human eye is a second optical power. The second optical power corresponds to a third visual score, which is the visual score corresponding to the third number of pixels. Based on this solution, an example scheme for providing a naked-eye virtual display experience to myopic users is provided. It can be understood that the eye of a myopic user cannot converge a normally displayed image onto the retina; instead, the image plane converges in front of the retina, so a normally displayed image cannot be seen clearly by a myopic user with the naked eye. With the solution provided in this example, the virtual image plane used when presenting images to the user's eye can be adjusted (for example, by adjusting the optical power) according to the number of pixels of the smallest opening that the user can recognize, so that the myopic user's eye can image the adjusted virtual image plane on its retina. In this way, the myopic user can also see the display content clearly with the naked eye.
In a possible design, the first optical display module includes a rotating mechanism, which is used to rotate the first optical display module in a direction perpendicular to the optical axis under the control of the processor. When the first optical display module presents the first image to the user, the optical power direction of the first optical display module coincides with the astigmatism axis of the first human eye. Based on this solution, a display scheme that can match the eye of a user with astigmatism is provided. It can be understood that, for a user with astigmatism, the eye cannot see the displayed image clearly at certain incidence angles; that is, an eye with astigmatism can only see clearly an image whose incident light coincides with the astigmatism axis of that eye. In this example, the virtual display device can flexibly adjust the optical power direction of the first optical display module, so as to achieve the effect that the angle of the incident light coincides with the astigmatism axis of the eye. As a result, a user with astigmatism can also clearly see the image displayed by the virtual display device. In this way, even if the user's eye has astigmatism, the virtual display device can provide the user with a naked-eye viewing experience according to this solution.
In a possible design, the processor is further used to determine the degree of astigmatism of the first human eye according to the rotation of the rotating mechanism at the time when the optical power direction of the first optical display module coincides with the astigmatism axis of the first human eye. Based on this solution, an example scheme for measuring the user's degree of astigmatism is provided. It can be understood that the degree of astigmatism of the user's eye is related to the user's astigmatism axis. In this example, by adjusting the optical power direction of the system, the astigmatism axis at which the user can see the image clearly can be determined, and the user's degree of astigmatism can then be determined from that axis. In some embodiments, the virtual display device can present an astigmatism measurement chart to the user and determine the user's astigmatism axis according to the marker (such as the corresponding number) of the direction that the user's feedback indicates cannot be seen clearly, and then determine the degree of astigmatism of the user's eye from the astigmatism axis.
In a possible design, the processor is further used to acquire user characteristics of the current user before controlling the first optical display module to display the first image, and the processor is specifically used to control the first optical display module to display a first image corresponding to the user characteristics of the current user. The optical power of the first optical display module when displaying the first image matches the degree of myopia and/or the degree of astigmatism of the first human eye of the current user, and the degree of myopia and/or the degree of astigmatism of the first human eye of the current user is indicated by the virtual display information corresponding to the user characteristics of the current user. Based on this solution, an example scheme for providing a naked-eye virtual experience to a specific user is provided. In this example, since different users' eyes have different degrees of myopia and/or astigmatism, the virtual display device can identify the current user according to the user characteristics of the current user and determine the user's image recognition ability according to the stored degree of myopia and/or astigmatism corresponding to that user. The virtual display device can then adjust to the corresponding optical power to present images to the user, so that the optical power at which the images are presented matches the current user's image recognition ability. In this way, even if the user's eyes have myopia or astigmatism, the user can clearly see the images displayed by the virtual display device with the naked eye.
In a possible design, the user characteristics include any one of the following: fingerprint information of the current user, iris features of the current user, account information of the current user, or an identifier of the current user, where different users have different identifiers. Based on this solution, examples of user characteristics are provided. It can be understood that the virtual display device can distinguish different users through different user characteristics. In this example, several kinds of user characteristics are given: for instance, the user characteristics may include biometric information such as fingerprints, irises and voiceprints; the user characteristics may be the user's account information, such as a user name or a user nickname; or the user characteristics may be a preset user identifier used to distinguish different users.
In a possible design, the virtual display device stores correspondences between different user characteristics and the corresponding virtual display information. The processor is used to search the correspondences for a matching entry according to the user characteristics of the current user and, if a matching entry exists, determine that the virtual display information corresponding to the current user is the virtual display information stored in the matching entry. The virtual display information includes the degree of myopia and/or the degree of astigmatism of the corresponding user. Based on this solution, an example scheme in which the virtual display device determines the corresponding display strategy according to the user is provided. In this example, the virtual display device can identify the current user according to the user's user characteristics and then look up, in the locally stored (or cloud-stored) correspondences, the virtual display information corresponding to those user characteristics. If it can be found, display can be performed according to that virtual display information. For example, the virtual display device can adjust the optical power according to the virtual display information, so as to provide a naked-eye display experience to each of the user's eyes at the corresponding virtual image plane position.
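
A minimal sketch of the stored correspondence between user characteristics and virtual display information is shown below, using a simple in-memory table keyed by a user identifier. The field names and the choice of key are illustrative assumptions; the embodiments allow fingerprints, iris features, account information or preset identifiers as the user characteristic, and the table may equally be stored in the cloud.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class VirtualDisplayInfo:
    myopia_diopters: float            # e.g. -3.0 for 300-degree myopia
    astigmatism_diopters: float = 0.0
    astigmatism_axis_deg: float = 0.0

# Correspondence between user characteristics (here keyed by a user identifier)
# and the stored virtual display information.
_USER_TABLE: Dict[str, VirtualDisplayInfo] = {}

def lookup_display_info(user_feature: str) -> Optional[VirtualDisplayInfo]:
    """Return the stored entry matching the current user, or None if absent."""
    return _USER_TABLE.get(user_feature)

def register_user(user_feature: str, info: VirtualDisplayInfo) -> None:
    """Store the measured myopia/astigmatism so it need not be measured again."""
    _USER_TABLE[user_feature] = info
```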
In a possible design, when no entry matching the user characteristics of the current user exists in the correspondences, the first optical display module is further used to display the first image under the control of the processor, where the optical power when displaying the first image matches the degree of myopia and/or astigmatism of the first human eye of the current user, and the degree of myopia and/or astigmatism of the first human eye is measured automatically by the processor, measured under the user's instruction, or manually input by the user. Based on this solution, another scheme for determining the display strategy according to the characteristics of the current user is provided. In this example, the virtual display device can measure the current user's image recognition ability according to the solutions provided in the above examples, for example by determining the user's degree of myopia and/or degree of astigmatism, and can then present the corresponding naked-eye display experience to the user according to that ability. In some other embodiments of the present application, the measurement of the user's degree of myopia and/or astigmatism may be performed under the user's authorization or instruction. In still other embodiments, the user's degree of myopia and/or astigmatism may also be input by the user. It should be noted that, before the virtual display device measures the user's image recognition ability itself, it may check whether virtual display information corresponding to the user characteristics of the current user is stored in the locally saved or cloud-saved correspondences. If such information exists, the foregoing solution can be followed, that is, display is performed according to that virtual display information; if it does not exist, the current user's degree of myopia and/or astigmatism cannot be obtained directly, and the virtual display device can obtain it according to the solution in this example.
在一种可能的设计中,该处理器还用于存储该第一人眼的近视度数和/或散光度数与该当前用户的用户特征的对应关系。基于该方案,提供了一种更新用户的图像识别能力的存储的方案示例。在本示例中,虚拟显示设备可以在本地存储的或者来自云端的对应关系中,无法查找到与当前用户对应的表项时,确定当前用户为新用户。那么可以将获取的当前用户的图像识别能力与当前用户的用户特征的对应关系存储起来,以便于后续在向该用户提供虚拟显示时,就不需要再次测定该用户的图像识别能力。In a possible design, the processor is further configured to store a correspondence between the degree of myopia and/or the degree of astigmatism of the first human eye and the user characteristics of the current user. Based on this solution, an example of a solution for updating the storage of the user's image recognition ability is provided. In this example, the virtual display device may determine that the current user is a new user when the corresponding relationship stored locally or from the cloud cannot find an entry corresponding to the current user. Then, the obtained corresponding relationship between the image recognition ability of the current user and the user characteristics of the current user can be stored, so that the image recognition ability of the user does not need to be measured again when the virtual display is provided to the user later.
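As an illustration of how such a correspondence might be kept and consulted, the following sketch shows a minimal look-up-or-measure-and-store flow. It is only a sketch under assumptions: the entry fields, the function names (get_display_info, measure_refraction) and the use of an opaque feature key are illustrative and are not part of the original disclosure.

    # Minimal sketch, assuming a simple key-value store for the correspondence;
    # all names and fields here are hypothetical.
    from dataclasses import dataclass
    from typing import Dict

    @dataclass
    class VirtualDisplayInfo:
        myopia_diopters: float        # e.g. -3.0 for a 300-degree myope, 0.0 if none
        astigmatism_diopters: float   # 0.0 if the eye has no astigmatism
        astigmatism_axis_deg: float   # only meaningful when astigmatism_diopters != 0

    profile_table: Dict[str, VirtualDisplayInfo] = {}   # user feature -> display info

    def measure_refraction() -> VirtualDisplayInfo:
        # Placeholder for the in-device acuity / astigmatism measurement described
        # in the third and fourth aspects, or for manual input by the user.
        return VirtualDisplayInfo(0.0, 0.0, 0.0)

    def get_display_info(user_feature_key: str) -> VirtualDisplayInfo:
        """Return the stored entry for this user, measuring and storing it on a miss."""
        info = profile_table.get(user_feature_key)
        if info is None:                            # no matching entry: treat as a new user
            info = measure_refraction()             # automatic test, user-instructed, or manual
            profile_table[user_feature_key] = info  # remember it for later sessions
        return info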
In a second aspect, a virtual display device is provided, and the virtual display device may have the composition of the virtual display device provided in the first aspect. In this example, the first optical display module is further configured to display, under the control of the processor, a third object at a first virtual position and at a second virtual position in the virtual three-dimensional environment. The first virtual position and the second virtual position are at different distances from the user's eyes in the three-dimensional environment. It can be understood that, in the description of the first aspect, the effect of exercising the ciliary muscle can be achieved in combination with the technical means for solving the VAC problem. In this example, the virtual display device can also achieve the ciliary-muscle exercise effect simply by displaying an object at different depths.
In a possible design, the virtual display device displays the third object at the first virtual position and at the second virtual position respectively when the currently displayed scene is a preset scene. The preset scene includes at least one of the following scenes: an advertisement playing scene and a display-resource loading scene.
In a third aspect, a virtual display device is provided, and the virtual display device may have the composition of the virtual display device provided in the first aspect. In this example, the first optical display module is further configured to present, under the control of the processor, a first detection image to the first eye. When the first detection image is displayed, the number of pixels used to display the opening of the first detection image is a first number. The processor is further configured to receive first recognition feedback, where the first recognition feedback is an indication entered by the user while observing the first detection image with the first eye, and the first recognition feedback indicates whether the user is able to recognize the opening direction of the first detection image.
In this example, the virtual display device can display a detection image in the virtual three-dimensional environment and guide the user to observe it, so as to determine, from the feedback entered by the user, the user's ability to recognize the image. In some examples, the user's image recognition ability may include the myopia degree of the user's eye and the like. Optometry for the user is thereby realized. It should be noted that this solution can support autonomous optometry by the virtual display device, that is, determining the myopia degree of the user's eye on the device itself. In other implementations, this solution can also be used to support remote optometry.
In a possible design, when the first number of pixels is used to display the opening of the first detection image to the user, the visual score at which the first eye observes the opening of the first detection image is a first visual score. When the first recognition feedback indicates that the user cannot recognize the opening direction of the first detection image, the processor is configured to determine that the myopia degree of the first eye is the myopia degree corresponding to the first visual score.
In a possible design, when the first recognition feedback indicates that the user is able to recognize the opening direction of the first detection image, the first optical display module is further configured to present, under the control of the processor, a second detection image to the first eye. When the second detection image is displayed, the number of pixels used to display the opening of the second detection image is a second number, and the second number is smaller than the first number. The processor is further configured to receive second recognition feedback, where the second recognition feedback is an indication entered by the user while observing the second detection image with the first eye, and the second recognition feedback indicates whether the user is able to recognize the opening direction of the second detection image.
In a possible design, when the second number of pixels is used to display the opening of the second detection image to the user, the visual score at which the first eye observes the opening of the second detection image is a second visual score. When the second recognition feedback indicates that the user cannot recognize the opening direction of the second detection image, the processor is configured to determine that the myopia degree of the first eye is the myopia degree corresponding to the second visual score.
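The designs above amount to a staircase-style acuity test: the optotype opening is shrunk step by step until the user can no longer report its direction, and the myopia degree is taken from the score associated with the size at which recognition fails. The sketch below is an illustration only, under assumptions: the pixel sizes, the score-to-degree tables and the simulated user feedback are invented for the example and are not taken from the original text.

    import random

    # Hypothetical tables: opening size in pixels -> visual score, and
    # visual score -> myopia degree (in diopters); real values are device-specific.
    SCORE_BY_PIXELS = {16: 0.1, 8: 0.3, 4: 0.6, 2: 1.0}
    MYOPIA_BY_SCORE = {0.1: -4.0, 0.3: -2.5, 0.6: -1.0, 1.0: 0.0}
    DIRECTIONS = ["up", "down", "left", "right"]

    def show_detection_image(opening_pixels: int) -> str:
        """Render a detection image whose opening uses the given number of pixels
        and return the true opening direction (the actual display call is omitted)."""
        return random.choice(DIRECTIONS)

    def recognition_feedback(truth: str, opening_pixels: int, eye_limit_px: int = 6) -> str:
        """Stand-in for the user's input: the simulated eye only resolves openings
        at or above eye_limit_px pixels and reports a wrong direction otherwise."""
        if opening_pixels >= eye_limit_px:
            return truth
        return DIRECTIONS[(DIRECTIONS.index(truth) + 1) % 4]   # deliberately wrong

    def estimate_myopia() -> float:
        """Shrink the opening until recognition fails; map that size's score to a degree."""
        for pixels in sorted(SCORE_BY_PIXELS, reverse=True):      # largest opening first
            truth = show_detection_image(pixels)
            if recognition_feedback(truth, pixels) != truth:      # cannot recognize it
                return MYOPIA_BY_SCORE[SCORE_BY_PIXELS[pixels]]   # degree for this score
        return 0.0   # even the smallest opening was recognized: no measurable myopia

    print(estimate_myopia())   # with eye_limit_px = 6, fails at 4 px -> about -1.0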
In a possible design, before presenting the first detection image to the first eye, the first optical display module is configured to adjust, under the control of the processor, its optical power to an initial optical power, so that the first optical display module forms its image at the farthest distance.
In a possible design, when the opening of a third detection image presented by the first optical display module to the first eye occupies a third number of pixels, and the received recognition feedback indicates that the user cannot recognize the opening direction of that third detection image, the optical power at which the first optical display module presents the virtual image to the first eye is a third optical power. The third optical power corresponds to a third visual score, and the third visual score is the visual score corresponding to the third number of pixels.
In a fourth aspect, a virtual display device is provided, and the virtual display device may have the composition of the virtual display device provided in the first aspect. In this example, the first optical display module includes a rotating mechanism, and the rotating mechanism is configured to rotate the first optical display module, under the control of the processor, in the direction perpendicular to the optical axis. When the first optical display module presents the first image to the user, the optical-power direction of the first optical display module coincides with the astigmatism axis of the first eye.
Based on this solution, a display solution that can be matched to the eye of a user with astigmatism is provided. It can be understood that a user with astigmatism cannot see the displayed image clearly at certain incidence angles; that is, an eye with astigmatism can only see clearly an image whose incident light coincides with the astigmatism axis of that eye. In this example, the virtual display device can flexibly adjust the optical-power direction of the first optical display module, so that the angle of the light entering the eye from the virtual display device coincides with the astigmatism axis of the eye. In this way, a user with astigmatism can also clearly see the image displayed by the virtual display device.
In a possible design, the processor is further configured to determine the astigmatism degree of the first eye according to the rotation of the rotating mechanism at the moment when the optical-power direction of the first optical display module coincides with the astigmatism axis of the first eye.
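As a rough illustration of how the rotating mechanism might be driven, the sketch below sweeps the module's optical-power direction over 0 to 180 degrees and keeps the angle at which the user rates the displayed test image sharpest; that angle then coincides with the eye's astigmatism axis. This is only a sketch under assumptions: the step size, the callbacks and the user-supplied sharpness rating are not described in the original text.

    def find_astigmatism_axis(rotate_module_to, sharpness_from_user, step_deg: float = 5.0) -> float:
        """Sweep the cylinder-power direction and return the angle rated sharpest.

        rotate_module_to(angle_deg): drives the rotating mechanism (hypothetical callback).
        sharpness_from_user(angle_deg) -> float: the user's rating, e.g. 0 (blurry) to 1 (sharp).
        """
        best_angle, best_rating = 0.0, float("-inf")
        angle = 0.0
        while angle < 180.0:                    # a cylinder axis repeats every 180 degrees
            rotate_module_to(angle)
            rating = sharpness_from_user(angle)
            if rating > best_rating:
                best_angle, best_rating = angle, rating
            angle += step_deg
        return best_angle                       # power direction matching the eye's astigmatism axis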
In a fifth aspect, a virtual display device is provided, and the virtual display device may have the composition of the virtual display device provided in the first aspect. In this example, the processor is further configured to acquire the user features of the current user before controlling the first optical display module to display the first image, and the processor is specifically configured to control the first optical display module to display the first image corresponding to the user features of the current user. When the first image is displayed, the optical power of the first optical display module matches the myopia degree and/or astigmatism degree of the first eye of the current user, and the myopia degree and/or astigmatism degree of the first eye of the current user is indicated by the virtual display information corresponding to the user features of the current user.
Based on this solution, an example of providing a naked-eye virtual experience to the corresponding user is provided. In this example, since the eyes of different users have different myopia degrees and/or astigmatism degrees, the virtual display device can identify the current user according to the current user's user features and determine the user's image recognition ability according to the stored myopia degree and/or astigmatism degree corresponding to the current user. The virtual display device can then adjust to the corresponding optical power to present images to the user, so that the optical power at which the images are presented matches the current user's image recognition ability. In this way, even a user whose eyes have myopia or astigmatism can see the image displayed by the virtual display device clearly with naked eyes.
In a possible design, the user features include any one of the following: fingerprint information of the current user; iris features of the current user; account information of the current user; or an identifier of the current user, where different users have different identifiers.
In a possible design, the virtual display device stores a correspondence between different user features and corresponding virtual display information. The processor is configured to look up a matching entry in the correspondence according to the user features of the current user and, if a matching entry exists, to determine that the virtual display information corresponding to the current user is the virtual display information stored in the matching entry. The virtual display information includes the myopia degree and/or astigmatism degree of the corresponding user.
In a possible design, when no entry matching the user features of the current user exists in the correspondence, the first optical display module is further configured to display, under the control of the processor, the first image, where the optical power used when displaying the first image matches the myopia degree and/or astigmatism degree of the first eye of the current user, and the myopia degree and/or astigmatism degree of the first eye is measured automatically by the processor, measured under the user's instruction, or entered manually by the user.
In a possible design, the processor is further configured to store the correspondence between the myopia degree and/or astigmatism degree of the first eye and the user features of the current user.
It should be noted that the functions corresponding to the virtual display devices provided in the second aspect, the third aspect, the fourth aspect and the fifth aspect and any of their possible implementations may be implemented separately or in combination with one another. For example, the functions corresponding to the virtual display device provided in the second aspect and any of its possible implementations may be integrated in the same device with the functions corresponding to the virtual display device provided in the third aspect and/or the fourth aspect and/or the fifth aspect and any of their possible implementations. As another example, the functions corresponding to the virtual display device provided in the third aspect and any of its possible implementations may be integrated in the same device with the functions corresponding to the virtual display device provided in the fourth aspect and/or the fifth aspect and any of their possible implementations. As another example, the functions corresponding to the virtual display device provided in the fourth aspect and any of its possible implementations may be integrated in the same device with the functions corresponding to the virtual display device provided in the fifth aspect and any of its possible implementations.
In a sixth aspect, a virtual display method is provided. The virtual display method is applied to the virtual display device provided in the first aspect and any of its possible implementations, and the virtual display method is used to provide a user with a display function for a virtual three-dimensional environment. The virtual display method includes: the first optical display module displays a first object, where the vergence depth of the first object is a first vergence depth and the imaging-plane depth of the first optical display module is a first depth; and the first optical display module displays a second object, where the vergence depth of the second object is a second vergence depth and the imaging-plane depth of the first optical display module is a second depth. The first object and the second object are included in the virtual three-dimensional environment, and when the first vergence depth and the second vergence depth are different, the optical display module adjusts the first depth and the second depth to be different. In a possible design, the virtual display method further includes: the processor controls a first eye-tracking module in the virtual display device to perform eye tracking on the first eye, and when the eye-tracking results for the first eye are different, the second depth is different.
In a possible design, when the first vergence depth is greater than the second vergence depth, the first depth is greater than the second depth; and when the first vergence depth is smaller than the second vergence depth, the first depth is smaller than the second depth.
In a possible design, the processor controls the first eye-tracking module of the first optical display module to perform eye tracking on the gaze point of the first eye, and when the vergence depth of the gaze point of the first eye is different, the first optical display module is configured to adjust the imaging-plane depth to be different.
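One way to realize "different eye-tracking results give a different imaging-plane depth" is to estimate the vergence depth from the tracked gaze rays and feed it to the varifocal optics each frame. The following is a minimal sketch under assumptions: the use of both eyes' gaze rays, the closest-point construction, and the eye_tracker/optics interfaces are hypothetical and are not taken from the original text.

    import numpy as np

    def vergence_depth_from_gaze(left_origin, left_dir, right_origin, right_dir) -> float:
        """Approximate the vergence depth as the distance from the midpoint of the two
        eyes to the point where the two gaze rays pass closest to each other."""
        d1 = left_dir / np.linalg.norm(left_dir)
        d2 = right_dir / np.linalg.norm(right_dir)
        w0 = left_origin - right_origin
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w0, d2 @ w0
        denom = a * c - b * b                       # ~0 when the rays are nearly parallel
        t1 = (b * e - c * d) / denom if denom > 1e-9 else 1e6   # far away if parallel
        closest_on_left_ray = left_origin + t1 * d1
        eyes_center = (left_origin + right_origin) / 2.0
        return float(np.linalg.norm(closest_on_left_ray - eyes_center))

    def update_imaging_plane(eye_tracker, optics) -> None:
        """Per-frame loop: track the gaze, estimate vergence depth, move the imaging plane."""
        lo, ld, ro, rd = eye_tracker.sample()          # hypothetical tracker interface
        depth = vergence_depth_from_gaze(lo, ld, ro, rd)
        optics.set_imaging_plane_depth(depth)          # hypothetical varifocal interface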
In a possible design, when the user's diopter is different, the first optical display module is configured to adjust the imaging-plane depth to be different.
In a possible design, the virtual display device further includes a second optical display module, and the second optical display module is configured to present images, under the control of the processor, to a second eye, where the second eye is the one of the user's two eyes other than the first eye. The method further includes: under the control of the processor, the first optical display module displays the first object, where the imaging-plane depth when displaying the first object is a third depth and the depth of the first object in the virtual three-dimensional environment is the second depth, and the first depth is close to or the same as the third depth.
In a possible design, the first optical module includes a first zoom module whose optical power is adjustable. The method further includes: the processor controls the first zoom module to adjust, when the first object is displayed, the optical power of the first zoom module to a first optical power, so that the first optical module having the first optical power can form an image at the first depth; or the processor controls the first zoom module to adjust, when the second object is displayed, the optical power of the first zoom module to a second optical power, so that the first optical module having the second optical power can form an image at the second depth.
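To make the zoom-module adjustment concrete, a simple varifocal eyepiece model can be used: starting from the power that images the display at infinity, placing the virtual image at a finite depth L reduces the required vergence by 1/L diopters, and the user's prescription (negative for myopia) can be folded in so that no glasses are needed. This is only a sketch of one possible model; the 20 D collimated setting and the example depths are assumptions, not values from the original text.

    def required_power(target_depth_m: float,
                       collimated_power_diopters: float,
                       user_prescription_diopters: float = 0.0) -> float:
        """Optical power (in diopters) so that the virtual image appears at target_depth_m."""
        image_vergence = -1.0 / target_depth_m      # vergence of a virtual image at that depth
        return collimated_power_diopters + image_vergence + user_prescription_diopters

    # First object imaged at 2 m, second object at 0.5 m, emmetropic user,
    # assuming a 20 D collimated setting for the hypothetical eyepiece:
    first_power = required_power(2.0, collimated_power_diopters=20.0)    # 19.5 D
    second_power = required_power(0.5, collimated_power_diopters=20.0)   # 18.0 D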
In a possible design, the method further includes: under the control of the processor, the first optical display module displays a third object at a first virtual position and at a second virtual position in the virtual three-dimensional environment, where the first virtual position and the second virtual position are at different distances from the user's eyes in the three-dimensional environment.
In a possible design, the virtual display device displays the third object at the first virtual position and at the second virtual position respectively when the currently displayed scene is a preset scene. The preset scene includes at least one of the following scenes: an advertisement playing scene and a display-resource loading scene.
In a possible design, the method further includes: under the control of the processor, the first optical display module presents a first detection image to the first eye. When the first detection image is displayed, the number of pixels displaying the opening of the first detection image is a first number. The processor is further configured to receive first recognition feedback, where the first recognition feedback is an indication entered by the user while observing the first detection image with the first eye, and the first recognition feedback indicates whether the user is able to recognize the opening direction of the first detection image.
In a possible design, when the first optical display module uses the first number of pixels to display the opening of the first detection image to the user, the visual score at which the first eye observes the opening of the first detection image is a first visual score. When the first recognition feedback indicates that the user cannot recognize the opening direction of the first detection image, the method further includes: the processor determines that the myopia degree of the first eye is the myopia degree corresponding to the first visual score.
In a possible design, the method further includes: when the first recognition feedback indicates that the user is able to recognize the opening direction of the first detection image, the first optical display module presents, under the control of the processor, a second detection image to the first eye. When the second detection image is displayed, the number of pixels used to display the opening of the second detection image is a second number, and the second number is smaller than the first number. The processor receives second recognition feedback, where the second recognition feedback is an indication entered by the user while observing the second detection image with the first eye, and the second recognition feedback indicates whether the user is able to recognize the opening direction of the second detection image.
In a possible design, when the first optical display module uses the second number of pixels to display the opening of the second detection image to the user, the visual score at which the first eye observes the opening of the second detection image is a second visual score. When the second recognition feedback indicates that the user cannot recognize the opening direction of the second detection image, the method further includes: the processor determines that the myopia degree of the first eye is the myopia degree corresponding to the second visual score.
In a possible design, the method further includes: before presenting the first detection image to the first eye, the first optical display module adjusts, under the control of the processor, its optical power to an initial optical power, so that the first optical display module forms its image at the farthest distance.
In a possible design, when the opening of a third detection image presented by the first optical display module to the first eye occupies a third number of pixels, and the received recognition feedback indicates that the user cannot recognize the opening direction of that third detection image, the optical power at which the first optical display module presents the virtual image to the first eye is a third optical power. The third optical power corresponds to a third visual score, and the third visual score is the visual score corresponding to the third number of pixels.
In a possible design, the first optical display module includes a rotating mechanism, and the rotating mechanism is configured to rotate the first optical display module, under the control of the processor, in the direction perpendicular to the optical axis. The method further includes: when the first optical display module presents the first image to the user, the optical-power direction of the first optical display module coincides with the astigmatism axis of the first eye.
In a possible design, the processor determines the astigmatism degree of the first eye according to the rotation of the rotating mechanism at the moment when the optical-power direction of the first optical display module coincides with the astigmatism axis of the first eye.
In a possible design, the processor is further configured to acquire the user features of the current user before controlling the first optical display module to display the first image, and the processor is specifically configured to control the first optical display module to display the first image corresponding to the user features of the current user. When the first image is displayed, the optical power of the first optical display module matches the myopia degree and/or astigmatism degree of the first eye of the current user, and the myopia degree and/or astigmatism degree of the first eye of the current user is indicated by the virtual display information corresponding to the user features of the current user.
In a possible design, the user features include any one of the following: fingerprint information of the current user; iris features of the current user; account information of the current user; or an identifier of the current user, where different users have different identifiers.
In a possible design, the virtual display device stores a correspondence between different user features and corresponding virtual display information. The processor looks up a matching entry in the correspondence according to the user features of the current user and, if a matching entry exists, determines that the virtual display information corresponding to the current user is the virtual display information stored in the matching entry. The virtual display information includes the myopia degree and/or astigmatism degree of the corresponding user.
In a possible design, when no entry matching the user features of the current user exists in the correspondence, the first optical display module displays, under the control of the processor, the first image, where the optical power used when displaying the first image matches the myopia degree and/or astigmatism degree of the first eye of the current user, and the myopia degree and/or astigmatism degree of the first eye is measured automatically by the processor, measured under the user's instruction, or entered manually by the user.
In a possible design, the processor stores the correspondence between the myopia degree and/or astigmatism degree of the first eye and the user features of the current user.
In a seventh aspect, a virtual display device is provided. The virtual display device includes one or more processors and one or more memories. The one or more memories are coupled to the one or more processors and store computer instructions. When the one or more processors execute the computer instructions, the virtual display device implements the functions of the virtual display device in any one of the first aspect and its various possible designs; or implements the functions of the virtual display device in any one of the second aspect and its various possible designs; or implements the functions of the virtual display device in any one of the third aspect and its various possible designs; or implements the functions of the virtual display device in any one of the fourth aspect and its various possible designs; or implements the functions of the virtual display device in any one of the fifth aspect and its various possible designs.
In an eighth aspect, a chip system is provided. The chip system includes an interface circuit and a processor, and the interface circuit and the processor are interconnected through lines. The interface circuit is configured to receive signals from a memory and send signals to the processor, where the signals include the computer instructions stored in the memory. When the processor executes the computer instructions, a virtual display device provided with the chip system implements the functions of the virtual display device in any one of the first aspect and its various possible designs; or implements the functions of the virtual display device in any one of the second aspect and its various possible designs; or implements the functions of the virtual display device in any one of the third aspect and its various possible designs; or implements the functions of the virtual display device in any one of the fourth aspect and its various possible designs; or implements the functions of the virtual display device in any one of the fifth aspect and its various possible designs.
In a ninth aspect, a computer-readable storage medium is provided. The computer-readable storage medium includes computer instructions which, when run, implement the functions of the virtual display device in any one of the first aspect and its various possible designs; or implement the functions of the virtual display device in any one of the second aspect and its various possible designs; or implement the functions of the virtual display device in any one of the third aspect and its various possible designs; or implement the functions of the virtual display device in any one of the fourth aspect and its various possible designs; or implement the functions of the virtual display device in any one of the fifth aspect and its various possible designs.
It should be understood that the technical features of the technical solutions provided in the second aspect to the ninth aspect above all correspond to the virtual display device and method provided in the first aspect and its possible designs, so the beneficial effects that can be achieved are similar and are not repeated here.
Description of drawings
Fig. 1 is a schematic diagram of a vergence angle;
Fig. 2 is a schematic diagram of the composition of a human eye;
Fig. 3 is a schematic diagram of an imaging mechanism of a human eye;
Fig. 4 is a schematic diagram of an imaging mechanism of VR glasses;
Fig. 5A is a schematic diagram of an imaging mechanism of another kind of VR glasses;
Fig. 5B is a schematic diagram of an imaging mechanism of yet another kind of VR glasses;
Fig. 6A is a schematic diagram of the composition of a wearable device according to an embodiment of this application;
Fig. 6B is a schematic diagram of the composition of an optical display module according to an embodiment of this application;
Fig. 7 is a schematic diagram of the composition of a folded optical lens group according to an embodiment of this application;
Fig. 8 is a working schematic diagram of a folded optical lens group according to an embodiment of this application;
Fig. 9 is a schematic diagram of the composition of VR glasses according to an embodiment of this application;
Fig. 10 is a schematic diagram of imaging of VR glasses according to an embodiment of this application;
Fig. 11 is a schematic flowchart of a virtual display method according to an embodiment of this application;
Fig. 12 is a schematic diagram of imaging of yet another kind of VR glasses according to an embodiment of this application;
Fig. 13 is a schematic diagram of a display effect according to an embodiment of this application;
Fig. 14 is a schematic diagram of yet another imaging mechanism of a human eye;
Fig. 15 is a schematic diagram of yet another imaging mechanism of a human eye;
Fig. 16 is a schematic diagram of an eye chart according to an embodiment of this application;
Fig. 17 is a schematic diagram of a visual score according to an embodiment of this application;
Fig. 18 is a schematic flowchart of another virtual display method according to an embodiment of this application;
Fig. 19 is a schematic diagram of an imaging mechanism according to an embodiment of this application;
Fig. 20 is a schematic diagram of yet another imaging mechanism according to an embodiment of this application;
Fig. 21 is a schematic diagram of yet another imaging mechanism according to an embodiment of this application;
Fig. 22 is a schematic diagram of a detection image according to an embodiment of this application;
Fig. 23 is a schematic diagram of an input manner of recognition feedback according to an embodiment of this application;
Fig. 24 is a schematic diagram of a display mechanism according to an embodiment of this application;
Fig. 25 is a schematic diagram of yet another display mechanism according to an embodiment of this application;
Fig. 26 is a schematic diagram of an interaction scene of VR glasses according to an embodiment of this application;
Fig. 27 is a schematic flowchart of yet another virtual display method according to an embodiment of this application;
Fig. 28 is a schematic diagram of a red-green balance method according to an embodiment of this application;
Fig. 29 is a schematic diagram of yet another imaging mechanism according to an embodiment of this application;
Fig. 30 is a schematic diagram of yet another imaging mechanism according to an embodiment of this application;
Fig. 31 is a schematic diagram of an astigmatism detection chart according to an embodiment of this application;
Fig. 32 is a schematic flowchart of yet another virtual display method according to an embodiment of this application;
Fig. 33 is a schematic diagram of the composition of an electronic device according to an embodiment of this application;
Fig. 34 is a schematic diagram of the composition of a chip system according to an embodiment of this application.
Detailed description of embodiments
Some terms used in the embodiments of this application are explained below to facilitate understanding by those skilled in the art.
(1) "At least one" in the embodiments of this application includes one or more, where "multiple" means two or more. In addition, it should be understood that, in the description of this application, words such as "first" and "second" are used only to distinguish the objects being described and cannot be understood as expressing or implying relative importance or an order. For example, "first object" and "second object" do not represent the importance or the order of the two; they are used only to distinguish the objects.
In the embodiments of this application, "and/or" describes an association relationship between associated objects and indicates that three relationships may exist. For example, "A and/or B" may indicate the following three cases: only A exists, both A and B exist, and only B exists. In addition, the character "/" in this document generally indicates an "or" relationship between the associated objects.
(2) Virtual reality (VR) technology is a means of human-computer interaction created with the help of computer and sensor technology. VR technology integrates computer graphics, computer simulation, sensor, display and other technologies to create a virtual environment. The virtual environment includes three-dimensional, realistic images that are generated by a computer and played dynamically in real time, bringing visual perception to the user; in addition to the visual perception produced by computer graphics technology, there are also auditory, tactile, force, motion and other perceptions, and even smell and taste, which is also known as multi-perception. Furthermore, the user's head rotation, eyes, gestures or other body movements can be detected, and the computer processes data adapted to the user's movements, responds to the user's movements in real time, and feeds the responses back to the user's senses, thereby forming the virtual environment. For example, a user wearing a VR wearable device can see a VR game interface and can interact with the VR game interface through gestures, a handle and other operations, as if in the game.
(3) Augmented reality (AR) technology refers to superimposing computer-generated virtual objects onto real-world scenes, thereby augmenting the real world. That is, AR technology needs to capture the real-world scene and then add a virtual environment on top of the real world.
Therefore, the difference between VR technology and AR technology is that VR technology creates a completely virtual environment in which everything the user sees is a virtual object, whereas AR technology superimposes virtual objects on the real world, that is, it includes both real-world objects and virtual objects. For example, the user wears transparent glasses through which the surrounding real environment can be seen, and virtual objects can also be displayed on the glasses, so that the user can see both real objects and virtual objects.
(4) Mixed reality (MR) technology builds a bridge of interactive feedback information among the virtual environment, the real world and the user by introducing real-scene information into the virtual environment, thereby enhancing the realism of the user experience. Specifically, real objects are virtualized (for example, a camera is used to scan a real object for three-dimensional reconstruction to generate a virtual object), and the virtualized real objects are introduced into the virtual environment, so that the user can see the real objects in the virtual environment.
It should be noted that the technical solutions provided in the embodiments of this application may be applied to electronic devices using technologies such as VR, AR or MR. The electronic device can provide the user with a virtual display function through AR, VR or MR technology, so that the user can experience stereoscopic vision in a virtual scene through the virtual display function without being in the actual environment.
In order to clearly describe the virtual display function provided to the user through AR, VR or MR technology, the mechanism by which human vision is produced is first briefly described below.
It can be understood that, in an actual scene, when a user views an object, the human eye can obtain the light signals in the actual scene and process these light signals in the brain to produce a visual perception. The light signals in the actual scene may include light reflected from different objects and/or light emitted directly by light sources. Since the light signals of the actual scene can carry information about the objects in the scene (such as size, position and color), the brain can obtain the information about the objects in the actual scene, that is, obtain the visual perception, by processing the light signals.
It should be noted that, when the two eyes (the left eye and the right eye) view the same object, their viewing angles are slightly different, so the scenes seen by the left eye and the right eye actually differ. For example, the left eye can obtain the light signal of a two-dimensional image (hereinafter referred to as the left-eye image) in the plane that contains the focus of the eye and is perpendicular to the line of sight of the left eye. Similarly, the right eye can obtain the light signal of a two-dimensional image (hereinafter referred to as the right-eye image) in the plane that contains the focus of the eye and is perpendicular to the line of sight of the right eye. The left-eye image and the right-eye image are slightly different. The brain can obtain information about the different objects in the current scene by processing the light signals of the left-eye image and the right-eye image.
In addition, the user can also obtain a stereoscopic visual perception by obtaining the depths of different objects in the actual scene. Stereoscopic visual perception may also be called binocular stereo vision.
For example, from the vergence depth (vergence distance) and the accommodation depth (accommodation distance) of the two eyes when viewing an object in an actual scene, the brain can determine the depth of the object (that is, the depth of field).
The brain can determine the vergence depth according to the mechanism shown in Fig. 1. As shown in Fig. 1, when the two eyes observe an object in an actual scene, the muscles near the eyes can be controlled so that the lines of sight of the left eye and the right eye each turn toward the object. The brain can judge the depth of the object, that is, the vergence depth, from the vergence angle of the two eyes. As an example, as shown in Fig. 1, when observing the object shown in Fig. 1, the vergence angle may be the angle between the lines of sight of the two eyes at the position of the observed object. It can be understood that the closer the observed object is to the eyes, the larger the vergence angle and the smaller the vergence depth; correspondingly, the farther the observed object is from the eyes, the smaller the vergence angle and the larger the vergence depth.
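The geometry of Fig. 1 can be made quantitative under a simple symmetric-fixation assumption; the relation and the numbers below are illustrative additions and are not recited in the original text (IPD here denotes the interpupillary distance, a symbol introduced only for this example).

    % Illustrative relation only: fixation point straight ahead at depth d,
    % interpupillary distance IPD, vergence angle \theta as in Fig. 1.
    \[
      \tan\!\left(\tfrac{\theta}{2}\right) = \frac{\mathrm{IPD}/2}{d}
      \qquad\Longrightarrow\qquad
      d = \frac{\mathrm{IPD}}{2\tan(\theta/2)} .
    \]
    % Example: with IPD = 64 mm, \theta \approx 3.7^\circ gives d \approx 1 m and
    % \theta \approx 7.3^\circ gives d \approx 0.5 m, i.e. a larger vergence angle
    % corresponds to a smaller vergence depth, as stated above.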
In addition, the brain can also judge the depth of an object according to the accommodation depth. The accommodation depth is described with reference to Fig. 2 and Fig. 3. Fig. 2 is a schematic diagram of the composition of the human eye. As shown in Fig. 2, the human eye may include the crystalline lens and the ciliary muscle, as well as the retina located at the fundus.
The crystalline lens can act as a zoom lens and converge the light entering the eye, so that the incident light converges onto the retina at the fundus and the scene in front of the eye can form a clear image on the retina. The ciliary muscle can be used to adjust the shape of the crystalline lens; for example, by contracting or relaxing, the ciliary muscle can adjust the refractive power of the lens and thus its focal length, so that objects at different distances in the actual scene can all be imaged clearly on the retina through the lens. As an example, Fig. 3 illustrates how the ciliary muscle adjusts the crystalline lens when the eye observes objects at different distances. As shown in (a) of Fig. 3, when the eye observes a distant object (taking the object as a non-light-source as an example), the light reflected from the surface of the object can be close to parallel. At this time, the ciliary muscle can keep the lens in the state shown in (a) of Fig. 3: the ciliary muscle relaxes and the lens is kept flat with a small refractive power, so that the parallel incident light can converge onto the retina after passing through the lens. When the eye observes a near object, referring to (b) of Fig. 3 and again taking the object as a non-light-source as an example, the light reflected from the surface of the object can enter the eye along the optical path shown in (b) of Fig. 3. At this time, the ciliary muscle can keep the lens in the state shown in (b) of Fig. 3: the ciliary muscle contracts, the lens bulges and its refractive power increases, so that the incident light shown in (b) of Fig. 3 can converge onto the retina after passing through the lens.
That is to say, when the eye observes objects at different distances, the contraction or relaxation state of the ciliary muscle is different. In this way, when the eye observes an object clearly, the brain can judge the depth of the object from the current contraction or relaxation state of the ciliary muscle. This depth may be called the accommodation depth.
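For illustration only (the original text does not give this formula), the accommodation relation can be written with a thin-lens approximation of the eye's optics, where P is the total power of the eye, l the fixed lens-to-retina distance and d the object distance.

    % Thin-lens approximation of a sharply focused eye:
    \[
      P \;=\; \frac{1}{l} \;+\; \frac{1}{d},
    \]
    % so refocusing from a very distant object (d -> infinity) to one at
    % d = 0.25 m requires the ciliary muscle to raise the lens power by about
    % 4 diopters; the accommodation depth is the d implied by the power
    % currently adopted by the lens.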
At present, an electronic device can use AR, VR or MR technology, in combination with the above mechanism of human vision, to present a virtual scene to the user and thereby provide a virtual display function.
For example, take an electronic device that provides a virtual display function through AR, VR or MR technology as VR glasses with a VR display function. Fig. 4 is a schematic diagram of such VR glasses. As shown in Fig. 4, two display screens (display screen 1 and display screen 2) may be provided in the VR glasses, and each display screen has a display function. Each display screen can be used, through a corresponding eyepiece, to display corresponding content to one of the user's eyes (the left eye or the right eye). For example, the left-eye image corresponding to the virtual scene can be displayed on display screen 1, and the light of the left-eye image can pass through eyepiece 1 and converge at the left eye, so that the left eye sees the left-eye image. Similarly, the right-eye image corresponding to the virtual scene can be displayed on display screen 2, and the light of the right-eye image can pass through eyepiece 2 and converge at the right eye, so that the right eye sees the right-eye image.
Thus, the brain can fuse the left-eye image and the right-eye image, so that the user sees the objects in the virtual scene corresponding to the left-eye image and the right-eye image.
It should be noted that, because of the converging effect of the eyepieces, as shown in Fig. 5A, the image seen by the human eye is actually the image, on the virtual image plane shown in Fig. 5A, corresponding to the image displayed on the display screen. For example, the left-eye image seen by the left eye may be the virtual image, on the virtual image plane, corresponding to the left-eye image; similarly, the right-eye image seen by the right eye may be the virtual image, on the virtual image plane, corresponding to the right-eye image.
It can be understood that, when observing an object in an actual scene, the depths of the object judged by the brain from the vergence depth and from the accommodation depth are consistent. When the object depths indicated by the vergence depth and by the accommodation depth are inconsistent, visual fatigue occurs and the user's visual experience is affected. In this example, the inconsistency between the object depths indicated by the vergence depth and the accommodation depth may also be called the vergence-accommodation conflict (VAC).
When the VR glasses present a virtual scene to the user by the solution shown in Fig. 4 or Fig. 5A, such an inconsistency between the object depths indicated by the vergence depth and the accommodation depth can occur.
For example, referring to Fig. 5B, when the eyes observe the corresponding display screens, in order to see the image on the screen clearly, the ciliary muscles adjust the lenses of both eyes so that the image on the virtual image plane converges onto the retina through the lens. The accommodation distance is therefore the distance from the virtual image plane to the eyes (depth 1 shown in Fig. 5B). However, the objects in the virtual scene that the VR glasses present to the user are often not on the virtual image plane. For example, take the observed object in the virtual scene as the triangle shown in Fig. 5B. By rotating the eyeballs, the user can converge the lines of sight of both eyes onto the triangle in the virtual scene (the dashed triangle 501), and the vergence angle is as marked in Fig. 5B. The vergence depth should therefore be the depth of the observed object (the dashed triangle 501) in the virtual scene, for example depth 2 shown in Fig. 5B.
It can be seen that depth 1 and depth 2 are not consistent in this case. As a result, the brain cannot accurately judge the depth of the observed object, which leads to visual fatigue and other situations that affect the user experience; if this continues for a long time, it can also seriously affect the user's eyesight.
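The mismatch between depth 1 and depth 2 in Fig. 5B is often quantified in diopters; the expression and numbers below are an illustrative addition and are not part of the original text.

    % Vergence-accommodation conflict expressed as a vergence difference:
    \[
      \Delta \;=\; \left|\,\frac{1}{d_{\mathrm{vergence}}} - \frac{1}{d_{\mathrm{accommodation}}}\,\right|
      \quad [\mathrm{D}].
    \]
    % Example: a virtual image plane fixed at 2 m (0.5 D) with a virtual object
    % drawn at 0.5 m (2 D) gives \Delta = 1.5 D, whereas keeping the imaging
    % plane close to the vergence depth, as in the embodiments below, drives
    % \Delta toward zero.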
To solve the above problems, embodiments of this application provide a virtual display device and a virtual display method, which can avoid the VAC that arises when a virtual display function is provided to a user through AR, VR, or MR technology, thereby avoiding the resulting visual fatigue and improving the user's visual experience.
The solutions provided by the embodiments of this application are described in detail below with reference to examples and the accompanying drawings.
For example, refer to FIG. 6A, which shows a schematic structural diagram of a wearable device provided by an embodiment of this application, taking a wearable device as an example of the virtual display device. As shown in FIG. 6A, the wearable device 100 may include a processor 110, a memory 120, a sensor module 130 (which may be used to obtain the user's posture), a microphone 140, keys 150, an input/output interface 160, a communication module 170, a camera 180, a battery 190, an optical display module 1100, an eye-tracking module 1200, and the like.
It can be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the wearable device 100. In other embodiments of this application, the wearable device 100 may include more or fewer components than shown, combine certain components, split certain components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 is generally used to control the overall operation of the wearable device 100 and may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a video processing unit (VPU) controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). The different processing units may be independent devices or may be integrated in one or more processors.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. This memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, they can be called directly from this memory. Repeated accesses are avoided and the waiting time of the processor 110 is reduced, which improves system efficiency.
In some embodiments of this application, the processor 110 may be used to control the optical power of the wearable device 100. For example, the processor 110 may control the optical power of the optical display module 1100 to implement the function of adjusting the optical power of the wearable device 100. For instance, the processor 110 may adjust the relative positions of the optical elements (such as lenses) in the optical display module 1100 so that the optical power of the optical display module 1100 is adjusted; as a result, when the optical display module 1100 forms an image for the human eye, the position of the corresponding virtual image plane can be adjusted. In this way, the optical power of the wearable device 100 is controlled.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, a universal serial bus (USB) interface, a serial peripheral interface (SPI), and the like.
In some embodiments, the processor 110 may render different objects at different frame rates, for example rendering nearby objects at a high frame rate and distant objects at a low frame rate.
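As an illustration of this idea only, the following is a minimal sketch, not taken from the patent, of scheduling re-renders by object depth; the object names, depth thresholds, and update intervals are assumptions chosen for the example.

```python
# Hypothetical sketch (not from the patent): distant objects are re-rendered less
# often than near objects, giving them a lower effective frame rate.
from dataclasses import dataclass


@dataclass
class SceneObject:
    name: str
    depth_m: float              # distance from the user, in meters (assumed unit)
    last_rendered_frame: int = -1


def update_interval(depth_m: float) -> int:
    """How many display frames to wait between re-renders of an object."""
    if depth_m < 2.0:           # near objects: re-render every frame
        return 1
    if depth_m < 10.0:          # mid-range objects: every other frame
        return 2
    return 3                    # distant objects: every third frame


def objects_to_render(objects, frame_index: int):
    """Select the objects whose re-render is due on this display frame."""
    due = []
    for obj in objects:
        if frame_index - obj.last_rendered_frame >= update_interval(obj.depth_m):
            obj.last_rendered_frame = frame_index
            due.append(obj)
    return due


if __name__ == "__main__":
    scene = [SceneObject("virtual object 1", 1.5), SceneObject("virtual object 2", 20.0)]
    for frame in range(6):
        print(frame, [o.name for o in objects_to_render(scene, frame)])
```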
The I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may include multiple groups of I2C buses.
The UART interface is a universal serial data bus used for asynchronous communication. The bus may be a bidirectional communication bus that converts the data to be transmitted between serial and parallel form. In some embodiments, a UART interface is typically used to connect the processor 110 and the communication module 170. For example, the processor 110 communicates with the Bluetooth module in the communication module 170 through the UART interface to implement the Bluetooth function.
The MIPI interface may be used to connect the processor 110 with peripheral components such as the display screen in the optical display module 1100 and the camera 180.
The GPIO interface can be configured by software, either as a control signal or as a data signal. In some embodiments, the GPIO interface may be used to connect the processor 110 with the camera 180, the display screen in the optical display module 1100, the communication module 170, the sensor module 130, the microphone 140, and so on. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, or the like. Optionally, the camera 180 may capture an image that includes a real object, the processor 110 may fuse the captured image with a virtual object, and the optical display module 1100 may display the fused image; for this example, refer to the application scenario shown in FIG. 9, which is not repeated here.
The USB interface is an interface that conforms to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface may be used to connect a charger to charge the wearable device 100, or to transfer data between the wearable device 100 and peripheral devices. It may also be used to connect earphones and play audio through them, or to connect other electronic devices such as mobile phones. The USB interface may be USB 3.0, which is compatible with high-speed DisplayPort (DP) signal transmission and can carry high-speed video and audio data.
It can be understood that the interface connection relationships between the modules illustrated in this embodiment are only schematic and do not constitute a structural limitation on the wearable device 100. In other embodiments of this application, the wearable device 100 may use interface connection methods different from those in the above embodiment, or a combination of multiple interface connection methods.
In addition, the wearable device 100 may include a wireless communication function. For example, the wearable device 100 may receive rendered images from another electronic device (such as a VR host or a VR server) for display, or receive unrendered images that the processor 110 then renders and displays. The communication module 170 may include a wireless communication module and a mobile communication module. The wireless communication function may be implemented by an antenna (not shown), a mobile communication module (not shown), a modem processor (not shown), a baseband processor (not shown), and the like.
The antennas are used to transmit and receive electromagnetic wave signals. The wearable device 100 may include multiple antennas, and each antenna may be used to cover one or more communication bands. Different antennas may also be multiplexed to improve antenna utilization. For example, antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, an antenna may be used in combination with a tuning switch.
The mobile communication module can provide wireless communication solutions applied to the wearable device 100, including second generation (2G), third generation (3G), fourth generation (4G), and fifth generation (5G) networks. The mobile communication module may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and the like. The mobile communication module may receive electromagnetic waves through the antenna, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation. The mobile communication module may also amplify the signal modulated by the modem processor and convert it into electromagnetic waves radiated through the antenna. In some embodiments, at least some functional modules of the mobile communication module may be provided in the processor 110. In some embodiments, at least some functional modules of the mobile communication module and at least some modules of the processor 110 may be provided in the same device.
The modem processor may include a modulator and a demodulator. The modulator modulates the low-frequency baseband signal to be transmitted into a medium- or high-frequency signal. The demodulator demodulates the received electromagnetic wave signal into a low-frequency baseband signal and then sends the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor. The application processor outputs a sound signal through an audio device (not limited to a speaker) or displays an image or video through the display screen in the optical display module 1100. In some embodiments, the modem processor may be an independent device. In other embodiments, the modem processor may be independent of the processor 110 and provided in the same device as the mobile communication module or other functional modules.
The wireless communication module can provide wireless communication solutions applied to the wearable device 100, including wireless local area networks (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology. The wireless communication module may be one or more devices integrating at least one communication processing module. The wireless communication module receives electromagnetic waves through the antenna, performs frequency modulation and filtering on the electromagnetic wave signal, and sends the processed signal to the processor 110. The wireless communication module may also receive a signal to be sent from the processor 110, perform frequency modulation and amplification on it, and convert it into electromagnetic waves radiated through the antenna.
In some embodiments, the antenna of the wearable device 100 is coupled to the mobile communication module, so that the wearable device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The wearable device 100 implements the display function through the GPU, the optical display module 1100, and the application processor. The GPU is a microprocessor for image processing and connects the optical display module 1100 and the application processor. The GPU performs the mathematical and geometric calculations used for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The memory 120 may be used to store computer-executable program code, which includes instructions. By running the instructions stored in the memory 120, the processor 110 executes the various functional applications and data processing of the wearable device 100. The memory 120 may include a program storage area and a data storage area. The program storage area may store the operating system, applications required by at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data created during use of the wearable device 100 (such as audio data and a phone book). In addition, the memory 120 may include a high-speed random access memory, and may also include a non-volatile memory such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
The wearable device 100 may implement audio functions, such as music playback and recording, through an audio module, a speaker, the microphone 140, an earphone interface, and the application processor.
The audio module is used to convert digital audio information into an analog audio signal for output, and also to convert an analog audio input into a digital audio signal. The audio module may also be used to encode and decode audio signals. In some embodiments, the audio module may be provided in the processor 110, or some functional modules of the audio module may be provided in the processor 110.
The speaker, also called a "loudspeaker", is used to convert an audio electrical signal into a sound signal. The wearable device 100 can play music or take hands-free calls through the speaker.
The microphone 140, also called a "mic", is used to convert a sound signal into an electrical signal. The wearable device 100 may be provided with at least one microphone 140. In other embodiments, the wearable device 100 may be provided with two microphones 140, which can implement a noise reduction function in addition to collecting sound signals. In still other embodiments, the wearable device 100 may be provided with three, four, or more microphones 140 to collect sound signals, reduce noise, identify sound sources, implement directional recording, and so on.
The earphone interface is used to connect wired earphones. The earphone interface may be a USB interface, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
In some embodiments, the wearable device 100 may include one or more keys 150, which can control the wearable device and provide the user with access to functions on the wearable device 100. The keys 150 may take the form of buttons, switches, dials, and touch or near-touch sensing devices (such as touch sensors). Specifically, for example, the user may turn on the optical display module 1100 of the wearable device 100 by pressing a button. The keys 150 include a power key, volume keys, and the like. The keys 150 may be mechanical keys or touch keys. The wearable device 100 can receive key input and generate key signal input related to user settings and function control of the wearable device 100.
In some embodiments, the wearable device 100 may include an input/output interface 160, which may connect other apparatuses to the wearable device 100 through suitable components. The components may include, for example, audio/video jacks and data connectors.
The optical display module 1100 is used to present images to the user under the control of the processor. The optical display module 1100 may convert a real-pixel image display into a near-eye projected virtual image display through one or more optical devices such as mirrors, transmissive lenses, or optical waveguides, so as to provide a virtual interactive experience or an interactive experience that combines the virtual and the real. For example, the optical display module 1100 receives the image data information sent by the processor and presents the corresponding image to the user.
In some embodiments, the wearable device 100 may further include an eye-tracking module 1200, which is used to track the movement of the human eyes and thereby determine the gaze point of the eyes. For example, image processing technology can be used to locate the pupil, obtain the coordinates of the pupil center, and then calculate the person's gaze point.
With reference to the description of the wearable device 100 shown in FIG. 6A, the virtual display device provided by the embodiments of this application has the function of automatically adjusting its optical power. In some embodiments, this function may be implemented by the optical display module 1100.
For example, refer to FIG. 6B, which is a schematic composition diagram of an optical display module provided by an embodiment of this application. The virtual display device can be used to support AR, VR, or MR technology to provide a virtual display function. In a specific implementation, the virtual display device may be a head-mounted display (HMD) device, such as AR, VR, or MR glasses, an AR, VR, or MR helmet, or an AR, VR, or MR all-in-one headset; alternatively, the virtual display device may be included in one of the head-mounted virtual display devices listed above. It should be noted that, in some embodiments, the virtual display device may also be used to support the implementation of mixed reality (MR) technology.
As shown in FIG. 6B, the optical display module may include an eyepiece 601, a zoom module 602, and a display screen 603.
The eyepiece 601 may be an optical device or group of devices such as a Fresnel lens and/or an aspheric lens. The eyepiece 601 may be used to project the light from the display screen 603 into the user's eyes. In some embodiments, the eyepiece 601 may have positive optical power, so that the light from the display screen 603 can be converged into the human eye by the eyepiece 601 within a small space. In different implementations, for different distances from the human eye to the display screen 603, the display screen 603 can also be projected to different virtual image distances by adjusting the spacing between the eyepiece 601 and the display screen 603, where the virtual image distance may be measured from the position of the user's eyes.
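To illustrate why changing the spacing between the display and the eyepiece moves the virtual image, the following is a minimal sketch based on the Gaussian thin-lens relation rather than on the patent's actual optical design; the focal length and spacings are assumed values.

```python
# Hypothetical thin-lens sketch: how the display-to-eyepiece spacing sets the
# virtual image distance for a positive-power eyepiece.

def virtual_image_distance(focal_length_m: float, display_distance_m: float) -> float:
    """Gaussian thin-lens relation 1/u + 1/v = 1/f, with the display (object) at u.

    When the display sits just inside the focal length (u < f), v is negative,
    i.e. a virtual image forms on the display side of the lens. The returned value
    is the magnitude of that virtual image distance, in meters.
    """
    v = 1.0 / (1.0 / focal_length_m - 1.0 / display_distance_m)
    return -v  # negative v means a virtual image; report its distance as positive


if __name__ == "__main__":
    f = 0.040                                  # 40 mm eyepiece focal length (assumed)
    for u in (0.039, 0.038, 0.036):            # display moved slightly closer to the lens
        print(f"display at {u * 1000:.0f} mm -> virtual image at "
              f"{virtual_image_distance(f, u):.2f} m")
```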
As an example, FIG. 7 shows a schematic diagram of a pancake folded optical lens assembly provided by an embodiment of this application. The pancake folded optical lens assembly (hereinafter referred to as the pancake lens assembly) can be used to implement the function of the eyepiece shown in FIG. 6B. As shown in FIG. 7, the pancake lens assembly may include at least five optical components (701-705). Any of the optical components 701-705 may be implemented as an optical lens having the corresponding function, or the corresponding function may be implemented by an optical coating on an adjacent lens or other optical part.
In the example of FIG. 7, the order of the optical components in the pancake lens assembly from the object side to the image side may be 701-702-703-704-705. FIG. 7 also shows a specific example of each optical component. In this example, 701 may be a polarizer (P), 702 may be a quarter-wave plate (QWP), 703 may be implemented by a transflective coating (beam splitter, BS), 704 may be a quarter-wave plate, and 705 may be implemented by a polarization-reflective coating (polarization reflector, PR).
The following example describes how the pancake lens assembly handles light in operation. Refer to FIG. 8, taking the case in which the object-side light is emitted by the display screen of the optical display module and the image-side light enters the human eye.
As shown in the figure, after entering 701, the incident light is transmitted successively in the order 702-703-704-705.
For example, after passing through 701, the light is modulated into linearly polarized light. In some embodiments, the modulation direction of 701 may be set to the y-axis direction, so that after 701 the incident light is modulated into linearly polarized light along the y-axis. This linearly polarized light then passes through 702 and is converted into circularly polarized light. For example, if the fast axis of 702 is at 45° to the y-axis, the linearly polarized light becomes right-handed polarized light after passing through 702. The right-handed polarized light is then incident on 703. Because 703 is transflective, part of the right-handed polarized light is transmitted through 703 and the rest is reflected by it. The transmitted right-handed polarized light is incident on 704. If the fast-axis direction of 704 is the same as that of 702, this light passes directly through 704 and reaches 705, where it is modulated into linearly polarized light along the x direction and reflected at the surface of 705.
The light reflected by 705 passes back through 704 to 703 and is reflected at 703.
For example, the light reflected at the surface of 705 passes through 704 and is modulated into right-handed polarized light, part of which is reflected at the surface of 703. Note that after reflection at the surface of 703, the right-handed polarized light becomes left-handed polarized light.
The left-handed polarized light reflected by 703 then exits the pancake lens assembly through 704 and 705 and finally enters the human eye.
For example, after passing through 704, the left-handed polarized light is modulated into linearly polarized light whose polarization direction is along the y-axis. This y-polarized light then exits the pancake lens assembly through 705 and enters the human eye. It can be understood that, in some embodiments, the polarization transmission characteristic of 705 may be set to transmit linearly polarized light along the y-axis, which ensures that the y-polarized light exits 705 smoothly.
In this way, the light is folded within the pancake lens assembly along the path 701-702-703-704-705-704-703-704-705, which achieves the effect of folding the optical path. A long optical path can therefore be realized within a small space (for example, inside an optical display module such as VR glasses).
In some embodiments, the zoom module 602 may adjust the distance between one or more optical components (for example lenses) of the eyepiece 601 to change the optical power and thereby adjust the depth of the virtual image plane. For example, the zoom module 602 may adjust the relative positions of one or more optical components in the pancake lens assembly to change the optical power and thereby adjust the depth of the virtual image plane.
It should be noted that the compositions in FIG. 7 and FIG. 8 are only examples; in other embodiments of this application, the pancake lens assembly may include more or fewer optical components. In addition, functions implemented by optical lenses in the above examples may also be implemented by other optical components, for example by applying corresponding optical coatings on adjacent optical lenses.
It can be understood that, in combination with the foregoing description of the eyepiece in the optical display module and the description of FIG. 7 or FIG. 8, the pancake lens assembly can, by folding the optical path, increase the optical power of the lens assembly and thus shorten the focal length. In this way, the length of the lens barrel (that is, of the optical display module) along the optical axis can be shortened, which allows the optical display module to meet the miniaturization requirements of a virtual display device.
In different embodiments of this application, the specific implementation of the eyepiece shown in FIG. 6B may differ. For example, the eyepiece may be implemented with the pancake lens assembly of FIG. 7 or FIG. 8 as in the above example, or it may use other optical lenses or lens groups to project the light of the display screen to the human eye.
In the optical display module shown in FIG. 6B, the display screen 603 may include, but is not limited to, display components such as a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, a micro light-emitting diode (Micro-LED) display, or a quantum dot light-emitting diode (QLED) display. In the embodiments of this application, the display screen serves as the image source of the optical display module and can be used to display the image to be displayed.
In the optical display module, the function of the zoom module 602 may be implemented by a component with an automatic focus-adjustment capability, such as a mechanical zoom mechanism, a liquid crystal lens, a liquid lens, or an Alvarez lens. In some embodiments of this application, the zoom module 602 changes its own optical power to adjust the focus of the optical display module.
For example, in some embodiments, with reference to the foregoing description of FIG. 6A and based on the composition of FIG. 6B, the processor in the virtual display device may be used to control the zoom module to adjust the optical power. By adjusting the optical power, the distance from the virtual image plane to the user's eyes during virtual display is adjusted, so that the convergence depth and the focus depth are matched. In other embodiments, the processor may also match the optical display module to the user's myopia prescription during display. In still other embodiments, the processor may further control the rotation of the zoom module to match the axis of the user's astigmatism and thereby match the user's astigmatism prescription.
In the following examples of this application, the operations performed under the control of the optical display module/VR glasses can all be completed by the processor, which is not repeated below.
It should be noted that the composition of the optical display module shown in FIG. 6B is only a logical illustration. In a specific implementation, the numbers of eyepieces, zoom modules, and display screens can be set flexibly according to different requirements. For example, in some embodiments the optical display module may include two eyepieces, two zoom modules, and two display screens, where one eyepiece, one zoom module, and one display screen form a sub optical display module that provides the virtual display function for one of the user's eyes (the left eye or the right eye). As another example, in other embodiments the eyepieces that provide the virtual display function for the left eye and the right eye may be integrated into the same component, in which case the optical display module may include only one eyepiece. Similarly, in other embodiments the zoom module and/or the display screen may be integrated into one or more components rather than being provided separately. For example, the zoom component may be integrated with the eyepiece in the same component (such as an eyepiece lens group), or the eyepiece may be integrated into the zoom module.
In addition, in different embodiments of this application, the specific positions of the eyepiece, the zoom module, and the display screen in the optical display module may also differ. For example, in some embodiments the order of these components along the optical path, in the direction of the user's line of sight, may be: eyepiece, zoom module, display screen. In other embodiments, the zoom module may be arranged between the eyepiece and the human eye; that is, in the direction of the line of sight the order of the components may be: zoom module, eyepiece, display screen.
As an example, FIG. 9 shows a specific implementation of a virtual display device having the functions of the optical display module described above. In this example, the virtual display device may be VR glasses, and the VR glasses may include two optical display modules each composed as shown in FIG. 6B. In some embodiments, the composition of the VR glasses shown in FIG. 9 may also be regarded as another division of the wearable device shown in FIG. 6A, and the related modules can implement the corresponding functions.
For example, the VR glasses may include a first display component composed of a display screen L, a zoom module L, and an eyepiece L, which may be used to provide the virtual display function for the user's left eye. The VR glasses may also include a second display component composed of a display screen R, a zoom module R, and an eyepiece R, which may be used to provide the virtual display function for the user's right eye. It should be noted that in the embodiments of this application a display component may also be called an optical display module; for example, the first display component may be called the first optical display module, and the second display component may be called the second optical display module.
It should be noted that, in some embodiments of this application, the VR glasses may also include other components (referred to, for example, as an eye-tracking system) for tracking the user's eye movement. The eye-tracking system may determine the position of the user's gaze point (or the direction of the user's line of sight) through methods such as video oculography, photodiode response, or pupil-corneal reflection, so as to track the user's eye movement.
In some embodiments, take the pupil-corneal reflection method as an example for determining the direction of the user's line of sight. The eye-tracking system may include one or more near-infrared light-emitting diodes (LEDs) and one or more near-infrared cameras; the near-infrared LEDs are not shown in FIG. 9. In different examples, the near-infrared LEDs may be arranged around the eyepiece so as to illuminate the human eye fully. In some embodiments, the center wavelength of the near-infrared LEDs may be 850 nm or 940 nm. The eye-tracking system may obtain the direction of the user's line of sight as follows: the near-infrared LEDs illuminate the human eye, the near-infrared camera captures an image of the eyeball, the direction of the optical axis of the eyeball is determined from the positions of the LED glints on the cornea and the pupil center in the eyeball image, and finally the direction of the user's line of sight is obtained through a calibration procedure in which the user participates.
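As an illustration only, the following sketch shows one common way such a pipeline can be closed with a user calibration step: a small polynomial model maps the pupil-center-minus-glint offset to gaze angles. This is not necessarily the algorithm used by the device; the function names, feature set, and synthetic data are assumptions.

```python
# Hypothetical pupil-corneal-reflection sketch: fit a 2nd-order polynomial mapping
# from the pupil-glint offset to gaze angles during calibration, then reuse it.
import numpy as np


def features(dx: np.ndarray, dy: np.ndarray) -> np.ndarray:
    """Second-order polynomial features of the pupil-glint offset (dx, dy)."""
    return np.stack([np.ones_like(dx), dx, dy, dx * dy, dx**2, dy**2], axis=1)


def calibrate(offsets: np.ndarray, targets: np.ndarray) -> np.ndarray:
    """Least-squares fit from offsets (N, 2) to known calibration gaze angles (N, 2)."""
    A = features(offsets[:, 0], offsets[:, 1])
    coeffs, *_ = np.linalg.lstsq(A, targets, rcond=None)
    return coeffs                                    # shape (6, 2)


def estimate_gaze(offset_xy, coeffs) -> np.ndarray:
    """Map one pupil-glint offset to (azimuth, elevation) gaze angles in degrees."""
    dx, dy = offset_xy
    return features(np.array([dx]), np.array([dy]))[0] @ coeffs


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_angles = rng.uniform(-15, 15, size=(9, 2))           # calibration targets (deg)
    offsets = 0.02 * true_angles + 0.001 * rng.normal(size=(9, 2))  # synthetic offsets
    c = calibrate(offsets, true_angles)
    print(estimate_gaze(offsets[0], c), "vs", true_angles[0])
```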
It should be noted that, in some embodiments of this application, a separate eye-tracking system may be provided for each of the user's eyes (the infrared camera R and the infrared camera L shown in FIG. 9) so that the two eyes can be tracked synchronously or asynchronously. In other embodiments of this application, an eye-tracking system may be provided near only one eye; this system obtains the direction of the line of sight of that eye, and then, based on the relationship between the gaze points of the two eyes (when a user observes an object with both eyes, the gaze points of the two eyes are generally close to each other or the same) and the user's interpupillary distance, the direction of the line of sight or the gaze point of the other eye can be determined.
In the embodiments of this application, a wearable device composed as shown in FIG. 6A, or a virtual display device (such as VR glasses) composed as shown in FIG. 9, can, while providing the virtual display function to the user, control the zoom module in the optical display module to adjust the position of the virtual image plane (the virtual image plane shown in FIG. 5A or FIG. 5B) and thereby adjust the focus depth of the human eye. The adjusted focus depth can be close or equal to the convergence depth at which the human eye observes the object in the virtual scene (in other words, the focus depth matches the convergence depth). This avoids the visual fatigue caused by a mismatch between the convergence depth and the focus depth and improves the user experience.
As an example, the following describes, with reference to FIG. 10, a specific implementation in which the virtual display device (such as VR glasses) in the embodiments of this application controls the zoom module so that the focus depth matches the convergence depth. One display component of the VR glasses is taken as an example; the specific implementation of the other display component is similar and is not repeated.
For example, in some embodiments, when the user observes a nearby object in the virtual scene (the corresponding convergence depth when observing that object may be at the position of virtual image plane 1 shown in part (a) of FIG. 10), the VR glasses may control the zoom module to adjust its own optical power (for example, to optical power A) so that the virtual image plane corresponding to the display component falls at the position of virtual image plane 1 shown in part (a) of FIG. 10. In this way, the focus depth matches the convergence depth when the user observes a nearby object in the virtual scene.
In other embodiments, when the user observes a distant object in the virtual scene (the corresponding convergence depth when observing that object may be at the position of virtual image plane 2 shown in part (b) of FIG. 10), the VR glasses may control the zoom module to adjust its own optical power (for example, to optical power B) so that the virtual image plane corresponding to the display component falls at the position of virtual image plane 2 shown in part (b) of FIG. 10. In this way, the focus depth matches the convergence depth when the user observes a distant object in the virtual scene.
As a possible implementation, optical power B may be smaller than optical power A. For example, optical power A may be -1 D and optical power B may be -3 D.
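For orientation, the reciprocal relation between a vergence in diopters and a distance in meters can be sketched as follows; treating -1 D and -3 D as the vergence of the resulting virtual image plane is an assumption made here for illustration, since the text only states the two optical power values.

```python
# Hypothetical sketch: the reciprocal relation between a vergence in diopters and
# the corresponding distance in meters.

def diopters_to_distance_m(power_d: float) -> float:
    """Distance (m) whose vergence magnitude equals |power_d| diopters."""
    if power_d == 0:
        return float("inf")            # 0 D corresponds to an image at infinity
    return 1.0 / abs(power_d)


if __name__ == "__main__":
    for p in (-1.0, -3.0):
        print(f"{p:+.0f} D -> virtual image plane at about {diopters_to_distance_m(p):.2f} m")
```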
So that those skilled in the art can understand more clearly how the virtual display method provided by the embodiments of this application is implemented, the implementation flow of the virtual display method in this example is described below, taking VR glasses composed as shown in FIG. 9 as the virtual display device that executes the method.
For example, refer to FIG. 11, which is a schematic flowchart of a virtual display method provided by an embodiment of this application. As shown in FIG. 11, the method may include:
S1101: While presenting the virtual three-dimensional environment to the user, determine the direction of the user's line of sight through the eye-tracking system.
In this example, the virtual three-dimensional environment may include objects that are close to the user and objects that are far from the user. For example, refer to FIG. 12: the object close to the user may be virtual object 1, and the object far from the user may be virtual object 2. It should be noted that, to present an object in the virtual three-dimensional environment (such as virtual object 1 or virtual object 2) to the user, the VR glasses display different images on the display screens. With reference to the foregoing description and FIG. 12, when virtual objects at different depths are presented to the user, the display screen needs to display different content. Take display screen L as an example. When virtual object 1 is displayed, display screen L may show an image that includes virtual object 1, and the position of virtual object 1 in that image may be near the intersection of the left eye's line of sight toward virtual object 1 and display screen L (referred to as position 1). When virtual object 2 is displayed, display screen L may show an image that includes virtual object 2, and the position of virtual object 2 in that image may be near the intersection of the left eye's line of sight toward virtual object 2 and display screen L (referred to as position 2). Clearly, when virtual objects with different depths are displayed, the positions of the objects on the display screen are different. For example, in the scene shown in FIG. 12, when virtual object 2, which has a large depth, is displayed, its position on display screen L (position 2) is closer to the middle of the display screen than the position used when virtual object 1, which has a smaller depth, is displayed (position 1).
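The following minimal sketch, not taken from the patent, illustrates this effect with similar triangles: a point at a larger depth intersects the screen plane closer to the eye's own axis (the middle of that eye's screen, assuming the screen is centered on the eye). The eye positions, screen distance, and depths are assumed values.

```python
# Hypothetical sketch: where the line from an eye to a virtual point at a given
# depth crosses a screen plane placed a short distance in front of the eyes.

def on_screen_x(eye_x_m: float, obj_x_m: float, obj_depth_m: float,
                screen_depth_m: float) -> float:
    """X coordinate on the screen plane (at screen_depth_m) of the eye-to-object line."""
    t = screen_depth_m / obj_depth_m          # fraction of the way from eye to object
    return eye_x_m + t * (obj_x_m - eye_x_m)


if __name__ == "__main__":
    ipd = 0.063                                # interpupillary distance (assumed)
    left_eye, right_eye = -ipd / 2, +ipd / 2
    screen = 0.05                              # screen plane 5 cm in front of the eyes (assumed)
    for depth in (1.0, 20.0):                  # a near and a far virtual object at x = 0
        xl = on_screen_x(left_eye, 0.0, depth, screen)
        xr = on_screen_x(right_eye, 0.0, depth, screen)
        print(f"object at {depth:5.1f} m -> left-screen x = {xl * 1000:+.2f} mm, "
              f"right-screen x = {xr * 1000:+.2f} mm")
```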
In the embodiments of this application, the VR glasses may implement eye tracking through methods such as the pupil-corneal reflection method described above. For example, when the VR glasses are equipped with an eye-tracking system for each eye, the direction of the line of sight of each eye can be determined by the eye-tracking system for that eye. For instance, taking the case in which the eye observes point P1 on virtual object 1, the VR glasses can determine that the direction of the line of sight is the line of sight toward P1 shown in FIG. 12. Similarly, taking the case in which the eye observes point P2 on virtual object 2, the VR glasses can determine that the direction of the line of sight is the line of sight toward P2 shown in FIG. 12.
S1102: Determine the current convergence depth according to the direction of the user's line of sight.
As described above, when the user observes an object in the virtual scene, the user is in fact observing, with the left eye and the right eye respectively, the projections of the object on the virtual image plane.
For example, when the user observes virtual object 1, the left eye receives the projection of the virtual object on the corresponding virtual image plane A, and the projection observed by the left eye may be the projection perpendicular to the direction of the left eye's line of sight. The right eye receives the projection of the virtual object on the corresponding virtual image plane A, and the projection observed by the right eye may be the projection perpendicular to the direction of the right eye's line of sight. If the depth of virtual image plane A (that is, the focus depth) does not match the depth of the virtual object in the virtual scene (that is, the convergence depth), VAC occurs.
In this example, the VR glasses can determine, from the directions of the lines of sight of the two eyes obtained in S1101, the depth at which the user is currently observing an object in the virtual scene (that is, the convergence depth). As an example, take the user observing point P1 on virtual object 1. From the lines of sight of the user's left eye and right eye when observing virtual object 1, the VR glasses can determine that the convergence angle corresponding to observing P1 is a1. Similarly, when the user observes point P2 on virtual object 2, the VR glasses can determine that the current convergence angle is a2.
In this way, from the convergence angle, the VR glasses can determine the convergence depth of the position the user is currently observing. Continuing with the example in which the user is observing point P1: because the virtual three-dimensional environment is constructed by the VR glasses, the three-dimensional coordinates of every object in that environment are known to the VR glasses. In some embodiments, from the angle between the lines of sight of the left eye and the right eye when they look at P1 in the virtual scene (a1 in FIG. 12) and the distance between the left eye and the right eye, the VR glasses can calculate the distance between the user and the currently observed point P1, that is, the convergence depth.
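A minimal sketch of this calculation, assuming the gaze point lies symmetrically in front of the two eyes, is given below; the variable names and the 63 mm interpupillary distance are assumptions.

```python
# Hypothetical sketch: convergence depth from the convergence angle and the
# interpupillary distance, for a fixation point centered between the two eyes.
import math


def convergence_depth_m(convergence_angle_rad: float, ipd_m: float = 0.063) -> float:
    """Depth of the fixation point: half the IPD over tan(half the convergence angle)."""
    if convergence_angle_rad <= 0:
        return float("inf")          # parallel lines of sight: fixation at infinity
    return (ipd_m / 2.0) / math.tan(convergence_angle_rad / 2.0)


if __name__ == "__main__":
    for angle_deg in (3.6, 0.36):    # an a1-like (near) and an a2-like (far) angle
        d = convergence_depth_m(math.radians(angle_deg))
        print(f"convergence angle {angle_deg:4.2f} deg -> depth about {d:6.2f} m")
```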
In other embodiments, the VR glasses may determine, from the user's line-of-sight directions obtained in S1101, the position of the intersection of the lines of sight of the two eyes in the virtual three-dimensional environment; that position can be taken as the user's gaze point in the virtual three-dimensional environment. Since the three-dimensional coordinates of the gaze point in the virtual three-dimensional environment are then known, the VR glasses can combine them with the user's own three-dimensional coordinates in the virtual three-dimensional environment to calculate the distance from the user to the gaze point, and thereby obtain the convergence depth.
It should be noted that, in the virtual three-dimensional environment, the user's lines of sight do not necessarily intersect exactly at one point. Therefore, in some embodiments of this application, the VR glasses may determine the three-dimensional coordinates of the gaze point in the virtual three-dimensional environment by the following method:
The VR glasses may take the X coordinate and the Y coordinate of the intersection of the projections of the two lines of sight onto the XOY plane, for example (X1, Y1), as the projection of the user's current gaze point on the XOY plane. The VR glasses may then determine the height of the gaze point from the Z coordinate of the left-eye line of sight at the position (X1, Y1) in the virtual three-dimensional environment (denoted Z_L) and the Z coordinate of the right-eye line of sight at (X1, Y1) (denoted Z_R). For example, the Z coordinate of the gaze point may be taken as the average of the two:

Z = (Z_L + Z_R) / 2

In this way, the coordinates of the gaze point in the virtual three-dimensional environment can be determined as:

(X1, Y1, (Z_L + Z_R) / 2)
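The second method described above can be sketched in the same spirit: intersect the XY projections of the two lines of sight, average the two heights at that point, and take the distance from the midpoint between the eyes as the convergence depth. The function name and the use of NumPy are illustrative assumptions, not part of this application.

```python
import numpy as np

def gaze_point_and_depth(left_eye, left_dir, right_eye, right_dir):
    """Estimate the gaze point and the convergence depth from the two eye
    positions and their (non-parallel) sight directions in the virtual 3D
    environment, following the construction described above."""
    left_eye, left_dir = np.asarray(left_eye, float), np.asarray(left_dir, float)
    right_eye, right_dir = np.asarray(right_eye, float), np.asarray(right_dir, float)

    # Solve left_eye_xy + t*left_dir_xy = right_eye_xy + s*right_dir_xy for (t, s).
    a = np.stack([left_dir[:2], -right_dir[:2]], axis=1)
    b = right_eye[:2] - left_eye[:2]
    t, s = np.linalg.solve(a, b)
    x1, y1 = left_eye[:2] + t * left_dir[:2]       # (X1, Y1) on the XOY plane

    z_left = left_eye[2] + t * left_dir[2]         # height of the left-eye ray at (X1, Y1)
    z_right = right_eye[2] + s * right_dir[2]      # height of the right-eye ray at (X1, Y1)
    gaze_point = np.array([x1, y1, (z_left + z_right) / 2.0])

    eye_center = (left_eye + right_eye) / 2.0
    return gaze_point, float(np.linalg.norm(gaze_point - eye_center))
```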
在本申请的另一些实施例中,在VR眼镜通过眼动追踪确定用户的主视眼时,则可以将该主视眼对应的注视点作为双眼观察时的注视点。In some other embodiments of the present application, when the VR glasses determine the user's dominant eye through eye movement tracking, the fixation point corresponding to the dominant eye may be used as the fixation point during binocular observation.
S1103、根据辐辏深度,调整虚像面的位置,使得变焦深度与辐辏深度匹配。S1103. Adjust the position of the virtual image plane according to the depth of convergence, so that the zoom depth matches the depth of convergence.
Here, matching the zoom depth to the convergence depth may mean that the two depths are equal, or that the difference between the zoom depth and the convergence depth is smaller than a preset threshold. When the zoom depth matches the convergence depth, the VAC that would otherwise arise during prolonged use of a virtual display device (such as VR glasses) can be avoided.
示例性的,VR眼镜可以通过调整变焦模块的光焦度,调整虚像面的位置,从而达到调整变焦深度的效果。在本申请实施例中,VR眼镜可以根据如图10所示的调整方式,实现变焦模块的光焦度的调整。此处不再赘述。Exemplarily, the VR glasses can adjust the position of the virtual image plane by adjusting the optical power of the zoom module, so as to achieve the effect of adjusting the zoom depth. In the embodiment of the present application, the VR glasses can adjust the optical power of the zoom module according to the adjustment manner shown in FIG. 10 . I won't repeat them here.
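This application does not give a formula linking the optical power of the zoom module to the position of the virtual image plane, so the following is only a rough sketch under a single thin-lens approximation, with the display assumed to sit a fixed distance in front of one variable-focus element and the rest of the optical path ignored. The function names, the 0.25 diopter tolerance and the example distances are assumptions made for the illustration.

```python
def required_power(d_screen_m: float, target_image_distance_m: float) -> float:
    """Thin-lens approximation: with the display d_screen in front of the
    variable-focus element, a power of (1/d_screen - 1/L) diopters places the
    virtual image at distance L; as L grows, the power tends to 1/d_screen."""
    return 1.0 / d_screen_m - 1.0 / target_image_distance_m

def update_zoom(d_screen_m, vergence_depth_m, current_power, tolerance_diopters=0.25):
    """Adjust the zoom module only when the mismatch between the zoom depth and
    the measured convergence depth exceeds a tolerance."""
    target = required_power(d_screen_m, vergence_depth_m)
    return target if abs(target - current_power) > tolerance_diopters else current_power

# Display 5 cm from the element: an image at 1 m needs ~19 D, at 3 m ~19.7 D.
print(round(required_power(0.05, 1.0), 2), round(required_power(0.05, 3.0), 2))
```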
结合图12,以用户观察虚拟物体1为例。VR眼镜可以将虚像面调整到如图12所示的虚像面A的位置。这样,人眼为了能够看清该虚像面A上的物体,就可以控制睫状肌调整晶状体的状态,使得能够将虚像面A上的物体在视网膜上清晰成像。此时,人眼确定的变焦深度就可是人眼到虚像面A的距离。对应的,用户可以控制双眼旋转,以便将视线聚焦到虚拟三维环境中的P1点,而由于P1点在虚像面A上,或者P1点接近虚像面A。因此,用户根据双眼转动情况确定的辐辏深度也可以是(或者接近)人眼到虚拟虚像面A的距离。由此就实现了变焦深度与辐辏深度的匹配。Referring to FIG. 12 , take the user observing the virtual object 1 as an example. The VR glasses can adjust the virtual image plane to the position of virtual image plane A as shown in FIG. 12 . In this way, in order to see the objects on the virtual image plane A clearly, the human eye can control the ciliary muscle to adjust the state of the lens, so that the objects on the virtual image plane A can be clearly imaged on the retina. At this time, the zoom depth determined by the human eye is the distance from the human eye to the virtual image plane A. Correspondingly, the user can control the rotation of the eyes so as to focus the line of sight on the point P1 in the virtual three-dimensional environment, and because the point P1 is on the virtual image plane A, or the point P1 is close to the virtual image plane A. Therefore, the depth of convergence determined by the user according to the rotation of the eyes may also be (or be close to) the distance from the human eye to the virtual virtual image plane A. In this way, the matching of the zoom depth and the convergence depth is realized.
类似的,在用户观察虚拟物体2的情况下,VR眼镜可以将虚像面调整到如图12所示的虚像面B的位置,从而实现变焦深度与辐辏深度的匹配。Similarly, when the user observes the virtual object 2, the VR glasses can adjust the virtual image plane to the position of the virtual image plane B as shown in FIG. 12 , so as to realize the matching between the zoom depth and the convergence depth.
这样,在用户通过VR眼镜观察虚拟场景中的物体时,就不会出现由于变焦深度与辐辏深度不一致导致的视疲劳。由此即可达到在向用户提供虚拟显示功能的同时,提升用户视觉体验,避免对视力的损伤的效果。In this way, when the user observes objects in the virtual scene through the VR glasses, there will be no visual fatigue caused by the inconsistency between the zoom depth and the convergence depth. In this way, while providing the virtual display function to the user, the visual experience of the user can be improved and the damage to eyesight can be avoided.
It should be noted that, in some embodiments of this application, in order to provide the user with a better visual experience, in addition to matching the zoom depth to the convergence depth according to the method shown in FIG. 11 (that is, performing S1103), the virtual display device may also blur different objects in the currently displayed virtual three-dimensional environment according to the zoom depth (or the convergence depth), so that the user obtains a more intuitive sense of depth when observing objects in the virtual three-dimensional environment.
For example, with reference to FIG. 12 and FIG. 13, in some embodiments, take the case where the user is observing the nearby virtual object 1. The VR glasses may blur the distant view according to the depth at which virtual object 1 is located, so that the user sees an image such as the one shown in (a) of FIG. 13. Because the distant view is blurred, the user has a clearer view of the virtual object 1 currently being observed, which simulates the out-of-focus blur a user experiences when observing a real scene. Similarly, when the user observes a distant virtual object (such as virtual object 2), the VR glasses may blur the near view and show the user an image such as the one shown in (b) of FIG. 13, thereby making the distant view stand out.
需要说明的是,在虚拟显示设备对远景/近景进行虚化处理时,可以根据所要处理的物体对应的深度(如变焦深度或者辐辏深度)与当前用户在观察物体的深度之间的 差异,确定虚化处理的程度。比如,以VR眼镜对远景进行虚化处理为例。当远景中的物体对应的深度与用户当前观察的近景中的物体的深度差异较大(如大于深度阈值)时,则对该远景中的物体执行高度虚化处理。对应的,当远景中的物体对应的深度与用户当前观察的近景中的物体的深度差异较小(如小于深度阈值)时,则VR眼镜可以适当降低对该远景中的物体的虚化处理的程度,从而增加用户所观察图像的层次感。在本示例中,是以通过深度阈值确定虚化处理程度为例进行说明的,在本申请的另一些实施例中,VR眼镜中还可以预设多个深度阈值,每个深度阈值对应一个虚化处理程度,从而达到根据深度差异的多层虚化处理的效果。It should be noted that when the virtual display device blurs the distant view/near view, it can be determined according to the difference between the depth corresponding to the object to be processed (such as zoom depth or convergence depth) and the depth at which the current user is observing the object. The degree of blurring. For example, take VR glasses as an example to blur the distant view. When the depth corresponding to the object in the foreground is greatly different from the depth of the object in the foreground currently observed by the user (for example, greater than the depth threshold), the object in the foreground is highly blurred. Correspondingly, when the difference between the depth corresponding to the object in the foreground and the depth of the object in the foreground currently observed by the user is small (for example, less than the depth threshold), the VR glasses can appropriately reduce the effect of blurring the object in the foreground. degree, thereby increasing the layering of the image observed by the user. In this example, the degree of blurring is determined by the depth threshold as an example. In other embodiments of the present application, multiple depth thresholds can be preset in the VR glasses, and each depth threshold corresponds to a virtual blurring. The degree of blurring can be adjusted to achieve the effect of multi-layer blurring according to the depth difference.
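A minimal sketch of such multi-level blurring is given below; the two depth thresholds and the blur radii are illustrative values chosen for the example, not values specified in this application.

```python
def blur_radius(object_depth_m: float, focus_depth_m: float) -> float:
    """Map the depth difference between an object and the depth the user is
    currently observing to a blur radius (in pixels)."""
    diff = abs(object_depth_m - focus_depth_m)
    if diff < 0.5:     # close to the observed depth: keep the object sharp
        return 0.0
    if diff < 2.0:     # moderate depth difference: light blur
        return 2.0
    return 6.0         # large depth difference: strong blur
```

Each rendered object would then be blurred with the radius returned for its depth, giving the layered, defocus-like appearance described above.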
结合前述说明,用户在长时间观察近处物体时,会使得睫状肌长时间收缩。由此会对用户的视力造成严重的影响。比如导致假性近视,甚至真性近视。In combination with the foregoing description, when the user observes a nearby object for a long time, the ciliary muscle will be contracted for a long time. This will seriously affect the eyesight of the user. Such as lead to false myopia, or even true myopia.
示例性的,结合图3,参考图14。如图3中的说明,用户在使用人眼观察远处物体时,入射人眼的光线接近平行光,该光线在人眼中的光路如图3中的(a)所示。在用户使用人眼观察近处物体时,可以收缩睫状肌,调整晶状体的屈光度,光线在人眼中的光路如图3中的(b)所示。在长时间观察近处物体时,睫状肌长时间处于收缩状态,此时如果人眼去观察其他距离的物体(如远处的物体)时,睫状肌就可能无法及时放松,导致晶状体的屈光度依然维持在较高的水平,此时光线在入射人眼之后,光路如图15所示。可以看到,光线在经过晶状体之后,无法聚焦在视网膜上。这样也就无法视网膜上成清晰的像。用户也就无法看清楚对应的物体。在一段时间之后,睫状肌逐渐放松,晶状体的屈光度对应调整,光线在经过晶状体之后的光路逐渐恢复到如图3中的(a)所示的状态,用户也就逐渐能够看清楚物体。这种短时间内无法看清物体的状态即可称为假性近视。如果用户的人眼长期处于假性近视的状态,就可能演变为真性近视,即睫状肌无法调整晶状体的屈光度回复到如图3中的(a)所示的状态,在平行光入射人眼之后,其聚焦的位置处于晶状体和视网膜之间。For example, refer to FIG. 14 in combination with FIG. 3 . As illustrated in FIG. 3 , when a user observes a distant object with the human eye, the light entering the human eye is close to parallel light, and the light path of the light in the human eye is shown in (a) in FIG. 3 . When the user uses the human eye to observe nearby objects, the ciliary muscle can be contracted to adjust the diopter of the lens. The light path of the light in the human eye is shown in (b) in Figure 3 . When observing nearby objects for a long time, the ciliary muscle is in a state of contraction for a long time. At this time, if the human eye observes objects at other distances (such as distant objects), the ciliary muscle may not be able to relax in time, resulting in loss of the lens. The diopter is still maintained at a relatively high level. At this time, after the light enters the human eye, the light path is shown in Figure 15. It can be seen that after the light passes through the lens, it cannot be focused on the retina. In this way, a clear image cannot be formed on the retina. The user cannot see the corresponding object clearly. After a period of time, the ciliary muscle gradually relaxes, the diopter of the lens is adjusted accordingly, and the optical path of the light after passing through the lens gradually returns to the state shown in (a) in Figure 3, and the user can gradually see objects clearly. This state of being unable to see objects clearly in a short period of time is called pseudomyopia. If the user's human eyes are in the state of pseudomyopia for a long time, it may evolve into true myopia, that is, the ciliary muscle cannot adjust the diopter of the lens to return to the state shown in (a) in Figure 3, when parallel light enters the human eye It then focuses between the lens and the retina.
本申请实施例提供的虚拟显示设备,由于能够控制变焦模块调整虚像面的深度,因此能够灵活地向用户清晰地展示近处物体以及远处物体,同时不出现VAC。在本申请的一些实施例中,虚拟显示设备还可以向用户提供睫状肌放松和锻炼的功能。The virtual display device provided by the embodiment of the present application can flexibly and clearly display near objects and distant objects to the user without VAC because the zoom module can be controlled to adjust the depth of the virtual image plane. In some embodiments of the present application, the virtual display device can also provide the user with functions of relaxing and exercising the ciliary muscle.
示例性的,在虚拟显示设备检测到用户通过虚拟显示设备观察近处的虚拟物体的时间较长(如大于预设的时长)时,就可以提示用户进行休息,或者,切换显示深度较大的虚拟物体。比如,在用户观察如图13中的(a)所示场景的时长大于预设的时长时,VR眼镜可以提示用户用眼时间过长。在不同实现中,VR眼镜可以通过在显示屏上显示提示性信息(如文字信息等),或者通过振动,或者通过语音等方式,提示用户用眼时间过长。接着,VR眼镜可以在用户的控制下,或者自发的,显示具有较大深度虚拟物体,比如显示如图13中的(b)所示的场景。在另一些实施例中,虚拟显示设备还可以在虚像面上显示同一个物体,通过调焦模块改变该物体在虚拟环境中的显示的虚像面的深度,从而使得用户可以在不同的深度的虚像面上聚焦的效果。这样,虚拟显示设备可以引导人眼的睫状肌放松,从而避免假性近视。另外,通过反复引导人眼的睫状肌进行放松和收缩,也可以锻炼睫状肌,避免肌肉僵化,从而提升对于视力问题的抵抗能力。Exemplarily, when the virtual display device detects that the user observes a nearby virtual object through the virtual display device for a long time (for example, longer than a preset duration), it may prompt the user to take a break, or switch to display a virtual object with a greater depth. virtual objects. For example, when the user observes the scene shown in (a) in FIG. 13 for longer than the preset duration, the VR glasses may prompt the user to use the eyes for too long. In different implementations, the VR glasses can prompt the user to use the eyes for too long by displaying prompt information (such as text information, etc.) on the display screen, or by vibrating, or by voice. Next, the VR glasses can display a virtual object with a greater depth under the control of the user, or spontaneously, such as displaying the scene shown in (b) in FIG. 13 . In some other embodiments, the virtual display device can also display the same object on the virtual image plane, and change the depth of the virtual image plane displayed by the object in the virtual environment through the focusing module, so that the user can view the virtual image at different depths. The effect of focusing on the surface. In this way, the virtual display device can guide the ciliary muscle of the human eye to relax, thereby avoiding pseudomyopia. In addition, by repeatedly guiding the ciliary muscle of the human eye to relax and contract, the ciliary muscle can also be exercised to avoid muscle stiffness, thereby improving resistance to vision problems.
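As one possible way to detect prolonged near viewing and trigger such a prompt, the sketch below tracks how long the convergence depth has stayed below a near limit; the class name, the 1 m near limit and the 20 minute threshold are assumptions made for the example.

```python
import time

class NearWorkMonitor:
    """Track continuous near viewing and report when the user should be
    prompted to rest or to look at content displayed at a greater depth."""
    def __init__(self, near_limit_m=1.0, max_near_seconds=20 * 60):
        self.near_limit_m = near_limit_m
        self.max_near_seconds = max_near_seconds
        self.near_since = None

    def update(self, vergence_depth_m, now=None):
        now = time.time() if now is None else now
        if vergence_depth_m < self.near_limit_m:
            self.near_since = self.near_since or now
            return (now - self.near_since) > self.max_near_seconds  # True: prompt the user
        self.near_since = None   # the user looked at something farther away
        return False
```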
作为一种可能的实现,以虚拟显示设备为VR眼镜为例。VR眼镜可以用户使用期 间,提供上述不同深度物体的显示,以对人眼起到保护作用。示例性的,VR眼镜可以在播放广告的过程中,或者视频加载的等待过程中,通过控制变焦模块调整VR眼镜中光学显示模组的光焦度,向用户展示不同深度的虚拟物体。在一些实施例中,VR眼镜可以在向用户展示不同深度的虚拟物体的过程中,通过语音或者文字提示或者其他人机交互方式,引导用户控制人眼观察该不同深度的虚拟物体,从而实现调节人眼睫状肌的效果。As a possible implementation, take the virtual display device as VR glasses as an example. VR glasses can provide the display of the above-mentioned objects with different depths during the user's use, so as to protect the human eyes. Exemplarily, the VR glasses can display virtual objects of different depths to the user by controlling the zoom module to adjust the optical power of the optical display module in the VR glasses during the advertisement playing process or the waiting process of the video loading. In some embodiments, during the process of showing virtual objects of different depths to the user, the VR glasses can guide the user to control the human eyes to observe the virtual objects of different depths through voice or text prompts or other human-computer interaction methods, so as to realize adjustment The effect of the ciliary muscle in the human eye.
结合上述说明,在人眼的睫状肌长期处于收缩状态下时,可能会出现视疲劳甚至近视。对于提供虚拟显示功能的虚拟显示设备(如VR眼镜)而言,其用户群体中的近视用户占比也越来越高。Combined with the above description, when the ciliary muscle of the human eye is in a contracted state for a long time, visual fatigue or even myopia may occur. For virtual display devices (such as VR glasses) that provide virtual display functions, the proportion of myopic users in the user group is also increasing.
一般而言,在正常生活中,近视用户可以通过佩戴眼镜等方式,实现视力的矫正。示例性的,结合图14。对于近视用户,光线进入人眼之后的聚焦面可能位于晶状体以及视网膜之间,这样,光线(如平行光)在入射人眼之后,就无法在视网膜上清晰成像。用户可以佩戴近视镜,调整光线在入射人眼之前的光路,已达到光线在入射人眼之后能够在视网膜上清晰成像的效果。比如,参考图15,以近视镜为凹透镜为例。平行光在入射人眼之前,可以通过凹透镜折射,进而使得光线在通过晶状体之后,能够汇聚在视网膜上,由此就可以使得发出光线的物体能够在视网膜上清晰成像。Generally speaking, in normal life, myopic users can achieve vision correction by wearing glasses and other methods. For example, refer to FIG. 14 . For myopic users, the focal plane of the light entering the human eye may be located between the lens and the retina, so that the light (such as parallel light) cannot be clearly imaged on the retina after entering the human eye. Users can wear myopia glasses to adjust the optical path of light before it enters the human eye, so that the light can be clearly imaged on the retina after entering the human eye. For example, referring to FIG. 15 , take the myopia mirror as a concave lens as an example. Before the parallel light enters the human eye, it can be refracted by the concave lens, so that the light can converge on the retina after passing through the lens, so that the object emitting the light can be clearly imaged on the retina.
然而,在近视用户使用虚拟显示设备时,佩戴眼镜显然是不够便利的。因此,为了能够使得近视用户在佩戴虚拟显示设备时,使得显示屏显示的画面能够被用户看清楚(即显示屏对应的虚像可以在人眼的视网膜上清晰成像)。在一些虚拟显示设备的实现中,虚拟显示设备会向用户提供手动调整人眼和显示屏之期间的光学系统的焦距,比如结合图4所示的VR眼镜的组成示意。在该实现中,该VR眼镜可以向用户提供调整目镜光焦度的控制方式。比如,在VR眼镜上可以提供机械旋转机构,使得用户在旋转该机械旋转机构时,能够达到调整目镜光焦度的效果。通过调整目镜的光焦度,使得用户可以通过手动控制的方式,通过人眼看清楚显示屏对应的虚像,即使得显示屏的虚像在人眼的视网膜上清晰成像。在该示例中,调整后目镜既可以起到模拟近视镜的作用。由此,即可使得近视用户能够在不佩戴近视镜的情况下,也能够通过虚拟显示设备使用其提供的虚拟显示功能。However, when a myopic user uses a virtual display device, it is obviously not convenient enough to wear glasses. Therefore, in order to enable a myopic user to wear a virtual display device, the picture displayed on the display screen can be clearly seen by the user (that is, the virtual image corresponding to the display screen can be clearly imaged on the retina of the human eye). In the implementation of some virtual display devices, the virtual display device will provide the user with the manual adjustment of the focal length of the optical system between the human eye and the display screen, such as the composition diagram of VR glasses shown in FIG. 4 . In this implementation, the VR glasses can provide the user with a control method to adjust the optical power of the eyepiece. For example, a mechanical rotation mechanism can be provided on the VR glasses, so that the user can achieve the effect of adjusting the optical power of the eyepiece when rotating the mechanical rotation mechanism. By adjusting the optical power of the eyepiece, the user can clearly see the virtual image corresponding to the display screen through the human eyes through manual control, that is, the virtual image of the display screen is clearly formed on the retina of the human eye. In this example, the adjusted eyepiece acts both to simulate myopia. Thus, the myopic user can use the virtual display function provided by the virtual display device without wearing myopia glasses.
需要说明的是,人眼在观察物体是具有一定的自动调节能力。比如,在被观察物体在人眼中的成像落在视网膜前方或后方时,人眼会自动控制睫状肌调整晶状体的凹凸状态,以尽量调整成像位置,从而将被观察物体在人眼中的成像位置调节到视网膜上。It should be noted that the human eye has a certain automatic adjustment ability when observing objects. For example, when the image of the observed object in the human eye falls in front or behind the retina, the human eye will automatically control the ciliary muscle to adjust the concave-convex state of the lens, so as to adjust the imaging position as much as possible, so as to adjust the imaging position of the observed object in the human eye Adjusted to the retina.
基于此,如果采用上述手动调节的方案,即使用户通过手动控制调整目镜的光焦度,使得显示屏的虚像可以在视网膜上清晰成像。由于人眼自动调节机制,使得该状态下的显示屏的虚像在视网膜上清晰成像是由于睫状肌过度挤压晶状体导致的。也就是说,在睫状肌正常控制晶状体的范围内,当前目镜的光焦度就无法保证显示屏的虚像可以在视网膜上清晰成像。因此,如果现实装置根据当前目镜的光焦度向用户提供虚拟显示功能,就会使得用户人眼的睫状肌长期过度挤压晶状体,从而产生眼部不适,严重时可能会影响用户的眼健康。Based on this, if the above-mentioned manual adjustment solution is adopted, even if the user adjusts the optical power of the eyepiece through manual control, the virtual image of the display screen can be clearly imaged on the retina. Due to the automatic adjustment mechanism of the human eye, the virtual image of the display screen in this state is clearly imaged on the retina because the ciliary muscle excessively squeezes the lens. That is to say, within the range where the ciliary muscle normally controls the lens, the current optical power of the eyepiece cannot guarantee that the virtual image on the display screen can be clearly imaged on the retina. Therefore, if the reality device provides the user with a virtual display function based on the focal power of the current eyepiece, the ciliary muscle of the user's eye will over-extrude the lens for a long time, resulting in eye discomfort, which may affect the user's eye health in severe cases .
The virtual display device provided by the embodiments of this application (such as any of the wearable devices or virtual display devices in FIG. 6A to FIG. 10) can, while providing the virtual display function to the user, use the automatic adjustment function of the zoom device to accurately measure the myopia of the user's eyes. Measuring the myopia of the eyes in this way avoids the influence of the eye's own automatic accommodation, so that, based on the real condition of the eyes (such as the degree of myopia), eye discomfort during use of the virtual display function of the virtual display device can be avoided and the user experience improved.
以下结合上述图6A-图9中对于本申请实施例提供的穿戴设备或者虚拟显示设备的描述,通过附图对本申请实施例提供的根据用户的人眼的真实情况提供虚拟显示的方案进行详细说明。In the following, in conjunction with the above-mentioned description of the wearable device or virtual display device provided by the embodiment of the application in Figure 6A-Figure 9, the scheme of providing virtual display according to the real situation of the user's human eyes provided by the embodiment of the application will be described in detail through the accompanying drawings .
为了便于说明,以下以用户为近视用户,虚拟显示设备为具有如图9所示的组成的VR眼镜为例。For the convenience of description, the following assumes that the user is a myopic user and the virtual display device is VR glasses with the composition shown in FIG. 9 as an example.
在本示例的一些实施例中,在VR眼镜向用户提供虚拟显示功能之前,可以确定用户的双眼中,每个人眼对图像的识别能力。在一些实施例中,人眼对图像的识别能力可以通过人眼的度数进行标识。In some embodiments of this example, before the VR glasses provide the virtual display function to the user, the image recognition ability of each of the user's eyes may be determined. In some embodiments, the ability of human eyes to recognize images can be identified by the power of human eyes.
It is understood that the power of the human eye can currently be determined with a standard eye chart. For example, FIG. 16 shows an eye chart, which contains letters used to test the recognition ability of the eye together with the visual acuity corresponding to each letter size. For example, the letters in the second row correspond to a visual acuity of 4.2 on the logarithmic scale, or 0.15 on the international (decimal) scale. In this example, the process of determining the power of the eye may also be called optometry. When an eye chart is used for optometry, it is hung 5 m away from the user. The user is guided to identify the opening direction of the letters with the left eye and the right eye separately, and the acuity of the smallest row whose opening directions the eye can still determine is taken as the acuity of that eye. For example, when observing the chart with the left eye, if the smallest row whose letter openings can be identified is the 5.0 (1.0) row, the acuity of the left eye is 1.0. In this case the letter openings of the 5.0 (1.0) row image on the retina with a width of about 5 μm, so the resolution of the left eye reaches a minimum width of about 5 μm. The vision of the left eye is then generally considered normal, with no myopia.
可以理解的是,人眼想要能够判断字母的开口方向,就需要能够根据视网膜上字母的成像,区分开口方向对应的宽度。比如,如果人眼能够看清5.0(1.0)对应一行的字母开口方向,则表明人眼能够区分视网膜上5um成像的宽度。而字母在人眼视网膜上的成像尺寸,显然与字母与人眼的距离有关。这也就使得通过视力表进行验光时,一定要保证人眼与字母之间的距离。It is understandable that if the human eye wants to be able to judge the opening direction of a letter, it needs to be able to distinguish the width corresponding to the opening direction based on the imaging of the letter on the retina. For example, if the human eye can clearly see the direction of the letter opening corresponding to a row of 5.0 (1.0), it means that the human eye can distinguish the width of the 5um image on the retina. The imaging size of letters on the retina of the human eye is obviously related to the distance between the letter and the human eye. This also makes it necessary to ensure the distance between the human eye and the letter when performing optometry through the eye chart.
With this traditional optometry method, the power of each of the user's eyes can be obtained, but an eye chart such as the one shown in FIG. 16 is required, and the site must meet considerable requirements (for example, it must allow a distance of 5 m between the person being examined and the eye chart). This is clearly not convenient enough. In addition, this traditional optometry method of determining the eye's ability to recognize images cannot be applied in a virtual display scenario.
对于此,采用本申请实施例提供的VR眼镜,可以采用如下方案,确定用户的人眼对图像的识别能力。For this, using the VR glasses provided in the embodiment of the present application, the following solution may be adopted to determine the image recognition ability of the user's human eyes.
示例性的,VR眼镜可以在显示屏上,依次向用户展示具有不同大小开口的检查图像(例如该检查图像可以包括如图16所示的视力表上的字母等),结合用户的反馈,确定用户对图像的识别能力。Exemplarily, the VR glasses can sequentially show the user inspection images with openings of different sizes on the display screen (for example, the inspection images can include letters on the eye chart as shown in FIG. 16 ), combined with user feedback, determine The user's ability to recognize images.
在本申请的一些实施例中,VR眼镜可以通过人眼识别图像的视分值,确定用户对图像的识别能力。In some embodiments of the present application, the VR glasses can determine the user's ability to recognize the image through the visual score of the image recognized by human eyes.
Here, the visual score may refer to the smallest angle of an image that the human eye can resolve when observing the image. For example, FIG. 17 is a schematic diagram of a visual score. Suppose the smallest letter opening the user can distinguish is the one shown at 1701. Then the angle subtended at the eye by that letter opening (the angle between the lines of sight shown in the figure) can be taken as the visual score of that eye.
由于视分值是一个角度的标识,因此可以使得人眼和检查图像之间的距离可以不受限制,自由调整。Since the visual score is an indication of an angle, the distance between the human eye and the inspection image can be freely adjusted without restriction.
After obtaining the visual score of the eye, the VR glasses can determine the visual acuity of the eye from the correspondence between visual scores and visual acuity, combined with the measured visual score.
As a possible implementation, the correspondence between visual scores and visual acuity may be preset in the VR glasses, or may be obtained by the VR glasses from the cloud when needed. In some embodiments, Table 1 shows one such correspondence between visual scores and visual acuity.
Table 1

Logarithmic eye chart    International (decimal) eye chart    Visual score
4.0                      0.1                                  10′
4.1                      0.12                                 7.947′
4.2                      0.15                                 6.312′
4.3                      0.20                                 5.013′
4.4                      0.25                                 3.982′
4.5                      0.3                                  3.163′
4.6                      0.4                                  2.512′
4.7                      0.5                                  1.996′
4.8                      0.6                                  1.585′
4.9                      0.8                                  1.259′
5.0                      1.0                                  1′
如表1所示,VR眼镜可以在人眼能够分辨的视分值为10’时,确定对应的视力度数就可以为4.0(0.1)。类似的,VR眼镜可以在人眼能够分辨的视分值为7.947’时,确定对应的视力度数就可以为4.1(0.12)。VR眼镜可以在人眼能够分辨的视分值为1.259’时,确定对应的视力度数就可以为4.9(0.8)。以此类推。As shown in Table 1, VR glasses can determine the corresponding visual acuity to be 4.0 (0.1) when the visual score that the human eye can distinguish is 10'. Similarly, VR glasses can determine the corresponding visual acuity to be 4.1 (0.12) when the visual score that the human eye can distinguish is 7.947'. VR glasses can determine the corresponding visual acuity to be 4.9 (0.8) when the visual score that the human eye can distinguish is 1.259'. and so on.
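A small lookup based on Table 1 could be implemented as sketched below; the data structure, the function name and the nearest-entry matching are illustrative assumptions rather than rules stated in this application.

```python
# Rows of Table 1: (visual score in arcminutes, logarithmic acuity, decimal acuity).
VISION_TABLE = [
    (10.0, 4.0, 0.10), (7.947, 4.1, 0.12), (6.312, 4.2, 0.15), (5.013, 4.3, 0.20),
    (3.982, 4.4, 0.25), (3.163, 4.5, 0.30), (2.512, 4.6, 0.40), (1.996, 4.7, 0.50),
    (1.585, 4.8, 0.60), (1.259, 4.9, 0.80), (1.0, 5.0, 1.00),
]

def acuity_from_score(score_arcmin: float):
    """Return the (logarithmic, decimal) acuity whose tabulated visual score is
    closest to the given visual angle in arcminutes."""
    _, log_acuity, dec_acuity = min(VISION_TABLE, key=lambda row: abs(row[0] - score_arcmin))
    return log_acuity, dec_acuity

print(acuity_from_score(7.947))  # (4.1, 0.12)
```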
请参考图18,为本申请实施例提供的一种虚拟显示方法的流程示意图。基于该流程,VR眼镜就能够获取用户的双眼各自对应的图像识别能力。其中,以图像识别能力通过视力标识为例。如图所述,该方案可以包括:Please refer to FIG. 18 , which is a schematic flowchart of a virtual display method provided by an embodiment of the present application. Based on this process, the VR glasses can obtain the corresponding image recognition capabilities of the user's eyes. Among them, take the image recognition ability as an example through vision identification. As illustrated, the program can include:
S1801、在用户使用VR眼镜时,向用户的第一人眼展示第一视分值对应的第一检测图像。S1801. When the user uses the VR glasses, display the first detected image corresponding to the first visual score to the first human eye of the user.
其中,第一人眼可以是用户的左眼或右眼中的任意一个。第一视分值可以是如表1所示的对应关系中的任一个视分值。为了能够对人眼的视力进行准确地测量,在本申请的一些实施例中,在VR眼镜开始检测用户的视力时,第一视分值可以是对应视角最大的视分值,比如如表1所示的10’。Wherein, the first human eye may be any one of the user's left eye or right eye. The first visual score may be any visual score in the corresponding relationship shown in Table 1. In order to accurately measure the vision of the human eye, in some embodiments of the present application, when the VR glasses start to detect the user's vision, the first visual score may be the visual score with the largest corresponding viewing angle, for example, as shown in Table 1 10' shown.
In this example, the VR glasses may display the first detection image on the display screen corresponding to the first human eye (for example, the first display screen), so that the first human eye observes the first detection image at the virtual image position corresponding to the first display screen. For example, with reference to FIG. 19, take the first human eye as the left eye and the component of the VR glasses that provides the virtual display function for the left eye as the first display component. As shown in FIG. 19, the VR glasses may control the display screen of the first display component to display the first detection image. The user can then see, through the left eye, the first detection image displayed at the position of the virtual image plane shown in the figure.
用户可以通过人眼(如第一人眼)观察该第一检测图像,判断是否能够看清该第一视分值对应的第一检测图像。比如,用户可以判断第一检测图像中包括的字母或图形的开口方向,并向VR眼镜输入判断结果,以便于VR眼镜确定第一人眼是否能够看清当前具有第一视分值的第一检测图像。The user can observe the first detection image with human eyes (such as the first human eye), and judge whether the first detection image corresponding to the first visual score can be seen clearly. For example, the user can judge the opening direction of the letters or graphics included in the first detection image, and input the judgment result to the VR glasses, so that the VR glasses can determine whether the first human eye can clearly see the current first image with the first visual score. Detect images.
示例性的,以VR眼镜显示的检测图像包括C字形字母为例。如图20所示,在该S1801中,C字形字母的开口大小可以对应到第一视分值。Exemplarily, it is taken that the detection image displayed by the VR glasses includes a C-shaped letter as an example. As shown in FIG. 20, in this S1801, the opening size of the C-shaped letter may correspond to the first visual score.
也就是说,VR眼镜需要保证用户看到的C字形字母的开口尺寸,对应于第一人眼的视分值为第一视分值。That is to say, the VR glasses need to ensure that the opening size of the C-shaped letter seen by the user corresponds to the first visual score of the first human eye.
在本申请实施例中,VR眼镜可以根据当前的视场角大小,以及向该视场角中的图形提供显示的像素个数,确定在显示具有第一视分值的C字形字母时,该C字形字母需要具有的尺寸大小。In the embodiment of the present application, the VR glasses can determine, according to the current field of view and the number of pixels displayed for the graphics in the field of view, when displaying the C-shaped letter with the first visual score, the The size that the C-shaped letter needs to have.
可以理解的是,显示屏在显示图像的过程中,是以像素为单位进行显示的。VR眼镜可以通过每度像素数(Pixel Per Degree,PPD)确定第一显示屏上,第一检测图像的显示尺寸,从而保证第一人眼看到的第一检测图像的开口为第一视分值。It can be understood that, in the process of displaying images on the display screen, the display is performed in units of pixels. VR glasses can determine the display size of the first detection image on the first display screen by Pixel Per Degree (PPD), so as to ensure that the opening of the first detection image seen by the first human eye is the first visual score .
For example, with reference to FIG. 21, take the field of view of the left eye when observing the first display screen as angle A. As shown in FIG. 21, the number of pixels on the display screen that contribute to the image presented to the user's left eye may be N pixels. The PPD in this scenario can then be obtained according to the following formula (1):

PPD = N / A    (1)

In this way, based on the PPD and the distance between the display screen and the user's eye (such as the first human eye), the VR glasses can determine the number of pixels that the opening of the C-shaped letter must span when displaying a C-shaped letter corresponding to the first visual score, and thereby determine the pixels used to display the C-shaped letter. On this basis, the VR glasses can ensure that the opening size of the C-shaped letter seen by the user corresponds to the first visual score at the first human eye, for example achieving the display effect shown in FIG. 20. It can be understood that with this approach the VR glasses determine the display size of the first detection image from the visual score. Compared with directly calculating a physical size for the first detection image, determining the number of pixels in this way is more accurate, so the user is shown a first detection image that more accurately has the first visual score.
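The sketch below illustrates formula (1) and the resulting pixel count for the letter opening; the example field of view, the pixel count and the function names are assumptions made for the illustration.

```python
def pixels_per_degree(pixels_across_fov: int, fov_degrees: float) -> float:
    """Formula (1): PPD = N / A, the pixels serving one eye divided by the
    field of view in degrees."""
    return pixels_across_fov / fov_degrees

def opening_size_in_pixels(score_arcmin: float, ppd: float) -> float:
    """Number of display pixels the letter opening must span so that it
    subtends the given visual angle (in arcminutes) at the eye."""
    return ppd * (score_arcmin / 60.0)

# 2000 pixels across a 100 degree field of view gives 20 PPD, so a
# 10 arcminute opening spans about 3.3 pixels.
ppd = pixels_per_degree(2000, 100.0)
print(round(opening_size_in_pixels(10.0, ppd), 1))
```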
需要说明的是,在本申请的一些实施例中,该第一检测图像可以包括一个用于检测人眼视力的字母或图形。示例性的,结合图22。图22中的(a)示出了一种第一检测图像中包括一个用于检测视力的字母的示例。如图22中的(a)所示,第一检测图像中的字母可以具有一个开口(如字母C的缺口)。VR眼镜可以根据前述说明,确定该C字形字母在第一显示屏上的显示尺寸,从而向用户显示开口对应第一视分值的C字形字母。It should be noted that, in some embodiments of the present application, the first detection image may include a letter or a figure for detection of human eyesight. For example, refer to FIG. 22 . (a) in FIG. 22 shows an example in which a letter for detecting eyesight is included in the first detection image. As shown in (a) of FIG. 22 , the letter in the first detection image may have an opening (such as a notch of letter C). The VR glasses can determine the display size of the C-shaped letter on the first display screen according to the foregoing description, so as to display the C-shaped letter whose opening corresponds to the first visual score to the user.
在本申请的另一些实施例中,该第一检测图像可以包括多个用于检测人眼视力的 字母或图形。示例性的,继续以第一检测图像中包括C字形字母为例。VR眼镜可以在第一显示屏上,一次性向用户展示多个开口对应第一视分值的C字形字母。其中,相邻两个C字形字母的开口方向可以不同。这样,用户就可以依次识别这多个C字形字母的开口方向,以便于VR眼镜可以根据用户输入的识别结果,更加准确地判断用户的视力情况。示例性的,图22中的(b)示出了一种第一检测图像包括多个C字形字母的显示示例。基于该第一检测图像,用户可以通过第一人眼从左向右依次判断对应字母的开口方向,由此使得VR眼镜可以基于此更加准确地判断用户是否能够看清该视分值(如第一视分值)对应的检测图像。In some other embodiments of the present application, the first detection image may include a plurality of letters or graphics used to detect human eyesight. Exemplarily, continue to take the example that the first detection image includes a C-shaped letter. The VR glasses can display multiple C-shaped letters with openings corresponding to the first visual score to the user at one time on the first display screen. Wherein, the opening directions of two adjacent C-shaped letters may be different. In this way, the user can sequentially recognize the opening directions of the multiple C-shaped letters, so that the VR glasses can more accurately judge the user's vision according to the recognition results input by the user. Exemplarily, (b) in FIG. 22 shows a display example in which the first detection image includes a plurality of C-shaped letters. Based on the first detection image, the user can sequentially judge the opening direction of the corresponding letter from left to right through the first human eye, so that the VR glasses can judge more accurately based on this whether the user can see the visual score (as shown in No. A detection image corresponding to a visual score).
另外,为了避免第一人眼的检测和第二人眼的检测过程的相互干扰,在本申请的一些实施例中,VR眼镜可以在第一显示屏显示第一检测图像的过程中,在另一个显示屏上(如第二显示屏上)显示黑屏,或者关闭第二显示屏。而在按照本申请实施例提供的方案完成第一人眼的检测之后,VR眼镜就可以按照类似方案,控制第二显示屏进行显示,以实现另一个人眼(如第二人眼)的视力检测。In addition, in order to avoid mutual interference between the detection process of the first human eye and the detection process of the second human eye, in some embodiments of the present application, the VR glasses can display the first detection image on the first display screen, and the other A black screen is displayed on one display (such as the second display), or the second display is turned off. After the detection of the first human eye is completed according to the scheme provided in the embodiment of the present application, the VR glasses can control the second display screen to display according to a similar scheme, so as to realize the vision of another human eye (such as the second human eye) detection.
S1802、接收用户的第一识别反馈。S1802. Receive first identification feedback from the user.
用户的第一人眼可以通过第一显示部件,看到开口对应第一视分值的第一检测图像。The user's first human eyes can see the first detection image corresponding to the first visual score with the opening through the first display component.
在本示例中,用户可以根据看到的第一检测图像的开口方向,向VR眼镜输入第一识别反馈。在一些实施例中,该第一识别反馈可以用于指示用户判断的第一检测图像的开口方向。In this example, the user may input the first recognition feedback to the VR glasses according to the opening direction of the first detected image seen. In some embodiments, the first identification feedback may be used to indicate the opening direction of the first detection image judged by the user.
示例性的,以第一检测图像中包括1个C字形字母为例。用户可以在看到该C字形字母之后,将该C字形字母的开口方向输入VR眼镜。也就是说,在该示例中,第一识别反馈可以是C字形字母的开口方向。Exemplarily, take a C-shaped letter included in the first detection image as an example. After seeing the C-shaped letter, the user can input the opening direction of the C-shaped letter into the VR glasses. That is to say, in this example, the first recognition feedback may be the opening direction of the C-shaped letter.
结合图23。以用户通过遥控器输入开口方向为例。在用户通过第一人眼,识别C字形字母的开口方向为向上时。在一些实施例中,用户可以通过触碰遥控器上“向上”对应的按钮(如图23中的(a)所示的按钮2301),以便输入指示开口方向向上的第一识别反馈。在另一些实施例中,用户可以通过在遥控器的触摸屏(如图23中的(b)所示的屏幕2302)上,输入向上滑动的操作,以便输入指示开口方向向上的第一识别反馈。Combined with Figure 23. Take the user inputting the opening direction through the remote control as an example. When the user recognizes that the opening direction of the C-shaped letter is upward through the first human eyes. In some embodiments, the user may input the first recognition feedback indicating the upward direction of the opening by touching a button corresponding to "up" on the remote control (button 2301 shown in (a) in FIG. 23 ). In some other embodiments, the user may input an upward sliding operation on the touch screen of the remote control (such as the screen 2302 shown in (b) in FIG. 23 ), so as to input the first recognition feedback indicating that the direction of the opening is upward.
需要说明的是,上述示例中,是以用户通过遥控器输入第一识别反馈为例进行说明的。在本申请的另一些实施例中,用户还可以通过语音指令,和/或手势指令等方式,实现第一识别反馈的输入。本申请实施例对应用户输入第一识别反馈的具体方式不作限制。It should be noted that, in the above example, the user inputs the first identification feedback through the remote controller as an example for illustration. In some other embodiments of the present application, the user may also implement the input of the first recognition feedback through voice commands and/or gesture commands. The embodiment of the present application does not limit the specific manner in which the user inputs the first identification feedback.
这样,VR眼镜就可以接收到用户的第一人眼对于当前的第一检测图像的开口方向的识别情况。In this way, the VR glasses can receive the recognition of the opening direction of the current first detection image by the user's first human eyes.
在本申请的一些实施例中,为了能够使得用户正确地输入第一识别反馈,VR眼镜可以在接收到用户的第一识别反馈之前,引导用户识别第一检测图像,并输入第一识别反馈。比如,VR眼镜可以通过语音提示,或者在显示的虚拟场景中通过文字提示,或者通过其他方式(如震动等)引导用户识别第一检测图像,并输入第一识别反馈。In some embodiments of the present application, in order to enable the user to correctly input the first recognition feedback, the VR glasses may guide the user to recognize the first detection image and input the first recognition feedback before receiving the user's first recognition feedback. For example, the VR glasses can guide the user to recognize the first detection image and input the first recognition feedback through voice prompts, text prompts in the displayed virtual scene, or other methods (such as vibration, etc.).
S1803、根据第一识别反馈,确定用户是否能够识别第一检测图像。S1803. Determine whether the user can recognize the first detection image according to the first recognition feedback.
在本申请实施例中,VR眼镜可以根据第一识别反馈,与第一检测图像的实际开口方向是否一致,判断后续执行动作。示例性的,在第一识别反馈与第一检测图像的实际开口方向不一致时,则表明第一人眼无法能够看清当前具有第一视分值的第一显示图像。那么VR眼镜就可以继续执行以下S1804。对应的,VR眼镜可以在第一识别反馈与第一检测图像的实际开口方向一致时,则表明第一人眼能够看清当前具有第一视分值的第一显示图像。那么VR眼镜就可以继续执行以下S1805。In the embodiment of the present application, the VR glasses can determine whether the subsequent action is performed according to whether the first recognition feedback is consistent with the actual opening direction of the first detection image. Exemplarily, when the first recognition feedback is inconsistent with the actual opening direction of the first detection image, it indicates that the first human eye cannot clearly see the first display image currently having the first visual score. Then the VR glasses can continue to execute the following S1804. Correspondingly, when the first recognition feedback is consistent with the actual opening direction of the first detection image, the VR glasses indicate that the first human eye can clearly see the first display image currently having the first visual score. Then the VR glasses can continue to perform the following S1805.
It should be noted that in some embodiments of this application, when the VR glasses determine that the first recognition feedback is inconsistent with the actual opening direction of the first detection image, they may show the user another detection image with the first visual score, whose opening direction may differ from that of the first detection image, so that the user can again judge the opening direction at the first visual score. If, after this has been repeated a preset number of times (for example three times), the recognition feedback input by the user is still inconsistent with the opening direction of the detection image, the VR glasses can conclude that the user cannot clearly see an image corresponding to the first visual score, and then perform S1804. This improves the accuracy with which the VR glasses judge the user's vision.
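One way to implement a single trial with this retry behaviour is sketched below. show_image() and ask_user() are hypothetical callbacks standing in for the display path and the user-input path (remote control, voice or gesture); the four opening directions and the default of three attempts are assumptions of the example.

```python
import random

DIRECTIONS = ["up", "down", "left", "right"]

def run_trial(score_arcmin, show_image, ask_user, max_attempts=3):
    """Show a C-shaped optotype at the given visual score up to max_attempts
    times, re-randomising the opening direction each time, and return True as
    soon as the user reports the direction correctly, otherwise False."""
    for _ in range(max_attempts):
        direction = random.choice(DIRECTIONS)
        show_image(score_arcmin, direction)
        if ask_user() == direction:
            return True
    return False
```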
S1804、根据第一视分值,确定第一人眼的视力。S1804. Determine the visual acuity of the first human eye according to the first visual acuity score.
可以理解的是,在第一人眼无法辨认第一检测图像的开口方向的情况下,那么VR眼镜就可以根据第一视分值,确定第一人眼的视力。It can be understood that if the first human eye cannot identify the opening direction of the first detection image, then the VR glasses can determine the visual acuity of the first human eye according to the first visual score.
比如,结合表1。在一些实施例中,VR眼镜可以根据表1中的对应关系,确定第一视分值对应的视力。那么该视力就可以是第一人眼的视力。比如,如表1所示,在第一视分值为7.947′时,则VR眼镜可以确定第一人眼的视力为4.1(0.12)。又如,在第一视分值为1.259′时,则VR眼镜可以确定第一人眼的视力为4.9(0.8)。For example, combine Table 1. In some embodiments, the VR glasses can determine the visual acuity corresponding to the first visual score according to the correspondence in Table 1. The vision may then be that of the first human eye. For example, as shown in Table 1, when the first visual score is 7.947', the VR glasses can determine that the visual acuity of the first human eye is 4.1 (0.12). For another example, when the first visual score is 1.259', the VR glasses can determine that the visual acuity of the first human eye is 4.9 (0.8).
这样,VR眼镜就可以实现对第一人眼的图像识别能力的确定。In this way, the VR glasses can realize the determination of the image recognition ability of the first human eye.
S1805、展示第二视分值对应的第二检测图像。S1805. Display the second detection image corresponding to the second visual score.
其中,第二视分值对应的第二检测图像的尺寸,可以小于第一视分值对应的第一检测图像的尺寸。从而达到梯度缩小检测用户示例的目的。Wherein, the size of the second detection image corresponding to the second visual score may be smaller than the size of the first detection image corresponding to the first visual score. So as to achieve the purpose of gradient reduction to detect user examples.
For example, with reference to Table 1, take the first visual score as 2.512′. If, in S1803, the VR glasses determine that the letter opening direction indicated by the user's first recognition feedback is consistent with the actual opening direction of the first detection image, the visual score of the second detection image shown on the first display screen may be 1.996′. In other words, the second detection image tests vision more strictly than the first detection image, so it can be further determined whether a user who can see the first detection image clearly can also see a smaller image (such as the second detection image) clearly.
在本步骤中,VR眼镜确定第二检测图像的方法与确定第一检测图像的方法类似。比如,VR可以根据PPD以及第二视分值,计算确定第二检测图像的尺寸,进而根据改尺寸在第一显示屏上显示具有第二视分值的第二检测图像。其具体实现类似,此处不再赘述。In this step, the method for the VR glasses to determine the second detection image is similar to the method for determining the first detection image. For example, VR can calculate and determine the size of the second detection image according to the PPD and the second visual score, and then display the second detection image with the second visual score on the first display screen according to the changed size. The specific implementation thereof is similar and will not be repeated here.
S1806、接收用户的第二识别反馈。S1806. Receive second identification feedback from the user.
其中,第二识别反馈可以是用户在使用第一人眼观察第二检测图像之后,向VR眼镜输入的用于指示第二检测图像的开口方向的指示。其具体执行过程以及实现方式可以参考第一识别反馈。Wherein, the second recognition feedback may be an instruction input by the user into the VR glasses for indicating the opening direction of the second detection image after observing the second detection image with the first human eyes. For the specific execution process and implementation manner, please refer to the first recognition feedback.
S1807、在第二识别反馈与第二检测图像的实际开口方向不同的情况下,根据第二 视分值,确定第一人眼的视力。S1807. In the case that the second identification feedback is different from the actual opening direction of the second detection image, determine the visual acuity of the first human eye according to the second visual score.
该步骤的具体实现类似于上述S1804,此处不再赘述。The specific implementation of this step is similar to the above S1804, and will not be repeated here.
需要说明的是,如图18所示的示例中,是以用户能够识别出第一检测图像的开口方向,而不能识别出第二检测图像的开口方向为例进行说明的。也就是说,VR眼镜可以向用户提供对应视分值梯度下降的检测图像,供用户判断,并根据无法清楚地识别检测图像开口方向时,对应的最大视分值,确定用户人眼的视力。It should be noted that, in the example shown in FIG. 18 , the user can recognize the opening direction of the first detection image but cannot recognize the opening direction of the second detection image as an example. That is to say, the VR glasses can provide the user with a detection image corresponding to a gradient drop of the visual score for the user to judge, and determine the visual acuity of the user's human eye based on the corresponding maximum visual score when the opening direction of the detection image cannot be clearly identified.
As another possible implementation, still referring to FIG. 18, after performing S1806, if the second recognition feedback matches the actual opening direction of the second detection image (that is, the user can still resolve it), the VR glasses may loop back to S1801 along the dashed-line path and continue to show the user further detection images. It can be understood that, in this example, the opening size of each detection image shown next may be smaller than that of the second detection image, further increasing the difficulty of identification, until the visual score that the user's eye can no longer resolve is found.
由此,VR眼镜就可以通过确定用户人眼的最大能够分辨的视分值,确定该人眼的度数,进而明确该人眼对图像的识别能力。Therefore, the VR glasses can determine the degree of the human eye by determining the maximum resolvable visual score of the user's human eye, and then clarify the human eye's ability to recognize images.
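Combining the retry logic sketched earlier with a descending sequence of visual scores gives a loop such as the one below. The can_resolve callback is assumed to run one trial (for example run_trial() above with its display and input callbacks already bound) and return True on a correct answer; mapping the first failed score to an acuity value follows S1804 and S1807 as described, but the exact structure is an illustrative assumption.

```python
def measure_eye(scores_arcmin, can_resolve, acuity_from_score):
    """Present visual scores from the largest (easiest) to the smallest and
    stop at the first score the user fails to identify, returning the acuity
    corresponding to that score."""
    for score in sorted(scores_arcmin, reverse=True):
        if not can_resolve(score):
            return acuity_from_score(score)
    return acuity_from_score(min(scores_arcmin))  # the user resolved every presented score
```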
在本申请实施例的具体实现中,VR眼镜可以通过控制显示在显示屏上的检测图像的数量,使得在获取用户人眼对于图像的识别能力的同时,提升趣味性,继而达到提升用户体验的效果。In the specific implementation of the embodiment of the present application, the VR glasses can control the number of detected images displayed on the display screen, so that while obtaining the recognition ability of the user's human eyes for the image, the interest is improved, and then the user experience is improved. Effect.
示例性的,结合图18所示的方案,参考图24。第一显示屏可以在执行S1801,即展示第一检测图像时,显示如图24中的2410所示的界面。可以看到,在本示例中,该界面上可以显示有多个(如5个)对应第一视分值的检测图像(如C字形字母)。不同的C字形字母可以具有不同的开口方向,由此使得用户可以依次从左向右进行判断。比如,在用户无法准确地识别界面2410中最左边的C字形字母的开口方向的情况下,VR眼镜可以在第一显示屏上显示如2420所示的界面。以便于用户继续判断具有第一视分值的C字形字母(如2402所示区域内的字母)的开口方向,进而使得VR眼镜能够更加准确地判断第一人眼是否能够识别第一视分值。可以看到,在该界面2420上,用户已经识别的字母2401可以隐藏显示或者不显示。由此实现视觉上的消除的效果。这样用户就可以继续判断界面上最左边的一个C字形字母的开口方向。对应的,在用户输入的第一识别反馈所指示的开口方向与C字形字母实际的开口方向一致的情况下,VR眼镜可以在第一显示屏上显示如2430所示的界面。以便于向用户提供更小的视分值进行判断。可以看到,在该界面2430上,用户已经判断过的字母2401所在位置为空,即不显示该位置的字母或者隐藏显示该位置的字母,由此实现消除的视觉效果。此后,用户就可以继续通过第一人眼,判断2403所示区域中,显示的最左边的字母的开口方向,以便于VR眼镜确定用户是否能够识别具有更小视分值的图像。Exemplarily, in combination with the solution shown in FIG. 18 , refer to FIG. 24 . The first display screen may display an interface as shown at 2410 in FIG. 24 when performing S1801, that is, displaying the first detection image. It can be seen that, in this example, multiple (eg, 5) detection images (eg, C-shaped letters) corresponding to the first visual score may be displayed on the interface. Different C-shaped letters may have different opening directions, so that users can make judgments from left to right in turn. For example, in the case that the user cannot accurately identify the opening direction of the leftmost C-shaped letter in the interface 2410, the VR glasses may display an interface as shown in 2420 on the first display screen. In order to facilitate the user to continue to judge the opening direction of the C-shaped letter (such as the letters in the area indicated by 2402) with the first visual score, and thus enable the VR glasses to more accurately judge whether the first human eye can recognize the first visual score . It can be seen that on the interface 2420, the letters 2401 that the user has recognized can be hidden or not displayed. A visual elimination effect is thereby achieved. In this way, the user can continue to judge the opening direction of the leftmost C-shaped letter on the interface. Correspondingly, when the opening direction indicated by the first recognition feedback input by the user is consistent with the actual opening direction of the C-shaped letter, the VR glasses may display an interface as shown at 2430 on the first display screen. In order to provide users with a smaller visual score for judgment. It can be seen that on the interface 2430, the position of the letter 2401 that the user has judged is empty, that is, the letter at this position is not displayed or the letter at this position is hidden, thereby achieving the visual effect of elimination. After that, the user can continue to judge the opening direction of the leftmost letter displayed in the area shown in 2403 through the first human eye, so that the VR glasses can determine whether the user can recognize images with smaller visual scores.
由此,通过如图24的示例所提供的具体的实现方式,VR眼镜可以在显示检测图像的过程中,通过隐藏显示或者不显示已经判断的字母所在位置的图像,达到视觉上的消除的效果,从而提升整个检测过程的趣味性,进而提升用户体验。同时,可以更方便地定位到未判断的字母,提高检测的效率。Therefore, through the specific implementation method provided by the example in Figure 24, the VR glasses can hide or not display the image of the position of the letter that has been judged in the process of displaying the detection image, so as to achieve the effect of visual elimination , so as to improve the interest of the whole detection process, thereby enhancing the user experience. At the same time, it is more convenient to locate unjudged letters and improve the efficiency of detection.
另外,上述示例中,是以在一个界面上(如第一显示屏的界面上)同时仅向用户展示具有一个视分值的一个或多个检测图像为例进行说明的。在本申请的另一些实施例中,VR眼镜还可以在第一显示屏上向第一人眼展示具有不同视分值的多个检测图像, 并通过图像或者语音或者震动等方式的引导,使得用户能够按照一定的流程分别判断各个视分值对应图像的开口方向。由此VR眼镜也能够获取用户能够识别的最小视分值。In addition, in the above example, the description is made by showing only one or more detection images with one visual score to the user at the same time on one interface (such as the interface of the first display screen). In other embodiments of the present application, the VR glasses can also display multiple detection images with different visual scores to the first human eye on the first display screen, and guide them through images, voices, or vibrations, so that The user can determine the opening direction of the image corresponding to each visual score according to a certain process. In this way, the VR glasses can also obtain the minimum visual score that can be recognized by the user.
示例性的,结合图25。VR眼镜可以根据上述图18所示的方案中,各个视分值对应的字母尺寸的确定方法,确定不同视分值对应的字母的尺寸大小。并在第一显示屏上,一次性向用户展示包括如图25所示的具有不同视分值的多个检测图像的完整图像或者部分图像。VR眼镜可以提示用户判断该图像中,具体某一行的图像,以便于引导用户完成视力检测。比如,如图25所示,VR眼镜可以在需要用户判断的图像所在行附近,显示如2501所示的提示符号(如箭头),并通过语音或者文字提示等方式,引导用户对该行显示的检测图像进行识别,并输入对应的识别反馈。作为一种示例,VR眼镜可以通过语音提示的方式,引导用户:“请判断箭头所指行的字母中,左侧第1个字母的开口方向”。由此使得用户可以对箭头所指行的左侧第1个字母的开口方向进行判断。在用户输入对该字母的开口方向的识别反馈之后,在该识别反馈指示用户能够正确地识别该行字母的情况下,VR眼镜可以将提示符号向下移动一行(如显示2502),同时隐藏2501或者不显示2501。进而引导用户对具有更小视分值的字母的开口方向进行判断。以此类推,直至VR眼镜确定用户能够识别的最小视分值。For example, refer to Figure 25. The VR glasses can determine the size of letters corresponding to different visual scores according to the method for determining the letter size corresponding to each visual score in the scheme shown in FIG. 18 above. And on the first display screen, a complete image or a partial image including a plurality of detected images with different visual scores as shown in FIG. 25 is displayed to the user at one time. The VR glasses can prompt the user to judge the image of a specific row in the image, so as to guide the user to complete the vision test. For example, as shown in Figure 25, the VR glasses can display a prompt symbol (such as an arrow) as shown in 2501 near the line where the image that needs to be judged by the user is located, and guide the user to the line displayed on the line through voice or text prompts, etc. Detect the image for recognition, and input the corresponding recognition feedback. As an example, the VR glasses can guide the user through voice prompts: "Please judge the opening direction of the first letter on the left among the letters in the row pointed by the arrow". Thus, the user can judge the opening direction of the first letter on the left side of the line indicated by the arrow. After the user inputs the recognition feedback of the opening direction of the letter, in the case that the recognition feedback indicates that the user can correctly recognize the row of letters, the VR glasses can move the prompt symbol down one row (such as display 2502), and hide 2501 at the same time Or do not display 2501. Further, the user is guided to judge the opening direction of the letter with a smaller visual score. And so on, until the VR glasses determine the minimum visual score that the user can recognize.
It can be understood from the descriptions of Figures 18 to 25 that, in the above embodiments, the VR glasses can provide detection images of different sizes, corresponding to different visual scores, on the same virtual image plane. In other words, in this visual-score-based detection solution, the VR glasses can test the user's vision without adjusting the position of the virtual image plane or the position of the imaging plane formed after the light enters the human eye. Moreover, because the display size of the detection image in this visual-score-based display solution corresponds to the current position of the virtual image plane (for example, according to the PDD), there is no need to keep the distance between the virtual image plane and the human eye at 5 m.
In some other embodiments of this application, the VR glasses may also use the zoom module to adjust the distance between the virtual image plane and the human eye to exactly 5 m. The VR glasses may display, on the first display screen, a virtual image of the same size as the international eye chart, and guide the user to identify the smallest letter opening that can be recognized in the virtual scene, and then determine the user's vision accordingly. This process is equivalent to simulating, in the virtual scene, an optometry process in a real environment, so that the user's vision can be determined without imposing any requirements on the physical site.
After completing the test of one eye, the VR glasses can use a similar solution to test the other eye, thereby obtaining the image-recognition ability of both of the user's eyes.
It should be noted that the solutions provided in Figures 18 to 25 above can test the vision of both of the user's eyes (that is, complete the optometry), and thereby obtain the degree of myopia of each eye. Obviously, this solution enables the user to perform optometry through the VR glasses, without having to visit a professional optometry institution to learn the degree of myopia of both eyes.
In addition, in the above example, the user can complete the myopia test under the guidance of the VR glasses. In some other embodiments of this application, as shown in Figure 26, the VR glasses can also interact with a cloud (such as a server), so that the cloud can provide the user with more professional and scientific optometry guidance through the VR glasses. The cloud can send instructions to the VR glasses to control the VR glasses to perform optometry on the user. In some implementations, the cloud instructions may be uploaded by other terminals interacting with the cloud; such an instruction may be input by a professional optometrist or by other professionals, or may be determined by the cloud or another terminal itself according to a preset execution scheme.
In this embodiment of this application, after obtaining the user's image-recognition ability (for example, the degree of myopia of the user's eyes), the VR glasses can adjust the content displayed to the user accordingly, so that a nearsighted user can experience a clear virtual display through the VR glasses without wearing glasses.
For example, the VR glasses may obtain the degree of myopia of both of the user's eyes through any of the solutions in Figures 18 to 28 above, or any of the solutions in the foregoing description.
The following describes, with reference to the accompanying drawings, a specific solution in which the VR glasses adaptively adjust the displayed content according to the user's degree of myopia.
As a possible implementation, Table 2 below shows an example of the correspondence between the degree of myopia of the human eye and the optical power of the VR glasses. This correspondence may be preset in the VR glasses, or may be obtained from the cloud when the VR glasses need to use it.
Table 2

Vision    Degree of myopia    Optical power
4.0       650                 -6.5D
4.1       600                 -6D
4.2       550                 -5.5D
4.3       500                 -5D
4.4       450                 -4.5D
4.5       400                 -4D
4.6       325                 -3.25D
4.7       250                 -2.5D
4.8       125                 -1.25D
4.9       50                  -0.5D
According to Table 2, the VR glasses can determine the corresponding optical power from the user's visual acuity. For example, when the visual acuity of the user's first human eye is 4.6, the VR glasses can determine the optical power to be -3.25D. For another example, when the visual acuity of the user's first human eye is 4.8, the VR glasses can determine the optical power to be -1.25D, and so on. It should be noted that the correspondence shown in Table 2 is only an example and does not list every possible correspondence between optical power and degree of myopia. In other implementations of the embodiments of this application, Table 2 may include more or fewer correspondences.
In this example, the VR glasses may determine, according to a preset policy, the adjustment mechanism corresponding to the optical power. For example, take the case in which the function of the zoom module is implemented by adjusting a mechanical structure. The VR glasses may be preset with correspondences between different optical powers and the relative positions of the lenses in the zoom module. Thus, after the optical power is determined, the VR glasses can control the zoom module to move the lenses to the relative positions corresponding to that optical power, thereby realizing the corresponding adjustment of the optical power.
In this way, by adjusting the optical power, the VR glasses can bring the distance from the virtual image plane of the display screen to the human eye into an appropriate range, so that the virtual image plane can form a clear image in the user's eye. It should be noted that, because the VR glasses control the zoom module to adjust the optical power before the virtual image plane is imaged in the human eye, the distance from the virtual image plane to the image plane formed in the human eye (referred to as the virtual image distance) matches the user's current degree of myopia. As a result, the image on the virtual image plane can be clearly imaged on the retina, so that the user can see the images displayed by the VR glasses clearly with the naked eye. A minimal sketch of this lookup-and-adjust logic is given after this paragraph.
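As an illustration only, the sketch below shows how the Table 2 lookup and the power-to-lens-position adjustment could be wired together. The ZoomModule class, its lens-position values, and the default of applying no correction for acuities outside the table are assumptions made for this sketch, not mechanisms defined in this application.

```python
# Minimal sketch: map measured visual acuity to optical power (Table 2), then drive
# a (hypothetical) zoom module to preset lens positions for that power.

ACUITY_TO_POWER = {          # from Table 2; real devices may hold more or fewer entries
    4.0: -6.5, 4.1: -6.0, 4.2: -5.5, 4.3: -5.0, 4.4: -4.5,
    4.5: -4.0, 4.6: -3.25, 4.7: -2.5, 4.8: -1.25, 4.9: -0.5,
}

class ZoomModule:
    """Illustrative stand-in for a mechanically adjustable zoom module."""
    def __init__(self, power_to_lens_positions):
        # preset correspondence: optical power (D) -> relative lens positions (mm)
        self._positions = power_to_lens_positions
        self.current_power = 0.0

    def set_power(self, power):
        lens_positions = self._positions[power]   # preset by the device, assumed here
        # ... a real module would drive motors to 'lens_positions' ...
        self.current_power = power
        return lens_positions

def adjust_for_user(acuity, zoom):
    power = ACUITY_TO_POWER.get(acuity, 0.0)      # default: no correction
    return power, zoom.set_power(power)

# Example usage with made-up lens positions:
zoom = ZoomModule({p: (p * -1.0, p * -2.0)
                   for p in list(ACUITY_TO_POWER.values()) + [0.0]})
print(adjust_for_user(4.6, zoom))                 # -> (-3.25, ...) per Table 2
```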
To enable those skilled in the art to understand the solutions provided in the embodiments of this application more clearly, the following takes as an example, with reference to the accompanying drawings, the case in which the VR glasses obtain the user's degree of myopia according to the solution shown in Figure 18 and adjust the content displayed to the user according to that degree of myopia. It should be noted that the VR glasses may execute the following solution for one of the user's two eyes to adjust the image displayed to that eye; the execution strategy for the other eye is similar.
It can be understood that when the human eye has no myopia or only a very small degree of myopia, parallel light entering the eye converges smoothly onto the retina, and when the virtual image plane is at the farthest position (that is, 0D), the light entering the eye is closest to parallel light. Therefore, in this embodiment of this application, the virtual image distance corresponding to 0D can be taken as the reference, detection images of different visual scores can be shown to the user, and the virtual image distance can then be adjusted, according to the user's recognition results, to a position matching the human eye. This achieves the effect of providing the human eye with a virtual image that matches its degree of myopia. For example, refer to Figure 27, which is a schematic flowchart of a virtual display solution provided by an embodiment of this application. As shown in the figure, the solution may include the following steps.
S2701. Control the zoom module to adjust the virtual image plane to a first position. For example, the first position may be the 0D position.
The 0D position may be the farthest virtual image distance that the VR glasses can reach by controlling the zoom module. In some embodiments, the processor may control the focal length of the zoom module so that the virtual image plane is adjusted to the 0D position.
It can be understood that, in the subsequent adjustment process, the VR glasses may need to control the zoom module to reduce the virtual image distance so that the virtual image distance can match the user's degree of myopia.
S2702. Display a detection image 1 corresponding to a first visual score. For example, the first visual score may be 10′, in which case detection image 1 is a detection image of the size corresponding to 10′.
In this example, for the determination of the size of detection image 1 corresponding to the visual score of 10′ and its display, reference may be made to the determination and display of the first detection image in the solution shown in Figure 18 above; details are not repeated here.
It should be noted that detection image 1 corresponding to 10′ may be the detection image with the largest opening size in this solution, and is therefore the easiest for the user's eye to recognize. Through the following steps, the VR glasses can gradually reduce the opening size of the detection image, increasing the recognition difficulty step by step, so that the VR glasses can determine the user's degree of myopia more accurately.
It should be noted that, in this embodiment of this application, there is no strict order between S2701 and S2702. For example, in some embodiments, S2701 may be performed before S2702, that is, the VR glasses may first adjust the position of the virtual image plane through the processor and then display detection image 1 on that virtual image plane. In other embodiments, S2701 may be performed at the same time as S2702, that is, the VR glasses may display detection image 1 at the corresponding virtual image plane position while adjusting the position of the virtual image plane. In still other embodiments, S2701 may be performed after S2702, that is, the VR glasses may determine the size of detection image 1 according to the first visual score, display it on the current virtual image plane, and then adjust the position of the virtual image plane through the processor, thereby achieving the effect of displaying detection image 1 at the 0D position.
S2703A. Determine whether the user can recognize detection image 1.
If the user can recognize the opening direction of detection image 1, perform S2704; if the user cannot recognize the opening direction of detection image 1, perform S2703B.
It can be understood that, with reference to S1802 in Figure 18, the user can input recognition feedback corresponding to detection image 1 to the VR glasses, so that the VR glasses know whether the user can recognize the opening direction of detection image 1.
S2703B. Control the zoom module to adjust the virtual image plane to a second position. For example, the second position may be the position corresponding to -7D.
If the user cannot recognize the opening direction of detection image 1, this indicates that the user's degree of myopia is relatively high. The VR glasses can then control the zoom module to adjust the virtual image distance so that the virtual image plane is at the position corresponding to -7D. In this way, the virtual image plane of the display screen is imaged in the human eye at as close a position as possible, matching the user's degree of myopia, so that the user can see the image on the virtual image plane clearly with the naked eye.
S2704. Display a detection image 2 corresponding to a second visual score. For example, the second visual score may be 7.947′, in which case detection image 2 is a detection image of the size corresponding to 7.947′.
After the VR glasses determine that the user can recognize a detection image with a larger visual score (such as detection image 1), the visual score can be appropriately reduced and detection image 2 displayed, in order to determine the user's degree of myopia.
S2705A. Determine whether the user can recognize detection image 2.
If the user can recognize the opening direction of detection image 2, perform S2706; if not, perform S2705B.
S2705B. Control the zoom module to adjust the virtual image plane to a third position. For example, the third position may be the position corresponding to -6D.
S2706. Display a detection image 3 corresponding to a third visual score. For example, the third visual score may be 5′, in which case detection image 3 is a detection image of the size corresponding to 5′.
S2707A. Determine whether the user can recognize detection image 3.
If the user can recognize the opening direction of detection image 3, perform S2708; if not, perform S2707B.
S2707B. Control the zoom module to adjust the virtual image plane to a fourth position. For example, the fourth position may be the position corresponding to -5D.
S2708. Display a detection image 4 corresponding to a fourth visual score. For example, the fourth visual score may be 3.2′, in which case detection image 4 is a detection image of the size corresponding to 3.2′.
S2709A. Determine whether the user can recognize detection image 4.
If the user can recognize the opening direction of detection image 4, perform S2710; if not, perform S2709B.
S2709B. Control the zoom module to adjust the virtual image plane to a fifth position. For example, the fifth position may be the position corresponding to -4D.
S2710. Display a detection image 5 corresponding to a fifth visual score. For example, the fifth visual score may be 2′, in which case detection image 5 is a detection image of the size corresponding to 2′.
S2711A. Determine whether the user can recognize detection image 5.
If the user can recognize the opening direction of detection image 5, perform S2712; if not, perform S2711B.
S2711B. Control the zoom module to adjust the virtual image plane to a sixth position. For example, the sixth position may be the position corresponding to -2.5D.
S2712. Display a detection image 6 corresponding to a sixth visual score. For example, the sixth visual score may be 1.5′, in which case detection image 6 is a detection image of the size corresponding to 1.5′.
S2713A. Determine whether the user can recognize detection image 6.
If the user can recognize the opening direction of detection image 6, perform S2714; if the user cannot recognize the opening direction of detection image 6, perform S2713B.
S2713B. Control the zoom module to adjust the virtual image plane to a seventh position. For example, the seventh position may be the position corresponding to -1D.
S2714. Control the zoom module to adjust the virtual image plane to the first position. With reference to S2701, the first position may be the position corresponding to 0D.
After the above procedure, if the user can still recognize the opening direction of detection image 6, which has the smallest visual score, this indicates that the user's vision is good and that the image on the virtual image plane can be clearly imaged on the retina under normal display, without adjusting the optical power of the system. Therefore, the VR glasses can perform S2714 to keep the virtual image plane at the 0D position, that is, at the position with the farthest virtual image distance, thereby providing a clear virtual display for the human eye.
It should be noted that the above description takes as an example the case in which the size of the detection images shown to the user decreases step by step. In other embodiments of this application, the VR glasses may also show detection images to the user in sizes that increase step by step. In still other embodiments, the VR glasses may start from an intermediate size and use a bisection method, showing the user detection images larger and smaller than that starting intermediate size, thereby determining the user's vision and adjusting the position of the virtual image plane accordingly.
It can be understood that, in the solution shown in Figure 27, the optical-power position of the virtual image plane corresponding to each visual score can be obtained from Table 2 above. In different implementations, the gradient and the number of correspondences in Table 2 may differ, and the solution shown in Figure 27 implemented accordingly may also differ. A minimal sketch of this staircase procedure is given below.
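The following sketch restates the Figure 27 staircase as code. The pairing of each visual score with a fallback diopter position follows the example values in steps S2701 to S2714 above; the set_image_plane, show_detection_image, and user_recognized callables are hypothetical stand-ins for the device's zoom control, display, and recognition-feedback interfaces.

```python
# Minimal sketch of the Figure 27 staircase: show progressively smaller detection
# images at 0D; on the first failure, move the virtual image plane to the diopter
# position paired with that step. The callables are illustrative stand-ins.

STEPS = [
    # (visual score in arcminutes, image-plane position in diopters if this step fails)
    (10.0,  -7.0),
    (7.947, -6.0),
    (5.0,   -5.0),
    (3.2,   -4.0),
    (2.0,   -2.5),
    (1.5,   -1.0),
]

def staircase(set_image_plane, show_detection_image, user_recognized):
    set_image_plane(0.0)                      # S2701: farthest virtual image distance
    for score, fallback_diopters in STEPS:
        show_detection_image(score)           # S2702/S2704/...: image sized for this score
        if not user_recognized(score):        # S2703A/S2705A/...: recognition feedback
            set_image_plane(fallback_diopters)
            return fallback_diopters
    set_image_plane(0.0)                      # S2714: even the smallest image was recognized
    return 0.0

# Example usage with console stand-ins:
if __name__ == "__main__":
    plane = staircase(
        set_image_plane=lambda d: print(f"image plane -> {d}D"),
        show_detection_image=lambda s: print(f"showing {s}' detection image"),
        user_recognized=lambda s: input(f"recognized {s}' image? (y/n) ").lower() == "y",
    )
    print("final image-plane position:", plane, "D")
```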
In this way, the VR glasses can adaptively adjust the optical power according to the degree of myopia of the user's eye, so that even a nearsighted user can use the virtual display function provided by the VR glasses with the naked eye, without wearing glasses.
With reference to the foregoing description of the eye's accommodation function, the human eye may, when recognizing images with different visual scores, try its best to recognize the opening direction of the image through its own accommodation. In some embodiments of this application, to avoid inaccurate vision detection caused by the ciliary muscle over-adjusting the crystalline lens while the user recognizes the opening direction of the detection image, a red-green balance solution may be introduced on the basis of the above solution.
For example, refer to Figure 28. When light of different wavelengths is imaged through the human eye (for example, through the crystalline lens), the refraction paths differ because the wavelengths differ. As shown in Figure 28, when white light converges on the retina, the convergence point of green light may lie between the crystalline lens and the retina, while the convergence point of red light may lie behind the retina.
In this example, after the VR glasses determine the user's degree of myopia, the virtual image distance can be further fine-tuned in combination with the red-green balance solution, so that the virtual image is presented to the human eye at a virtual image distance matching the user's true degree of myopia.
For example, with reference to Figure 27, take the case in which the VR glasses determine, after performing S2703A, that the user cannot recognize the opening direction of detection image 1. As shown in Figure 27, the VR glasses can perform S2703B, that is, control the zoom module to adjust the virtual image plane to the -7D position. In combination with the red-green balance solution, after adjusting the virtual image plane to the position corresponding to -7D, the VR glasses can continue to control the zoom module and fine-tune near the -7D position (for example, between -6.5D and -7D) to find the position at which red light and green light are imaged on the retina with the same sharpness. The VR glasses can take that image-plane position as the fine-tuning result and use it when presenting virtual images to the user's current eye, so as to show the user a clear virtual image.
Similarly, when performing S2705B, the VR glasses can, after controlling the zoom module to adjust the virtual image plane to the -6D position, fine-tune between -6D and -6.5D and take, as the fine-tuning result, the position at which red light and green light are imaged on the retina with the same sharpness. Likewise, when performing S2707B, the VR glasses can fine-tune between -5D and -6D after adjusting the virtual image plane to the -5D position; when performing S2709B, fine-tune between -4D and -5D after adjusting to the -4D position; when performing S2711B, fine-tune between -2.5D and -4D after adjusting to the -2.5D position; and when performing S2713B, fine-tune between -1D and -2.5D after adjusting to the -1D position, in each case taking the position at which red light and green light are imaged on the retina with the same sharpness as the fine-tuning result.
In this way, through the coarse adjustment shown in Figure 27 combined with the fine adjustment of the red-green balance solution, a result matching the true degree of myopia of the user's eye can be obtained. Displaying according to this result prevents the user's eye from being in a state of over-accommodation, thereby avoiding problems such as visual fatigue. A minimal sketch of such a fine-tuning sweep is given below.
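One possible way to drive such a red-green fine-tuning sweep is sketched below. The fine-tuning intervals follow the ranges listed above; the 0.25D step, the sweep direction, and the red_green_balanced callable are assumptions made for this sketch rather than values or interfaces defined in this application.

```python
# Minimal sketch of red-green fine tuning: sweep the image plane inside the interval
# paired with the coarse result and keep the position where the user reports red and
# green targets as equally sharp. Step size and callables are illustrative.

FINE_RANGES = {              # coarse position (D) -> fine-tuning interval (D)
    -7.0: (-7.0, -6.5),
    -6.0: (-6.5, -6.0),
    -5.0: (-6.0, -5.0),
    -4.0: (-5.0, -4.0),
    -2.5: (-4.0, -2.5),
    -1.0: (-2.5, -1.0),
}

def fine_tune(coarse_diopters, set_image_plane, red_green_balanced, step=0.25):
    lo, hi = FINE_RANGES.get(coarse_diopters, (coarse_diopters, coarse_diopters))
    d = lo
    while d <= hi + 1e-9:
        set_image_plane(d)
        if red_green_balanced():     # user reports red and green equally sharp
            return d                 # use this position for subsequent display
        d += step
    return coarse_diopters           # fall back to the coarse result

# Example usage with console stand-ins:
if __name__ == "__main__":
    result = fine_tune(
        -7.0,
        set_image_plane=lambda d: print(f"image plane -> {d:.2f}D"),
        red_green_balanced=lambda: input("red and green equally sharp? (y/n) ") == "y",
    )
    print("fine-tuned image-plane position:", result, "D")
```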
It can be understood that the above descriptions all take the degree of myopia as the indicator of the user's ability to recognize graphics. For some users, the eye may have astigmatism in addition to myopia, and different degrees of astigmatism may correspond to different astigmatism axis directions.
For example, astigmatism is described with reference to Figure 29 and Figure 30. As shown in Figure 29, for a user with astigmatism, parallel light entering the eye at different angles is refracted by the crystalline lens and may converge at different positions in the eye. For example, as shown in (a) of Figure 29, parallel rays in the XOZ plane can converge on the retina after entering the eye, while parallel rays in the YOZ plane may no longer converge on the retina after entering the eye (as shown in (b) of Figure 29, they converge between the crystalline lens and the retina). As a result, images at some angles cannot be clearly imaged by the eye. For example, with reference to Figure 30: as shown in (a) of Figure 30, when the eye has no astigmatism, incident light at every angle is imaged uniformly in the eye, that is, the imaging of light at all angles is approximately the same. For an eye with astigmatism, however, light at some angles may not be imaged clearly. For example, referring to (b) of Figure 30, in the current coordinate system, light in the vertical direction is imaged well, while the closer the angle of the light is to horizontal, the poorer its imaging sharpness.
In a real scene, or in the virtual scene provided by the VR glasses, the light entering the eye generally does not come from only one angle, which causes blur when the user observes an object and degrades the user experience. To enable users with astigmatism to use the virtual display function provided by the VR glasses with the naked eye, an embodiment of this application further provides a solution that can determine the astigmatism degree of a user with astigmatism, and then correct the image shown to the user according to that astigmatism degree, so that a user with astigmatism can also use the virtual display function of the VR glasses with the naked eye.
As an example, this solution can be implemented by the virtual display device provided in the embodiments of this application. For example, take the case in which the virtual display device is a pair of VR glasses with the composition shown in Figure 10. In this example, the zoom module in the VR glasses can produce cylindrical optical power and has a rotation function. When the zoom module rotates, it can rotate about the center of the corresponding line of sight (for example, about the center of the corresponding lens barrel), so that the direction of the cylindrical power of the optical system formed by the zoom module and the eyepiece coincides with the user's astigmatism axis. In some embodiments, if the eye also has a degree of myopia, the optical power of the zoom module may be the optical power matching that degree of myopia. For the specific matching and adjustment methods, reference may be made to the solutions provided in the above embodiments; details are not repeated here.
Thus, when the virtual display device provides a virtual display to the user, the light entering the user's eye can be light that coincides with the astigmatism axis, so that the light can be clearly imaged in the eye, avoiding the unclear imaging caused by astigmatism. This allows users with astigmatism to use the virtual display solution provided by the VR glasses with the naked eye.
Similar to the foregoing description of the myopia detection process, in this embodiment of this application the virtual display device can adjust the axis of the incident light according to the astigmatism degree of the eye so as to avoid unclear imaging caused by astigmatism. In some other embodiments of this application, with reference to the foregoing description of the remote optometry solution, the virtual display device provided in the embodiments of this application can also be used to measure the astigmatism degree.
For example, when measuring the astigmatism degree of the eye, the VR glasses may take the current degree of myopia of that eye into account. For example, take the degree of myopia as M (in m⁻¹). The VR glasses may control the zoom module so that the virtual image plane is at the position corresponding to 5 m from the eye after myopia correction.
For example, the VR glasses can determine the 5 m position after myopia correction according to the following formula (2).
[Formula (2), shown in the original as image PCTCN2022085632-appb-000004: the relation between the virtual image distance V and the degree of myopia M.]
Here, M is the degree of myopia, and V is the distance from the human eye in the virtual scene.
The VR glasses can display the astigmatism detection chart shown in Figure 31 on this virtual image plane, and guide the user to input the directions that appear unclear. For example, the direction may be the one corresponding to any one or more of the numbers 1 to 12 shown in Figure 31. In some embodiments, the user may input the information about the unclear directions by voice input, by text input on a remote control, or the like. Based on the number input by the user, the VR glasses can determine the astigmatism axis direction corresponding to that number. For example, if the number input by the user is 1, the VR glasses can determine that the astigmatism axis direction is 60°; similarly, if the user inputs 5, the VR glasses can determine that the astigmatism axis direction is 120°; and so on.
In this example, the VR glasses can control the zoom mechanism to rotate according to the user's astigmatism axis direction, and then adjust the cylindrical power of the zoom mechanism until the user can see all the lines on the astigmatism detection chart shown in Figure 31 equally clearly. At that point, the cylindrical power of the zoom mechanism is the user's astigmatism degree. In this way, astigmatism detection for the user's eye can be realized. A minimal sketch of this procedure is given below.
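A heavily simplified sketch of this axis-then-power procedure is given below. The dial-to-axis mapping contains only the two example values stated above (1 corresponds to 60° and 5 corresponds to 120°); the remaining mapping, the cylinder sweep range and step, and all callables are assumptions for illustration only.

```python
# Minimal sketch of astigmatism measurement: rotate the zoom mechanism to the axis
# reported by the user, then increase cylindrical power until all chart lines look
# equally clear. Mapping entries beyond the two examples in the text, the sweep
# range/step, and the callables are illustrative assumptions.

DIAL_TO_AXIS_DEG = {1: 60, 5: 120}   # only the example values from the description

def measure_astigmatism(reported_dial, rotate_to_axis, set_cylinder_power,
                        lines_equally_clear, max_cylinder=-6.0, step=-0.25):
    axis = DIAL_TO_AXIS_DEG.get(reported_dial)
    if axis is None:
        raise ValueError("no axis mapping known for this dial number in this sketch")
    rotate_to_axis(axis)                 # align the cylindrical power with the axis
    power = 0.0
    while power >= max_cylinder:         # cylinder powers grow more negative
        set_cylinder_power(power)
        if lines_equally_clear():        # user sees all chart lines equally clearly
            return axis, power           # current cylinder power = astigmatism degree
        power += step
    return axis, None                    # not found within the sweep range

# Example usage with console stand-ins:
if __name__ == "__main__":
    print(measure_astigmatism(
        reported_dial=1,
        rotate_to_axis=lambda a: print(f"rotating zoom mechanism to {a} deg"),
        set_cylinder_power=lambda p: print(f"cylinder power -> {p:.2f}D"),
        lines_equally_clear=lambda: input("all lines equally clear? (y/n) ") == "y",
    ))
```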
In some embodiments, in combination with the implementation of the foregoing remote optometry solution, this astigmatism detection process can also be performed remotely. For example, a physician performing remote detection can, through the cloud, control the VR glasses to guide the user through the astigmatism test.
It should be noted that the above descriptions of myopia correction and astigmatism correction each take one of the user's eyes as an example. In a scenario where both of the user's eyes need to be corrected, the VR glasses can perform the corresponding steps of the above solutions for each of the two eyes separately, thereby making the corresponding adjustments according to the image-recognition ability (such as myopia and/or astigmatism) of each eye and ensuring the user's naked-eye virtual display experience.
It can be understood that different users' eyes have different image-recognition abilities. In some embodiments of this application, the VR glasses can generate a correspondence between the obtained myopia and/or astigmatism degree of a user and that user's information, so that repeated detection of the same user can be avoided the next time the user uses the VR glasses.
For example, in some embodiments, the VR glasses can store, locally or in the cloud, a correspondence between the user's iris information and the user's myopia and/or astigmatism degree. The next time the user uses the VR glasses, the VR glasses can determine the user's myopia and/or astigmatism degree from the iris information and then control the zoom module to adjust the optical power and/or the axis of the incident light, thereby providing the user with a naked-eye virtual display.
Refer to Figure 32, which shows another virtual display method provided by an embodiment of this application. As shown in the figure, the solution may include the following steps.
S3201. When the user uses the VR glasses, extract the user's iris features.
For example, the VR glasses can photograph the user's eye through their eye-tracking module, and extract the iris features of that eye by analyzing the captured image.
In the implementation of this application, the VR glasses may extract the iris features of only one of the user's eyes. In this way, the user's identity can be confirmed while saving computing power.
S3202. Determine, according to the acquired iris features, whether virtual display information matching the user exists.
The virtual display information may include a degree of myopia and/or a degree of astigmatism.
In some embodiments, the VR glasses may store correspondences between the iris features of different users and their degrees of myopia and/or astigmatism. In this way, after acquiring the current user's iris features, the VR glasses can query these correspondences to check whether a matching entry exists for those iris features. If a match exists, virtual display information matching the user exists; conversely, if no matching entry is found, no virtual display information matching the user exists.
In some other embodiments, the correspondences between the iris features of different users and their degrees of myopia and/or astigmatism may be stored in the cloud. The VR glasses can send the acquired iris features of the current user to the cloud, so that the cloud can query whether a matching entry exists for those iris features. If the cloud determines that a matching entry exists, it can deliver to the VR glasses the refractive-error prescription corresponding to that entry in the above correspondence. In some embodiments, the refractive-error prescription may include the degree of myopia and/or the degree of astigmatism. When the VR glasses receive the degree of myopia and/or astigmatism, they can confirm that virtual display information matching the user exists, and that virtual display information can be the degree of myopia and/or astigmatism received from the cloud. Correspondingly, when the cloud determines that no matching entry exists for the iris features, it can send a not-found notification to the VR glasses so that the VR glasses can determine that no matching entry exists. Alternatively, the cloud may send no information at all when no matching entry exists, in which case the VR glasses can determine that no matching entry exists if no degree of myopia and/or astigmatism is received within a preset time.
It should be noted that, in some other embodiments of this application, the degree of myopia and/or astigmatism included in the virtual display information may be identified by the corresponding optical power and/or axis.
In addition, when the zoom module changes the optical power, the magnification of the entire display system changes; that is, the size of the VR content seen by the user changes. Moreover, changing the optical power of the zoom device also changes the distortion parameters of the display system. Therefore, in some embodiments of this application, the virtual display information may further include distortion-correction parameters and/or magnification parameters corresponding to the optical power used in adjusting for the degree of myopia.
If virtual display information matching the user exists, the VR glasses can perform the following S3203.
If no virtual display information matching the user exists, the VR glasses can perform the following S3204.
S3203. Retrieve the virtual display information, and provide the user with a virtual display according to that virtual display information.
When it is determined that a matching entry exists, the VR glasses can retrieve the parameters included in the virtual display information and, according to these parameters, provide a virtual display to the current user's eyes.
S3204. Determine the user's virtual display information.
S3205. Provide the user with a virtual display according to the user's virtual display information.
For the process of determining the user's virtual display information, reference may be made to the processes of detecting the user's degree of myopia and degree of astigmatism in the above examples, from which the user's virtual display information can be determined. For the process of providing the user with a virtual display according to the virtual display information, reference may be made to the solution of adjusting the optical power according to the degree of myopia and the solution of adjusting the axis of the incident light according to the user's degree of astigmatism in the above examples; details are not repeated here.
In some embodiments, because no match for the current user's iris features is found among the stored iris features, the user may be a new user. In this case, to ensure the security of the user's iris information, the VR glasses may obtain the user's authorization before performing S3204. The VR glasses can then establish a correspondence between the current user's iris features and the corresponding virtual display information. In some implementations, the VR glasses can store this correspondence so that repeated detection can be avoided the next time this user uses the VR glasses. A minimal sketch of this lookup flow is given below.
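The sketch below illustrates the Figure 32 flow under simple assumptions: the stored correspondence is modeled as a local dictionary keyed by an iris-feature string, authorization is a yes/no callable, and the measurement and display steps are stand-ins rather than interfaces defined in this application.

```python
# Minimal sketch of the Figure 32 flow: look up stored virtual display information by
# iris features; if absent, measure it and, with the user's authorization, store it.
# The store, keys, and callables are illustrative stand-ins.

def provide_display(iris_features, store, measure_display_info,
                    apply_display_info, user_authorizes):
    info = store.get(iris_features)          # S3202: query the local/cloud correspondence
    if info is None:
        info = measure_display_info()        # S3204: run the detection procedures
        if user_authorizes():                # new user: ask before storing biometrics
            store[iris_features] = info      # remember for the next session
    apply_display_info(info)                 # S3203/S3205: adjust power and/or axis
    return info

# Example usage:
if __name__ == "__main__":
    store = {"iris-A": {"myopia_power": -3.25, "astig_power": -0.75, "astig_axis": 60}}
    result = provide_display(
        "iris-B",
        store,
        measure_display_info=lambda: {"myopia_power": -1.25,
                                      "astig_power": 0.0, "astig_axis": None},
        apply_display_info=lambda info: print("applying", info),
        user_authorizes=lambda: True,
    )
    print(result, sorted(store.keys()))
```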
It should be noted that, in the solution shown in Figure 32, different users are identified by their iris features as an example. In some other embodiments of this application, the virtual display device may also identify different users in other ways. For example, the virtual display device may identify different users by their user accounts and/or by their biometric information (such as interocular distance, fingerprints, or voiceprints). Then, similar to the above correspondence between iris features and virtual display information, the virtual display device may store corresponding correspondences between the feature information used to identify different users and the virtual display information, thereby realizing the function corresponding to Figure 32.
The foregoing mainly describes the solutions provided by the embodiments of this application from the perspective of the electronic device. To implement the above functions, the device includes corresponding hardware structures and/or software modules for performing each function. Those skilled in the art should readily appreciate that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, this application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each specific application, but such implementation should not be regarded as going beyond the scope of this application.
The embodiments of this application may divide the devices involved into functional modules according to the above method examples. For example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of this application is schematic and is only a logical functional division; there may be other division manners in actual implementation.
For example, Figure 33 shows a schematic composition diagram of an electronic device 3300. The electronic device 3300 may correspond to any of the virtual display devices in the above embodiments. As shown in Figure 33, the electronic device 3300 may include a processor 3301 and a memory 3302. The memory 3302 is configured to store computer-executable instructions. For example, in some embodiments, when the processor 3301 executes the instructions stored in the memory 3302, the electronic device 3300 can be caused to perform the virtual display method shown in any of the above embodiments.
It should be noted that all the related content of the steps involved in the above method embodiments can be cited in the functional descriptions of the corresponding functional modules; details are not repeated here.
Figure 34 shows a schematic composition diagram of a chip system 3400. The chip system 3400 may be disposed in any of the virtual display devices in the above embodiments. The chip system 3400 may include a processor 3401 and a communication interface 3402, configured to support the related device in implementing the functions involved in the above embodiments. In a possible design, the chip system further includes a memory configured to store the program instructions and data necessary for the virtual display device. The chip system may be composed of chips, or may include chips and other discrete devices. It should be noted that, in some implementations of this application, the communication interface 3402 may also be referred to as an interface circuit.
It should be noted that all the related content of the steps involved in the above method embodiments can be cited in the functional descriptions of the corresponding functional modules; details are not repeated here.
The functions, actions, operations, steps, and the like in the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by a software program, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of this application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).
Although this application has been described in conjunction with specific features and embodiments thereof, it is apparent that various modifications and combinations may be made without departing from the scope of the embodiments. Accordingly, the specification and drawings are merely illustrative of this application as defined by the appended claims, and are deemed to cover any and all modifications, variations, combinations, or equivalents within the scope of this application. Apparently, those skilled in the art can make various changes and modifications to this application without departing from the scope of the embodiments. If these modifications and variations of this application fall within the scope of the claims of this application and their equivalent technologies, this application is also intended to include them.

Claims (37)

1. A virtual display device, wherein the virtual display device is configured to provide a user with a display function of a virtual three-dimensional environment; the virtual display device comprises:
a processor and a first optical display module, wherein the first optical display module is configured to display images to a first human eye under the control of the processor, and the first human eye is either one of the user's two eyes;
the first optical display module is configured to implement the following functions under the control of the processor:
displaying a first object, wherein a depth of convergence of the first object is a first depth of convergence, and a depth of an imaging plane of the first optical display module is a first depth;
displaying a second object, wherein a depth of convergence of the second object is a second depth of convergence, and the depth of the imaging plane of the first optical display module is a second depth;
wherein, when the first depth of convergence and the second depth of convergence are different, the optical display module adjusts the first depth and the second depth to be different.
  2. 根据权利要求1所述的虚拟显示设备,其特征在于,The virtual display device according to claim 1, wherein:
    在所述第一辐辏深度大于所述第二辐辏深度时,所述第一深度大于所述第二深度;在所述第一辐辏深度小于所述第二辐辏深度时,所述第一深度小于所述第二深度。When the first depth of convergence is greater than the second depth of convergence, the first depth is greater than the second depth; when the first depth of convergence is smaller than the second depth of convergence, the first depth is less than the second depth.
  3. 根据权利要求1或2所述的虚拟显示设备,其特征在于,所述虚拟显示设备还包括:第一眼动追踪模组,所述第一眼动追踪模组用于在所述处理器的控制下,对所述第一人眼进行眼动追踪;The virtual display device according to claim 1 or 2, characterized in that, the virtual display device further comprises: a first eye-tracking module, the first eye-tracking module is used in the processor Under control, performing eye movement tracking on the first human eye;
    所述处理器用于控制所述第一眼动追踪模组对所述第一人眼的注视点进行眼动追踪,The processor is used to control the first eye-tracking module to perform eye-tracking on the fixation point of the first human eye,
    在所述第一人眼的注视点的辐辏深度不同时,所述第一光学显示模组被配置为调整所述成像面深度不同。When the depths of convergence of the gaze points of the first human eyes are different, the first optical display module is configured to adjust the depth of the imaging surface to be different.
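A minimal sketch of how a device of this kind could derive the depth of convergence from tracked gaze directions and drive the imaging-plane depth accordingly is given below. The gaze-ray intersection model, the assumed inter-pupillary distance, and all function and API names (for example set_imaging_plane_depth) are illustrative assumptions and are not taken from the claims.

```python
import numpy as np

IPD_M = 0.063  # assumed inter-pupillary distance in metres

def convergence_depth(left_dir: np.ndarray, right_dir: np.ndarray) -> float:
    """Depth (metres) of the point where the two gaze rays come closest.

    The left eye is placed at (-IPD/2, 0, 0), the right eye at (+IPD/2, 0, 0),
    and z is the viewing axis. This midpoint-of-closest-approach model is an
    assumption; the claims do not prescribe how convergence depth is computed.
    """
    p_l = np.array([-IPD_M / 2, 0.0, 0.0])
    p_r = np.array([+IPD_M / 2, 0.0, 0.0])
    d_l = left_dir / np.linalg.norm(left_dir)
    d_r = right_dir / np.linalg.norm(right_dir)
    # Closest points on two skew lines (standard formula).
    w0 = p_l - p_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w0, d_r @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:              # rays (nearly) parallel: treat as "far"
        return float("inf")
    t_l = (b * e - c * d) / denom
    t_r = (a * e - b * d) / denom
    midpoint = 0.5 * ((p_l + t_l * d_l) + (p_r + t_r * d_r))
    return float(midpoint[2])

def update_imaging_plane(display, left_dir, right_dir) -> None:
    # When the convergence depth changes, the imaging-plane depth follows it.
    depth = convergence_depth(left_dir, right_dir)
    display.set_imaging_plane_depth(depth)   # assumed device API
```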
  4. The virtual display device according to any one of claims 1 to 3, characterized in that, when the diopters of the user are different, the first optical display module is configured to adjust the imaging-plane depth to be different.
  5. The virtual display device according to any one of claims 1 to 4, characterized in that the virtual display device further comprises a second optical display module, wherein the second optical display module is used to display images to a second human eye under the control of the processor, and the second human eye is the one of the user's two eyes that is different from the first human eye;
    the first optical display module is used to implement the following function under the control of the processor:
    displaying the first object, wherein the imaging-plane depth at which the first object is displayed is a third depth, the depth of the first object in the virtual three-dimensional environment is a second depth, and the first depth is close to or the same as the third depth.
  6. The virtual display device according to any one of claims 1 to 5, characterized in that the first optical module comprises a first zoom module, wherein the optical power of the first zoom module is adjustable;
    the first zoom module is used to, under the control of the processor, adjust the optical power of the first zoom module to a first optical power when the first object is displayed, so that the first optical module having the first optical power is able to form an image at the first depth; or
    the first zoom module is used to, under the control of the processor, adjust the optical power of the first zoom module to a second optical power when the second object is displayed, so that the first optical module having the second optical power is able to form an image at the second depth.
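As background for claim 6, the link between the adjustable optical power and the resulting imaging-plane depth can be illustrated with a simple thin-lens approximation; the claims do not fix the optical model, so this relation and its symbols are assumptions for illustration only. With the display panel at a fixed distance \(d_s\) in front of a zoom module of power \(P\) (in dioptres, with \(d_s < 1/P\)), the virtual image, that is, the imaging plane, appears at a distance \(d_v\) given by

\[ \frac{1}{d_v} = \frac{1}{d_s} - P \quad\Longrightarrow\quad d_v = \frac{1}{\frac{1}{d_s} - P}. \]

Setting the module to a first or a second optical power therefore places the imaging plane at a first or a second depth; as \(P\) approaches \(1/d_s\), the imaging plane recedes toward optical infinity.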
  7. The virtual display device according to any one of claims 1 to 6, characterized in that
    the first optical display module is further used to display a third object at a first virtual position and a second virtual position in the virtual three-dimensional environment under the control of the processor; and
    the first virtual position and the second virtual position are at different distances from the user's eyes in the three-dimensional environment.
  8. The virtual display device according to claim 7, characterized in that
    the virtual display device displays the third object at the first virtual position and the second virtual position respectively when the currently displayed scene is a preset scene,
    wherein the preset scene comprises at least one of the following scenes:
    an advertisement playback scene and a display resource loading scene.
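A minimal sketch of how claims 7 and 8 could be exercised in software follows; the scene names, the chosen positions, and the place_object call are hypothetical and only illustrate placing the same object at a near and a far distance during a preset scene.

```python
PRESET_SCENES = {"ad_playback", "resource_loading"}   # assumed scene identifiers

def show_third_object(display, scene: str) -> None:
    """During a preset scene, show one object at two virtual distances."""
    if scene in PRESET_SCENES:
        near = (0.0, 0.0, 0.5)    # 0.5 m from the user's eyes in the virtual scene
        far = (0.0, 0.0, 3.0)     # 3 m from the user's eyes
        for position in (near, far):
            display.place_object("third_object", position)   # assumed scene-graph API
```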
  9. The virtual display device according to any one of claims 1 to 8, characterized in that
    the first optical display module is further used to display a first detection image to the first human eye under the control of the processor, wherein, when the first detection image is displayed, the number of pixels used to display the opening of the first detection image is a first number; and
    the processor is further used to receive first recognition feedback, wherein the first recognition feedback is an indication input by the user when observing the first detection image with the first human eye, and the first recognition feedback is used to indicate whether the user is able to identify the opening direction of the first detection image.
  10. The virtual display device according to claim 9, characterized in that,
    when the first optical display module uses the first number of pixels to display the opening of the first detection image to the user, the visual acuity score at which the first human eye observes the opening of the first detection image is a first visual acuity score; and
    in a case where the first recognition feedback indicates that the user is unable to identify the opening direction of the first detection image,
    the processor is used to determine that the degree of myopia of the first human eye is the degree of myopia corresponding to the first visual acuity score.
  11. The virtual display device according to claim 9 or 10, characterized in that,
    in a case where the first recognition feedback indicates that the user is able to identify the opening direction of the first detection image,
    the first optical display module is further used to display a second detection image to the first human eye under the control of the processor, wherein, when the second detection image is displayed, the number of pixels used to display the opening of the second detection image is a second number, and the second number is smaller than the first number; and
    the processor is further used to receive second recognition feedback, wherein the second recognition feedback is an indication input by the user when observing the second detection image with the first human eye, and the second recognition feedback is used to indicate whether the user is able to identify the opening direction of the second detection image.
  12. The virtual display device according to claim 11, characterized in that,
    when the first optical display module uses the second number of pixels to display the opening of the second detection image to the user, the visual acuity score at which the first human eye observes the opening of the second detection image is a second visual acuity score; and
    in a case where the second recognition feedback indicates that the user is unable to identify the opening direction of the second detection image,
    the processor is used to determine that the degree of myopia of the first human eye is the degree of myopia corresponding to the second visual acuity score.
  13. The virtual display device according to any one of claims 9 to 12, characterized in that
    the first optical display module is used to, before displaying the first detection image to the first human eye, adjust the optical power of the first optical display module to an initial optical power under the control of the processor, so that the first optical display module forms an image at the farthest position.
  14. The virtual display device according to any one of claims 9 to 13, characterized in that,
    in a case where the opening of a third detection image displayed by the first optical display module to the first human eye occupies a third number of pixels, and the received recognition feedback indicates that the user is unable to identify the opening direction of the third detection image,
    the optical power at which the first optical display module displays a virtual image to the first human eye is a third optical power, wherein the third optical power corresponds to a third visual acuity score, and the third visual acuity score is the visual acuity score corresponding to the third number of pixels.
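A minimal sketch of the detection-image test loop described in claims 9 to 14 is given below. The optotype renderer, the pixel-count ladder, and the score-to-myopia table are illustrative assumptions; the claims do not specify these values or interfaces.

```python
import random

OPENING_PIXELS = [32, 24, 16, 12, 8, 6, 4]        # assumed first, second, ... numbers
SCORE_FOR_PIXELS = {32: 4.0, 24: 4.2, 16: 4.5, 12: 4.7, 8: 4.9, 6: 5.0, 4: 5.1}
MYOPIA_FOR_SCORE = {4.0: -5.00, 4.2: -4.00, 4.5: -2.50, 4.7: -1.50,
                    4.9: -0.75, 5.0: 0.00, 5.1: 0.00}   # dioptres, assumed mapping

def run_acuity_test(display, ask_user) -> float:
    """Shrink the optotype opening until the user can no longer name its direction."""
    display.set_optical_power_to_initial()        # image at the farthest plane first (claim 13)
    for pixels in OPENING_PIXELS:
        direction = random.choice(["up", "down", "left", "right"])
        display.show_detection_image(opening_pixels=pixels, opening_direction=direction)
        answer = ask_user("Which way does the opening point?")   # recognition feedback
        if answer != direction:                   # user cannot identify this opening
            score = SCORE_FOR_PIXELS[pixels]
            return MYOPIA_FOR_SCORE[score]        # myopia degree for that acuity score
    return 0.0                                    # smallest opening still recognized
```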
  15. The virtual display device according to any one of claims 1 to 14, characterized in that
    the first optical display module comprises a rotation mechanism, wherein the rotation mechanism is used to rotate the first optical display module in a direction perpendicular to the optical axis under the control of the processor; and
    when the first optical display module presents the first image to the user, the optical power direction of the first optical display module coincides with the astigmatism axis of the first human eye.
  16. The virtual display device according to claim 15, characterized in that
    the processor is further used to determine the degree of astigmatism of the first human eye according to the rotation of the rotation mechanism when the optical power direction of the first optical display module coincides with the astigmatism axis of the first human eye.
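One way the axis search implied by claims 15 and 16 could be driven in software is sketched below, assuming the rotation sweeps the module's power direction around the optical axis and that the user rates image sharpness; the step size, the test target, and all interfaces are hypothetical. Determining the degree of astigmatism from the recorded rotation would be handled separately.

```python
def find_astigmatism_axis(rotation_mechanism, display, ask_user, step_deg: float = 5.0) -> float:
    """Rotate in small steps and keep the angle at which the target looks sharpest."""
    display.show_line_target()                    # assumed astigmatism test pattern
    best_angle, best_rating = 0.0, -1.0
    angle = 0.0
    while angle < 180.0:                          # a cylinder axis repeats every 180 degrees
        rotation_mechanism.rotate_to(angle)       # power direction of the module
        rating = float(ask_user("Rate sharpness from 0 to 10 at this position"))
        if rating > best_rating:
            best_angle, best_rating = angle, rating
        angle += step_deg
    rotation_mechanism.rotate_to(best_angle)      # power direction now coincides with the axis
    return best_angle
```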
  17. The virtual display device according to any one of claims 1 to 16, characterized in that
    the processor is further used to acquire a user feature of the current user before controlling the first optical display module to display the first image; and
    the processor is specifically used to control the first optical display module to display a first image corresponding to the user feature of the current user,
    wherein the optical power of the first optical display module when the first image is displayed matches the degree of myopia and/or the degree of astigmatism of the first human eye of the current user, and the degree of myopia and/or the degree of astigmatism of the first human eye of the current user is indicated by virtual display information corresponding to the user feature of the current user.
  18. The virtual display device according to claim 17, characterized in that
    the user feature comprises any one of the following features: fingerprint information of the current user; an iris feature of the current user; account information of the current user; and an identifier of the current user, wherein different users have different identifiers.
  19. The virtual display device according to claim 17 or 18, characterized in that the virtual display device stores correspondences between different user features and corresponding virtual display information; and
    the processor is used to look up a matching entry in the correspondences according to the user feature of the current user, and, in a case where the matching entry exists, determine that the virtual display information corresponding to the current user is the virtual display information stored in the matching entry, wherein the virtual display information comprises the degree of myopia and/or the degree of astigmatism of the corresponding user.
  20. The virtual display device according to claim 19, characterized in that,
    when no entry matching the user feature of the current user exists in the correspondences,
    the first optical display module is further used to display the first image under the control of the processor,
    wherein the optical power when the first image is displayed is an optical power that matches the degree of myopia and/or the degree of astigmatism of the first human eye of the current user, and the degree of myopia and/or the degree of astigmatism of the first human eye is measured automatically by the processor, or measured under an instruction of the user, or manually input by the user.
  21. The virtual display device according to claim 20, characterized in that
    the processor is further used to store a correspondence between the degree of myopia and/or the degree of astigmatism of the first human eye and the user feature of the current user.
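A sketch of the per-user lookup described in claims 17 to 21 follows. The dataclass fields, the in-memory dictionary, and the device calls are illustrative assumptions; the claims only require that a user feature maps to stored myopia and/or astigmatism information, with a measure-and-store fallback when no entry matches.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class VirtualDisplayInfo:
    myopia_diopters: float
    astigmatism_diopters: float

profiles: Dict[str, VirtualDisplayInfo] = {}      # user feature -> stored virtual display info

def prepare_display(display, user_feature: str, measure) -> VirtualDisplayInfo:
    """Look up stored info for this user; measure and remember it if absent."""
    info: Optional[VirtualDisplayInfo] = profiles.get(user_feature)
    if info is None:
        # No matching entry: measure (or ask the user), then store it (claims 20 and 21).
        info = measure()
        profiles[user_feature] = info
    # Set an optical power matching the stored degrees before showing the first image.
    display.set_power_for(info.myopia_diopters, info.astigmatism_diopters)   # assumed API
    return info
```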
  22. A virtual display method, characterized in that the virtual display method is applied to the virtual display device according to any one of claims 1 to 21, and the virtual display method is used to provide a user with a display function of a virtual three-dimensional environment; the method comprises:
    displaying, by a first optical display module, a first object, wherein the depth of convergence of the first object is a first depth of convergence, and the imaging-plane depth of the first optical display module is a first depth; and
    displaying, by the first optical display module, a second object, wherein the depth of convergence of the second object is a second depth of convergence, and the imaging-plane depth of the first optical display module is a second depth;
    wherein, when the first depth of convergence and the second depth of convergence are different, the optical display module adjusts the first depth to be different from the second depth.
  23. The method according to claim 22, characterized in that,
    when the first depth of convergence is greater than the second depth of convergence, the first depth is greater than the second depth; and when the first depth of convergence is less than the second depth of convergence, the first depth is less than the second depth.
  24. The method according to claim 22 or 23, characterized in that the method further comprises:
    controlling, by the processor, a first eye-tracking module of the first optical display module to perform eye tracking on the gaze point of the first human eye,
    wherein, when the depths of convergence of the gaze point of the first human eye are different, the first optical display module is configured to adjust the imaging-plane depth to be different.
  25. The method according to any one of claims 22 to 24, characterized in that the first optical module comprises a first zoom module, the optical power of the first zoom module is adjustable, and the method further comprises:
    controlling, by the processor, the first zoom module to adjust the optical power of the first zoom module to a first optical power when the first object is displayed, so that the first optical module having the first optical power is able to form an image at the first depth; or
    controlling, by the processor, the first zoom module to adjust the optical power of the first zoom module to a second optical power when the second object is displayed, so that the first optical module having the second optical power is able to form an image at the second depth.
  26. The method according to any one of claims 22 to 25, characterized in that the method further comprises:
    controlling, by the processor, the first optical display module to display a third object at a first virtual position and a second virtual position in the virtual three-dimensional environment,
    wherein the first virtual position and the second virtual position are at different distances from the user's eyes in the three-dimensional environment;
    wherein, when the currently displayed scene is a preset scene, the first optical display module displays the third object at the first virtual position and the second virtual position respectively; and
    the preset scene comprises at least one of the following scenes:
    an advertisement playback scene and a display resource loading scene.
  27. The method according to any one of claims 22 to 26, characterized in that the method further comprises:
    controlling, by the processor, the first optical display module to display a first detection image to the first human eye, wherein, when the first detection image is displayed, the number of pixels used to display the opening of the first detection image is a first number;
    receiving, by the processor, first recognition feedback, wherein the first recognition feedback is an indication input by the user when observing the first detection image with the first human eye, and the first recognition feedback is used to indicate whether the user is able to identify the opening direction of the first detection image;
    wherein, when the first optical display module uses the first number of pixels to display the opening of the first detection image to the user, the visual acuity score at which the first human eye observes the opening of the first detection image is a first visual acuity score; and
    in a case where the first recognition feedback indicates that the user is unable to identify the opening direction of the first detection image,
    determining, by the processor, that the degree of myopia of the first human eye is the degree of myopia corresponding to the first visual acuity score.
  28. The method according to claim 27, characterized in that,
    in a case where the first recognition feedback indicates that the user is able to identify the opening direction of the first detection image, the method further comprises:
    controlling, by the processor, the first optical display module to display a second detection image to the first human eye, wherein, when the second detection image is displayed, the number of pixels used to display the opening of the second detection image is a second number, and the second number is smaller than the first number;
    receiving, by the processor, second recognition feedback, wherein the second recognition feedback is an indication input by the user when observing the second detection image with the first human eye, and the second recognition feedback is used to indicate whether the user is able to identify the opening direction of the second detection image;
    wherein, when the first optical display module uses the second number of pixels to display the opening of the second detection image to the user, the visual acuity score at which the first human eye observes the opening of the second detection image is a second visual acuity score; and
    in a case where the second recognition feedback indicates that the user is unable to identify the opening direction of the second detection image,
    determining, by the processor, that the degree of myopia of the first human eye is the degree of myopia corresponding to the second visual acuity score.
  29. The method according to claim 27 or 28, characterized in that,
    in a case where the opening of a third detection image displayed by the first optical display module to the first human eye occupies a third number of pixels, and the received recognition feedback indicates that the user is unable to identify the opening direction of the third detection image,
    the processor controls the optical power at which the first optical display module displays a virtual image to the first human eye to be a third optical power, wherein the third optical power corresponds to a third visual acuity score, and the third visual acuity score is the visual acuity score corresponding to the third number of pixels.
  30. The method according to any one of claims 22 to 29, characterized in that the method further comprises:
    acquiring, by the processor, a user feature of the current user before controlling the first optical display module to display the first image,
    wherein the controlling, by the processor, the first optical display module to display the first image comprises:
    controlling, by the processor, the first optical display module to display a first image corresponding to the user feature of the current user,
    wherein the optical power of the first optical display module when the first image is displayed matches the degree of myopia and/or the degree of astigmatism of the first human eye of the current user, the degree of myopia and/or the degree of astigmatism of the first human eye of the current user is indicated by virtual display information corresponding to the user feature of the current user, and
    the user feature comprises any one of the following features: fingerprint information of the current user; an iris feature of the current user; account information of the current user; and an identifier of the current user, wherein different users have different identifiers.
  31. The method according to claim 30, characterized in that correspondences between different user features and corresponding virtual display information are stored, and the method further comprises:
    looking up, by the processor, a matching entry in the correspondences according to the user feature of the current user, and, in a case where the matching entry exists, determining that the virtual display information corresponding to the current user is the virtual display information stored in the matching entry, wherein the virtual display information comprises the degree of myopia and/or the degree of astigmatism of the corresponding user.
  32. The method according to claim 31, characterized in that the method further comprises:
    when no entry matching the user feature of the current user exists in the correspondences,
    displaying, by the first optical display module, the first image under the control of the processor,
    wherein the optical power when the first image is displayed is an optical power that matches the degree of myopia and/or the degree of astigmatism of the first human eye of the current user, and the degree of myopia and/or the degree of astigmatism of the first human eye is measured automatically by the processor, or measured under an instruction of the user, or manually input by the user.
  33. The method according to claim 32, characterized in that the method further comprises:
    storing, by the processor, a correspondence between the degree of myopia and/or the degree of astigmatism of the first human eye and the user feature of the current user.
  34. A virtual display device, characterized in that the virtual display device is used to provide a user with a display function of a virtual three-dimensional environment;
    the virtual display device comprises one or more processors and one or more memories, wherein the one or more memories are coupled to the one or more processors, and the one or more memories store computer instructions; and
    when the one or more processors execute the computer instructions, the virtual display device is caused to perform the virtual display method according to any one of claims 22 to 33.
  35. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises computer instructions, and when the computer instructions are run, the virtual display method according to any one of claims 22 to 33 is performed.
  36. A virtual display device, characterized in that the virtual display device is used to provide a user with a display function of a virtual three-dimensional environment; the virtual display device comprises:
    a processor and a first optical display module,
    wherein the first optical display module is used to display a first image to a first human eye under the control of the processor, the first image comprises a first object, the first object is an object in the virtual three-dimensional environment, and the first human eye is either one of the user's two eyes; and
    the first optical display module displays the first object, wherein the imaging-plane depth at which the first object is displayed is a first depth, the depth of the first object in the virtual three-dimensional environment is a second depth, and the first depth is close to or the same as the second depth.
  37. The virtual display device according to claim 36, characterized in that, when displaying the first object, the first optical module adjusts the optical power of the first optical module so that the depth of the virtual image plane at which the first object is displayed is the first depth.
PCT/CN2022/085632 2021-05-27 2022-04-07 Virtual display device and virtual display method WO2022247482A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110587591.7A CN115407504A (en) 2021-05-27 2021-05-27 Virtual display apparatus and virtual display method
CN202110587591.7 2021-05-27

Publications (1)

Publication Number Publication Date
WO2022247482A1 true WO2022247482A1 (en) 2022-12-01

Family

ID=84156628

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/085632 WO2022247482A1 (en) 2021-05-27 2022-04-07 Virtual display device and virtual display method

Country Status (2)

Country Link
CN (1) CN115407504A (en)
WO (1) WO2022247482A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118655739A (en) * 2024-08-22 2024-09-17 歌尔光学科技有限公司 Optical projection system and AR optical equipment

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024148536A1 (en) * 2023-01-11 2024-07-18 Jade Bird Display (shanghai) Limited Method and system for image evaluation of near-eye displays
TWI858918B (en) * 2023-09-14 2024-10-11 宏達國際電子股份有限公司 Image display device
CN117998071B (en) * 2024-04-07 2024-06-18 清华大学 Eye movement tracking light field 3D display method and device, electronic equipment and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106233189A (en) * 2014-01-31 2016-12-14 奇跃公司 multifocal display system and method
CN108124509A (en) * 2017-12-08 2018-06-05 深圳前海达闼云端智能科技有限公司 Image display method, wearable intelligent device and storage medium
CN108369325A (en) * 2015-12-08 2018-08-03 欧库勒斯虚拟现实有限责任公司 Focus adjusts virtual reality headset
CN108478184A (en) * 2018-04-26 2018-09-04 京东方科技集团股份有限公司 Eyesight measurement method and device, VR equipment based on VR
CN108663799A (en) * 2018-03-30 2018-10-16 蒋昊涵 A kind of display control program and its display control method of VR images
US20190258054A1 (en) * 2018-02-20 2019-08-22 University Of Rochester Optical approach to overcoming vergence-accommodation conflict
CN110325895A (en) * 2017-02-21 2019-10-11 脸谱科技有限责任公司 It focuses and adjusts more plane head-mounted displays
CN110431470A (en) * 2017-01-19 2019-11-08 脸谱科技有限责任公司 Focal plane is shown
CN110727111A (en) * 2019-10-23 2020-01-24 深圳惠牛科技有限公司 Head-mounted display optical system and head-mounted display equipment
US20200183152A1 (en) * 2018-12-10 2020-06-11 Daqri, Llc Optical hyperfocal reflective systems and methods, and augmented reality and/or virtual reality displays incorporating same

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105866949B (en) * 2015-01-21 2018-08-17 成都理想境界科技有限公司 The binocular AR helmets and depth of field adjusting method of the depth of field can be automatically adjusted
CN105072436A (en) * 2015-08-28 2015-11-18 胡东海 Automatic adjustment method and adjustment device of virtual reality and augmented reality imaging depth-of-field
US10627901B2 (en) * 2018-09-14 2020-04-21 Facebook Technologies, Llc Vergence determination

Also Published As

Publication number Publication date
CN115407504A (en) 2022-11-29

Similar Documents

Publication Publication Date Title
WO2022247482A1 (en) Virtual display device and virtual display method
US20220121280A1 (en) Enhancing the performance of near-to-eye vision systems
JP7699254B2 (en) Display system and method for determining vertical alignment between left and right displays and a user's eyes - Patents.com
AU2014229610B2 (en) Wavefront generation for ophthalmic applications
KR102056221B1 (en) Method and apparatus For Connecting Devices Using Eye-tracking
CN114255204B (en) Amblyopia training method, device, equipment and storage medium
CN106802486A (en) Method for adjusting focal length and head-mounted display
US12210150B2 (en) Head-mountable display systems and methods
WO2015051605A1 (en) Image collection and locating method, and image collection and locating device
CN113419350A (en) Virtual reality display device, picture presentation method, picture presentation device and storage medium
US12118145B2 (en) Electronic apparatus
CN112926523B (en) Eyeball tracking method and system based on virtual reality
CN114895790A (en) Man-machine interaction method and device, electronic equipment and storage medium
WO2023001113A1 (en) Display method and electronic device
WO2023116541A1 (en) Eye tracking apparatus, display device, and storage medium
WO2021057420A1 (en) Method for displaying control interface and head-mounted display
CN114624883B (en) Mixed reality glasses system based on flexible curved surface transparent micro display screen
WO2024045446A1 (en) Iris image capture method based on head-mounted display, and related product
WO2024021251A1 (en) Identity verification method and apparatus, and electronic device and storage medium
WO2023035911A1 (en) Display method and electronic device
CN108805856B (en) Near-sighted degree on-site verification system
CN120182135A (en) Image display method, image display device, electronic device and storage medium
CN107544661B (en) Information processing method and electronic equipment
WO2024230660A1 (en) Processing method and apparatus for eyeball tracking, device, and storage medium
CN119135873A (en) Optical parameter setting method of device and terminal device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22810215

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22810215

Country of ref document: EP

Kind code of ref document: A1