Disclosure of Invention
An embodiment of the invention provides virtual reality glasses for improving display resolution and for alleviating the visual fatigue and dizziness caused by the vergence-accommodation conflict.
An embodiment of the invention provides a pair of virtual reality glasses comprising a display chip and a lens array, wherein the display chip is positioned on the light-incident side of the lens array, and a virtual image formed by light emitted by the display chip and passing through the lens array is positioned on the side of the display chip away from the lens array;
The display chip comprises a plurality of pixel units arranged in rows and columns along a first direction and a second direction; the pixel units arranged along the first direction have the same polarization state, and, along the second direction, the polarization states of two adjacent rows of pixel units are orthogonal; the first direction is perpendicular to the second direction;
The lens array comprises a plurality of lenses arranged along a third direction and a fourth direction. The lenses comprise first lenses and second lenses: along the third direction, one second lens is arranged between two adjacent first lenses, and along the fourth direction, one second lens is arranged between two adjacent first lenses; the polarization states of the first lenses and the second lenses are orthogonal; and the third direction is perpendicular to the fourth direction.
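The interleaving described above amounts to a checkerboard of two orthogonal polarization states. The following is a minimal sketch (illustrative only; the function name and grid size are not from the disclosure) verifying that every lens neighbors only lenses of the orthogonal state along the third and fourth directions:

```python
# Hypothetical model (not from the patent): the lens array as a checkerboard
# of two orthogonal polarization states, indexed by (row, col).

def lens_polarization(row, col):
    """Return which lens type ('first' or 'second') sits at (row, col)."""
    return "first" if (row + col) % 2 == 0 else "second"

# Every neighbor along the third (row) and fourth (column) directions
# must hold the orthogonal polarization state.
for r in range(8):
    for c in range(8):
        assert lens_polarization(r, c) != lens_polarization(r + 1, c)
        assert lens_polarization(r, c) != lens_polarization(r, c + 1)
```

This is the same alternation the claim states for the pixel rows, applied in two directions at once.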
Optionally, the first lens and the second lens have the same focal length; the first lens and the second lens are mounted on different planes.
Optionally, the focal length of the lens satisfies a relation among the following quantities: f is the focal length of the first lens or the second lens, emax is the maximum diameter of the human eye pupil, emin is the minimum diameter of the human eye pupil, L is the distance between the first lens and the human eye pupil, D is the entrance pupil diameter of the lens, and Δ is the tracking error of the human eye pupil position.
Optionally, the first lens and the second lens have different focal lengths; the first lens and the second lens are arranged on the same plane.
Optionally, the distance between the centers of adjacent lenses is t, which satisfies a relation among the following quantities: emin is the minimum diameter of the human eye pupil, D is the entrance pupil diameter of the lens, and Δ is the tracking error of the human eye pupil position.
Optionally, the virtual reality glasses further comprise a driving module for driving the display chip and/or the lens array to move along an axial direction, and for driving the display chip and/or the lens array to vibrate along a vertical-axis direction; the axial direction is perpendicular to the plane of the display chip, and the vertical-axis direction is parallel to the plane of the display chip.
Optionally, the lens array further includes a lens array support frame, and the lenses are rotatably mounted on the lens array support frame;
The driving module comprises a telescopic device for driving the lenses to rotate, so that the lens array swings along the vertical-axis direction.
Optionally, the vertical-axis direction is parallel to the second direction.
Optionally, the virtual image includes at least two display layers arranged in sequence; the virtual reality glasses further comprise a sensor and a controller, wherein the sensor is used for detecting the position of the lens array and feeding it back to the controller in real time, and the controller is used for controlling the time points at which the display chip is lit according to the position of the lens array, so that the human eye sees the complete picture displayed on each display layer.
Optionally, the virtual image includes at least two display layers arranged in sequence; the virtual reality glasses further comprise a human eye tracking device and a controller, wherein the human eye tracking device is used for tracking and determining the position of the human eye pupil, so that a clear image can be observed over the entire eye movement range; the controller adjusts the display brightness of each display layer by adjusting the brightness of the pixel units, so that the image formed by visual fusion of the display layers appears at a distance that conforms to the visual comfort characteristics of the human eye.
Optionally, the virtual image includes a first display layer, a second display layer, a third display layer, and a fourth display layer that are sequentially arranged, and the first display layer is located between the display chip and the second display layer.
Optionally, the virtual reality glasses comprise a plurality of display modules, each display module comprising a display chip and a lens array arranged in parallel; the display chips are spliced together, the lens arrays are spliced together, and a non-zero included angle exists between two adjacent display chips.
Optionally, the first lens and the second lens have the same focal length; the first lens and the second lens are mounted on the same plane.
Optionally, the system further comprises a mechanical adjusting device, wherein the mechanical adjusting device is used for adjusting the initial position of the lens array and/or the display chip and is used for adapting to eyes of people with different eyesight.
Optionally, the virtual reality glasses further comprise a fixing device, a pupil detection device, and a driving and detection device;
The pupil detection device comprises an eye tracking device, and the driving and detection device comprises a driving module, a sensor, and a controller;
the eye tracking device is used for tracking and determining the position of the pupil of the human eye, so that clear images can be observed in any eye movement range;
the driving module is used for driving the display chip and/or the lens array to move along the axial direction and driving the display chip and/or the lens array to vibrate along the vertical axis direction;
The sensor is used for detecting the position of the lens array and feeding back the position to the controller in real time.
Optionally, the system further comprises an eye tracking device and a controller, wherein the eye tracking device is used for tracking and determining the position of the pupil of the human eye;
The controller projects images to the human eye pupils according to the positions of the pupils, so that the virtual reality glasses have a dynamic exit pupil and a clear image can be observed over the entire eye movement range.
An embodiment of the invention provides virtual reality glasses comprising a display chip and a lens array. The display chip has pixel units with mutually orthogonal polarization states, and the lens array has first lenses and second lenses with mutually orthogonal polarization states, arranged alternately along the third direction and the fourth direction. As a result, pixel units observed by the human eye through lenses of different polarization states do not interfere with one another. Because adjacent lenses have different polarization states, the effective display area of the display chip corresponding to a single lens is enlarged without changing the refresh rate of the display chip or the light-emitting area of the pixel units, which improves the display resolution. The glasses also alleviate the visual fatigue and dizziness caused by the vergence-accommodation conflict.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
Fig. 1 is a schematic structural diagram of virtual reality glasses according to an embodiment of the present invention, fig. 2 is a schematic diagram of the relative positions of a display chip and a lens array according to an embodiment of the present invention, fig. 3 is a front view of a display chip according to an embodiment of the present invention, and fig. 4 is a front view of a lens array according to an embodiment of the present invention. Referring to figs. 2 to 4, the virtual reality glasses include a display chip 10 and a lens array 20; the display chip 10 is located on the light-incident side of the lens array 20, and a virtual image formed by light emitted by the display chip 10 and passing through the lens array 20 is located on the side of the display chip 10 away from the lens array 20.
The display chip 10 includes a plurality of pixel units arranged in rows and columns in a first direction and a second direction, and the plurality of pixel units arranged in the first direction have the same polarization state, i.e., the plurality of pixel units in the same row have the same polarization state. Along the second direction, the polarization states of two adjacent rows of pixel units are orthogonal. The first direction is perpendicular to the second direction. Illustratively, the plurality of pixel cells includes a first pixel cell 11 and a second pixel cell 12, the polarization state of the first pixel cell 11 being orthogonal to the polarization state of the second pixel cell 12.
The mutually orthogonal polarization states may be linear polarization or circular polarization, for example. For example, the polarization state of the first pixel unit 11 is vertical linear polarization, and the polarization state of the second pixel unit 12 is horizontal linear polarization. Or the polarization state of the first pixel unit 11 is left-handed circular polarization, and the polarization state of the second pixel unit 12 is right-handed circular polarization. Of course, the mutually orthogonal polarization states are not limited to linear polarization and circular polarization, but may be other forms of polarization.
The lens array 20 includes a plurality of lenses arranged in rows and columns in the third direction and the fourth direction, and the plurality of lenses includes a first lens 21 and a second lens 22. In the third direction, a second lens 22 is spaced between two adjacent first lenses 21, a first lens 21 is spaced between two adjacent second lenses 22, and the first lenses 21 and the second lenses 22 are arranged at intervals one by one. In the fourth direction, a second lens 22 is spaced between two adjacent first lenses 21, a first lens 21 is spaced between two adjacent second lenses 22, and the first lenses 21 and the second lenses 22 are arranged at intervals one by one. The polarization states of the first lens 21 and the second lens 22 are orthogonal. In one embodiment, the first direction is parallel to the third direction and the second direction is parallel to the fourth direction. In another embodiment, the first direction intersects the third direction and the second direction intersects the fourth direction.
Illustratively, the polarization state of the first lens 21 is vertical linear polarization, and the polarization state of the second lens 22 is horizontal linear polarization. Or the polarization state of the first lens 21 is left-hand circular polarization, and the polarization state of the second lens 22 is right-hand circular polarization.
It can be understood that when the polarization state of the light emitted by the pixel unit is the same as the polarization state of the lens (i.e. the polarization state is parallel), the light can completely pass through; when the polarization state of the light emitted by the pixel unit is orthogonal to the polarization state of the lens, the light does not pass through (i.e. is cut off) at all.
In an embodiment, the polarization state of the first lens 21 is the same as the polarization state of the first pixel unit 11 and orthogonal to the polarization state of the second pixel unit 12. The polarization state of the second lens 22 is orthogonal to the polarization state of the first pixel unit 11 and the same as the polarization state of the second pixel unit 12. At this time, the first lens 21 may transmit the light emitted from the first pixel unit 11 and cut off the light emitted from the second pixel unit 12. The second lens 22 can transmit the light emitted from the second pixel unit 12 and cut off the light emitted from the first pixel unit 11.
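The pass/cut-off behavior described in the two preceding paragraphs is the ideal-polarizer limit of Malus's law (transmitted fraction cos²θ, where θ is the angle between the light's polarization and the polarizer axis). Malus's law is standard optics, not a statement from the disclosure; a one-line check:

```python
import math

# Malus's law for an ideal linear polarizer: transmitted fraction = cos^2(theta).
# theta = 0 deg  -> polarization parallel to the lens axis: fully transmitted.
# theta = 90 deg -> polarization orthogonal to the lens axis: cut off.
def transmitted_fraction(theta_deg):
    return math.cos(math.radians(theta_deg)) ** 2

assert transmitted_fraction(0) == 1.0          # parallel: light passes completely
assert abs(transmitted_fraction(90)) < 1e-12   # orthogonal: light is cut off
```

For circular polarization the same pass/cut-off behavior holds between left- and right-handed states, though the cos² form applies to the linear case.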
The embodiment of the invention thus provides a pair of virtual reality glasses comprising a display chip 10 and a lens array 20. The display chip 10 has pixel units with mutually orthogonal polarization states, and the lens array 20 has first lenses 21 and second lenses 22 with mutually orthogonal polarization states, arranged alternately along the third direction and the fourth direction. As a result, pixel units observed by the human eye through lenses of different polarization states do not interfere with one another. Because adjacent lenses have different polarization states, the effective display area of the display chip 10 corresponding to a single lens is enlarged without changing the refresh rate of the display chip 10 or the light-emitting area of the pixel units, which improves the display resolution. The glasses also alleviate the visual fatigue and dizziness caused by the vergence-accommodation conflict.
Fig. 5 is a perspective view of another lens array according to an embodiment of the present invention, fig. 6 is a side view of the lens array shown in fig. 5, and fig. 7 is a schematic view of a display effect according to an embodiment of the present invention. Referring to figs. 5-7, the first lens 21 and the second lens 22 have the same focal length, and the first lens 21 and the second lens 22 are mounted on different planes, so that the virtual images formed by the first lens 21 and the second lens 22 are located on different planes.
Referring to fig. 1 to 7, a virtual image formed by light emitted from the display chip 10 after passing through the second lens 22 is located on the first display layer 31, and a virtual image formed by light emitted from the display chip 10 after passing through the first lens 21 is located on the second display layer 32. The first display layer 31 is located between the second display layer 32 and the display chip 10.
According to the Rayleigh criterion, the angular resolution of an optical system is given by sin β = 1.22λ/D, where β is the angular resolution, λ is the wavelength, and D is the entrance pupil diameter; when D = 1.5 mm and λ = 500 nm, β is approximately 0.406 mrad. However, the depth of field of the lens is then small, so to achieve better depth perception the virtual reality glasses need multi-layer display for a more realistic stereoscopic vision.
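The quoted figure can be checked directly with the numbers given in the text:

```python
import math

# Rayleigh criterion: sin(beta) = 1.22 * lambda / D, with the text's values.
wavelength = 500e-9   # 500 nm
D = 1.5e-3            # 1.5 mm entrance pupil
beta = math.asin(1.22 * wavelength / D)   # angular resolution in radians
print(f"beta = {beta * 1e3:.3f} mrad")    # ~0.407 mrad, i.e. roughly the 0.406 mrad stated
```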
In practice, human visual acuity varies with the environment and is usually no finer than 1' (1/60 degree). In one example of the present invention, the angular resolution of normal vision is assumed to be 1.5' (1/40 degree, i.e. 0.436 mrad). When a virtual image is formed, the image distance is large, so the object distance is close to the focal length, and the angular resolution β of the image of a pixel unit satisfies:
β ≈ pitch/f. (1)
Here pitch is the size of the pixel unit, f is the focal length of the lens, and "≈" denotes approximate equality.
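Formula (1) can be used to estimate the pixel pitch needed to match the assumed acuity; the focal length f = 10 mm is the example value used later in this description:

```python
# Formula (1): beta ~ pitch / f, solved for the pixel pitch.
beta = 0.436e-3   # rad, the 1.5-arcminute acuity assumed in the text
f = 10e-3         # m, example focal length quoted later in the description
pitch = beta * f  # required pixel-unit size for the image to match that acuity
print(f"pitch = {pitch * 1e6:.2f} um")   # about 4.36 micrometers
```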
Fig. 8 is a schematic diagram of a portion of a display chip and a lens array. Referring to fig. 8, the dotted squares mark the display areas of the display chip 10 under the corresponding lenses, and the solid circles mark the lens array. To ensure that the images formed by adjacent lenses splice together completely, the region between the solid squares and the dotted squares in fig. 8 is set as a fusion region; the fusion regions corresponding to adjacent lenses display the same content and overlap after being projected onto the human eye pupil, so the images formed by adjacent lenses are fused. The distance between the centers of adjacent lenses is t.
Fig. 9 is a schematic view of the light spots projected onto the human eye pupil when the pupil is at its smallest. Referring to fig. 9, the distance between the centers of adjacent lenses is t, and the polarization states of adjacent lenses are different, so that formula (2) holds,
wherein emin is the minimum diameter of the human eye pupil, D is the entrance pupil diameter of the lens, and Δ is the tracking error of the human eye pupil position. At a large viewing distance, the entrance pupil diameter D is equivalent to the size of the spot that a pixel unit, imaged through the lens, projects into the human eye pupil. For the pupil to view an image near infinity with consistent brightness across the pixels spliced from adjacent lenses, formula (2) must be satisfied: the pupil diameter must be large enough that all the spots exiting from adjacent lenses land within the pupil. When D = 1.5 mm and t = 2 mm, emin is 4.5 mm. That is, when the virtual reality glasses are worn, a complete and clear image can be viewed as long as the pupil diameter exceeds 4.5 mm. A pupil diameter of 4 mm to 6 mm is typical under normal illumination, and the human eye pupil constricts below 4 mm only in an extremely bright environment. In general, because only the display chip of the virtual reality glasses provides light and no external light reaches the eye, the region around the pupil is dark and the pupil diameter is usually greater than 4 mm. When the displayed picture is bright enough to constrict the pupil, the embodiment of the invention can reduce the overall display brightness of the display chip, so that the pupil does not constrict excessively due to an over-bright picture and no part of the displayed picture viewed by the pupil is lost.
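The inline relation (formula (2)) did not survive reproduction in this text. The sketch below shows one reading that is consistent with the quoted numbers; both the relation emin ≥ t + D + 2Δ and the tracking-error value Δ = 0.5 mm are assumptions for illustration, not the relation actually claimed:

```python
# Assumption: formula (2) is missing from the text. One reading consistent
# with the quoted numbers (D = 1.5 mm, t = 2 mm, emin = 4.5 mm) is
#     emin >= t + D + 2 * delta,
# i.e. the pupil must span two adjacent lens spots (centers t apart, each of
# diameter ~D) plus a pupil-tracking margin delta on each side.
t = 2.0       # mm, center-to-center spacing of adjacent lenses (from the text)
D = 1.5       # mm, entrance pupil diameter of the lens (from the text)
delta = 0.5   # mm, ASSUMED pupil-position tracking error
emin = t + D + 2 * delta
print(f"emin = {emin} mm")   # 4.5 mm, matching the value stated in the text
```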
Fig. 10 is a schematic diagram of an optical path of virtual reality glasses according to an embodiment of the present invention. Referring to fig. 10, the image displayed by the display chip 10 includes a line segment a1-b1 and a line segment c1-d1, and the lens array 20 includes a lens A, a lens B, and a lens C. The size of the human eye pupil 30 changes with the ambient brightness: when the ambient light is bright, the diameter of the pupil 30 decreases; when it is dim, the diameter increases. The minimum diameter of the pupil 30 is denoted emin, and the maximum diameter is denoted emax. The object distance for imaging is u, i.e. the separation between the display chip 10 and the lens array 20 is u. The distance from the first lens 21 (i.e. the lens array 20) to the pupil 30 is L, and f is the focal length of the lens; specifically, when the first lens 21 and the second lens 22 have the same focal length, f is the focal length of either. The spacing between any two of lens A, lens B, and lens C is 2 mm. Each pixel unit on the display chip 10 is observed by the pupil 30 through the image formed by only one lens, which constrains the divergence angle of the pixel units on the display chip 10. Illustratively, the light beam exiting from point b1 through lens B can enter the pupil 30, and the light beam exiting from point c1 through lens C can also enter the pupil 30. Since the virtual image of the display chip 10 formed through the lens array 20 is distant, point c1 and point b1 can be regarded as displayed at the same point on the image plane.
When the displayed image is darker, the diameter of the human eye pupil 30 becomes larger; however, the light beam emitted from point b1 toward lens C is cut off and does not enter the pupil 30, thereby avoiding display crosstalk.
Optionally, the focal length f of the lens satisfies a relation in which D is the entrance pupil diameter of the lens and Δ is the tracking error of the human eye pupil position.
Illustratively, when L = 17 mm, D = 1.5 mm, emin = 4.5 mm, and emax = 8 mm, the relation gives f < 13.8 mm. As can be seen from formula (1), the larger the focal length f, the smaller the angular resolution β and the higher the resolution. As an example, a lens with a focal length f of 10 mm and an entrance pupil diameter D of 1.5 mm can satisfy the display requirement. With L = 17 mm, the lens array 20 sits close to the human eye pupil 30, consistent with how glasses are conventionally worn.
Fig. 11 is a side view of another lens array according to an embodiment of the present invention, and referring to fig. 11, the first lens 21 and the second lens 22 have different focal lengths. The first lens 21 and the second lens 22 are mounted on the same plane. In the embodiment of the invention, the first lens 21 and the second lens 22 with different focal lengths are adopted, the first lens 21 and the second lens 22 are arranged on the same plane, and two different display layers are respectively formed after the light emitted by the display chip 10 passes through the first lens 21 and the second lens 22.
In another embodiment, the first lens 21 and the second lens 22 have different focal lengths, and the first lens 21 and the second lens 22 are mounted on different planes, and the light emitted by the display chip 10 passes through the first lens 21 and the second lens 22 to form two different display layers respectively.
In the above embodiments, different display layers are formed statically, either by mounting lenses of the same focal length on different planes or by mounting lenses of different focal lengths on the same plane. In some subsequent embodiments, the distance between the display chip 10 and the lens array 20 is changed dynamically, so that one original display layer is displayed as multiple display layers at different time points, or the original two display layers are each displayed as multiple display layers at different time points.
Fig. 12 is a schematic view of a first state of another pair of virtual reality glasses according to an embodiment of the present invention, fig. 13 is a schematic view of a second state of the pair of virtual reality glasses shown in fig. 12, and referring to fig. 12 to fig. 13, the pair of virtual reality glasses further includes a driving module (not shown in fig. 12 and fig. 13) for driving the display chip 10 and/or the lens array 20 to move along the axial direction L1, and driving the display chip 10 and/or the lens array 20 to vibrate (e.g., move) along the vertical axis direction L2. The axial direction L1 is perpendicular to the plane of the display chip 10, and the vertical axis direction L2 is parallel to the plane of the display chip 10. The axial direction L1 and the vertical axis direction L2 are mutually perpendicular.
Fig. 14 is a schematic diagram of another display effect provided by the embodiment of the present invention, and referring to fig. 12-14, the driving module is used to drive the lens array 20 to move. As shown in fig. 12, in the first state of the virtual reality glasses, a first distance is provided between the lens array 20 and the display chip 10. After the driving module drives the lens array 20 to move along the axial direction L1, as shown in fig. 13, the virtual reality glasses are in the second state, and a second distance is provided between the lens array 20 and the display chip 10. The dashed box in fig. 13 illustrates the position before the lens array 20 moves in the axial direction L1. Wherein the first distance is not equal to the second distance. Thus, by changing the distance between the display chip 10 and the lens array 20, the original display layer is respectively displayed as two display layers at different time points; the original two display layers may be displayed as four display layers at different time points.
Illustratively, when the lens array 20 is configured with lenses of the same focal length on different planes, or with lenses of different focal lengths on the same plane, four display layers can be formed by changing the distance between the display chip 10 and the lens array 20, i.e., the first display layer 31, the second display layer 32, the third display layer 33, and the fourth display layer 34 shown in fig. 14. The second display layer 32 is located between the first display layer 31 and the third display layer 33, and the third display layer 33 is located between the second display layer 32 and the fourth display layer 34.
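The layer-multiplying effect of moving the lens array can be sketched with the thin-lens relation: small changes of the chip-to-lens spacing u (with u just inside the focal length) move the virtual image plane by large amounts. The object distances below are illustrative assumptions; only the focal length f = 10 mm comes from the text:

```python
# Thin-lens sketch (illustrative assumptions, not the patent's exact numbers):
# a display chip placed just inside the focal length acts as in a magnifier,
# and the virtual image sits at distance v = u*f/(f - u) from the lens.
def virtual_image_distance(f_mm, u_mm):
    """Magnitude of the virtual-image distance for an object inside the
    focal length (u < f): v = u*f / (f - u)."""
    return u_mm * f_mm / (f_mm - u_mm)

f = 10.0  # mm, example focal length from the text
v_near = virtual_image_distance(f, 9.6)    # u = 9.6 mm  -> v ~ 240 mm
v_far = virtual_image_distance(f, 9.95)    # u = 9.95 mm -> v ~ 1990 mm
assert v_far > v_near  # a ~0.35 mm axial shift moves the display layer by ~1.75 m
```

This is why a small axial vibration of the lens array suffices to time-multiplex several display layers at very different apparent depths.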
In one embodiment, the lens array 20 vibrates along the vertical axis while it moves axially. Vertical-axis vibration is vibration along the vertical-axis direction L2, and axial vibration is vibration along the axial direction L1. Owing to the vertical-axis vibration, after a pixel unit on the display chip 10 is imaged by a lens of matching polarization state, its image point is swept by the vibration and scanned into a linear array of pixels on the image plane. When vertical-axis vibration is superimposed on axial vibration, one pixel unit on the display chip 10 can be imaged onto two surfaces through the lens, forming two linear pixel arrays. During vertical-axis vibration, the vibration direction of the lens array 20 makes a small angle with the arrangement direction of the pixel units, i.e., it intersects the first direction, so the scan line imaged by a pixel unit consists of two oblique lines. The purpose of the vibration of the lens array 20 is twofold: axial vibration forms two imaging display surfaces, and vertical-axis vibration produces a display effect on those two layers far exceeding the native resolution of the display chip. In short, vertical-axis vibration improves the display resolution.
Illustratively, when the angle between the second direction and the fourth direction is 7° (tan 7° ≈ 1/8), vertical-axis vibration increases the lateral resolution of one pixel unit to 8 times the original.
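The quoted approximation and the resulting multiplier can be checked numerically:

```python
import math

# Check the approximation tan(7 deg) ~ 1/8 used for the tilted-scan geometry.
ratio = math.tan(math.radians(7))
print(f"tan(7 deg) = {ratio:.4f}")   # ~0.1228, close to 1/8 = 0.125
multiplier = round(1 / ratio)        # lateral sampling gain from the tilt
print(multiplier)                    # 8, the factor quoted in the text
```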
In another embodiment, there may be only vertical-axis vibration and no axial vibration; that is, the resolution is increased only by vertical-axis vibration, and the number of display layers is not increased by axial vibration. This applies when the display layers are increased statically, either by giving the first lens 21 and the second lens 22 the same focal length and placing them on different planes, or by giving them different focal lengths and placing them on the same plane; or in scenes where higher resolution is needed but no additional display layer is required.
Optionally, the vertical-axis direction L2 is parallel to the second direction. The first direction makes a small included angle with the third direction, and the second direction makes a small included angle with the fourth direction. The vertical-axis direction L2 is parallel to the column direction of the pixel units in the display chip 10.
The embodiment of the invention further introduces the imaging modes of the virtual reality glasses. In one embodiment, the virtual reality glasses form at least two display layers and display a picture on them. The display resolution of each display layer approaches or reaches retinal-quality resolution, the display depth of field is increased, and the visual fatigue caused by the vergence-accommodation conflict is relieved.
Referring to fig. 7, the virtual image includes at least two display layers disposed in sequence. The virtual reality glasses further include a sensor for detecting the position of the lens array 20 and feeding back to the controller in real time, and a controller (not shown in fig. 7) for controlling the time point of lighting the display chip 10 according to the position of the lens array 20 so that the human eye can see the complete picture displayed on each display layer.
Illustratively, referring to fig. 7, the virtual image includes a first display layer 31 and a second display layer 32. When the lens array 20 is in the first position, the virtual reality glasses form the first display layer 31, and the controller lights up the pixel unit for realizing the first display layer 31 in the display chip 10. When the lens array 20 is in the second position, the virtual reality glasses form the second display layer 32, and the controller illuminates the pixel unit in the display chip 10 for implementing the second display layer 32.
Illustratively, referring to fig. 14, the virtual image includes a first display layer 31, a second display layer 32, a third display layer 33, and a fourth display layer 34. The distance between the first display layer 31 and the human eye pupil 30 is P1, between the second display layer 32 and the pupil 30 is P2, between the third display layer 33 and the pupil 30 is P3, and between the fourth display layer 34 and the pupil 30 is P4. The value of P1 is determined by the distance of distinct vision of the human eye, the value of P4 by the comfort of viewing distant images, and the values of P2 and P3 by the display layers and the vibration distance of the lens. The fourth display layer 34 is farthest from the lens array 20, and its depth of field extends toward infinity.
Illustratively, P1 = 0.25 m, P2 = 0.36 m, P3 = 0.69 m, and P4 = 4 m.
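It is worth noting (an observation, not a statement from the disclosure) that these example distances are roughly evenly spaced in diopters, the reciprocal-distance scale on which the eye's accommodation operates:

```python
# The example layer distances P1..P4, converted to diopters (1/distance in m).
distances_m = [0.25, 0.36, 0.69, 4.0]        # P1..P4 from the text
diopters = [1 / d for d in distances_m]      # ~[4.00, 2.78, 1.45, 0.25] D
steps = [diopters[i] - diopters[i + 1] for i in range(3)]
print([f"{s:.2f}" for s in steps])           # ~['1.22', '1.33', '1.20'] D per step
```

Near-uniform dioptric spacing spreads the layers evenly over the eye's accommodation range, which is consistent with the text's goal of visually comfortable depth placement.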
When the human eye pupil 30 views the image of the display chip 10 through the lens array 20, an appropriate display area of the display chip 10 is selected according to the relative positions of the lens array 20 and the pupil 30, so that after each area of the display chip 10 is imaged through its lens, the image areas (i.e. the display layer) splice completely into one large image area. Looking through the lens array 20, the pupil 30 sees only the images of the areas of the display chip 10 formed through the lenses; it cannot see the gaps between the lenses, because the lenses are small and closer to the eye than the distance of distinct vision.
In another embodiment, the virtual reality glasses may form at least two display layers, and make at least two display layers in the at least two display layers generate visual fusion, and the picture is displayed on the virtual display layers after the visual fusion.
Fig. 15 is a schematic view of another display effect provided by an embodiment of the present invention. Referring to fig. 15, the virtual image includes at least two display layers arranged in sequence. The virtual reality glasses further include a human eye tracking device and a controller (not shown in fig. 15), wherein the human eye tracking device is used for tracking and determining the position of the human eye pupil 30, so that a clear image can be observed over the entire eye movement range. The controller adjusts the display brightness of each display layer by adjusting the brightness of the pixel units, so that the image formed by visual fusion of the display layers appears at a distance that conforms to the visual comfort characteristics of the human eye. The eye movement range refers to the range of movement of the human eye pupil, which can rotate up, down, left, right, and in other directions. The virtual reality glasses provided by the invention give the human eye a large eye movement range (i.e. eye movement distance), so that a clear image can be observed anywhere within it.
Illustratively, referring to fig. 15, the virtual image includes a first display layer 31 and a second display layer 32. When the human eye pupil 30 views the same display picture, shown with different brightness on the first display layer 31 and the second display layer 32, human vision fuses the two pictures into one layer (the first virtual display layer 351). The first virtual display layer 351 lies between the first display layer 31 and the second display layer 32; its position is determined by the brightness of the pictures on the two layers, and it lies closer to the brighter display layer.
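The position of the fused layer can be sketched numerically. The source states only that the fused layer lies between the two real layers and closer to the brighter one; the linear luminance weighting below is a hypothetical model chosen to satisfy exactly those two properties, not a formula given in the text.

```python
def fused_depth(d1_mm, d2_mm, lum1, lum2):
    """Estimate the perceived depth of a virtual display layer fused from
    two real layers at depths d1_mm and d2_mm with luminances lum1, lum2.
    Linear luminance weighting is an assumption: it places the result
    between the layers, closer to the brighter one, as the text requires."""
    total = lum1 + lum2
    if total == 0:
        raise ValueError("at least one layer must be lit")
    return (lum1 * d1_mm + lum2 * d2_mm) / total

# Equal brightness puts the fused layer midway between the real layers:
print(fused_depth(500.0, 1000.0, 1.0, 1.0))  # 750.0
# Making the nearer layer brighter pulls the fused layer toward it:
print(fused_depth(500.0, 1000.0, 3.0, 1.0))  # 625.0
```

Varying the two luminances therefore sweeps the fused layer continuously between the two real layers, which is the mechanism the controller uses to place the picture at a visually comfortable distance.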
The exit pupil and imaging distance of a conventional VR optical system determine the divergence angle of the light emitted by the pixels; when the image is formed far away, the divergence angle is very small, which greatly limits the eye movement range of the human eye, and when the pupil moves outside the eye movement range of the optical system, the displayed image becomes unclear or partially missing. In the embodiment of the invention, the virtual reality glasses further comprise a human eye tracking device and a controller, wherein the human eye tracking device tracks and determines the position of the human eye pupil. The controller projects images toward the human eye pupil according to the pupil position, so that the virtual reality glasses have a dynamic exit pupil and a clear image can be observed anywhere within the eye movement range. That is, by dynamically tracking the exit pupil, a large eye movement range is supported even though the divergence angle of the beam from a pixel unit after passing through the lens is small.
Fig. 16 is a schematic view of another display effect provided by an embodiment of the present invention. Referring to fig. 16, the virtual image includes a first display layer 31, a second display layer 32, a third display layer 33, and a fourth display layer 34. According to the position of the human eye pupil 30 determined by the eye tracking device, when the lens array 20 vibrates to a specific position, the light-emitting brightness of the pixel units in the display chip 10 is determined, and the brightness of at least one of the first display layer 31, the second display layer 32, the third display layer 33, and the fourth display layer 34 is adjusted so that at least two of them form a virtual display layer (for example, the second display layer 32 and the third display layer 33 form a second virtual display layer 352). By adjusting the display brightness of adjacent display layers, the embodiment of the invention presents the visually fused picture at a distance that matches the visual comfort characteristics of the human eye; combined with the depth of field of the lenses and the accommodation ability of the eye, a clear picture can be viewed comfortably both at the distance of distinct vision and at infinity.
Fig. 17 is a schematic diagram of lens rotation according to an embodiment of the present invention. Referring to fig. 17, the lens array further includes a lens array support 23, and each lens (the first lens 21 or the second lens 22) is rotatably fixed on the lens array support 23. The driving module includes a telescopic device 51 for driving the lenses to rotate; each lens independently swings, deflecting through a changing included angle relative to the optical axis. The swing amplitude may be less than ±10°, and the swing plane is parallel to the line connecting two adjacent lenses of orthogonal polarization. Optically, this swing is equivalent to translational vibration along the vertical axis. The embodiment of the invention thus realizes swinging of each lens in the vertical-axis direction, and thereby swinging of the lens array along the vertical axis.
For example, the telescopic device 51 may be an electrically driven actuator such as a piezoelectric ceramic or a shape-memory wire. When the telescopic device 51 drives a lens through a slight rotation, the image that the display chip forms through the lens also rotates, displacing the displayed pixels in the image plane. Because the rotation angle is small and the lens has a certain depth of field, the original image plane and the rotated image plane can be considered to lie in the same plane; only the vertical position of the pixel units changes.
Assuming the rotation angle of the lens is α and the distance between the first lens and the human eye pupil is L, the displacement h of a pixel unit satisfies: h = L·sin(α).
In one embodiment, a plurality of lenses may share one telescopic device 51; in another embodiment, each lens may be provided with its own telescopic device 51.
Fig. 18 is a schematic diagram of an optical path of another pair of virtual reality glasses according to an embodiment of the present invention. Referring to fig. 18, only the ea region on the display chip 10 is imaged by lens a and seen by the human eye pupil 30, and only the eb region is imaged by lens b and seen by the human eye pupil 30. When the pupil 30 is small, the ea and eb regions do not overlap, so pixels in the eb region are not imaged to the pupil 30 through lens a, and pixels in the ea region are not imaged to the pupil 30 through lens b. However, when the ambient light is dim, the pupil 30 dilates (the maximum pupil diameter can reach 8 mm) and the ea and eb regions enlarge accordingly, so there is a risk that the ea and eb regions overlap.
Fig. 19 is a schematic view of the effective area of the display chip seen by the human eye through a single lens under different ambient brightness. Referring to fig. 19, the k1 region is the effective imaging display area on the display chip 10 corresponding to the minimum pupil size emin, and the k2 region is the effective display area corresponding to the maximum pupil size emax. To ensure that the pixel units corresponding to each lens display independently, without affecting the imaging of adjacent lenses, only the k1 region can be used for display. In that case, the pixel units in the region between k1 and k2 are wasted, reducing the pixel unit utilization of the display chip 10.
Alternatively, the first lens 21 and the second lens 22 have the same focal length, and the first lens 21 and the second lens 22 are mounted on the same plane. Because adjacent pixel units have different polarization states, and adjacent lenses likewise have different polarization states, a light beam emitted by a pixel unit is imaged only through a lens whose polarization state matches its own. The pixel units observed by the human eye pupil through lenses of different polarization states therefore do not interfere with each other, and neither overlapping display areas nor mutual imaging interference can occur. Since adjacent lenses have different polarization states, the display area of the display chip can be enlarged to the k3 region in the figure; the refresh frequency of the display chip 10 and the light-emitting area of its pixel units are unchanged, and the display resolution is improved.
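The arrangement described above (one second lens between two adjacent first lenses along both the third and fourth directions, with orthogonal polarization states) is a checkerboard. A minimal sketch, with the two orthogonal states labeled 'H' and 'V' purely for illustration:

```python
def lens_polarization(row, col):
    """Polarization state of the lens at (row, col) in the array.
    Adjacent lenses along both directions are orthogonal, which yields
    a checkerboard of two states; the labels 'H'/'V' are illustrative
    stand-ins for the two orthogonal states, not from the source."""
    return "H" if (row + col) % 2 == 0 else "V"

# A 4x4 patch of the array: no two edge-adjacent lenses share a state.
for r in range(4):
    print(" ".join(lens_polarization(r, c) for c in range(4)))
```

A pixel unit's light passes only through a lens of matching state, so each lens's effective display region can grow into its neighbors' k2 regions (the k3 region in the figure) without crosstalk.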
Fig. 20 is a schematic structural diagram of another pair of virtual reality glasses according to an embodiment of the present invention. Referring to fig. 20, the virtual reality glasses include a plurality of display modules, each comprising a display chip and a lens array disposed parallel to each other. The display modules are a first module 1001, a second module 1002, and a third module 1003; each includes a display chip 10 and a lens array 20. The display modules are tiled; that is, the display chips 10 are tiled and the lens arrays 20 are tiled. Two adjacent display chips form a non-zero included angle, and because the display chip and lens array within each module are parallel, two adjacent lens arrays also form a non-zero included angle. The tiling of the display modules thus enlarges the display field of view.
Illustratively, the angle between the first module 1001 and the second module 1002 is θ, with 20° ≤ θ ≤ 30°. The angle between the second module 1002 and the third module 1003 may also be θ; that is, the angle between the first module 1001 and the second module 1002 is equal to the angle between the second module 1002 and the third module 1003.
Fig. 21 is a schematic structural diagram of another pair of virtual reality glasses according to an embodiment of the present invention. Referring to fig. 21, the virtual reality glasses include a fixing and pupil detection device 40 and a driving and detection device 50; the fixing and pupil detection device 40 includes the eye tracking device, and the driving and detection device 50 includes the driving module, a sensor, and the controller. The virtual reality glasses further include a mechanical adjustment device, which may be integrated in the fixing and pupil detection device 40, in the driving and detection device 50, or provided separately. The mechanical adjustment device adjusts the initial position of the lens array 20 and/or the display chip 10 so as to suit eyes of users with different eyesight.
As an example, an embodiment of the invention also provides design parameters of the virtual reality glasses: the lens pitch t is 2 mm, the entrance pupil diameter of each lens is 1.5 mm, L is 17 mm, and f = 10 mm. According to the imaging formula, the first display layer 31, the second display layer 32, the third display layer 33, and the fourth display layer 34 may be set as shown in the following table:
Here P1 and P2 are the two display layers formed by axial-vibration imaging of the lenses in one array surface, and P3 and P4 are the two display layers formed by axial-vibration imaging of the lenses in the other array surface; the distance between the two lens array surfaces is 243 μm, and the axial vibration displacement of the lenses is A1 = 117 μm. Because adjacent lenses have different polarization properties, the vertical-axis component of the vibration must reach at least the position of the neighboring lens of the same polarization; the resolution of the display chip 10 is determined by the size of the light-emitting point, the mounting direction, and the included angle of the long vibration axis, and the polarization arrangement makes the period span larger, so the effective vertical-axis amplitude A2 of the lens array exceeds 2 mm. In summary, the two-dimensional vibration has a long-axis (vertical) to short-axis (axial) amplitude ratio k = A2/A1 > 2000/117 ≈ 17; the axial displacement of a lens is small relative to its vertical-axis displacement and is almost negligible, so the axial vibration of the lens array can be disregarded.
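The "imaging formula" referenced above is presumably the thin-lens equation; with the display chip placed inside the focal length (object distance u < f), the lens forms a magnified virtual image at distance v = u·f/(f − u) behind the chip. A sketch under that assumption, with f = 10 mm from the design parameters (the specific u values are illustrative, not taken from the omitted table):

```python
def virtual_image_distance_mm(u_mm, f_mm=10.0):
    """Distance of the virtual image from the lens for an object inside
    the focal length (u < f), from the thin-lens equation 1/v = 1/u - 1/f
    rearranged as v = u*f / (f - u). Assumes the document's 'imaging
    formula' is the thin-lens equation; u values are illustrative."""
    if u_mm >= f_mm:
        raise ValueError("object must lie inside the focal length for a virtual image")
    return u_mm * f_mm / (f_mm - u_mm)

# Moving the chip slightly closer to the focal plane pushes the virtual
# image much farther away, which is how small axial vibrations of the
# lens can generate display layers at very different depths:
for u in (9.0, 9.5, 9.9):
    print(f"u = {u} mm -> v = {virtual_image_distance_mm(u):.0f} mm")
```

Note the sensitivity near u = f: a 0.4 mm change in lens-to-chip spacing moves the virtual image by hundreds of millimetres, consistent with the ~117 μm axial vibration producing well-separated display layers.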
As an example, another set of design parameters of the virtual reality glasses is also provided. Because the virtual reality glasses can adjust the distance between the display chip 10 and the lens array 20, when the user's eyes are myopic, the distance between the lens array 20 and the display chip 10 can be adjusted so that the imaging plane lies closer to the eye than it would for an emmetropic eye; a myopic viewer can then see the same display content as a normally sighted viewer simply by wearing the VR glasses. When the eye is myopic by 500 degrees, the picture of the first display layer is moved forward, and the distance between the lens array 20 and the display chip 10 is adjusted by 0.44 mm compared with a non-myopic wearer; this adjustment can be achieved by the mechanical adjustment device.
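The order of magnitude of this adjustment can be checked with the thin-lens equation (assumed here, as above). A 500-degree (−5 D) myopic eye has its far point at 1/5 m = 200 mm; with the pupil 17 mm behind the lens, the image must sit roughly 183 mm in front of the lens instead of at infinity. The sketch below is illustrative only: the document's exact 0.44 mm figure depends on the display-layer distances given in the table, which is not reproduced here.

```python
def chip_shift_mm(v0_mm, v1_mm, f_mm=10.0):
    """Change in lens-to-chip distance needed to move the virtual image
    from distance v0 to v1 (both measured from the lens), using the
    object distance u = v*f/(v + f) from the thin-lens equation.
    Illustrative assumption; not the document's exact derivation."""
    u = lambda v: v * f_mm / (v + f_mm)
    return u(v0_mm) - u(v1_mm)

# Shift required to refocus from (effectively) infinity to 183 mm,
# i.e. the far point of a -5 D eye measured from the lens:
print(f"{chip_shift_mm(1e9, 183.0):.2f} mm")
```

The result is about half a millimetre, the same order as the 0.44 mm quoted in the text, supporting the claim that the mechanical adjustment device only needs sub-millimetre travel to accommodate a 500-degree myope.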
The lens pitch t is 2 mm, the entrance pupil diameter of each lens is 1.5 mm, L is 17 mm, and f = 10 mm. When the human eye is myopic by 500 degrees, the first display layer 31, the second display layer 32, the third display layer 33, and the fourth display layer 34 may be set as shown in the following table:
Note that the above is only a preferred embodiment of the present invention and the technical principle applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, combinations, and substitutions can be made by those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.