CN115686219A - Image rendering method, head-mounted display device, and readable storage medium - Google Patents
- Publication number
- CN115686219A CN115686219A CN202211431175.9A CN202211431175A CN115686219A CN 115686219 A CN115686219 A CN 115686219A CN 202211431175 A CN202211431175 A CN 202211431175A CN 115686219 A CN115686219 A CN 115686219A
- Authority
- CN
- China
- Prior art keywords
- image
- eyeball
- user
- rendering
- predicted
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Processing Or Creating Images (AREA)
Abstract
The application discloses an image rendering method, a head-mounted display device, and a readable storage medium. The image rendering method includes: predicting the image frame edge of a prediction window image from the head motion posture information, the frame-rate refresh interval, and the maximum observation angle, and executing a rendering thread to render the pixels within that frame edge; if those pixels are not fully rendered by a preset time node before the next clock cycle arrives, executing an asynchronous time-warping thread to apply direction warping to the most recently rendered current window image, yielding a transition window image; and displaying the transition window image until rendering within the frame edge is finished, then displaying the rendered prediction window image. The method can reduce the dizziness caused by time delay during image rendering.
Description
Technical Field
The present application relates to the field of wearable device technologies, and in particular, to an image rendering method, a head-mounted display device, and a readable storage medium.
Background
As an emerging technology, extended reality (XR) is gradually entering public view and is being applied and popularized across industries. Extended reality specifically includes virtual reality (VR), augmented reality (AR), mixed reality (MR), and the like.
With the development of extended reality technology, resolution and refresh rate are further improved, which means that the larger the amount of signal output per frame during image transmission, the higher the requirement on transmission bandwidth; this greatly challenges the rendering capability of the system and the transmission capability from the system side to the display side. At present, in ultra-high-resolution extended reality applications, the rendered virtual image is refreshed onto a display device by image rendering technology, and the user experiences the virtual reality/augmented reality effect through a head-mounted display device.
Since the rendering process takes time, there is a time delay between the actual and perceived results. For example, during rendering, the user's head or the head-mounted device worn by the user may move, so a certain delay exists between the user's head posture information and the image data output by the head-mounted device; if this delay is too large, dizziness may result.
Disclosure of Invention
The application mainly aims to provide an image rendering method, a head-mounted display device, and a readable storage medium, so as to solve the technical problem that time delay easily occurs while an extended reality device renders an image, causing dizziness.
In order to achieve the above object, the present application provides an image rendering method, which is applied to a head-mounted display device, and the method includes:
dynamically detecting head motion attitude information of a user, acquiring a pre-stored frame rate refreshing time interval and a maximum observation visual angle, and predicting to obtain an image frame edge of a prediction window image of the next time step according to the head motion attitude information, the frame rate refreshing time interval and the maximum observation visual angle;
executing a rendering thread to render pixels within the image frame edge;
if the pixels in the edge of the image frame are not rendered at a preset time node before the next clock cycle is reached, executing an asynchronous time warping thread to perform direction warping processing on the current window image which is rendered at the last time to obtain a transition window image;
and displaying the transition window image until the pixel rendering in the edge of the image frame is finished, and displaying the rendered prediction window image.
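The per-refresh decision described in the steps above can be sketched as a small function. The `Frame` type, function names, and the stub warp are illustrative stand-ins, not part of the patent:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    frame_id: int
    warped: bool = False

def direction_warp(frame: Frame, head_delta_deg: float) -> Frame:
    # Stand-in for the asynchronous time-warp step: a real implementation
    # reprojects the last rendered image toward the new head orientation.
    return Frame(frame.frame_id, warped=True)

def select_display_frame(render_finished: bool, predicted: Frame,
                         current: Frame, head_delta_deg: float) -> Frame:
    # If the prediction window image finished rendering by the preset time
    # node, display it; otherwise display a transition window image obtained
    # by direction-warping the most recently rendered current window image.
    if render_finished:
        return predicted
    return direction_warp(current, head_delta_deg)
```

Whether `render_finished` is true would be checked at the preset time node before the next clock cycle, as the steps above describe.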
Optionally, after the step of executing a rendering thread to render the pixels in the edge of the image frame, the method further includes:
and if the pixels in the edge of the image frame are not rendered in the next clock cycle before the preset time node arrives, displaying the rendered prediction window image.
Optionally, the step of executing a rendering thread to render the prediction window image includes:
detecting a predicted eyeball observation point of the user in the prediction window image, and dividing the prediction window image into at least two partitioned images from near to far according to a preset partition rule, with the predicted eyeball observation point as reference;
rendering the partitioned images according to the rendering resolution corresponding to each partitioned image, wherein the rendering resolutions of the partitioned images decrease sequentially from near to far.
Optionally, the step of detecting the predicted eyeball observation point of the user in the prediction window image comprises:
acquiring a current eyeball image of a user, determining an eyeball model with the highest matching degree with the current eyeball image, and taking the eyeball model with the highest matching degree as a current actual eyeball model;
and inquiring to obtain an eyeball observation point mapped by the current actual eyeball model from a preset eyeball model mapping database, and taking the mapped eyeball observation point as a predicted eyeball observation point of the user in the prediction window image.
Optionally, the detecting a predicted eyeball observation point of the user in the prediction window image comprises:
acquiring a current eyeball image of a user, and carrying out gray processing on the current eyeball image;
determining a pupil area image according to the current eyeball image after the graying processing, and carrying out binarization processing on the pupil area image;
performing edge detection on the pupil area image after binarization processing to obtain pupil edge points, and performing ellipse fitting on the pupil edge points to obtain the current pupil center;
and determining a predicted eyeball observation point of the user in the predicted window image according to the current pupil center.
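A rough illustration of this pipeline is sketched below: binarize a grayscale eye image and locate the pupil. For brevity it replaces the Canny edge detection and ellipse fit with the centroid of the binarized pupil mask, and the threshold value is an assumption:

```python
import numpy as np

def estimate_pupil_center(eye_gray: np.ndarray, dark_threshold: int = 50):
    # Binarization: the pupil is the darkest region of the eye image.
    mask = eye_gray < dark_threshold
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # no sufficiently dark pixels found
    # The centroid of the mask approximates the center an ellipse fit
    # over the pupil edge points would produce.
    return float(xs.mean()), float(ys.mean())
```

A production implementation following the patent's steps would run edge detection on the binarized region and fit an ellipse to the resulting pupil edge points instead of taking a centroid.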
Optionally, the step of determining a predicted eyeball observation point of the user in the prediction window image according to the current pupil center includes:
inquiring to obtain a predicted eyeball observation point mapped by the current pupil center from a pre-calibrated pupil center mapping data table;
and taking the mapped eyeball observation points as predicted eyeball observation points of the user in the predicted window image.
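A minimal stand-in for such a pre-calibrated mapping table is a nearest-neighbour lookup over calibration samples; the table layout and contents below are invented for illustration:

```python
def predicted_gaze_point(pupil_center, mapping_table):
    # mapping_table: list of ((pupil_x, pupil_y), (gaze_x, gaze_y)) pairs
    # collected during calibration; the nearest pupil sample wins.
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    _, gaze = min(mapping_table, key=lambda entry: dist2(entry[0], pupil_center))
    return gaze
```

A production system would typically interpolate between calibration samples rather than snap to the nearest one.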
Optionally, the step of dividing the prediction window image into at least two partitioned images from near to far further includes:
and performing brightness display control on the partitioned images according to the regional backlight brightness corresponding to the partitioned images, wherein the regional backlight brightness of the partitioned images is gradually reduced from near to far.
Optionally, the step of dividing the prediction window image into at least two partitioned images from near to far according to a preset partitioning rule by using the predicted eyeball observation point as a reference includes:
determining a prediction watching area according to the prediction eyeball observation point;
dividing an intention observation image in the prediction gazing region in the prediction window image into a first partition image;
dividing an unintended observation image outside the prediction gazing area in the prediction window image into a second partition image;
the performing brightness display control on the partitioned image according to the area backlight brightness corresponding to the partitioned image comprises:
and performing brightness display control on the first subarea image by using first area backlight brightness, and performing brightness display control on the second subarea image by using second area backlight brightness, wherein the first area backlight brightness is greater than the second area backlight brightness.
The present application further provides a head mounted display device, the head mounted display device is a physical device, the head mounted display device includes: a memory, a processor and a program of the image rendering method stored on the memory and executable on the processor, which program, when executed by the processor, may implement the steps of the image rendering method as described above.
The present application also provides a readable storage medium, which is a computer readable storage medium, on which a program for implementing an image rendering method is stored, and the program for implementing the image rendering method is executed by a processor to implement the steps of the image rendering method as described above.
The present application also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the image rendering method as described above.
According to the method, head motion posture information of a user is dynamically detected, the image frame edge of the prediction window image for the next time step is obtained by prediction, and a rendering thread is executed to render the pixels within that frame edge. If those pixels are not fully rendered by a preset time node before the next clock cycle arrives, an asynchronous time-warping thread applies direction warping to the most recently rendered current window image to obtain a transition window image, which is displayed until rendering within the frame edge finishes, after which the rendered prediction window image is displayed. In this way, the delay introduced at the acquisition and processing stages of an image frame is reduced, the display judder caused by the current frame not finishing rendering in time is effectively prevented, and the effective frame rate of the displayed images is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and, together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly described below; other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
FIG. 1 is a schematic flowchart illustrating a first embodiment of an image rendering method according to the present application;
FIG. 2 is a flowchart illustrating a second embodiment of an image rendering method according to the present application;
FIG. 3 is a frame diagram illustrating an image frame edge of a predicted view window image according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating head pose information of a user wearing a head-mounted display device according to an embodiment of the present application;
FIG. 5 is a schematic view of a scene with an intended viewing angle for identifying a user according to an embodiment of the present application;
FIG. 6 is a schematic diagram illustrating a predicted gaze direction of a user in one embodiment of the present application;
fig. 7 is a schematic device structure diagram of a hardware operating environment related to a head-mounted display device in an embodiment of the present application.
The implementation of the objectives, functional features, and advantages of the present application will be further described with reference to the accompanying drawings.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below. It should be apparent that the described embodiments are only some embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In this embodiment, the head-mounted display device of the present application may be, for example, a mixed reality (MR) device (e.g., MR glasses or an MR helmet), an augmented reality (AR) device (e.g., AR glasses or an AR helmet), a virtual reality (VR) device (e.g., VR glasses or a VR helmet), an extended reality (XR) device (e.g., XR glasses or an XR helmet), or some combination thereof.
Example one
At present, in ultra-high-resolution extended reality applications, the rendered virtual image is refreshed onto a display device by image rendering technology, and the user experiences the virtual reality/augmented reality effect through a head-mounted display device. Since the rendering process takes time, there is a time delay between the actual and perceived results. For example, during rendering, the user's head or the head-mounted device worn by the user may move, so a certain delay exists between the user's head posture information and the image data output by the head-mounted device; if this delay is too large, dizziness may result.
Based on this, referring to fig. 1, fig. 1 is a schematic flowchart illustrating a first embodiment of an image rendering method according to the present application, where in this embodiment, the image rendering method is applied to a head-mounted display device, and the method includes:
step S10, dynamically detecting head motion attitude information of a user, acquiring a pre-stored frame rate refreshing time interval and a maximum observation visual angle, and predicting to obtain an image frame edge of a prediction window image of the next time step according to the head motion attitude information, the frame rate refreshing time interval and the maximum observation visual angle;
in this embodiment, the head-mounted display device is worn on the head of a user.
In the present embodiment, the head movement posture information may include a displacement value and an angle change value of the head, wherein the angle change value may include angle change values of a pitch angle (pitch), a yaw angle (yaw), and a roll angle (roll), which may be referred to fig. 4.
It is easy to understand that the head pose information (i.e., the current head pose) of the user may be dynamically detected by a camera, where the camera may be one or more of a TOF (Time of Flight) camera, an infrared camera, a millimeter-wave camera, and an ultrasonic camera. In another embodiment, the dynamic detection may be accomplished by other devices communicatively coupled to the head-mounted display device sending the head pose information to it in real time. For example, a camera installed in the activity space where the head-mounted display device is used tracks and locates the device (or the user's head), obtains the user's head pose information, and sends it to the head-mounted display device in real time, so that the device obtains the dynamically detected head pose information in real time.
In the present embodiment, as known to those skilled in the art, extended reality technology seeks to simulate the sensory changes of human eyes in the real world and to improve the user's immersion in extended reality content, so the visual field images the user can see differ under different head posture information. The current field-of-view environment image is the field image visible at the current head pose (different head posture information corresponds to different field-of-view environment images). That is, at the current head pose (i.e., a particular eye position), the maximum range of XR content images the user can see is the user's current field-of-view environment image. As those skilled in the art will readily understand, during content display the head pose information of the user may change in real time, and the head-mounted display device may acquire that information in real time to update the current field-of-view environment image.
In this embodiment, the frame rate refresh time interval refers to a frame rate refresh interval of a screen, and may represent a refresh frequency of the head-mounted display device. For example, the refresh frequency of the head-mounted display device may be 60Hz, 120Hz, etc., i.e., 60, 120 refreshes per second, etc., with the frame rate refresh time interval being 1/60 second, 1/120 second, etc. The maximum viewing angle is a Field of View (FOV) of a user corresponding to a maximum window image displayed by the head-mounted display device.
To aid understanding, consider an example. Take a monocular resolution of 1920 × 1080 (3840 × 1080 for both eyes). Suppose the detected head motion posture information A indicates that the user's head turns 180° to the right in about 0.4 seconds, i.e., an angular velocity of 450°/s. At a 60 fps refresh rate, the frame-rate refresh interval is 16.6 ms, so after a given frame of data is displayed the head can rotate 450°/s × 0.0166 s ≈ 7.47°. Taking a per-frame rotation amount of about 1.86° and a maximum observation angle of 38° as an example, the number of margin pixels is 1.86/38 × 1920 ≈ 94. It is therefore sufficient to reserve about 94 pixels around the normal camera margin; more pixels serve no purpose. In general, if the maximum angular velocity of head rotation is θ (at most about 450°/s), the rotation per frame is (θ × (1/FPS))°, and given the FPS, the field of view FOV, and the image width Width, the margin in pixels is Width × (θ × (1/FPS))/FOV. From this pixel count, the size of the image frame edge of the prediction window image at the next time step can be predicted. The predicted frame edge is illustrated in fig. 3, where "frame blank time" is the blanking time between frames, "frame length lines" is the number of lines per frame, and "effective pixel array" is the array of active pixels. The faster the frame rate, the smaller the required margin; because the frame-to-frame interval is small, the camera margin is configured according to the frame-rate prediction, which reduces redundant margin, sensor output, and power consumption.
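The margin formula in this example reduces to two one-liners (the function names are ours, not the patent's):

```python
def rotation_per_frame_deg(max_head_speed_deg_s: float, fps: float) -> float:
    # Degrees the head can rotate between two consecutive refreshes.
    return max_head_speed_deg_s / fps

def camera_margin_pixels(width_px: int, fov_deg: float,
                         rotation_deg: float) -> float:
    # Width * rotation / FOV: display pixels swept by that head rotation.
    return width_px * rotation_deg / fov_deg
```

With the passage's numbers, a 1.86° per-frame rotation over a 38° field of view on a 1920-pixel-wide image gives roughly 94 margin pixels, and 450°/s at 60 fps gives 7.5° of rotation per frame.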
Step S20, executing a rendering thread to render the pixels in the edge of the image frame;
step S30, if the pixels in the edge of the image frame are not rendered at the preset time node before the next clock cycle is reached, executing an asynchronous time warping thread to perform direction warping processing on the current window image which is rendered at the last time to obtain a transition window image;
it should be noted that the current window image refers to the XR content image that has been rendered the most recently and is visible to the user in the current head pose. Wherein the current head pose may include a spatial position and an angle of the current head, wherein the angle may include a pitch angle (pitch) rotated based on an X-axis, a yaw angle (yaw) rotated based on a Y-axis, and a roll angle (roll) rotated based on a Z-axis, as shown in fig. 4.
And S40, displaying the transition window image until the pixel rendering in the edge of the image frame is finished, and displaying the rendered prediction window image.
Illustratively, after the step of executing a rendering thread to render the pixels in the edge of the image frame in step S20, the method further includes:
and step A10, if the pixels in the edge of the image frame are not rendered completely at the preset time node before the next clock cycle, displaying the rendered prediction window image.
In this embodiment, to reduce the delay in rendering images of a displayed scene, some virtual reality devices employ a time warping (TW) technique. Time warping corrects an image frame: it addresses scene-rendering delay by warping (correcting) the rendered scene data according to the change in user pose that occurred after rendering. Since the time-warp step is performed closer to display time, the new display image obtained through it is closer to the image the user expects to see. Moreover, because time warping processes only a two-dimensional image, similar to an affine transformation in image processing, it does not add excessive system workload. However, the time-warp step and the rendering step are usually placed in the same thread, which makes the thread's processing time too long and hinders solving the image-delay problem.
To this end, embodiments of the present application propose an improved image rendering method that uses asynchronous time warping (ATW) to further reduce image delay. Asynchronous time warping improves on the technique above by placing rendering and time warping in two different threads, so that the rendering step and the time-warp step can run asynchronously, reducing the overall run time of the two processes. For example, when the virtual reality application cannot maintain a sufficient frame rate, the asynchronous time-warp thread can reprocess the previously rendered scene data according to the current user pose to generate an intermediate frame matching that pose, reducing picture judder and latency.
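The interplay between the render thread and the warp-on-miss fallback can be modeled deterministically, one tick per vsync. This model and its names are ours, intended only to show when warped transition frames appear:

```python
def simulate_atw(render_cost_frames, total_vsyncs):
    """Deterministic model: rendering frame i takes render_cost_frames[i]
    vsync periods. At each vsync the display shows the newly finished frame
    if rendering completed, otherwise a warped copy of the last finished
    frame (the transition image). Returns a list of (kind, frame_id)."""
    shown = []
    last_done = 0                       # id of the last fully rendered frame
    in_progress = 1                     # id of the frame being rendered
    remaining = render_cost_frames[0]   # vsync periods of work left
    for _ in range(total_vsyncs):
        remaining -= 1                  # one vsync period of rendering elapses
        if remaining <= 0:
            last_done = in_progress
            shown.append(("fresh", last_done))
            in_progress += 1
            if in_progress - 1 < len(render_cost_frames):
                remaining = render_cost_frames[in_progress - 1]
            else:
                remaining = float("inf")  # no more frames queued
        else:
            shown.append(("warped", last_done))
    return shown
```

For example, if frame 2 takes two vsync periods to render, the display fills the gap with a warped copy of frame 1 rather than stalling.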
It should be noted that, ideally, the rendering engine uses the freshest measured head pose information (such as orientation and position) just before sending content to the display. In practice, however, rendering takes time, producing a delay between reality and perception; the picture the user sees then judders, that is, the device cannot render the picture corresponding to the head motion synchronously, and when the picture judders people naturally feel dizzy. In this embodiment, head motion posture information of the user is dynamically detected, a pre-stored frame-rate refresh interval and maximum observation angle are acquired, the image frame edge of the prediction window image for the next time step is predicted from these, and a rendering thread is executed to render the pixels within that frame edge. If those pixels are not fully rendered by a preset time node before the next clock cycle arrives, an asynchronous time-warping thread applies direction warping to the most recently rendered current window image to obtain a transition window image, which is displayed until rendering within the frame edge finishes; the rendered prediction window image is then displayed. By combining position prediction with asynchronous time warping to synthesize and display a compensation frame (the transition window image), the effective frame rate of image frames is increased, the delay at each stage of acquiring and processing an image frame is reduced, and the judder and blur that occur when the current frame is not rendered in time are effectively prevented.
Furthermore, in ultra-high-resolution extended reality applications, the extended reality device is under heavy rendering pressure, which easily leaves the displayed picture with an insufficient frame rate and visible stutter, failing to meet the user's requirements for picture smoothness.
Based on this, referring to fig. 2, the step of executing a rendering thread to render the prediction window image includes:
s51, detecting a predicted eyeball observation point of a user in the predicted window image, and dividing the predicted window image into at least two partitioned images from near to far according to a preset partitioning rule by taking the predicted eyeball observation point as a reference;
in this embodiment, not all of the images of the region of the XR content image that the user can see in the current head pose (i.e., the current view image) are the regions of interest to the user, there are regions of interest to the user's eyes, and regions of no interest to the user's eyes. In the embodiment, the eyeball observation point of the user in the current window image is detected, so that the area close to the eyeball observation point in the current window image is determined to be the area concerned by eyes based on the eyeball observation point, and the area far away from the eyeball observation point in the current window image is determined to be the area not concerned by eyes. It is easy to understand that, in general, the closer to the eyeball observation point, the higher the attention of the user's eyes to the area, the more representative the area that the user intends to observe, and the farther from the eyeball observation point, the lower the attention of the user's eyes to the area, the more representative the area that the user does not intend to observe.
In some embodiments, an eye image of the user may be acquired by an eye detection device mounted on the head-mounted display device, and a calculation may be performed based on eye feature information extracted from the eye image to obtain the coordinates of the fixation point of the user's eyes on the display screen, thereby obtaining the eyeball observation point in the current window image. The eye detection device may be a Micro-Electro-Mechanical System (MEMS) including an infrared scanning mirror, an infrared light source, and an infrared receiver. Alternatively, the eye detection device may be a capacitive sensor disposed near the user's eye region, which detects eye movement from the capacitance between the eyeball and the capacitive plates of the sensor, determines the user's current eye position information, and then determines the user's eyeball observation point in the current window image from that information. The eye detection device may also be a myoelectric current detector connected to electrodes placed at the user's nose bridge, forehead, ears, and earlobes; the electrodes collect myoelectric signals from these locations, eye movement is detected from the pattern of the detected signals to determine the user's current eye position information, and the user's eyeball observation point in the current window image is then determined from that information.
And S52, rendering the partitioned images according to the rendering resolutions corresponding to the partitioned images, wherein the rendering resolutions of the partitioned images are sequentially decreased from near to far.
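As a minimal illustration of steps S51–S52, the near-to-far partitioning with decreasing rendering resolution could be sketched as follows. The ring radii, resolution scales, and function names here are illustrative assumptions, not values from this disclosure:

```python
import math

def partition_by_distance(width, height, gaze, radii, resolutions):
    """Assign each pixel of the prediction window image a rendering-resolution
    scale by its distance from the predicted eyeball observation point; nearer
    partitions get higher resolution (illustrative sketch only)."""
    assert len(resolutions) == len(radii) + 1  # one scale per ring, plus outermost
    partitions = {}
    for y in range(height):
        for x in range(width):
            d = math.hypot(x - gaze[0], y - gaze[1])
            # index of the first ring whose radius contains this pixel
            idx = next((i for i, r in enumerate(radii) if d <= r), len(radii))
            partitions[(x, y)] = resolutions[idx]
    return partitions

# three partitions: full, half, and quarter resolution, near to far
scales = partition_by_distance(8, 8, gaze=(4, 4), radii=[2, 5],
                               resolutions=[1.0, 0.5, 0.25])
```

A real renderer would rasterize each ring at its scale rather than tag pixels, but the near-to-far decreasing assignment is the same.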
In one embodiment, the head motion posture information of the user can be dynamically detected through an inertial sensor and/or a camera mounted on the head-mounted display device. In another embodiment, other devices in communication connection with the head-mounted display device may transmit the user's head motion posture information to it in real time. For example, a camera installed in the venue where the head-mounted display device is used tracks and positions the head-mounted display device (or the user's head) to obtain the user's head motion posture information and sends it to the head-mounted display device in real time, so that the device obtains the dynamically detected head motion posture information in real time.
It is known that users often make different head movements for different intended viewing angles, for example, turning the head to the right often represents that the user wants to see the right picture, and turning the head to the left often represents that the user wants to see the left picture.
It is readily understood that, in the largest-extent XR content image the user can see in the current head pose (i.e., the current field-of-view environment image), not all regions are of interest to the user: there are regions the user's eyes focus on and regions they do not. In general, the region corresponding to the movement trend of the head motion posture often represents the region the user intends to observe (a user who wants to see the right-side picture often turns the head to the right), while a region that does not match that movement trend often represents a region the user does not intend to observe.
Therefore, the embodiment determines, based on the head movement posture information, that the region of the prediction window image that matches the movement trend corresponding to the head movement posture is a region focused by eyes (i.e., a region corresponding to an intended observation angle), and determines that the region of the prediction window image that does not match the movement trend corresponding to the head movement posture is a region not focused by eyes (i.e., a region corresponding to an unintended observation angle).
When studying the viewing experience of a user of a head-mounted display device, it is found that rendering a predicted field-of-view environment in the head-mounted display device generally includes: moving the image data materials required for rendering the scene image, such as triangles and texture maps, from the CPU (Central Processing Unit) to the GPU (Graphics Processing Unit); rendering these materials through the GPU's rendering pipeline to obtain an initial image; then applying image post-processing techniques such as shading to the initial image; and finally obtaining an image that can be displayed to the user in the current augmented reality scene.
In conventional image rendering technology, the image displayed on the whole target screen is generally rendered at a high rendering quality to meet the user's viewing requirements. However, if the user's main eye attention area covers only part of the screen, the image displayed outside that area is also rendered at high quality in the prior art, which wastes rendering resources.
With the rise of head-mounted display devices, people increasingly use VR/AR products. These augmented reality mobile terminals need to perform a large amount of image rendering computation, so their power consumption is high and device battery life is significantly affected. If content that the user is not paying attention to can be reliably identified, that part of the rendering can be reduced. For example, if it is found that the user does not pay attention to area image A of the display screen, the quality of rendering parameters for area image A, such as dead-pixel repair, noise elimination, and color interpolation, can be reduced, or these rendering effects can even be cancelled for area image A, thereby lowering its rendering resolution, reducing image processing work, and achieving lower power consumption.
Therefore, in the technical scheme of this embodiment, the predicted eyeball observation point of the user in the predicted window image is detected; the predicted window image is divided, from near to far with the predicted eyeball observation point as a reference, into at least two partitioned images according to a preset partition rule; and each partitioned image is rendered at its corresponding rendering resolution, where the rendering resolutions of the partitioned images decrease from near to far. In this way, the rendering resolution is high in the eye-focused area and low in the non-focused (peripheral vision) area, so that, without affecting user experience, or even while improving it, GPU resources are saved, the image is rendered more reasonably, the waste of rendering resources caused by rendering the image outside the eye-focused area at high quality is avoided, redundant rendering is reduced as much as possible, the power consumption of GPU image rendering is lowered, and the pressure that high-resolution, content-rich images place on the GPU's rendering capability is relieved while viewing requirements are still met.
As an example, in step S51, the step of dividing the prediction window image into at least two divisional images according to a preset divisional rule from near to far with the predicted eyeball observation point as a reference comprises:
step B10, determining a prediction fixation area according to the prediction eyeball observation point;
in this embodiment, an eye image of the user may be acquired based on eye tracking technology; the user's pupil center and spot position information may be obtained from the eye image (a spot is the reflective bright point formed on the user's cornea by the screen of the head-mounted display device); the predicted eyeball observation point may be determined from the pupil center and spot position information; and the user's predicted gazing area on the predicted window image may then be determined from the predicted eyeball observation point, as shown in fig. 6. The terminal can calculate the predicted gazing area from the predicted eyeball observation point and the user's visual angle range. For example, a circle centered on the gaze point, with a preset radius derived from the visual angle range, may serve as the user's predicted gazing area. Similarly, a polygon of preset size centered on the gaze point may be used as the predicted gazing area. The shape of the predicted gazing area may be circular, rectangular, square, or polygonal.
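The circular form of the predicted gazing area described above amounts to a simple membership test around the predicted eyeball observation point. A sketch, with an assumed radius value standing in for the radius derived from the user's visual angle range:

```python
import math

def predicted_gaze_area(gaze_point, radius):
    """Return a membership test for a circular predicted gazing area
    centred on the predicted eyeball observation point. The radius is a
    preset value derived from the visual angle range (assumed here)."""
    gx, gy = gaze_point
    def contains(x, y):
        return math.hypot(x - gx, y - gy) <= radius
    return contains

in_area = predicted_gaze_area((100, 100), radius=40)
```

A rectangular or polygonal gazing area, as the text also allows, would substitute a bounding-box or point-in-polygon test for the distance check.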
It should be understood that the human eye perceives different areas of an image with different sharpness. Within the user's visible range (i.e., within the prediction window image), the image area on which the eyeball mainly focuses is imaged sensitively and clearly, while other image areas are imaged blurrily. The image corresponding to the predicted gazing area is the area the user's eyes mainly focus on, and the remaining part of the predicted window image is the area the user's eyes do not focus on.
Step B20, dividing the intention observation image in the prediction gazing region in the prediction window image into a first partition image;
in this embodiment, the first partition image is the main area viewed by the user and should be rendered at a higher rendering resolution. The adjustable rendering parameters include, but are not limited to, image color, resolution, pixels, lighting effects, and shadow effects.
Step B30, dividing the unintended observation image outside the prediction gazing area in the prediction window image into a second partition image;
in the step S52, rendering the partition image according to the rendering resolution corresponding to the partition image includes:
step B40, rendering the first partition image at a first rendering resolution and rendering the second partition image at a second rendering resolution, wherein the first rendering resolution is greater than the second rendering resolution.
In this embodiment, the first partition image may be rendered first and the second partition image rendered afterwards, or, where the hardware allows, the two may be rendered simultaneously.
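Step B40's two-resolution compositing can be sketched as follows with NumPy arrays, where a subsample-and-repeat pass stands in for a real low-resolution render of the second partition (the function name and factor are illustrative):

```python
import numpy as np

def render_two_partitions(frame, mask, factor=4):
    """Sketch of step B40: pixels inside `mask` (the first partition image)
    keep full resolution; pixels outside it are taken from a 1/factor
    resolution pass, emulated here by subsampling and nearest-neighbour
    upscaling (a stand-in for a real low-resolution render)."""
    low = frame[::factor, ::factor]                      # low-resolution pass
    upsampled = np.repeat(np.repeat(low, factor, 0), factor, 1)
    upsampled = upsampled[:frame.shape[0], :frame.shape[1]]
    return np.where(mask, frame, upsampled)              # composite by partition
```

In a real pipeline both partitions would be rendered by the GPU and blended at scan-out; the sketch only shows the first-resolution-greater-than-second relationship.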
In this embodiment, a predicted gazing area is determined according to the predicted eyeball observation point; the intended observation image inside the predicted gazing area of the predicted window image is divided into a first partition image; the unintended observation image outside the predicted gazing area is divided into a second partition image; the first partition image is then rendered at a first rendering resolution and the second partition image at a second rendering resolution, the first being greater than the second. This reduces the share of rendering resources occupied by the non-gazing area and supports high definition and a high refresh rate for the gazing-area image: the rendering resolution is high in the eye-focused area and low in the non-focused (peripheral vision) area, so that, without affecting user experience, or even while improving it, GPU resources are saved, the image is rendered more reasonably, the waste of rendering resources caused by rendering the image outside the gazing area at high quality is avoided, redundant rendering is reduced, and the focusing effect of the displayed image is improved.
Further, the intended observation angle of view of the user can be determined according to the predicted eyeball observation point and the head turning trend by detecting the predicted eyeball observation point of the user in the predicted window image.
In this embodiment, an eye image of the user may be acquired by an eye detection device mounted on the head-mounted display device, and a calculation may be performed based on eye feature information extracted from the eye image to obtain the coordinates of the fixation point of the user's eyes on the display screen, thereby obtaining the eyeball observation point of the prediction window image. The eye detection device may be a Micro-Electro-Mechanical System (MEMS) including an infrared scanning mirror, an infrared light source, and an infrared receiver. Alternatively, the eye detection device may be a capacitive sensor disposed near the user's eye region, which detects eye movement from the capacitance between the eyeball and the capacitive plates of the sensor to determine the user's current eye position information and eyeball movement trend, and then determines the user's predicted eyeball observation point in the predicted window image from the current eye position information and the eyeball movement trend. The eye detection device may also be a myoelectric current detector connected to electrodes placed at the user's nose bridge, forehead, ears, and earlobes; the electrodes collect myoelectric signals from these locations, eye movement is detected from the pattern of the detected signals to determine the user's current eye position information and eyeball movement trend, and the user's predicted eyeball observation point in the predicted window image is then determined from the current eye position information and the eyeball movement trend.
To facilitate understanding, an example is given. When a user turns the head to the right, the user most likely wants to view the right-side picture. Gaze-point identification is then performed: if the predicted gaze point (i.e., the predicted eyeball observation point) identified by the eye detection apparatus is also located on the right-side picture (the gaze point is to the right of the display centerline), the user most likely wants to view the right-side picture, and the intended observation angle is the observation angle contained in the right-side picture, as shown in fig. 5. In this case, the rendering resolution of the leftmost 1/a equal-division region of the picture can be reduced to lessen the image rendering load on the head-mounted display device. Likewise, if the user turns the head to the left, the user most likely wants to see the left-side picture; if the gaze point identified by the eye detection device is also on the left-side picture (the gaze point is to the left of the display centerline), the user most likely wants to view the left-side picture, the intended observation angle is the observation angle contained in the left-side picture, and the rendering resolution of the rightmost 1/a equal-division region can be reduced to lessen the rendering load. It should be noted that a is generally greater than 2, for example a equals 3 or 4. In one example, a equals 4: if the user turns the head to the left, the rendering resolution is reduced for the rightmost 1/4 of the picture; if the user turns the head to the right, the rendering resolution is reduced for the leftmost 1/4 of the picture.
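The 1/a rule in the example above can be sketched as a small decision function. The yaw sign convention and the function name are assumptions for illustration only:

```python
def low_res_strip(frame_width, head_yaw, gaze_x, a=4):
    """Sketch of the 1/a rule: when the head turns right AND the gaze point
    is right of the display centerline, degrade the leftmost 1/a of the
    frame (and symmetrically for a left turn). Returns the (start, end)
    x-range to render at reduced resolution, or None when head turn and
    gaze do not agree. head_yaw > 0 is taken to mean a rightward turn."""
    strip = frame_width // a
    centre = frame_width / 2
    if head_yaw > 0 and gaze_x > centre:   # turning right, looking right
        return (0, strip)                  # degrade leftmost 1/a
    if head_yaw < 0 and gaze_x < centre:   # turning left, looking left
        return (frame_width - strip, frame_width)  # degrade rightmost 1/a
    return None                            # keep full resolution everywhere
```

Returning None when the two signals disagree is a conservative choice consistent with the text, which only lowers resolution when the head turn and gaze point both indicate the same side.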
In a possible implementation, the step of detecting a predicted eye observation point of the user in the prediction window image comprises:
step C10, collecting a current eyeball image of a user, determining an eyeball model with the highest matching degree with the current eyeball image, and taking the eyeball model with the highest matching degree as a current actual eyeball model;
in this embodiment, the current eyeball image may be subjected to image recognition based on a preset image recognition algorithm, so as to identify the eyeball model with the highest matching degree with the current eyeball image. The preset image recognition algorithm has been studied by those skilled in the art, and is not described herein.
And step C20, searching and obtaining an eyeball observation point mapped by the current actual eyeball model from a preset eyeball model mapping database, and taking the mapped eyeball observation point as a predicted eyeball observation point of the user in the prediction window image.
As will be understood by those skilled in the art, different types of eyeball models (e.g., different information such as exit pupil distance, pupil shape, pupil region position, and current pupil spot position in the eyeball model) often correspond to different eyeball observation points.
In the present embodiment, the eyeball model mapping database stores information of a plurality of types of eyeball models and a mapping relationship between each eyeball model and an eyeball observation point in a one-to-one mapping manner.
In the embodiment, the current eyeball image of the user is collected, the eyeball model with the highest matching degree with the current eyeball image is determined, the eyeball model with the highest matching degree is used as the current actual eyeball model, the eyeball observation point mapped by the current actual eyeball model is inquired and obtained from the preset eyeball model mapping database, and therefore the predicted eyeball observation point of the user in the prediction window image is accurately obtained.
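Steps C10–C20 amount to a best-match search followed by a database lookup. A hypothetical sketch, in which the similarity score is a placeholder for the preset image-recognition algorithm and the feature vectors and database entries are invented for illustration:

```python
def best_matching_model(current_features, model_db):
    """Sketch of steps C10-C20: find the stored eyeball model with the
    highest matching degree to the current eyeball image, then return the
    eyeball observation point mapped to it. model_db maps
    model_id -> (feature_vector, eyeball_observation_point)."""
    def score(model_id):
        features, _ = model_db[model_id]
        # placeholder similarity: negative sum of absolute feature differences
        return -sum(abs(a - b) for a, b in zip(current_features, features))
    best = max(model_db, key=score)        # C10: highest matching degree
    return best, model_db[best][1]         # C20: mapped observation point

# illustrative one-to-one eyeball-model mapping database
db = {
    "model_a": ([0.6, 0.2], (320, 240)),
    "model_b": ([0.1, 0.9], (640, 360)),
}
model_id, predicted_point = best_matching_model([0.55, 0.25], db)
```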
In another possible implementation, the step of detecting a predicted eyeball observation point of the user in the prediction window image includes:
step D10, collecting a current eyeball image of a user, and carrying out gray processing on the current eyeball image;
in this embodiment, the current eyeball image of the user can be captured by a camera mounted on the head-mounted display device.
Step D20, determining a pupil area image according to the current eyeball image subjected to the gray processing, and performing binarization processing on the pupil area image;
step D30, performing edge detection on the pupil area image after binarization processing to obtain pupil edge points, and performing ellipse fitting on the pupil edge points to obtain the current pupil center;
and D40, determining a predicted eyeball observation point of the user in the predicted window image according to the current pupil center.
As an example, the step of determining a predicted eyeball observation point of the user in the prediction window image according to the current pupil center comprises:
step E10, inquiring an eyeball observation point mapped by the current pupil center from a pre-calibrated pupil center mapping data table;
those skilled in the art will appreciate that pupil centers at different locations often correspond to different eye observation points. It should be noted that, in the pre-calibrated pupil center mapping data table, a plurality of pupil centers at different positions and a one-to-one mapping relationship between each pupil center and an eyeball observation point are stored.
And E20, taking the mapped eyeball observation point as a predicted eyeball observation point of the user in the prediction window image.
In this embodiment, a current eyeball image of the user is collected and converted to grayscale; a pupil area image is determined from the grayscale image and binarized; edge detection is then performed on the binarized pupil area image to obtain pupil edge points; ellipse fitting is performed on the pupil edge points to obtain the current pupil center; and the user's predicted eyeball observation point in the predicted window image can then be accurately obtained based on the current pupil center.
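A simplified NumPy sketch of the pipeline in steps D10–D40 follows. A real implementation would detect pupil edge points and fit an ellipse (e.g., with a computer-vision library); here the centroid of the thresholded dark region stands in for the fitted ellipse center, and the threshold value is an assumption:

```python
import numpy as np

def pupil_center(gray_eye_image, threshold=60):
    """Approximate steps D20-D30: binarize the (already grayscale) eye
    image so the dark pupil region becomes 1, then take the centroid of
    that region as the pupil center. The centroid is a simplified stand-in
    for edge detection plus ellipse fitting."""
    binary = gray_eye_image < threshold          # D20: binarization
    ys, xs = np.nonzero(binary)                  # candidate pupil pixels
    if len(xs) == 0:
        return None                              # no pupil region found
    return (float(xs.mean()), float(ys.mean()))  # D30: fitted center stand-in
```

Step D40 would then map this center to a predicted eyeball observation point, e.g., via the pre-calibrated pupil-center mapping data table of step E10.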
Example two
Based on the foregoing embodiments of the present application, in another embodiment of the present application, the same or similar contents as those in the foregoing embodiment may be referred to the above description, and are not repeated herein. On this basis, the step of dividing the prediction window image into at least two partitioned images from near to far further comprises the following steps:
and F10, performing brightness display control on the partitioned images according to the regional backlight brightness corresponding to the partitioned images, wherein the regional backlight brightness of the partitioned images is gradually reduced from near to far.
In conventional image rendering technology, brightness display control is generally performed on the image displayed across the whole target screen at a uniformly high area backlight brightness to meet the user's viewing requirements. However, if the user's main eye attention area covers only part of the screen, the image outside that area is also displayed at high area backlight brightness in the prior art, which wastes power.
Therefore, in the technical scheme of this embodiment, brightness display control is performed on each partitioned image according to its corresponding area backlight brightness, where the area backlight brightness of the partitioned images decreases from near to far. The backlight brightness is thus high in the eye-focused area and low in the non-focused (peripheral vision) area, so that, without affecting user experience, or even while improving it, display energy consumption is saved, brightness display control of the image becomes more reasonable, and the power wasted by high backlight brightness outside the eye gazing area is avoided.
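The decreasing near-to-far backlight assignment of step F10 could be sketched as follows; the linear ramp and the brightness levels are illustrative assumptions, not values from this disclosure:

```python
def backlight_levels(num_partitions, max_level=255, min_level=64):
    """Sketch of step F10: assign each partition (ordered near-to-far from
    the predicted eyeball observation point) an area backlight level that
    decreases linearly from max_level to min_level."""
    if num_partitions == 1:
        return [max_level]
    step = (max_level - min_level) / (num_partitions - 1)
    return [round(max_level - i * step) for i in range(num_partitions)]
```

A display with local-dimming zones would then drive each zone's backlight at the level of the partition it falls in.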
In a possible embodiment, the step of dividing the current view image into at least two partitioned images according to a preset partitioning rule from near to far based on the predicted eye observation point comprises:
g10, determining a prediction watching area according to the prediction eyeball observation point;
step G20, dividing the intention observation image in the prediction gazing region in the prediction window image into a first partition image;
in the present embodiment, the first partition image is the main area viewed by the user and should be assigned a higher area backlight brightness.
Step G30, dividing the unintended observation image outside the prediction gazing area in the prediction window image into a second partition image;
in this embodiment, the second partition image is a non-main area viewed by the user and should be assigned a lower area backlight brightness.
The performing brightness display control on the partitioned image according to the area backlight brightness corresponding to the partitioned image comprises:
and G30, performing brightness display control on the first subarea image by first area backlight brightness, and performing brightness display control on the second subarea image by second area backlight brightness, wherein the first area backlight brightness is greater than the second area backlight brightness.
In this embodiment, the predicted gazing area is determined according to the predicted eyeball observation point; the intended observation image inside the predicted gazing area of the predicted window image is divided into a first partition image, and the unintended observation image outside it into a second partition image; brightness display control is then performed on the first partition image at a first area backlight brightness and on the second partition image at a second area backlight brightness, the first being greater than the second. This reduces the brightness power wasted on the non-gazing area and improves the brightness and definition of the gazing-area image: the backlight brightness is high in the eye-focused area and low in the non-focused (peripheral vision) area, so that, without affecting user experience, or even while improving it, display energy consumption is saved, brightness display control of the image becomes more reasonable, and the power wasted by high display brightness outside the eye gazing area is avoided.
EXAMPLE III
An embodiment of the present invention provides a head-mounted display device, including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the image rendering method according to the first embodiment.
Referring now to FIG. 7, shown is a schematic diagram of a head-mounted display device suitable for implementing embodiments of the present disclosure. Head-mounted display devices in embodiments of the present disclosure may include, but are not limited to, Mixed Reality (MR) devices, Augmented Reality (AR) devices, Virtual Reality (VR) devices, Extended Reality (XR) devices, or some combination thereof. The head-mounted display device shown in fig. 7 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 7, the head-mounted display device may include a processing means 1001 (e.g., a central processing unit, a graphics processor, etc.), which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage means into a random access memory (RAM) 1004. The RAM 1004 also stores various programs and data necessary for the operation of the AR glasses. The processing means 1001, the ROM 1002, and the RAM 1004 are connected to each other via a bus 1005. An input/output (I/O) interface 1006 is also connected to the bus 1005.
Generally, the following systems may be connected to the I/O interface 1006: an input device 1007 including, for example, a touch screen, a touch pad, a keyboard, a mouse, an image sensor, a microphone, an accelerometer, a gyroscope, or the like; an output device 1008 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; a storage device 1003 including, for example, a magnetic tape, a hard disk, or the like; and a communication device 1009. The communication device 1009 may allow the AR glasses to communicate wirelessly or by wire with other devices to exchange data. While the figures illustrate AR glasses with various systems, it is to be understood that not all of the illustrated systems are required to be implemented or provided. More or fewer systems may alternatively be implemented or provided.
In particular, the processes described above with reference to the flow diagrams may be implemented as computer software programs, according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means, or installed from the storage means 1003, or installed from the ROM 1002. The computer program, when executed by the processing device 1001, performs the above-described functions defined in the methods of embodiments of the present disclosure.
By adopting the image rendering method in the first embodiment or the second embodiment, the head-mounted display device provided by the invention can solve the technical problem of dizziness caused by time delay in the process of rendering the image by the augmented reality device. Compared with the prior art, the beneficial effects of the head-mounted display device provided by the embodiment of the invention are the same as the beneficial effects of the image rendering method provided by the first embodiment, and other technical features of the head-mounted display device are the same as those disclosed in the method of the previous embodiment, which are not repeated herein.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the foregoing description of embodiments, the particular features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily think of the changes or substitutions within the technical scope of the present invention, and shall cover the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.
Example four
Embodiments of the present invention provide a computer-readable storage medium having computer-readable program instructions stored thereon, where the computer-readable program instructions are used to execute the image rendering method in the first embodiment.
Embodiments of the present invention provide a computer-readable storage medium, such as a USB flash drive, which may be, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system or device, or any combination thereof. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present embodiment, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system or device. Program code embodied on a computer-readable storage medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable storage medium may be embodied in a head-mounted display device; or may be separate and not incorporated into the head-mounted display device.
The computer readable storage medium carries one or more programs which, when executed by the head-mounted display device, cause the head-mounted display device to: dynamically detect head motion attitude information of a user, acquire a pre-stored frame rate refresh interval and a maximum observation visual angle, and predict the image frame edge of a prediction window image for the next time step according to the head motion attitude information, the frame rate refresh interval, and the maximum observation visual angle; execute a rendering thread to render the pixels within the image frame edge; if the pixels within the image frame edge have not finished rendering by a preset time node before the next clock cycle arrives, execute an asynchronous time warping thread to perform direction warping on the most recently rendered current window image to obtain a transition window image; and display the transition window image until the pixels within the image frame edge finish rendering, then display the rendered prediction window image.
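The prediction-and-fallback flow described above can be sketched in a few lines; this is a minimal illustrative Python model, and the function names, the 72 Hz refresh interval, and the warp blend factor are assumptions for the sketch rather than the patent's actual implementation.

```python
def predict_frame_edge(yaw_deg, yaw_rate_dps, frame_interval_s, max_fov_deg):
    """Predict the angular edges of the next frame from head motion:
    extrapolate yaw over one refresh interval, then span the maximum
    observation visual angle around the predicted orientation."""
    predicted_yaw = yaw_deg + yaw_rate_dps * frame_interval_s
    half_fov = max_fov_deg / 2.0
    return predicted_yaw - half_fov, predicted_yaw + half_fov


def select_display_frame(render_done, last_frame_yaw, predicted_yaw):
    """If rendering finished in time, show the new frame; otherwise warp the
    most recently rendered frame toward the predicted orientation and show
    that as a transition frame (the asynchronous time warp fallback)."""
    if render_done:
        return "rendered", predicted_yaw
    # Reproject the old frame part-way toward the predicted pose; the 0.8
    # blend factor is purely illustrative.
    return "warped", last_frame_yaw + 0.8 * (predicted_yaw - last_frame_yaw)
```

For example, at a 72 Hz refresh (an interval of roughly 13.9 ms) with the head turning at 90°/s and a 100° maximum visual angle, the predicted window spans roughly −48.75° to +51.25° of yaw.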
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented in software or in hardware. In some cases, the name of a module does not constitute a limitation on the module itself.
The computer readable storage medium provided by the present invention stores computer readable program instructions for executing the image rendering method, and can solve the technical problem that rendering latency in an augmented reality device tends to cause vertigo. Compared with the prior art, the beneficial effects of the computer readable storage medium provided by the embodiment of the present invention are the same as those of the image rendering method provided in the first or second embodiment, and are not repeated here.
Example Five
Embodiments of the present invention further provide a computer program product including a computer program which, when executed by a processor, implements the steps of the image rendering method described above.
The computer program product provided by the present application can solve the technical problem that rendering latency in an augmented reality device tends to cause vertigo. Compared with the prior art, the beneficial effects of the computer program product provided by the embodiment of the present invention are the same as those of the image rendering method provided in the first or second embodiment, and are not repeated here.
The above description is only a preferred embodiment of the present application and is not intended to limit its scope; all equivalent structural or process modifications made using the contents of the specification and drawings, whether applied directly or indirectly in other related technical fields, are likewise included within the protection scope of the present application.
Claims (10)
1. An image rendering method, applied to a head-mounted display device, the method comprising:
dynamically detecting head motion attitude information of a user, acquiring a pre-stored frame rate refresh interval and a maximum observation visual angle, and predicting an image frame edge of a prediction window image for the next time step according to the head motion attitude information, the frame rate refresh interval, and the maximum observation visual angle;
executing a rendering thread to render pixels within the image frame edge;
if the pixels within the image frame edge have not finished rendering by a preset time node before the next clock cycle arrives, executing an asynchronous time warping thread to perform direction warping on the most recently rendered current window image to obtain a transition window image;
and displaying the transition window image until the pixels within the image frame edge finish rendering, then displaying the rendered prediction window image.
2. The image rendering method of claim 1, wherein after the step of executing a rendering thread to render pixels within the image frame edge, further comprising:
and if rendering of the pixels within the image frame edge is completed before the preset time node in the next clock cycle arrives, displaying the rendered prediction window image.
3. The image rendering method of claim 1, wherein the step of executing a rendering thread to render the prediction window image comprises:
detecting a predicted eyeball observation point of the user in the prediction window image, and dividing the prediction window image into at least two partitioned images, from near to far, according to a preset partition rule with the predicted eyeball observation point as a reference;
rendering each partitioned image at its corresponding rendering resolution, wherein the rendering resolutions of the partitioned images decrease sequentially from near to far.
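The near-to-far partition rendering of claim 3 can be illustrated with a simple distance-to-gaze rule; the zone radii and resolution scales below are assumed values for the sketch, not figures from the patent.

```python
import math


def zone_scale(px, py, gaze_x, gaze_y,
               radii=(200, 500), scales=(1.0, 0.5, 0.25)):
    """Pick a rendering-resolution scale for a pixel from its distance to
    the predicted eyeball observation point; scales decrease from near to
    far, so only the gazed-at zone renders at full resolution."""
    dist = math.hypot(px - gaze_x, py - gaze_y)
    for radius, scale in zip(radii, scales):
        if dist <= radius:
            return scale
    return scales[-1]  # farthest zone gets the lowest resolution
```

Because the zone nearest the gaze point keeps full resolution, perceived quality stays high while total rendering work drops sharply in the periphery.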
4. The image rendering method of claim 3, wherein the step of detecting a predicted eye observation point of the user in the prediction window image comprises:
acquiring a current eyeball image of a user, determining an eyeball model with the highest matching degree with the current eyeball image, and taking the eyeball model with the highest matching degree as a current actual eyeball model;
and inquiring to obtain an eyeball observation point mapped by the current actual eyeball model from a preset eyeball model mapping database, and taking the mapped eyeball observation point as a predicted eyeball observation point of the user in the prediction window image.
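A minimal sketch of the model-matching lookup in claim 4, assuming the mapping database is a dictionary from eyeball models to observation points and that a `match_fn` scores the similarity between the current eyeball image and each model; all names here are illustrative, not from the patent.

```python
def predict_gaze_point(eye_image, model_db, match_fn):
    """Select the eyeball model that best matches the current eye image and
    return the observation point that the mapping database assigns to it."""
    best_model = max(model_db, key=lambda model: match_fn(eye_image, model))
    return model_db[best_model]
```

For example, with `model_db = {"model_a": (120, 80), "model_b": (300, 200)}` and a similarity function that scores `"model_b"` highest, the lookup returns `(300, 200)` as the predicted eyeball observation point.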
5. The image rendering method of claim 3, wherein the detecting of the predicted eye observation point of the user in the prediction window image comprises:
acquiring a current eyeball image of a user, and carrying out gray processing on the current eyeball image;
determining a pupil area image according to the current eyeball image after the graying processing, and carrying out binarization processing on the pupil area image;
performing edge detection on the pupil area image after binarization processing to obtain pupil edge points, and performing ellipse fitting on the pupil edge points to obtain the current pupil center;
and determining a predicted eyeball observation point of the user in the predicted window image according to the current pupil center.
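The pipeline of claim 5 (grayscale, binarize, locate the pupil, find its center) can be approximated in a few lines of NumPy. For a clean elliptical pupil blob, the centroid of the binary mask coincides with the center an ellipse fit would return, so this sketch substitutes a mask centroid for the edge-detection and ellipse-fitting steps; the threshold value is an assumption.

```python
import numpy as np


def pupil_center(gray, thresh=50):
    """Approximate the pupil center of a grayscale eye image.

    Binarize the darkest region (the pupil), then take the centroid of the
    foreground pixels. Returns (x, y) in pixel coordinates, or None when no
    pixel falls below the threshold."""
    mask = gray < thresh                 # binarization: pupil is darkest
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```

A production implementation would fit an ellipse to the detected pupil edge points, as the claim describes, which is more robust to partial occlusion by the eyelid than a raw centroid.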
6. The image rendering method of claim 5, wherein the step of determining a predicted eye observation point of the user in the prediction window image according to the current pupil center comprises:
inquiring to obtain a predicted eyeball observation point mapped by the current pupil center from a pre-calibrated pupil center mapping data table;
and taking the mapped eyeball observation point as a predicted eyeball observation point of the user in the prediction window image.
7. The image rendering method of claim 3, wherein the step of dividing the prediction window image into at least two partitioned images from near to far further comprises:
and performing brightness display control on the partitioned images according to the regional backlight brightness corresponding to each partitioned image, wherein the regional backlight brightness of the partitioned images decreases gradually from near to far.
8. The image rendering method according to claim 7, wherein the step of dividing the prediction window image into at least two partitioned images from near to far according to a preset partition rule with the predicted eyeball observation point as a reference comprises:
determining a predicted gaze region according to the predicted eyeball observation point;
dividing the intentionally observed image within the predicted gaze region of the prediction window image into a first partitioned image;
dividing the unintentionally observed image outside the predicted gaze region of the prediction window image into a second partitioned image;
the performing brightness display control on the partitioned images according to the regional backlight brightness corresponding to the partitioned images comprises:
performing brightness display control on the first partitioned image with a first regional backlight brightness, and on the second partitioned image with a second regional backlight brightness, wherein the first regional backlight brightness is greater than the second regional backlight brightness.
9. A head-mounted display device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the steps of the image rendering method of any one of claims 1 to 8.
10. A readable storage medium, characterized in that the readable storage medium is a computer readable storage medium, on which a program implementing an image rendering method is stored, the program implementing the image rendering method being executed by a processor to implement the steps of the image rendering method according to any one of claims 1 to 8.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211431175.9A CN115686219A (en) | 2022-11-16 | 2022-11-16 | Image rendering method, head-mounted display device, and readable storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN115686219A (en) | 2023-02-03 |
Family
ID=85051125
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202211431175.9A Pending CN115686219A (en) | 2022-11-16 | 2022-11-16 | Image rendering method, head-mounted display device, and readable storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN115686219A (en) |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2025091228A1 (en) * | 2023-10-31 | 2025-05-08 | 京东方科技集团股份有限公司 | Image generation method, display device and server |
| CN120013819A (en) * | 2025-01-21 | 2025-05-16 | 优酷文化科技(北京)有限公司 | Rendering system and method for virtual scenes |
| CN120013819B (en) * | 2025-01-21 | 2025-08-01 | 优酷文化科技(北京)有限公司 | Rendering system and method for virtual scene |
| CN120766623A (en) * | 2025-06-16 | 2025-10-10 | 东莞市顺为光电有限公司 | A backlight module operation method and system based on artificial intelligence |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11836289B2 (en) | Use of eye tracking to adjust region-of-interest (ROI) for compressing images for transmission | |
| US10775886B2 (en) | Reducing rendering computation and power consumption by detecting saccades and blinks | |
| US10739849B2 (en) | Selective peripheral vision filtering in a foveated rendering system | |
| US10720128B2 (en) | Real-time user adaptive foveated rendering | |
| CN115686219A (en) | Image rendering method, head-mounted display device, and readable storage medium | |
| US10859830B2 (en) | Image adjustment for an eye tracking system | |
| CN115761089A (en) | Image rendering method and device, head-mounted display equipment and readable storage medium | |
| US20200241731A1 (en) | Virtual reality vr interface generation method and apparatus | |
| CN109741289B (en) | Image fusion method and VR equipment | |
| CN115914603A (en) | Image rendering method, head-mounted display device and readable storage medium | |
| US11308685B2 (en) | Rendering computer-generated reality text | |
| EP4557272A1 (en) | Image processing method and apparatus | |
| CN109271022B (en) | Display method and device of VR equipment, VR equipment and storage medium | |
| CN115713783A (en) | Image rendering method and device, head-mounted display equipment and readable storage medium | |
| US20250316113A1 (en) | Method and apparatus of line-of-sight detection, electronic device and storage medium | |
| CN115576637A (en) | Screen capture method, system, electronic device and readable storage medium | |
| EP4084473A1 (en) | Method and system for balancing the load of an image generator | |
| CN120673693A (en) | Display control method, display control device, electronic apparatus, storage medium, and program product | |
| CN119155451A (en) | Eyeball tracking-based image processing method, device, equipment and storage medium | |
| KR20240099029A (en) | Method and device for naked eye 3D displaying vehicle instrument | |
| CN120339558A (en) | Electronic device, method and storage medium | |
| CN119893066A (en) | Anti-dazzling wide dynamic technology application method based on virtual reality |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |