CN119893066A - Anti-dazzling wide dynamic technology application method based on virtual reality - Google Patents

Info

Publication number
CN119893066A
Authority
CN
China
Prior art keywords
image
user
virtual reality
visual field
strong light
Prior art date
Legal status
Pending
Application number
CN202411956558.7A
Other languages
Chinese (zh)
Inventor
金雷钢
郭代琳
黄南平
Current Assignee
Shenzhen Onstar Technology Co ltd
Original Assignee
Shenzhen Onstar Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Onstar Technology Co ltd filed Critical Shenzhen Onstar Technology Co ltd
Priority to CN202411956558.7A priority Critical patent/CN119893066A/en
Publication of CN119893066A publication Critical patent/CN119893066A/en
Pending legal-status Critical Current

Landscapes

  • Controls And Circuits For Display Device (AREA)

Abstract

The invention discloses an anti-dazzling wide dynamic technology application method based on virtual reality, belonging to the technical field of virtual reality. It aims to solve the problem that a user may feel dizzy or uncomfortable when using a VR device because of a mismatch between visual and motion perception. The method comprises: suppressing the strong-light regions of an image based on DSP technology; capturing the user's eye-movement data in real time through the virtual reality device to obtain the user's gaze direction and eye-movement trajectory; predicting the user's future field-of-view requirements from the gaze direction and eye-movement trajectory and dynamically adjusting the field-of-view range of the image in advance; and, based on the predicted and adjusted field of view, improving the sharpness and detail of the image while moderately reducing rendering quality in regions outside the line of sight to save computing resources.

Description

Anti-dazzling wide dynamic technology application method based on virtual reality
Technical Field
The invention belongs to the technical field of virtual reality, and particularly relates to an anti-dazzling wide dynamic technology application method based on virtual reality.
Background
Virtual reality technology combines computer, electronic-information, and simulation techniques. Its basic implementation relies mainly on computer technology, integrating the latest achievements of high technologies such as three-dimensional graphics, multimedia, simulation, display, and related technologies. With the help of devices such as computers, it generates a realistic virtual world offering three-dimensional visual, tactile, olfactory, and other sensory experiences, so that a person in the virtual world feels fully immersed.
At present, immersive virtual reality technology simulates a real or fictional scene and, through a highly realistic virtual environment, makes users feel as if they were actually placed inside it, achieving strong interactivity and realism. The technology moves the user interface from the traditional flat screen to a panoramic display with a strong stereoscopic effect: by wearing a head-mounted display or using devices such as body tracking and hand tracking, a user can enter the virtual world and interact with its environment, objects, and characters, thereby obtaining realistic visual, auditory, tactile, and other experiences. User interface design methods based on immersive virtual reality have therefore emerged and become a research hotspot in the field of interface design.
However, when playing virtual reality content in the prior art, the highlight portions of a VR image often cause a dazzling phenomenon during the experience. At the same time, because of the mismatch between visual and motion perception, the user may feel dizziness or discomfort when using the VR device. Glare and an insufficient dynamic range are thus common problems in virtual reality technology that greatly reduce the user's immersion and comfort in the virtual world.
Therefore, there is a need for an anti-dazzling wide dynamic technology application method based on virtual reality that solves the prior-art problem of users feeling dizziness or discomfort when using VR devices due to the mismatch between visual and motion perception.
Disclosure of Invention
To solve the above technical problems, an anti-dazzling wide dynamic technology application method based on virtual reality is provided. The technical scheme enhances dark-area detail by suppressing strong-light regions in the VR image, achieving an anti-dazzling effect, reducing the user's dizziness, and improving comfort. It also dynamically predicts the user's future field-of-view requirements from the gaze direction, adjusts the field-of-view range of the image in advance, and improves image sharpness and detail, achieving a wide dynamic field-of-view effect, avoiding mismatches among visual, auditory, and motion perception, and providing the user with a broader virtual-environment experience.
In order to achieve the purpose, the invention provides the following technical scheme that the anti-dazzling wide dynamic technology application method based on virtual reality comprises the following steps:
S1, acquiring an input VR image and preprocessing the image;
S2, suppressing the strong-light regions of the image based on DSP technology;
S3, capturing the user's eye-movement data in real time through the virtual reality device, and obtaining the user's gaze direction and eye-movement trajectory;
S4, predicting the user's future field-of-view requirements from the gaze direction and eye-movement trajectory information, and dynamically adjusting the field-of-view range of the image in advance;
S5, based on the predicted and adjusted field-of-view range, improving image sharpness and detail, and moderately reducing rendering quality in regions outside the line of sight to save computing resources.
In this scheme, preprocessing the image in step S1 includes denoising the image to reduce noise interference and enhancing properties such as contrast and brightness, so that the difference between the highlight portion and its surroundings becomes more obvious and subsequent processing becomes easier. Filter-based denoising methods, such as Gaussian filtering and mean filtering, reduce noise by smoothing the image.
It is further worth mentioning that the step S2 includes:
The DSP processor analyzes the image pixel by pixel to detect strong-light regions, judging which pixels belong to a strong-light region by comparison with a set threshold; the threshold needs to be adjusted according to the actual image and the application scene;
The detected strong-light region is then suppressed: the DSP processor adjusts brightness values by applying a nonlinear transformation, generally expressed as

L_out = f(L_in)

where L_in is the luminance value of the input image, L_out is the luminance value of the output image, and f is a nonlinear function; in strong-light suppression, a suitable nonlinear function is chosen to reduce the brightness of the strong-light region while keeping the brightness of the surrounding area unchanged or slightly enhancing it;
Finally, the highlight-suppressed image is further optimized through denoising, filtering, and sharpening steps to improve its overall quality and sharpness.
Here, DSP (Digital Signal Processing) is a technology that uses a digital computer or dedicated processing device to acquire, transform, filter, compress, and recognize signals in digital form. DSP processors have a specialized hardware and software structure, can execute operations such as addition, subtraction, multiplication, and division very quickly, and are particularly suitable for signal-processing tasks with strict real-time requirements.
Using DSP technology to process the strong-light regions of an image adjusts the video's signal brightness into a normal range and prevents excessive contrast within the same image. This improves image quality, preserves detail information, improves visual comfort and image-recognition capability, and provides strong support for various image-processing applications.
It should be further noted that the eye tracking and sensing technology in step S3 is a biomedical engineering technique that tracks and measures eye movements by monitoring small movements of the eyeballs. This makes it possible to capture the user's eye-movement trajectory in real time and thereby determine the user's gaze direction.
It will be appreciated that in human vision the foveal region is sharp, with small coverage but high visual acuity, while peripheral vision is blurred. Exploiting this characteristic, eye-tracking-based rendering renders at full quality only the image region at the focus of the user's gaze, improving sharpness and detail there, and renders the peripheral region at lower quality to reduce the load on the graphics processor. An eye-tracking system generally includes an eye-tracking sensor, a processing unit, and a display control module: the sensor captures eye-movement data, the processing unit processes and analyzes the data to determine the gaze direction, and the display control module then dynamically adjusts the image rendering area accordingly.
It should be further noted that the step S4 includes:
preprocessing data collected by an eyeball tracking sensor;
Extracting features related to the user's line of sight from the preprocessed data, including gaze point location;
Predicting the gaze point location of the user based on a linear regression model;
The field of view range is adjusted based on the predicted gaze point position.
As a preferred embodiment, predicting the gaze point position of the user based on the linear regression model uses the calculation formula

ŷ = β0 + β1·x1 + β2·x2 + ... + βn·xn

where ŷ is the predicted gaze point location (a two-dimensional coordinate, predicted separately for each axis); β0 is an intercept term; β1, β2, ..., βn are regression coefficients; and x1, x2, ..., xn are the coordinate values of previous gaze point positions.
As a preferred embodiment, adjusting the field-of-view range based on the predicted gaze point position includes:

Assuming the coordinates of the current field-of-view center are (x_c, y_c) and the predicted gaze point is (x_g, y_g), the line-of-sight offset (Δx, Δy) is calculated as

Δx = x_g - x_c, Δy = y_g - y_c

From Δx and Δy, the direction in which the field of view should be adjusted can be determined, and the boundaries are shifted as

L' = L + k·Δx, T' = T + k·Δy, R' = R + k·Δx, B' = B + k·Δy

where L, T, R, B are respectively the left, upper, right, and lower boundary values of the original field of view; L', T', R', B' are the corresponding boundary values of the adjusted field of view; and k is a fixed adjustment proportion that determines the magnitude of the adjustment.
As a preferred embodiment, the step S5 includes applying the calculated viewing range adjustment parameters to a rendering engine of the virtual reality device, where the rendering engine recalculates the rendering area and size of the image according to the parameters, so as to dynamically adjust the viewing range.
Further, high-definition rendering techniques such as antialiasing and texture-detail enhancement are applied within the predicted field of view to ensure that these regions have higher resolution and richer detail, while the secondary (peripheral) field of view is rendered at a lower resolution. This significantly reduces the consumption of computing resources while keeping the overall picture smooth.
Compared with the prior art, the anti-dazzling wide dynamic technology application method based on virtual reality provided by the invention at least comprises the following beneficial effects:
By suppressing strong-light regions in the VR image, dark-area details are enhanced, an anti-dazzling effect is achieved, the user's dizziness is reduced, and comfort is improved. By dynamically predicting the user's future field-of-view requirements from the gaze direction and adjusting the field-of-view range of the image in advance, image sharpness and detail are improved and a wide dynamic field-of-view effect is achieved, avoiding mismatches among visual, auditory, and motion perception, providing the user with a broader virtual-environment experience, and significantly improving the user experience of virtual reality devices.
Drawings
Fig. 1 is a schematic flow chart of an anti-dazzling wide dynamic technology application method based on virtual reality.
Detailed Description
The following description is presented to enable one of ordinary skill in the art to make and use the invention. The preferred embodiments in the following description are by way of example only and other obvious variations will occur to those skilled in the art.
Referring to fig. 1, the invention provides an anti-dazzling wide dynamic technology application method based on virtual reality, which comprises the following steps:
S1, acquiring an input VR image and preprocessing the image;
S2, suppressing the strong-light regions of the image based on DSP technology;
S3, capturing the user's eye-movement data in real time through the virtual reality device, and obtaining the user's gaze direction and eye-movement trajectory;
S4, predicting the user's future field-of-view requirements from the gaze direction and eye-movement trajectory information, and dynamically adjusting the field-of-view range of the image in advance;
S5, based on the predicted and adjusted field-of-view range, improving image sharpness and detail, and moderately reducing rendering quality in regions outside the line of sight to save computing resources.
It can be understood that preprocessing the image in step S1 includes denoising the image to reduce noise interference and enhancing properties such as contrast and brightness, so that the difference between the highlight portion and its surroundings becomes more obvious and subsequent processing becomes easier. Filter-based denoising methods, such as Gaussian filtering and mean filtering, reduce noise by smoothing the image.
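As a concrete illustration, the denoising step above can be sketched in Python. The filter parameters (`sigma`, `mean_size`) and the min-max contrast stretch are illustrative assumptions, not values taken from this disclosure:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def preprocess(image, sigma=1.0, mean_size=3, use_gaussian=True):
    """Denoise a grayscale image (float values in [0, 1]) by smoothing,
    then apply a min-max contrast stretch so the highlight region stands
    out more clearly from its surroundings."""
    image = np.asarray(image, dtype=float)
    if use_gaussian:
        smoothed = gaussian_filter(image, sigma=sigma)    # Gaussian filtering
    else:
        smoothed = uniform_filter(image, size=mean_size)  # mean filtering
    lo, hi = smoothed.min(), smoothed.max()
    # Stretch contrast back to the full [0, 1] range (skip flat images).
    return (smoothed - lo) / (hi - lo) if hi > lo else smoothed
```

Both filters smooth away pixel-level noise; the contrast stretch afterwards widens the gap between highlight and surrounding values, which simplifies the thresholding in step S2.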
It is understood that the step S2 includes:
The DSP processor analyzes the image pixel by pixel to detect strong-light regions, judging which pixels belong to a strong-light region by comparison with a set threshold; the threshold needs to be adjusted according to the actual image and the application scene;
The detected strong-light region is then suppressed: the DSP processor adjusts brightness values by applying a nonlinear transformation, generally expressed as

L_out = f(L_in)

where L_in is the luminance value of the input image, L_out is the luminance value of the output image, and f is a nonlinear function; in strong-light suppression, a suitable nonlinear function is chosen to reduce the brightness of the strong-light region while keeping the brightness of the surrounding area unchanged or slightly enhancing it;
Finally, the highlight-suppressed image is further optimized through denoising, filtering, and sharpening steps to improve its overall quality and sharpness.
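A minimal sketch of the threshold-plus-nonlinear-transform idea, using NumPy. The threshold of 0.8 and the compression exponent are illustrative assumptions; the scheme only requires some nonlinear f that dims the strong-light region while leaving other pixels unchanged:

```python
import numpy as np

def suppress_highlights(lum, threshold=0.8, strength=2.0):
    """Nonlinear highlight suppression, L_out = f(L_in).

    Pixels whose luminance exceeds `threshold` (the detected strong-light
    region) are compressed toward the threshold by raising their normalized
    excess to the power `strength` (> 1); pixels at or below the threshold
    are left unchanged. `threshold` and `strength` would be tuned per
    image and application scene."""
    lum = np.asarray(lum, dtype=float)
    out = lum.copy()
    mask = lum > threshold                      # pixel-by-pixel detection
    t = (lum[mask] - threshold) / (1.0 - threshold)
    out[mask] = threshold + (1.0 - threshold) * t ** strength
    return out
```

Because the transform is the identity at the threshold and at full white, the compressed region joins its surroundings without a visible brightness seam.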
DSP, digital signal processing, is a technology that uses a digital computer or dedicated processing equipment to acquire, transform, filter, compress, and recognize signals in digital form. DSP processors have a specialized hardware and software structure, can execute operations such as addition, subtraction, multiplication, and division very quickly, and are particularly suitable for signal-processing tasks with strict real-time requirements.
A DSP chip integrates multiple functional units, can execute several instructions simultaneously, and operates at an extremely high speed, executing tens of millions of complex instructions per second. DSP chips are usually programmable and can be configured for different application requirements to realize flexible and diverse signal-processing algorithms. As manufacturing processes advance, the power consumption of DSP chips keeps decreasing, making them suitable for portable devices and embedded systems. DSP technology enables accurate signal processing and analysis, meeting the requirements of high-precision applications.
Using DSP technology to process the strong-light regions of an image adjusts the video's signal brightness into a normal range and prevents excessive contrast within the same image. This improves image quality, preserves detail information, improves visual comfort and image-recognition capability, and provides strong support for various image-processing applications.
The eye tracking and sensing technology in step S3 is a biomedical engineering technique that tracks and measures eye movements by monitoring small movements of the eyeballs. This makes it possible to capture the user's eye-movement trajectory in real time and thereby determine the user's gaze direction.
It will be appreciated that in human vision the foveal region is sharp, with small coverage but high visual acuity, while peripheral vision is blurred. Exploiting this characteristic, eye-tracking-based rendering renders at full quality only the image region at the focus of the user's gaze, improving sharpness and detail there, and renders the peripheral region at lower quality to reduce the load on the graphics processor. An eye-tracking system generally includes an eye-tracking sensor, a processing unit, and a display control module: the sensor captures eye-movement data, the processing unit processes and analyzes the data to determine the gaze direction, and the display control module then dynamically adjusts the image rendering area accordingly.
The step S4 comprises the following steps:
preprocessing data collected by an eyeball tracking sensor;
Extracting features related to the user's line of sight from the preprocessed data, including gaze point location;
Predicting the gaze point location of the user based on a linear regression model;
The field of view range is adjusted based on the predicted gaze point position.
It will be appreciated that predicting the gaze point location of a user based on the linear regression model uses the calculation formula

ŷ = β0 + β1·x1 + β2·x2 + ... + βn·xn

where ŷ is the predicted gaze point location (a two-dimensional coordinate, predicted separately for each axis); β0 is an intercept term; β1, β2, ..., βn are regression coefficients; and x1, x2, ..., xn are the coordinate values of previous gaze point positions.
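The regression above can be sketched along one coordinate axis with a least-squares fit; running the same procedure for x and y gives the two-dimensional prediction. The window size n=3 is an illustrative assumption:

```python
import numpy as np

def fit_gaze_predictor(history, n=3):
    """Least-squares fit of y_hat = b0 + b1*x1 + ... + bn*xn: each sliding
    window of n past gaze positions predicts the next one. `history` is a
    1-D array of past positions along a single axis; the same fit is run
    separately for the x and y axes."""
    history = np.asarray(history, dtype=float)
    X = np.array([history[i:i + n] for i in range(len(history) - n)])
    y = history[n:]
    A = np.hstack([np.ones((len(X), 1)), X])      # prepend intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)  # [b0, b1, ..., bn]
    return beta

def predict_next(beta, last_n):
    """Apply the fitted coefficients to the n most recent positions."""
    return beta[0] + float(np.dot(beta[1:], np.asarray(last_n, dtype=float)))
```

For a gaze drifting at a constant rate, the fitted model extrapolates the drift exactly, which is what lets the field of view be shifted ahead of the eye.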
Further, adjusting the field-of-view range based on the predicted gaze point position includes:

Assuming the coordinates of the current field-of-view center are (x_c, y_c) and the predicted gaze point is (x_g, y_g), the line-of-sight offset (Δx, Δy) is calculated as

Δx = x_g - x_c, Δy = y_g - y_c

From Δx and Δy, the direction in which the field of view should be adjusted can be determined, and the boundaries are shifted as

L' = L + k·Δx, T' = T + k·Δy, R' = R + k·Δx, B' = B + k·Δy

where L, T, R, B are respectively the left, upper, right, and lower boundary values of the original field of view; L', T', R', B' are the corresponding boundary values of the adjusted field of view; and k is a fixed adjustment proportion that determines the magnitude of the adjustment.
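The boundary-shift step can be written as a small helper; the value k=0.5 is an illustrative choice of the fixed adjustment proportion:

```python
def adjust_fov(bounds, center, gaze, k=0.5):
    """Shift the field-of-view rectangle toward the predicted gaze point.

    bounds = (left, top, right, bottom). The offset (dx, dy) between the
    predicted gaze point and the current view center is scaled by the fixed
    proportion k and added to every boundary, translating the window while
    keeping its size; k = 0.5 is an illustrative value."""
    left, top, right, bottom = bounds
    dx = gaze[0] - center[0]   # horizontal line-of-sight offset
    dy = gaze[1] - center[1]   # vertical line-of-sight offset
    return (left + k * dx, top + k * dy, right + k * dx, bottom + k * dy)
```

Because the same scaled offset is added to every boundary, the window slides toward the gaze point without changing its width or height.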
In expanding the user's field of view, the hardware design can also be optimized. A wide-angle lens design captures a wider angle by adjusting the focal length and lens shape, while optical correction techniques reduce image distortion. Increasing the resolution and refresh rate of the VR display significantly improves the sharpness and smoothness of the picture, and high-resolution display panels such as OLED or Mini LED provide higher color saturation and contrast, making the picture more vivid and lifelike. High-precision sensors such as gyroscopes, accelerometers, and magnetometers accurately track the user's head and hand movements for more natural interaction. A multi-screen splicing technique can stitch several small screens into a larger display area, with image-stitching and fusion algorithms providing seamless joins that widen the user's field of view. Finally, improving the materials and design of the VR headset, such as a soft head strap, breathable materials, and reasonable weight distribution, reduces the burden on the user's head and improves wearing comfort.
The step S5 includes applying the calculated viewing range adjustment parameters to a rendering engine of the virtual reality device, where the rendering engine recalculates the rendering area and size of the image according to the parameters, so as to dynamically adjust the viewing range.
Further, high-definition rendering techniques such as antialiasing and texture-detail enhancement are applied within the predicted field of view to ensure that these regions have higher resolution and richer detail, while the secondary (peripheral) field of view is rendered at a lower resolution. This significantly reduces the consumption of computing resources while keeping the overall picture smooth.
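The two-tier rendering idea can be sketched as a post-process blend: full resolution inside a foveal window around the gaze point, a coarsened copy elsewhere. A real engine would instead render the periphery at low resolution directly; `radius` and `scale` are illustrative parameters:

```python
import numpy as np

def foveated_blend(frame, gaze, radius=64, scale=4):
    """Keep full resolution inside a square foveal window around the gaze
    point and replace the periphery with a subsampled, block-expanded copy,
    mimicking a lower render quality outside the line of sight."""
    h, w = frame.shape[:2]
    # Low-quality periphery: take every `scale`-th pixel, then expand blocks.
    small = frame[::scale, ::scale]
    low = np.repeat(np.repeat(small, scale, axis=0), scale, axis=1)[:h, :w]
    out = low.copy()
    gx, gy = gaze
    y0, y1 = max(0, gy - radius), min(h, gy + radius)
    x0, x1 = max(0, gx - radius), min(w, gx + radius)
    out[y0:y1, x0:x1] = frame[y0:y1, x0:x1]   # foveal region stays sharp
    return out
```

Only the foveal window carries full-resolution pixels, so the amount of detailed image data scales with the window size rather than the whole frame.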
By dynamically adjusting the predicted rendering quality of different regions according to changes in the user's gaze, the rendering quality outside the line of sight can be reduced while the sharpness and detail of the image are preserved, saving computing resources and improving overall performance.
In summary, the method enhances dark-area detail by suppressing strong-light regions in the VR image, achieving an anti-dazzling effect, reducing the user's dizziness, and improving comfort. It dynamically predicts the user's future field-of-view requirements from the gaze direction, adjusts the field-of-view range of the image in advance, and improves image sharpness and detail, achieving a wide dynamic field-of-view effect, avoiding mismatches among visual, auditory, and motion perception, providing the user with a broader virtual-environment experience, and significantly improving the user experience of virtual reality devices.
The foregoing has shown and described the basic principles, principal features, and advantages of the invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above; the embodiments and descriptions are merely illustrative of the principles of the invention, and various changes and modifications may be made without departing from its spirit and scope. The scope of the invention is defined by the appended claims and their equivalents.

Claims (7)

1. An anti-dazzling wide dynamic technology application method based on virtual reality is characterized by comprising the following steps:
S1, acquiring an input VR image and preprocessing the image;
S2, suppressing the strong-light regions of the image based on DSP technology;
S3, capturing the user's eye-movement data in real time through the virtual reality device, and obtaining the user's gaze direction and eye-movement trajectory;
S4, predicting the user's future field-of-view requirements from the gaze direction and eye-movement trajectory information, and dynamically adjusting the field-of-view range of the image in advance;
S5, based on the predicted and adjusted field-of-view range, improving image sharpness and detail, and moderately reducing rendering quality in regions outside the line of sight to save computing resources.
2. The virtual-reality-based anti-dazzling wide dynamic technology application method according to claim 1, wherein preprocessing the image in step S1 includes denoising the image to reduce noise interference and enhancing properties such as contrast and brightness, so that the difference between the highlight portion and its surroundings becomes more obvious and subsequent processing becomes easier.
3. The virtual reality-based anti-glare wide dynamic technology application method according to claim 2, wherein the step S2 comprises:
The DSP processor analyzes the image pixel by pixel to detect strong-light regions, judging which pixels belong to a strong-light region by comparison with a set threshold; the threshold needs to be adjusted according to the actual image and the application scene;
The detected strong-light region is then suppressed: the DSP processor adjusts brightness values by applying a nonlinear transformation, generally expressed as

L_out = f(L_in)

where L_in is the luminance value of the input image, L_out is the luminance value of the output image, and f is a nonlinear function; in strong-light suppression, a suitable nonlinear function is chosen to reduce the brightness of the strong-light region while keeping the brightness of the surrounding area unchanged or slightly enhancing it;
Finally, the highlight-suppressed image is further optimized through denoising, filtering, and sharpening steps to improve its overall quality and sharpness.
4. A virtual reality-based anti-glare wide dynamic technology application method according to claim 3, characterized in that the step S4 comprises:
preprocessing data collected by an eyeball tracking sensor;
Extracting features related to the user's line of sight from the preprocessed data, including gaze point location;
Predicting the gaze point location of the user based on a linear regression model;
The field of view range is adjusted based on the predicted gaze point position.
5. The virtual-reality-based anti-dazzling wide dynamic technology application method according to claim 4, wherein predicting the gaze point location of the user based on the linear regression model uses the calculation formula

ŷ = β0 + β1·x1 + β2·x2 + ... + βn·xn

where ŷ is the predicted gaze point location (a two-dimensional coordinate, predicted separately for each axis); β0 is an intercept term; β1, β2, ..., βn are regression coefficients; and x1, x2, ..., xn are the coordinate values of previous gaze point positions.
6. The virtual-reality-based anti-dazzling wide dynamic technology application method of claim 5, wherein adjusting the field-of-view range based on the predicted gaze point location comprises:

Assuming the coordinates of the current field-of-view center are (x_c, y_c) and the predicted gaze point is (x_g, y_g), the line-of-sight offset (Δx, Δy) is calculated as

Δx = x_g - x_c, Δy = y_g - y_c

From Δx and Δy, the direction in which the field of view should be adjusted can be determined, and the boundaries are shifted as

L' = L + k·Δx, T' = T + k·Δy, R' = R + k·Δx, B' = B + k·Δy

where L, T, R, B are respectively the left, upper, right, and lower boundary values of the original field of view; L', T', R', B' are the corresponding boundary values of the adjusted field of view; and k is a fixed adjustment proportion that determines the magnitude of the adjustment.
7. The method of claim 6, wherein the step S5 comprises applying the calculated viewing range adjustment parameters to a rendering engine of the virtual reality device, and the rendering engine recalculates the rendering area and size of the image according to the calculated viewing range adjustment parameters, thereby realizing the dynamic adjustment of the viewing range.
CN202411956558.7A 2024-12-29 2024-12-29 Anti-dazzling wide dynamic technology application method based on virtual reality Pending CN119893066A (en)


Publications (1)

Publication Number: CN119893066A
Publication Date: 2025-04-25

Family

ID=95420383



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination