CN120686470A - Image adjustment method based on XR glasses, XR glasses, electronic device and medium - Google Patents

Image adjustment method based on XR glasses, XR glasses, electronic device and medium

Info

Publication number
CN120686470A
CN120686470A (application CN202510533787.6A)
Authority
CN
China
Prior art keywords
virtual target
distance
glasses
depth
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202510533787.6A
Other languages
Chinese (zh)
Inventor
高弋戈
陈杨
徐观
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Jishi Medical Technology Co ltd
Original Assignee
Suzhou Jishi Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Jishi Medical Technology Co ltd filed Critical Suzhou Jishi Medical Technology Co ltd
Priority to CN202510533787.6A
Publication of CN120686470A
Legal status: Pending

Abstract

The application relates to an image adjustment method based on XR glasses, the XR glasses themselves, an electronic device, and a medium. The XR glasses comprise a first display screen and a second display screen. The method obtains the user's pupil distance, the lens spacing of the XR glasses, the distance of a virtual target, and the focal length of the virtual target; calculates a convergence angle from the pupil distance, lens spacing, virtual-target distance, and virtual-target focal length; and adjusts the depth of field and position of the virtual target in the XR glasses according to the convergence angle, thereby alleviating the vergence conflict that arises during convergence adjustment and relieving the user's visual fatigue.

Description

Image adjustment method based on XR (Extended Reality) glasses, XR glasses, electronic device and medium
Technical Field
The application relates to the field of XR (Extended Reality) glasses, and in particular to an image adjustment method based on XR glasses, XR glasses, an electronic device, and a medium.
Background
Myopia is a worldwide vision problem associated with excessive accommodation of the eyeball. Prolonged near work keeps the eye muscles tense and the eyeball over-accommodated, which promotes elongation of the eye axis and eventually produces myopia. At present, myopia prevention and control relies mainly on wearing spectacles, orthokeratology lenses, and similar methods, supplemented by auxiliary means such as outdoor activity and eye exercises, but the effect is limited.
In recent years, with the rapid development of extended reality technology, research on using it for myopia prevention and control has also progressed. XR glasses supporting virtual-picture adjustment have appeared on the market; they enable naked-eye 3D training and achieve parallax fusion to a certain extent. However, such devices tend to induce visual fatigue in the user and can instead deepen myopia.
At present, the related art offers no effective solution to the problem of user eye fatigue caused by XR glasses.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an XR glasses-based image adjustment method, XR glasses, an electronic device, and a medium that can alleviate a user's visual fatigue.
In a first aspect, the present application provides an XR glasses-based image adjustment method, the XR glasses comprising a first display screen and a second display screen, the method comprising:
obtaining a user's pupil distance, the lens spacing of the XR glasses, the distance of a virtual target, and the focal length of the virtual target;
calculating a convergence angle according to the pupil distance, the lens spacing, the distance of the virtual target, and the focal length of the virtual target;
and adjusting the depth of field and the position of the virtual target in the XR glasses according to the convergence angle.
In one embodiment, calculating the convergence angle according to the user's pupil distance, the lens spacing of the XR glasses, the distance of the virtual target, and the focal length of the virtual target includes:
Adjusting the distance of the virtual target based on the lens spacing, and taking the adjusted distance of the virtual target as the opposite side of a right triangle;
adjusting the pupil distance based on the focal length of the virtual target, and taking the adjusted pupil distance as an adjacent side of the right triangle;
and calculating the angle of the included angle between the opposite side and the adjacent side to obtain the convergence angle.
In one embodiment, calculating the angle of the included angle between the opposite side and the adjacent side to obtain the convergence angle includes:
α = arctan((d×T)/(2×I×f)), where α represents the convergence angle, d represents the distance of the virtual target, T represents the lens spacing, I represents the pupil distance, and f represents the focal length of the virtual target.
In one embodiment, adjusting the depth of field and the position of the virtual target in the XR glasses according to the convergence angle includes:
calculating a target depth of field value of the virtual target in the XR glasses according to the convergence angle and the distance of the virtual target, and adjusting the focal length of the virtual target according to the target depth of field value;
and calculating a horizontal position offset according to the convergence angle, the pupil distance, and the lens spacing, and horizontally offsetting the virtual targets in the first display screen and the second display screen respectively according to the horizontal position offset.
In one embodiment, adjusting the focal length of the virtual target according to the target depth of field value includes:
detecting the current distance of the virtual target, and judging whether the current distance of the virtual target differs from the target depth of field value;
and if the current distance of the virtual target differs from the target depth of field value, applying Gaussian blur processing to the virtual target.
In one embodiment, after adjusting the depth of field and the position of the virtual target in the XR glasses according to the convergence angle, the method further comprises:
acquiring the user's historical training data and visual fatigue feedback data, wherein the historical training data comprises virtual target depth of field adjustment data and virtual target position adjustment data, and the visual fatigue feedback data comprises the user's physiological indicator data and subjective feedback data;
training a machine learning model with the historical training data and the visual fatigue feedback data as a training set, wherein the input of the machine learning model comprises the user's physiological indicator data and subjective feedback data, and the output comprises adjustment parameters for depth of field and position;
and inputting the current user's physiological indicator data and subjective feedback data into the trained machine learning model as prediction variables, predicting current adjustment parameters, and adjusting the response speed and amplitude of the depth of field and position adjustment of the virtual target in the XR glasses according to the current adjustment parameters.
In a second aspect, the present application further provides XR glasses comprising a sensor module, a processing module, and a display module, wherein the sensor module is used to collect the user's pupil distance, the processing module is used to execute the method of the first aspect, and the display module comprises a first display screen and a second display screen for displaying the same virtual target at different distances.
In one embodiment, the sensor module includes a pupil distance sensor and an eye tracking sensor.
In a third aspect, the present application also provides an electronic device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of the first aspect when executing the computer program.
In a fourth aspect, the present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method of the first aspect described above.
According to the above XR glasses-based image adjustment method, XR glasses, electronic device, and medium, when the XR glasses calculate the convergence angle they take into account not only the user's pupil distance and the distance of the virtual target but also the lens spacing of the glasses and the focal length of the virtual target. This ties the calculation to the individual differences among XR glasses and to how the virtual target is actually displayed, improving the accuracy of the convergence angle, so that the depth of field and position of the virtual target are adjusted more reasonably according to the convergence angle. The vergence conflict arising during convergence adjustment is thereby alleviated, visual fatigue is relieved, and myopia prevention and control are facilitated.
Drawings
FIG. 1 is a block diagram of the hardware architecture of a terminal for the image adjustment method in one embodiment;
FIG. 2 is a block diagram of XR glasses in one embodiment;
FIG. 3 is a flow diagram of an XR glasses-based image adjustment method in one embodiment;
FIG. 4 is a block diagram of XR glasses in another embodiment;
FIG. 5 is a schematic view of a convergence adjustment procedure based on XR glasses in one embodiment;
FIG. 6 is a schematic diagram of a convergence adjustment feedback training process based on XR glasses in one embodiment;
FIG. 7 is a schematic diagram of an eye training procedure based on XR glasses in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
Unless defined otherwise, technical or scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terms "a," "an," "the," "these" and similar terms in this application are not intended to be limiting in number, but may be singular or plural. The terms "comprises," "comprising," "includes," "including," "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, and system, article, or apparatus that comprises a list of steps or modules (units) is not limited to the list of steps or modules (units), but may include other steps or modules (units) not listed or inherent to such process, method, article, or apparatus. The terms "connected," "coupled," and the like in this disclosure are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as used herein means two or more. "and/or" describes the association relationship of the association object, and indicates that three relationships may exist, for example, "a and/or B" may indicate that a exists alone, a and B exist simultaneously, and B exists alone. Typically, the character "/" indicates that the associated object is an "or" relationship. The terms "first," "second," "third," and the like, as referred to in this disclosure, merely distinguish similar objects and do not represent a particular ordering for objects.
The method embodiments provided herein may be performed in an electronic device such as a terminal, a computer, or a similar computing device. Taking a terminal as an example, FIG. 1 is a block diagram of the hardware structure of a terminal for the image adjustment method according to an embodiment of the present application. As shown in FIG. 1, the terminal may include one or more processors 101 (only one is shown in FIG. 1) and a memory 102 for storing data, where the processor 101 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA). The terminal may further include a transmission device 103 for communication functions and an input-output device 104. Those skilled in the art will appreciate that the structure shown in FIG. 1 is merely illustrative and does not limit the structure of the terminal; for example, the terminal may include more or fewer components than shown in FIG. 1, or have a different configuration.
The memory 102 may be used to store a computer program, for example, a software program of application software and a module, such as a computer program corresponding to the XR glasses-based image adjustment method in the present embodiment, and the processor 101 executes the computer program stored in the memory 102 to perform various functional applications and data processing, that is, to implement the above-described method. Memory 102 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 102 may further include memory remotely located with respect to the processor 101, which may be connected to the terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 103 is used to receive or transmit data via a network. The network includes a wireless network provided by a communication provider of the terminal. In one example, the transmission device 103 includes a network adapter (Network Interface Controller, simply referred to as NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 103 may be a Radio Frequency (RF) module, which is used to communicate with the internet wirelessly.
Parallax refers to the difference in the apparent position of the same object viewed from different angles. Double-screen split vision uses the parallax principle to project images of the same scene from different viewing angles to the left and right eyes respectively, simulating real near and far object scenes. Specifically, the left and right display screens of the XR glasses display the left and right viewing-angle images of the same scene, with a certain parallax angle α between them. When the user wears the XR glasses, the left and right eyes receive the images from the left and right display screens respectively, and because of the parallax angle the user's brain synthesizes the two images into one stereoscopic image, producing a sense of depth. The relationship between the parallax angle, the object distance d, and the viewing-angle variation Δθ is as follows:
α=Δθ×d
The larger the parallax angle α, the stronger the sense of depth.
After the split-vision images form a certain parallax, the user's eyes automatically rotate inward when observing an object, so that the visual axes of both eyes intersect on the target object; this is the convergence adjustment process. The eyes obtain images of the same object from different viewing angles, which are then fused in the brain into a complete stereoscopic image, improving visual clarity and stereoscopic perception. For myopic patients, the diopter of the eyeball changes so that distant objects cannot be focused on the retina, affecting visual clarity. Convergence adjustment can help a myopic patient focus the vision on a target object, improving visual clarity.
The inward-rotation angle of the eyes, i.e., the convergence angle, is generally calculated from the user's pupil distance and the distance of the virtual target. For augmented reality glasses, virtual reality glasses, or other similar devices realizing binocular vision, the convergence angle is calculated as follows:
α=arctan(d/I)
where α is the convergence angle, d is the distance of the virtual target in meters, and I is the user's pupil distance.
The distance d of the virtual target is taken as the opposite side of a right triangle, the user's pupil distance I as the adjacent side, and the convergence angle α is the included angle between them. The farther the virtual target, the larger the value of d and the smaller the value of α, i.e., the smaller the angle through which the eyes rotate inward; conversely, the closer the virtual target, the smaller the value of d and the larger the value of α, i.e., the larger the angle through which the eyes rotate inward.
Double-screen split vision can suffer from vergence-accommodation conflict (VAC). When a viewer views a binocular image with horizontal parallax, the viewer's line of sight is focused on an imaging plane (e.g., a screen), while the three-dimensional virtual image perceived from that binocular image lies in front of or behind the imaging plane. When the line of sight is focused on the imaging plane, the refractive power of the viewer's eye is the actual refractive power; when it is focused on the perceived three-dimensional virtual image, the refractive power is the equivalent refractive power. The difference between the equivalent and actual refractive power is referred to as the vergence conflict. Naked-eye 3D training in the related art achieves parallax fusion, but it requires the user to stay in a fixed position without front-to-back movement, so a complete visual fusion point cannot be formed and the vergence conflict is aggravated. This easily causes visual fatigue, deepens myopia, and may even produce dizziness or other physiological discomfort.
Based on the above analysis, this embodiment provides XR (Extended Reality) glasses that drive coordinated movement of the user's extraocular muscles by changing the parallax; periodic training improves extraocular-muscle coordination and thereby the eye's accommodation capability. During training, the device addresses the vergence-conflict problem and relieves visual fatigue, making it suitable for myopia prevention and control. FIG. 2 is a block diagram of the XR glasses, which include a sensor module 1, a processing module 2, and a display module 3. The sensor module 1 collects the user's interpupillary distance; the processing module 2 performs image adjustment; the display module 3 includes a first display screen 31 and a second display screen 32, which can display the same virtual target at different distances. Specifically, the processing module 2 receives the sensing data collected by the sensor module 1, performs image processing and picture rendering according to the sensing data, and adjusts the display parameters of the display module 3 to realize image adjustment, which includes convergence adjustment. Optionally, image adjustment may also include pulling the picture distance farther and pupil-distance correction. In this embodiment, the XR glasses may be AR (Augmented Reality) glasses, VR (Virtual Reality) glasses, or MR (Mixed Reality) glasses, which is not limited here.
FIG. 3 shows the XR glasses-based image adjustment method of this embodiment; the process includes the following steps:
step S101, obtaining a pupil distance of a user, a lens pitch of XR glasses, a distance of a virtual target, and a focal length of the virtual target.
When the user wears the XR glasses and the XR glasses start to operate, the sensor module 1 collects the user's interpupillary distance, i.e., the distance between the pupil centers of the two eyes. The lens spacing of the XR glasses refers to the distance between the centers of the two lenses. The distance of the virtual target refers to the distance between the perceived position of the virtual target in the user's field of view and the user's eyes. The focal length of the virtual target refers to the focal distance of the user's eyes when directed at the virtual target.
Step S102, calculating the convergence angle according to the pupil distance, the lens spacing, the distance of the virtual target, and the focal length of the virtual target.
The processing module 2 adjusts the distance of the virtual target based on the lens spacing and takes the adjusted distance as the opposite side of a right triangle; it adjusts the pupil distance based on the focal length of the virtual target and takes the adjusted pupil distance as the adjacent side; it then calculates the included angle between the opposite side and the adjacent side to obtain the convergence angle. The specific calculation formula is as follows:
α=arctan((d×T)/(2×I×f))
where α represents the convergence angle in degrees; d represents the distance of the virtual target in meters; T represents the lens spacing in meters; I represents the pupil distance in meters; and f represents the focal length of the virtual target in meters.
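To make the calculation concrete, the following minimal Python sketch (not part of the patent; the numeric values are assumed examples) evaluates this formula:

```python
import math

def convergence_angle(d, T, I, f):
    """Convergence angle per the formula above:
    alpha = arctan((d * T) / (2 * I * f)),
    with all lengths in meters and the result in degrees."""
    return math.degrees(math.atan((d * T) / (2 * I * f)))

# Assumed example values: 0.5 m target distance, 65 mm lens spacing,
# 63 mm pupil distance, 0.4 m virtual-target focal length.
alpha = convergence_angle(d=0.5, T=0.065, I=0.063, f=0.4)
print(f"convergence angle = {alpha:.1f} degrees")   # about 32.8
```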
Step S103, adjusting the depth of field and position of the virtual target in the XR glasses according to the convergence angle.
In this step, the adjustment parameters of the virtual target depth of field and the position, including the target depth of field value and the horizontal position offset, may be determined according to the convergence angle. Wherein, the perceived depth of the virtual target can be adjusted based on the target depth of field value, and the parallax offset between the left and right eyes with respect to the virtual target can be adjusted based on the horizontal position offset.
In steps S101 to S103, when the XR glasses calculate the convergence angle, they consider not only the user's pupil distance and the distance of the virtual target but also the lens spacing of the glasses and the focal length of the virtual target. The calculation is thereby tied to the individual differences among XR glasses and to how the virtual target is actually displayed, improving the precision of the convergence angle, so the depth of field and position of the virtual target are adjusted more reasonably. The vergence conflict arising during convergence adjustment is alleviated, visual fatigue is relieved, and myopia prevention and control are facilitated.
In one embodiment, FIG. 4 provides a block diagram of another XR glasses configuration, in which the sensor module 1 comprises an interpupillary distance sensor 11 and an eye tracking sensor 12. The interpupillary distance sensor 11 collects the user's interpupillary distance, and the eye tracking sensor 12 collects the user's eye movement information. The XR glasses further comprise a control module 4 for controlling the operation of the XR glasses and interacting with the user.
In one implementation, FIG. 5 provides a convergence adjustment flow chart based on XR glasses, which may run on the XR glasses of FIG. 4. As shown in FIG. 5, the process comprises the following steps:
Step S201, a target depth of field value of the virtual target in the XR glasses is calculated according to the convergence angle and the distance of the virtual target, and the focal length of the virtual target is adjusted according to the target depth of field value.
The target depth of field value D′ can be calculated from the convergence angle α and the distance d of the virtual target through a first mapping relationship, as follows:
D′=d×cos(α)
After the target depth of field value D′ is calculated, the perceived depth of the virtual target is adjusted by tuning the focal length f of the virtual target to match D′. In practice, the rendering parameters of the two display screens in the display module 3 can be adjusted dynamically so that the focus of the virtual target lies at the calculated depth of field position.
In some embodiments, the current distance of the virtual target can also be detected in real time and compared with the target depth of field value; if they differ, Gaussian blur processing is applied to the virtual target to simulate the natural blur gradient of eye focusing and keep the visual perception consistent while the user's eyes focus.
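As an illustrative sketch of step S201 (Python with OpenCV; the blur-strength mapping and the 1 cm tolerance are assumptions, not values given by the patent):

```python
import math
import cv2
import numpy as np

def target_depth(d, alpha_deg):
    """Target depth of field value per the first mapping: D' = d * cos(alpha)."""
    return d * math.cos(math.radians(alpha_deg))

def render_depth_cue(image, current_distance, d, alpha_deg, tol=0.01):
    """Apply Gaussian blur when the virtual target's current distance
    deviates from the target depth of field value (tol in meters is assumed),
    simulating the natural out-of-focus gradient of the eye."""
    d_prime = target_depth(d, alpha_deg)
    mismatch = abs(current_distance - d_prime)
    if mismatch > tol:
        k = 2 * int(mismatch * 100) + 1   # odd kernel size, grows with mismatch
        return cv2.GaussianBlur(image, (k, k), 0)
    return image

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a rendered frame
out = render_depth_cue(frame, current_distance=0.45, d=0.5, alpha_deg=32.8)
```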
Step S202, calculating a horizontal position offset according to the convergence angle, the pupil distance, and the lens spacing, and horizontally offsetting the virtual targets in the first display screen and the second display screen respectively according to the horizontal position offset.
The horizontal position offset Δx can be calculated from the convergence angle α, the user's pupil distance I, and the lens spacing T through a second mapping relationship, as follows:
Δx=(I×tan(α))/2
After the horizontal position offset Δx is calculated, the virtual targets in the first display screen 31 and the second display screen 32 are horizontally offset by +Δx and −Δx respectively, to match the parallax required by the convergence angle α, realizing the parallax offset between the left-eye and right-eye images. In practice, the horizontal position offset can be calibrated dynamically using data collected in real time by the eye tracking sensor 12, ensuring that the position of the virtual target along the user's line of sight remains stable and avoiding visual misalignment caused by slight head movements.
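A corresponding sketch for step S202 (Python; converting meters to pixels requires a display pitch, which is an assumed parameter here):

```python
import math

def horizontal_offset(I, alpha_deg):
    """Horizontal position offset per the second mapping:
    dx = (I * tan(alpha)) / 2, in meters."""
    return (I * math.tan(math.radians(alpha_deg))) / 2

def per_screen_offsets(I, alpha_deg, meters_per_pixel=5e-5):
    """Shift the virtual target by +dx on the first display screen and
    -dx on the second (meters_per_pixel is an assumed display property)."""
    dx = horizontal_offset(I, alpha_deg)
    px = round(dx / meters_per_pixel)
    return +px, -px

first_px, second_px = per_screen_offsets(I=0.063, alpha_deg=32.8)
```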
In some embodiments, after the depth of field and position of the virtual target in the XR glasses are adjusted according to the convergence angle, the user's vergence state can be monitored by the eye tracking sensor 12; if the deviation of the current convergence angle from the target value exceeds a threshold, the depth of field and position of the virtual target in the XR glasses are recalculated and adjusted.
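One possible shape of this monitoring loop (Python; `read_convergence_angle` and `readjust` stand in for the sensor read-out and the recomputation routine, and the 1° threshold is an assumption):

```python
import time

DEVIATION_THRESHOLD_DEG = 1.0   # assumed threshold, not specified by the patent

def monitor_vergence(read_convergence_angle, target_angle_deg, readjust,
                     period_s=0.1):
    """Poll the eye tracking sensor; when the measured convergence angle
    deviates from the target by more than the threshold, trigger a
    recomputation of the virtual target's depth of field and position."""
    while True:
        if abs(read_convergence_angle() - target_angle_deg) > DEVIATION_THRESHOLD_DEG:
            readjust()
        time.sleep(period_s)
```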
Through steps S201 to S202, the XR glasses accurately adapt to the user's physiological characteristics and the display requirements of the virtual target, dynamically optimizing depth of field and position. This effectively mitigates the conflict in the convergence adjustment process, reducing asthenopia and improving the myopia-control effect.
In some embodiments, fig. 6 provides a vergence adjustment feedback training method based on XR glasses, as shown in fig. 6, after adjusting the depth of field and the position of the virtual target in the XR glasses according to the vergence angle, the method further includes the following steps:
step S301, historical training data and asthenopia feedback data of a user are obtained.
The historical training data include virtual target depth of field adjustment data and virtual target position adjustment data; the visual fatigue feedback data include the user's physiological indicator data and subjective feedback data.
step S302, training a machine learning model by taking the historical training data and the asthenopia feedback data as training sets.
The input of the machine learning model comprises physiological index data and subjective feedback data of a user, and the output of the machine learning model comprises adjustment parameters of depth of field and position.
Step S303, the current user's physiological indicator data and subjective feedback data are input into the trained machine learning model as prediction variables; the current adjustment parameters are predicted, and the response speed and amplitude of the depth of field and position adjustment of the virtual target in the XR glasses are adjusted accordingly.
In this embodiment, the response speed and amplitude of depth of field and position adjustment are adaptively optimized according to the user's historical training data and visual fatigue feedback data, improving the comfort of long-term XR-glasses use.
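As a sketch of steps S301–S303 (Python with scikit-learn; the feature layout and the choice of a random-forest regressor are assumptions — the patent does not name a model type):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training set. Features per sample: blink rate (per min),
# pupil diameter (mm), subjective fatigue score (1-5). Targets per sample:
# response speed and amplitude for depth of field and position adjustment.
X_train = np.array([[12.0, 3.1, 2],
                    [18.0, 2.8, 4],
                    [15.0, 3.0, 3]])
y_train = np.array([[0.8, 0.5, 0.9, 0.4],
                    [0.4, 0.2, 0.5, 0.2],
                    [0.6, 0.3, 0.7, 0.3]])

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Predict current adjustment parameters from the current user's feedback.
current = np.array([[14.0, 2.9, 3]])
speed_and_amplitude = model.predict(current)[0]
```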
In one embodiment, in addition to dual-screen split-vision training and vergence adjustment training, the XR glasses also support far-viewing training and interpupillary-distance correction training.
Dual-screen split-vision training uses the dual-screen split-vision function to simulate real near and far object scenes, effectively exercising the eyes' accommodation function and improving the elasticity and coordination of the eye muscles.
Vergence adjustment training trains the eyes' convergence function through the dual-screen split-vision function, i.e., both eyes adjust to focus on the target object simultaneously, bringing the experience closer to natural vision.
Far-viewing training pulls the picture distance farther away: by adjusting the focal length of the virtual target, the target is projected to a position no closer than 5 meters. During training, the user can view distant scenes through the dual-screen split-vision function, avoiding prolonged staring at a near virtual picture and effectively protecting eyesight. The mapping of a point in three-dimensional space onto a two-dimensional plane by perspective projection is formulated as follows:
x'=x×f/(z+f)
y'=y×f/(z+f)
where (x, y, z) are the point's coordinates in three-dimensional space, (x′, y′) are its coordinates on the two-dimensional plane, and f is the focal length. By adjusting the focal length f, the projection position of the picture on the two-dimensional plane can be changed, realizing the effect of pulling the picture distance farther.
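The projection can be written directly as code (Python; the sample point and focal lengths are illustrative values):

```python
def project(x, y, z, f):
    """Perspective projection onto the two-dimensional plane:
    x' = x * f / (z + f), y' = y * f / (z + f)."""
    s = f / (z + f)
    return x * s, y * s

# Increasing the focal length f moves the projected picture,
# simulating a farther viewing distance (far-viewing training).
near_pt = project(1.0, 0.5, 2.0, f=0.4)
far_pt = project(1.0, 0.5, 2.0, f=5.0)
```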
Interpupillary-distance correction training automatically adjusts the depth of field and picture position of the virtual image according to the user's pupil distance, so that users with different pupil distances obtain the best visual experience and the training effect is improved. The pupil-distance correction is calculated as follows:
d'=d×(I+T)/(2×I)
where d′ is the optimal depth of field of the virtual image, d is the actual distance of the target object, I is the user's pupil distance, and T is the lens spacing of the XR glasses. Using this formula, the XR glasses automatically adjust the depth of field and picture position of the virtual image, ensuring that users with different interpupillary distances obtain a clear stereoscopic visual experience and improving the training effect.
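A minimal sketch of this correction (Python; the numeric values are assumed examples):

```python
def corrected_depth(d, I, T):
    """Optimal depth of field of the virtual image per the formula above:
    d' = d * (I + T) / (2 * I)."""
    return d * (I + T) / (2 * I)

# Assumed example: 0.5 m target distance, 63 mm pupil distance,
# 65 mm lens spacing -> optimal depth of field of about 0.51 m.
d_opt = corrected_depth(d=0.5, I=0.063, T=0.065)
```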
In one embodiment, the XR glasses can adjust the training scheme in real time according to data such as the user's eye condition and training effect, improving training efficiency and efficacy. Illustratively, FIG. 7 provides an XR glasses-based eye training method comprising the following steps:
In step S401, the user wears XR glasses and starts the system.
In step S402, the XR glasses recognize the user pupil distance, and automatically adjust the depth of field and the frame position of the virtual image according to the pupil distance information.
In step S403, the XR glasses adjust the distance of the picture in real time according to the eye movement of the user and the preset parameters of the system, and perform the distance adjustment training through the dual-screen split vision function.
Step S404, the user performs corresponding visual training according to the system prompt. Such as identifying objects of different distances, performing visual discrimination, etc.
Step S405, the system adjusts the training scheme according to the training effect of the user. Such as increasing the difficulty of training, extending the training time, etc., until the desired control effect is achieved.
In this implementation, the user views distant virtual images through the XR glasses and performs parallax and vergence adjustment training via the dual-screen split-vision function, thereby achieving the goal of myopia control.
In addition, in combination with the XR glasses-based image adjustment method provided in the above embodiments, this embodiment may provide a storage medium. The storage medium stores a computer program which, when executed by a processor, implements any of the XR glasses-based image adjustment methods of the previous embodiments.
In one embodiment, the computer program when executed by a processor performs the steps of:
obtaining a user's pupil distance, the lens spacing of the XR glasses, the distance of a virtual target, and the focal length of the virtual target;
calculating a convergence angle according to the pupil distance, the lens spacing, the distance of the virtual target, and the focal length of the virtual target;
and adjusting the depth of field and the position of the virtual target in the XR glasses according to the convergence angle.
In one embodiment, the computer program when executed by a processor performs the steps of:
Adjusting the distance of the virtual target based on the lens spacing, and taking the adjusted distance of the virtual target as the opposite side of the right triangle;
adjusting the pupil distance based on the focal length of the virtual target, and taking the adjusted pupil distance as an adjacent side of the right triangle;
And calculating the angle of the included angle between the opposite side and the adjacent side to obtain the convergence angle.
In one embodiment, the computer program when executed by a processor performs the steps of:
calculating a target depth of field value of the virtual target in the XR glasses according to the convergence angle and the distance of the virtual target, and adjusting the focal length of the virtual target according to the target depth of field value;
and calculating a horizontal position offset according to the convergence angle, the pupil distance, and the lens spacing, and horizontally offsetting the virtual targets in the first display screen and the second display screen respectively according to the horizontal position offset.
In one embodiment, the computer program when executed by a processor performs the steps of:
detecting the current distance of the virtual target, and judging whether the current distance of the virtual target differs from the target depth of field value;
and if the current distance of the virtual target differs from the target depth of field value, applying Gaussian blur processing to the virtual target.
In one embodiment, the computer program when executed by a processor performs the steps of:
acquiring the user's historical training data and visual fatigue feedback data, wherein the historical training data comprises virtual target depth of field adjustment data and virtual target position adjustment data, and the visual fatigue feedback data comprises the user's physiological indicator data and subjective feedback data;
training a machine learning model with the historical training data and the visual fatigue feedback data as a training set, wherein the input of the machine learning model comprises the user's physiological indicator data and subjective feedback data, and the output comprises adjustment parameters for depth of field and position;
and inputting the current user's physiological indicator data and subjective feedback data into the trained machine learning model as prediction variables, predicting current adjustment parameters, and adjusting the response speed and amplitude of the depth of field and position adjustment of the virtual target in the XR glasses according to the current adjustment parameters.
In one embodiment, the computer program when executed by a processor performs the steps of:
adjusting the focal length of the virtual target, and projecting the virtual target to a target position no closer than 5 meters.
The user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or sufficiently authorized by each party.
Those skilled in the art will appreciate that implementing all or part of the above methods may be accomplished by a computer program stored on a non-transitory computer-readable storage medium which, when executed, may comprise the steps of the method embodiments above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take various forms such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, data processing logic units based on quantum computing, and the like, but are not limited thereto.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations are described; however, as long as a combination of technical features is not contradictory, it should be considered within the scope of this specification.
The foregoing examples illustrate only a few embodiments of the application and are described in detail, but they are not therefore to be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the spirit of the application, all of which fall within its protection scope. Accordingly, the scope of the application should be determined by the appended claims.

Claims (10)

1. An image adjustment method based on XR glasses, wherein the XR glasses comprise a first display screen and a second display screen, the method comprising: obtaining a user's pupil distance, the lens spacing of the XR glasses, the distance of a virtual target, and the focal length of the virtual target; calculating a convergence angle according to the pupil distance, the lens spacing, the distance of the virtual target, and the focal length of the virtual target; and adjusting the depth of field and position of the virtual target in the XR glasses according to the convergence angle.
2. The XR glasses-based image adjustment method according to claim 1, wherein calculating the convergence angle according to the user's pupil distance, the lens spacing of the XR glasses, the distance of the virtual target, and the focal length of the virtual target comprises: adjusting the distance of the virtual target based on the lens spacing, and using the adjusted distance of the virtual target as the opposite side of a right triangle; adjusting the pupil distance based on the focal length of the virtual target, and using the adjusted pupil distance as the adjacent side of the right triangle; and calculating the angle between the opposite side and the adjacent side to obtain the convergence angle.
3. The XR glasses-based image adjustment method according to claim 2, wherein calculating the angle between the opposite side and the adjacent side to obtain the convergence angle comprises: α = arctan((d×T)/(2×I×f)), where α represents the convergence angle, d represents the distance of the virtual target, T represents the lens spacing, I represents the pupil distance, and f represents the focal length of the virtual target.
4. The XR glasses-based image adjustment method according to claim 1, wherein adjusting the depth of field and position of the virtual target in the XR glasses according to the convergence angle comprises: calculating a target depth of field value of the virtual target in the XR glasses according to the convergence angle and the distance of the virtual target, and adjusting the focal length of the virtual target according to the target depth of field value; and calculating a horizontal position offset according to the convergence angle, the pupil distance, and the lens spacing, and horizontally offsetting the virtual target in the first display screen and the second display screen respectively according to the horizontal position offset.
5. The XR glasses-based image adjustment method according to claim 4, wherein adjusting the focal length of the virtual target according to the target depth of field value comprises: detecting the current distance of the virtual target, and judging whether the current distance of the virtual target differs from the target depth of field value; and if so, applying Gaussian blur processing to the virtual target.
6. The XR glasses-based image adjustment method according to claim 1, wherein after adjusting the depth of field and position of the virtual target in the XR glasses according to the convergence angle, the method further comprises: acquiring the user's historical training data and visual fatigue feedback data, wherein the historical training data comprises virtual target depth of field adjustment data and virtual target position adjustment data, and the visual fatigue feedback data comprises the user's physiological indicator data and subjective feedback data; training a machine learning model with the historical training data and the visual fatigue feedback data as a training set, wherein the input of the machine learning model comprises the user's physiological indicator data and subjective feedback data, and the output comprises adjustment parameters for depth of field and position; and inputting the current user's physiological indicator data and subjective feedback data into the trained machine learning model as prediction variables, predicting current adjustment parameters, and adjusting the response speed and amplitude of the depth of field and position adjustment of the virtual target in the XR glasses according to the current adjustment parameters.
7. XR glasses, comprising: a sensor module, a processing module, and a display module, wherein the sensor module is used to collect a user's pupil distance; the processing module is used to execute the method of any one of claims 1 to 6; and the display module comprises a first display screen and a second display screen for displaying the same virtual target at different distances.
8. The XR glasses according to claim 7, wherein the sensor module comprises a pupil distance sensor and an eye tracking sensor.
9. An electronic device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method according to any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
CN202510533787.6A 2025-04-25 2025-04-25 Image adjustment method based on XR glasses, XR glasses, electronic device and medium Pending CN120686470A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510533787.6A CN120686470A (en) 2025-04-25 2025-04-25 Image adjustment method based on XR glasses, XR glasses, electronic device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202510533787.6A CN120686470A (en) 2025-04-25 2025-04-25 Image adjustment method based on XR glasses, XR glasses, electronic device and medium

Publications (1)

Publication Number Publication Date
CN120686470A 2025-09-23

Family

ID=97071998

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510533787.6A Pending CN120686470A (en) 2025-04-25 2025-04-25 Image adjustment method based on XR glasses, XR glasses, electronic device and medium

Country Status (1)

Country Link
CN (1) CN120686470A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN120871445A (en) * 2025-09-26 2025-10-31 歌尔股份有限公司 Optical adjustment method, head display device, and computer-readable storage medium
CN120871445B (en) * 2025-09-26 2026-01-30 歌尔股份有限公司 Optical adjustment method, head display device, and computer-readable storage medium

Similar Documents

Publication Publication Date Title
US12326570B2 (en) Eyewear devices with focus tunable lenses
JP7094266B2 (en) Single-depth tracking-accommodation-binocular accommodation solution
US7428001B2 (en) Materials and methods for simulating focal shifts in viewers using large depth of focus displays
CN108600733B (en) Naked eye 3D display method based on human eye tracking
US9629539B2 (en) Eyeglasses-wearing simulation method, program, device, eyeglass lens-ordering system and eyeglass lens manufacturing method
CN106484116B (en) Method and device for processing media files
US11570426B2 (en) Computer-readable non-transitory storage medium, web server, and calibration method for interpupillary distance
JP2020202569A (en) Virtual eyeglass set for viewing actual scene that corrects for different location of lenses than eyes
CN106526867B (en) Video screen display control method, device, and head-mounted display device
CN107272200A (en) A kind of focal distance control apparatus, method and VR glasses
CN108124509B (en) Image display method, wearable intelligent device and storage medium
CN120686470A (en) Image adjustment method based on XR glasses, XR glasses, electronic device and medium
Celikcan et al. Attention-aware disparity control in interactive environments
CN111757089A (en) Method and system for rendering images using pupil enhancement adjustment of the eye
CN117412020A (en) Parallax adjustment method, device, storage medium and computing device
US20130321389A1 (en) System and method for 3d imaging
CN205485061U (en) Can focus simultaneously and virtual reality glasses of interpupillary distance
CN109031667B (en) Virtual reality glasses image display area transverse boundary positioning method
Wu et al. Depth-disparity calibration for augmented reality on binocular optical see-through displays
CN116097644A (en) 2D digital image capturing system and analog 3D digital image sequence
Jin et al. Comparison of Differences between Human Eye Imaging and HMD Imaging
CN120635370A (en) Myopia intervention training image processing method, XR glasses and storage medium
JP2025158970A (en) Variable focus augmented reality imaging device and imaging method
KR20250148497A (en) Varifocal extended reality image device and method for providing image
CN114052642A (en) A naked eye 3D spinal endomicroscope system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination