CN119450190A - Method, device, equipment and medium for adjusting the image combining distance of a smart wearable device
Abstract
The application provides a method, a device, equipment and a medium for adjusting the image combining distance of a smart wearable device. According to the method, a first image combining distance is calculated from the first pupil distance of the target user, so the image combining distance calculation is customized for the target user and the applicability of the smart wearable device is improved. An adjustment amount of the image combining distance is then calculated from the current scene depth, the current parallax of the target user and the second pupil distance, so the image combining distance can change dynamically with the actual use environment, enhancing the adaptability and flexibility of the device display. By adjusting the image combining distance of the smart wearable device, the display position of the target display object is matched to the sight distance of the target user, which improves the positioning accuracy of the target display object, reduces the user's visual fatigue and improves viewing comfort.
Description
Technical Field
The application relates to the technical field of intelligent display, and in particular to a method, device, equipment and medium for adjusting the image combining distance of a smart wearable device.
Background
Augmented Reality (AR) technology superimposes virtual images onto the real world and is commonly applied in smart wearable devices such as AR and VR headsets. At present, augmented reality technology is widely used in fields such as gaming, education, medical treatment and construction.
In current smart wearable devices, the image combining distance is typically fixed. A fixed image combining distance is difficult to adapt to the pupil distance differences between users and cannot be adjusted dynamically for the scene during actual use. It therefore easily causes problems such as user visual fatigue and inaccurate positioning of virtual objects, degrading the user experience.
Therefore, how to improve the flexibility of adjusting the image combining distance of smart wearable devices has become a technical problem to be solved.
Disclosure of Invention
The application provides a method, a device, equipment and a medium for adjusting the image combining distance of intelligent wearable equipment, aiming at improving the flexibility of adjusting the image combining distance of the intelligent wearable equipment.
In a first aspect, the present application provides a method for adjusting the image combining distance of a smart wearable device, where the method includes:
calculating a first image combining distance of the smart wearable device based on a first pupil distance of the target user;
calculating a first adjustment amount of the image combining distance based on the current scene depth, the current parallax of the target user and the second pupil distance of the target user;
calculating a target image combining distance based on the first image combining distance and the first adjustment amount of the image combining distance;
and adjusting the image combining distance of the smart wearable device based on the target image combining distance, so that the display position of the target display object matches the sight distance of the target user.
In a second aspect, the present application further provides an image combining distance adjusting device of a smart wearable device, where the device includes:
a first image combining distance calculation module, used for calculating a first image combining distance of the smart wearable device based on a first pupil distance of the target user;
a first adjustment amount calculation module, used for calculating a first adjustment amount of the image combining distance based on the current scene depth, the current parallax of the target user and the second pupil distance of the target user;
a target image combining distance module, used for calculating a target image combining distance based on the first image combining distance and the first adjustment amount of the image combining distance;
and an image combining distance adjusting module, used for adjusting the image combining distance of the smart wearable device based on the target image combining distance, so that the display position of the target display object matches the sight distance of the target user.
In a third aspect, the present application further provides a smart wearable device, where the smart wearable device includes a processor, a memory, and a computer program stored on the memory and executable by the processor, and the steps of the image combining distance adjusting method of the smart wearable device described above are implemented when the computer program is executed by the processor.
In a fourth aspect, the present application further provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the image combining distance adjusting method of the smart wearable device described above.
The application provides a method, a device, equipment and a medium for adjusting the image combining distance of a smart wearable device. According to the method, the first image combining distance is calculated from the first pupil distance of the target user, so the image combining distance calculation is customized for the target user and the applicability of the smart wearable device is improved. The image combining distance of the smart wearable device is then adjusted by the calculated adjustment amount so that the display position of the target display object matches the sight distance of the target user. The image combining distance thus changes dynamically with the actual use environment, which improves the flexibility of the adjustment and the positioning accuracy of the target display object, enhances the adaptability and flexibility of the device display, reduces the user's visual fatigue and improves viewing comfort.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; a person skilled in the art may derive other drawings from them without inventive effort.
Fig. 1 is a schematic flowchart of a first embodiment of an image combining distance adjusting method of a smart wearable device according to an embodiment of the present application;
Fig. 2 is a schematic flowchart of a second embodiment of an image combining distance adjusting method of a smart wearable device according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of a first embodiment of an image combining distance adjusting device of a smart wearable device according to an embodiment of the present application;
Fig. 4 is a schematic block diagram of a structure of a smart wearable device according to an embodiment of the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The flow diagrams depicted in the figures are merely illustrative and not necessarily all of the elements and operations/steps are included or performed in the order described. For example, some operations/steps may be further divided, combined, or partially combined, so that the order of actual execution may be changed according to actual situations.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of a first embodiment of an image combining distance adjusting method of a smart wearable device according to an embodiment of the present application.
As shown in Fig. 1, the image combining distance adjusting method of the smart wearable device includes steps S101 to S104.
S101, calculating a first image combining distance of the smart wearable device based on a first pupil distance of a target user;
In an embodiment, the first pupil distance of the target user may be acquired by a sensor device built into the smart wearable device, or the target user may actively input the first pupil distance data into the smart wearable device (e.g., AR glasses).
In an embodiment, the sensor device may be an image sensor, such as a CCD (Charge-Coupled Device) or CMOS (Complementary Metal-Oxide-Semiconductor) sensor.
In an embodiment, the first pupil distance may be the pupil distance measured when the target user wears the smart wearable device for the first time, the pupil distance last acquired during the previous wearing, or the pupil distance first acquired during the current wearing.
In an embodiment, the target user may accordingly be the user wearing the smart wearable device for the first time, the user who wore it most recently, or the user currently wearing it.
In general, the interpupillary distance (IPD) refers to the distance between the centers of the pupils of the two eyes. Accurate interpupillary distance data is crucial for adjusting the image display pose of a smart wearable device: it helps avoid image distortion and visual fatigue and improves the wearing experience.
In an embodiment, standard pupil distance data may be preset in the smart wearable device before it leaves the factory, and a preset image combining distance is then set according to this standard pupil distance. For example, the pupil distance of a typical adult male is between 60 and 73 mm, and that of an adult female between 53 and 68 mm. The standard pupil distance data may be determined from empirical values or from collected large-scale pupil distance measurements.
In an embodiment, the target user may actively input the first pupil distance when wearing the smart wearable device, or the first pupil distance may be collected by a built-in image sensor. The image combining distance adapted to the target user is calculated from the difference between the first pupil distance and the preset pupil distance, and the preset image combining distance is adjusted accordingly, so that the display content of the smart wearable device is presented at the adjusted image combining distance. This improves the display quality and the target user's experience.
Illustratively, the second adjustment amount of the image combining distance is determined according to the difference between the first pupil distance and the preset pupil distance.
In an embodiment, the second adjustment amount of the image combining distance may be calculated as follows:
First, determine the pupil distance difference between the first pupil distance and the preset pupil distance:
difference = first pupil distance − preset pupil distance
Then, calculate the adjustment amount: the second adjustment amount of the image combining distance is calculated from the pupil distance difference and specific parameters of the optical system (such as the magnification and focal length).
For example, the second adjustment amount of the image combining distance may be calculated from the pupil distance difference scaled by the magnification. The magnification refers to the proportional relation between the size of the image formed by the imaging system and the actual size of the object, i.e., how much the smart wearable device magnifies the display content.
The first image combining distance is then calculated based on the preset image combining distance and the second adjustment amount of the image combining distance.
The calculation formula of the first image combining distance may be:
D1 = D0 + ΔD1
where D1 represents the first image combining distance, ΔD1 the second adjustment amount of the image combining distance, and D0 the preset image combining distance.
In one embodiment, ΔD1 may be determined by an empirical formula or a calibration process, typically based on the pupil distance difference between the first pupil distance and the preset pupil distance.
Further, a preset pupil distance and a preset image combining distance of the smart wearable device are obtained; a second adjustment amount of the image combining distance is determined based on the difference between the first pupil distance and the preset pupil distance; and the first image combining distance is determined based on the preset image combining distance and the second adjustment amount.
For example, the correspondence between the pupil distance difference and the image combining distance adjustment amount may be learned from large-scale data relating pupil distance to image combining distance, or derived with the preset pupil distance and its corresponding preset image combining distance as a reference. An empirical formula constructed from this correspondence then serves as the basis for calculating the second adjustment amount ΔD1 of the image combining distance.
For example, the empirical formula for the second adjustment amount ΔD1 of the image combining distance can be expressed as:
ΔD1 = λ0 · (d1 − d0)
where d1 represents the first pupil distance of the target user, d0 the preset pupil distance of the smart wearable device, and λ0 the conversion coefficient between the pupil distance difference and the image combining distance adjustment amount.
It should be understood that the empirical formula provided here is merely illustrative and does not represent actual use. The correspondence between the pupil distance difference and the image combining distance adjustment amount (i.e., the way the adjustment amount is calculated) may be determined according to the actual application of the smart wearable device.
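As an illustrative sketch of the above calculation in Python, the following computes the second adjustment amount and the first image combining distance; the conversion coefficient and all numeric values are hypothetical placeholders rather than calibrated device parameters:

```python
# Sketch: first image combining distance from the pupil distance difference.
# lambda_0 is a hypothetical conversion coefficient (device-calibrated in practice).
def first_combine_distance(d1_pupil_mm: float,
                           d0_pupil_mm: float,
                           d0_combine_m: float,
                           lambda_0: float = 0.01) -> float:
    delta_d1 = lambda_0 * (d1_pupil_mm - d0_pupil_mm)  # second adjustment amount
    return d0_combine_m + delta_d1                      # D1 = D0 + delta_D1

# Example: preset pupil distance 63 mm, measured first pupil distance 66 mm,
# preset image combining distance 2.0 m.
print(first_combine_distance(66.0, 63.0, 2.0))  # -> 2.03
```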
In an embodiment, when a user wears the smart wearable device, the image combining distance needs to be adjusted dynamically as the user's pupil distance, viewing angle and the scene depth change. This reduces visual fatigue from long-time use, improves the positioning accuracy of displayed virtual objects and improves the user experience.
Further, an eye image of the target user is acquired; pupil positioning detection is performed on the eye image using a feature point detection model to obtain the left-eye and right-eye pupil coordinates of the target user; and the second interpupillary distance of the target user is calculated from the left-eye and right-eye pupil coordinates.
In an embodiment, when the target user wears the smart wearable device, the eye image of the target user may be acquired through a sensor device (such as an image sensor) built into the smart wearable device. It will be appreciated that the eye image requires a clear view of the pupil of the target user.
In one embodiment, a binocular vision system may be used to capture an eye image of the target user to obtain precise coordinates of the pupil of the target user.
In one embodiment, the captured eye image may be pre-processed (grayscale conversion, denoising, enhancement, etc.) to improve image quality in preparation for feature point detection. Pupil positioning detection is then performed on the eye image using a feature point detection model, such as a deep-learning-based method, to determine the precise position of each pupil in the image and obtain the left-eye and right-eye pupil coordinates. The distance between the left-eye and right-eye pupil coordinates, i.e., the second pupil distance, is then calculated from the detected coordinates.
Illustratively, if the coordinates of the left-eye pupil are (100, 150) and the coordinates of the right-eye pupil are (120, 150), the second interpupillary distance may be calculated as the Euclidean distance between them:
d2 = √((120 − 100)² + (150 − 150)²) = 20 (pixels)
In an embodiment, the pupil distance may be converted from image pixels to physical units according to the image resolution and the actual measurement scale.
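A minimal sketch of this distance calculation, assuming a hypothetical pixel-to-millimetre calibration factor:

```python
import math

PIXELS_PER_MM = 10.0  # hypothetical calibration factor for the eye camera

def second_pupil_distance_mm(left_px: tuple, right_px: tuple) -> float:
    # Euclidean distance between the two pupil centers, converted to mm.
    dist_px = math.hypot(right_px[0] - left_px[0], right_px[1] - left_px[1])
    return dist_px / PIXELS_PER_MM

print(second_pupil_distance_mm((100, 150), (120, 150)))  # 20 px -> 2.0 mm
```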
Further, pupil contour features in the eye image are detected based on the feature point detection model, and the pupil center coordinates are located by a pupil positioning algorithm to obtain the left-eye and right-eye pupil coordinates of the target user.
The feature point detection model may identify key points in the eye image using feature point detection algorithms such as Harris corner detection, SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features) or ORB (Oriented FAST and Rotated BRIEF).
In an embodiment, pupil positioning algorithms, such as a cascade classifier based on Haar features or a deep-learning pupil detection model, are then used to identify the pupil region from the detected feature points. After the pupil region is detected, the precise contour of the pupil is extracted by operations such as morphological processing, edge detection and thresholding.
In an embodiment, a least squares method or sub-pixel edge detection may be used to fit an ellipse to the pupil contour features; the variance of the distances from the ellipse center to the edge points is then calculated, and the center with the smallest variance is taken as the pupil center. The coordinate of the pupil center in the camera coordinate system of the image sensor is used as the pupil coordinate.
In an embodiment, pupil coordinates may also be calculated by locating the pupil center based on gradients, corneal reflections and the like. For example, the corneal reflection method assists in locating the pupil center by detecting the reflection spot on the cornea, which narrows the image region to be processed and improves real-time performance and recognition rate. In practice, a suitable pupil positioning algorithm can be chosen according to the application requirements to acquire the pupil coordinates (left-eye and right-eye) of the target user.
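As a non-authoritative sketch of one such pipeline, the following locates a single pupil center with OpenCV by thresholding, contour extraction and ellipse fitting; the threshold value is hypothetical and would be tuned per device and lighting conditions:

```python
import cv2
import numpy as np

def pupil_center(eye_gray: np.ndarray):
    """Return the (x, y) pupil center of one eye image, or None if not found."""
    blurred = cv2.GaussianBlur(eye_gray, (5, 5), 0)       # denoise
    # The pupil is the darkest region; 40 is a hypothetical threshold.
    _, mask = cv2.threshold(blurred, 40, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)          # assume pupil blob
    if len(largest) < 5:                                  # fitEllipse needs >= 5 pts
        return None
    (cx, cy), _axes, _angle = cv2.fitEllipse(largest)     # sub-pixel center
    return (cx, cy)
```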
It will be appreciated that the pupil positioning algorithm is a related art disclosed in the art, and embodiments of the present application are not described in detail herein.
S102, calculating a first adjustment amount of the image combining distance based on the current scene depth, the current parallax of the target user and the second pupil distance of the target user;
In one embodiment, the current parallax refers to the parallax between the left and right eyes of the target user, that is, the difference in viewing direction when the target user views the same target display object with both eyes. The angle subtended at the target display object by the two lines of sight is taken as the parallax angle of the current parallax.
In an embodiment, the current scene depth refers to depth information of a scene observed by the target user.
In an embodiment, the calculation formula of the first adjustment amount of the image combining distance may be expressed as:
ΔD2 = f(IPD, current parallax, scene depth)
where f is a function that dynamically adjusts the image combining distance by combining the pupil distance (IPD, inter-pupillary distance) of the target user, the current parallax and the scene depth of the smart wearable device.
In an embodiment, the first adjustment amount of the image combining distance may be updated in real time: the current scene depth, the current parallax of the target user, the second pupil distance of the target user and other data are collected continuously, the first adjustment amount is calculated, and the image combining distance of the smart wearable device is adjusted accordingly.
In another embodiment, adjusting the image combining distance in real time may require excessive computation, so trigger conditions for the adjustment may instead be set according to the user's actual needs. Whether an adjustment is needed is determined by monitoring specific data that clearly affect the current display, such as a large change in scene depth or an obvious change in the user's pupil distance; this reduces the amount of calculation.
In an embodiment, at least one group of target monitoring data is acquired, the target monitoring data including one or more of scene depth, user parallax and user pupil distance. The difference between each target monitoring datum and its corresponding preset reference datum is calculated. When the difference for any target monitoring datum is greater than or equal to a preset difference threshold, the current scene depth, the current parallax of the target user and the second pupil distance of the target user are acquired by a preset data acquisition method.
In an embodiment, when the differences between all target monitoring data and their preset reference data are smaller than the preset difference threshold, no image combining distance adjustment is calculated or performed, and the target monitoring data continue to be collected and checked.
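A minimal sketch of this trigger check; the monitored quantities and threshold values are hypothetical:

```python
# Thresholds per monitored quantity (units and values are placeholders).
THRESHOLDS = {"scene_depth_m": 0.5, "parallax_deg": 1.0, "pupil_distance_mm": 1.0}

def needs_adjustment(monitored: dict, reference: dict) -> bool:
    # Trigger when any monitored value drifts past its threshold.
    return any(abs(monitored[k] - reference[k]) >= THRESHOLDS[k]
               for k in THRESHOLDS)

print(needs_adjustment({"scene_depth_m": 3.2, "parallax_deg": 0.4,
                        "pupil_distance_mm": 63.0},
                       {"scene_depth_m": 2.5, "parallax_deg": 0.5,
                        "pupil_distance_mm": 63.2}))  # True: depth moved 0.7 m
```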
In an embodiment, the target monitoring data may also be other data that influence the above quantities, such as the user's identity information, age, or motion state.
For example, when several different users wear the same smart wearable device at different times, the pupil distance of each user needs to be acquired according to the user's identity information before the image combining distance is adjusted. For adolescents who are developing quickly, the pupil distance may change noticeably within a certain period, so their pupil distance data can be updated at intervals and the image combining distance of the smart wearable device adjusted accordingly. When the user is in motion, the scene depth may change significantly, so the image combining distance needs to follow the scene depth so that virtual objects adapt to the display requirements of different scenes.
In one embodiment, trigger conditions for adjusting the image combining distance may be set. When the smart wearable device detects that certain data (such as the scene depth or the user's parallax) meet a trigger condition, it acquires the current scene depth, the current parallax of the target user and the second interpupillary distance of the target user, and calculates the first adjustment amount of the image combining distance.
In an embodiment, the second pupil distance of the target user may be tracked and calculated in real time, and a pupil distance change threshold may be set. When the pupil distance changes substantially, for example when the difference between the second pupil distance and the first pupil distance (or the preset pupil distance) exceeds the threshold, calculation of the first adjustment amount is triggered, and the current image combining distance of the smart wearable device (for example, the first image combining distance or the preset image combining distance) is adjusted according to the result to match the optimal viewing distance corresponding to the second pupil distance.
In an embodiment, the scene depth of the smart wearable device can also be monitored in real time. When the current scene depth changes significantly, for example when the change exceeds a preset scene depth change threshold, a measurement of the target user's second pupil distance is triggered; the first adjustment amount of the image combining distance is then calculated from the current scene depth, the second pupil distance and the current parallax, and the image combining distance of the device is adjusted.
The scene depth corresponding to the preset image combining distance, or the scene depth of another known image combining distance, can be used as the reference, and the difference between the currently acquired scene depth and this reference is taken as the scene depth change.
For example, the first pupil distance corresponding to the first image combining distance (or the pupil distance corresponding to the last adjustment, or the preset pupil distance; the first pupil distance is used here for illustration) may be taken as the reference. The smart wearable device acquires the target user's second pupil distance in real time via tools such as an image sensor and a pupil positioning algorithm. When the difference between the second pupil distance and the first pupil distance exceeds a preset pupil distance difference threshold, the device acquires the target user's current parallax and the current scene depth, and calculates the adjustment of the optimal image combining distance corresponding to the second pupil distance relative to the first image combining distance, i.e., the first adjustment amount of the image combining distance.
In another embodiment, the current scene depth of the smart wearable device, the current parallax of the target user and the second pupil distance may be acquired periodically; the first adjustment amount of the image combining distance is calculated for each acquisition period and the image combining distance of the device is adjusted. For example, the first adjustment amount may be calculated once per minute and the image combining distance of the displayed objects adjusted accordingly.
For example, the acquisition period may be set according to the usage scenario. When the target user is sitting still and the scene changes infrequently, a longer period may be used, for example collecting the relevant data, calculating the first adjustment amount and adjusting the image combining distance once every 15 minutes. When the target user is in motion (walking, running, riding, etc.), a shorter period may be chosen according to the motion state, for example adjusting the image combining distance every 10 seconds while walking, every 5 seconds while running and every 3 seconds while riding.
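A small sketch of this period selection, using the illustrative values from the text:

```python
# Acquisition period (seconds) per motion state, per the examples above.
PERIODS_S = {"seated": 15 * 60, "walking": 10, "running": 5, "riding": 3}

def acquisition_period_s(motion_state: str) -> int:
    return PERIODS_S.get(motion_state, 60)  # default: once per minute
```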
In one embodiment, the calculation formula of the first adjustment amount of the image combining distance may be expressed as:
ΔD2 = f(IPD, parallax, scene depth) = k1 · (d2 − d1) + k2 · μ0 + k3 · ω0
where ΔD2 represents the first adjustment amount of the image combining distance, d2 the second pupil distance, d1 the first pupil distance, μ0 the current parallax of the target user, and ω0 the current scene depth of the smart wearable device. The weight coefficients k1, k2 and k3 may be preset as fixed values or adjusted according to actual requirements.
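A minimal sketch of this weighted formula; the weight coefficients are hypothetical placeholders that would be preset or tuned on the device:

```python
def first_adjustment(d2_pupil: float, d1_pupil: float,
                     parallax: float, scene_depth: float,
                     k1: float = 0.01, k2: float = 0.5, k3: float = 0.2) -> float:
    # delta_D2 = k1*(d2 - d1) + k2*mu_0 + k3*omega_0
    return k1 * (d2_pupil - d1_pupil) + k2 * parallax + k3 * scene_depth
```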
S103, calculating a target image combining distance based on the first image combining distance and the first adjustment amount of the image combining distance;
In an embodiment, the first image combining distance and the first adjustment amount of the image combining distance are added to obtain the target image combining distance.
In one embodiment, the sign (direction) of the first adjustment amount needs to be considered.
For example, in the display space of the smart wearable device, movement toward the user may be defined as negative, i.e., the image combining distance decreases, and movement away from the user as positive, i.e., the image combining distance increases.
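A minimal sketch combining the two quantities under this sign convention:

```python
def target_combine_distance(d1_combine: float, delta_d2: float) -> float:
    # Negative delta_d2 moves the virtual image toward the user,
    # positive moves it away (sign convention described above).
    return d1_combine + delta_d2
```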
S104, adjusting the image combining distance of the smart wearable device based on the target image combining distance, so that the display position of the target display object matches the sight distance of the target user.
In an embodiment, the target display object is rendered according to the target image combining distance: the rendering engine resets the display position of the target display object to the target image combining distance and then updates its display position and display effect accordingly, so that the display position of the target display object matches the sight distance of the target user.
Wherein the adjustment of the display effect may be determined according to the display position and the characteristics of the target display object.
For example, if the image combining distance decreases, the overall displayed size of the target display object, or its display font, may be reduced; if the image combining distance increases, the overall displayed size or the display font may be enlarged.
According to this embodiment, the image combining distance is dynamically adjusted so that it suits the pupil distances and viewing angles of different users, reducing visual fatigue caused by long-time use. By combining scene depth information with intelligent scene adaptation, the dynamic adjustment improves the positioning accuracy of virtual objects in the real environment. Adjusting the image combining distance according to the user's personal characteristics (such as the interpupillary distance) provides a better user experience.
This embodiment provides a method for adjusting the image combining distance of a smart wearable device. A first image combining distance is calculated from the first pupil distance of the target user, so the image combining distance calculation is customized for the target user and the applicability of the smart wearable device is improved. An adjustment amount of the image combining distance is calculated from the current scene depth, the current parallax of the target user and the second pupil distance, so the image combining distance can change dynamically with the actual use environment, enhancing the adaptability and flexibility of the device display. By adjusting the image combining distance of the smart wearable device, the display position of the target display object is matched to the sight distance of the target user, which improves the positioning accuracy of the target display object, reduces the user's visual fatigue and improves viewing comfort.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of a second embodiment of an image combining distance adjusting method of a smart wearable device according to an embodiment of the present application.
As shown in fig. 2, based on the embodiment shown in fig. 1, after step S103, the method further includes:
S201, respectively calculating a left eye matrix and a right eye matrix of the target user based on a preset reference matrix and the second pupil distance;
In an embodiment, a reference matrix of the smart wearable device may be set.
For example, assume the two lenses of the smart wearable device are planar and lie in the same plane. A reference coordinate system is constructed as follows: the midpoint of the line connecting the centers of the two lenses is taken as the reference point (origin); the line connecting the lens centers is the horizontal axis X; the axis through the reference point perpendicular to the lens plane is the depth axis Z; and the axis through the reference point perpendicular to both X and Z is the vertical axis Y.
Illustratively, a reference matrix may be constructed with the reference point as origin and the Z axis as direction, represented as a 3×3 matrix whose only non-zero element lies in the Z-axis direction, namely:
M_base = diag(0, 0, 1)
The reference matrix so constructed represents a unit vector in the Z-axis direction.
It will be appreciated that the reference matrix may be constructed according to practical application requirements; the reference matrix constructed here is merely an example and does not limit the construction.
Further, current view angle data of the target user are obtained, including left-eye view angle data and right-eye view angle data. Based on the current view angle data and the second pupil distance, a left-eye rotation matrix of the left-eye view angle data relative to the reference matrix and a right-eye rotation matrix of the right-eye view angle data relative to the reference matrix are calculated. The left eye matrix and right eye matrix of the target user are then calculated from the reference matrix, the left-eye rotation matrix and the right-eye rotation matrix.
In one embodiment, the current perspective data may include pupil coordinates and gaze directions, including left eye pupil coordinates and left eye gaze directions, right eye pupil coordinates and right eye gaze directions.
The pupil coordinates may be obtained by a pupil positioning algorithm, and the second interpupillary distances of the left-eye pupil and the right-eye pupil may be calculated according to the left-eye pupil coordinates and the right-eye pupil coordinates.
In one embodiment, the gaze direction may be implemented using eye tracking techniques. Such as pupil cornea reflection, retinal image localization, structured light tracking, optical waveguide eye tracking, eye tracking sensors based on laser scanning, and the like.
Illustratively, the pupil-corneal reflection method determines the gaze direction by tracking the corneal reflection spot: an infrared light source illuminates the eye, the cornea reflects the light, and an infrared camera captures the reflected spot, from which the gaze direction is calculated.
Illustratively, optical waveguide eye tracking combines waveguide optics with eye tracking, capturing image data of the eyeball through multiple infrared LEDs and cameras and supporting real-time eye tracking and gaze point rendering.
In an embodiment, the rotation angle of the left/right eye matrix relative to the reference matrix is calculated according to the second pupil distance of the target user and the device parameters of the smart wearable device.
In an embodiment, the rotation angle may be calculated with trigonometric functions. Assuming the pupil distance offset is d, the rotation angles θ of the left and right eyes may be calculated as:
θ_left = arctan(d / (2f)), θ_right = −arctan(d / (2f))
where f is the focal length, typically determined by the optical design of the smart wearable device, d represents the pupil distance offset, θ_left represents the left-eye rotation angle, and θ_right represents the right-eye rotation angle.
The pupil distance offset of the left and right eyes can be expressed as:
d = d2 − d0
where d represents the pupil distance offset, d2 the second pupil distance, and d0 the preset pupil distance of the smart wearable device.
In one embodiment, a rotation matrix is constructed from the rotation angle. Using the calculated rotation angles, the rotation matrices of the left and right eyes (rotations about the Y axis) are constructed as:
R_left = [[cos θ_left, 0, sin θ_left], [0, 1, 0], [−sin θ_left, 0, cos θ_left]]
R_right = [[cos θ_right, 0, sin θ_right], [0, 1, 0], [−sin θ_right, 0, cos θ_right]]
where R_left represents the left-eye rotation matrix and R_right the right-eye rotation matrix.
In one embodiment, denoting the reference matrix M_base, the rotation matrix of each eye is multiplied by the reference matrix to obtain the final left/right eye matrix:
M_left = M_base · R_left
M_right = M_base · R_right
From this, the rotation angles of the left and right eyes relative to the reference matrix can be calculated, providing the necessary view angle information for the visual rendering of the smart wearable device, so that the user obtains the best visual experience while using the device and visual fatigue and discomfort are reduced.
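The following NumPy sketch chains these steps together under the stated assumptions (the diag(0, 0, 1) reference matrix and a rotation about the Y axis); the focal length is a device parameter:

```python
import numpy as np

def eye_matrices(d2_pupil: float, d0_pupil: float, focal_len: float):
    """Return (M_left, M_right) per the formulas above; units must be consistent."""
    d = d2_pupil - d0_pupil                    # pupil distance offset
    theta = np.arctan(d / (2.0 * focal_len))   # per-eye rotation angle

    def yaw(t: float) -> np.ndarray:           # rotation about the Y axis
        c, s = np.cos(t), np.sin(t)
        return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

    m_base = np.diag([0.0, 0.0, 1.0])          # reference matrix M_base
    return m_base @ yaw(theta), m_base @ yaw(-theta)
```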
According to this embodiment, the current view angle data of the target user, including the left-eye and right-eye view angle data, can be obtained, and a customized view angle adjustment can be provided for each user, improving the personalization and comfort of the augmented reality experience.
S202, determining a target display pose of the target display object at the target imaging distance based on the left eye matrix and the right eye matrix;
In an embodiment, in the smart wearable device, the rendering engine uses the right eye matrix to determine the correct position of the target display object in the right-eye view, and similarly uses the left eye matrix to determine its correct position in the left-eye view.
In an embodiment, the rendering engine may determine the correct positions of the target display object in the left-eye and right-eye views according to the left eye matrix and the right eye matrix, so that the two views coincide at the target image combining distance, avoiding obvious parallax and improving the user's visual experience.
It will be appreciated that the relevant rendering techniques of views by the rendering engine are known in the art and are not specifically defined or described herein.
According to this embodiment, the rotation matrices of the left and right eyes are calculated from the current view angle data and the second pupil distance, which ensures the accuracy of the view transformation, so that virtual objects are displayed at an accurate pose in the user's view, improving the realism and immersion of the augmented reality image.
And S203, based on the target display pose, carrying out pose adjustment on the target display object so as to enable the display pose of the target display object to be matched with the sight distance of the target user.
In one embodiment, the target display object is subject to pose adjustments, including pan, rotate, and zoom operations, based on the target display pose to ensure that the object is properly aligned in the user's field of view. For example, if the user moves his head, the position and orientation of the object need to be updated in real time to be consistent with the relative position of the user's line of sight.
The sight distance of the user refers to the straight line distance between the eyes of the user and the target object.
In one embodiment, the adjusted pose information will be used by the rendering engine to ensure that the target display object is properly displayed in the field of view of the target user.
In an embodiment, the target image combining distance and the pose of the target display object can be adjusted separately. The user's line of sight and head position may change continuously; a change in head pose does not necessarily change the scene depth or the user's pupil distance and parallax, but it always changes the relative pose of the target display object and the user's head. The smart wearable device can therefore update the pose of the target display object in real time, while the image combining distance of the target display object is adjusted according to the actual application scenario.
Generally, motion sensors (e.g., a gyroscope, an image sensor) may be used to track the user's gaze and head position and adjust the pose of the object in real time, so that the target display object always keeps the target display pose relative to the target user and thus matches the target user's sight distance.
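A minimal sketch of such a per-frame update, assuming the head position and gaze direction are supplied by the device's tracking sensors:

```python
import numpy as np

def update_object_position(head_pos: np.ndarray,
                           gaze_dir: np.ndarray,
                           combine_distance: float) -> np.ndarray:
    # Place the object at the image combining distance along the line of sight.
    gaze_unit = gaze_dir / np.linalg.norm(gaze_dir)
    return head_pos + gaze_unit * combine_distance
```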
According to this embodiment, the left eye matrix and right eye matrix of the target user are calculated, and the display pose of the target display object at the target image combining distance is determined from these matrices, so the image combining distance can be optimized, the display position of the virtual object better matches the user's natural viewing habits, and visual fatigue is reduced. Adjusting the pose of the target display object based on the target display pose matches the display pose to the user's sight distance, lets the augmented reality system adapt dynamically to changes in the user's view angle, and improves the system's flexibility and responsiveness, while ensuring that the user sees a clear and accurate virtual object at different view angles and distances, improving the augmented reality experience.
Referring to Fig. 3, Fig. 3 is a schematic structural diagram of a first embodiment of an image combining distance adjusting device of a smart wearable device according to the present application; the device is used for executing the image combining distance adjusting method of the smart wearable device described above.
As shown in fig. 3, the image combining distance adjusting device 300 of the intelligent wearable device comprises a first image combining distance calculating module 301, a first adjustment amount calculating module 302, a target image combining distance module 303 and an image combining distance adjusting module 304.
The first image combining distance calculation module 301 is configured to determine a first image combining distance of the smart wearable device based on a first pupil distance of the target user;
a first adjustment amount calculation module 302, configured to determine a first adjustment amount of the image combining distance based on a current scene depth, a current parallax of the target user, and a second pupil distance of the target user;
a target image combining distance module 303, configured to determine a target image combining distance based on the first image combining distance and the first adjustment amount of the image combining distance;
and an image combining distance adjusting module 304, configured to adjust the image combining distance of the smart wearable device based on the target image combining distance, so that the display position of a target display object in the display space of the smart wearable device matches the sight distance of the target user.
In an embodiment, the image combining distance adjusting device 300 of the smart wearable device further includes a pose adjustment module, including:
a matrix calculating unit, configured to calculate a left eye matrix and a right eye matrix of the target user, respectively, based on a preset reference matrix and the second pupil distance;
a position determining unit, configured to determine a target display pose of the target display object at the target image combining distance based on the left eye matrix and the right eye matrix;
and a pose adjusting unit, configured to adjust the pose of the target display object based on the target display pose, so that the display pose of the target display object matches the sight distance of the target user.
In an embodiment, the matrix calculation unit includes:
A viewing angle data obtaining subunit, configured to obtain current viewing angle data of the target user, where the current viewing angle data includes left-eye viewing angle data and right-eye viewing angle data;
A rotation matrix calculation subunit configured to calculate a left-eye rotation matrix of the left-eye view angle data with respect to the reference matrix and a right-eye rotation matrix of the right-eye view angle data with respect to the reference matrix, respectively, based on the current view angle data and the second pupil distance;
A left-eye matrix calculation subunit configured to calculate the left-eye matrix based on the reference matrix and the left-eye rotation matrix;
a right-eye matrix calculation subunit for calculating the right-eye matrix based on the reference matrix and the right-eye rotation matrix.
In an embodiment, the image combining distance adjusting device 300 of the smart wearable device further includes a second pupil distance acquisition module, including:
the eye image acquisition unit is used for acquiring an eye image of the target user;
The pupil coordinate acquisition unit is used for carrying out pupil positioning detection on the eye image based on the feature point detection model to obtain the left eye pupil coordinate and the right eye pupil coordinate of the target user;
and a second pupil distance calculating unit configured to calculate a second pupil distance of the target user based on the left eye pupil coordinate and the right eye pupil coordinate.
In an embodiment, the pupil coordinate acquisition unit includes:
A feature detection subunit, configured to detect pupil outline features in the eye image based on the feature point detection model;
And the pupil positioning subunit is used for performing positioning calculation on the pupil center point coordinates based on a pupil positioning algorithm to obtain left eye pupil coordinates and right eye pupil coordinates of the target user.
In an embodiment, the first image combining distance calculation module 301 includes:
a preset data acquisition unit, used for acquiring a preset pupil distance and a preset image combining distance of the smart wearable device;
a second adjustment amount calculating unit, configured to determine a second adjustment amount of the image combining distance based on the difference between the first pupil distance and the preset pupil distance;
a first image combining distance calculating unit, configured to determine the first image combining distance based on the preset image combining distance and the second adjustment amount of the image combining distance.
In an embodiment, the image combining distance adjusting device 300 of the smart wearable device further includes an image combining distance adjustment determining module, including:
a monitoring data acquisition unit, used for acquiring at least one group of target monitoring data;
a data difference calculation unit, used for calculating the difference between the target monitoring data and the corresponding preset reference data;
and an image combining distance adjustment judging unit, used for acquiring the current scene depth, the current parallax of the target user and the second pupil distance of the target user based on a preset data acquisition mode when the data difference is greater than or equal to a preset difference threshold.
It should be noted that, for convenience and brevity of description, a person skilled in the art may clearly understand that, for the specific working process of the above-described device and each module, reference may be made to a corresponding process in the foregoing embodiment of the image combining distance adjusting method of the intelligent wearable device, which is not described herein again.
The apparatus provided by the above embodiments may be implemented in the form of a computer program that may be run on a smart wearable device as shown in fig. 4.
Referring to fig. 4, fig. 4 is a schematic block diagram of a structure of an intelligent wearable device according to an embodiment of the present application. The smart wearable device may be a server.
Referring to fig. 4, the smart wearable device includes a processor, a memory, and a network interface connected by a system bus, wherein the memory may include a non-volatile storage medium and an internal memory.
The non-volatile storage medium may store an operating system and a computer program. The computer program comprises program instructions which, when executed, cause the processor to execute any one of the methods for adjusting the imaging distance of the intelligent wearable device.
The processor is used for providing computing and control capabilities and supporting the operation of the entire smart wearable device.
The internal memory provides an environment for the execution of a computer program in the non-volatile storage medium, which when executed by the processor, causes the processor to perform any one of the imaging distance adjustment methods of the smart wearable device.
The network interface is used for network communication such as transmitting assigned tasks and the like. It will be appreciated by those skilled in the art that the structure shown in fig. 4 is merely a block diagram of a portion of the structure associated with the present inventive arrangements and is not limiting of the smart wearable device to which the present inventive arrangements are applied, and that a particular smart wearable device may include more or fewer components than shown, or may combine certain components, or may have a different arrangement of components.
It should be appreciated that the processor may be a central processing unit (CPU), another general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general purpose processor may be a microprocessor or any conventional processor.
Wherein in one embodiment the processor is configured to run a computer program stored in the memory to implement the steps of:
Determining a first image combining distance of the smart wearable device based on a first pupil distance of the target user;
Determining a first adjustment amount of a combined image distance based on a current scene depth, a current parallax of the target user and a second pupil distance of the target user;
determining a target image combining distance based on the first image combining distance and the first adjustment amount of the image combining distance;
And based on the target image combining distance, performing image combining distance adjustment on the intelligent wearable device so that the display position of a target display object in the display space of the intelligent wearable device is matched with the sight distance of the target user.
In an embodiment, after implementing the image combining distance adjustment for the smart wearable device based on the target image combining distance, the processor is further configured to implement:
calculating a left eye matrix and a right eye matrix of the target user based on a preset reference matrix and the second pupil distance;
Determining a target display pose of the target display object at the target image combining distance based on the left eye matrix and the right eye matrix;
and based on the target display pose, carrying out pose adjustment on the target display object so as to enable the display pose of the target display object to be matched with the sight distance of the target user.
In an embodiment, when implementing the calculation of the left eye matrix and the right eye matrix of the target user based on the preset reference matrix and the second pupil distance, the processor is configured to implement:
acquiring current view angle data of the target user, wherein the current view angle data comprises left eye view angle data and right eye view angle data;
calculating a left eye rotation matrix of the left eye view angle data with respect to the reference matrix and a right eye rotation matrix of the right eye view angle data with respect to the reference matrix based on the current view angle data and the second pupil distance;
Calculating the left eye matrix based on the reference matrix and the left eye rotation matrix;
The right eye matrix is calculated based on the reference matrix and the right eye rotation matrix.
In an embodiment, before implementing the calculation of the first adjustment amount of the image combining distance based on the current scene depth, the current parallax of the target user, and the second pupil distance of the target user, the processor is further configured to implement:
collecting an eye image of the target user;
performing pupil positioning detection on the eye image based on a feature point detection model to obtain left eye pupil coordinates and right eye pupil coordinates of the target user; and
calculating the second pupil distance of the target user based on the left eye pupil coordinates and the right eye pupil coordinates.
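Once both pupil centers are located in the eye image, the second pupil distance follows from their pixel separation and a camera scale factor. The millimetre-per-pixel factor below is assumed to come from the eye camera's calibration; the patent does not state how pixel coordinates are converted to a physical distance:

```python
import numpy as np

def second_pupil_distance_mm(left_pupil_px, right_pupil_px, mm_per_px):
    # Euclidean pixel distance between the two pupil centers, scaled to mm.
    left = np.asarray(left_pupil_px, dtype=float)
    right = np.asarray(right_pupil_px, dtype=float)
    return float(np.linalg.norm(right - left) * mm_per_px)
```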
In an embodiment, when performing pupil positioning detection on the eye image based on the feature point detection model, the processor is configured to implement:
detecting pupil contour features in the eye image based on the feature point detection model; and
performing positioning calculation on the pupil center point coordinates based on a pupil positioning algorithm to obtain the left eye pupil coordinates and the right eye pupil coordinates of the target user.
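The patent does not disclose the feature point detection model or the pupil positioning algorithm, but a classical OpenCV stand-in shows the shape of the computation: threshold the dark pupil region, take the largest contour, and use its centroid as the pupil center:

```python
import cv2

def pupil_center(eye_roi_gray, dark_threshold=50):
    """Locate one pupil center in a grayscale eye region (classical stand-in).

    Returns the (x, y) centroid of the largest dark contour, or None if no
    plausible pupil contour is found. The threshold value is illustrative.
    """
    blurred = cv2.GaussianBlur(eye_roi_gray, (7, 7), 0)
    _, mask = cv2.threshold(blurred, dark_threshold, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)  # largest dark blob = pupil candidate
    m = cv2.moments(pupil)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # contour centroid
```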
In an embodiment, when determining the first image combining distance of the smart wearable device based on the first pupil distance of the target user, the processor is configured to implement:
acquiring a preset pupil distance and a preset image combining distance of the smart wearable device;
determining a second adjustment amount of the image combining distance based on the difference between the first pupil distance and the preset pupil distance; and
determining the first image combining distance based on the preset image combining distance and the second adjustment amount of the image combining distance.
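A minimal sketch of this per-user calibration step, assuming a linear relation between the pupil distance difference and the second adjustment amount; the gain is illustrative, since the patent only states that the adjustment is determined from the difference:

```python
def first_image_combining_distance(first_pupil_distance_mm, preset_pupil_distance_mm,
                                   preset_distance_m, gain_m_per_mm=0.05):
    # Second adjustment amount from the pupil distance difference, applied to
    # the device's preset image combining distance.
    ipd_diff_mm = first_pupil_distance_mm - preset_pupil_distance_mm
    second_adjustment_m = gain_m_per_mm * ipd_diff_mm
    return preset_distance_m + second_adjustment_m
```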
In an embodiment, before determining the first adjustment amount of the image combining distance based on the current scene depth, the current parallax of the target user, and the second pupil distance of the target user, the processor is further configured to implement:
acquiring at least one set of target monitoring data;
calculating a data difference between the target monitoring data and corresponding preset reference data; and
when the data difference is greater than or equal to a preset difference threshold, acquiring the current scene depth, the current parallax of the target user, and the second pupil distance of the target user based on a preset data acquisition mode.
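This trigger condition amounts to a simple threshold check over the monitored quantities, re-acquiring the inputs only when something has drifted enough to matter. The dictionary keys below are illustrative names, not taken from the patent:

```python
def needs_reacquisition(monitored, reference, thresholds):
    """Return True when any monitored value deviates from its reference
    by at least its preset difference threshold.

    All three arguments are dicts keyed by quantity name, e.g.
    "scene_depth", "parallax", "pupil_distance" (illustrative keys).
    """
    return any(abs(monitored[k] - reference[k]) >= thresholds[k]
               for k in monitored)
```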
An embodiment of the application further provides a computer readable storage medium storing a computer program, wherein the computer program comprises program instructions which, when executed by a processor, implement the image combining distance adjustment method of the smart wearable device according to any of the foregoing embodiments.
The computer readable storage medium may be an internal storage unit of the smart wearable device according to the foregoing embodiments, for example, a hard disk or a memory of the smart wearable device. The computer readable storage medium may also be an external storage device of the smart wearable device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the smart wearable device.
While the application has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the application. Therefore, the protection scope of the application is subject to the protection scope of the claims.
Claims (10)
1. An image combining distance adjustment method of a smart wearable device, characterized by comprising:
determining a first image combining distance of the smart wearable device based on a first pupil distance of a target user;
determining a first adjustment amount of the image combining distance based on a current scene depth, a current parallax of the target user, and a second pupil distance of the target user;
determining a target image combining distance based on the first image combining distance and the first adjustment amount of the image combining distance; and
performing image combining distance adjustment on the smart wearable device based on the target image combining distance, so that the display position of a target display object in the display space of the smart wearable device matches the sight distance of the target user.
2. The image combining distance adjustment method of the smart wearable device according to claim 1, wherein after the performing image combining distance adjustment on the smart wearable device based on the target image combining distance, the method further comprises:
calculating a left eye matrix and a right eye matrix of the target user based on a preset reference matrix and the second pupil distance;
determining a target display pose of the target display object at the target image combining distance based on the left eye matrix and the right eye matrix; and
adjusting the pose of the target display object based on the target display pose, so that the display pose of the target display object matches the sight distance of the target user.
3. The image combining distance adjustment method of the smart wearable device according to claim 2, wherein the calculating a left eye matrix and a right eye matrix of the target user based on a preset reference matrix and the second pupil distance comprises:
acquiring current view angle data of the target user, wherein the current view angle data comprises left eye view angle data and right eye view angle data;
calculating, based on the current view angle data and the second pupil distance, a left eye rotation matrix of the left eye view angle data with respect to the reference matrix and a right eye rotation matrix of the right eye view angle data with respect to the reference matrix, respectively;
calculating the left eye matrix based on the reference matrix and the left eye rotation matrix; and
calculating the right eye matrix based on the reference matrix and the right eye rotation matrix.
4. The image combining distance adjustment method of the smart wearable device according to claim 1, wherein before the determining a first adjustment amount of the image combining distance based on the current scene depth, the current parallax of the target user, and the second pupil distance of the target user, the method further comprises:
collecting an eye image of the target user;
performing pupil positioning detection on the eye image based on a feature point detection model to obtain left eye pupil coordinates and right eye pupil coordinates of the target user; and
calculating the second pupil distance of the target user based on the left eye pupil coordinates and the right eye pupil coordinates.
5. The image combining distance adjustment method of the smart wearable device according to claim 4, wherein the performing pupil positioning detection on the eye image based on a feature point detection model to obtain left eye pupil coordinates and right eye pupil coordinates of the target user comprises:
detecting pupil contour features in the eye image based on the feature point detection model; and
performing positioning calculation on the pupil center point coordinates based on a pupil positioning algorithm to obtain the left eye pupil coordinates and the right eye pupil coordinates of the target user.
6. The image combining distance adjustment method of the smart wearable device according to claim 1, wherein the determining a first image combining distance of the smart wearable device based on a first pupil distance of a target user comprises:
acquiring a preset pupil distance and a preset image combining distance of the smart wearable device;
determining a second adjustment amount of the image combining distance based on the difference between the first pupil distance and the preset pupil distance; and
determining the first image combining distance based on the preset image combining distance and the second adjustment amount of the image combining distance.
7. The image combining distance adjustment method of the smart wearable device according to claim 1, wherein before the determining a first adjustment amount of the image combining distance based on the current scene depth, the current parallax of the target user, and the second pupil distance of the target user, the method further comprises:
acquiring at least one set of target monitoring data, wherein the target monitoring data comprises one or more of a scene depth, a user parallax, and a user pupil distance;
calculating a data difference between the target monitoring data and corresponding preset reference data; and
when the data difference between any one of the target monitoring data and the corresponding preset reference data is greater than or equal to a preset difference threshold, acquiring the current scene depth, the current parallax of the target user, and the second pupil distance of the target user based on a preset data acquisition mode.
8. An image combining distance adjustment apparatus of a smart wearable device, characterized in that the apparatus comprises:
a first image combining distance calculation module, configured to determine a first image combining distance of the smart wearable device based on a first pupil distance of a target user;
a first adjustment amount calculation module, configured to determine a first adjustment amount of the image combining distance based on a current scene depth, a current parallax of the target user, and a second pupil distance of the target user;
a target image combining distance module, configured to determine a target image combining distance based on the first image combining distance and the first adjustment amount of the image combining distance; and
an image combining distance adjustment module, configured to perform image combining distance adjustment on the smart wearable device based on the target image combining distance, so that the display position of a target display object matches the sight distance of the target user.
9. A smart wearable device, characterized in that it comprises a processor, a memory, and a computer program stored on the memory and executable by the processor, wherein the computer program, when executed by the processor, implements the steps of the image combining distance adjustment method of the smart wearable device according to any one of claims 1 to 7.
10. A computer readable storage medium, characterized in that a computer program is stored on the computer readable storage medium, wherein the computer program, when executed by a processor, implements the steps of the image combining distance adjustment method of the smart wearable device according to any one of claims 1 to 7.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202411404794.8A | 2024-10-09 | 2024-10-09 | Method, device, equipment and medium for adjusting image distance of smart wearable device |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN119450190A | 2025-02-14 |
Family

ID=94517080

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202411404794.8A | Method, device, equipment and medium for adjusting image distance of smart wearable device | 2024-10-09 | 2024-10-09 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN119450190A (en) |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |