CN206906983U - Augmented reality equipment - Google Patents
- Publication number: CN206906983U
- Application number: CN201720273433.3U
- Authority: CN (China)
- Legal status: Active (assumed status; not a legal conclusion)
Abstract
The utility model is applicable to the field of AR technology and discloses an augmented reality device, which includes: a camera for shooting the external real environment; an image display screen for displaying virtual features; a half-reflecting, half-transmitting mirror through which the human eyes view the integrated scene obtained by superimposing the virtual features on the external real environment, the half-reflecting, half-transmitting mirror being a monocular structure shared by both eyes; a light path conversion component for projecting the virtual features displayed on the image display screen onto the half-reflecting, half-transmitting mirror; and a data processor in data connection with the camera and the image display screen, used for identifying, positioning and computing on the camera's image information and controlling the image display screen to change the displayed information according to the computation result, thereby realizing spatial interaction between the user and the virtual features. The light path system of the utility model is a monocular system shared by both eyes, so no split-screen computation is required, which saves computation and power consumption; compatibility is strong; and spatial interaction between the user and the virtual features is realized, improving the user's experience.
Description
Technical Field
The utility model belongs to the technical field of augmented reality (AR), and in particular relates to an augmented reality device.
Background
Augmented Reality (AR) is a new technology that integrates real world information and virtual world information, and a user can see both the external real world and projected virtual features when using an augmented reality device.
Some augmented reality devices are already on the market; however, the following problems are still common among them:
1) Existing augmented reality devices only let the user see a superposition of virtual features on the external real world and cannot realize spatial interaction between the user and the virtual features; that is, the user cannot manipulate the virtual features seen through the device, so the experience is poor.
2) Existing augmented reality devices are not provided with a Software Development Kit (SDK), which hinders developers; as a result, problems such as making virtual features coincide with real objects are difficult to solve in actual use.
3) Existing augmented reality devices mainly clamp a display module so that a display screen is viewed directly in front of the user's eyes. This is costly and offers a narrow field of view, and to protect the fragile screen in front of the eyes, the screen is generally a liquid-crystal panel sandwiched between multiple layers of glass, which concentrates considerable weight on the bridge of the user's nose and makes the device uncomfortable to wear.
4) Existing augmented reality devices are usually equipped with their own computing system, which the user must use, mainly because the displayed content is split into binocular content with parallax; compatibility is therefore poor, and a general-purpose smartphone or operating system cannot be used. Meanwhile, split-screen computation consumes considerable computing power and energy, and the dedicated control system is costly and unfavorable to long, continuous operation of the device. In addition, existing augmented reality devices cannot adjust the interpupillary distance, which affects some users.
SUMMARY OF THE UTILITY MODEL
An object of the utility model is to overcome at least one of the above shortcomings of the prior art and to provide an augmented reality device that solves the technical problems of existing augmented reality devices: poor compatibility, high cost, high power consumption, and the inability to realize spatial interaction between the user and virtual features.
In order to achieve the above purpose, the utility model adopts the following technical scheme. An augmented reality device comprises:
the camera is used for shooting an external real environment;
an image display screen for displaying the virtual features;
the semi-reflecting and semi-transmitting lens is used for enabling human eyes to watch integrated scenes obtained by overlapping virtual features and an external real environment, and is of a monocular structure shared by two eyes;
the light path conversion component is used for projecting the virtual features displayed on the image display screen onto the semi-reflecting and semi-transparent mirror;
and the data processor is in data connection with the camera and the image display screen and is used for identifying, positioning and calculating the camera shooting information of the camera and controlling the image display screen to change the display information according to the calculation result, so that the spatial interaction between the user and the virtual feature is realized.
Optionally, the virtual feature displayed by the image display screen is a virtual button, a virtual control interface, or a virtual icon, and the data processor can identify, position, and calculate the camera information of the camera and control the image display screen to output different virtual features as the virtual button, the virtual control interface, or the virtual icon according to the calculation result, so that the user can click and control the virtual button, the virtual control interface, or the virtual icon; or,
the virtual characteristics displayed by the image display screen are virtual models, and the data processor can identify, position and calculate the camera shooting information of the camera and control the image display screen to output the virtual models in different states according to the calculation result, so that a user can watch or control the virtual models in multiple directions.
Optionally, the augmented reality device further includes an angular velocity sensor or an acceleration sensor for exciting the image display screen to change the displayed information according to the movement of the user's head, and the angular velocity sensor or the acceleration sensor is in data connection with the camera and the data processor.
Optionally, the augmented reality device further includes a remote controller for controlling the angular velocity sensor or the acceleration sensor to be switched on and off and/or for exciting the image display screen to change display information, and the remote controller is connected to the data processor in a data transmission manner.
Optionally, a voice input structure and a voice control switch for controlling the on/off of the voice input structure are arranged on the remote controller.
Optionally, a software development kit for a developer to develop is provided in the data processor.
Optionally, the optical path conversion assembly comprises a corner reflecting member and a transmission member disposed between the corner reflecting member and the transflective mirror, the image display panel and the corner reflecting member being both located above the transmission member, wherein,
the corner reflection component is positioned at one side close to the semi-reflecting and semi-transmitting mirror, and the image display screen is positioned at one side far away from the semi-reflecting and semi-transmitting mirror; or the corner reflection component is positioned on one side far away from the semi-reflecting and semi-transmitting mirror, and the image display screen is positioned on one side close to the semi-reflecting and semi-transmitting mirror.
Optionally, the corner reflecting member is a right-angle prism or a total reflecting member composed of a plurality of reflecting mirrors; and/or the transmission component is a plano-convex lens or a Fresnel lens or a magnifying lens group consisting of a plurality of lenses.
Optionally, an avoidance space for avoiding glasses worn by a user is arranged between the semi-reflecting and semi-transparent mirror and human eyes; and/or the half-reflecting and half-transmitting mirror is obliquely arranged in a mode that the included angle between the half-reflecting and half-transmitting mirror and the vertical surface is more than 45 degrees and less than 90 degrees; and/or the included angle between the light projected on the semi-reflecting and semi-transmitting lens by the light path conversion component and the semi-reflecting and semi-transmitting lens is 45 +/-5 degrees.
Optionally, the image display screen is a display screen of a smart phone, the camera is a rear camera of the smart phone, and the data processor is a processor of the smart phone; or the image display screen, the camera and the data processor are independent display screens, independent cameras and independent processors which are mutually independent, and the independent display screens and the independent cameras are in data connection with the independent processors; or the image display screen is a display screen of a smart phone, the camera is a combination of a front camera of the smart phone and a wide-angle lens or a fish-eye lens with adjustable positions, and the data processor is a processor of the smart phone; or, the image display screen is the display screen of the smart phone, the camera is a wide-angle lens or a fisheye lens on a sensing arithmetic device, and the data processor comprises a processor of the smart phone and a processor of the sensing arithmetic device.
The augmented reality device provided by the utility model adopts a monocular structure shared by both eyes for the user to view the integrated scene obtained by superimposing the virtual features on the external real environment. In this way the superimposed virtual content is displayed on a single screen, the data processor does not need to perform split-screen computation, computation and power consumption are saved, long continuous operation of the device is facilitated, and the compatibility of the device is effectively improved. Furthermore, the data processor of the utility model can locate, integrate and compute on the camera's image information together with the virtual features displayed on the image display screen, and control the image display screen to change the displayed information according to the computation result, thereby realizing spatial interaction between the user and the virtual features and greatly improving the user's experience. Meanwhile, because the optical system superimposes the real environment and is combined with the positioning algorithm, the user can see different sides of a virtual feature from different angles, so the monocular system does not reduce the user's three-dimensional experience.
Drawings
Fig. 1 is a schematic side view of an optical path system of an augmented reality device according to an embodiment of the present invention;
fig. 2 is a schematic side structure diagram of an optical path system of an augmented reality device according to a second embodiment of the present invention;
fig. 3 is a schematic side structure diagram of an optical path system of an augmented reality device according to a third embodiment of the present invention;
fig. 4 is a schematic diagram of a forward structure of an optical path system of an augmented reality device according to a third embodiment of the present invention;
fig. 5 is a schematic side view of an optical path system of an augmented reality device according to the fourth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more clearly understood, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It will be understood that when an element is referred to as being "secured to" or "disposed on" another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present.
It should be noted that the terms of orientation such as left, right, up, down, top, bottom, etc. in the following embodiments are only relative concepts or are referred to the normal use status of the product, and should not be considered as limiting.
The first embodiment is as follows:
as shown in fig. 1, an embodiment of the present invention provides an augmented reality device, including:
the camera 1 is used for shooting an external real environment;
the image display screen 2 is used for displaying virtual characteristics, and the image display screen 2 can be any type of screen, such as a common liquid crystal display screen, a self-luminous OLED display screen, an LED display screen and the like;
the transflective mirror 3 is used for reflecting the content on the image display screen 2, has a certain transmittance, and can be used for human eyes 5 to watch an integrated scene obtained by superposing virtual features and an external real environment, and the transflective mirror 3 is a monocular structure shared by two eyes;
the light path conversion component 4 is used for projecting the virtual features displayed on the image display screen 2 onto the semi-reflecting and semi-transmitting mirror 3, and a light path system formed by the light path conversion component 4 and the semi-reflecting and semi-transmitting mirror 3 is a monocular system shared by two eyes;
and a data processor (not shown) in data connection with the camera 1 and the image display screen 2, used for identifying, positioning and computing on the image information captured by the camera 1 and controlling the image display screen 2 to change the displayed information according to the computation result, thereby realizing spatial interaction between the user and the virtual features. A data connection between two components means that the two components are connected in a manner capable of data transmission; specifically, the connection may be wired or wireless.
The transflective mirror 3 can reflect the content projected to the transflective mirror by the light path conversion component 4, and has a certain transmittance, so that after the user wears the augmented reality device provided by the embodiment, the user can see the virtual features sequentially projected to the transflective mirror through the image display screen 2 and the light path conversion component 4 through the transflective mirror 3, and can see the external real environment through the transflective mirror 3, so that the virtual features are as if they are in the real environment. When the user wants to interact with the virtual features, the data processor can perform positioning integration and operation on the camera information and the virtual features according to the camera information of the camera 1, so that the intention of the user can be judged, the display information of the image display screen 2 is controlled to be changed, the space interaction between the user and the virtual features is finally realized, and the experience effect of the user is greatly improved. In addition, the augmented reality device provided by this embodiment adopts a monocular system shared by two eyes for a user to view an integrated scene obtained by superimposing a virtual feature and an external real environment, so that the data processor does not need to perform split-screen operation, the operation amount and the power consumption are saved, the long-time continuous operation of the augmented reality device is facilitated, the compatibility of the augmented reality device is effectively improved, and the augmented reality device can be quickly accessed to the existing software to perform augmented reality operation. Meanwhile, the optical system combines the real environment for superposition and the positioning algorithm, so that the user can see different sides of the virtual feature at all angles, and the monocular system cannot reduce the three-dimensional experience of the user.
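The identify-position-compute-update cycle described above can be summarized as a simple control loop. The following is a minimal sketch of that cycle only; the hook functions passed in (capture, detect, render) are hypothetical placeholders for illustration and are not part of the utility model:

```python
# Minimal sketch of the data processor's control cycle: capture a frame of the
# external real environment, identify and position the user's pointer in it,
# and update the displayed virtual features accordingly.
def run_ar_cycle(capture, detect, render, steps=None):
    """Run the capture -> identify/position -> update-display loop."""
    step = 0
    while steps is None or step < steps:
        frame = capture()            # image of the external real environment
        pointer = detect(frame)      # finger / controller position, or None
        render(pointer)              # change the displayed virtual features
        step += 1
```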
Preferably, the image display screen 2 is the display screen of the smartphone 100, the camera 1 is the rear camera of the smartphone 100 (a three-dimensional camera), and the data processor is the processor of the smartphone 100. Using the smartphone's own camera, display screen and processor as the camera 1, image display screen 2 and data processor allows the user to operate his or her own smartphone 100 in AR mode without any additional control system, and without purchasing an extra display device, computing device or power supply; the applications that can be operated in AR mode are used just as they would be in the real environment. At the same time, almost all smartphones 100 on the market are compatible, and the smartphone 100 can be replaced at any time.
Preferably, the smart phone 100 has an SLAM (simultaneous localization and mapping) positioning function, so that the SLAM positioning function of the smart phone 100 can be directly used to quickly locate the camera information of the camera 1, which is beneficial to simplifying a control system of the augmented reality device.
Preferably, the augmented reality device provided by this embodiment includes a head-mounted device body that the user can wear; the half-reflecting, half-transmitting mirror 3 and the light path conversion component 4 are both arranged on the head-mounted device body, and the head-mounted device body is provided with an accommodating cavity in which the smartphone 100 can be held and positioned, so the smartphone 100 is first installed in the accommodating cavity when the augmented reality device is used. Smartphones 100 with various screen sizes can be inserted into the accommodating cavity and serve as the component for shooting, display and data computation, which is convenient to use and highly compatible.
Preferably, the virtual feature of the image display screen 2 is a virtual button, a virtual control interface, or a virtual icon, and the data processor can identify, position, and calculate the image pickup information of the camera 1 and control the image display screen 2 to output different virtual features as the virtual button, the virtual control interface, or the virtual icon according to the calculation result, so as to realize the click control of the user on the virtual button, the virtual control interface, or the virtual icon.
Preferably, the virtual buttons, virtual control interfaces (UIs) or virtual icons are projected onto the transflective mirror 3 as follows: according to the image information from the camera 1, the data processor drives the image display screen 2 to display the preset virtual buttons, control interfaces or icons in non-black colors at the corresponding positions, while the rest of the image display screen 2 is displayed in black or its light-emitting pixels are switched off. The displayed content is converted by the light path conversion component 4 and projected onto the half-reflecting, half-transmitting mirror 3; the user only sees the non-black virtual buttons, control interfaces or icons (formed by the lit pixels) and not the black content of the screen, so the virtual buttons, control interfaces or icons appear superimposed on the external real environment, as if they really existed there, within a touchable area right in front of the user's eyes.
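As a hedged sketch of this "lit pixels only" compositing (the button image, its position, and the screen size are assumed example inputs, not values defined by the utility model):

```python
# Compose a display frame that is black everywhere except where the virtual
# button sits, so that only the button is reflected by the half-mirror and
# appears to float in the real environment.
import numpy as np

def compose_display_frame(screen_w, screen_h, button_img, x, y):
    """Return an all-black frame with button_img pasted at (x, y).

    Assumes the button lies fully inside the screen bounds.
    """
    frame = np.zeros((screen_h, screen_w, 3), dtype=np.uint8)  # black = invisible
    h, w = button_img.shape[:2]
    frame[y:y + h, x:x + w] = button_img                       # lit pixels only
    return frame
```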
Preferably, the user clicks and controls the virtual button, virtual control interface or virtual icon as follows. Through the half-reflecting, half-transmitting mirror 3 the user sees the virtual button, control interface or icon presented in front of him and extends a finger to the position of the element he wants to click, or operates a control component such as a handle, remote controller, keyboard or mouse. The camera 1 captures the extended finger or the operated control component; the data processor identifies it in the captured image and computes its plane position (coordinates in the XOY plane), which is used to select the virtual button, control interface or icon. When the data processor recognizes that the finger or control component has stayed at the position corresponding to a virtual button, control interface or icon for a set time (for example 2 s), a click event is triggered, just as if the element had been tapped on a touch screen.
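A minimal sketch of this dwell-time click trigger is shown below; the class name and the 2-second default are illustrative (the patent only gives 2 s as an example):

```python
import time

class DwellClickDetector:
    """Fire a click when the pointer stays on the same virtual element long enough."""

    def __init__(self, dwell_time_s=2.0):
        self.dwell_time_s = dwell_time_s
        self.current_target = None
        self.enter_time = None

    def update(self, target):
        """target: id of the element under the finger, or None. Returns the
        id of the clicked element when the dwell time elapses, else None."""
        now = time.time()
        if target != self.current_target:
            self.current_target, self.enter_time = target, now
            return None
        if target is not None and now - self.enter_time >= self.dwell_time_s:
            self.enter_time = now            # reset so the click does not repeat
            return target                    # click event triggered
        return None
```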
Preferably, the finger or control component is identified and positioned as follows. First the focus of the camera 1 is adjusted so that its focus distance is as short as possible; the in-focus and out-of-focus regions are distinguished by an algorithm in the data processor, and the position of the finger or control component is then segmented using its color as a screening condition (or the forward-extended finger and its position are recognized directly by morphology) and used as the position data of the finger or control component. In a specific application, depending on the actual situation, the user may first calibrate the device before use: during calibration the user clicks in turn on the virtual features at the four corner points of the display area, and the difference between the user's actual click position and the pixel coordinates of the finger or control component in the picture captured by the camera 1 is obtained, giving the correspondence between the image coordinates of the finger or control component and the actual click position.
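One common way to realize such a four-corner calibration is to fit a perspective (homography) mapping between the camera image and the display area. The sketch below, using OpenCV, is an illustrative assumption of how this correspondence could be computed and applied; it is not prescribed by the utility model:

```python
# Map the finger position seen by the camera (pixel coordinates) to the
# display-area position actually clicked, using the four calibration corners.
import cv2
import numpy as np

def build_calibration(camera_pts, display_pts):
    """camera_pts / display_pts: the four corresponding corner points (4x2)."""
    src = np.float32(camera_pts)
    dst = np.float32(display_pts)
    return cv2.getPerspectiveTransform(src, dst)   # 3x3 homography

def camera_to_display(H, finger_xy):
    """Convert one finger position from camera-image to display coordinates."""
    p = np.float32([[finger_xy]])                  # shape (1, 1, 2) for OpenCV
    return cv2.perspectiveTransform(p, H)[0, 0]
```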
Preferably, the augmented reality device further comprises an angular velocity sensor or an acceleration sensor (not shown) for exciting the image display screen 2 to change the displayed information according to the movement of the user's head, and the angular velocity sensor or acceleration sensor is in data connection with the camera 1 and the data processor. With this sensor, the user can slide the virtual features up, down, left and right by raising the head, lowering the head, turning the head left or turning the head right. The data processor can position, integrate and compute on the image information from the camera 1 together with the detection data from the angular velocity sensor or acceleration sensor, so that the user's intention can be judged and the displayed information of the image display screen 2 changed accordingly. In the scheme where the camera 1, image display screen 2 and data processor are the camera, display screen and processor of the smartphone 100, the angular velocity sensor is the gyroscope of the smartphone 100 and the acceleration sensor is the acceleration sensor of the smartphone 100.
Specifically, in this embodiment, sliding the virtual features up, down, left and right by raising the head, lowering the head, turning the head left and turning the head right is implemented as follows: the angular velocity sensor (optionally combined with data from the gravitational acceleration sensor) detects the change in angular acceleration of the user's head in the vertical direction, and the rotation vector of the head direction is determined. The angular velocity sensor continuously monitors the change in angular acceleration; when the angular acceleration in a certain direction suddenly changes by a large amount, the user is recognized as having moved the head in that direction, and the corresponding operation of sliding the virtual features up, down, left or right is triggered. To improve the recognition rate, the threshold for triggering these operations can be obtained by machine learning of the user's typical angular acceleration changes over long-term use; alternatively, before use, the user performs the head-raising, head-lowering, left-turn and right-turn actions once under voice guidance from the augmented reality device, and the device records the user's acceleration jump values as the thresholds for triggering the corresponding events.
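A hedged sketch of this threshold test is given below; the axis conventions and the numeric thresholds are assumed example values (in practice they would come from the per-user calibration or learning described above):

```python
# Classify a sudden head motion from gyroscope readings and map it to a
# slide direction for the displayed virtual features.
def classify_head_gesture(pitch_rate, yaw_rate, pitch_threshold=1.5, yaw_threshold=1.5):
    """pitch_rate: rad/s, positive = head tilting up; yaw_rate: positive = turning left."""
    if pitch_rate > pitch_threshold:
        return "slide_up"
    if pitch_rate < -pitch_threshold:
        return "slide_down"
    if yaw_rate > yaw_threshold:
        return "slide_left"
    if yaw_rate < -yaw_threshold:
        return "slide_right"
    return None                      # no gesture recognized
```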
Preferably, to further reduce the false-triggering rate of the up, down, left and right sliding operations on the virtual features, the triggering may be gated by a remote controller; that is, the augmented reality device further includes a remote controller for switching the angular velocity sensor on and off, and the remote controller is connected to the data processor in a data transmission manner. The remote controller may be shaped like a rectangular keyboard, a mouse, a bracelet or a handle. In a specific application, by pressing a specific key on the remote controller, the remote controller sends a signal to the mobile phone so that raising, lowering and turning the head left or right controls the sliding direction of the virtual features. The data transmission mode of the remote controller may be Wi-Fi, Bluetooth, ZigBee, NFC, infrared and the like. Wi-Fi (Wireless Fidelity) is a technology that realizes data transmission over a wireless local area network. ZigBee is a low-power local-area-network protocol based on the IEEE 802.15.4 standard; according to the international standard, ZigBee is a short-range, low-power wireless communication technology. NFC (Near Field Communication) is a short-range, high-frequency radio technology operating at 13.56 MHz within a distance of about 10 cm.
Preferably, the remote controller can be further configured to activate the image display screen 2 to change the display information, so that in a specific application, the up-sliding, down-sliding, left-sliding, and right-sliding operations of the virtual feature can be directly implemented by manipulating the keys on the remote controller.
Preferably, the remote controller is further provided with a voice input structure for exciting the image display screen 2 to change the display information and a voice control switch for controlling the on/off of the voice input structure. The voice input structure may specifically be a microphone. The voice control switch can be specifically a button or a knob or a touch switch and the like. In specific application, after the voice control switch triggers the voice input structure to be started, a user speaks a corresponding voice command to excite the image display screen 2 to change display information.
Preferably, a Software Development Kit (SDK) is provided within the data processor for development by developers. The augmented reality device of this embodiment is thus provided with a software development kit, which facilitates development and solves setting problems such as making virtual features coincide with real objects.
Preferably, the optical path conversion member 4 includes a corner reflection member 41 and a transmission member 42 disposed between the corner reflection member 41 and the half mirror 3. In a specific application, the image display screen 2 emits light to project onto the corner reflecting member 41, the corner reflecting member 41 reflects the light projected onto the corner reflecting member 41 from the image display screen 2 onto the transmitting member 42, the light projected onto the transmitting member 42 from the corner reflecting member 41 can be projected onto the transflective mirror 3 through the transmitting member 42, and the transflective mirror 3 reflects the light projected onto the transflective mirror 3 through the transmitting member 42 onto the human eye 5 of a user, so that the user can see virtual features projected by the image display screen 2, the corner reflecting member 41 and the transmitting member 42 through the transflective mirror 3; in addition, light in the external real environment can be projected to human eyes 5 of a user through the transflective lens 3, so that the user can see the external real environment through the transflective lens 3, and thus, the user can see an integrated scene of the virtual feature and the external real environment through the transflective lens 3.
In this embodiment, in the optical path formed by the image display screen 2, the corner reflection member 41, the transmission member 42, and the half-reflecting and half-transmitting mirror 3, there is no need to set a binocular partition, there is no need to perform screen division processing, there is no need to adjust the interpupillary distance, and the compatibility is strong.
Preferably, the corner reflecting member 41 is a right-angle prism or a total reflecting member composed of a plurality of reflecting mirrors. The corner reflecting member 41 reflects light onto the transmitting member 42 preferably at an incident direction of 45 °; of course, in a specific application, the light reflected by the corner reflecting member 41 to the transmitting member 42 may not be the exact incident direction of 45 °. The optical path length of the corner reflecting member 41 can be optimally selected depending on the parameters of the transmissive member 42 and the distance required for imaging the dummy feature.
Preferably, the transmission member 42 is a fresnel lens, which is lightweight and low cost. Of course, in a specific application, the transmission member 42 may also be configured in other configurations, such as a plano-convex lens or a magnifying lens group composed of a plurality of lenses, and the use of the magnifying lens group composed of a plurality of lenses may facilitate manual focusing during the use of the user. Preferably, the focal length of the fresnel lens or the plano-convex lens or the magnifying lens group is adjusted to 180mm ± 10mm according to the adjustment of the distance from the corner reflecting member 41.
Preferably, an avoidance space for accommodating glasses worn by the user is provided between the half-reflecting, half-transmitting mirror 3 and the human eyes 5; and/or the half-reflecting, half-transmitting mirror 3 is inclined so that the angle between it and the vertical plane is greater than 45° and less than 90°; and/or the angle between the light projected onto the half-reflecting, half-transmitting mirror 3 by the light path conversion component 4 and the mirror is 45° ± 5°. Thanks to the avoidance space, a user who wears glasses still has a certain clearance between the glasses and the half-reflecting, half-transmitting mirror 3 after putting on the augmented reality device of this embodiment, so such a user can use the device well and without a feeling of pressure, which greatly improves wearing comfort. Setting the angle between the half-reflecting, half-transmitting mirror 3 and the vertical plane to more than 45° and less than 90° creates an inclined arrangement, with the image display screen 2 placed vertically above the half-reflecting, half-transmitting mirror 3 and the transmission member 42 placed transversely; in this way most of the weight of the head-mounted device body is naturally distributed onto the user's forehead, so the pressure borne by the bridge of the nose and the ears is greatly reduced (roughly only the weight of the half-reflecting, half-transmitting mirror 3 needs to be borne), improving the comfort of using the device. In addition, when the user uses the augmented reality device, other people cannot see the virtual features displayed on the transflective mirror 3 unless they are directly below it and within a certain distance, so others cannot see the virtual content seen by the user and the user's privacy is protected.
Preferably, in the present embodiment, the image display panel 2 and the corner reflecting member 41 are both located above the transmitting member 42, the corner reflecting member 41 is located on the side far away from the transflective mirror 3, and the image display panel 2 is located on the side close to the transflective mirror 3; that is, the smartphone 100 and the corner reflecting member 41 are both positioned above the transmitting member 42, the corner reflecting member 41 is positioned on the side close to the human eye 5, and the smartphone 100 is positioned on the side away from the human eye.
Preferably, the angle between the light projected onto the half-reflecting, half-transmitting mirror 3 by the light path conversion component 4 and the mirror is 45°. At this angle the reflection of the half-reflecting, half-transmitting mirror 3 is strongest, so that even with a mirror of higher transmittance (lower reflectivity) the virtual features seen are clearer (brighter, with higher contrast) while the external real environment seen through it is not too dark. Moreover, a virtual feature projected at 45° appears perpendicular to the line of sight of the human eyes 5, which feels more natural to the user; nevertheless, the angle between the half-reflecting, half-transmitting mirror 3 and the optical axis can be adjusted according to the specific situation. In a specific application, half-reflecting, half-transmitting mirrors 3 with different reflectivities and transmittances can be used depending on the brightness of the image display screen 2 and the use environment.
Preferably, the surface of the half-reflecting and half-transmitting mirror 3 facing the human eye 5 can be a plane surface, a cambered surface or other curved surfaces.
By adjusting the focal length of the transmission member 42, the optical path length is effectively increased, so that the virtual features can be displayed at the focal plane of the transmission member 42, overlapping specific features of the external real environment; this solves the problem that the half-reflecting, half-transmitting mirror 3 cannot be brought into focus when viewed at close range.
The augmented reality equipment that this embodiment provided has following beneficial effect:
1) almost all the smart phones 100 on the market can be compatible, and the smart phones 100 can be replaced at any time.
2) The optical path system is a monocular system shared by both eyes; the data processor does not need to perform split-screen computation, and displaying content on a single screen saves computing power and energy. The processor of the smartphone 100 can be used directly, without any other software system or additional computing device, and without purchasing a separate power supply or computing processor; the user can operate his or her smartphone 100 in AR mode, where applications behave much as they do in the real environment.
3) When the user uses the augmented reality device, weight is mainly concentrated at the forehead, and pressure at the bridge of the nose and at the ears is less.
4) Through the specific optical path design, the lengthened optical path images the virtual features at a distance in front of the human eyes 5 that is within the reach of a fingertip, so interaction between the user and the virtual features is more natural and the AR effect is more vivid.
5) When the user watches the virtual features by using the augmented reality device, other users can not see the virtual content seen by the user, and the use privacy of the user is effectively protected.
6) After the smart phone 100 with the SLAM function is used, the positioning capability (6-degree-of-freedom space positioning) of Inside-out can be provided, so that the virtual feature can be as if it is in the external real environment, and the user can walk to different angles to view the virtual feature in the external real environment.
7) Simple structure and low cost.
Example two:
referring to fig. 1 and fig. 2 together, the augmented reality device provided in this embodiment is mainly different from the first embodiment in that: in the first embodiment, the image display screen 2 is a display screen of the smart phone 100, the camera 1 is a rear camera 1 of the smart phone 100, and the data processor is a processor of the smart phone 100; in this embodiment, the image display screen 2, the camera 1 and the data processor are independent display screens, independent cameras 1 and independent processors which are independently arranged, and the independent display screens and the independent cameras 1 are in data connection with the independent processors, that is, the image display screen 2, the camera 1 and the data processor in the first embodiment are integrally designed on the same mobile device (the smart phone 100), and the image display screen 2, the camera 1 and the data processor in the first embodiment are separately and independently arranged and independently installed. Augmented reality equipment includes head-mounted device and the host computer that can supply the user to dress, and independent camera 1, independent display screen, half anti half mirror 3 and light path conversion components 4 all set up on the head-mounted device, and independent treater sets up on the host computer. Of course, in certain applications, the independent processor may be used by connecting to a computer through a wired connection or a wireless connection.
Preferably, in the present embodiment, the image display screen 2, the camera 1 and the corner reflection member 41 are all located above the transmission member 42, the corner reflection member 41 is located at a side close to the transflective mirror 3, and the image display screen 2 is located at a side far from the transflective mirror 3, that is: the corner reflecting member 41 is positioned on the side away from the human eyes 5, and the image display screen 2 is positioned on the side close to the human eyes; and the image display screen 2 and the camera 1 are respectively located at both sides of the corner reflecting member 41.
In addition to the above differences, other parts of the augmented reality device provided in this embodiment may be optimally designed with reference to the first embodiment, and will not be described in detail herein.
Example three:
Like the first embodiment, in the augmented reality device provided by this embodiment the image display screen 2 is the display screen of the smartphone 100 and the data processor is the processor of the smartphone 100; the main difference from the first embodiment is the arrangement of the camera 1, specifically as follows. Referring to fig. 1, in the first embodiment the camera 1 is the rear camera of the smartphone 100, the rear camera is a three-dimensional camera, and the smartphone 100 has a SLAM positioning function. In this embodiment, referring to fig. 3 and 4, the camera 1 is a combination of the front camera 11 of the smartphone 100 and a wide-angle (or fisheye) lens 12 with an adjustable position; because the relative position of the wide-angle (or fisheye) lens 12 is adjustable, it can be used in combination with the front camera 11 of any smartphone 100. The augmented reality device of this embodiment is therefore suitable for smartphones 100 without a SLAM positioning function. Its display principle is essentially the same as that of the first embodiment: the image display screen 2 (the display screen of the smartphone 100) emits light toward the corner reflection component 41 (the reflector), which is reflected by the corner reflection component 41, transmitted by the transmission component 42 (the Fresnel lens) and projected onto the transflective mirror 3; looking forward, the human eyes 5 see an integrated scene in which the virtual features on the transflective mirror 3 are superimposed on the external real environment.
Preferably, the present embodiment further differs from the first embodiment as follows. In the first embodiment, the image display panel 2 and the corner reflection member 41 are both located above the transmission member 42, the corner reflection member 41 is on the side far away from the transflective mirror 3 and the image display panel 2 on the side near it; that is, the smartphone 100 and the corner reflecting member 41 are both above the transmitting member 42, with the corner reflecting member 41 on the side close to the human eye 5 and the smartphone 100 on the side away from it. In this embodiment, the image display screen 2, the wide-angle (or fisheye) lens 12 and the corner reflection member 41 are all located above the transmission member 42, the corner reflection member 41 is on the side close to the transflective lens 3, the image display screen 2 is on the side far away from it, and the wide-angle (or fisheye) lens 12 is between the smartphone 100 and the corner reflection member 41; that is, the smartphone 100, the wide-angle (or fisheye) lens 12 and the corner reflecting member 41 are all above the transmissive member 42, the smartphone 100 is on the side close to the human eye 5, the corner reflecting member 41 on the side away from it, and the wide-angle (or fisheye) lens 12 between them. Because the camera 1 of this embodiment combines the front camera 11 of the smartphone 100 with the wide-angle (or fisheye) lens 12, and the smartphone 100 is moved rearward, adding the wide-angle (or fisheye) lens 12 between the front camera 11 and the corner reflection component 41 allows a wider external space to be captured, so the spatial range in which the finger can operate is wider; at the same time, moving the smartphone 100 rearward shifts the overall center of gravity of the augmented reality device rearward, reducing the perceived weight on the user's head when the device is worn.
In addition to the above differences, other parts of the augmented reality device provided in this embodiment may be optimally designed with reference to the first embodiment, and will not be described in detail herein.
Example four:
similar to the first embodiment and the third embodiment, the image display screen 2 of the augmented reality device provided in this embodiment is a display screen of the smart phone 100. The main differences between this embodiment and the first and third embodiments are: referring to fig. 1, in the first embodiment, the camera 1 is a rear camera of the smartphone 100, the rear camera is a three-dimensional camera, the smartphone 100 has a SLAM positioning function, and the data processor is a processor of the smartphone 100; referring to fig. 3 and 4, in the third embodiment, the camera 1 is a combination of a front camera 11 and an adjustable wide-angle (or fish-eye) lens 12 of the smart phone 100, and the data processor is a processor of the smart phone 100; in the present embodiment, as shown in fig. 5, instead of using the camera of the smartphone 100 itself, a sensing operation device 6 is provided outside the corner reflecting member 41 (mirror), the camera 1 is a wide-angle or fisheye camera provided on the sensing operation device 6, and the data processor includes the processor of the smartphone 100 and the processor of the sensing operation device 6. The sensing operation device 6 communicates with the smart phone 100 in a wireless wifi or bluetooth or wired connection manner. The sensing operation device 6 is also provided with an acceleration sensor, an angular acceleration sensor (gyroscope), or a geomagnetic sensor. The sensing arithmetic device 6 is provided with a processor which can calculate and position the data of the image pickup and the induction, thereby realizing the SLAM positioning function, realizing the image pickup, the sensing and the operation on the same circuit board, and reducing the problem of untimely positioning data caused by information transmission delay. An independent power supply can be arranged on the sensing and operation device 6. It can convey information such as camera shooting, sensing, operational positioning, etc. to the smartphone 100. The display principle of the present embodiment is similar to that of the first embodiment and the third embodiment: the image display screen 2 (the display screen of the smart phone 100) emits light toward the corner reflection member 41 (the reflector), is reflected by the corner reflection member 41, is transmitted by the transmission member 42 (the fresnel lens), and is projected onto the transflective mirror 3, and the human eyes 5 look forward to see an integrated scene obtained by superimposing virtual features on the transflective mirror 3 with an external real environment.
Preferably, in this embodiment, the sensing operation device 6 is further provided with a depth camera for performing three-dimensional space object recognition or three-dimensional gesture interaction. The obtained three-dimensional point cloud data or depth map data is transmitted to the smartphone 100 through the operation of the sensing operation device 6. The embodiment can quickly and simply enable the traditional smart phone 100 to realize AR interaction and positioning based on real space positions immediately after connection, the AR interaction experience is more real after space positioning, each surface of the virtual features can be seen, and the virtual features can interact with users in three space directions. In this embodiment, the above-mentioned operations of positioning and three-dimensional point cloud are performed on the sensing operation device 6, and the smart phone 100 can be dedicated to the operations of AR effect display and rendering, without increasing the amount of operations and power consumption. The sensing operation device 6 is only used as a peripheral of an AR input device, is just like a traditional Bluetooth keyboard mouse, can be compatible with all the smart phones 100, and enables users to realize AR interaction with the lowest cost, the least change and no need of upgrading.
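The kind of payload the sensing operation device 6 might transmit to the smartphone 100 over Wi-Fi, Bluetooth or a wired link can be pictured as a pose-plus-point-cloud packet. The sketch below is purely illustrative; the field names and structure are assumptions, not a protocol defined by the utility model:

```python
# Illustrative data structure for one message from the sensing/computing
# device 6 to the smartphone 100: a timestamped 6-DoF pose from SLAM plus
# optional three-dimensional point-cloud data from the depth camera.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SensorPacket:
    timestamp_ms: int
    position: Tuple[float, float, float]                 # x, y, z in metres
    orientation: Tuple[float, float, float, float]       # quaternion (w, x, y, z)
    point_cloud: List[Tuple[float, float, float]] = field(default_factory=list)
```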
In addition to the above differences, other parts of the augmented reality device provided in this embodiment may be optimally designed with reference to the first embodiment or the third embodiment, and will not be described in detail herein.
Example five:
the augmented reality device provided in this embodiment is mainly different from the first, second, third, and fourth embodiments in that: in the first, second, third, and fourth embodiments, the virtual feature displayed on the image display screen 2 is a virtual button, a virtual control interface, or a virtual icon, and the spatial interaction between the user and the virtual feature is the click control of the user on the virtual button, the virtual control interface, or the virtual icon; in this embodiment, the virtual feature displayed by the image display screen 2 is a virtual model, and the data processor can identify, locate and calculate the image pickup information of the camera 1 and control the image display screen 2 to output the virtual model in different states according to the calculation result, so as to realize multi-directional viewing or control of the virtual model by the user, that is, the spatial interaction between the user and the virtual feature is multi-directional viewing or control of the virtual model by the user. The virtual model may be a virtual model of any real object, such as a virtual model of a person or animal or car or food or weapon or box or office or living goods, etc. The multi-directional viewing of the virtual model by the user is specifically that the user can view different parts of the virtual model in all directions of the space and the solid, as if the virtual model is actually in front of the user. The user's manipulation of the virtual model specifically includes touching, clicking, rotating, etc. of the virtual model by the user.
Preferably, the mode of projecting the virtual model onto the semi-reflecting and semi-transmitting mirror 3 is as follows: the data processor stimulates the image display screen 2 to display the set three-dimensional virtual model in a non-black mode at the corresponding position according to the shooting information of the camera 1, and enables other positions on the image display screen 2 to display in a black color. The display content of the image display screen 2 is converted by the light path conversion component 4 and then projected onto the semi-reflecting and semi-transmitting mirror 3, and a user only sees the non-black virtual model on the image display screen 2 but does not see the black content of the image display screen 2, so that the user sees that the virtual model is superposed in the external real environment, and the user feels that the virtual model is just as if the virtual model really exists in the external real environment and looks like the virtual model is in front of the user.
Specifically, the real distance of the virtual models from the user is related to the focal length of the transmissive member 42, and for a specific application, the focal length of the transmissive member 42 can be set to a value such that the virtual models are displayed as if at a distance that can be touched by the user's finger (the distance that the finger can touch more conveniently is 0.3m to 0.65m), which is beneficial for realizing the spatial interaction between the user and the virtual models.
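As an illustrative calculation (a sketch with assumed but representative numbers; the utility model does not prescribe these exact values), the transmission member 42 can be treated as a thin lens of focal length $f$ with the folded display at object distance $d_o < f$, so the virtual image distance follows the thin-lens relation:

$$\frac{1}{f} = \frac{1}{d_o} + \frac{1}{d_i} \quad\Rightarrow\quad |d_i| = \frac{f\, d_o}{f - d_o}.$$

For example, with $f \approx 180\,\mathrm{mm}$ (the focal length quoted earlier for the Fresnel lens) and an assumed $d_o \approx 129\,\mathrm{mm}$, one gets $|d_i| \approx 180 \times 129 / 51 \approx 455\,\mathrm{mm} \approx 0.46\,\mathrm{m}$, which lies within the 0.3 m to 0.65 m touch range mentioned above.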
Preferably, the user's multi-directional viewing of the virtual model is implemented as follows: when the data processor recognizes that the relative azimuth of the camera 1's viewpoint has changed, it re-identifies, re-positions and re-computes, and drives the image display screen 2 to display the image of the three-dimensional virtual model at the corresponding azimuth, so that the user sees a different face of the model from each viewing angle; if the user walks a full circle around the model, every face of it over 360° can be seen, so the virtual model appears as if it really were in the external real environment, and the user can walk to different angles to view it there.
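A minimal sketch of the re-rendering step is given below; the 4x4 world-pose matrices and the renderer hook are illustrative assumptions (how the pose is obtained, e.g. via SLAM, is described elsewhere in this document):

```python
# When the camera pose reported by the positioning system changes, recompute
# the model-to-camera transform so the model is redrawn from the new viewpoint
# and the user sees the face of the model corresponding to the new azimuth.
import numpy as np

def model_view_matrix(camera_pose_world, model_pose_world):
    """Return the model's pose expressed in the camera frame (both 4x4)."""
    # view = inverse(camera pose in world) @ model pose in world
    return np.linalg.inv(camera_pose_world) @ model_pose_world

def update_model_view(renderer, camera_pose_world, model_pose_world):
    """Re-render whenever the camera pose (and hence the viewing azimuth) changes."""
    renderer(model_view_matrix(camera_pose_world, model_pose_world))
```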
Preferably, the user manipulates the virtual model in the following manner: the user can see that the virtual model is presented in front of the user through the half-reflecting and half-transmitting mirror 3, the user extends a finger to a virtual button or a virtual control interface or a virtual icon which the user wants to click, or extends the finger to a corresponding control part on a control part, the camera 1 can shoot the finger extended by the user or the control part operated by the user, the control part can be a handle, a remote controller, a keyboard, a mouse and the like, the data processor identifies the finger extended by the user or the control part operated by the user shot by the camera 1, and obtains the plane position (the coordinate of an XOY plane) of the finger or the control part as the selection of controlling the virtual model. And triggering a control event after recognizing that the position coordinates of the finger or the control component stay above the virtual model for a set time (such as 2s), just like controlling the virtual model in the touch screen.
In addition to the above differences, other parts of the augmented reality device provided in this embodiment may be optimally designed with reference to the first embodiment, the second embodiment, the third embodiment, or the fourth embodiment, and will not be described in detail herein.
The above description is only exemplary of the present invention and is not intended to limit its scope; any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
Claims (10)
1. An augmented reality device, characterized by comprising:
a camera for shooting an external real environment;
an image display screen for displaying virtual features;
a semi-reflecting and semi-transmitting mirror for enabling human eyes to view an integrated scene obtained by superposing the virtual features on the external real environment, the semi-reflecting and semi-transmitting mirror being of a monocular structure shared by both eyes;
a light path conversion component for projecting the virtual features displayed on the image display screen onto the semi-reflecting and semi-transmitting mirror; and
a data processor in data connection with the camera and the image display screen, for identifying, positioning and calculating the shooting information of the camera and controlling the image display screen to change the displayed information according to the calculation result, so as to realize spatial interaction between the user and the virtual features.
2. The augmented reality device of claim 1, wherein the virtual feature displayed on the image display screen is a virtual button, a virtual control interface or a virtual icon, and the data processor is capable of identifying, positioning and calculating the shooting information of the camera and, according to the calculation result, controlling the image display screen to output the virtual button, virtual control interface or virtual icon as different virtual features, so as to realize click control of the virtual button, virtual control interface or virtual icon by the user; or
the virtual feature displayed on the image display screen is a virtual model, and the data processor is capable of identifying, positioning and calculating the shooting information of the camera and controlling the image display screen to output the virtual model in different states according to the calculation result, so that the user can view or manipulate the virtual model from multiple directions.
3. The augmented reality device of claim 1 or 2, further comprising an angular velocity sensor or an acceleration sensor for exciting the image display screen to change the displayed information in dependence on the movement of the user's head, the angular velocity sensor or acceleration sensor being in data connection with the camera and the data processor.
4. The augmented reality device of claim 3, further comprising a remote controller for controlling the angular velocity sensor or the acceleration sensor to switch and/or for activating the image display screen to change the displayed information, the remote controller being in data transmission connection with the data processor.
5. The augmented reality device of claim 4, wherein the remote controller is provided with a voice input structure and a voice control switch for controlling the voice input structure to be turned on or off.
6. The augmented reality device of claim 1 or 2, wherein a software development kit is provided within the data processor for development by a developer.
7. The augmented reality device of claim 1, wherein the light path conversion component includes a corner reflecting member and a transmissive member disposed between the corner reflecting member and the semi-reflecting and semi-transmitting mirror, the image display screen and the corner reflecting member both being positioned above the transmissive member, wherein
the corner reflecting member is positioned on the side close to the semi-reflecting and semi-transmitting mirror and the image display screen is positioned on the side far from the semi-reflecting and semi-transmitting mirror; or the corner reflecting member is positioned on the side far from the semi-reflecting and semi-transmitting mirror and the image display screen is positioned on the side close to the semi-reflecting and semi-transmitting mirror.
8. The augmented reality device of claim 7, wherein the corner reflecting member is a right-angle prism or a total reflection member composed of a plurality of mirrors; and/or the transmissive member is a plano-convex lens, a Fresnel lens or a magnifying lens group composed of a plurality of lenses.
9. The augmented reality device of claim 1, 2, 7 or 8, wherein an avoidance space for accommodating glasses worn by the user is provided between the semi-reflecting and semi-transmitting mirror and the human eyes; and/or the semi-reflecting and semi-transmitting mirror is obliquely arranged such that the included angle between it and the vertical plane is greater than 45 degrees and less than 90 degrees; and/or the included angle between the light projected onto the semi-reflecting and semi-transmitting mirror by the light path conversion component and the semi-reflecting and semi-transmitting mirror is 45 ± 5 degrees.
10. The augmented reality device of claim 1, 2, 7 or 8, wherein the image display screen is the display screen of a smartphone, the camera is the rear camera of the smartphone, and the data processor is the processor of the smartphone; or the image display screen, the camera and the data processor are a mutually independent display screen, camera and processor, the independent display screen and the independent camera being in data connection with the independent processor; or the image display screen is the display screen of a smartphone, the camera is a combination of the front camera of the smartphone and a position-adjustable wide-angle lens or fish-eye lens, and the data processor is the processor of the smartphone; or the image display screen is the display screen of a smartphone, the camera is a wide-angle lens or fish-eye lens on a sensing and computing device, and the data processor comprises the processor of the smartphone and the processor of the sensing and computing device.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201720273433.3U CN206906983U (en) | 2017-03-20 | 2017-03-20 | Augmented reality equipment |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201720273433.3U CN206906983U (en) | 2017-03-20 | 2017-03-20 | Augmented reality equipment |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN206906983U true CN206906983U (en) | 2018-01-19 |
Family
ID=61295185
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201720273433.3U Active CN206906983U (en) | 2017-03-20 | 2017-03-20 | Augmented reality equipment |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN206906983U (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108398786A (en) * | 2018-03-12 | 2018-08-14 | 深圳市易瞳科技有限公司 | A kind of augmented reality display device |
| CN112634463A (en) * | 2020-12-21 | 2021-04-09 | 上海影创信息科技有限公司 | Size matching augmented reality method and system for AR glasses |
Similar Documents
| Publication | Title | Publication Date |
|---|---|---|
| CN106919262A (en) | Augmented reality equipment | |
| US20220153412A1 (en) | Control method, control system, and smart glasses for first person view unmanned aerial vehicle flight | |
| US10521026B2 (en) | Passive optical and inertial tracking in slim form-factor | |
| US10788673B2 (en) | User-based context sensitive hologram reaction | |
| US12293479B2 (en) | Augmented reality eyewear with 3D costumes | |
| CN111886564B (en) | Information processing device, information processing method, and program | |
| US10474226B2 (en) | Head-mounted display device, computer program, and control method for head-mounted display device | |
| CN106291930A (en) | Head-Mounted Display | |
| KR20180096434A (en) | Method for displaying virtual image, storage medium and electronic device therefor | |
| JP2017102768A (en) | Information processor, display device, information processing method, and program | |
| US20170289533A1 (en) | Head mounted display, control method thereof, and computer program | |
| JP6303274B2 (en) | Head-mounted display device and method for controlling head-mounted display device | |
| JP6996115B2 (en) | Head-mounted display device, program, and control method of head-mounted display device | |
| US12169968B2 (en) | Augmented reality eyewear with mood sharing | |
| KR20190101827A (en) | Electronic apparatus for providing second content associated with first content displayed through display according to motion of external object, and operating method thereof | |
| KR20180109669A (en) | Smart glasses capable of processing virtual objects | |
| JP2018091882A (en) | Head-mounted display device, program, and control method for head-mounted display device | |
| CN206906983U (en) | Augmented reality equipment | |
| CN114115544B (en) | Human-computer interaction method, three-dimensional display device and storage medium | |
| US20250298250A1 (en) | Wearable device and method for displaying user interface related to control of external electronic device | |
| US20250054252A1 (en) | Wearable device for displaying multimedia content provided by external electronic device and method thereof | |
| KR20240142273A (en) | Wearable device for identifying area for displaying image and method thereof | |
| US20250342559A1 (en) | Mobile information terminal and object display method | |
| JP2019053714A (en) | Head-mounted display device and control method for head-mounted display device | |
| US20250046028A1 (en) | Wearable device for identifying area for displaying image and method thereof |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | GR01 | Patent grant | |