CN104749777B - Interaction method for a wearable smart device - Google Patents
Interaction method for a wearable smart device
- Publication number
- CN104749777B CN104749777B CN201310739674.9A CN201310739674A CN104749777B CN 104749777 B CN104749777 B CN 104749777B CN 201310739674 A CN201310739674 A CN 201310739674A CN 104749777 B CN104749777 B CN 104749777B
- Authority
- CN
- China
- Prior art keywords
- micro
- eyes
- interaction method
- infrared
- smart device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/014—Head-up displays characterised by optical features comprising information/image processing systems
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0179—Display position adjusting means not related to the information to be displayed
- G02B2027/0187—Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Optics & Photonics (AREA)
- User Interface Of Digital Computer (AREA)
- Eye Examination Apparatus (AREA)
Abstract
An interaction method for a wearable smart device, comprising: providing a wearable smart device; providing a readable object, the readable object carrying an electronic tag; and, when the position of the eyes matches the position of the electronic tag, a central data center performing the operation matched with the way the eye position changes over time. The invention matches or associates the operator's actions with the position, or the change of position over time, of the virtual image of the graphical control interface, so that the operator's actions are consistent with, or linked to, the visual effect.
Description
Technical field
The present invention relates to the field of smart electronics, and in particular to an interaction method for a wearable smart device.
Background art
A wearable smart device is a portable device that is worn directly on the body or integrated into the user's clothing or accessories. A wearable smart device is more than a piece of hardware: through software support, data interaction and cloud services it delivers powerful functionality, and wearable smart devices will bring great changes to our lives and to how we perceive the world.
Wearable smart devices are regarded as the next focus driving the development of the electronics industry. According to press reports, the global market for wearable smart devices will reach 6 billion US dollars by 2016.
To seize a leading position, major companies have invested heavily in wearable smart device research and launched corresponding products: Apple with its "iWatch", Nike with the "Nike+ FuelBand SE", Adidas with a planned Android-based smartwatch, Macrotellect with the "BrainLink" headband, Sony with the "SmartWatch", Baidu with the "Codoon Bracelet", Disney with the "MagicBand", Shanda with the "GEAK Watch", and Google with "Google Glass".
However, these products all have shortcomings to varying degrees. The main functions of some are limited to computing, navigation, remote-controlled photography, or recording the user's exercise data, and the recorded results are inaccurate. The functions of Google's "Google Glass" are likewise limited to voice-controlled photography, video calls, navigation and web browsing, and because of its defects Google has announced a delay in bringing "Google Glass" to market. At the China Internet Conference on 14 August 2013, Xiaomi CEO Lei Jun said: "I have used many smart wearable devices — I have tried more than ten smart bracelets alone, as well as Google Glass. I was very curious before using these devices, but after studying them carefully I was rather disappointed with the actual experience." Lei Jun further pointed out that the whole industrial chain of wearable smart devices is not yet truly mature, and large-scale adoption will still take some time.
Summary of the invention
The problem addressed by the present invention is to provide an interaction method for a wearable smart device with a high degree of matching and few functional limitations.
To solve the above problem, the present invention provides an interaction method for a wearable smart device, comprising: providing a wearable smart device, the wearable smart device comprising: a device frame; a micro-projector arranged on the device frame, adapted to project a graphical interface onto a beam splitter; a beam splitter arranged on the device frame, adapted to receive the projected graphical interface and form a virtual image of it in the human eye; a retinal position sensing unit arranged on the device frame, adapted to sense the position of the eyes and the way the position changes over time, to convert the way the position changes over time into a corresponding operating instruction, and to convert the position into position data; and a central data center arranged on the device frame, adapted at least to receive the position data and operating instruction and perform the corresponding operation; providing a readable object, the readable object carrying an electronic tag; and, when the position of the eyes matches the position of the electronic tag, the central data center performing the operation matched with the way the position changes over time.
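The interaction loop described here — sense the eye position, check it against the electronic tag's position, then perform the operation matched with how the position changes over time — can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: the normalized field-of-view coordinates, the tolerance radius, the dwell rule and the operation names are all assumptions.

```python
import math

def gaze_matches_tag(gaze_xy, tag_xy, tolerance=0.05):
    """True when the sensed eye position falls within a tolerance
    radius of the electronic tag's position (both expressed in
    normalized field-of-view coordinates)."""
    return math.dist(gaze_xy, tag_xy) <= tolerance

def dispatch(gaze_trace, tag_xy):
    """Map the way the gaze changes over time to an operation,
    but only while the gaze matches the tag position."""
    latest = gaze_trace[-1]
    if not gaze_matches_tag(latest, tag_xy):
        return None
    # A steady dwell on the tag is treated as "select"; anything
    # else while over the tag as "move" (illustrative rules only).
    if all(math.dist(p, latest) < 0.01 for p in gaze_trace[-10:]):
        return "select"
    return "move"

trace = [(0.50, 0.50)] * 12           # simulated fixation samples
print(dispatch(trace, (0.50, 0.50)))  # -> select
```

A real device would feed `dispatch` from the retinal position sensing unit at the sensor's sampling rate rather than from a pre-recorded trace.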
Optionally, the retinal position sensing unit comprises: an infrared light source, adapted to emit infrared light onto the retina of the eye; an infrared image sensor, adapted to receive the infrared light reflected by the retina, form an image of the retina from the reflected infrared light, and determine from that image, and from the way the image changes over time, the position of the eyes and the way the position changes over time; and a convex lens arranged in the optical path in front of the infrared image sensor, the convex lens being configured to move along the optical path and adapted to converge the infrared light reflected by the retina.
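Determining the eye position "from the image and the way the image changes over time" amounts to tracking where the bright retinal reflex sits in successive infrared frames. A minimal sketch, assuming the frame is a small grayscale array and using an intensity centroid as the position estimate (the frame size and pixel values are invented for illustration):

```python
import numpy as np

def retina_centroid(frame):
    """Estimate the retinal image position as the intensity
    centroid of an infrared frame (rows x cols, values 0..255)."""
    total = frame.sum()
    ys, xs = np.indices(frame.shape)
    return (float((ys * frame).sum() / total),
            float((xs * frame).sum() / total))

def eye_displacement(frame_t1, frame_t2):
    """Displacement of the retinal image between times T1 and T2,
    which the sensing unit converts into eye position data."""
    y1, x1 = retina_centroid(frame_t1)
    y2, x2 = retina_centroid(frame_t2)
    return (y2 - y1, x2 - x1)

f1 = np.zeros((8, 8)); f1[3, 3] = 255.0  # bright retinal reflex at T1
f2 = np.zeros((8, 8)); f2[3, 5] = 255.0  # reflex moved right by T2
print(eye_displacement(f1, f2))          # -> (0.0, 2.0)
```

This is the situation Figs. 4 and 5 illustrate: the same retina imaged at T1 and T2, with the shift between the two imaging results yielding the change of eye position.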
Optionally, the wearable smart device further comprises an optical path system adapted to transmit the infrared light emitted by the infrared light source to the retina of the eye, and to transmit the infrared light reflected by the retina to the infrared image sensor.
Optionally, the micro-projector shares part of the optical path system with the retinal position sensing unit.
Optionally, the optical path system comprises: a first mirror, an infrared filter, a half-reflecting half-transmitting mirror, and the beam splitter. The first mirror is adapted to reflect the infrared light emitted by the infrared light source onto the infrared filter; the infrared filter is adapted to filter the infrared light reflected by the first mirror and the infrared light reflected by the half-reflecting half-transmitting mirror; the half-reflecting half-transmitting mirror is adapted to reflect the infrared light filtered by the infrared filter and to transmit the graphical interface projected by the micro-projector; and the beam splitter is further adapted to reflect the infrared light reflected by the half-reflecting half-transmitting mirror into the eye.
Optionally, the position to which the convex lens moves along the optical path corresponds to the diopter of the eye, so that the infrared image sensor and the convex lens form a sharp image from the infrared light reflected by the retina.
Optionally, the central data center is adapted to receive position data describing where the convex lens has moved along the optical path, and to control the micro-projector according to that position data so that a sharp virtual image of the graphical interface is formed in the eye.
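Finding the lens position that yields a sharp retinal image is a classic autofocus sweep: move the convex lens through candidate positions and keep the one maximizing a focus metric. The sketch below is a hypothetical reconstruction — the contrast metric, the position range and the simulated frames are not from the patent:

```python
def sharpness(image):
    """Simple focus metric: sum of squared differences between
    horizontally adjacent pixels (higher = sharper)."""
    return sum((row[i + 1] - row[i]) ** 2
               for row in image for i in range(len(row) - 1))

def focus_lens(capture_at, positions):
    """Sweep the convex lens along the optical path and keep the
    position giving the sharpest retinal image; that position is
    the data reported to the central data center (and, since it
    tracks the eye's diopter, usable to correct the projection)."""
    return max(positions, key=lambda p: sharpness(capture_at(p)))

# Simulated captures: position 2 yields the highest-contrast frame.
frames = {
    0: [[10, 10], [10, 10]],
    1: [[10, 40], [40, 10]],
    2: [[0, 255], [255, 0]],
}
best = focus_lens(lambda p: frames[p], [0, 1, 2])
print(best)  # -> 2
```

Because the in-focus lens position varies with the user's diopter, reporting it to the central data center gives the device a per-user vision correction signal, which matches the claim that the micro-projector is controlled according to this position data.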
Optionally, the micro-projector comprises: a micro light source, adapted to provide light for the micro-projector; an image filter, adapted to receive the output light of the micro light source and output an image on demand to the micro projection lens; and a micro projection lens, configured to move along the optical axis of the micro-projector so as to output the image according to variations in the user's focal length. By configuring the micro-projector and the beam splitter to control the density of light rays entering the eye, the wearable smart device can work in either of two modes: an overlay mode, in which the virtual image of the graphical interface formed in the eye is overlaid on the real scene observed by the eye; and a full-virtual-image mode, in which the eye receives only the virtual image of the graphical interface.
Optionally, the ways in which the eye position changes over time at least include: saccade, fixation, smooth pursuit, and blink.
Optionally, the operating instruction at least includes: select, confirm, move, or unlock.
Optionally, the wearable smart device further comprises a position sensor arranged at the front end of the device frame, adapted to sense the position, or the change of position over time, of at least part of the human body, to convert the way the position changes over time into a corresponding operating instruction, and to convert the position into position data.
Optionally, the at least part of the human body includes: a hand, a finger, a fist, an arm, both hands, or several fingers.
Optionally, the ways in which the position of the part of the human body changes over time at least include: click, double-click, or slide.
Optionally, the device frame is fitted with lenses and worn in front of the user's eyes.
Optionally, the wearable smart device further comprises a communication module, the communication module being adapted to exchange information with a mobile phone, fixed-line telephone, computer or tablet computer via Wi-Fi, Bluetooth, GPRS, WAP, HSCSD, EDGE, EPOC, WCDMA, CDMA2000 or TD-SCDMA.
Optionally, the wearable smart device further comprises a local database, or the central data center is adapted to exchange data with a remote database.
Optionally, the local database is called upon, or the data of a remote database is drawn on for support, via Wi-Fi, Bluetooth, GPRS, WAP, HSCSD, EDGE, EPOC, WCDMA, CDMA2000 or TD-SCDMA.
Optionally, the electronic tag is a two-dimensional (QR) code or an image code.
Optionally, the electronic tag is placed beside a keyword in the readable object.
Optionally, the keyword is a scenery keyword or an object keyword.
Optionally, the readable object is a book, a newspaper, or an e-book.
Optionally, the operation at least includes: the central data center retrieving related data according to the keyword, and controlling the micro-projector and the beam splitter to project an image of the related data as a virtual image in the human eye.
Optionally, the operation at least includes: the micro-projector projecting a cursor as a virtual image in the eye, the cursor position corresponding to the position of the eye's retina and the cursor movement corresponding to the movement of the retina; when the cursor position coincides with the position of the electronic tag, the central data center retrieves related data according to the keyword, and controls the micro-projector and the beam splitter to project an image of the related data as a virtual image in the human eye.
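The cursor/tag coincidence test and the keyword lookup it triggers can be sketched as follows. The bounding-box representation of the tag, the sample database and the keyword are all illustrative assumptions:

```python
def cursor_hits_tag(cursor, tag_box):
    """True when the gaze-driven cursor lies inside the electronic
    tag's bounding box (x0, y0, x1, y1), all in normalized view
    coordinates."""
    x, y = cursor
    x0, y0, x1, y1 = tag_box
    return x0 <= x <= x1 and y0 <= y <= y1

def on_gaze(cursor, tag_box, keyword, database):
    """When the cursor coincides with the tag, look the keyword up
    (in a local or remote database) and return the related data to
    be projected as a virtual image; otherwise do nothing."""
    if cursor_hits_tag(cursor, tag_box):
        return database.get(keyword, "no entry")
    return None

db = {"Mount Tai": "summary article about Mount Tai"}  # stand-in database
print(on_gaze((0.52, 0.48), (0.4, 0.4, 0.6, 0.6), "Mount Tai", db))
# -> summary article about Mount Tai
```

In the full device the returned data would be handed to the micro-projector rather than printed, and the database lookup could equally go over the communication module to a remote database.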
Compared with the prior art, the technical solution of the present invention has the following advantages: the present invention provides an interaction method for a wearable smart device that combines the virtual and the real; by sensing the user's eyes and matching the virtual image of the graphical control interface with the position of the eyes, or with the way the eye position changes over time, the operator's actions are made consistent with the visual effect.
Brief description of the drawings
Fig. 1 is a schematic diagram of the wearable smart device of one embodiment of the invention;
Fig. 2 is a schematic diagram of the micro-projector of the wearable smart device of one embodiment of the invention;
Fig. 3 is a schematic diagram of the retinal position sensing unit and optical path system of the wearable smart device of one embodiment of the invention;
Fig. 4 is a schematic diagram of the retinal image received by the retinal position sensing unit of the wearable smart device of one embodiment of the invention at time T1;
Fig. 5 is a schematic diagram of the retinal image received by the retinal position sensing unit of the wearable smart device of one embodiment of the invention at time T2;
Fig. 6 is a schematic diagram of the wearable smart device of another embodiment of the invention;
Fig. 7 is a calibration schematic diagram of the wearable smart device of one embodiment of the invention;
Fig. 8 is a menu-selection schematic diagram of the wearable smart device of one embodiment of the invention;
Fig. 9 is a schematic diagram of the wearable smart device of a further embodiment of the invention;
Fig. 10 is a schematic diagram of the wearable smart device of a further embodiment of the invention;
Fig. 11 is a schematic diagram of the wearable smart device of a further embodiment of the invention;
Figs. 12 and 13 are schematic diagrams of the image sensor of the wearable smart device of one embodiment of the invention obtaining the position of at least part of the human body and converting that position into position data;
Fig. 14 is a schematic diagram of the wearable smart device of a further embodiment of the invention;
Fig. 15 is a schematic flowchart of the interaction method of the wearable smart device of one embodiment of the invention;
Figs. 16 and 17 are schematic diagrams of the interaction method of the wearable smart device of one embodiment of the invention;
Figs. 18 and 19 are schematic diagrams of the interaction method of the wearable smart device of another embodiment of the invention;
Fig. 20 is a schematic diagram of the interaction method of the wearable smart device of a further embodiment of the invention.
Detailed description of embodiments
Existing wearable smart devices are essentially limited to voice-controlled photography, video calls, navigation and web browsing, and their functionality is strongly limited.
Further study of existing wearable smart devices shows that their interactivity is poor: some devices rely on voice to control the launching of programs, or require the operator to operate switches or buttons built into the device. The wearable smart device therefore needs additional voice-control hardware or similar operating hardware, which not only increases hardware cost but also makes the interaction between the wearable smart device and the user poor.
In view of the above, the present invention provides a wearable smart device that combines the virtual and the real: by sensing the user's eyes and matching the virtual image of the graphical control interface with the position of the eyes, or the way the eye position changes over time, the operator's actions are made consistent with the visual effect.
Further, a retinal position sensing unit that performs infrared imaging through retinal reflection is used to achieve gaze tracking, so the eye position can be located accurately; compared with gaze-tracking technologies that monitor the iris and pupil, retinal imaging offers higher accuracy.
Further, embodiments of the invention achieve gaze tracking through retinal reflection of infrared light that is invisible to the human eye, without interfering with the normal working of the eyes.
Further, by optimizing the optical path, embodiments of the invention achieve both virtual-image projection and infrared gaze tracking within a small space, giving excellent product performance and a small volume.
To make the above objects, features and advantages of the present invention clearer and easier to understand, specific embodiments of the invention are described in detail below with reference to the accompanying drawings.
Referring to Fig. 1, Fig. 1 is a schematic diagram of the wearable smart device of one embodiment of the invention, comprising:
a device frame 100;
a micro-projector 110 arranged on the device frame 100, adapted to project a graphical interface onto a beam splitter 120;
a beam splitter 120 arranged on the device frame 100, adapted to receive the projected graphical interface and form a virtual image of it in the human eye;
a retinal position sensing unit 150 arranged on the device frame 100, adapted to sense the position of the eyes and the way the position changes over time, to convert the way the position changes over time into a corresponding operating instruction, and to convert the position into position data;
a central data center 140 arranged on the device frame, adapted at least to receive the position data and operating instruction and perform the corresponding operation.
In one embodiment, the device frame 100 is a spectacle frame, with a laterally extending first support 101 and a first side arm 102 and second side arm 103 extending from the two ends of the first support 101.
When the wearable smart device is worn by a user, the first support 101 is almost parallel to the user's face, and the first support 101 provides a supporting platform for the beam splitter 120, so that the beam splitter can better form the virtual image in the human eye.
The first side arm 102 or the second side arm 103 provides a supporting platform for the retinal position sensing unit 150, the micro-projector 110 and the central data center 140.
As an example, in the present embodiment the micro-projector 110 and the central data center 140 are arranged on the same side arm, below the first side arm 102. It should be noted that in other embodiments the micro-projector 110 and the central data center 140 may be arranged on the second side arm 103, or may be arranged on different side arms; those skilled in the art may choose the positions of the micro-projector 110 and the central data center 140 according to the actual product. As a principle, the micro-projector 110 must be matched to the beam splitter 120 so that the graphical interface can be projected onto the beam splitter 120.
In the present embodiment, the retinal position sensing unit 150 is arranged on the inner side of the first side arm 102. Those skilled in the art will appreciate that the retinal position sensing unit 150 should be placed where it can receive the infrared light reflected by the retina; it can be positioned appropriately for the actual product, and this should not limit the scope of the invention.
It should also be noted that the first support 101 may be fitted with lenses and worn in front of the user's eyes.
Referring to Fig. 2, Fig. 2 is an enlarged view of the micro-projector 110 of the wearable smart device of one embodiment of the invention; the micro-projector 110 comprises:
a micro light source 111, adapted to provide light for the micro-projector 110. As an embodiment, the micro light source 111 may be an LED (Light-Emitting Diode) light source;
an image filter 112, adapted to receive the output light of the micro light source and output an image on demand to the micro projection lens. The image filter 112 can be made partially transparent so as to pass the light output by the micro light source 111 on demand, thereby outputting the desired image. As an embodiment, the image filter 112 may be a liquid crystal display (LCD);
a micro projection lens 113, configured to move along the optical axis of the micro-projector so as to output the image according to variations in the user's focal length. The micro projection lens 113 may be a lens group composed of multiple lenses.
The micro-projector 110 may also include an input/output module to receive data and instructions from the central data center 140 and output the corresponding figure or operating interface as an image accordingly. The projection angle of the micro-projector 110 may also be adjustable, to control the angle of the output image.
Referring again to Fig. 1, the beam splitter 120 arranged on the device frame 100 is adapted to receive the projected graphical interface and form a virtual image of it in the human eye. The beam splitter 120 is connected to the device frame 100 through a beam-splitter support (not labeled); the support is adjustable within a certain angle, suitable for receiving the image output by the micro-projector 110 and forming a virtual image in the user's eye.
As an embodiment, the beam splitter 120 is a flat mirror with a reflectivity of 30% to 70%; in one embodiment, the reflectivity of the beam splitter 120 is about 50%. As another embodiment, the beam splitter 120 is a half-reflecting half-transmitting flat mirror: it reflects the image output by the micro-projector 110, forming a virtual image in the user's eye, while simultaneously letting the user receive light from in front of the beam splitter 120, so that the user receives the virtual image and the real image at the same time.
In other embodiments, the beam splitter 120 may also be a lens group composed of multiple lenses. Those skilled in the art will appreciate that the beam splitter 120 need only receive the projected graphical interface and form a virtual image of it in the human eye; this is specially noted here and should not limit the scope of the invention.
It should be noted that, by configuring the micro-projector 110 and the beam splitter 120 to control the density of light rays entering the human eye, the wearable smart device can work in either of two modes: an overlay mode, in which the virtual image of the graphical interface formed in the eye is overlaid on the real scene observed by the eye; and a full-virtual-image mode, in which the eye receives only the virtual image of the graphical interface.
The retinal position sensing unit 150 arranged on the device frame is adapted to sense the position of the eyes and the way the position changes over time, to convert the way the position changes over time into a corresponding operating instruction, and to convert the position into position data. Specifically, the retinal position sensing unit 150 may sense the position of the eyes and the way the position changes over time by contact-lens methods, electromagnetic coil methods, infrared photoelectric reflection, or infrared video methods.
As an embodiment, referring to Fig. 3, the retinal position sensing unit 150 comprises: an infrared light source 151, adapted to emit infrared light onto the retina of the eye; an infrared image sensor 152, adapted to receive the infrared light reflected by the retina, form an image of the retina from the reflected infrared light, and determine from that image, and from the way the image changes over time, the position of the eyes and the way the position changes over time; and a convex lens 153 arranged in the optical path in front of the infrared image sensor, the convex lens being configured to move along the optical path and adapted to converge the infrared light reflected by the retina.
To make rational use of the device frame, the wearable smart device of this embodiment also includes an optical path system adapted to transmit the infrared light emitted by the infrared light source to the retina of the eye and to transmit the infrared light reflected by the retina to the infrared image sensor, so as to reduce the volume of the wearable smart device.
Specifically, the optical path system comprises: a first mirror 161, an infrared filter 162, a half-reflecting half-transmitting mirror 163, and a second mirror 164. The first mirror 161 is adapted to reflect the infrared light emitted by the infrared light source onto the infrared filter 162; the infrared filter 162 is adapted to filter the infrared light reflected by the first mirror 161 and the infrared light reflected by the half-reflecting half-transmitting mirror 163; the half-reflecting half-transmitting mirror 163 is adapted to reflect the infrared light filtered by the infrared filter 162 and to transmit the graphical interface projected by the micro-projector 110; and the second mirror 164 is adapted to reflect the infrared light reflected by the half-reflecting half-transmitting mirror 163 into the eye 170.
Preferably, to further reduce the number of units of the wearable smart device and to reduce its volume and weight, the micro-projector and the retinal position sensing unit share part of the optical path system.
In one embodiment, referring to Fig. 1 and Fig. 3 together, the micro-projector 110 and the retinal position sensing unit 150 are disposed on the first side arm 102, with the micro-projector 110 facing the transmitting surface of the half-reflecting half-transmitting mirror 163, so that the image projected by the micro-projector 110 is transmitted through the half-reflecting half-transmitting mirror 163. In this embodiment, the spectroscope 120 serves as the second mirror; that is, the spectroscope 120 reflects the image transmitted through the half-reflecting half-transmitting mirror 163 and forms a virtual image in the eye 170.
Meanwhile, the infrared light source 151 in the retinal position sensing unit 150 emits illuminating infrared light, which is reflected by the first mirror 161, passes through the infrared filter 162, strikes the reflecting surface of the half-reflecting half-transmitting mirror 163, and is reflected to the second mirror. In this embodiment the spectroscope 120 serves as the second mirror: the spectroscope 120 reflects the illuminating infrared light onto the retina of the eye 170. The retina reflects the illuminating infrared light back to the spectroscope 120, which reflects it to the reflecting surface of the half-reflecting half-transmitting mirror 163; the half-reflecting half-transmitting mirror 163 reflects the retina-reflected infrared light through the infrared filter 162, after which it is received by the infrared image sensor 152, which forms an image of the retina. For ease of understanding, the illuminating infrared light incident on the retina and the virtual image are indicated by dashed lines, while the infrared light reflected by the retina is indicated by solid lines.
In this embodiment the retinal position sensing unit 150 and the micro-projector 110 share part of the optical path; the micro-projector 110 uses visible light while the retinal position sensing unit 150 uses invisible light, so the two share resources without interfering with each other. This greatly reduces the number of optical units, lightens the wearable smart device, and, through the optimized optical path system, keeps the eye-tracking and virtual-image projection volume small.
It should be noted that the first mirror 161 may be built into the retinal position sensing unit 150 to improve integration. When the first mirror 161 is built into the retinal position sensing unit 150, it should be of reduced size, to avoid affecting the imaging of the retina-reflected infrared light by the half-reflecting half-transmitting mirror 163.
It should also be noted that in other embodiments the retinal position sensing unit 150 and the micro-projector 110 may each use a dedicated optical path. This is noted here specifically and should not limit the scope of the invention.
Referring to Fig. 4, Fig. 4 shows the imaging result of the retina received by the retinal position sensing unit 150 at time T1; the image of the retina is shown at 171 in Fig. 4. Referring to Fig. 5, Fig. 5 shows the imaging result of the retina received by the retinal position sensing unit 150 at time T2; the image of the retina is shown at 172 in Fig. 5. In this embodiment, the eyes are in a gazing state in Fig. 4 and have turned to the left in Fig. 5.
Comparing the retinal images of Fig. 4 and Fig. 5, it can be determined that the retina has moved to the right, and therefore that the eyes have turned to the left. Using the on-board clock of the retinal position sensing unit 150 together with the positional difference between the retinal images of Fig. 4 and Fig. 5, the speed of eye rotation can also be determined.
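The direction-and-speed inference just described can be sketched in a few lines. This is an illustrative Python sketch, not the patent's implementation: the pixel-to-degree scale factor and the two-sample interface are assumptions, and a real device would calibrate the scale per user.

```python
import math

# Assumed scale: degrees of eye rotation per pixel of retinal image shift.
# A real device would obtain this value by calibration.
DEG_PER_PIXEL = 0.05

def eye_movement(p1, p2, t1, t2, deg_per_pixel=DEG_PER_PIXEL):
    """Estimate eye rotation direction and angular speed from two retinal
    image positions (in pixels) captured at times t1 and t2 (in seconds).

    The retinal image moves opposite to the eye: an image shift to the
    right means the eye turned to the left (as in Figs. 4 and 5).
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    # Eye motion is the negation of the retinal image motion.
    eye_dx, eye_dy = -dx, -dy
    degrees = math.hypot(eye_dx, eye_dy) * deg_per_pixel
    speed = degrees / (t2 - t1)  # degrees per second
    direction = "left" if eye_dx < 0 else "right" if eye_dx > 0 else "none"
    return direction, speed
```

A retinal image that shifts 10 pixels to the right over 0.5 s thus reads as the eye turning left at 1 degree per second under the assumed scale.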
Before actual use, the user may calibrate the eye position change patterns and set personal usage habits. The eye change patterns include at least: saccade, fixation, smooth pursuit, and blink. The operating instructions include at least: select, confirm, move, and unlock. As a demonstration example, fixation is set to double-click, blink is set to click, smooth pursuit is set to move, and saccade is set to noise.
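The configurable mapping from eye-movement patterns to operating instructions might be sketched as follows. The event names, thresholds, and crude classifier are illustrative assumptions that follow the demonstration example above (fixation as double-click, blink as click, smooth pursuit as move, saccade as noise), not the patent's implementation.

```python
# Default mapping per the demonstration example; the user may reconfigure it.
DEFAULT_MAPPING = {
    "fixation": "double-click",
    "blink": "click",
    "smooth_pursuit": "move",
    "saccade": "noise",
}

def classify_eye_event(duration_ms, amplitude_deg, eyes_closed):
    """Crudely classify an eye event from its duration, movement amplitude,
    and whether the eyes closed. Thresholds are placeholder values that a
    real device would calibrate per user (cf. the <1 degree fixation jitter
    and 100-200 ms dwell figures mentioned in the text)."""
    if eyes_closed:
        return "blink"
    if amplitude_deg < 1.0 and duration_ms >= 150:   # near-still gaze
        return "fixation"
    if duration_ms < 80:                             # fast ballistic jump
        return "saccade"
    return "smooth_pursuit"

def to_instruction(event, mapping=DEFAULT_MAPPING):
    return mapping.get(event, "noise")
```

Because the mapping is a plain dictionary, adapting it to a user's habit is a one-line change.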
As a demonstration example, fixation means that the gaze rests on the target object for at least 100-200 milliseconds. It should be noted that the dwell time may be calibrated to personal usage habits, and that during fixation the eyeball is not absolutely still but jitters slightly and continuously, with an amplitude of less than 1°.
It should also be noted that the correspondence between the eye position change patterns and the operating instructions may be configured according to the user's habits. This is noted here specifically and should not limit the scope of the invention.
In another embodiment, considering that the crystalline lens and cornea of different users' eyes have different diopters, the convex lens 153 of the retinal position sensing unit 150 is moved along the optical path to a position corresponding to the diopter of the eye, so that the infrared image sensor 152 and the convex lens 153 form a sharp image from the infrared light reflected by the retina.
It should also be noted that the central data center 140 is adapted to receive the position data of the convex lens 153 as it moves along the optical path and, according to that position data, to control the micro-projector 110 to form a sharp virtual image of the graphical interface in the eye.
In another embodiment, still referring to Fig. 1, the wearable smart device may further be provided with a position sensor 130 at the front of the device frame, adapted to sense the position of at least part of the human body and the way that position changes over time, to convert the change pattern into a corresponding operating instruction, and to convert the position into position data.
In one embodiment, the position sensor 130 may be an acoustic wave reflector or an image sensor. The position sensor 130 obtains, by acoustic or optical principles, the position and action of a hand, finger, fist, arm, both hands, or several fingers, and matches or associates them with a corresponding select, confirm, move, or unlock instruction.
As an example, the position change patterns include at least: a click, double-click, or slide of a finger; a movement or strike of a fist; or a longitudinal swing of an arm, a transverse movement, or a movement toward or away from the operator's face.
As an example, a single finger click is matched or associated with select, a double finger click with confirm, and a finger movement with unlock.
It should be noted that, as those skilled in the art will appreciate, the above examples are merely illustrative; the body part and action used may be configured according to the user's habits. This is noted here specifically and should not limit the scope of protection of the invention.
The central data center 140 is disposed on the device frame and is adapted at least to receive the position data and operating instructions and to perform the corresponding operations.
The central data center 140 may be a processor or controller, for example a central processing unit, or a central processing unit integrated with a graphics processor. The central data center 140 can at least receive the position data and operating instructions from the position sensor 130, control the micro-projector 110 according to the position data and operating instructions to output a corresponding graphical interface matched to the position of the body part, and perform the operation corresponding to the operating instruction.
The central data center 140 is further adapted to exchange data with a remote database, obtaining remote database support via Wi-Fi, Bluetooth, GPRS, WAP, HSCSD, EDGE, EPOC, WCDMA, CDMA2000, or TD-SCDMA.
A battery, such as a lithium battery, a solar cell, or a supercapacitor, is built into the central data center 140 to power the central data center 140.
The wearable smart device may further include a communication module (not shown), built into the device frame 100 or included in the central data center 140, adapted to exchange information with a mobile phone, landline telephone, computer, or tablet computer via Wi-Fi, Bluetooth, GPRS, WAP, HSCSD, EDGE, EPOC, WCDMA, CDMA2000, or TD-SCDMA.
The wearable smart device also includes a local database, which the central data center 140 calls for data support.
In embodiments of the invention, the micro-projector 110 and the spectroscope 120 form a virtual image in the user's eye, and the central data center 140 corrects the position of the virtual image according to the position, and change of position over time, of at least part of the user's body obtained by the position sensor 130, so that the virtual image matches the real image of the body in the human eye and the operator's actions agree with the visual effect.
The present invention also provides a wearable smart device of another embodiment. Referring to Fig. 6, it includes:
a device frame 200;
micro-projectors 210 respectively disposed on both sides of the device frame 200, adapted to project a graphical interface onto the spectroscopes 220;
the spectroscopes 220 respectively disposed on both sides of the device frame, adapted to receive the projected graphical interface and form a virtual image of it in the human eye;
a position sensor 230 disposed at the front of the device frame, adapted to sense the position of at least part of the human body and the way that position changes over time, to convert the change pattern into a corresponding operating instruction, and to convert the position into position data;
retinal position sensing units 250 respectively disposed on both sides of the device frame 200, adapted to sense the position of the eyes and the way it changes over time, to convert the change pattern into a corresponding operating instruction, and to convert the position into position data;
a central data center 240 disposed on the device frame, adapted at least to receive the position data and operating instructions, to adjust the graphical interface according to the position data to match the position of the body part, and to perform the corresponding operations.
For the device frame 200, spectroscopes 220, retinal position sensing units 250, position sensor 230, and central data center 240 of this embodiment, refer to the corresponding description of the previous embodiment.
It should be emphasized that this embodiment has two micro-projectors 210, respectively disposed on the first side arm and the second side arm of the device frame 200, so that images are formed in both the left and right eyes of the user and the imaging has a stereoscopic effect.
Fig. 7 shows the calibration of the wearable smart device of one embodiment of the invention. Specifically, the central data center 140 controls the micro-projector 110 to project a calibration marker 180 as a virtual image in the eye 170, the coordinates of the calibration marker being preset in the central data center 140. The user gazes at the calibration marker 180, and the retinal position sensing unit 150 obtains the retinal image 173 of the gazing eye; from the retinal image 173 and the calibration marker 180, subsequent operations of the user are calibrated.
Similarly, the wearable smart device can calibrate other eye positions and position change patterns, to improve the response accuracy of the wearable smart device.
Fig. 8 shows menu selection with the wearable smart device of one embodiment of the invention. The central data center 140 controls the micro-projector 110 to project a menu interface as a virtual image in the human eye. The user gazes at an icon in the menu interface; as an example, the icon indicated by the arrow in Fig. 8 is the gazed-at icon. While the user gazes at the icon, the retinal position sensing unit 150 obtains the retinal image 174 of the gazing eye and transmits the position of the retina, the image, and the retina's change over time to the central data center 140, which determines the operating instruction from the presets and the earlier calibration data; as an example, it is determined to be a select. The central data center 140 may also project a cursor onto the menu interface according to the position of the eye 170 (this may be the arrow in Fig. 8); the cursor follows the gaze point of the eye 170, to help confirm the target the eye 170 is gazing at.
It should be noted that the preset data may be configured according to the user's preference: for example, a fixation of 1.5 seconds is select, a fixation of 3 seconds is choose, or blinking 3 times is select.
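The dwell-time selection just described can be sketched as a small state machine that accumulates gaze time on the current icon. The 1.5-second threshold follows the example in the text; the per-sample update interface is an illustrative assumption.

```python
class DwellSelector:
    """Tracks how long the gaze stays on a menu icon and fires an
    instruction once a configurable dwell threshold is reached.
    The 1.5 s "select" threshold follows the example in the text."""

    def __init__(self, select_after_s=1.5):
        self.select_after_s = select_after_s
        self.icon = None     # icon currently under the gaze
        self.dwell = 0.0     # accumulated gaze time on that icon

    def update(self, gazed_icon, dt_s):
        """Feed one gaze sample: the icon under the gaze (or None) and the
        time since the previous sample. Returns an instruction or None."""
        if gazed_icon != self.icon:
            # Gaze moved to a different icon: restart the dwell timer.
            self.icon, self.dwell = gazed_icon, 0.0
            return None
        if gazed_icon is None:
            return None
        self.dwell += dt_s
        if self.dwell >= self.select_after_s:
            self.dwell = 0.0
            return ("select", gazed_icon)
        return None
```

Changing the threshold, or adding a second, longer threshold for a different instruction, matches the configurability the text describes.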
The present invention also provides a wearable smart device of another embodiment. Referring to Fig. 9, it includes:
a device frame 300;
a micro-projector 310 disposed on the device frame 300, adapted to project a graphical interface onto a spectroscope 320;
the spectroscope 320 disposed on the device frame, adapted to receive the projected graphical interface and form a virtual image of it in the human eye;
an acoustic wave reflector 330 disposed at the front of the device frame, adapted to sense the position of at least part of the human body and the way it changes over time, to convert the action into a corresponding operating instruction, and to convert the position into position data;
a central data center 340 disposed on the device frame, adapted at least to receive the position data and operating instructions, to adjust the graphical interface according to the position data to match the position of the body part, and to perform the corresponding operations;
a position identifier 350 worn on a finger, the position identifier 350 being adapted to be sensed by the acoustic wave reflector 330, so as to determine the position of the finger and the way it changes over time.
For the device frame 300, micro-projector 310, spectroscope 320, and central data center 340 of this embodiment, refer to the corresponding description of the previous embodiments.
In this embodiment, the position sensor is the acoustic wave reflector 330, and the position identifier 350 is a metal ring, for example a finger ring.
The acoustic wave reflector 330 sends sound waves into a predetermined area. When the metal ring enters the predetermined area, the sound waves are reflected by the metal ring; the acoustic wave reflector 330 receives the position data and operating instruction of the metal ring and sends the position data and operating instruction to the central data center 340. The central data center 340 calibrates the micro-projector 310 or the spectroscope 320 according to the position data, so that the virtual image of the graphical interface is superimposed on the real-image position of the finger in the human eye, and performs the corresponding operation according to the operating instruction.
In one embodiment, the distance between the metal ring and the acoustic wave reflector 330 may be determined according to the following formula:

d = V0t/2

where d is the distance between the metal ring and the acoustic wave reflector 330, t is the round-trip time of the sound wave, and V0 is the speed at which the sound wave propagates in air.
In one embodiment, when the sound wave is ultrasonic,

V0 = 331.45 × (1 + τ/273.15)^(1/2) m/s

where τ is the ambient temperature (in degrees Celsius) at the moment the sound wave is reflected.
The way the position of the metal ring changes over time relative to the acoustic wave reflector 330 may be determined from the Doppler effect, using the formula:

Δf = (2 × V × cos θ / V0) × f

where Δf is the frequency shift detected by the acoustic wave reflector 330, V is the speed of the metal ring relative to the acoustic wave reflector 330, f is the frequency of the sound wave, and θ is the angle between the ring's direction of motion and the three-point line, the three points being the metal ring, the sound-emitting position of the acoustic wave reflector 330, and the detector position of the acoustic wave reflector 330.
In this embodiment, the metal ring may be a ring the user already wears, such as a gold, silver, or platinum ring.
It should also be noted that the number of acoustic wave reflectors 330 in this embodiment may be 1, 2, 3, 4, 6, 11, or any other number.
Preferably, the number of acoustic wave reflectors 330 is 4, disposed at the four positions of upper-left, lower-left, upper-right, and lower-right of the device frame 300, to obtain a larger detection range; a greater number of acoustic wave reflectors 330 can determine the position data and operating instructions of the metal ring more accurately.
Further, by using a ring as the position identifier, this embodiment adds no extra wearing burden on the user while strengthening the detection effect.
Referring to Fig. 10, Fig. 10 is a diagram of the wearable smart device of one embodiment of the invention adjusting the graphical interface to match the position of at least part of the human body and performing the corresponding operation.
The central data center 340 has the user's pre-stored data built in. The central data center 340 obtains the eye position, or its change over time, acquired by the retinal position sensing unit 350 and the position sensor 330 (from the retinal image 375 and related data), together with the position or change of position of at least part of the human body; from the distance data it calculates adjustment data for the graphical interface, and according to the adjustment data controls the micro-projector 310 and the spectroscope 320 to adjust the imaging of the output graphical interface in the eye 370, so that the imaging matches the position of the user's finger.
In another embodiment, the central data center 340 has the user's pre-stored data built in; after obtaining the distance to the metal ring from the acoustic wave reflector 330, it calculates adjustment data for the graphical interface from the distance data and, according to the adjustment data, calibrates the imaging in the human eye of the graphical interface adjusted by the acoustic wave reflector 330, so that the imaging matches the position of the user's finger.
As an example, the micro-projector 310 first projects a target pattern, such as a cross-star pattern, as a virtual image in the user's eye. The user then clicks on the cross-star pattern with a finger. The position sensor (in this embodiment the acoustic wave reflector 330) recognizes the current finger position through the position identifier 350 and performs a one-to-one calibration against the position of the target pattern of the micro-projector 310. Taking 2-D coordinates as an example, the coordinate of the target pattern is (0, 0) and the position sensor recognizes the current finger coordinate as (5, 7); the central data center 340 takes the current finger coordinate (5, 7) transmitted by the position sensor and corrects the data, mapping the current finger coordinate (5, 7) to (0, 0).
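The one-to-one calibration in this example amounts to measuring a constant offset and subtracting it from later readings. A minimal sketch, under the pure-translation assumption implied by the (5, 7) to (0, 0) example (a real device might also correct scale and rotation):

```python
def make_calibration(sensed, target):
    """One-point calibration: the sensor reports the finger at `sensed`
    (e.g. (5, 7)) while the projected cross-star target is at `target`
    (e.g. (0, 0)). Returns a function that maps subsequent sensor
    coordinates into the graphical interface's coordinate frame by
    subtracting the constant offset."""
    ox = sensed[0] - target[0]
    oy = sensed[1] - target[1]

    def correct(point):
        return (point[0] - ox, point[1] - oy)

    return correct
```

After calibration, every finger position the sensor reports is shifted by the same offset before being compared against interface elements.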
At the same time, from the user's pre-stored data built into the central data center 340 and the direction of motion, distance, and speed of the metal ring obtained by the acoustic wave reflector 330, it can be determined whether the user clicked, double-clicked, or slid, and according to the user's pre-stored data built into the central data center 340 the corresponding select, confirm, move, or unlock operation is performed.
It should also be noted that the wearable smart device is also compatible with a voice transfer unit 360, which can send position data and operating instructions to the central data center 340 according to the user's voice commands; the central data center 340 adjusts the output graphical interface and performs the operating instruction according to those voice commands.
The present invention also provides a wearable smart device of another embodiment. Referring to Fig. 11, it includes:
a device frame 400;
a micro-projector 410 disposed on the device frame 400, adapted to project a graphical interface onto a spectroscope 420;
the spectroscope 420 disposed on the device frame 400, adapted to receive the projected graphical interface and form a virtual image of it in the human eye;
a position sensor 430 disposed at the front of the device frame, adapted to sense the position of at least part of the human body and the way that position changes over time, to convert the change pattern into a corresponding operating instruction, and to convert the position into position data, the position sensor 430 being a number of image sensors at different positions;
a central data center 440 disposed on the device frame, adapted at least to receive the position data and operating instructions, to adjust the graphical interface according to the position data to match the position of the body part, and to perform the corresponding operations.
For the device frame 400, micro-projector 410, spectroscope 420, and central data center 440 of this embodiment, refer to the corresponding description of the previous embodiments.
It should be noted that the positions and number of the image sensors may depend on the actual wearable smart device; it is only required that the positions and number of the image sensors suffice to sense the position and action of at least part of the human body, convert the action into a corresponding operating instruction, and convert the position into position data. This is noted here specifically: the positions and number of the image sensors should not limit the scope of the invention.
In one embodiment, the position sensor 430 consists of an image sensor disposed at the upper-left of the device frame 400 and an image sensor disposed at the upper-right of the device frame 400.
Under the control of a synchronous sampling pulse, the upper-left image sensor and the upper-right image sensor acquire image data in parallel at high speed, and the acquisition time is associated with the corresponding image frame as additional information. After parallel processing by the processor integrated in the position sensor 430, the image coordinates and time information of at least part of the human body are obtained. From the image coordinates and time information obtained simultaneously by the upper-left and upper-right image sensors, the processor integrated in the position sensor 430 matches frames by their time tags and determines the spatial coordinates from the image coordinates of the body part at the same moment.
Basic determination methods include the frame difference method, or frame screening combined with a probabilistic method, to detect the way the position of at least part of the human body changes over time.
In one embodiment, the frame difference method subtracts consecutive frames from each other to detect the region in which at least part of the human body has moved. Frame differencing includes two-frame and three-frame differencing; this embodiment is illustrated with two-frame differencing.
Still referring to Fig. 11, the first image 471 and the second image 472 in Fig. 11 represent the image data of the position of at least part of the human body in the image plane at times t-1 and t respectively. From these image data, define the positions of the body part at t-1 and t as A and B; the two-frame difference |A - B| then yields the position data of the body part in the image plane.
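The two-frame difference |A - B| can be sketched as follows. Frames are modeled as lists of grayscale rows, and the change threshold is an illustrative value, not one from the patent.

```python
def frame_difference(frame_a, frame_b, threshold=10):
    """Two-frame difference |A - B|: subtract consecutive grayscale frames
    (lists of rows of pixel intensities) element by element and return the
    set of (row, col) positions whose absolute change exceeds `threshold`,
    i.e. the region where the body part moved."""
    moved = set()
    for r, (row_a, row_b) in enumerate(zip(frame_a, frame_b)):
        for c, (a, b) in enumerate(zip(row_a, row_b)):
            if abs(a - b) > threshold:
                moved.add((r, c))
    return moved
```

A bright blob that shifts one pixel between frames shows up as changed pixels at both its old and new locations, which is exactly the moved region the method detects.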
In one embodiment, referring to Fig. 12 and Fig. 13 together, Fig. 12 and Fig. 13 are schematic diagrams of the wearable smart device of this embodiment obtaining the position data of at least part of the human body. For ease of understanding, Fig. 12 shows only the upper-left image sensor 731 and the upper-right image sensor 732; likewise for ease of understanding, the nearby body part is illustrated by the arrow 740.
The spacing between the upper-left image sensor 731 and the upper-right image sensor 732 is a preset value; for ease of understanding, denote this spacing L, the focal length of the upper-left image sensor 731 f1, and the focal length of the upper-right image sensor 732 f2. When at least part of the human body is at a certain position, with spatial coordinates (X, Y, Z), the upper-left image sensor 731 obtains image data 741 of the body part and the upper-right image sensor 732 obtains image data 742 of the body part. By measuring the position of the body part in the two sets of image data, the coordinates (x1, y1) and (x2, y2) are obtained; in one embodiment, (x1, y1) is measured in the image obtained by the upper-left image sensor 731, and (x2, y2) is measured in the image obtained by the upper-right image sensor 732. The focal length f1 of the upper-left image sensor 731 and the focal length f2 of the upper-right image sensor 732 may be preset, or may be obtained from the displacement of the autofocus mechanism.
From the above data, the spatial coordinates (X, Y, Z) of at least part of the human body can be obtained, wherein:
Based on the above calculation, the spatial coordinates (X, Y, Z) of at least part of the human body are obtained. By presetting, the way the position changes over time can also be obtained: for example, moving a finger once along the Z direction within 3 seconds is a click, moving a finger twice along the Z direction within 3 seconds is a double-click, and moving a finger along the X direction within 2 seconds is a drag.
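The patent's triangulation formulas are given in its figures and are not reproduced here. As a hedged sketch, the standard parallel-stereo (pinhole) result recovers (X, Y, Z) from (x1, y1), (x2, y2), the spacing L, and the focal length, under the simplifying assumptions that the two sensors are rectified, f1 = f2 = f, and image coordinates are measured from each sensor's optical axis in the same units as f:

```python
def triangulate(p_left, p_right, baseline_l, focal_f):
    """Standard parallel-stereo triangulation (a sketch, not the patent's
    exact formulas). p_left = (x1, y1) from the upper-left sensor,
    p_right = (x2, y2) from the upper-right sensor, baseline_l = L,
    focal_f = f (assuming f1 = f2 = f)."""
    (x1, y1), (x2, y2) = p_left, p_right
    disparity = x1 - x2              # horizontal offset between the views
    if disparity == 0:
        raise ValueError("zero disparity: point at infinity")
    z = focal_f * baseline_l / disparity   # depth from similar triangles
    x = z * x1 / focal_f                   # back-project through left view
    y = z * y1 / focal_f
    return (x, y, z)
```

Nearer body parts produce larger disparity and therefore smaller Z, which is why the finger's motion along the Z direction is recoverable from the two image streams.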
It should be noted that the analysis above approximates the body part as a single element; that is, what is obtained is the spatial coordinate of the body part's center of gravity. The body part may also be located from the difference between the brightness of human skin and that of the environment, combined with erosion thinning, shape-center, and projection methods; and the above position change patterns may be adapted to personal usage habits by prior calibration and by built-in software correction.
It should also be noted that in other embodiments, when the image data of the body part obtained by the upper-left image sensor and the upper-right image sensor are inverted images, the inverted images must be converted into erect images and the coordinates obtained from the erect images.
In other embodiments, the position of at least part of the human body and its change over time may also be determined by the moving-body capture method of the image sensor. Those skilled in the art may also determine the position and action of the body part according to the image sensor actually chosen, such as a CCD or CIS. This is noted here specifically and should not unduly limit the scope of protection of the invention.
Referring to Fig. 14, after the image sensors obtain the position of at least part of the human body and the way it changes over time, convert the change pattern into a corresponding operating instruction, and convert the position into position data, the central data center 440, which has the user's pre-stored data built in, obtains the operating instruction and position data, calculates adjustment data for the graphical interface from the position data, and according to the adjustment data controls the micro-projector 410 and the spectroscope 420 to adjust the imaging of the output graphical interface in the human eye, so that the imaging matches the position of the user's body part. In this embodiment, the body part is a fist, by way of example.
Still referring to Fig. 14, after the image sensors obtain the position of at least part of the human body and the way it changes over time, convert the change pattern into a corresponding operating instruction, and convert the position into position data, the central data center 440, which has the user's pre-stored data built in, obtains the operating instruction and position data, calculates adjustment data for the graphical interface from the position data, corrects the position sensor according to the adjustment data, and adjusts the imaging of the output graphical interface in the human eye, so that the imaging matches the position of the user's body part.
As an example, the micro-projector 410 first projects a target pattern, such as a cross-star pattern, as a virtual image in the user's eye. The user then clicks on the cross-star pattern with a finger; the position sensor recognizes the current finger position and performs a one-to-one calibration against the position of the target pattern of the micro-projector 410. Taking 2-D coordinates as an example, the coordinate of the target pattern is (0, 0) and the position sensor recognizes the current finger coordinate as (5, 7); the central data center 440 takes the current finger coordinate (5, 7) transmitted by the position sensor and corrects the data, mapping the current finger coordinate (5, 7) to (0, 0).
At the same time, from the user's pre-stored data built into the central data center 440 and the direction of motion, distance, and speed of the body part obtained by the image sensors, it can be determined whether the user clicked, double-clicked, or slid, and according to the user's pre-stored data built into the central data center 440 the corresponding select, confirm, move, or unlock operation is performed.
It should also be noted that in other embodiments (still referring to Figure 14), the position sensor 430 (an image sensor in the present embodiment) captures the way the position of the fist 484 changes with time, obtaining a third image 481 at time t-1 and a fourth image 482 at time t. Following the computing mode of the preceding embodiments, the position and movement locus of the fist are converted into the operational instruction and position data. After receiving the operational instruction and position data, the central data center 440 controls the micro-projector 410 and the spectroscope 420 to adjust the output graphic interface and forms a virtual image 483 of the fist 484 in the human eye, so that the user has a better experience when operating the graphic interface.
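Deriving a movement locus from the two captured frames reduces to differencing the fist positions and dividing by the frame interval. A hypothetical sketch, with illustrative names and values:

```python
# Hypothetical sketch of deriving motion from the fist positions found in
# two consecutive frames (at t-1 and t), as described for images 481/482.

def motion_between_frames(p_prev, p_curr, dt):
    """Direction vector, distance and speed between two positions."""
    dx, dy = p_curr[0] - p_prev[0], p_curr[1] - p_prev[1]
    distance = (dx * dx + dy * dy) ** 0.5
    return (dx, dy), distance, distance / dt

# Fist at (10, 10) in frame t-1 and (13, 14) in frame t, 0.5 s apart.
direction, dist, speed = motion_between_frames((10, 10), (13, 14), 0.5)
print(direction, dist, speed)  # (3, 4) 5.0 10.0
```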
The present invention also provides an interactive method of a wearable smart device according to an embodiment; referring to Figure 15, it includes the following steps:
S101: providing a wearable smart device, the wearable smart device including: a device framework; a micro-projector arranged on the device framework, adapted to project a graphic interface onto a spectroscope; the spectroscope arranged on the device framework, adapted to receive the projected graphic interface and form the graphic interface as a virtual image in the human eye; a retinal location sensing unit arranged on the device framework, adapted to sense the position of the eyes and the way the position changes with time, convert the way the position changes with time into a corresponding operational instruction, and convert the position into position data; and a central data center arranged on the device framework, adapted at least to receive the position data and the operational instruction and perform the corresponding operation;
S102: providing readable matter, the readable matter having an electronic tag;
S103: when the position of the eyes matches the location of the electronic tag, the central data center performs the operation matching the way the position changes with time.
Specifically, for the associated description of the wearable smart device, refer to the corresponding description of the preceding embodiments, which is not repeated here.
The readable matter is a book, a newspaper or an e-book.
The electronic tag is a two-dimensional code or an image code. Preferably, the electronic tag is a two-dimensional code: a data symbol that records information using specific geometric figures distributed in a plane (in two dimensions) according to certain rules as alternating black and white graphics. The two-dimensional code is a kind of DOI (Digital Object Unique Identifier). Specifically, the two-dimensional-code electronic tag may be a stacked/row-type two-dimensional barcode or a matrix two-dimensional barcode.
The electronic tag is arranged beside a keyword of the readable matter, such as a landscape keyword or an article keyword.
When the retinal location sensing unit senses that the position of the eyes corresponds to the position of the electronic tag and the way the eye position changes with time is a click, the retinal location sensing unit converts the way the position changes with time into the corresponding operational instruction and converts the position into position data.
The central data center retrieves related data according to the keyword, and controls the micro-projector and the spectroscope to project an image of the related data as a virtual image in the human eye.
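The keyword-to-content retrieval performed by the central data center can be sketched as a lookup against bound content. The dictionary below stands in for the local or remote database; all names and entries are illustrative assumptions:

```python
# Minimal sketch, under assumed data, of the lookup the central data
# center performs once the 2D code beside a keyword has been decoded.
# CONTENT_DB stands in for the local or remote database.

CONTENT_DB = {
    "Shanghai": "Shanghai blue sea and golden sand beach scenery picture",
    "Dalian": "Dalian coastal scenery picture",
}

def retrieve_related_data(keyword):
    """Look up the content bound to the decoded keyword."""
    return CONTENT_DB.get(keyword, "no bound content")

print(retrieve_related_data("Shanghai"))
```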
It should be noted that if the position sensor of the wearable smart device is an acoustic wave reflector, an additional image sensor needs to be provided to recognize the electronic tag; if the position sensor of the wearable smart device is an image sensor, the image sensor of the position sensor can be used directly to recognize the electronic tag.
The interactive method of the wearable smart device of the present invention is elaborated below with reference to a specific embodiment; please refer to Figure 16.
A wearable smart device 500 is provided; specifically, refer to the wearable smart device described correspondingly in the preceding embodiments, which is not repeated here.
Readable matter 501 is provided, the readable matter having an electronic tag 502, where the readable matter 501 may specifically be a newspaper, a book or an e-book. In the present embodiment, the readable matter 501 being a book is taken as an example for exemplary illustration.
The electronic tag 502 is a two-dimensional code. It should be noted that the electronic tag 502 is arranged beside a keyword, to facilitate the user's reading while using the wearable smart device 500.
The electronic tag 502 is set as a two-dimensional code of a different encoding according to the different keyword. In the present embodiment, landscape keywords are taken as an example for exemplary illustration: for instance, the keyword "Dalian" is set to a two-dimensional code of one encoding, the keyword "Shanghai" to a two-dimensional code of another encoding, and the keyword "Cape Cod" to a two-dimensional code of yet another encoding.
The specific two-dimensional-code encoding may follow an actual encoding rule, such as row-type encoding or matrix-type encoding.
The user wears the wearable smart device 500 to read the readable matter 501. The user first corrects the eye position; for the specific correction method, refer to the corresponding description of the preceding embodiments.
After the correction is finished, the micro-projector projects a cursor 503 as a virtual image in the eyes. The position of the cursor 503 corresponds to the retinal location of the eyes 504, and the movement of the cursor 503 corresponds to the movement of the retina of the eyes 504. When the user's eyes read to a keyword in the readable matter 501, the position of the cursor 503 coincides with the position of the electronic tag 502; if the way the user's eye position changes with time is a click, the central data center retrieves related data according to the keyword and controls the micro-projector and the spectroscope to project an image of the related data as a virtual image in the human eye.
As an example, the way the user's eye position changes with time is blinking 3 times within 1.5 seconds, and the operation corresponding to this blinking is a click. When the user's eyes read to a keyword in the readable matter 501, the position of the cursor 503 coincides with the position of the electronic tag 502; the user's eyes blink 3 times within 1.5 seconds, whereupon the retinal location sensing unit senses the 3 blinks within 1.5 seconds and converts the eye position and the 3 blinks within 1.5 seconds into position data and a click command. The central data center obtains the two-dimensional-code encoding via the image sensor; as an embodiment, the encoded content of the two-dimensional code is bound to a corresponding scenery picture.
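Recognizing "3 blinks within 1.5 seconds" as a click command can be sketched from timestamped blink events with a sliding window. The window and count are the example values from the text; the function name is an assumption:

```python
# Illustrative sketch of detecting the "3 blinks within 1.5 seconds"
# click gesture from a sorted list of blink timestamps (in seconds).

def is_click(blink_times, count=3, window_s=1.5):
    """True if any `count` consecutive blinks fall inside `window_s`."""
    for i in range(len(blink_times) - count + 1):
        if blink_times[i + count - 1] - blink_times[i] <= window_s:
            return True
    return False

print(is_click([0.0, 0.6, 1.2]))  # three blinks in 1.2 s -> True
print(is_click([0.0, 1.0, 2.0]))  # spread over 2.0 s -> False
```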
Referring to Figure 17, taking the keyword "Shanghai" as an example: when the position of the cursor 503 is located at the keyword "Shanghai" and the user's eyes blink 3 times within 1.5 seconds, the central data center obtains the two-dimensional-code encoding via the image sensor, obtains a scenery picture of Shanghai according to the encoded content, such as a "Shanghai blue sea and golden sand beach scenery picture", and controls the micro-projector to form the virtual image in the human eye.
It should be noted that when the cursor position does not coincide with the position of the electronic tag 502 (refer to Figure 14), even if the user's eyes blink 3 times within 1.5 seconds, no determine instruction is performed and no corresponding virtual image is formed in the human eye.
As an embodiment, when the position of the cursor does not coincide with the position of the electronic tag 502, if the user's eyes blink 3 times within 1.5 seconds, the cursor may be moved to the nearest keyword, the determine instruction performed, and the corresponding virtual image formed in the human eye.
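Snapping the cursor to the nearest keyword tag when a click lands off-tag is a nearest-neighbour search over the known tag positions. A hypothetical sketch, with illustrative coordinates:

```python
# Hypothetical sketch of moving the cursor to the nearest electronic tag
# when a click occurs where no tag is, as in the embodiment above.

def nearest_tag(cursor, tags):
    """Return the tag position closest to the cursor (Euclidean)."""
    return min(tags, key=lambda t: (t[0] - cursor[0]) ** 2
                                   + (t[1] - cursor[1]) ** 2)

tags = [(10, 10), (40, 15), (80, 60)]
print(nearest_tag((35, 20), tags))  # (40, 15)
```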
Referring to Figures 18 and 19, as another embodiment, taking the keyword "Liangzhu civilization" as an example: when the cursor position is located at the keyword "Liangzhu civilization" and the user's eyes blink 3 times within 1.5 seconds, the central data center obtains the two-dimensional-code encoding of "Liangzhu civilization" via the image sensor, obtains a "virtual tour of the Liangzhu civilization" animation according to the encoded content, and forms the "virtual tour of the Liangzhu civilization" animation as a virtual image in the human eye through the micro-projector. The virtual image of the "virtual tour of the Liangzhu civilization" animation has a secondary cursor 601 (refer to Figure 17); for example, the secondary cursor 601 is a direction key. When the user's eye position is located at the secondary cursor 601 and the user performs the eye-position change-with-time mode corresponding to selection, for example blinking 2 times within 0.5 seconds, the "virtual tour of the Liangzhu civilization" animation virtual image is displayed according to the selected direction.
Referring to Figure 20, as another embodiment, taking the keyword "Liangzhu civilization" as an example: when the cursor position is located at the keyword "Liangzhu civilization" and the user's eyes blink 2 times within 0.5 seconds, the central data center obtains the two-dimensional-code encoding of "Liangzhu civilization" via the image sensor, obtains a "virtual tour of the Liangzhu civilization" animation according to the encoded content, and forms the animation as a virtual image in the human eye through the micro-projector. The virtual image of the "virtual tour of the Liangzhu civilization" animation has a secondary cursor, for example a direction key; the user may select the secondary cursor by finger. For the position of the finger and the way the finger position changes with time, refer to the corresponding description of the preceding embodiments.
In addition, it should also be noted that the way the eye position changes with time or the way the finger position changes with time can also be matched with operations that seek network support, such as obtaining an image or video, sending an image or video, or making a supported call. When the way the eye position changes with time or the way the finger position changes with time corresponds to one of the aforesaid operations, the central data center performs the corresponding operation according to the data.
Although the present disclosure is as above, the present invention is not limited thereto. Any person skilled in the art may make various changes or modifications without departing from the spirit and scope of the present invention; therefore, the protection scope of the present invention shall be defined by the scope of the claims.
Claims (24)
1. An interactive method of a wearable smart device, characterised by including:
providing a wearable smart device, the wearable smart device including: a device framework; a micro-projector arranged on the device framework, adapted to project a graphic interface onto a spectroscope; the spectroscope arranged on the device framework, adapted to receive the projected graphic interface and form the graphic interface as a virtual image in the human eye; a retinal location sensing unit arranged on the device framework, adapted to sense the position of the eyes and the way the position changes with time, convert the way the position changes with time into a corresponding operational instruction, and convert the position into position data; a position sensor arranged at the front end of the device framework, adapted to sense the position of at least part of a human body or the way the position changes with time, convert the way the position changes with time into a corresponding operational instruction, and convert the position into position data; and a central data center arranged on the device framework, adapted at least to receive the position data and the operational instruction and perform the corresponding operation;
providing readable matter, the readable matter having an electronic tag;
when the position of the eyes matches the location of the electronic tag, the central data center performing the operation matching the way the position changes with time.
2. The interactive method as claimed in claim 1, characterised in that the retinal location sensing unit includes: an infrared light source, adapted to emit infrared light onto the retinas of the eyes; an infrared image sensor, adapted to receive the infrared rays reflected by the retina, image the retina according to the reflected infrared rays, and determine the position of the eyes and the way the position changes with time according to the image and the way the image changes with time; and a convex lens arranged before the infrared image sensor in the light path, the convex lens being configured to move along the light path and adapted to converge the infrared rays reflected by the retina.
3. The interactive method as claimed in claim 1, characterised in that the wearable smart device further includes: a light path system, adapted to transmit the infrared light emitted by the infrared light source to the retina of the eyes and to transmit the infrared rays reflected by the retina to the infrared image sensor.
4. The interactive method as claimed in claim 3, characterised in that the micro-projector shares part of the light path system with the retinal location sensing unit.
5. The interactive method as claimed in claim 4, characterised in that the light path system includes: a first reflector, an infrared filter, a half-reflecting half-transmitting mirror, and the spectroscope; wherein the first reflector is adapted to reflect the infrared light emitted by the infrared light source to the infrared filter; the infrared filter is adapted to filter the infrared light reflected by the first reflector and the infrared light reflected by the half-reflecting half-transmitting mirror; the half-reflecting half-transmitting mirror is adapted to reflect the infrared light filtered by the infrared filter and to transmit the graphic interface projected by the micro-projector; and the spectroscope is further adapted to reflect the infrared light reflected by the half-reflecting half-transmitting mirror into the eyes.
6. The interactive method as claimed in claim 2, characterised in that the position to which the convex lens moves along the light path corresponds to the diopter of the eye, so that the infrared image sensor and the convex lens form a clear image from the infrared rays reflected by the retina.
7. The interactive method as claimed in claim 6, characterised in that the central data center is adapted to receive the position data of the convex lens moving along the light path and, according to the position data, control the micro-projector to form a clear graphic interface as a virtual image in the eyes.
8. The interactive method as claimed in claim 1, characterised in that the micro-projector includes:
a low-light source, adapted to provide a light source for the micro-projector;
a picture filter, adapted to receive the light output by the low-light source and output an image on demand onto the micro projection lens;
the micro projection lens, configured to move along the optical system axis of the micro-projector so as to output the image according to the focal length variations of the user;
wherein, by configuring the micro-projector and the spectroscope to control the density of rays entering the eyes, the wearable smart device works in the following two modes:
an overlay mode: the virtual image of the graphic interface formed in the eyes is overlaid on the actual graphics observed visually;
a full virtual image mode: the eyes receive only the virtual image of the graphic interface formed in the eyes.
9. The interactive method as claimed in claim 1, characterised in that the ways the eye position changes at least include: saccade, fixation, smooth pursuit, and blinking.
10. The interactive method as claimed in claim 1, characterised in that the operational instruction at least includes: selection, determination, movement or unlock.
11. The interactive method as claimed in claim 1, characterised in that the at least part of a human body includes: a hand, a finger, a fist or an arm.
12. The interactive method as claimed in claim 1, characterised in that the at least part of a human body includes: both hands or multiple fingers.
13. The interactive method as claimed in claim 1, characterised in that the way the position of the part of the human body changes with time at least includes: click, double-click or slide.
14. The interactive method as claimed in claim 1, characterised in that the device framework is configured with lenses and is worn before the user's eyes.
15. The interactive method as claimed in claim 1, characterised in that the wearable smart device further includes a communication module, the communication module being adapted to exchange information with a mobile phone, a landline telephone or a computer through Wi-Fi, Bluetooth, HSCSD, GPRS, WAP, EDGE, EPOC, WCDMA, CDMA2000 or TD-SCDMA.
16. The interactive method as claimed in claim 1, characterised in that the wearable smart device further includes a communication module, the communication module being adapted to exchange information with a tablet personal computer through Wi-Fi, Bluetooth, HSCSD, GPRS, WAP, EDGE, EPOC, WCDMA, CDMA2000 or TD-SCDMA.
17. The interactive method as claimed in claim 1, characterised in that the wearable smart device further includes a local database, or the central data center is adapted to exchange data with a remote database.
18. The interactive method as claimed in claim 17, characterised in that the data support of the local database or of the remote database is called based on the Wi-Fi, Bluetooth, HSCSD, GPRS, WAP, EDGE, EPOC, WCDMA, CDMA2000 or TD-SCDMA mode.
19. The interactive method as claimed in claim 1, characterised in that the electronic tag is a two-dimensional code or an image code.
20. The interactive method as claimed in claim 19, characterised in that the electronic tag is arranged beside a keyword of the readable matter.
21. The interactive method as claimed in claim 20, characterised in that the keyword is a landscape keyword or an article keyword.
22. The interactive method as claimed in claim 1, characterised in that the readable matter is a book, a newspaper or an e-book.
23. The interactive method as claimed in claim 1, characterised in that the operation at least includes: the central data center retrieving related data according to the keyword, and controlling the micro-projector and the spectroscope to project an image of the related data as a virtual image in the human eye.
24. The interactive method as claimed in claim 1, characterised in that the operation at least includes: the micro-projector projecting a cursor as a virtual image in the eyes, the cursor position corresponding to the eye retinal position and the cursor movement corresponding to the eye retinal movement; when the cursor position coincides with the electronic tag position, the central data center retrieving related data according to the keyword and controlling the micro-projector and the spectroscope to project an image of the related data as a virtual image in the human eye.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310739674.9A CN104749777B (en) | 2013-12-27 | 2013-12-27 | The interactive approach of wearable smart machine |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104749777A CN104749777A (en) | 2015-07-01 |
CN104749777B true CN104749777B (en) | 2017-09-26 |
Family
ID=53589719
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310739674.9A Active CN104749777B (en) | 2013-12-27 | 2013-12-27 | The interactive approach of wearable smart machine |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104749777B (en) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102433833B1 (en) | 2015-09-23 | 2022-08-17 | 매직 립, 인코포레이티드 | Eye Imaging with Off-Axis Imager |
CN105204719A (en) * | 2015-10-12 | 2015-12-30 | 上海创功通讯技术有限公司 | Interface laser projection system and method for mobile terminal |
CN105867600A (en) * | 2015-11-06 | 2016-08-17 | 乐视移动智能信息技术(北京)有限公司 | Interaction method and device |
CN105892634A (en) * | 2015-11-18 | 2016-08-24 | 乐视致新电子科技(天津)有限公司 | Anti-dizziness method and virtual reality display output device |
CN105825112A (en) * | 2016-03-18 | 2016-08-03 | 北京奇虎科技有限公司 | Mobile terminal unlocking method and device |
EP3449337B1 (en) * | 2016-04-29 | 2023-01-04 | Tobii AB | Eye-tracking enabled wearable devices |
IL252582A0 (en) * | 2017-05-29 | 2017-08-31 | Eyeway Vision Ltd | A method and system for registering between external scenery and a virtual image |
CN107065198B (en) * | 2017-06-21 | 2019-11-26 | 常州快来信息科技有限公司 | Wear the vision optimization method of display equipment |
CN107193127A (en) * | 2017-06-27 | 2017-09-22 | 北京数科技有限公司 | A kind of imaging method and Wearable |
CN108536285B (en) * | 2018-03-15 | 2021-05-14 | 中国地质大学(武汉) | Mouse interaction method and system based on eye movement recognition and control |
CN108873333A (en) * | 2018-05-24 | 2018-11-23 | 成都理想境界科技有限公司 | A kind of display module apparatus for adjusting position and display equipment |
CN109271824B (en) * | 2018-08-31 | 2021-07-09 | 出门问问信息科技有限公司 | Method and device for identifying two-dimensional code image |
CN112717286A (en) * | 2021-01-14 | 2021-04-30 | 复旦大学 | Transcranial ultrasonic stimulation system based on android system |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6023372A (en) * | 1997-10-30 | 2000-02-08 | The Microoptical Corporation | Light weight, compact remountable electronic display device for eyeglasses or other head-borne eyewear frames |
US8390533B2 (en) * | 2007-11-20 | 2013-03-05 | Panasonic Corporation | Beam-scan display apparatus, display method, and vehicle |
AU2011220382A1 (en) * | 2010-02-28 | 2012-10-18 | Microsoft Corporation | Local advertising content on an interactive head-mounted eyepiece |
CN102445768B (en) * | 2010-10-01 | 2014-10-08 | 奥林巴斯株式会社 | Device-mounting support member |
CN202533948U (en) * | 2012-04-09 | 2012-11-14 | 深圳市元创兴科技有限公司 | Eye tracker |
- 2013-12-27 CN CN201310739674.9A patent/CN104749777B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN104749777A (en) | 2015-07-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104749777B (en) | The interactive approach of wearable smart machine | |
CN105446474B (en) | Wearable smart machine and its method of interaction, wearable smart machine system | |
US11656677B2 (en) | Planar waveguide apparatus with diffraction element(s) and system employing same | |
CN104750234B (en) | The interactive approach of wearable smart machine and wearable smart machine | |
US9612403B2 (en) | Planar waveguide apparatus with diffraction element(s) and system employing same | |
JP6786792B2 (en) | Information processing device, display device, information processing method, and program | |
KR102300390B1 (en) | Wearable food nutrition feedback system | |
CN104750230A (en) | Wearable intelligent device, interactive method of wearable intelligent device and wearable intelligent device system | |
US12039096B2 (en) | Glasses-type wearable device providing augmented reality guide and method for controlling the same | |
CN104750229B (en) | The exchange method and wearing smart machine system of wearable smart machine | |
KR20250027810A (en) | Gesture detection via image capture of subcutaneous tissue from a wrist-pointing camera system | |
CN115185365A (en) | Wireless control eye control system and control method thereof | |
Wang et al. | Wink Lens Smart Glasses in Communication Engineering: Catalyst for Metaverse and Future Growth Point |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||