CN103761460B - Method for authenticating users of display equipment - Google Patents
Method for authenticating users of display equipment
- Publication number
- CN103761460B CN103761460B CN201310757226.1A CN201310757226A CN103761460B CN 103761460 B CN103761460 B CN 103761460B CN 201310757226 A CN201310757226 A CN 201310757226A CN 103761460 B CN103761460 B CN 103761460B
- Authority
- CN
- China
- Prior art keywords
- user
- augmented reality
- certification
- feature
- input
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
Abstract
The invention relates to a method for authenticating a user of a display device, and discloses various embodiments for such authentication. For example, the method in one embodiment includes displaying one or more virtual images on the display device, each virtual image comprising a set of augmented reality features; identifying one or more movements of the user from data received from sensors of the display device; and comparing the identified movements to a predetermined set of authentication information for the user that links user authentication to selection of the augmented reality features in a predetermined order. If the identified movements indicate that the user selected the augmented reality features in the predetermined order, the user is authenticated; if the identified movements indicate that the user did not select the augmented reality features in the predetermined order, the user is not authenticated.
Description
Technical field
The present invention relates to user authentication techniques and, more particularly, to authenticating a user of a display device.
Background
To access a limited feature, device, program, application, data, website, or the like, a computing device may prompt a user to enter a password to authenticate that the user has the authority to access the limited feature. Such a password typically comprises a series of letters and/or digits that the user may enter on a keypad. Alternatively, authentication may be performed via biometric data, such as a fingerprint scan, a retina scan, or the like.
Summary of the invention
Various embodiments for authenticating a user of a display device are disclosed. For example, one disclosed embodiment provides a method comprising displaying one or more virtual images on the display device, wherein the one or more virtual images include a set of augmented reality features. The method further comprises identifying one or more movements of the user from data received from a sensor of the display device, and comparing the identified movements of the user to a predetermined set of authentication information for the user that links user authentication to selection of the augmented reality features in a predefined order. If the identified movements indicate that the user selected the augmented reality features in the predefined order, the user is authenticated; if the identified movements indicate that the user did not select the augmented reality features in the predefined order, the user is not authenticated.
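The order-matching test at the heart of this method can be sketched as follows. The function name and data shapes are illustrative assumptions for discussion, not the patent's actual implementation.

```python
# Minimal sketch of the comparison described above: the user is
# authenticated only if the identified feature selections match the
# predetermined order exactly. Names and data shapes are assumptions.

def authenticate(identified_selections, predetermined_order):
    """Return True only if the augmented reality features were
    selected exactly in the predetermined order."""
    return list(identified_selections) == list(predetermined_order)

# Example: the stored pass code is bowl -> moon -> dog.
pass_code = ["bowl", "moon", "dog"]
print(authenticate(["bowl", "moon", "dog"], pass_code))  # True
print(authenticate(["moon", "bowl", "dog"], pass_code))  # False
```

The same features selected in a different order, or an incomplete selection, fail the comparison.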
This Summary is provided to introduce, in simplified form, a selection of concepts that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
Brief description of the drawings
Fig. 1 shows a schematic depiction of an example use environment for an embodiment of a display device according to the disclosure.
Fig. 2 schematically illustrates two example embodiments of augmented reality feature groups.
Fig. 3 schematically illustrates an example embodiment of a display device.
Fig. 4 is a block diagram of the display device of Fig. 3.
Fig. 5 is a flow chart illustrating a method of authenticating a user according to an embodiment of the disclosure.
Fig. 6 is a flow chart illustrating a method of authenticating a user according to another embodiment of the disclosure.
Fig. 7 schematically shows an example computing system.
Detailed description
A head-mounted display (HMD) device may be used to present an augmented reality view of a background scene to a user. Additionally, an HMD device may allow the user to access various programs, data, and other such resources. To authorize a user to access limited resources, or to ensure that the user has the authority to use the HMD, the user may be authenticated via input received through one or more input mechanisms of the HMD device in response to augmented reality imagery displayed by the HMD device.
Before discussing these embodiments in detail, a non-limiting use scenario is described with reference to Fig. 1. More specifically, Fig. 1 illustrates an example embodiment of a use environment 100 for an embodiment of a head-mounted display device 104, where the environment takes the form of a living room. A user 106 views the room through the head-mounted display device 104. Fig. 1 also depicts a user field of view 102 comprising the portion of the environment viewable through the display device 104, and which therefore may be augmented with imagery displayed via the display device 104. In some embodiments, the user field of view 102 may be substantially coextensive with the user's actual field of vision, while in other embodiments the user field of view 102 may occupy a smaller portion of the user's actual field of vision.
As described in more detail below, the display device 104 may include one or more outward-facing image sensors (e.g., two-dimensional cameras and/or depth cameras) configured to acquire image data (e.g., color/grayscale images, depth images/point cloud images, etc.) representing the use environment 100 as the user navigates the environment. This image data may be used to obtain information regarding the layout of the environment and the objects contained therein, such as a bookcase 108, a door 110, a window 112, and a sofa 114.
The display device 104 may overlay one or more virtual images over real objects in the user field of view 102. Example virtual objects depicted in Fig. 1 include a bowl 116, a moon 118, and a dog 120. The virtual objects may be displayed in three dimensions such that they appear to the user 106 at different depths within the user field of view 102. The virtual objects displayed by the display device 104 may be visible only to the user 106, and may move as the user 106 moves, or may remain in a set location regardless of how the user 106 moves.
According to embodiments disclosed herein, the augmented reality imagery displayed by the display device 104, possibly in combination with information regarding the use environment 100, may be used to authenticate the user 106 on the display device 104. For example, the above-described virtual objects may be displayed to the user 106 to provide a set of augmented reality features that the user 106 may select in a predetermined order. If the user 106 selects the augmented reality features in the predetermined order, the user may be authenticated, thereby allowing the user to access limited resources. The augmented reality features may include any suitable features, including but not limited to virtual object features, three-dimensional holograms, two-dimensional holograms, sounds, virtual movements, and vibrations, and may also include features of real objects within the user field of view. For example, in the example of Fig. 1, the augmented reality features displayed to the user 106 include the virtual bowl 116 shown on a real shelf of the bookcase 108 observable by the user, the virtual moon 118, and the virtual dog 120 having a head 122 and a tail 124.
The user 106 may select the augmented reality features in any suitable manner detectable by the display device 104. For example, the user 106 may select an augmented reality feature by gazing at the selected feature, where the gaze direction may be detected by one or more eye tracking sensors. In another example, the user 106 may select an augmented reality feature by moving his or her head, hand, entire body, etc. to, or toward, the augmented reality feature, as detected by inertial motion sensors and/or image sensors. The user also may issue voice commands detected by one or more microphones. Further details regarding detecting user input commands via the sensors of the display device 104 are discussed below with reference to Figs. 3-4.
Selection of an augmented reality feature may be determined by detecting a movement or command that the user 106 performs in association with that augmented reality feature. The movement may comprise the user simply looking at or moving toward the augmented reality feature. Additionally, in some embodiments, the movement may comprise the user looking at the augmented reality feature for a predetermined time, performing a particular movement or issuing a particular voice command while looking at the augmented reality feature, and/or any other suitable mechanism for indicating selection of an augmented reality feature. Further, in some embodiments, a particular input may be linked to a particular location, e.g., a particular geographic location of the HMD device, a particular direction, etc.
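The dwell-based selection mechanism described above can be sketched as registering a selection only once the gaze has remained on the same feature for a predetermined time. The sample format and function name below are illustrative assumptions, not the patent's implementation.

```python
# Sketch of dwell-time selection: a feature is "selected" once the
# gaze stays on it for at least `dwell_time` seconds. Assumes a
# stream of (timestamp, feature) gaze samples; purely illustrative.

def dwell_selections(gaze_samples, dwell_time):
    """Emit each feature the gaze rests on for >= dwell_time seconds."""
    selections = []
    current, since, selected = None, None, False
    for t, feature in gaze_samples:
        if feature != current:
            # Gaze moved to a new feature: restart the dwell timer.
            current, since, selected = feature, t, False
        elif not selected and current is not None and t - since >= dwell_time:
            selections.append(current)
            selected = True  # report each dwell only once
    return selections

samples = [(0.0, "bowl"), (0.4, "bowl"), (1.1, "bowl"),
           (1.5, "moon"), (1.8, "moon"), (2.6, "moon")]
print(dwell_selections(samples, 1.0))  # ['bowl', 'moon']
```

The resulting selection sequence could then feed the order comparison described in the Summary.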
The predetermined order in which the user selects the augmented reality features may be determined in any suitable manner. For example, in an initial authentication pass code setup session, the user may determine which features to select and the order in which to select them during authentication. In another example, the display device 104 may determine which augmented reality features are to be selected and in what order, and may notify the user of the order in which to select the features.
In the example depicted in Fig. 1, the user 106 first selects the virtual bowl 116, followed by the virtual moon 118, followed by the head 122 of the dog 120. To determine whether the user has selected the correct augmented reality features in the predetermined order, the user's movements (including eye and body movements), and potentially other information (e.g., the user's voice commands, location, and/or direction), may be detected via the sensors of the display device and compared to authentication information that links user authentication to selection of the augmented reality features in the particular order.
In some embodiments, the augmented reality features may be displayed to the user in the same or a similar manner in each authentication session. In other embodiments, the augmented reality features may be displayed differently between authentication sessions. For example, in some embodiments, an entirely different set of augmented reality features comprising the features to be selected may be displayed in different authentication sessions. In another example, the same set of augmented reality features may be displayed, but at different locations, in different authentication sessions. In either case, displaying the augmented reality features differently during different authentication sessions may help prevent outside observers from detecting the authentication motion pattern performed by the user, and thus may help protect the confidentiality of the selected augmented reality feature sequence.
Additionally, the number and/or appearance of the augmented reality features displayed to the user may vary depending on user location or other parameters. For example, if it is determined (e.g., via GPS data, image data, and/or other display device sensor data) that the user is in his or her home or another private location, a simpler image pass code (e.g., with fewer total augmented reality features) may be displayed, and/or a lower confidence level may be permitted regarding whether a user input matches the authentication data, compared to when the user is in a more public place.
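Per-session variation of feature placement, as described above, can be sketched by shuffling a fixed set of positions for each authentication session. The feature names, coordinates, and seeding scheme are illustrative assumptions only.

```python
# Sketch of per-session layout variation: the same pass-code features
# are shown, but their positions are shuffled each session, so the
# user's motions differ between sessions. Illustrative only.
import random

def layout_for_session(features, positions, seed):
    """Assign each feature a position, shuffled per session."""
    rng = random.Random(seed)  # per-session randomness source
    placed = positions[:]
    rng.shuffle(placed)
    return dict(zip(features, placed))

features = ["bowl", "moon", "dog"]
positions = [(0, 0), (1, 2), (3, 1)]
session1 = layout_for_session(features, positions, seed=1)
session2 = layout_for_session(features, positions, seed=2)
# Same features in both sessions, but generally at different positions,
# so an observer sees a different motion pattern each time.
```

An observer watching two sessions would thus see two different movement sequences even though the selected feature sequence is unchanged.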
Fig. 2 shows example embodiments of two sets of augmented reality features that may be displayed during two separate authentication sessions. As described above, the augmented reality features are displayed to the user via a display device (not shown in Fig. 2). A time axis 202 is also depicted to show the relative timing of the separate authentication sessions.
At time t1, a first set of augmented reality features 210 is displayed to the user. The first set of augmented reality features 210 comprises a set of letters displayed as slowly falling at various locations within the user field of view 200. As depicted in Fig. 2, the user selects a group of letters in a predetermined order to spell the word firkin. If the authentication information indicates that the user's pass code is firkin, the user is authenticated.
Next, at time t2, another authentication session begins, in which a second set of augmented reality features 220 is displayed to the user via the user field of view 200. In the second set of augmented reality features 220, the letters and/or their locations may differ from those of the first set of augmented reality features 210, but still allow selection, with a different set of movements, of the same group of letters as in the first set 210, namely firkin. Although the user selects the same combination of characters in both authentication sessions, the user's movements appear different to an outside observer.
As described above, the user authentication process may be performed by any suitable computing device, including but not limited to a display device. A display device according to the disclosure may take any suitable form, including but not limited to an HMD device such as the head-mounted display device 104 of Fig. 1. Fig. 3 shows an example of a display system 300, and Fig. 4 shows a block diagram of the display system 300.
The display system 300 comprises one or more lenses 302 forming part of a display subsystem 304, such that images may be projected onto the lenses 302 or produced by image-generating elements (e.g., transparent OLED displays) incorporated into the lenses 302. The display system 300 further comprises one or more outward-facing image sensors 306 configured to acquire images of a background scene and/or physical space being viewed by the user, and may comprise one or more microphones 308 configured to detect sounds, such as voice commands from the user. The outward-facing image sensors 306 may include one or more depth sensors and/or one or more two-dimensional image sensors.
As described above, the display system 300 may further comprise a gaze detection subsystem 310 configured to detect a gaze direction of each eye of the user. The gaze detection subsystem 310 may be configured to determine the gaze direction of each of the user's eyes in any suitable manner. For example, in the depicted embodiment, the gaze detection subsystem 310 comprises one or more glint sources 312, such as infrared light sources, configured to cause a glint of light to reflect from each eye of the user, and one or more image sensors 314, such as inward-facing sensors, configured to capture an image of each eye of the user. Changes in the glints from the user's eyes, as determined from image data gathered via the image sensors 314, may be used to determine the gaze direction. Further, a location at which a gaze line projected from the user's eyes intersects the external display may be used to determine the object at which the user is gazing (e.g., a displayed virtual object and/or a real background object). The gaze detection subsystem 310 may have any suitable number and arrangement of light sources and image sensors.
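The final step above, mapping the intersection of the gaze line with the display to a particular object, can be sketched in two dimensions as a nearest-object lookup around the gaze point. The coordinate system, tolerance, and names are illustrative assumptions, not the patent's geometry.

```python
# Sketch of gaze-to-object mapping: given the point where the gaze
# line intersects the display plane, return the nearest displayed
# object within a tolerance. Purely illustrative geometry.

def gazed_object(gaze_point, objects, max_dist=0.1):
    """Return the name of the object nearest the gaze intersection
    point, or None if nothing lies within `max_dist`.
    `objects` maps names to (x, y) display coordinates."""
    best, best_d = None, max_dist
    for name, (x, y) in objects.items():
        d = ((gaze_point[0] - x) ** 2 + (gaze_point[1] - y) ** 2) ** 0.5
        if d <= best_d:
            best, best_d = name, d
    return best

objs = {"bowl": (0.2, 0.3), "moon": (0.8, 0.7)}
print(gazed_object((0.22, 0.31), objs))  # 'bowl'
print(gazed_object((0.5, 0.5), objs))    # None (nothing within 0.1)
```

In practice the tolerance would depend on eye tracker accuracy and object size; the value 0.1 here is arbitrary.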
Display system 300 can also include additional sensor.Display system 300 can also include additional sensor.
For example, display system 300 can include global location (gps) subsystem 316 can determine the position of display system 300.This
The user of display system 300 can be allowed to use different passwords in different positions, it can allow the level of security wanted depend on
To change in device location.
The display system 300 may further include one or more motion sensors 318 to detect movements of the user's head when the user is wearing the display system 300. Motion data may be used, potentially along with eye tracking glint data and outward-facing image data, for gaze detection and for image stabilization, to help correct for blur in images from the outward-facing image sensors 306. The use of motion data may allow changes in gaze position to be tracked even if the image data from the outward-facing image sensors 306 cannot be resolved. Likewise, the motion sensors 318, as well as the microphones 308 and the gaze detection subsystem 310, also may be employed as user input devices, such that a user may interact with the display system 300 via gestures of the eye, neck, and/or head, as well as via verbal commands. It will be understood that the sensors depicted in Figs. 3 and 4 are shown for the purpose of example and are not intended to be limiting in any manner, as any other suitable sensors and/or combination of sensors may be utilized.
The display system 300 also comprises a controller 320 having a logic subsystem 322 and a data-holding subsystem 324 (also referred to as a storage system) in communication with the sensors, the gaze detection subsystem 310, and the display subsystem 304. The data-holding subsystem 324 comprises instructions stored thereon that are executable by the logic subsystem 322, for example, to receive and interpret inputs from the sensors to identify movements of the user, to compare the identified movements to authentication information to determine whether the user selected the augmented reality features in the predetermined order, to authenticate the user, and to perform other tasks.
It will be appreciated that the display devices 104 and 300 are described for the purpose of example, and thus are not meant to be limiting. It is to be understood that the display device may include additional and/or alternative sensors, cameras, microphones, input devices, output devices, etc. beyond those shown, without departing from the scope of this disclosure. Additionally, the physical configuration of a display device and its various sensors and subcomponents may take a variety of different forms without departing from the scope of this disclosure.
Fig. 5 shows a flow chart depicting an embodiment of a method 500 for authenticating a user of a display device (such as the head-mounted display device 104 described above). Briefly, to authenticate a user, the method 500 presents one or more virtual images or other augmented reality features to the user during an authentication session. Once authenticated, the user may be granted access to one or more limited devices, programs, applications, etc. The authentication session may be initiated by any input, such as the user turning on the display device, requesting permission to start a program or application, requesting the authority to view a website, etc.
At 502, the method 500 comprises displaying one or more virtual images comprising a set of augmented reality features. The set of augmented reality features may include any suitable features of virtual images and/or real objects. For example, the augmented reality features may include a specific 3D real or virtual object that the user selects, an image depicting a particular object or person (as opposed to a specific image), and/or a 3D real or virtual object having a particular attribute. Additionally, the features may include sounds or other augmented reality environmental factors. The augmented reality features may be presented in any suitable manner. For example, the augmented reality features may be displayed in a predetermined order, a random order, an algorithmically varied order, etc., between different authentication sessions.
At 504, the method 500 comprises identifying one or more movements of the user from data received from sensors. The sensor data may be received from any suitable sensors, including but not limited to inertial motion sensors 506 and/or an eye tracking sensor system 508. Additionally, sensors external to the display device may also be used to identify user movements. For example, the user may draw a picture on the screen of an external device (e.g., a tablet computer), and the tablet may detect the movement of the user's hand and send the movement to the see-through display device. In another example, a remote service, e.g., one accessible via a web server, may be used as an input device. For example, the display device may query the status of the web service, and if the web service is inaccessible or returns a "not recognized" message, the user may not be authenticated. The identified movements may be used to determine which augmented reality features the user selected and in what order. Additionally, when the user inputs a movement selecting an augmented reality feature, if the identified movement matches a movement associated with the augmented reality feature, then the method may comprise, at 509, outputting an indication that the user selected the augmented reality feature.
At 510, the method 500 comprises comparing the identified movements to authentication information that links user authentication to a predetermined order of the augmented reality features. The authentication information may be stored locally on the display device, or may be stored remotely and accessible via a remote service.
At 512, the method 500 comprises determining whether the user selected the augmented reality features in the predetermined order. As described above, the identified user movements may be used to determine which augmented reality features the user selected and in what order. In some embodiments, as shown at 514, a confidence score may be assigned to the identified movements based on a confidence level that the user actually selected the augmented reality features in the predetermined order. The confidence score may reflect the certainty with which the identified movements match expected movements, and may be a function of various other factors, including but not limited to the ability of the sensors to detect movement.
To determine whether to authenticate the user, the confidence score may be compared to a threshold confidence score value. If the confidence score is greater than the threshold and the augmented reality features were selected in the correct order, as shown at 516, the display device may authenticate the user, as indicated at 518. The threshold may be a constant fixed value, or it may vary with the circumstances of the authentication session, such as user location and/or biometric signature. On the other hand, if the augmented reality features were not selected in the predetermined order (or if the confidence score does not meet the threshold, which may be treated as the features not having been selected in the predetermined order), then the method 500 comprises, at 520, not authenticating the user.
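Steps 512-520 amount to a two-part test: the features must be selected in the correct order, and the confidence score must meet a threshold that may vary with context. A minimal sketch follows; the scoring scale, function name, and constants are assumptions for illustration.

```python
# Sketch of the decision at 512-520: authenticate only when the
# selection order is correct AND the confidence score meets the
# threshold. A below-threshold score is treated as an out-of-order
# selection. Scale and names are illustrative assumptions.

def decide(selected, pass_code, confidence, threshold):
    """Return True only if order matches and confidence >= threshold."""
    in_order = list(selected) == list(pass_code)
    return in_order and confidence >= threshold

code = ["bowl", "moon", "dog"]
print(decide(code, code, confidence=0.9, threshold=0.8))  # True
print(decide(code, code, confidence=0.7, threshold=0.8))  # False
print(decide(["moon", "bowl", "dog"], code, 0.95, 0.8))   # False
```

The `threshold` parameter is where session circumstances (e.g., a private versus public location) could raise or lower the bar for a match.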
In the above example described with reference to Fig. 5, the authentication is based on a single input mechanism. In embodiments where a combination of multiple input mechanisms may be used to authenticate a user, the confidence threshold for authenticating the user may vary according to how many input mechanisms are used. For example, where a single input mechanism is used, a higher confidence threshold may be applied than where multiple input mechanisms are used.
Fig. 6 shows a flow chart illustrating an embodiment of a method 600 for authenticating a user based on a variable number of authentication inputs. As used herein, the term authentication input may refer to any user input, detectable by any sensor of the display device, that may be compared to authentication information linking user movements to augmented reality features in order to authenticate the user. Examples of authentication inputs include, but are not limited to, selecting a virtual object via eye movement, and selecting a virtual object via body movement, via voice command, or by assuming a particular direction or position.
At 602, the method 600 comprises receiving a set of authentication inputs. As indicated at 604, the set of authentication inputs may be selected from a plurality of identified user authentication inputs, where the plurality of identified inputs correspond to inputs made via different sensors. As a more specific example, the plurality of identified inputs may include eye tracking inputs, head tracking inputs, arm gesture inputs, and/or user biometric information (e.g., the user's interpupillary distance (IPD), gait, height, etc.), and the received set may comprise eye tracking input data followed by head movement input data.
In some embodiments, one or more authentication inputs may be associated with a physical location (e.g., as detected via GPS and/or image data) and/or a particular direction (e.g., as detected via a known physical object, such as a poster, at a predetermined location in the background viewable through the display device), such that the authentication inputs can be successfully performed only when the user is in that physical location and/or facing that direction. However, other examples may include inputs made by the user that are not in response to displayed augmented reality imagery. For example, the user may draw a specific pattern with his or her eyes or hand, as detected via eye tracking and/or inertial motion sensors or outward-facing image sensors, and/or may input a voice command. As another example, the presence of other users may be detected, and user authentication may be based on the number, identities, and/or other characteristics of the other users detected. It will be understood that these input patterns are presented for the purpose of example and are not intended to be limiting in any manner.
Continuing at 608, the method 600 comprises determining whether each authentication input matches a corresponding predetermined authentication input. Additionally, as indicated at 610, a confidence score may be assigned to each received authentication input, the confidence score reflecting a level of confidence that the received authentication input matches the predetermined authentication input.
At 612, the confidence scores are compared to a threshold. For example, in some embodiments, the confidence scores of the individual authentication inputs may be combined and compared to a total threshold. In other embodiments, each confidence score may be compared individually to a corresponding threshold. In either case, the user may be authenticated when the confidence scores meet a predetermined condition relative to the threshold (e.g., are equal to or greater than the threshold).
In some embodiments, by using a relatively large number of authentication inputs, the user may be authenticated even if the confidence scores of all the inputs do not meet the predetermined condition relative to the threshold score that would apply to a smaller number of inputs. As a non-limiting example, where the user enters three authentication inputs, the user may be authenticated as long as at least two of the confidence scores exceed the individual confidence threshold. Conversely, if authentication is performed with a smaller number of authentication inputs, the user may be required to meet all of the thresholds in order to be authenticated. Likewise, a combined confidence threshold may vary according to the number of input patterns to achieve a similar effect. Each of these examples may be recognized as a non-limiting example of confidence threshold variation according to the disclosure.
Thus, as indicated at 614, the method 600 may comprise decreasing the confidence threshold as the number of authentication inputs received by the display device increases. Likewise, the method 600 may comprise increasing the threshold as the number of authentication inputs decreases. It will be understood that the confidence threshold may be varied in any suitable manner, including but not limited to those described above.
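The per-input scoring policies above (every input must pass when inputs are few, a majority suffices when inputs are many, and the threshold itself eases as input count grows) can be sketched together. The specific constants and the majority rule are illustrative assumptions, not the patent's policy.

```python
# Sketch of variable-input thresholding: the per-input threshold
# decreases as more authentication inputs are supplied (step 614),
# and with three or more inputs a majority of passing scores
# suffices (e.g. two of three). Constants are assumptions.

def threshold_for(n_inputs, base=0.9, step=0.05, floor=0.6):
    """Per-input confidence threshold, eased as input count grows."""
    return max(floor, base - step * (n_inputs - 1))

def authenticated(scores):
    """With one or two inputs, every score must pass; with three or
    more, a strict majority of passing scores suffices."""
    threshold = threshold_for(len(scores))
    passing = sum(1 for s in scores if s >= threshold)
    if len(scores) <= 2:
        return passing == len(scores)  # all must pass
    return passing * 2 > len(scores)   # majority passes

print(authenticated([0.9, 0.85, 0.5]))  # True: two of three pass
print(authenticated([0.9, 0.5]))        # False: both must pass
```

Raising `floor` or `base`, or requiring unanimity regardless of count, would give the stricter variants the text also contemplates.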
As described above in connection with Fig. 5, in some embodiments the threshold for authenticating the user may vary based on user location or other parameters, as indicated at 616. For example, a first, larger number of authentication inputs may be used in a first, less secure location, and a second, smaller number of authentication inputs may be used in a second, more secure location. In another example, the number of authentication inputs may be the same at both locations, but the threshold at the less secure location may be higher than at the more secure location. Further, parameters other than location may affect the threshold, including a security level specified by the user, the type of information being accessed, etc.
At 618, it is determined whether the confidence score is greater than the threshold. If the confidence score is greater than the threshold, the user is authenticated at 620. On the other hand, if the confidence score is not greater than the threshold, the user is not authenticated at 622.
Thus, the various embodiments described herein provide user authentication via a variety of inputs utilizing the particular features of the disclosed display devices. It will be understood that the disclosed inputs are described as non-limiting examples, and that other inputs may also be used. For example, blink detection, object recognition (e.g., recognition of particular objects such as posters, buildings, artwork, etc.), retina scanning, fingerprint detection, and/or other suitable input mechanisms may also be used to enter user authentication inputs.
Furthermore, combinations of these inputs may be used for authentication. For example, a user may look at the edge of a specific (real or virtual) object while tapping a particular rhythm on the display device with his or her finger, such that the gazed-at object is detected by an outward-facing image sensor and the tapping is detected by an inertial motion sensor. As another example, the user's interpupillary distance (IPD) may be measured by the display device as biometric data. This may be combined with a retina scan, a set of predetermined gestures, and/or other inputs to authenticate the user.
In another example, the one or more inputs used for authentication may change each time the user is authenticated, while still relating to a particular category. An example user passcode comprising such inputs may include a category definition, such as, as non-limiting examples, "cartoon", "feathered", "baby-related", "bouncing object", or "direction of soothing music". To be authenticated, the user selects an object, or performs some other input, that meets the definition of the selected category (for example, selecting something that is a cartoon, has feathers, is baby-related, or moves up and down, or moving his or her head toward the direction of the music heard). While the specific properties of each input (the object selected, the direction the user faces) may change from input to input, the category remains the same until changed by the user.
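One way such a category-based passcode could be checked is sketched below. The tag database, object names, and category labels are hypothetical; the point is only that the concrete object varies per session while the secret category stays fixed.

```python
# Hypothetical sketch of a category-based passcode check: the displayed
# objects (and hence the valid selections) can change every session, but
# authentication succeeds whenever the chosen object carries the user's
# secret category tag.

OBJECT_TAGS = {  # invented tag database for the objects shown this session
    "rubber_duck": {"cartoon", "baby-related"},
    "quill_pen": {"feathered"},
    "bouncing_ball": {"bouncing-object"},
}

def category_auth(selected_object: str, passcode_category: str) -> bool:
    """Authenticate if the selected object satisfies the secret category."""
    return passcode_category in OBJECT_TAGS.get(selected_object, set())
```

Because any object tagged with the category is acceptable, an onlooker who sees one session's selection learns the object but not the underlying category.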
In some embodiments, the authentication inputs are specific to each user. In other embodiments, multiple users may use the same set of authentication inputs. For example, family members of an authorized user may enter the user's authentication input, and the display device may be configured to recognize the input of each family member. Furthermore, different permissions may be applied to each family member based on family-member identification determined from image data, voice data, and the like. In another embodiment, a user may be authenticated via inputs performed by multiple people. For example, if the authentication is used to authorize access to confidential information, verification by another trusted person who monitors the information may be used together with the user's own authentication input to authenticate the user.
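A shared-input, per-member permission scheme of this kind might be sketched as follows. The member names and permission sets are invented for illustration and are not part of the disclosure.

```python
# Hypothetical sketch: one shared authentication input unlocks the device,
# but the permissions granted depend on which family member was recognized
# (e.g., from image or voice data).

PERMISSIONS = {  # invented permission sets per recognized member
    "parent": {"purchases", "settings", "media"},
    "child": {"media"},
}

def unlock(shared_input_ok: bool, member_id: str) -> set:
    """Return the permission set for the recognized member, or an
    empty set when the shared authentication input fails."""
    if not shared_input_ok:
        return set()
    return PERMISSIONS.get(member_id, set())
```

The same gesture sequence thus admits every family member, while the identification step, not the passcode, decides what each of them may do.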
In some embodiments, the methods and processes described above may be tied to a computing system of one or more computing devices. In particular, such methods and processes may be implemented as a computer application program or service, an application programming interface (API), a library, and/or other computer program product.
Fig. 7 schematically shows a non-limiting example of a computing system 700 that can enact one or more of the methods and processes described above. Display device 104 may be one non-limiting example of computing system 700. Computing system 700 is shown in simplified form. It should be appreciated that virtually any computer architecture may be used without departing from the scope of the present invention. In different embodiments, computing system 700 may take the form of a display device, wearable computing device, mainframe computer, server computer, desktop computer, laptop computer, tablet computer, home-entertainment computer, network computing device, gaming device, mobile computing device, mobile communication device (e.g., a smart phone), etc.
Computing system 700 includes a logic subsystem 702 and a storage subsystem 704. Computing system 700 may optionally include a display subsystem 706, an input subsystem 708, a communication subsystem 710, and/or other components not shown in Fig. 7.
Logic subsystem 702 includes one or more physical devices configured to execute instructions. For example, the logic subsystem may be configured to execute instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, or otherwise arrive at a desired result.
The logic subsystem may include one or more processors configured to execute software instructions. Additionally or alternatively, the logic subsystem may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. The processors of the logic subsystem may be single-core or multi-core, and the programs executed thereon may be configured for serial, parallel, or distributed processing. The logic subsystem may optionally include individual components distributed among two or more devices, which may be located remotely in some embodiments and/or configured for coordinated processing. Aspects of the logic subsystem may be virtualized and executed by remotely accessible networked computing devices configured in a cloud-computing configuration.
Storage subsystem 704 includes one or more physical, non-transitory devices configured to hold data and/or instructions executable by the logic subsystem to implement the methods and processes described herein. When such methods and processes are implemented, the state of storage subsystem 704 may be transformed (for example, to hold different data).
Storage subsystem 704 may include removable media and/or built-in devices. Storage subsystem 704 may include optical memory devices (e.g., CD, DVD, HD-DVD, Blu-ray disc, etc.), semiconductor memory devices (e.g., RAM, EPROM, EEPROM, etc.), and/or magnetic memory devices (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others. Storage subsystem 704 may include volatile, nonvolatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices.
It should be understood that storage subsystem 704 includes one or more physical, non-transitory devices. However, in some embodiments, aspects of the instructions described herein may instead be propagated in a transitory fashion by a pure signal (e.g., an electromagnetic signal, an optical signal, etc.) that is not held by a physical device for a finite duration. Furthermore, data and/or other forms of information pertaining to the present disclosure may be propagated by a pure signal.
In some embodiments, aspects of logic subsystem 702 and storage subsystem 704 may be integrated together into one or more hardware-logic components through which the functionality described herein may be enacted. Such hardware-logic components may include, for example, field-programmable gate arrays (FPGAs), program- and application-specific integrated circuits (PASIC/ASICs), program- and application-specific standard products (PSSP/ASSPs), system-on-a-chip (SOC) systems, and complex programmable logic devices (CPLDs).
The term "program" may be used to describe an aspect of computing system 700 that is implemented to perform a particular function. In some cases, a program may be instantiated via logic subsystem 702 executing instructions held by storage subsystem 704. It should be understood that different programs may be instantiated from the same application, service, code block, object, library, routine, API, function, etc. Likewise, the same program may be instantiated by different applications, services, code blocks, objects, routines, APIs, functions, etc. The term "program" may encompass individual or groups of executable files, data files, libraries, drivers, scripts, database records, etc.
It should be understood that a "service", as the term is used herein, is an application program executable across multiple user sessions. A service may be available to one or more system components, programs, and/or other services. In some implementations, a service may run on one or more server computing devices.
When included, display subsystem 706 may be used to present a visual representation of data held by storage subsystem 704. This visual representation may take the form of a graphical user interface (GUI). As the herein-described methods and processes change the data held by the storage subsystem, and thus transform the state of the storage subsystem, the state of display subsystem 706 may likewise be transformed to visually represent changes in the underlying data. Display subsystem 706 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic subsystem 702 and/or storage subsystem 704 in a shared enclosure, or such display devices may be peripheral display devices.
When included, input subsystem 708 may comprise or interface with one or more user-input devices such as a keyboard, mouse, touch screen, or game controller. In some embodiments, the input subsystem may comprise or interface with selected natural user input (NUI) componentry. Such componentry may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on-board or off-board. Example NUI componentry may include a microphone for speech and/or voice recognition; an infrared, color, ultrasonic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition; as well as electric-field sensing componentry for assessing brain activity.
When included, communication subsystem 710 may be configured to communicatively couple computing system 700 with one or more other computing devices. Communication subsystem 710 may include wired and/or wireless communication devices compatible with one or more different communication protocols. As non-limiting examples, the communication subsystem may be configured for communication via a wireless telephone network, or a wired or wireless local- or wide-area network. In some embodiments, the communication subsystem may allow computing system 700 to send and/or receive messages to and/or from other devices via a network such as the Internet.
It should be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, the various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Likewise, the order of the above-described processes may be changed.
The subject matter of the present disclosure includes all novel and non-obvious combinations and sub-combinations of the various processes, systems, and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.
Claims (10)
1. A method (500) for authenticating a user of a computing system comprising a display device, the method comprising:
displaying (502) one or more virtual images on the display device, the one or more virtual images comprising a set of augmented reality features;
identifying (504) one or more movements of the user via data received from a sensor of the computing system;
comparing (510) the identified movements of the user to a set of predetermined user authentication information, the set of predetermined authentication information linking user authentication to the augmented reality features in a predetermined order;
authenticating (518) the user if the identified movements indicate that the user selected the augmented reality features in the predetermined order; and
not authenticating (520) the user if the identified movements indicate that the user did not select the augmented reality features in the predetermined order,
wherein the augmented reality features are displayed differently during different authentication sessions.
2. The method of claim 1, characterized in that the sensor comprises an inertial motion sensor and the data comprises inertial motion data.
3. The method of claim 1, characterized in that the sensor comprises an eye-tracking sensor system and the data comprises eye movement data.
4. The method of claim 1, characterized in that the one or more virtual images are displayed in a similar location during each authentication of the user.
5. The method of claim 1, characterized in that the one or more virtual images are displayed in different locations in different authentication sessions.
6. The method of claim 1, further comprising outputting an indication that the user selected an augmented reality feature if an identified movement of the user matches a movement associated with the augmented reality feature.
7. The method of claim 1, characterized by further comprising:
determining a confidence value reflecting a level of confidence that the user selected the augmented reality features in the predetermined order;
authenticating the user if the confidence value exceeds a threshold; and
not authenticating the user if the confidence value is below the threshold.
8. The method of claim 7, characterized in that the threshold is based on a location of the user.
9. The method of claim 7, characterized in that the threshold is changed based on some of the augmented reality features.
10. The method of claim 1, characterized in that the set of augmented reality features comprises virtual features and real objects in the user's field of view.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201310757226.1A CN103761460B (en) | 2013-12-18 | 2013-12-18 | Method for authenticating users of display equipment |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN103761460A CN103761460A (en) | 2014-04-30 |
| CN103761460B true CN103761460B (en) | 2017-01-18 |
Family
ID=50528696
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201310757226.1A Active CN103761460B (en) | 2013-12-18 | 2013-12-18 | Method for authenticating users of display equipment |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN103761460B (en) |
Families Citing this family (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105844128B (en) * | 2015-01-15 | 2021-03-02 | 北京三星通信技术研究有限公司 | Identity recognition method and device |
| RU2606874C1 (en) * | 2015-12-02 | 2017-01-10 | Виталий Витальевич Аверьянов | Method of augmented reality environment generating device controlling |
| JP6801251B2 (en) * | 2016-06-16 | 2020-12-16 | コニカミノルタ株式会社 | Information equipment management system, personal identification device and program |
| CN107018121B (en) | 2016-10-13 | 2021-07-20 | 创新先进技术有限公司 | Method and device for user authentication |
| CN106709303B (en) * | 2016-11-18 | 2020-02-07 | 深圳超多维科技有限公司 | Display method and device and intelligent terminal |
| CN106599656A (en) * | 2016-11-28 | 2017-04-26 | 深圳超多维科技有限公司 | Display method, device and electronic equipment |
| KR102700049B1 (en) * | 2017-02-03 | 2024-08-29 | 삼성전자주식회사 | Electronic device for authenticating biometric data and system |
| CN107508826B (en) * | 2017-09-14 | 2020-05-05 | 阿里巴巴集团控股有限公司 | Authentication method and device based on VR scene, VR terminal and VR server |
| CN111344775A (en) * | 2017-10-06 | 2020-06-26 | S·拉德 | Augmented Reality Systems and Kits |
| KR102397886B1 (en) * | 2017-12-06 | 2022-05-13 | 삼성전자주식회사 | Electronic device, user terminal apparatus, and control method thereof |
| WO2021179968A1 (en) * | 2020-03-07 | 2021-09-16 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method and system for authenticating a user for providing access to a content on a wearable computing device |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7730546B2 (en) * | 2005-07-01 | 2010-06-01 | Time Warner, Inc. | Method and apparatus for authenticating usage of an application |
| US8601589B2 (en) * | 2007-03-05 | 2013-12-03 | Microsoft Corporation | Simplified electronic messaging system |
| WO2011106797A1 (en) * | 2010-02-28 | 2011-09-01 | Osterhout Group, Inc. | Projection triggering through an external marker in an augmented reality eyepiece |
- 2013-12-18: CN application CN201310757226.1A (patent CN103761460B), status: Active
Also Published As
| Publication number | Publication date |
|---|---|
| CN103761460A (en) | 2014-04-30 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN103761460B (en) | Method for authenticating users of display equipment | |
| US9977882B2 (en) | Multi-input user authentication on display device | |
| US12111962B2 (en) | User interfaces and device settings based on user identification | |
| EP2887253A1 (en) | User authentication via graphical augmented reality password | |
| US10331945B2 (en) | Fair, secured, and efficient completely automated public Turing test to tell computers and humans apart (CAPTCHA) | |
| US20230273985A1 (en) | Devices, methods, and graphical user interfaces for authorizing a secure operation | |
| US10510190B2 (en) | Mixed reality interactions | |
| US9836889B2 (en) | Executable virtual objects associated with real objects | |
| US9030495B2 (en) | Augmented reality help | |
| CN109074441A (en) | Based on the certification watched attentively | |
| AU2022221706B2 (en) | User interfaces and device settings based on user identification | |
| KR102312900B1 (en) | User authentication on display device | |
| WO2023230290A1 (en) | Devices, methods, and graphical user interfaces for user authentication and device management | |
| KR102906456B1 (en) | Adaptive user enrollment for electronic devices | |
| JP6272688B2 (en) | User authentication on display devices | |
| KR102193636B1 (en) | User authentication on display device | |
| Sluganovic | Security of mixed reality systems: authenticating users, devices, and data | |
| AU2026200262A1 (en) | User interfaces and device settings based on user identification |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| ASS | Succession or assignment of patent right |
Owner name: MICROSOFT TECHNOLOGY LICENSING LLC Free format text: FORMER OWNER: MICROSOFT CORP. Effective date: 20150731 |
|
| C41 | Transfer of patent application or patent right or utility model | ||
| TA01 | Transfer of patent application right |
Effective date of registration: 20150731 Address after: Washington State Applicant after: Microsoft Technology Licensing, LLC Address before: Washington State Applicant before: Microsoft Corp. |
|
| C14 | Grant of patent or utility model | ||
| GR01 | Patent grant |