WO2025120738A1 - Authentication device - Google Patents

Info

Publication number
WO2025120738A1
Authority
WO
WIPO (PCT)
Prior art keywords
authentication
eye
terminal
gaze
mark
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/JP2023/043489
Other languages
French (fr)
Japanese (ja)
Inventor
正人 山下
健太 江口
大楼 茂木
Current Assignee
NTT Docomo Inc
Original Assignee
NTT Docomo Inc
Application filed by NTT Docomo Inc
Priority to JP2025561558A (patent JPWO2025120738A1/ja)
Priority to PCT/JP2023/043489 (patent WO2025120738A1/en)
Publication of WO2025120738A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 — Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 — Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 — User authentication

Description

  • This disclosure relates to an authentication device.
  • The information processing device described in Patent Document 1 is known as a technology for authenticating users.
  • This device acquires the user's electrooculogram and unlocks the information processing device if it determines that the acquired electrooculogram satisfies a predetermined condition.
  • This device identifies the numbers that the user is looking at on the passcode entry screen based on changes in gaze direction derived from the detected electrooculogram. If the sequence of numbers that the user looks at matches a preset authentication passcode, the device unlocks itself.
  • The device of Patent Document 1 is thus unlocked based on the sequence of numbers that the user looks at on the passcode input screen.
  • If the preset authentication passcode becomes known to someone other than the user, authentication may be performed by someone other than the user. There is therefore a demand to improve the security of authentication.
  • The present disclosure aims to provide an authentication device that can improve the security of authentication.
  • The authentication device includes an eye detection unit that detects the user's eye, a gaze detection unit that detects the user's gaze based on the detection result of the eye detection unit, a state detection unit that detects the state of the user's eye based on the detection result of the eye detection unit, a display unit that displays an authentication pattern including a mark for guiding the gaze for authentication, and an authentication unit that performs authentication based on the state of the eye when the mark is ahead of the gaze.
  • The present disclosure makes it possible to improve the security of authentication.
  • FIG. 1 is a diagram for explaining a problem to be solved by the present disclosure.
  • FIG. 2(a) is a diagram showing an example of an input pattern and an authentication pattern, and FIG. 2(b) is a diagram showing another example of an input pattern and an authentication pattern.
  • FIG. 3 is a block diagram illustrating a terminal according to an embodiment.
  • FIG. 4 is a flowchart illustrating an operation of a terminal according to an embodiment.
  • FIG. 5 is a flowchart showing the process for setting authentication information shown in FIG. 4, and FIG. 6 is a flowchart showing the process for performing authentication shown in FIG. 4.
  • FIG. 7 is a diagram illustrating a hardware configuration of a terminal according to an embodiment.
  • FIG. 1 is a diagram for explaining the problem addressed by the terminal 1 (authentication device) according to this embodiment.
  • terminal 1 is a mobile terminal such as a smartphone or a tablet terminal.
  • a mobile terminal is a terminal that can be carried by a user.
  • Terminal 1 may also be a non-portable terminal such as a desktop computer, for example.
  • Terminal 1 performs authentication. "Performing authentication" means determining whether or not the user operating the terminal 1 is a pre-set user (hereinafter sometimes referred to as an "authenticated user"). If it determines that the user operating it is an authenticated user, the terminal 1 unlocks itself. If it determines that the user is not an authenticated user, the terminal 1 does not unlock itself.
  • the terminal 1 is equipped with a display unit 14 (described below) that displays an authentication pattern M.
  • the display unit 14 displays a preset authentication pattern M.
  • the display unit 14 displays the authentication pattern M on the display of the terminal 1.
  • the authentication pattern M is an image for authenticating the terminal 1.
  • the authentication pattern M includes a mark M1 for guiding the user's line of sight for authenticating the terminal 1.
  • the mark M1 is a single dot. The type of mark M1 can be changed as appropriate.
  • the authentication pattern M includes multiple marks M1.
  • the marks M1 are arranged in three rows of three marks each. That is, the authentication pattern M includes nine marks M1.
  • the positions of the marks M1 are, for example, set in advance. The contents of the authentication pattern M can be changed as appropriate.
  • An authentication pattern for performing authentication is preset in the terminal 1.
  • the authentication pattern is the order of the marks M1 over which the user moves the gaze E.
  • the authentication pattern is set, for example, by the user.
  • the number of marks M1 to which the gaze E is moved in the authentication pattern (hereinafter sometimes referred to as the "prescribed number") is set in advance.
  • “Gaze” refers to the direction in which the user is looking.
  • "Ahead of the gaze" refers to the position in the authentication pattern M at which the user is looking.
  • the user moves his/her gaze E, for example, in accordance with a preset authentication pattern.
  • the user moves the gaze E in accordance with the order of the marks M1 that move the gaze E, which are preset.
  • the user positions the gaze E on one mark M1 and gazes at that one mark M1, and then moves the gaze E away from that one mark M1.
  • the user moves the gaze E away from the one mark M1 and moves the gaze E towards another mark M1, and then positions the gaze E on that other mark M1 and gazes at that other mark M1.
  • the user repeats these actions to authenticate the terminal 1.
  • the terminal 1 detects the user's eye E1.
  • the eye E1 includes the user's eyeball and the area surrounding the eyeball. For example, the eye E1 includes the user's eyelid.
  • the terminal 1 detects the user's gaze E based on the detection result of the eye E1.
  • the terminal 1 acquires the order of marks M1 to which the user moves the gaze E (hereinafter, sometimes referred to as "input pattern") based on the detected gaze E.
  • the terminal 1 determines whether or not the acquired input pattern matches a preset authentication pattern. In other words, the terminal 1 determines whether or not the order of marks M1 to which the user moves the gaze E matches the preset order of marks M1 to which the gaze E moves.
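  • As a minimal sketch, the pattern comparison described above can be expressed as follows. The mark numbering (0-8 over the 3 × 3 grid) and the function name are illustrative assumptions, not taken from this disclosure.

```python
# Sketch of the input-pattern check performed by terminal 1.
# Assumed convention: the marks M1 are numbered 0-8 across the 3x3 grid,
# and a pattern is the ordered list of mark indices the gaze E visits.

def patterns_match(input_pattern, authentication_pattern):
    """True only when the user moved the gaze E over the marks M1
    in exactly the preset order."""
    return list(input_pattern) == list(authentication_pattern)

# A trace in the preset order matches; any deviation does not.
authentication_pattern = [0, 4, 8, 2]
print(patterns_match([0, 4, 8, 2], authentication_pattern))  # True
print(patterns_match([0, 4, 6, 2], authentication_pattern))  # False
```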
  • FIG. 2(a) is a diagram showing an example of an input pattern P1 and an authentication pattern P2.
  • FIG. 2(b) is a diagram showing another example of an input pattern P1 and an authentication pattern P2.
  • the input pattern P1 is indicated by a solid arrow
  • the authentication pattern P2 is indicated by a dashed arrow.
  • FIG. 2(a) shows an example in which input pattern P1 does not match authentication pattern P2. If the user operating terminal 1 is not an authenticated user, it is highly likely that input pattern P1 input by the user operating terminal 1 does not match authentication pattern P2. If it is determined that input pattern P1 does not match authentication pattern P2, terminal 1 determines that the user operating terminal 1 is not an authenticated user. In this case, terminal 1 does not unlock terminal 1.
  • FIG. 2(b) shows an example in which input pattern P1 matches authentication pattern P2. If the user operating terminal 1 is an authenticated user, the user operating terminal 1 can input input pattern P1 that is the same as authentication pattern P2. If it is determined that input pattern P1 matches authentication pattern P2, terminal 1 determines that the user operating terminal 1 is an authenticated user. In this case, terminal 1 unlocks terminal 1.
  • When authentication is performed only by determining whether the input pattern P1 matches the authentication pattern P2, there is room for improvement in the security of the authentication. For example, if the preset authentication pattern P2 becomes known to someone other than the authenticated user, the terminal 1 may be unlocked by someone other than the authenticated user. To solve this problem, the terminal 1 performs authentication based on the state of the eye E1 while the user is looking at the authentication pattern M. As a result, even if the authentication pattern P2 becomes known to someone other than the authenticated user, authentication of the terminal 1 by that person can be prevented based on the state of the eye E1. The configuration and operation of the terminal 1 are described in detail below.
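  • To illustrate why adding the eye state helps, a combined check might look like the following sketch; representing eye states as numeric measurements and the tolerance value are assumptions made for illustration only.

```python
# Sketch: authentication passes only if both the traced pattern and the
# recorded eye-state measurements match the enrolled values.
# (The tolerance and the numeric eye-state representation are assumed.)

def authenticate(input_pattern, input_eye_states,
                 stored_pattern, stored_eye_states, tolerance=0.1):
    """Pattern must match exactly; each eye-state measurement must fall
    within `tolerance` of the corresponding enrolled measurement."""
    if list(input_pattern) != list(stored_pattern):
        return False
    if len(input_eye_states) != len(stored_eye_states):
        return False
    return all(abs(a - b) <= tolerance
               for a, b in zip(input_eye_states, stored_eye_states))
```

Under this sketch, knowing the pattern P2 alone is not enough: the eye-state measurements must also agree with those enrolled by the authenticated user.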
  • FIG. 3 is a block diagram showing a terminal 1 according to an embodiment.
  • the terminal 1 includes an eye detection unit 11, a gaze detection unit 12, a state detection unit 13, a display unit 14, and an authentication unit 15.
  • the eye detection unit 11 detects the user's eye E1.
  • the eye detection unit 11 detects the user's eye E1, for example, via a camera mounted on the terminal 1.
  • the eye detection unit 11 acquires an image via the camera, for example.
  • the eye detection unit 11 may detect the eye E1, for example, by analyzing the acquired image using AI.
  • the eye detection unit 11 may detect the user's eye E1 using known means.
  • the gaze detection unit 12 detects the user's gaze E based on the detection result of the eye detection unit 11.
  • the gaze detection unit 12, for example, acquires the position of the pupil of the eye E1 from the eye E1 detected by the eye detection unit 11.
  • the gaze detection unit 12 may detect the direction in which the pupil is pointing based on the acquired position of the pupil of eye E1.
  • the gaze detection unit 12 may detect the direction in which the detected pupil is pointing as the gaze E.
  • the gaze detection unit 12 may detect the user's gaze E using known means.
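  • A minimal sketch of estimating the gaze E from the pupil position follows; the disclosure leaves the method to "known means", so this simple 2-D geometry and the function name are assumptions.

```python
import math

# Sketch: take the gaze E as the unit vector from the eye center toward
# the pupil, i.e. the direction in which the pupil is pointing.

def gaze_direction(pupil_xy, eye_center_xy):
    """Unit vector from the eye center toward the pupil in image
    coordinates; (0, 0) when the pupil is exactly centered."""
    dx = pupil_xy[0] - eye_center_xy[0]
    dy = pupil_xy[1] - eye_center_xy[1]
    norm = math.hypot(dx, dy)
    if norm == 0.0:
        return (0.0, 0.0)  # pupil centered: looking straight at the camera
    return (dx / norm, dy / norm)
```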
  • the state detection unit 13 detects the state of the user's eye E1 based on the detection result of the eye detection unit 11.
  • the state of eye E1 is a characteristic of eye E1 that is unique to each person.
  • the "unique characteristic” mentioned here does not necessarily mean a characteristic that strictly differs from person to person, but also includes a characteristic that a certain number of people have in the same way.
  • the state detection unit 13 detects at least one of blinking and characteristics related to eye E1 as the state of eye E1. In this embodiment, both blinking and characteristics related to eye E1 are detected as the state of eye E1.
  • the state detection unit 13 detects the position of the eyelid from the eye E1 detected by the eye detection unit 11. The state detection unit 13 determines whether the detected eyelid position has moved a predetermined distance or more. If it is determined that the eyelid position has moved a predetermined distance or more, the state detection unit 13 determines that the user has blinked, and detects the user's blink. The state detection unit 13 may detect the user's blink using known means.
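  • The eyelid-movement blink check described above can be sketched as follows; the per-frame representation of the eyelid position and the threshold value are illustrative assumptions.

```python
# Sketch of blink detection from eyelid positions sampled per frame.
# Counts one blink per closing movement (a drop of at least the
# predetermined distance between consecutive frames); a real detector
# would also confirm the subsequent reopening.

PREDETERMINED_DISTANCE = 5.0  # assumed eyelid-movement threshold (pixels)

def count_blinks(eyelid_y_per_frame, threshold=PREDETERMINED_DISTANCE):
    """Number of frames where the eyelid dropped by >= threshold."""
    blinks = 0
    for prev, cur in zip(eyelid_y_per_frame, eyelid_y_per_frame[1:]):
        if prev - cur >= threshold:
            blinks += 1
    return blinks
```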
  • the features related to eye E1 refer to, for example, the features of eye E1 itself.
  • the state detection unit 13 detects at least one of the state of wrinkles around the eyes, the focal depth of eye E1, and the number of eyeballs as the features related to eye E1. In this embodiment, the state detection unit 13 detects all of the state of wrinkles around the eyes, the focal depth of eye E1, and the number of eyeballs.
  • the state detection unit 13 detects the position of the eyeball from the eye E1 detected by the eye detection unit 11.
  • the eyeball referred to here refers to a real eyeball and does not include, for example, a prosthetic eye.
  • the state detection unit 13 detects, for example, an area within a specified distance from the detected eyeball position as the eye.
  • the state detection unit 13 detects, for example, the length, position, and thickness of wrinkles formed around the detected eye as the state of wrinkles around the eyes.
  • the state detection unit 13 may detect the state of wrinkles around the eyes using known means.
  • the state detection unit 13 detects the focal depth of the eye E1 from the eye E1 detected by the eye detection unit 11. Specifically, the state detection unit 13 first acquires, for example, the distance from the terminal 1 to the user's eye E1. The state detection unit 13 may, for example, analyze an image acquired via a camera mounted on the terminal 1 and acquire from the analyzed image the distance to the eye E1 contained in the image. The state detection unit 13 then detects the focal depth of the eye E1 based on the acquired distance. The state detection unit 13 may detect the focal depth of the eye E1 using known means.
  • the state detection unit 13 may detect the focal depth of the eye E1 based on the size of the eye E1.
  • the size of the eye E1 is, for example, the distance from the user's upper eyelid to the lower eyelid.
  • the state detection unit 13 may obtain the size of the eye E1 based on, for example, the obtained distance from the terminal 1 to the eye E1 and an image obtained via a camera mounted on the terminal 1.
  • the state detection unit 13 detects the number of eyeballs from the eye E1 detected by the eye detection unit 11. For example, if the user has one eye E1 and a prosthetic eye, the state detection unit 13 detects the number of eyeballs as one.
  • the authentication unit 15 performs authentication based on the state of the eye E1 detected by the state detection unit 13 when the mark M1 is in front of the gaze E detected by the gaze detection unit 12. Below, the processing of the authentication unit 15 when the mark M1 is in front of the gaze E will be explained.
  • the authentication unit 15 acquires at least one of the number of blinks and the timing of the blinks based on the blinks detected by the state detection unit 13. In this embodiment, the authentication unit 15 acquires both the number of blinks and the timing of the blinks based on the blinks detected by the state detection unit 13.
  • the authentication unit 15 detects a gaze timing, which is the timing at which the mark M1 comes to be ahead of the gaze E, based on the gaze E detected by the gaze detection unit 12.
  • the authentication unit 15 detects a non-gaze timing, which is the timing when the gaze E moves away from the mark M1, based on the gaze E detected by the gaze detection unit 12.
  • the authentication unit 15 acquires a first number, which is the number of times the user blinks between the detected gaze timing and the non-gaze timing, as the number of blinks.
  • the authentication unit 15 acquires a first blink time, which is the time from the gaze timing to the timing when the user blinks, between the detected gaze timing and non-gazing timing, based on the blinks detected by the state detection unit 13.
  • the timing when the user blinks means, for example, the timing when the user closes their eyelids after opening them. If the user blinks multiple times between the gaze timing and non-gazing timing, the authentication unit 15 acquires the first blink time for each of the multiple blinks. The authentication unit 15 acquires the acquired first blink time as the blink timing.
  • the frequency and timing of blinking can reflect, for example, a person's constitution and habits. Therefore, the number of blinks and the timing of blinking can be said to be characteristics of the eye E1 that are unique to each person.
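  • Deriving the first number and the first blink times can be sketched as below; representing blinks as timestamped events is an assumption, since the disclosure does not specify a data layout.

```python
# Sketch: given blink timestamps, keep those that fall between the gaze
# timing and the non-gaze timing, and measure each first blink time from
# the gaze timing.

def blink_features(blink_times, gaze_timing, non_gaze_timing):
    """Return (first number, list of first blink times) for the blinks
    occurring between the gaze timing and the non-gaze timing."""
    in_window = [t for t in blink_times if gaze_timing <= t <= non_gaze_timing]
    first_blink_times = [t - gaze_timing for t in in_window]
    return len(in_window), first_blink_times
```

The second number and second blink times between marks would follow the same shape, with the window running from the non-gaze timing of one mark to the gaze timing of the next.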
  • the authentication unit 15 performs authentication based on the state of the eye E1 when the gaze E moves from one mark M1 to another mark M1. Below, the processing of the authentication unit 15 when the gaze E moves from one mark M1 to another mark M1 will be explained.
  • the authentication unit 15 acquires, as the number of blinks, a second number of times that the user blinks between the detected timing of not gazing at one mark M1 and the timing of gazing at the other mark M1.
  • the authentication unit 15 acquires, as the blink timing, a second blink time that is the time from the detected timing of not gazing at one mark M1 to the timing at which the user blinks between the detected timing of not gazing at one mark M1 and the timing of gazing at the other mark M1. If the user blinks multiple times between the detected timing of not gazing at one mark M1 and the timing of gazing at the other mark M1, the authentication unit 15 acquires a second blink time for each of the multiple blinks. The authentication unit 15 acquires the acquired second blink time as the blink timing.
  • FIG. 4 is a flowchart showing the operation of the terminal 1 according to one embodiment.
  • the terminal 1 sets authentication information (step S1).
  • the authentication information is information used to authenticate the terminal 1 in step S5, which will be described later.
  • the terminal 1 executes step S1.
  • the terminal 1 may omit executing step S1.
  • In step S1, the terminal 1 executes the process shown in FIG. 5.
  • FIG. 5 is a flowchart showing the process of setting the authentication information shown in FIG. 4.
  • the display unit 14 displays the authentication pattern M (step S11).
  • the authentication pattern M is, for example, set in advance.
  • In step S11, the display unit 14 displays the authentication pattern M on, for example, the display of the terminal 1.
  • Next, the eye detection unit 11 detects the user's eye E1 (step S12).
  • In step S12, the eye detection unit 11 detects the user's eye E1, for example, via a camera mounted on the terminal 1.
  • the gaze detection unit 12 detects the user's gaze E based on the detection result in step S12 (step S13).
  • In step S13, the gaze detection unit 12 detects, for example, the position of the pupil of the eye E1 based on the eye E1 detected in step S12. Then, the gaze detection unit 12 detects the direction in which the pupil is pointing based on the detected pupil position. The gaze detection unit 12 then detects the direction in which the detected pupil is pointing as the user's gaze E.
  • Next, the authentication unit 15 judges whether or not there is a mark M1 at the end of the line of sight E detected in step S13 (step S14). Specifically, the authentication unit 15 first acquires, for example, the distance from the terminal 1 to the user's eye E1. The authentication unit 15 may analyze an image acquired, for example, via a camera mounted on the terminal 1, and acquire from the analyzed image the distance to the eye E1 included in the image. The authentication unit 15 may also acquire the distance from the terminal 1 to the user's eye E1 using known means. Next, the authentication unit 15 calculates the position of the end of the line of sight E based on the acquired distance and the line of sight E detected in step S13.
  • The authentication unit 15 then judges whether or not the calculated position of the end of the line of sight E is a predetermined distance or more away from the preset position of the mark M1. If it is determined that the position of the end of the line of sight E is not a predetermined distance or more away from the position of the mark M1, the authentication unit 15 judges that the mark M1 is at the end of the line of sight E (step S14: YES). If it is determined that the position at the end of the line of sight E is a predetermined distance or more away from the position of the mark M1, the authentication unit 15 determines that the mark M1 is not at the end of the line of sight E (step S14: NO).
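  • The step S14 check can be sketched with a Euclidean distance on the display; the coordinate convention and the threshold value are assumptions, since the text only specifies "a predetermined distance or more".

```python
import math

# Sketch of step S14: the mark M1 is judged to be at the end of the line
# of sight E when the gaze point is closer than the predetermined
# distance (value assumed) to the mark position.

PREDETERMINED_GAZE_DISTANCE = 30.0  # assumed threshold, in display pixels

def mark_in_line_of_sight(gaze_point, mark_position,
                          threshold=PREDETERMINED_GAZE_DISTANCE):
    """Step S14: YES when the point at the end of the line of sight E is
    less than the predetermined distance from the mark M1."""
    return math.dist(gaze_point, mark_position) < threshold
```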
  • If it is determined that the mark M1 is in the line of sight E (step S14: YES), the terminal 1 executes step S15. If it is determined that the mark M1 is not in the line of sight E (step S14: NO), the terminal 1 executes step S14 again. The terminal 1 repeatedly executes step S14 for a predetermined time until it is determined that the mark M1 is in the line of sight E. The authentication unit 15 detects the timing at which it is determined in step S14 that the mark M1 is in the line of sight E as the gaze timing.
  • In step S15, the state detection unit 13 detects the state of the eye E1 when the mark M1 is in the line of sight E.
  • the state detection unit 13 detects blinking and features related to the eye E1 as the state of the eye E1.
  • the state detection unit 13 detects the state of wrinkles around the eyes, the focal depth of the eye E1, and the number of eyeballs as the features related to the eye E1.
  • In step S15, the authentication unit 15 detects a non-gazing timing in step S16 described later, and then acquires the number of blinks and the timing of the blinks based on the detected blinks. More specifically, first, the authentication unit 15 acquires a first number of times, which is the number of times the user blinked between the gazing timing detected in step S14 or step S18 (described later) and the non-gazing timing detected in step S16. Before executing step S18, the authentication unit 15 acquires the first number of times between the gazing timing detected in step S14 and the non-gazing timing detected in step S16. After executing step S18 once, the authentication unit 15 acquires the first number of times between the gazing timing detected in step S18 and the non-gazing timing detected in step S16. The authentication unit 15 acquires the acquired first number of times as the number of blinks.
  • In step S15, the authentication unit 15 also acquires the first blink time between the gaze timing detected in step S14 or step S18 and the non-gazing timing detected in step S16.
  • Before executing step S18, the authentication unit 15 acquires the first blink time between the gaze timing detected in step S14 and the non-gazing timing detected in step S16.
  • After executing step S18 once, the authentication unit 15 acquires the first blink time between the gaze timing detected in step S18 and the non-gazing timing detected in step S16.
  • the authentication unit 15 acquires the acquired first blink time as the blink timing.
  • Next, the authentication unit 15 judges whether the point ahead of the gaze E detected in step S13 has moved away from the mark M1 (step S16).
  • In step S16, the authentication unit 15 calculates the position ahead of the gaze E by executing the same process as in step S14. The authentication unit 15 then judges whether the calculated position is a predetermined distance or more away from the preset position of the mark M1. If it is judged that the position is not a predetermined distance or more away from the position of the mark M1, the authentication unit 15 judges that the gaze E has not moved away from the mark M1 (step S16: NO). If it is judged that the position is a predetermined distance or more away from the position of the mark M1, the authentication unit 15 judges that the gaze E has moved away from the mark M1 (step S16: YES).
  • If it is determined that the gaze E has not left the mark M1 (step S16: NO), the terminal 1 executes step S15 again. The terminal 1 repeatedly executes steps S15 and S16 for a predetermined time until it is determined that the gaze E has left the mark M1. If it is determined that the gaze E has left the mark M1 (step S16: YES), the terminal 1 executes step S17. The authentication unit 15 detects the timing at which it is determined in step S16 that the gaze E has left the mark M1 as a non-gaze timing.
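  • The repeat-for-a-predetermined-time loops in steps S14 through S16 share one shape, sketched below with time modelled as a fixed per-frame budget; the budget and the helper name are assumptions.

```python
# Sketch of the "repeat until determined, up to a predetermined time"
# loop used in steps S14-S16 (and again in steps S51-S53).

def wait_until(condition_per_frame, max_frames):
    """Evaluate the check once per frame; return the frame index at which
    it first succeeds, or None once the predetermined time has elapsed."""
    for frame in range(max_frames):
        if condition_per_frame(frame):
            return frame
    return None
```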
  • When it is determined in step S16 that the gaze E has left the mark M1, it is considered that the user is moving the gaze E from one mark M1 to another mark M1. For example, the user gazes at one mark M1 and then selects another mark M1 to which the gaze E will move. The user moves the gaze E to the other mark M1 that he or she has selected.
  • In step S17, the state detection unit 13 detects the state of the eye E1 when the gaze E moves from one mark M1 to another mark M1.
  • the state detection unit 13 executes a process similar to that of step S15 to detect blinking and characteristics related to the eye E1 as the state of the eye E1.
  • In step S17, the authentication unit 15 detects the gaze timing in step S18 described below, and then acquires the number of blinks and the blink timing based on the detected blinks. More specifically, the authentication unit 15 first acquires a second number, which is the number of times the user blinked between the non-gaze timing detected in step S16 and the gaze timing detected in step S18. The authentication unit 15 acquires the acquired second number as the number of blinks.
  • In step S17, the authentication unit 15 also acquires a second blinking time between the non-gazing timing detected in step S16 and the gazing timing detected in step S18.
  • the authentication unit 15 acquires the acquired second blinking time as the blinking timing.
  • the authentication unit 15 determines whether or not the mark M1 is located at the end of the line of sight E detected in step S13 (step S18). In step S18, the authentication unit 15 executes the same process as in step S14 to determine whether or not the mark M1 is located at the end of the line of sight E.
  • If it is determined that there is no mark M1 in the line of sight E (step S18: NO), the terminal 1 executes step S17 again. The terminal 1 repeatedly executes steps S17 and S18 for a predetermined time until it is determined that there is a mark M1 in the line of sight E. If it is determined that there is a mark M1 in the line of sight E (step S18: YES), the terminal 1 executes step S19. The authentication unit 15 detects the timing at which it is determined in step S18 that there is a mark M1 in the line of sight E as the gaze timing.
  • In step S19, the terminal 1 determines whether or not the termination condition is satisfied.
  • the termination condition is that the number of times gaze timing is detected in steps S14 and S18 reaches the above-mentioned preset number.
  • the termination condition is, for example, set in advance. The content of the termination condition can be changed as appropriate. If it is determined that the termination condition is not satisfied (step S19: NO), the terminal 1 executes step S15 again. If it is determined that the termination condition is satisfied (step S19: YES), the terminal 1 executes step S20.
  • In step S20, the terminal 1 stores the authentication information.
  • In step S20, the terminal 1 stores the authentication pattern P2 (see FIG. 2) as authentication information.
  • the terminal 1 acquires the order of the marks M1 that were determined to be in the line of sight E in steps S14 and S18.
  • the terminal 1 stores the acquired order of the marks M1 as the authentication pattern P2.
  • In step S20, the terminal 1 also stores the first authentication information as authentication information.
  • the first authentication information indicates the characteristics of the eye E1 when the mark M1 is at the end of the line of sight E.
  • the terminal 1 acquires the characteristics of the eye E1 detected in step S15 for each mark M1 at which the user's line of sight E is located.
  • the terminal 1 then stores the acquired characteristics of the eye E1 as the first authentication information.
  • In step S20, the terminal 1 also stores the second authentication information as authentication information.
  • the second authentication information indicates the number of blinks and the timing of the blinks when the mark M1 is in the line of sight E.
  • the terminal 1 stores the number of blinks and the timing of the blinks acquired in step S15 as the second authentication information.
  • In step S20, the terminal 1 also stores the third authentication information as authentication information.
  • the third authentication information indicates the characteristics of the eye E1 when the gaze E moves from one mark M1 to another mark M1.
  • the terminal 1 stores the characteristics of the eye E1 detected in step S17 as the third authentication information.
  • In step S20, the terminal 1 also stores the fourth authentication information as authentication information.
  • the fourth authentication information indicates the number of blinks and the timing of the blinks when the gaze E moves from one mark M1 to another mark M1.
  • the terminal 1 stores the number of blinks and the timing of the blinks acquired in step S17 as the fourth authentication information. After the above processing, the terminal 1 ends step S1.
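  • One possible in-memory layout for the stored authentication information is sketched below; the disclosure names four kinds of information but no storage format, so this dataclass and its field names are illustrative assumptions.

```python
from dataclasses import dataclass

# Assumed container for the authentication information stored in step S20.

@dataclass
class AuthenticationInfo:
    authentication_pattern: list  # order of marks M1 (authentication pattern P2)
    first_info: list    # eye features while a mark M1 is ahead of the gaze E
    second_info: list   # blink counts and first blink times while gazing
    third_info: list    # eye features while moving between marks M1
    fourth_info: list   # blink counts and second blink times while moving

info = AuthenticationInfo([0, 4, 8], [], [], [], [])
```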
  • Next, the display unit 14 displays the authentication pattern M (step S2).
  • In step S2, the display unit 14 displays the authentication pattern M by executing the same process as in step S11.
  • the eye detection unit 11 detects the user's eye E1 (step S3).
  • the eye detection unit 11 detects the user's eye E1 by executing a process similar to that in step S12.
  • the gaze detection unit 12 detects the user's gaze E based on the detection result in step S3 (step S4).
  • the gaze detection unit 12 detects the user's gaze E by executing a process similar to that in step S13.
  • Next, the terminal 1 performs authentication (step S5).
  • In step S5, the terminal 1 executes the process shown in FIG. 6.
  • FIG. 6 is a flowchart showing the process of performing the authentication shown in FIG. 4.
  • the authentication unit 15 determines whether or not the first mark M1 is located at the end of the line of sight E detected in step S4 (step S51).
  • the first mark M1 is the mark M1 that is determined to be located at the end of the line of sight E in step S14, among the multiple marks M1 included in the authentication pattern P2 set in step S1.
  • the authentication unit 15 determines whether or not the first mark M1 is located at the end of the line of sight E by executing a process similar to that in step S14.
  • If it is determined that the first mark M1 is in the line of sight E (step S51: YES), the terminal 1 executes step S52. If it is determined that the first mark M1 is not in the line of sight E (step S51: NO), the terminal 1 executes step S51 again. The terminal 1 repeatedly executes step S51 for a predetermined time until it is determined that the first mark M1 is in the line of sight E. The authentication unit 15 detects the timing at which it is determined in step S51 that the first mark M1 is in the line of sight E as the gaze timing.
  • In step S52, the state detection unit 13 detects the state of the eye E1 when the mark M1 is in front of the line of sight E.
  • the state detection unit 13 detects the state of the eye E1 when the mark M1 is in front of the line of sight E by executing a process similar to that of step S15.
  • In step S52, the authentication unit 15 detects a non-gaze timing in step S53 described later, and then acquires the number of blinks and the timing of the blinks based on the detected blinking. More specifically, the authentication unit 15 first acquires a first number, which is the number of times the user blinked between the gaze timing detected in step S51 or step S55 (described later) and the non-gaze timing detected in step S53. Before executing step S55, the authentication unit 15 acquires the first number between the gaze timing detected in step S51 and the non-gaze timing detected in step S53. After executing step S55 once, the authentication unit 15 acquires the first number between the gaze timing detected in step S55 and the non-gaze timing detected in step S53. The authentication unit 15 acquires the acquired first number as the number of blinks.
  • In step S52, the authentication unit 15 also acquires a first blink time, which is the time at which the user blinked between the gaze timing detected in step S51 or step S55 and the non-gaze timing detected in step S53.
  • Before executing step S55, the authentication unit 15 acquires the first blink time between the gaze timing detected in step S51 and the non-gaze timing detected in step S53.
  • After executing step S55 once, the authentication unit 15 acquires the first blink time between the gaze timing detected in step S55 and the non-gaze timing detected in step S53.
  • The authentication unit 15 acquires the acquired first blink time as the blink timing.
  • Next, the authentication unit 15 determines whether the gaze E detected in step S4 has moved away from the mark M1 (step S53).
  • In step S53, the authentication unit 15 executes a process similar to that in step S16 to determine whether the gaze E has moved away from the mark M1.
  • If it is determined that the gaze E has not left the mark M1 (step S53: NO), the terminal 1 executes step S52 again. The terminal 1 repeatedly executes steps S52 and S53 for a predetermined time until it is determined that the gaze E has left the mark M1. If it is determined that the gaze E has left the mark M1 (step S53: YES), the terminal 1 executes step S54. The authentication unit 15 detects the timing at which it is determined in step S53 that the gaze E has left the mark M1 as a non-gaze timing.
  • When it is determined in step S53 that the gaze E has left the mark M1, it is considered that the user is moving the gaze E from one mark M1 to another mark M1. If the user operating the terminal 1 is an authenticated user, the authenticated user moves the gaze E from one mark M1 to another mark M1 in accordance with the authentication pattern P2 set in step S1.
  • In step S54, the state detection unit 13 detects the state of the eye E1 when the gaze E moves from one mark M1 to another mark M1. It does so by executing a process similar to that in step S17.
  • In step S54, the authentication unit 15 detects a gaze timing in step S55 described below, and then acquires the number of blinks and the blink timing based on the detected blinking. More specifically, the authentication unit 15 first acquires a second number, which is the number of times the user blinked between the non-gaze timing detected in step S53 and the gaze timing detected in step S55. The authentication unit 15 acquires the acquired second number as the number of blinks.
  • In step S54, the authentication unit 15 also acquires a second blink time, which is the time at which the user blinked between the non-gaze timing detected in step S53 and the gaze timing detected in step S55.
  • The authentication unit 15 acquires the acquired second blink time as the blink timing.
  • Next, the authentication unit 15 determines whether or not a mark M1 is located at the end of the line of sight E detected in step S4 (step S55).
  • In step S55, the authentication unit 15 executes a process similar to that in step S18 to determine whether or not a mark M1 is located at the end of the line of sight E.
  • If it is determined that there is no mark M1 at the end of the line of sight E (step S55: NO), the terminal 1 executes step S54 again. The terminal 1 repeatedly executes steps S54 and S55 for a predetermined time until it is determined that there is a mark M1 at the end of the line of sight E. If it is determined that there is a mark M1 at the end of the line of sight E (step S55: YES), the terminal 1 executes step S56. The authentication unit 15 detects the timing at which it is determined in step S55 that there is a mark M1 at the end of the line of sight E as the gaze timing.
  • In step S56, the terminal 1 determines whether or not a termination condition is satisfied.
  • the termination condition is that the number of times that gaze timing is detected in steps S51 and S55 reaches the above-mentioned preset number.
  • the termination condition is, for example, set in advance. The content of the termination condition can be changed as appropriate. If it is determined that the termination condition is not satisfied (step S56: NO), the terminal 1 executes step S52 again. If it is determined that the termination condition is satisfied (step S56: YES), the terminal 1 acquires the order of the marks M1 that are determined to be in the line of sight E in steps S51 and S55. The terminal 1 acquires the acquired order of the marks M1 as the input pattern P1. Then, the terminal 1 executes step S57.
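The loop of steps S51 through S56 can be pictured as a small state machine that alternates between a gazing state and a moving state, recording gaze timings, non-gaze timings, and blink counts along the way. The sketch below is illustrative only: the sample format (mark at the end of the gaze, or `None` while moving, plus a blink flag) and the function name are assumptions, not part of the patent.

```python
# Hypothetical sketch of the input-capture loop (steps S51-S56).
# Each sample is (mark, blinked): mark is the mark M1 at the end of the
# detected gaze E, or None while the gaze is between marks.

def capture_input_pattern(samples, pattern_length):
    """Return the gazed mark order (input pattern P1), the blink counts
    while gazing (first numbers, step S52), and the blink counts while
    moving between marks (second numbers, step S54)."""
    input_pattern = []   # order of marks M1 gazed at
    blinks_on_mark = []  # blinks counted between a gaze and a non-gaze timing
    blinks_between = []  # blinks counted between a non-gaze and a gaze timing
    gazing = False
    count = 0
    for mark, blinked in samples:
        if blinked:
            count += 1
        if not gazing and mark is not None:       # gaze timing (S51/S55)
            if input_pattern:                     # blinks made while moving
                blinks_between.append(count)
            input_pattern.append(mark)
            gazing, count = True, 0
        elif gazing and mark is None:             # non-gaze timing (S53)
            blinks_on_mark.append(count)
            gazing, count = False, 0
        if len(input_pattern) == pattern_length:  # termination condition (S56)
            break
    return input_pattern, blinks_on_mark, blinks_between
```

With a sample stream in which the user gazes at three marks and blinks once on the first mark and once while moving to the second, the function returns the mark order together with both blink-count lists.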
  • In step S57, the authentication unit 15 determines whether the user operating the terminal 1 is an authenticated user. In this embodiment, the authentication unit 15 determines whether each of the first to fifth conditions described below is satisfied. If it is determined that all of the first to fifth conditions are satisfied, the authentication unit 15 determines that the user operating the terminal 1 is an authenticated user. Otherwise, the authentication unit 15 determines that the user operating the terminal 1 is not an authenticated user.
  • the first condition is a condition related to the input pattern P1 input by the user.
  • the authentication unit 15 determines whether the input pattern P1 acquired in step S56 matches the authentication pattern P2 set in step S1. More specifically, the authentication unit 15 determines whether the order of the marks M1 determined to be in the line of sight E in steps S51 and S55 matches the order of the marks M1 indicated by the authentication pattern P2 stored in step S20. If it is determined that the input pattern P1 and the authentication pattern P2 match, the authentication unit 15 determines that the first condition is met. If it is determined that the input pattern P1 and the authentication pattern P2 do not match, the authentication unit 15 determines that the first condition is not met.
  • the second condition is a condition related to the features of the eye E1 when the mark M1 is in the line of sight E.
  • the authentication unit 15 calculates the match rate between the features of the eye E1 detected in step S52 and the features of the eye E1 indicated by the first authentication information stored in step S20.
  • The authentication unit 15 calculates, for example, the average of the match rate of the wrinkle state around the eyes, the match rate of the focal depth of the eye E1, and the match rate of the number of eyeballs, which are described below, as the match rate of the features of the eye E1.
  • the authentication unit 15 calculates the rate of agreement between the state of wrinkles around the eyes detected in step S52 and the state of wrinkles around the eyes indicated by the first authentication information.
  • the authentication unit 15 calculates the rate of agreement between the length, position, and thickness of the wrinkles detected in step S52 and the length, position, and thickness of the wrinkles indicated by the first authentication information.
  • The authentication unit 15 may calculate the rate of agreement between the states of wrinkles around the eyes using known means.
  • the authentication unit 15 calculates the matching rate between the focal depth of the eye E1 detected in step S52 and the focal depth of the eye E1 indicated by the first authentication information. For example, the authentication unit 15 may calculate the above matching rate as a normalized value obtained by multiplying the inverse of the difference between the focal depth of the eye E1 detected in step S52 and the focal depth of the eye E1 indicated by the first authentication information by a predetermined value.
  • the authentication unit 15 calculates the matching rate between the number of eyeballs detected in step S52 and the number of eyeballs indicated by the first authentication information. For example, if the number of eyeballs detected in step S52 matches the number of eyeballs indicated by the first authentication information, the authentication unit 15 may calculate the above matching rate as a predetermined first value (100% as an example). For example, if the number of eyeballs detected in step S52 does not match the number of eyeballs indicated by the first authentication information, the authentication unit 15 may calculate the above matching rate as a predetermined second value smaller than the first value (0% as an example).
  • the authentication unit 15 determines whether the calculated matching rate of the features related to the eye E1 is equal to or greater than a predetermined value. If it is determined that the matching rate of the features related to the eye E1 is equal to or greater than the predetermined value, the authentication unit 15 determines that the second condition is satisfied. If it is determined that the matching rate of the features related to the eye E1 is less than the predetermined value, the authentication unit 15 determines that the second condition is not satisfied.
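As a concrete illustration of the second condition, the per-feature match rates and their average can be sketched as follows. The helper names are hypothetical, and the normalization 1 / (1 + |difference|) is an assumed stand-in for the patent's "inverse of the difference multiplied by a predetermined value and normalized" (it avoids division by zero and yields 1.0 on an exact match).

```python
def rate_from_difference(measured, stored):
    """Match rate for a numeric feature (e.g. focal depth of the eye E1).
    Assumed normalization: exact match -> 1.0, larger differences -> lower."""
    return 1.0 / (1.0 + abs(measured - stored))

def rate_from_equality(measured, stored, first=1.0, second=0.0):
    """Match rate for a discrete feature (e.g. the number of eyeballs):
    a first value (100% as an example) on match, a smaller second value
    (0% as an example) otherwise."""
    return first if measured == stored else second

def second_condition(wrinkle_rate, measured_depth, stored_depth,
                     measured_eyes, stored_eyes, threshold=0.8):
    """Second condition: the average of the wrinkle, focal-depth, and
    eyeball-count match rates must reach a predetermined value
    (threshold is an illustrative placeholder)."""
    rates = (wrinkle_rate,
             rate_from_difference(measured_depth, stored_depth),
             rate_from_equality(measured_eyes, stored_eyes))
    return sum(rates) / len(rates) >= threshold
```

The fourth condition uses the same computation against the third authentication information instead of the first.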
  • The third condition is a condition related to the number of blinks and the timing of blinks when the mark M1 is at the end of the line of sight E.
  • the authentication unit 15 calculates the match rate between the number of blinks acquired in step S52 and the number of blinks indicated by the second authentication information stored in step S20. For example, the authentication unit 15 calculates the match rate of the number of blinks by normalizing the reciprocal of the difference between the first number acquired in step S52 and the first number indicated by the second authentication information by multiplying it by a predetermined value.
  • the authentication unit 15 calculates the coincidence rate between the blink timing acquired in step S52 and the blink timing indicated by the second authentication information stored in step S20. For example, the authentication unit 15 calculates the coincidence rate of the blink timing by multiplying the inverse of the difference between the first blink time acquired in step S52 and the first blink time indicated by the second authentication information by a predetermined value and normalizing the result.
  • the authentication unit 15 calculates the average value of the calculated match rate of the number of blinks and the calculated match rate of the blink timing. The authentication unit 15 determines whether the calculated average value is equal to or greater than a predetermined value. If it is determined that the average value is equal to or greater than the predetermined value, the authentication unit 15 determines that the third condition is satisfied. If it is determined that the average value is less than the predetermined value, the authentication unit 15 determines that the third condition is not satisfied.
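A hedged sketch of the third condition follows; the fifth condition has the same shape, applied to the second number and second blink time against the fourth authentication information. As above, 1 / (1 + |difference|) is an assumed normalization, and the function name and threshold are illustrative.

```python
def blink_condition(measured_count, stored_count,
                    measured_time, stored_time, threshold=0.8):
    """Third (and fifth) condition: average the blink-count match rate
    and the blink-timing match rate, then compare the average with a
    predetermined value."""
    count_rate = 1.0 / (1.0 + abs(measured_count - stored_count))
    timing_rate = 1.0 / (1.0 + abs(measured_time - stored_time))
    return (count_rate + timing_rate) / 2.0 >= threshold
```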
  • the fourth condition is a condition related to the features of the eye E1 when the gaze E moves from one mark M1 to another mark M1.
  • the authentication unit 15 calculates the match rate between the features of the eye E1 detected in step S54 and the features of the eye E1 indicated by the third authentication information stored in step S20.
  • the method by which the authentication unit 15 calculates the match rate is the same as the method used to determine whether the second condition is satisfied.
  • the authentication unit 15 determines whether the calculated matching rate is equal to or greater than a predetermined value. If it is determined that the matching rate is equal to or greater than the predetermined value, the authentication unit 15 determines that the fourth condition is met. If it is determined that the matching rate is less than the predetermined value, the authentication unit 15 determines that the fourth condition is not met.
  • the fifth condition is a condition related to the number of blinks and the timing of the blinks when the gaze E moves from one mark M1 to another mark M1.
  • the authentication unit 15 calculates the match rate between the number of blinks acquired in step S54 and the number of blinks indicated by the fourth authentication information stored in step S20. For example, the authentication unit 15 calculates the match rate of the number of blinks by normalizing the reciprocal of the difference between the second number acquired in step S54 and the second number indicated by the fourth authentication information by multiplying it by a predetermined value.
  • the authentication unit 15 calculates the coincidence rate between the blink timing acquired in step S54 and the blink timing indicated by the fourth authentication information stored in step S20. For example, the authentication unit 15 calculates the coincidence rate of the blink timing by multiplying the inverse of the difference between the second blink time acquired in step S54 and the second blink time indicated by the fourth authentication information by a predetermined value and normalizing the result.
  • the authentication unit 15 calculates the average value of the calculated match rate of the number of blinks and the match rate of the blink timing. The authentication unit 15 determines whether the calculated average value is equal to or greater than a predetermined value. If it is determined that the average value is equal to or greater than the predetermined value, the authentication unit 15 determines that the fifth condition is satisfied. If it is determined that the average value is less than the predetermined value, the authentication unit 15 determines that the fifth condition is not satisfied.
  • the criteria for determining that the user operating the terminal 1 is an authenticated user can also be changed as appropriate. For example, if it is determined that at least one of the first condition, the second condition, the third condition, the fourth condition, and the fifth condition is satisfied, the authentication unit 15 may determine that the user operating the terminal 1 is an authenticated user.
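The decision rule of step S57, together with the relaxed variant just described, can be expressed compactly. The `require_all` flag is an illustrative parameter, not the patent's terminology.

```python
def is_authenticated(conditions, require_all=True):
    """Step S57: with require_all=True the user is accepted only when
    the first through fifth conditions all hold; with require_all=False
    the relaxed criterion (at least one condition) is applied instead."""
    return all(conditions) if require_all else any(conditions)
```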
  • If it is determined that the user operating the terminal 1 is an authenticated user (step S57: YES), the authentication unit 15 unlocks the terminal 1 (step S58). If it is determined that the user operating the terminal 1 is not an authenticated user (step S57: NO), the authentication unit 15 does not unlock the terminal 1 (step S59). In step S59, the display unit 14 may display a message prompting the user to retry authentication. After the above processing, the terminal 1 ends step S5.
  • Next, the terminal 1 judges whether or not the terminal 1 has been unlocked (step S6).
  • If it is determined that the terminal 1 has been unlocked (step S6: YES), the terminal 1 executes step S7.
  • If it is determined that the terminal 1 has not been unlocked (step S6: NO), the terminal 1 ends the series of operations.
  • In step S7, the terminal 1 performs learning.
  • In step S7, the terminal 1 updates, for example, the criteria for judging whether or not the user operating the terminal 1 is an authenticated user.
  • In step S7, the terminal 1 replaces the state of wrinkles around the eyes indicated by the first authentication information with the state of wrinkles around the eyes detected in step S52 and stores it.
  • In step S7, the terminal 1 calculates the average value of the focal depth of the eye E1 indicated by the first authentication information and the focal depth of the eye E1 detected in step S52.
  • The terminal 1 replaces the focal depth of the eye E1 indicated by the first authentication information with the calculated average value and stores it.
  • In step S7, the terminal 1 replaces the number of eyeballs indicated by the first authentication information with the number of eyeballs detected in step S52 and stores it.
  • In step S7, the terminal 1 calculates the average value of the number of blinks indicated by the second authentication information and the number of blinks acquired in step S52. The terminal 1 replaces the number of blinks indicated by the second authentication information with the calculated average value of the number of blinks and stores it. In step S7, the terminal 1 also calculates the average value of the blink timing indicated by the second authentication information and the blink timing acquired in step S52. The terminal 1 replaces the blink timing indicated by the second authentication information with the calculated average value of the blink timing and stores it.
  • In step S7, the terminal 1 replaces the state of wrinkles around the eyes indicated by the third authentication information with the state of wrinkles around the eyes detected in step S54 and stores it.
  • In step S7, the terminal 1 calculates the average value of the focal depth of the eye E1 indicated by the third authentication information and the focal depth of the eye E1 detected in step S54.
  • The terminal 1 replaces the focal depth of the eye E1 indicated by the third authentication information with the calculated average value and stores it.
  • In step S7, the terminal 1 replaces the number of eyeballs indicated by the third authentication information with the number of eyeballs detected in step S54 and stores it.
  • In step S7, the terminal 1 calculates the average value of the number of blinks indicated by the fourth authentication information and the number of blinks acquired in step S54. The terminal 1 replaces the number of blinks indicated by the fourth authentication information with the calculated average value of the number of blinks and stores it. In step S7, the terminal 1 also calculates the average value of the blink timing indicated by the fourth authentication information and the blink timing acquired in step S54. The terminal 1 replaces the blink timing indicated by the fourth authentication information with the calculated average value of the blink timing and stores it.
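The update rules of step S7 can be sketched as follows: categorical features (wrinkle state, number of eyeballs) are overwritten with the latest observation, while numeric features (focal depth, number of blinks, blink timing) are replaced by the average of the stored and observed values. The dictionary layout and key names are assumptions for illustration.

```python
def update_authentication_info(stored, observed):
    """Step S7 learning sketch: overwrite categorical features, average
    numeric features with the newly observed values."""
    updated = dict(stored)
    updated["wrinkles"] = observed["wrinkles"]            # replaced outright
    updated["eyeballs"] = observed["eyeballs"]            # replaced outright
    for key in ("focal_depth", "blink_count", "blink_timing"):
        updated[key] = (stored[key] + observed[key]) / 2  # running average
    return updated
```

Averaging rather than replacing the numeric features lets the stored template drift gradually as the authenticated user's eye E1 changes over repeated unlocks.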
  • the condition of the user's eye E1 may change due to, for example, aging of the user.
  • the terminal 1 updates the criteria for determining whether the user operating the terminal 1 is an authenticated user. Since the authentication criteria are changed according to the condition of the eye E1 of the authenticated user who unlocked the terminal 1, the accuracy of the authentication can be maintained even if the condition of the user's eye E1 changes as authentication is repeated.
  • the terminal 1 includes an eye detection unit 11 that detects the user's eye E1, a gaze detection unit 12 that detects the user's gaze E based on the detection result of the eye detection unit 11, a state detection unit 13 that detects the state of the user's eye E1 based on the detection result of the eye detection unit 11, a display unit 14 that displays an authentication pattern M including a mark M1 for guiding the gaze E for authentication, and an authentication unit 15 that performs authentication based on the state of the eye E1 when the mark M1 is at the end of the gaze E.
  • authentication is performed based on the state of the eye E1 when the user is looking at the mark M1.
  • the state of the eye E1 is a characteristic of the eye E1 that is unique to each person. Therefore, even if the pre-set authentication pattern P2 is known to someone other than the authenticated user, it is possible to prevent authentication of the terminal 1 by anyone other than the authenticated user based on the state of the eye E1. As a result, the security of authentication can be improved.
  • the authentication pattern M may include multiple marks M1, and the authentication unit 15 may perform authentication based on the state of the eye E1 when the gaze E moves from one mark M1 to another mark M1.
  • authentication is performed based on the state of the eye E1 when the user removes the gaze E from the mark M1 and moves the gaze E toward the other mark M1. Since authentication is performed taking into account not only the state of the eye E1 when the user is looking at the mark M1, but also the state of the eye E1 when the user is moving the gaze E, the accuracy of authentication can be improved. Therefore, the security of authentication can be further improved.
  • the state detection unit 13 may detect blinking as the state of the eye E1. For example, the frequency of blinking and the timing of blinking may reflect a person's constitution and habits. Since blinking can be detected as the state of the eye E1 and authentication can be performed based on the detected blinking, the accuracy of authentication can be further improved. Therefore, the safety of authentication can be further improved.
  • the authentication unit 15 may obtain at least one of the number of blinks and the timing of blinks based on the detected blinks, and perform authentication based on at least one of the obtained number of blinks and the timing of blinks.
  • the number of blinks and the timing of blinks may reflect a person's constitution and habits. Since authentication can be performed based on at least one of the number of blinks and the timing of blinks, the accuracy of authentication can be further improved. Therefore, the safety of authentication can be further improved.
  • the state detection unit 13 may detect characteristics related to the eye E1 as the state of the eye E1. In this case, authentication can be performed based on the characteristics related to the eye E1, which are the characteristics of the eye E1 itself, thereby further improving the security of the authentication.
  • the state detection unit 13 may detect at least one of the state of wrinkles around the eyes, the focal depth of the eye E1, and the number of eyeballs as features related to the eye E1.
  • the state of wrinkles around the eyes, the focal depth of the eye E1, and the number of eyeballs are each unique features for each person. Therefore, authentication can be performed based on these features, which further improves the security of the authentication.
  • The authentication unit 15 may perform authentication based on a preset authentication pattern P2, which is the order of the marks M1 to which the line of sight E is moved.
  • Authentication can be performed using the preset authentication pattern P2 in addition to the state of the user's eye E1. For example, the authentication unit 15 can determine whether the input pattern P1 input by the user matches the authentication pattern P2, and perform authentication based on the determination result. Therefore, even if a person whose eye E1 is in a state similar to that of the authenticated user attempts authentication on the terminal 1, the terminal 1 can be prevented from being unlocked if the input pattern P1 does not match the authentication pattern P2. As a result, the security of authentication can be further improved.
  • In step S7, the terminal 1 may, for example, replace the focal depth of the eye E1 indicated by the first authentication information with the focal depth of the eye E1 detected in step S52 and store it.
  • the terminal 1 may also execute a similar process for the third authentication information.
  • In step S7, the terminal 1 may, for example, replace the number of blinks indicated by the second authentication information with the number of blinks acquired in step S52 and store it.
  • the terminal 1 may, for example, replace the timing of the blinks indicated by the second authentication information with the timing of the blinks acquired in step S52 and store it.
  • the terminal 1 may also execute a similar process for the fourth authentication information.
  • the state detection unit 13 detects blinking and features related to the eye E1 as the state of the eye E1.
  • the state detection unit 13 only needs to detect features of the eye E1 that are unique to each person as the state of the eye E1, and may detect features other than blinking and features related to the eye E1 as the state of the eye E1.
  • the state detection unit 13 detects the state of wrinkles around the eyes, the focal depth of eye E1, and the number of eyeballs as features related to eye E1.
  • the state detection unit 13 may detect features other than the state of wrinkles around the eyes, the focal depth of eye E1, and the number of eyeballs as features related to eye E1.
  • the state detection unit 13 may detect the size or color of the iris of eye E1 as a feature related to eye E1.
  • the authentication device disclosed herein has the following configuration:
  • [1] An authentication device comprising: an eye detection unit that detects an eye of a user; a gaze detection unit that detects a gaze of the user based on a detection result of the eye detection unit; a state detection unit that detects a state of the eye of the user based on the detection result of the eye detection unit; a display unit that displays an authentication pattern including a mark for guiding the gaze for authentication; and an authentication unit that performs authentication based on the state of the eye when the mark is located at the end of the line of sight.
  • [2] The authentication device according to [1], wherein the authentication pattern includes a plurality of the marks, and the authentication unit performs the authentication based on a state of the eye when the gaze moves from one of the marks to another of the marks.
  • [3] The authentication device according to [1] or [2], wherein the state detection unit detects blinking as the state of the eye.
  • [4] The authentication device according to [3], wherein the authentication unit obtains at least one of the number of blinks and the timing of the blinks based on the detected blinking, and performs the authentication based on at least one of the obtained number of blinks and the obtained timing of the blinks.
  • [5] The authentication device according to any one of [1] to [4], wherein the state detection unit detects a feature related to the eye as the state of the eye.
  • [6] The authentication device according to [5], wherein the state detection unit detects at least one of a state of wrinkles around the eye, a focal depth of the eye, and a number of eyeballs as the feature related to the eye.
  • [7] The authentication device according to any one of [2] to [6], wherein the authentication unit performs the authentication based on an authentication pattern that is set in advance and that is an order of the marks to which the gaze is moved.
  • each functional block may be realized using one device that is physically or logically coupled, or may be realized using two or more devices that are physically or logically separated and connected directly or indirectly (for example, using wires, wirelessly, etc.) and these multiple devices.
  • the functional blocks may be realized by combining the one device or the multiple devices with software.
  • Functions include, but are not limited to, judging, deciding, determining, calculating, computing, processing, deriving, investigating, searching, confirming, receiving, transmitting, outputting, accessing, resolving, selecting, choosing, establishing, comparing, assuming, expecting, regarding, broadcasting, notifying, communicating, forwarding, configuring, reconfiguring, allocating, mapping, and assigning.
  • a functional block (component) that performs the transmission function is called a transmitting unit or transmitter.
  • terminal 1 in one embodiment of the present disclosure may function as a computer that performs the information processing of the present disclosure.
  • FIG. 7 is a diagram showing an example of the hardware configuration of terminal 1 according to one embodiment of the present disclosure.
  • the above-mentioned terminal 1 may be physically configured as a computer device including a processor 1001, memory 1002, storage 1003, communication device 1004, input device 1005, output device 1006, bus 1007, etc.
  • the hardware configuration of terminal 20 may also be as described here.
  • terminal 1 can be interpreted as a circuit, device, unit, etc.
  • the hardware configuration of terminal 1 may be configured to include one or more of the devices shown in the figure, or may be configured to exclude some of the devices.
  • Each function of the terminal 1 is realized by loading specific software (a program) onto hardware such as the processor 1001 and the memory 1002, and by having the processor 1001 perform calculations, control communications via the communication device 1004, and control at least one of the reading and writing of data in the memory 1002 and the storage 1003.
  • The processor 1001, for example, runs an operating system to control the entire computer.
  • the processor 1001 may be configured as a central processing unit (CPU) including an interface with peripheral devices, a control device, an arithmetic unit, registers, etc.
  • each function in the terminal 1 described above may be realized by the processor 1001.
  • the processor 1001 also reads out programs (program codes), software modules, data, etc. from at least one of the storage 1003 and the communication device 1004 into the memory 1002, and executes various processes according to these.
  • the programs used are those that cause a computer to execute at least some of the operations described in the above-mentioned embodiments.
  • each function in the terminal 1 may be realized by a control program stored in the memory 1002 and running on the processor 1001.
  • Although the above-mentioned various processes have been described as being executed by one processor 1001, they may be executed simultaneously or sequentially by two or more processors 1001.
  • the processor 1001 may be implemented by one or more chips.
  • the programs may be transmitted from a network via a telecommunications line.
  • Memory 1002 is a computer-readable recording medium, and may be composed of at least one of, for example, ROM (Read Only Memory), EPROM (Erasable Programmable ROM), EEPROM (Electrically Erasable Programmable ROM), RAM (Random Access Memory), etc.
  • Memory 1002 may also be called a register, cache, main memory (primary storage device), etc.
  • Memory 1002 can store executable programs (program codes), software modules, etc., for performing information processing related to one embodiment of the present disclosure.
  • Storage 1003 is a computer-readable recording medium, and may be composed of at least one of an optical disk such as a CD-ROM (Compact Disc ROM), a hard disk drive, a flexible disk, a magneto-optical disk (e.g., a compact disk, a digital versatile disk, a Blu-ray (registered trademark) disk), a smart card, a flash memory (e.g., a card, a stick, a key drive), a floppy (registered trademark) disk, a magnetic strip, etc.
  • Storage 1003 may also be called an auxiliary storage device.
  • the storage medium provided in terminal 1 may be, for example, a database, a server, or other appropriate medium that includes at least one of memory 1002 and storage 1003.
  • the communication device 1004 is hardware (transmitting/receiving device) for communicating between computers via at least one of a wired network and a wireless network, and is also called, for example, a network device, a network controller, a network card, a communication module, etc.
  • the input device 1005 is an input device (e.g., a keyboard, a mouse, a microphone, a switch, a button, a sensor, etc.) that accepts input from the outside.
  • the output device 1006 is an output device (e.g., a display, a speaker, an LED lamp, etc.) that performs output to the outside. Note that the input device 1005 and the output device 1006 may be integrated into one structure (e.g., a touch panel).
  • each device such as the processor 1001 and memory 1002 is connected by a bus 1007 for communicating information.
  • the bus 1007 may be configured using a single bus, or may be configured using different buses between each device.
  • the terminal 1 may be configured to include hardware such as a microprocessor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a programmable logic device (PLD), or a field programmable gate array (FPGA), and some or all of the functional blocks may be realized by the hardware.
  • the processor 1001 may be implemented using at least one of these pieces of hardware.
  • the input and output information may be stored in a specific location (e.g., memory) or may be managed using a management table.
  • the input and output information may be overwritten, updated, or added to.
  • the output information may be deleted.
  • the input information may be sent to another device.
  • the determination may be based on a value represented by one bit (0 or 1), a Boolean value (true or false), or a numerical comparison (e.g., a comparison with a predetermined value).
  • notification of specific information is not limited to being done explicitly, but may be done implicitly (e.g., not notifying the specific information).
  • Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executable files, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
  • Software, instructions, information, etc. may also be transmitted and received via a transmission medium.
  • for example, if the software is transmitted from a website, server, or other remote source using at least one of wired technologies (such as coaxial cable, fiber optic cable, twisted pair, and Digital Subscriber Line (DSL)) and wireless technologies (such as infrared and microwave), then at least one of these wired and wireless technologies is included within the definition of a transmission medium.
  • the terms "system" and "network" are used interchangeably.
  • information, parameters, etc. described in this disclosure may be expressed using absolute values, may be expressed using relative values from a predetermined value, or may be expressed using other corresponding information.
  • "judging" and "determining" may encompass a wide variety of actions.
  • "judging" and "determining" may include, for example, considering judging, calculating, computing, processing, deriving, investigating, looking up, searching, or inquiring (e.g., searching in a table, database, or other data structure), and ascertaining to be "judging" or "determining."
  • "judging" and "determining" may also include considering receiving (e.g., receiving information), transmitting (e.g., transmitting information), input, output, and accessing (e.g., accessing data in memory) to be "judging" or "determining."
  • "judging" and "determining" may further include considering resolving, selecting, choosing, establishing, comparing, etc. to be "judging" or "determining." In other words, "judging" and "determining" may include considering some action to be "judging" or "determining." In addition, "judging (determining)" may be read as "assuming," "expecting," "considering," etc.
  • connection refers to any direct or indirect connection or coupling between two or more elements, and may include the presence of one or more intermediate elements between two elements that are “connected” or “coupled” to each other.
  • the coupling or connection between elements may be physical, logical, or a combination thereof.
  • the term "connected" may also be read as "access."
  • two elements may be considered to be “connected” or “coupled” to each other using at least one of one or more wires, cables, and printed electrical connections, as well as electromagnetic energy having wavelengths in the radio frequency range, microwave range, and optical (both visible and invisible) range, as some non-limiting and non-exhaustive examples.
  • the phrase “based on” does not mean “based only on,” unless expressly stated otherwise. In other words, the phrase “based on” means both “based only on” and “based at least on.”
  • any reference to an element using a designation such as "first,” “second,” etc., used in this disclosure does not generally limit the quantity or order of those elements. These designations may be used in this disclosure as a convenient method of distinguishing between two or more elements. Thus, a reference to a first and a second element does not imply that only two elements may be employed or that the first element must precede the second element in some way.
  • the phrase "A and B are different" may mean "A and B are different from each other."
  • the term may also mean “A and B are each different from C.”
  • Terms such as “separate” and “combined” may also be interpreted in the same way as “different.”
  • 1... terminal (authentication device), 11... eye detection unit, 12... gaze detection unit, 13... state detection unit, 14... display unit, 15... authentication unit, E... gaze, E1... eye, M... authentication pattern, M1... mark.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Eye Examination Apparatus (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

A terminal 1 (authentication device) comprises: an eye detection unit 11 that detects an eye E1 of a user; an eye gaze detection unit 12 that detects an eye gaze E of the user on the basis of the detection result of the eye detection unit 11; a state detection unit 13 that detects a state of the eye E1 of the user on the basis of the detection result of the eye detection unit 11; a display unit 14 that displays an authentication pattern M including a mark M1 for guiding the eye gaze E for authentication; and an authentication unit 15 that performs authentication on the basis of the state of the eye E1 at a time when the mark M1 is ahead in the direction of the eye gaze E. Thus, the safety of authentication can be improved.

Description

Authentication Device

This disclosure relates to an authentication device.

The information processing device described in Patent Document 1 is known as a technology for authenticating a user. This device acquires the user's electrooculography and releases its lock state when it determines that the acquired electrooculography satisfies a predetermined condition. The device identifies the numbers the user is looking at on a passcode entry screen based on changes in gaze direction derived from the detected electrooculography. If the sequence of numbers the user looks at matches a preset authentication passcode, the device releases its lock state.

JP 2015-203957 A

The device described in Patent Document 1 releases its lock state based on the sequence of numbers the user looks at on the passcode input screen. However, if the preset authentication passcode becomes known to someone other than the user, authentication may be performed by that other person. There is therefore a demand to improve the security of authentication.

In order to solve the above problem, the present disclosure aims to provide an authentication device that can improve the security of authentication.

The information processing device according to the present disclosure includes an eye detection unit that detects the user's eye, a gaze detection unit that detects the user's gaze based on the detection result of the eye detection unit, a state detection unit that detects the state of the user's eye based on the detection result of the eye detection unit, a display unit that displays an authentication pattern including a mark for guiding the gaze for authentication, and an authentication unit that performs authentication based on the state of the eye when the mark is at the point of the gaze.

This disclosure makes it possible to improve the security of authentication.

FIG. 1 is a diagram for explaining a problem to be solved by the present disclosure. FIG. 2(a) is a diagram showing an example of an input pattern and an authentication pattern, and FIG. 2(b) is a diagram showing another example of an input pattern and an authentication pattern. FIG. 3 is a block diagram illustrating a terminal according to an embodiment. FIG. 4 is a flowchart illustrating an operation of a terminal according to an embodiment. FIG. 5 is a flowchart showing the process for setting the authentication information shown in FIG. 4. FIG. 6 is a flowchart showing the process for performing the authentication shown in FIG. 4. FIG. 7 is a diagram illustrating a hardware configuration of a terminal according to an embodiment.

Embodiments of the present disclosure will now be described with reference to the attached drawings. Where possible, identical parts are designated by the same reference numerals, and duplicate explanations are omitted.

FIG. 1 is a diagram for explaining the problem addressed by the terminal 1 (authentication device) according to this embodiment. In this embodiment, the terminal 1 is a mobile terminal such as a smartphone or a tablet terminal, that is, a terminal that can be carried by a user. The terminal 1 may instead be a non-portable terminal such as a desktop computer.

The terminal 1 performs authentication. "Performing authentication" means determining whether or not the user operating the terminal 1 is a preset user (hereinafter sometimes referred to as the "authenticated user"). If it is determined that the user operating the terminal 1 is the authenticated user, the terminal 1 unlocks itself. If it is determined that the user operating the terminal 1 is not the authenticated user, the terminal 1 does not unlock itself.

The terminal 1 includes a display unit 14 (described later) that displays an authentication pattern M. The display unit 14 displays a preset authentication pattern M; in this embodiment, it displays the authentication pattern M on the display of the terminal 1. The authentication pattern M is an image for performing authentication on the terminal 1, and includes a mark M1 for guiding the user's gaze for authentication. In this embodiment, the mark M1 is a single dot; the type of the mark M1 can be changed as appropriate.

In this embodiment, the authentication pattern M includes multiple marks M1. In the authentication pattern M, the marks M1 are arranged in a grid of three rows of three marks each; that is, the authentication pattern M includes nine marks M1. The positions of the marks M1 are, for example, set in advance. The contents of the authentication pattern M can be changed as appropriate.
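To make the layout concrete, the 3-by-3 arrangement of marks M1 described above can be sketched as a list of screen coordinates. The coordinate system and the spacing value below are illustrative assumptions, not values given in the disclosure:

```python
def make_grid(spacing=100):
    """Return the positions of the nine marks M1 as (x, y) pairs,
    three marks per row in three rows.  The spacing between adjacent
    marks is an assumed value in pixels."""
    return [(col * spacing, row * spacing)
            for row in range(3) for col in range(3)]

grid = make_grid()  # nine positions, from (0, 0) at top-left to (200, 200)
```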

An authentication pattern for performing authentication is preset in the terminal 1. In this embodiment, the authentication pattern is the order of the marks M1 over which the user moves the point of the gaze E. The authentication pattern is set, for example, by the user. The number of marks M1 over which the point of the gaze E is moved in the authentication pattern (hereinafter sometimes referred to as the "prescribed number") is set in advance. The "gaze" is the direction in which the user is looking, and the "point of the gaze" is the position in the authentication pattern M at which the user is looking.

When a user authenticates on the terminal 1, the user moves his or her gaze E in accordance with the preset authentication pattern. In other words, the user moves the point of the gaze E over the marks M1 in the preset order.

For example, the user positions the point of the gaze E on one mark M1, gazes at that mark M1, and then moves the point of the gaze E away from it. After moving the point of the gaze E away from the one mark M1 and toward another mark M1, the user positions the point of the gaze E on the other mark M1 and gazes at it. By repeating these actions, the user authenticates on the terminal 1.

The terminal 1 detects the user's eye E1. The eye E1 includes the user's eyeball and the area surrounding the eyeball; for example, it includes the user's eyelid. The terminal 1 detects the user's gaze E based on the detection result for the eye E1. Based on the detected gaze E, the terminal 1 acquires the order of the marks M1 over which the user moved the point of the gaze E (hereinafter sometimes referred to as the "input pattern"). The terminal 1 determines whether or not the acquired input pattern matches the preset authentication pattern; in other words, whether the order of the marks M1 over which the user moved the point of the gaze E matches the preset order.
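The comparison between the acquired input pattern and the preset authentication pattern reduces to an ordered sequence match. A minimal sketch follows, in which marks are identified by assumed indices into a 3-by-3 grid; this numbering and the example patterns are illustrative assumptions, not part of the disclosure:

```python
def patterns_match(input_pattern, auth_pattern):
    """Return True only when the user's gaze visited the marks M1 in
    exactly the preset order (same marks, same order, same count)."""
    return list(input_pattern) == list(auth_pattern)

# Marks indexed 0-8 across an assumed 3x3 grid.
auth_p2 = [0, 4, 8, 6]   # preset authentication pattern P2
input_a = [0, 1, 2, 5]   # a non-matching input pattern, as in Fig. 2(a)
input_b = [0, 4, 8, 6]   # a matching input pattern, as in Fig. 2(b)
```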

FIG. 2(a) is a diagram showing an example of an input pattern P1 and an authentication pattern P2, and FIG. 2(b) is a diagram showing another example. In FIGS. 2(a) and 2(b), the input pattern P1 is indicated by solid arrows and the authentication pattern P2 by dashed arrows.

FIG. 2(a) shows an example in which the input pattern P1 does not match the authentication pattern P2. If the user operating the terminal 1 is not the authenticated user, the input pattern P1 entered by that user is unlikely to match the authentication pattern P2. When it is determined that the input pattern P1 does not match the authentication pattern P2, the terminal 1 determines that the operating user is not the authenticated user, and does not unlock itself.

FIG. 2(b) shows an example in which the input pattern P1 matches the authentication pattern P2. If the user operating the terminal 1 is the authenticated user, that user can enter an input pattern P1 identical to the authentication pattern P2. When it is determined that the input pattern P1 matches the authentication pattern P2, the terminal 1 determines that the operating user is the authenticated user, and unlocks itself.

As described above, when authentication is performed by determining whether the input pattern P1 matches the authentication pattern P2, there is room for improvement in the security of the authentication. For example, if the preset authentication pattern P2 becomes known to someone other than the authenticated user, the terminal 1 may be unlocked by that person. To solve this problem, the terminal 1 performs authentication based on the state of the eye E1 while the user is looking at the authentication pattern M. As a result, even if the authentication pattern P2 becomes known to someone other than the authenticated user, authentication on the terminal 1 by that person can be prevented based on the state of the eye E1. The configuration and operation of the terminal 1 are described in detail below.

FIG. 3 is a block diagram showing the terminal 1 according to an embodiment. The terminal 1 includes an eye detection unit 11, a gaze detection unit 12, a state detection unit 13, a display unit 14, and an authentication unit 15.

The eye detection unit 11 detects the user's eye E1, for example via a camera mounted on the terminal 1. The eye detection unit 11 acquires video via the camera and may detect the eye E1, for example, by analyzing the acquired video with AI. The eye detection unit 11 may use known means to detect the user's eye E1.

The gaze detection unit 12 detects the user's gaze E based on the detection result of the eye detection unit 11. For example, the gaze detection unit 12 acquires the position of the pupil of the eye E1 detected by the eye detection unit 11, detects the direction in which the pupil points based on the acquired pupil position, and detects that direction as the gaze E. The gaze detection unit 12 may use known means to detect the user's gaze E.

The state detection unit 13 detects the state of the user's eye E1 based on the detection result of the eye detection unit 11. The state of the eye E1 is a characteristic of the eye E1 that is specific to each person. The "specific characteristic" mentioned here is not necessarily one that strictly differs from person to person; it also includes characteristics that a certain number of people share. The state detection unit 13 detects at least one of blinking and a characteristic of the eye E1 as the state of the eye E1. In this embodiment, both blinking and a characteristic of the eye E1 are detected as the state of the eye E1.

The state detection unit 13 detects the position of the eyelid from the eye E1 detected by the eye detection unit 11, and determines whether the detected eyelid position has moved by a predetermined distance or more. If it is determined that the eyelid position has moved by the predetermined distance or more, the state detection unit 13 determines that the user has blinked, and detects the blink. The state detection unit 13 may use known means to detect the user's blink.
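A minimal sketch of the eyelid-displacement test just described, assuming a per-frame sequence of eyelid positions in pixels; the threshold value is an illustrative assumption, not a value given in the disclosure:

```python
def detect_blinks(eyelid_positions, threshold=5.0):
    """Return the indices of frames at which the eyelid position moved
    by at least `threshold` relative to the previous frame.  Note that
    a full blink (close, then open) produces two such movements."""
    return [i for i in range(1, len(eyelid_positions))
            if abs(eyelid_positions[i] - eyelid_positions[i - 1]) >= threshold]
```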

A characteristic of the eye E1 means, for example, a feature of the eye E1 itself. The state detection unit 13 detects, as characteristics of the eye E1, at least one of the state of the wrinkles around the eye, the focal depth of the eye E1, and the number of eyeballs. In this embodiment, the state detection unit 13 detects all three.

The state detection unit 13 detects the position of the eyeball from the eye E1 detected by the eye detection unit 11. The eyeball here is a real eyeball and does not include, for example, a prosthetic eye. The state detection unit 13, for example, detects the area within a predetermined distance of the detected eyeball position as the eye area, and detects the length, position, and thickness of the wrinkles formed in that area as the state of the wrinkles around the eye. Known means may be used for this detection.

The state detection unit 13 detects the focal depth of the eye E1 from the eye E1 detected by the eye detection unit 11. Specifically, the state detection unit 13 first acquires, for example, the distance from the terminal 1 to the user's eye E1; it may analyze an image acquired via a camera mounted on the terminal 1 and acquire from the analyzed image the distance to the eye E1 contained in it. The state detection unit 13 then detects the focal depth of the eye E1 based on the acquired distance. Known means may be used for this detection.

Some users may focus the eye E1 by squinting. The state detection unit 13 may therefore detect the focal depth of the eye E1 based on the size of the eye E1, which is, for example, the distance from the user's upper eyelid to the lower eyelid. The state detection unit 13 may acquire the size of the eye E1 based on, for example, the acquired distance from the terminal 1 to the eye E1 and an image acquired via a camera mounted on the terminal 1.

The state detection unit 13 detects the number of eyeballs from the eye E1 detected by the eye detection unit 11. For example, if the user has one eye E1 and a prosthetic eye, the state detection unit 13 detects the number of eyeballs as one.

The authentication unit 15 performs authentication based on the state of the eye E1 detected by the state detection unit 13 when the mark M1 is at the point of the gaze E detected by the gaze detection unit 12. The processing of the authentication unit 15 when the mark M1 is at the point of the gaze E is described below.

Based on the blinks detected by the state detection unit 13, the authentication unit 15 acquires at least one of the number of blinks and the timing of the blinks. In this embodiment, the authentication unit 15 acquires both.

Based on the gaze E detected by the gaze detection unit 12, the authentication unit 15 detects the gaze timing, which is the time at which the mark M1 is at the point of the gaze E, and the non-gaze timing, which is the time at which the point of the gaze E leaves the mark M1. The authentication unit 15 acquires, as the number of blinks, a first count, which is the number of times the user blinked between the detected gaze timing and the non-gaze timing.

Based on the blinks detected by the state detection unit 13, the authentication unit 15 acquires, for the interval between the detected gaze timing and the non-gaze timing, a first blink time, which is the time from the gaze timing to the moment the user blinked. The moment of a blink means, for example, the moment the user closed the eyelids from an open state. If the user blinked more than once between the gaze timing and the non-gaze timing, the authentication unit 15 acquires a first blink time for each blink. The authentication unit 15 acquires the acquired first blink times as the timing of the blinks.
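The first count and the first blink times described above can be computed from the detected blink moments and the gaze interval. A sketch under the assumption that all timings are available as timestamps in seconds (the values are illustrative):

```python
def blink_features(blink_times, gaze_timing, non_gaze_timing):
    """Return the number of blinks between the gaze timing and the
    non-gaze timing (the first count), together with each blink's
    offset from the gaze timing (the first blink times)."""
    in_interval = [t for t in blink_times
                   if gaze_timing <= t <= non_gaze_timing]
    first_blink_times = [t - gaze_timing for t in in_interval]
    return len(in_interval), first_blink_times
```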

The frequency of blinking and the moments at which blinks occur can reflect, for example, a person's constitution and habits. The number of blinks and the timing of the blinks can therefore be said to be characteristics of the eye E1 that are specific to each person.

In this embodiment, the authentication unit 15 also performs authentication based on the state of the eye E1 while the point of the gaze E moves from one mark M1 to another mark M1. The processing of the authentication unit 15 in this case is described below.

The authentication unit 15 acquires, as the number of blinks, a second count, which is the number of times the user blinked between the detected non-gaze timing of one mark M1 and the gaze timing of the other mark M1. For this interval, the authentication unit 15 acquires, as the timing of the blinks, a second blink time, which is the time from the non-gaze timing of the one mark M1 to the moment the user blinked. If the user blinked more than once in this interval, the authentication unit 15 acquires a second blink time for each blink, and acquires the acquired second blink times as the timing of the blinks.

Next, the operation of the terminal 1 will be described. FIG. 4 is a flowchart showing the operation of the terminal 1 according to an embodiment. First, the terminal 1 sets authentication information (step S1). The authentication information is used to authenticate on the terminal 1 in step S5, described later. The terminal 1 executes step S1, for example, when it first functions as an authentication device; after step S1 has been executed once, the terminal 1 may omit it on subsequent occasions.

In step S1, the terminal 1 executes the process shown in FIG. 5. FIG. 5 is a flowchart showing the process of setting the authentication information shown in FIG. 4. First, the display unit 14 displays the authentication pattern M (step S11). The authentication pattern M is, for example, set in advance. In step S11, the display unit 14 displays the authentication pattern M on, for example, the display of the terminal 1.

Next, the eye detection unit 11 detects the user's eye E1 (step S12). In step S12, the eye detection unit 11 detects the user's eye E1 via, for example, a camera mounted on the terminal 1.

Next, the gaze detection unit 12 detects the user's gaze E based on the detection result of step S12 (step S13). In step S13, the gaze detection unit 12 detects, for example, the position of the pupil of the eye E1 detected in step S12. The gaze detection unit 12 then detects, based on the detected pupil position, the direction in which the pupil points, and detects that direction as the user's gaze E.

Next, the authentication unit 15 determines whether the mark M1 is at the point of the gaze E detected in step S13 (step S14). Specifically, the authentication unit 15 first acquires, for example, the distance from the terminal 1 to the user's eye E1. The authentication unit 15 may, for example, analyze an image acquired via a camera mounted on the terminal 1 and acquire from the analyzed image the distance to the eye E1 contained in the image. The authentication unit 15 may acquire the distance from the terminal 1 to the user's eye E1 by any known means. Next, the authentication unit 15 calculates the position of the point of the gaze E based on the acquired distance and the gaze E detected in step S13. The terminal 1 then determines whether the calculated position of the point of the gaze E is a predetermined distance or more away from the preset position of the mark M1. If the position of the point of the gaze E is determined not to be the predetermined distance or more away from the position of the mark M1, the authentication unit 15 determines that the mark M1 is at the point of the gaze E (step S14: YES). If the position of the point of the gaze E is determined to be the predetermined distance or more away from the position of the mark M1, the authentication unit 15 determines that the mark M1 is not at the point of the gaze E (step S14: NO).
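The step S14 hit test can be sketched as follows. This is a minimal two-dimensional illustration, not the claimed implementation: the names `gaze_point` and `mark_under_gaze`, the coordinate layout, and the threshold value are all assumptions introduced for illustration.

```python
import math

THRESHOLD = 0.5  # the predetermined distance, in the same units as the positions


def gaze_point(eye_pos, gaze_dir, eye_to_screen_distance):
    """Project the detected gaze direction from the eye position over the
    measured eye-to-screen distance to get the landing point of the gaze."""
    norm = math.hypot(*gaze_dir)
    return (eye_pos[0] + gaze_dir[0] / norm * eye_to_screen_distance,
            eye_pos[1] + gaze_dir[1] / norm * eye_to_screen_distance)


def mark_under_gaze(point, mark_positions, threshold=THRESHOLD):
    """Return the mark whose preset position is within `threshold` of the
    gaze point (step S14: YES), or None if no mark is close enough (NO)."""
    for name, pos in mark_positions.items():
        if math.dist(point, pos) < threshold:
            return name
    return None
```

Step S16 is the complement of the same test: the gaze has left the mark once `math.dist` for that mark reaches the threshold.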

If it is determined that the mark M1 is at the point of the gaze E (step S14: YES), the terminal 1 executes step S15. If it is determined that the mark M1 is not at the point of the gaze E (step S14: NO), the terminal 1 executes step S14 again. The terminal 1 repeats step S14 for a predetermined time until it determines that the mark M1 is at the point of the gaze E. The authentication unit 15 detects, as the gaze timing, the timing at which it is determined in step S14 that the mark M1 is at the point of the gaze E.

In step S15, the state detection unit 13 detects the state of the eye E1 while the mark M1 is at the point of the gaze E. In this embodiment, in step S15 the state detection unit 13 detects, as the state of the eye E1, blinking and features of the eye E1. As the features of the eye E1, the state detection unit 13 detects the state of the wrinkles around the eye, the focal depth of the eye E1, and the number of eyeballs.

In step S15, after the non-gaze timing is detected in step S16 (described later), the authentication unit 15 acquires the number of blinks and the blink timing based on the detected blinking. More specifically, the authentication unit 15 first acquires the first number: the number of times the user blinks between the gaze timing detected in step S14 or step S18 (described later) and the non-gaze timing detected in step S16. Before step S18 has been executed, the authentication unit 15 acquires the first number for the interval from the gaze timing detected in step S14 to the non-gaze timing detected in step S16. Once step S18 has been executed, the authentication unit 15 acquires the first number for the interval from the gaze timing detected in step S18 to the non-gaze timing detected in step S16. The authentication unit 15 acquires the acquired first number as the number of blinks.

In step S15, the authentication unit 15 acquires the first blink time for the interval from the gaze timing detected in step S14 or step S18 to the non-gaze timing detected in step S16. Before step S18 has been executed, the authentication unit 15 acquires the first blink time for the interval from the gaze timing detected in step S14 to the non-gaze timing detected in step S16. Once step S18 has been executed, the authentication unit 15 acquires the first blink time for the interval from the gaze timing detected in step S18 to the non-gaze timing detected in step S16. The authentication unit 15 acquires the acquired first blink time as the blink timing.
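The blink bookkeeping in steps S15 and S17 amounts to counting the blinks that fall inside a time interval and recording each blink's offset from the start of the interval. A minimal sketch, assuming blink events are available as timestamps (the function name and the interval convention are illustrative):

```python
def blinks_in_interval(blink_timestamps, start, end):
    """Count the blinks observed in [start, end) and return, for each one,
    the elapsed time from `start` (a first or second blink time)."""
    offsets = [t - start for t in blink_timestamps if start <= t < end]
    return len(offsets), offsets
```

The first number and first blink times follow from an interval running from a gaze timing to the next non-gaze timing; the second number and second blink times follow from the interval running from a non-gaze timing to the next gaze timing.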

Next, the authentication unit 15 determines whether the point of the gaze E detected in step S13 has left the mark M1 (step S16). In step S16, the authentication unit 15 calculates the position of the point of the gaze E by the same process as step S14. The authentication unit 15 then determines whether the calculated position of the point of the gaze E is the predetermined distance or more away from the preset position of the mark M1. If the position is determined not to be the predetermined distance or more away from the position of the mark M1, the authentication unit 15 determines that the point of the gaze E has not left the mark M1 (step S16: NO). If the position is determined to be the predetermined distance or more away from the position of the mark M1, the authentication unit 15 determines that the point of the gaze E has left the mark M1 (step S16: YES).

If it is determined that the point of the gaze E has not left the mark M1 (step S16: NO), the terminal 1 executes step S15 again. The terminal 1 repeats steps S15 and S16 for a predetermined time until it determines that the point of the gaze E has left the mark M1. If it is determined that the point of the gaze E has left the mark M1 (step S16: YES), the terminal 1 executes step S17. The authentication unit 15 detects, as the non-gaze timing, the timing at which it is determined in step S16 that the point of the gaze E has left the mark M1.

When it is determined in step S16 that the point of the gaze E has left the mark M1, the user is considered to be moving the point of the gaze E from one mark M1 to another mark M1. For example, after gazing at one mark M1, the user selects another mark M1 to which to move the point of the gaze E, and then moves the point of the gaze E to the mark M1 that the user selected.

In step S17, the state detection unit 13 detects the state of the eye E1 while the point of the gaze E moves from one mark M1 to another mark M1. In this embodiment, in step S17 the state detection unit 13 detects, as the state of the eye E1, blinking and features of the eye E1 by the same process as step S15.

In step S17, after the gaze timing is detected in step S18 (described later), the authentication unit 15 acquires the number of blinks and the blink timing based on the detected blinking. More specifically, the authentication unit 15 first acquires the second number: the number of times the user blinks between the non-gaze timing detected in step S16 and the gaze timing detected in step S18. The authentication unit 15 acquires the acquired second number as the number of blinks.

In step S17, the authentication unit 15 acquires the second blink time for the interval from the non-gaze timing detected in step S16 to the gaze timing detected in step S18, and acquires the acquired second blink time as the blink timing.

Next, the authentication unit 15 determines whether the mark M1 is at the point of the gaze E detected in step S13 (step S18). In step S18, the authentication unit 15 makes this determination by the same process as step S14.

If it is determined that the mark M1 is not at the point of the gaze E (step S18: NO), the terminal 1 executes step S17 again. The terminal 1 repeats steps S17 and S18 for a predetermined time until it determines that the mark M1 is at the point of the gaze E. If it is determined that the mark M1 is at the point of the gaze E (step S18: YES), the terminal 1 executes step S19. The authentication unit 15 detects, as the gaze timing, the timing at which it is determined in step S18 that the mark M1 is at the point of the gaze E.

In step S19, the terminal 1 determines whether a termination condition is satisfied. In this embodiment, the termination condition is that the number of times the gaze timing has been detected in steps S14 and S18 has reached the prescribed number set in advance, described above. The termination condition is, for example, set in advance, and its content can be changed as appropriate. If it is determined that the termination condition is not satisfied (step S19: NO), the terminal 1 executes step S15 again. If it is determined that the termination condition is satisfied (step S19: YES), the terminal 1 executes step S20.

In step S20, the terminal 1 stores the authentication information. In step S20, the terminal 1 stores the authentication pattern P2 (see FIG. 2) as authentication information. The terminal 1 acquires the order of the marks M1 determined in steps S14 and S18 to be at the point of the gaze E, and stores the acquired order of the marks M1 as the authentication pattern P2.

In step S20, the terminal 1 stores first authentication information as authentication information. The first authentication information indicates the features of the eye E1 while the mark M1 is at the point of the gaze E. The terminal 1 acquires, for each mark M1 on which the point of the user's gaze E was placed, the features of the eye E1 detected in step S15, and stores the acquired features of the eye E1 as the first authentication information.

In step S20, the terminal 1 stores second authentication information as authentication information. The second authentication information indicates the number of blinks and the blink timing while the mark M1 is at the point of the gaze E. The terminal 1 stores the number of blinks and the blink timing acquired in step S15 as the second authentication information.

In step S20, the terminal 1 stores third authentication information as authentication information. The third authentication information indicates the features of the eye E1 while the point of the gaze E moves from one mark M1 to another mark M1. The terminal 1 stores the features of the eye E1 detected in step S17 as the third authentication information.

In step S20, the terminal 1 stores fourth authentication information as authentication information. The fourth authentication information indicates the number of blinks and the blink timing while the point of the gaze E moves from one mark M1 to another mark M1. The terminal 1 stores the number of blinks and the blink timing acquired in step S17 as the fourth authentication information. With the above processing, the terminal 1 ends step S1.
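The authentication information stored in step S20 can be pictured as one record holding the pattern and the four pieces of information described above. This is a hypothetical sketch; the source does not specify a storage layout, and all field names are assumptions:

```python
from dataclasses import dataclass


@dataclass
class AuthenticationInfo:
    pattern_p2: list    # order of the marks M1 gazed at (authentication pattern P2)
    first_info: dict    # features of the eye E1 per gazed mark M1
    second_info: list   # (number of blinks, blink times) while gazing at a mark M1
    third_info: list    # features of the eye E1 while moving between marks M1
    fourth_info: list   # (number of blinks, blink times) while moving between marks M1
```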

When the terminal 1 performs authentication, the terminal 1 executes steps S2 to S7. In step S2, the display unit 14 displays the authentication pattern M. The display unit 14 displays the authentication pattern M by the same process as step S11.

Next, the eye detection unit 11 detects the user's eye E1 (step S3). In step S3, the eye detection unit 11 detects the user's eye E1 by the same process as step S12. Next, the gaze detection unit 12 detects the user's gaze E based on the detection result of step S3 (step S4). In step S4, the gaze detection unit 12 detects the user's gaze E by the same process as step S13.

Next, the terminal 1 performs authentication (step S5). In step S5, the terminal 1 executes the process shown in FIG. 6. FIG. 6 is a flowchart showing the authentication process shown in FIG. 4.

First, the authentication unit 15 determines whether the first mark M1 is at the point of the gaze E detected in step S4 (step S51). The first mark M1 is the mark M1, among the plurality of marks M1 included in the authentication pattern P2 set in step S1, that was determined in step S14 to be at the point of the gaze E. In step S51, the authentication unit 15 makes this determination by the same process as step S14.

If it is determined that the first mark M1 is at the point of the gaze E (step S51: YES), the terminal 1 executes step S52. If it is determined that the first mark M1 is not at the point of the gaze E (step S51: NO), the terminal 1 executes step S51 again. The terminal 1 repeats step S51 for a predetermined time until it determines that the first mark M1 is at the point of the gaze E. The authentication unit 15 detects, as the gaze timing, the timing at which it is determined in step S51 that the first mark M1 is at the point of the gaze E.

In step S52, the state detection unit 13 detects the state of the eye E1 while the mark M1 is at the point of the gaze E. The state detection unit 13 detects the state of the eye E1 while the mark M1 is at the point of the gaze E by the same process as step S15.

In step S52, after the non-gaze timing is detected in step S53 (described later), the authentication unit 15 acquires the number of blinks and the blink timing based on the detected blinking. More specifically, the authentication unit 15 first acquires the first number for the interval from the gaze timing detected in step S51 or step S55 (described later) to the non-gaze timing detected in step S53. Before step S55 has been executed, the authentication unit 15 acquires the first number for the interval from the gaze timing detected in step S51 to the non-gaze timing detected in step S53. Once step S55 has been executed, the authentication unit 15 acquires the first number for the interval from the gaze timing detected in step S55 to the non-gaze timing detected in step S53. The authentication unit 15 acquires the acquired first number as the number of blinks.

In step S52, the authentication unit 15 acquires the first blink time for the interval from the gaze timing detected in step S51 or step S55 to the non-gaze timing detected in step S53. Before step S55 has been executed, the authentication unit 15 acquires the first blink time for the interval from the gaze timing detected in step S51 to the non-gaze timing detected in step S53. Once step S55 has been executed, the authentication unit 15 acquires the first blink time for the interval from the gaze timing detected in step S55 to the non-gaze timing detected in step S53. The authentication unit 15 acquires the acquired first blink time as the blink timing.

Next, the authentication unit 15 determines whether the point of the gaze E detected in step S4 has left the mark M1 (step S53). In step S53, the authentication unit 15 makes this determination by the same process as step S16.

If it is determined that the point of the gaze E has not left the mark M1 (step S53: NO), the terminal 1 executes step S52 again. The terminal 1 repeats steps S52 and S53 for a predetermined time until it determines that the point of the gaze E has left the mark M1. If it is determined that the point of the gaze E has left the mark M1 (step S53: YES), the terminal 1 executes step S54. The authentication unit 15 detects, as the non-gaze timing, the timing at which it is determined in step S53 that the point of the gaze E has left the mark M1.

When it is determined in step S53 that the point of the gaze E has left the mark M1, the user is considered to be moving the point of the gaze E from one mark M1 to another mark M1. If the user operating the terminal 1 is the authenticated user, that user moves the point of the gaze E from one mark M1 to another mark M1 in accordance with the authentication pattern P2 set in step S1.

In step S54, the state detection unit 13 detects the state of the eye E1 while the point of the gaze E moves from one mark M1 to another mark M1. In step S54, the state detection unit 13 detects this state by the same process as step S17.

In step S54, after the gaze timing is detected in step S55 (described later), the authentication unit 15 acquires the number of blinks and the blink timing based on the detected blinking. More specifically, the authentication unit 15 first acquires the second number: the number of times the user blinks between the non-gaze timing detected in step S53 and the gaze timing detected in step S55. The authentication unit 15 acquires the acquired second number as the number of blinks.

In step S54, the authentication unit 15 acquires the second blink time for the interval from the non-gaze timing detected in step S53 to the gaze timing detected in step S55, and acquires the acquired second blink time as the blink timing.

Next, the authentication unit 15 determines whether the mark M1 is at the point of the gaze E detected in step S4 (step S55). In step S55, the authentication unit 15 makes this determination by the same process as step S18.

If it is determined that the mark M1 is not at the point of the gaze E (step S55: NO), the terminal 1 executes step S54 again. The terminal 1 repeats steps S54 and S55 for a predetermined time until it determines that the mark M1 is at the point of the gaze E. If it is determined that the mark M1 is at the point of the gaze E (step S55: YES), the terminal 1 executes step S56. The authentication unit 15 detects, as the gaze timing, the timing at which it is determined in step S55 that the mark M1 is at the point of the gaze E.

In step S56, the terminal 1 determines whether a termination condition is satisfied. In this embodiment, the termination condition is that the number of times the gaze timing has been detected in steps S51 and S55 has reached the prescribed number set in advance, described above. The termination condition is, for example, set in advance, and its content can be changed as appropriate. If it is determined that the termination condition is not satisfied (step S56: NO), the terminal 1 executes step S52 again. If it is determined that the termination condition is satisfied (step S56: YES), the terminal 1 acquires the order of the marks M1 determined in steps S51 and S55 to be at the point of the gaze E, and acquires the acquired order of the marks M1 as the input pattern P1. The terminal 1 then executes step S57.

In step S57, the authentication unit 15 determines whether the user operating the terminal 1 is the authenticated user. In this embodiment, the authentication unit 15 determines whether each of a first condition, a second condition, a third condition, a fourth condition, and a fifth condition, described below, is satisfied. If it is determined that all of the first to fifth conditions are satisfied, the authentication unit 15 determines that the user operating the terminal 1 is the authenticated user. Otherwise, the authentication unit 15 determines that the user operating the terminal 1 is not the authenticated user.

The first condition concerns the input pattern P1 entered by the user. The authentication unit 15 determines whether the input pattern P1 acquired in step S56 matches the authentication pattern P2 set in step S1. More specifically, the authentication unit 15 determines whether the order of the marks M1 determined in steps S51 and S55 to be at the point of the gaze E matches the order of the marks M1 indicated by the authentication pattern P2 stored in step S20. If it is determined that the input pattern P1 and the authentication pattern P2 match, the authentication unit 15 determines that the first condition is satisfied. If it is determined that they do not match, the authentication unit 15 determines that the first condition is not satisfied.
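The first condition reduces to an exact, order-sensitive comparison of the two mark sequences. A minimal sketch (the mark labels in the example are illustrative):

```python
def first_condition(input_pattern_p1, authentication_pattern_p2):
    """Satisfied only when the gazed-mark order matches the stored order exactly."""
    return input_pattern_p1 == authentication_pattern_p2
```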

The second condition concerns the features of the eye E1 while the mark M1 is at the point of the gaze E. The authentication unit 15 calculates the match rate between the features of the eye E1 detected in step S52 and the features of the eye E1 indicated by the first authentication information stored in step S20. The authentication unit 15 calculates, for example, the average of the match rates described below, namely the match rate for the state of the wrinkles around the eye, the match rate for the focal depth of the eye E1, and the match rate for the number of eyeballs, as the match rate for the features of the eye E1.

The authentication unit 15 calculates the match rate between the state of the wrinkles around the eye detected in step S52 and the state of the wrinkles around the eye indicated by the first authentication information. Specifically, the authentication unit 15 calculates the match rate between the wrinkle length, wrinkle position, and wrinkle thickness detected in step S52 and the wrinkle length, wrinkle position, and wrinkle thickness indicated by the first authentication information. The authentication unit 15 may calculate the match rate for the state of the wrinkles around the eye by any known means.

The authentication unit 15 calculates the match rate between the focal depth of the eye E1 detected in step S52 and the focal depth of the eye E1 indicated by the first authentication information. For example, the authentication unit 15 may calculate, as this match rate, a value normalized by multiplying the reciprocal of the difference between the focal depth of the eye E1 detected in step S52 and the focal depth of the eye E1 indicated by the first authentication information by a predetermined value.

The authentication unit 15 calculates the match rate between the number of eyeballs detected in step S52 and the number of eyeballs indicated by the first authentication information. For example, if the number of eyeballs detected in step S52 matches the number of eyeballs indicated by the first authentication information, the authentication unit 15 may set this match rate to a predetermined first value (for example, 100%). If the two numbers do not match, the authentication unit 15 may set this match rate to a predetermined second value smaller than the first value (for example, 0%).

The authentication unit 15 determines whether the calculated match rate for the features of the eye E1 is a predetermined value or more. If it is determined that the match rate for the features of the eye E1 is the predetermined value or more, the authentication unit 15 determines that the second condition is satisfied. If it is determined that the match rate is less than the predetermined value, the authentication unit 15 determines that the second condition is not satisfied.
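The second condition can be sketched as follows, with the wrinkle-state match rate supplied by an external comparison (the source leaves it to known means). The `scale` and `threshold` values are illustrative assumptions, since the source does not quantify the predetermined values; the focal-depth rate uses the normalized reciprocal-of-difference form described above, and the eyeball-count rate is all-or-nothing.

```python
def focal_depth_match_rate(measured, stored, scale=0.1):
    """Reciprocal of the focal-depth difference, multiplied by `scale` and
    clamped to [0, 1]; an identical depth counts as a perfect match."""
    diff = abs(measured - stored)
    return 1.0 if diff == 0 else min(1.0, scale / diff)


def eyeball_count_match_rate(measured, stored):
    """Predetermined first value (100%) on a match, second value (0%) otherwise."""
    return 1.0 if measured == stored else 0.0


def second_condition(wrinkle_rate, focal_rate, eyeball_rate, threshold=0.8):
    """Average of the three component match rates compared to a predetermined value."""
    return (wrinkle_rate + focal_rate + eyeball_rate) / 3.0 >= threshold
```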

 第3条件は、視線Eの先にマークM1があるときの瞬きの回数及び瞬きのタイミングに関する条件である。認証部15は、ステップS52において取得された瞬きの回数と、ステップS20において記憶された第2認証情報により示される瞬きの回数との一致率を算出する。認証部15は、例えば、ステップS52において取得された第1回数と、第2認証情報により示される第1回数との差分の逆数に、所定の値を乗じて正規化した値を瞬きの回数の一致率として算出する。 The third condition is a condition related to the number of blinks and the timing of blinks when the mark M1 is ahead of the line of sight E. The authentication unit 15 calculates the match rate between the number of blinks acquired in step S52 and the number of blinks indicated by the second authentication information stored in step S20. For example, the authentication unit 15 calculates the match rate of the number of blinks by normalizing the reciprocal of the difference between the first number acquired in step S52 and the first number indicated by the second authentication information by multiplying it by a predetermined value.

 認証部15は、ステップS52において取得された瞬きのタイミングと、ステップS20において記憶された第2認証情報により示される瞬きのタイミングとの一致率を算出する。認証部15は、例えば、ステップS52において取得された第1瞬き時間と、第2認証情報により示される第1瞬き時間との差分の逆数に、所定の値を乗じて正規化した値を瞬きのタイミングの一致率として算出する。 The authentication unit 15 calculates the match rate between the blink timing acquired in step S52 and the blink timing indicated by the second authentication information stored in step S20. For example, the authentication unit 15 calculates the match rate of the blink timing as a value normalized by multiplying the reciprocal of the difference between the first blink time acquired in step S52 and the first blink time indicated by the second authentication information by a predetermined value.

 認証部15は、算出した瞬きの回数の一致率と、算出した瞬きのタイミングの一致率との平均値を算出する。認証部15は、算出した平均値が所定値以上であるか否かを判定する。平均値が所定値以上であると判定された場合、認証部15は、第3条件を満たすと判定する。平均値が所定値未満であると判定された場合、認証部15は、第3条件を満たさないと判定する。 The authentication unit 15 calculates the average value of the calculated match rate of the number of blinks and the calculated match rate of the blink timing. The authentication unit 15 determines whether the calculated average value is equal to or greater than a predetermined value. If it is determined that the average value is equal to or greater than the predetermined value, the authentication unit 15 determines that the third condition is satisfied. If it is determined that the average value is less than the predetermined value, the authentication unit 15 determines that the third condition is not satisfied.
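The third-condition decision just described, which is structurally identical to the fifth condition explained later, can be sketched as follows. The scale factor and the threshold are assumptions, as the text only refers to "a predetermined value".

```python
# Illustrative sketch of the third-condition check (blink count and blink
# timing while the gaze is on a mark). scale and threshold are assumptions.

def blink_match_rate(measured: float, stored: float, scale: float = 1.0) -> float:
    """Normalized reciprocal of the difference, clamped to [0, 1]."""
    diff = abs(measured - stored)
    return 1.0 if diff == 0.0 else min(1.0, scale / diff)

def third_condition_met(count_rate: float, timing_rate: float,
                        threshold: float = 0.8) -> bool:
    """Average the two match rates and compare with the predetermined value."""
    return (count_rate + timing_rate) / 2.0 >= threshold
```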

 第4条件は、一のマークM1から他のマークM1へ視線Eの先が移動するときの眼E1に係る特徴に関する条件である。認証部15は、ステップS54において検出された眼E1に係る特徴と、ステップS20において記憶された第3認証情報により示される眼E1に係る特徴との一致率を算出する。認証部15が当該一致率を算出する方法は、第2条件を満たすか否かを判定するときの方法と同様である。 The fourth condition is a condition related to the features of the eye E1 when the gaze E moves from one mark M1 to another mark M1. The authentication unit 15 calculates the match rate between the features of the eye E1 detected in step S54 and the features of the eye E1 indicated by the third authentication information stored in step S20. The method by which the authentication unit 15 calculates the match rate is the same as the method used to determine whether the second condition is satisfied.

 認証部15は、算出した一致率が所定値以上であるか否かを判定する。一致率が所定値以上であると判定された場合、認証部15は、第4条件を満たすと判定する。一致率が所定値未満であると判定された場合、認証部15は、第4条件を満たさないと判定する。 The authentication unit 15 determines whether the calculated matching rate is equal to or greater than a predetermined value. If it is determined that the matching rate is equal to or greater than the predetermined value, the authentication unit 15 determines that the fourth condition is met. If it is determined that the matching rate is less than the predetermined value, the authentication unit 15 determines that the fourth condition is not met.

 第5条件は、一のマークM1から他のマークM1へ視線Eが移動するときの瞬きの回数及び瞬きのタイミングに関する条件である。認証部15は、ステップS54において取得された瞬きの回数と、ステップS20において記憶された第4認証情報により示される瞬きの回数との一致率を算出する。認証部15は、例えば、ステップS54において取得された第2回数と、第4認証情報により示される第2回数との差分の逆数に、所定の値を乗じて正規化した値を瞬きの回数の一致率として算出する。 The fifth condition is a condition related to the number of blinks and the timing of the blinks when the gaze E moves from one mark M1 to another mark M1. The authentication unit 15 calculates the match rate between the number of blinks acquired in step S54 and the number of blinks indicated by the fourth authentication information stored in step S20. For example, the authentication unit 15 calculates the match rate of the number of blinks by normalizing the reciprocal of the difference between the second number acquired in step S54 and the second number indicated by the fourth authentication information by multiplying it by a predetermined value.

 認証部15は、ステップS54において取得された瞬きのタイミングと、ステップS20において記憶された第4認証情報により示される瞬きのタイミングとの一致率を算出する。認証部15は、例えば、ステップS54において取得された第2瞬き時間と、第4認証情報により示される第2瞬き時間との差分の逆数に、所定の値を乗じて正規化した値を瞬きのタイミングの一致率として算出する。 The authentication unit 15 calculates the match rate between the blink timing acquired in step S54 and the blink timing indicated by the fourth authentication information stored in step S20. For example, the authentication unit 15 calculates the match rate of the blink timing as a value normalized by multiplying the reciprocal of the difference between the second blink time acquired in step S54 and the second blink time indicated by the fourth authentication information by a predetermined value.

 認証部15は、算出した瞬きの回数の一致率と、瞬きのタイミングの一致率との平均値を算出する。認証部15は、算出した平均値が所定値以上であるか否かを判定する。平均値が所定値以上であると判定された場合、認証部15は、第5条件を満たすと判定する。平均値が所定値未満であると判定された場合、認証部15は、第5条件を満たさないと判定する。 The authentication unit 15 calculates the average value of the calculated match rate of the number of blinks and the match rate of the blink timing. The authentication unit 15 determines whether the calculated average value is equal to or greater than a predetermined value. If it is determined that the average value is equal to or greater than the predetermined value, the authentication unit 15 determines that the fifth condition is satisfied. If it is determined that the average value is less than the predetermined value, the authentication unit 15 determines that the fifth condition is not satisfied.

 以上、第1条件~第5条件の内容の例を説明したが、第1条件~第5条件の内容は適宜変更可能である。また、端末1を操作するユーザが認証ユーザであると判定する基準も適宜変更可能である。例えば、第1条件、第2条件、第3条件、第4条件、及び第5条件の少なくとも1つを満たすと判定された場合、認証部15は、端末1を操作するユーザが認証ユーザであると判定してもよい。 The above describes examples of the contents of the first to fifth conditions, but the contents of the first to fifth conditions can be changed as appropriate. In addition, the criteria for determining that the user operating the terminal 1 is an authenticated user can also be changed as appropriate. For example, if it is determined that at least one of the first condition, the second condition, the third condition, the fourth condition, and the fifth condition is satisfied, the authentication unit 15 may determine that the user operating the terminal 1 is an authenticated user.
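The variable decision criterion described in this paragraph can be illustrated with a short sketch. The `require_all` switch is an assumed policy parameter; the text only states that the rule for combining the first through fifth conditions may be changed as appropriate.

```python
# Assumed sketch of the configurable decision criterion: either all of the
# first through fifth conditions must hold, or at least one of them.

def is_authenticated_user(conditions: list[bool], require_all: bool = True) -> bool:
    """Return True when the chosen combination of conditions is satisfied."""
    return all(conditions) if require_all else any(conditions)
```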

 端末1を操作するユーザが認証ユーザであると判定された場合(ステップS57:YES)、認証部15は、端末1のロックを解除する(ステップS58)。端末1を操作するユーザが認証ユーザでないと判定された場合(ステップS57:NO)、認証部15は、端末1のロックを解除しない(ステップS59)。ステップS59では、表示部14は、認証をやり直す旨のメッセージを表示してもよい。以上の処理を経て、端末1は、ステップS5を終了する。 If it is determined that the user operating the terminal 1 is an authenticated user (step S57: YES), the authentication unit 15 unlocks the terminal 1 (step S58). If it is determined that the user operating the terminal 1 is not an authenticated user (step S57: NO), the authentication unit 15 does not unlock the terminal 1 (step S59). In step S59, the display unit 14 may display a message prompting the user to retry authentication. After the above processing, the terminal 1 ends step S5.

 続いて、端末1は、端末1のロックが解除されたか否かを判定する(ステップS6)。ステップS58が実行されることで端末1のロックが解除されたと判定された場合(ステップS6:YES)、端末1は、ステップS7を実行する。ステップS59が実行されることで端末1のロックが解除されなかったと判定された場合(ステップS6:NO)、端末1は、一連の動作を終了する。ステップS7では、端末1は、学習を行う。ステップS7では、端末1は、例えば、端末1を操作するユーザが認証ユーザであるか否かを判定するための基準を更新する。 Then, the terminal 1 judges whether or not the terminal 1 has been unlocked (step S6). When it is judged that the terminal 1 has been unlocked by executing step S58 (step S6: YES), the terminal 1 executes step S7. When it is judged that the terminal 1 has not been unlocked by executing step S59 (step S6: NO), the terminal 1 ends the series of operations. In step S7, the terminal 1 performs learning. In step S7, the terminal 1 updates the criteria for judging whether or not the user operating the terminal 1 is an authenticated user, for example.

 ステップS7では、端末1は、第1認証情報により示される目元のしわの状態を、ステップS52において検出された目元のしわの状態に置き換えて記憶する。ステップS7では、端末1は、第1認証情報により示される眼E1の焦点深度とステップS52において検出された眼E1の焦点深度との平均値を算出する。次に、端末1は、第1認証情報により示される眼E1の焦点深度を、算出した平均値に置き換えて記憶する。ステップS7では、端末1は、第1認証情報により示される眼球の個数を、ステップS52において検出された眼球の個数に置き換えて記憶する。 In step S7, terminal 1 replaces the state of wrinkles around the eyes indicated by the first authentication information with the state of wrinkles around the eyes detected in step S52 and stores it. In step S7, terminal 1 calculates the average value of the focal depth of eye E1 indicated by the first authentication information and the focal depth of eye E1 detected in step S52. Next, terminal 1 replaces the focal depth of eye E1 indicated by the first authentication information with the calculated average value and stores it. In step S7, terminal 1 replaces the number of eyeballs indicated by the first authentication information with the number of eyeballs detected in step S52 and stores it.

 ステップS7では、端末1は、第2認証情報により示される瞬きの回数と、ステップS52において取得された瞬きの回数との平均値を算出する。端末1は、第2認証情報により示される瞬きの回数を、算出した瞬きの回数の平均値に置き換えて記憶する。ステップS7では、端末1は、第2認証情報により示される瞬きのタイミングと、ステップS52において取得された瞬きのタイミングとの平均値を算出する。端末1は、第2認証情報により示される瞬きのタイミングを、算出した瞬きのタイミングの平均値に置き換えて記憶する。 In step S7, terminal 1 calculates the average value of the number of blinks indicated by the second authentication information and the number of blinks acquired in step S52. Terminal 1 replaces the number of blinks indicated by the second authentication information with the calculated average value of the number of blinks and stores it. In step S7, terminal 1 calculates the average value of the blink timing indicated by the second authentication information and the blink timing acquired in step S52. Terminal 1 replaces the blink timing indicated by the second authentication information with the calculated average value of the blink timing and stores it.

 ステップS7では、端末1は、第3認証情報により示される目元のしわの状態を、ステップS54において検出された目元のしわの状態に置き換えて記憶する。ステップS7では、端末1は、第3認証情報により示される眼E1の焦点深度とステップS54において検出された眼E1の焦点深度との平均値を算出する。次に、端末1は、第3認証情報により示される眼E1の焦点深度を、算出した平均値に置き換えて記憶する。ステップS7では、端末1は、第3認証情報により示される眼球の個数を、ステップS54において検出された眼球の個数に置き換えて記憶する。 In step S7, terminal 1 replaces the state of wrinkles around the eyes indicated by the third authentication information with the state of wrinkles around the eyes detected in step S54 and stores it. In step S7, terminal 1 calculates the average value of the focal depth of eye E1 indicated by the third authentication information and the focal depth of eye E1 detected in step S54. Next, terminal 1 replaces the focal depth of eye E1 indicated by the third authentication information with the calculated average value and stores it. In step S7, terminal 1 replaces the number of eyeballs indicated by the third authentication information with the number of eyeballs detected in step S54 and stores it.

 ステップS7では、端末1は、第4認証情報により示される瞬きの回数と、ステップS54において取得された瞬きの回数との平均値を算出する。端末1は、第4認証情報により示される瞬きの回数を、算出した瞬きの回数の平均値に置き換えて記憶する。ステップS7では、端末1は、第4認証情報により示される瞬きのタイミングと、ステップS54において取得された瞬きのタイミングとの平均値を算出する。端末1は、第4認証情報により示される瞬きのタイミングを、算出した瞬きのタイミングの平均値に置き換えて記憶する。 In step S7, terminal 1 calculates the average value of the number of blinks indicated by the fourth authentication information and the number of blinks acquired in step S54. Terminal 1 replaces the number of blinks indicated by the fourth authentication information with the calculated average value of the number of blinks and stores it. In step S7, terminal 1 calculates the average value of the blink timing indicated by the fourth authentication information and the blink timing acquired in step S54. Terminal 1 replaces the blink timing indicated by the fourth authentication information with the calculated average value of the blink timing and stores it.
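The update rule carried out in step S7 for the first through fourth authentication information can be summarized in a hedged sketch: discrete features (the state of wrinkles around the eyes, the number of eyeballs) are replaced outright by the latest observation, while numeric features (focal depth, blink count, blink timing) are replaced by the average of the stored and newly detected values. The dictionary field names below are illustrative assumptions.

```python
# Hedged sketch of the step-S7 learning update. Field names are assumptions.

def update_authentication_info(stored: dict, observed: dict) -> dict:
    updated = dict(stored)
    # Replace discrete features with the latest observation.
    updated["wrinkle_state"] = observed["wrinkle_state"]
    updated["eyeball_count"] = observed["eyeball_count"]
    # Replace numeric features with the average of stored and observed values.
    for key in ("focal_depth", "blink_count", "blink_timing"):
        updated[key] = (stored[key] + observed[key]) / 2.0
    return updated
```

The variant described later, in which stored values are simply overwritten rather than averaged, would drop the averaging loop.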

 ユーザの眼E1の状態は、例えばユーザの加齢等により変化する可能性がある。端末1のロックが解除されたと判定された場合、端末1は、端末1を操作するユーザが認証ユーザであるか否かを判定するための基準を更新する。端末1のロックを解除した認証ユーザの眼E1の状態に応じて認証の基準を変更するので、認証を繰り返すにつれてユーザの眼E1の状態が変化した場合でも、認証の正確性を維持することができる。 The condition of the user's eye E1 may change due to, for example, aging of the user. When it is determined that the terminal 1 has been unlocked, the terminal 1 updates the criteria for determining whether the user operating the terminal 1 is an authenticated user. Since the authentication criteria are changed according to the condition of the eye E1 of the authenticated user who unlocked the terminal 1, the accuracy of the authentication can be maintained even if the condition of the user's eye E1 changes as authentication is repeated.

 次に、端末1の作用効果を説明する。端末1は、ユーザの眼E1を検出する眼検出部11と、眼検出部11の検出結果に基づいて、ユーザの視線Eを検出する視線検出部12と、眼検出部11の検出結果に基づいて、ユーザの眼E1の状態を検出する状態検出部13と、認証のための視線Eを誘導するためのマークM1を含む認証模様Mを表示する表示部14と、視線Eの先にマークM1があるときの眼E1の状態に基づいて、認証を行う認証部15と、を備える。 Next, the effects of the terminal 1 will be described. The terminal 1 includes an eye detection unit 11 that detects the user's eye E1, a gaze detection unit 12 that detects the user's gaze E based on the detection result of the eye detection unit 11, a state detection unit 13 that detects the state of the user's eye E1 based on the detection result of the eye detection unit 11, a display unit 14 that displays an authentication pattern M including a mark M1 for guiding the gaze E for authentication, and an authentication unit 15 that performs authentication based on the state of the eye E1 when the mark M1 is at the end of the gaze E.

 この端末1では、例えば、ユーザがマークM1を見ているときの眼E1の状態に基づいて認証を行う。眼E1の状態は、人物毎に固有の眼E1の特徴である。したがって、仮に予め設定された認証パターンP2が認証ユーザ以外の者に知られた場合でも、眼E1の状態に基づいて認証ユーザ以外の者による端末1の認証を防ぐことができる。以上より、認証の安全性を向上できる。 In this terminal 1, for example, authentication is performed based on the state of the eye E1 when the user is looking at the mark M1. The state of the eye E1 is a characteristic of the eye E1 that is unique to each person. Therefore, even if the pre-set authentication pattern P2 is known to someone other than the authenticated user, it is possible to prevent authentication of the terminal 1 by anyone other than the authenticated user based on the state of the eye E1. As a result, the security of authentication can be improved.

 認証模様Mは、複数のマークM1を含み、認証部15は、一のマークM1から他のマークM1へ視線Eの先が移動するときの眼E1の状態に基づいて、認証を行ってよい。この場合、例えば、ユーザがマークM1から視線Eの先を外し、他のマークM1に向かって視線Eの先を移動させているときの眼E1の状態に基づいて認証を行う。ユーザがマークM1を見ているときの眼E1の状態のみならず、ユーザが視線Eの先を移動させているときの眼E1の状態を考慮して認証を行うので、認証の正確性を向上できる。したがって、認証の安全性を一層向上できる。 The authentication pattern M may include multiple marks M1, and the authentication unit 15 may perform authentication based on the state of the eye E1 when the gaze E moves from one mark M1 to another mark M1. In this case, for example, authentication is performed based on the state of the eye E1 when the user removes the gaze E from the mark M1 and moves the gaze E toward the other mark M1. Since authentication is performed taking into account not only the state of the eye E1 when the user is looking at the mark M1, but also the state of the eye E1 when the user is moving the gaze E, the accuracy of authentication can be improved. Therefore, the security of authentication can be further improved.

 状態検出部13は、眼E1の状態として、瞬きを検出してよい。例えば、瞬きの頻度、及び、瞬きを行うタイミングには、人物の体質及び癖が反映され得る。眼E1の状態として瞬きを検出し、検出した瞬きに基づいて認証を行うことができるので、認証の正確性を一層向上できる。したがって、認証の安全性を一層向上できる。 The state detection unit 13 may detect blinking as the state of the eye E1. For example, the frequency of blinking and the timing of blinking may reflect a person's constitution and habits. Since blinking can be detected as the state of the eye E1 and authentication can be performed based on the detected blinking, the accuracy of authentication can be further improved. Therefore, the safety of authentication can be further improved.

 認証部15は、検出された瞬きに基づいて、瞬きの回数及び瞬きのタイミングの少なくとも1つを取得し、取得した瞬きの回数及び瞬きのタイミングの少なくとも1つに基づいて、認証を行ってよい。瞬きの回数及び瞬きのタイミングには、人物の体質及び癖が反映され得る。瞬きの回数及び瞬きのタイミングの少なくとも1つに基づいて認証を行うことができるので、認証の正確性を一層向上できる。したがって、認証の安全性を一層向上できる。 The authentication unit 15 may obtain at least one of the number of blinks and the timing of blinks based on the detected blinks, and perform authentication based on at least one of the obtained number of blinks and the timing of blinks. The number of blinks and the timing of blinks may reflect a person's constitution and habits. Since authentication can be performed based on at least one of the number of blinks and the timing of blinks, the accuracy of authentication can be further improved. Therefore, the safety of authentication can be further improved.

 状態検出部13は、眼E1の状態として、眼E1に係る特徴を検出してよい。この場合、眼E1自体の特徴である眼E1に係る特徴に基づいて認証を行うことができるので、認証の安全性を一層向上できる。 The state detection unit 13 may detect characteristics related to the eye E1 as the state of the eye E1. In this case, authentication can be performed based on the characteristics related to the eye E1, which are the characteristics of the eye E1 itself, thereby further improving the security of the authentication.

 状態検出部13は、眼E1に係る特徴として、目元のしわの状態、眼E1の焦点深度、及び眼球の個数の少なくとも1つを検出してよい。目元のしわの状態、眼E1の焦点深度、及び眼球の個数のそれぞれは、人物毎に固有の特徴である。したがって、これらの特徴に基づいて認証を行うことができるので、認証の安全性を一層向上できる。 The state detection unit 13 may detect at least one of the state of wrinkles around the eyes, the focal depth of the eye E1, and the number of eyeballs as features related to the eye E1. The state of wrinkles around the eyes, the focal depth of the eye E1, and the number of eyeballs are each unique features for each person. Therefore, authentication can be performed based on these features, which further improves the security of the authentication.

 認証部15は、予め設定され、視線Eの先を移動させるマークM1の順番である認証パターンP2に基づいて認証を行ってよい。この場合、ユーザの眼E1の状態に加え、予め設定された認証パターンP2を用いて認証を行うことができる。例えば、ユーザにより入力された入力パターンP1が認証パターンP2と一致するか否かを判定し、当該判定結果に基づいて認証を行うことができる。したがって、認証ユーザの眼E1の状態と類似する眼E1の状態を有する者が端末1の認証を行う場合でも、入力パターンP1が認証パターンP2と一致しない場合には、端末1のロックが解除されることを防止できる。以上より、認証の安全性を一層向上できる。 The authentication unit 15 may perform authentication based on the authentication pattern P2, which is a preset order of the marks M1 over which the tip of the gaze E is moved. In this case, authentication can be performed using the preset authentication pattern P2 in addition to the state of the user's eye E1. For example, the authentication unit 15 can determine whether the input pattern P1 entered by the user matches the authentication pattern P2, and perform authentication based on the determination result. Therefore, even if a person whose eye E1 state is similar to that of the authenticated user attempts authentication on the terminal 1, the terminal 1 can be prevented from being unlocked when the input pattern P1 does not match the authentication pattern P2. As a result, the security of authentication can be further improved.
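As a supplementary illustration, the input-pattern check mentioned above reduces to an ordered comparison of mark sequences. Representing each mark M1 by an identifier string is an assumption made for this sketch.

```python
# Minimal sketch of comparing the input pattern P1 (the sequence of marks
# the user actually looked at) with the preset authentication pattern P2.
# Mark identifiers are hypothetical.

def pattern_matches(input_pattern: list[str], auth_pattern: list[str]) -> bool:
    """Unlocking also requires the gaze order to equal the preset order."""
    return input_pattern == auth_pattern
```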

 続いて、変形例に係る端末1を説明する。以下、上記実施形態との相違点を主として説明し、重複する説明を適宜省略する。 Next, a modified version of the terminal 1 will be described. Below, differences from the above embodiment will be mainly described, and duplicate descriptions will be omitted as appropriate.

 ステップS7において端末1が実行する処理の内容は、適宜変更可能である。ステップS7では、端末1は、例えば、第1認証情報により示される眼E1の焦点深度を、ステップS52において検出された眼E1の焦点深度に置き換えて記憶してもよい。端末1は、第3認証情報についても同様の処理を実行してもよい。ステップS7では、端末1は、例えば、第2認証情報により示される瞬きの回数を、ステップS52において取得された瞬きの回数に置き換えて記憶してもよい。端末1は、例えば、第2認証情報により示される瞬きのタイミングを、ステップS52において取得された瞬きのタイミングに置き換えて記憶してもよい。端末1は、第4認証情報についても同様の処理を実行してもよい。 The content of the process executed by the terminal 1 in step S7 can be changed as appropriate. In step S7, the terminal 1 may, for example, replace the focal depth of the eye E1 indicated by the first authentication information with the focal depth of the eye E1 detected in step S52 and store it. The terminal 1 may also execute a similar process for the third authentication information. In step S7, the terminal 1 may, for example, replace the number of blinks indicated by the second authentication information with the number of blinks acquired in step S52 and store it. The terminal 1 may, for example, replace the timing of the blinks indicated by the second authentication information with the timing of the blinks acquired in step S52 and store it. The terminal 1 may also execute a similar process for the fourth authentication information.

 上記実施形態では、状態検出部13が眼E1の状態として瞬き及び眼E1に係る特徴を検出する例を説明した。しかし、状態検出部13は、人物毎に固有の眼E1の特徴を眼E1の状態として検出すればよく、瞬き及び眼E1に係る特徴以外の特徴を眼E1の状態として検出してよい。 In the above embodiment, an example has been described in which the state detection unit 13 detects blinking and features related to the eye E1 as the state of the eye E1. However, the state detection unit 13 only needs to detect features of the eye E1 that are unique to each person as the state of the eye E1, and may detect features other than blinking and features related to the eye E1 as the state of the eye E1.

 上記実施形態では、状態検出部13が眼E1に係る特徴として、目元のしわの状態、眼E1の焦点深度、及び眼球の個数を検出する例を説明した。しかし、状態検出部13は、眼E1自体の特徴を眼E1に係る特徴として検出すればよく、目元のしわの状態、眼E1の焦点深度、及び眼球の個数以外の特徴を眼E1に係る特徴として検出してよい。例えば、状態検出部13は、眼E1に係る特徴として、眼E1の虹彩の大きさ、又は虹彩の色を検出してよい。 In the above embodiment, an example has been described in which the state detection unit 13 detects the state of wrinkles around the eyes, the focal depth of eye E1, and the number of eyeballs as features related to eye E1. However, it is sufficient for the state detection unit 13 to detect the features of eye E1 itself as features related to eye E1, and the state detection unit 13 may detect features other than the state of wrinkles around the eyes, the focal depth of eye E1, and the number of eyeballs as features related to eye E1. For example, the state detection unit 13 may detect the size or color of the iris of eye E1 as a feature related to eye E1.

 本開示の認証装置は、以下の構成を有する。 The authentication device disclosed herein has the following configuration:

[1]
 ユーザの眼を検出する眼検出部と、
 前記眼検出部の検出結果に基づいて、前記ユーザの視線を検出する視線検出部と、
 前記眼検出部の検出結果に基づいて、前記ユーザの眼の状態を検出する状態検出部と、
 認証のための前記視線を誘導するためのマークを含む認証模様を表示する表示部と、
 前記視線の先に前記マークがあるときの前記眼の状態に基づいて、認証を行う認証部と、を備える、
認証装置。
[1]
An eye detection unit that detects the eyes of a user;
a gaze detection unit that detects a gaze of the user based on a detection result of the eye detection unit;
a condition detection unit that detects an eye condition of the user based on a detection result of the eye detection unit;
a display unit that displays an authentication pattern including a mark for guiding the line of sight for authentication;
and an authentication unit that performs authentication based on the state of the eye when the mark is in the line of sight.
Authentication device.

[2]
 前記認証模様は、複数の前記マークを含み、
 前記認証部は、一の前記マークから他の前記マークへ前記視線の先が移動するときの前記眼の状態に基づいて、認証を行う、
[1]に記載の認証装置。
[2]
The authentication pattern includes a plurality of the marks,
the authentication unit performs authentication based on a state of the eye when the gaze moves from one of the marks to the other of the marks.
The authentication device according to [1].

[3]
 前記状態検出部は、前記眼の状態として、瞬きを検出する、
[1]又は[2]に記載の認証装置。
[3]
The state detection unit detects blinking as the eye state.
The authentication device according to [1] or [2].

[4]
 前記認証部は、
  検出された前記瞬きに基づいて、前記瞬きの回数及び前記瞬きのタイミングの少なくとも1つを取得し、
  前記取得した前記瞬きの回数及び前記瞬きのタイミングの少なくとも1つに基づいて、前記認証を行う、
[3]に記載の認証装置。
[4]
The authentication unit
Obtaining at least one of the number of blinks and the timing of the blinks based on the detected blinks;
performing the authentication based on at least one of the acquired number of blinks and the acquired blink timing;
The authentication device according to [3].

[5]
 前記状態検出部は、前記眼の状態として、前記眼に係る特徴を検出する、
[1]~[4]のいずれかに記載の認証装置。
[5]
The condition detection unit detects a feature related to the eye as the eye condition.
An authentication device according to any one of [1] to [4].

[6]
 前記状態検出部は、前記眼に係る特徴として、目元のしわの状態、前記眼の焦点深度、及び眼球の個数の少なくとも1つを検出する、
[5]に記載の認証装置。
[6]
The state detection unit detects at least one of a state of wrinkles around the eyes, a focal depth of the eyes, and a number of eyeballs as the eye-related features.
The authentication device according to [5].

[7]
 前記認証部は、予め設定され、前記視線の先を移動させる前記マークの順番である認証パターンに基づいて認証を行う、
[2]~[6]のいずれかに記載の認証装置。
[7]
the authentication unit performs authentication based on an authentication pattern that is a sequence of the marks to which the line of sight is moved, the authentication pattern being set in advance;
An authentication device according to any one of [2] to [6].

 なお、上記実施形態の説明に用いたブロック図は、機能単位のブロックを示している。これらの機能ブロック(構成部)は、ハードウェア及びソフトウェアの少なくとも一方の任意の組み合わせによって実現される。また、各機能ブロックの実現方法は特に限定されない。すなわち、各機能ブロックは、物理的又は論理的に結合した1つの装置を用いて実現されてもよいし、物理的又は論理的に分離した2つ以上の装置を直接的又は間接的に(例えば、有線、無線などを用いて)接続し、これら複数の装置を用いて実現されてもよい。機能ブロックは、上記1つの装置又は上記複数の装置にソフトウェアを組み合わせて実現されてもよい。 The block diagrams used to explain the above embodiments show functional blocks. These functional blocks (components) are realized by any combination of at least one of hardware and software. Furthermore, the method of realizing each functional block is not particularly limited. That is, each functional block may be realized using one device that is physically or logically coupled, or may be realized using two or more devices that are physically or logically separated and connected directly or indirectly (for example, using wires, wirelessly, etc.) and these multiple devices. The functional blocks may be realized by combining the one device or the multiple devices with software.

 機能には、判断、決定、判定、計算、算出、処理、導出、調査、探索、確認、受信、送信、出力、アクセス、解決、選択、選定、確立、比較、想定、期待、見做し、報知(broadcasting)、通知(notifying)、通信(communicating)、転送(forwarding)、構成(configuring)、再構成(reconfiguring)、割り当て(allocating、mapping)、割り振り(assigning)などがあるが、これらに限られない。たとえば、送信を機能させる機能ブロック(構成部)は、送信部(transmitting unit)や送信機(transmitter)と呼称される。いずれも、上述したとおり、実現方法は特に限定されない。 Functions include, but are not limited to, judgement, determination, judgment, calculation, computation, processing, derivation, investigation, search, confirmation, reception, transmission, output, access, resolution, selection, election, establishment, comparison, assumption, expectation, regarding, broadcasting, notifying, communicating, forwarding, configuring, reconfiguring, allocating, mapping, and assignment. For example, a functional block (component) that performs the transmission function is called a transmitting unit or transmitter. As mentioned above, there are no particular limitations on the method of realization for either of these.

 例えば、本開示の一実施の形態における端末1は、本開示の情報処理を行うコンピュータとして機能してもよい。図7は、本開示の一実施の形態に係る端末1のハードウェア構成の一例を示す図である。上述の端末1は、物理的には、プロセッサ1001、メモリ1002、ストレージ1003、通信装置1004、入力装置1005、出力装置1006、バス1007などを含むコンピュータ装置として構成されてもよい。端末20のハードウェア構成も、ここで説明するものであってもよい。 For example, terminal 1 in one embodiment of the present disclosure may function as a computer that performs the information processing of the present disclosure. FIG. 7 is a diagram showing an example of the hardware configuration of terminal 1 according to one embodiment of the present disclosure. The above-mentioned terminal 1 may be physically configured as a computer device including a processor 1001, memory 1002, storage 1003, communication device 1004, input device 1005, output device 1006, bus 1007, etc. The hardware configuration of terminal 20 may also be as described here.

 なお、以下の説明では、「装置」という文言は、回路、デバイス、ユニットなどに読み替えることができる。端末1のハードウェア構成は、図に示した各装置を1つ又は複数含むように構成されてもよいし、一部の装置を含まずに構成されてもよい。 In the following explanation, the word "apparatus" can be interpreted as a circuit, device, unit, etc. The hardware configuration of terminal 1 may be configured to include one or more of the devices shown in the figure, or may be configured to exclude some of the devices.

 端末1における各機能は、プロセッサ1001、メモリ1002などのハードウェア上に所定のソフトウェア(プログラム)を読み込ませることによって、プロセッサ1001が演算を行い、通信装置1004による通信を制御したり、メモリ1002及びストレージ1003におけるデータの読み出し及び書き込みの少なくとも一方を制御したりすることによって実現される。 Each function of the terminal 1 is realized by loading a specific software (program) onto hardware such as the processor 1001 and memory 1002, causing the processor 1001 to perform calculations, control communications via the communication device 1004, and control at least one of the reading and writing of data in the memory 1002 and storage 1003.

 プロセッサ1001は、例えば、オペレーティングシステムを動作させてコンピュータ全体を制御する。プロセッサ1001は、周辺装置とのインターフェース、制御装置、演算装置、レジスタなどを含む中央処理装置(CPU:Central Processing Unit)によって構成されてもよい。例えば、上述の端末1における各機能は、プロセッサ1001によって実現されてもよい。 The processor 1001, for example, runs an operating system to control the entire computer. The processor 1001 may be configured as a central processing unit (CPU) including an interface with peripheral devices, a control device, an arithmetic unit, registers, etc. For example, each function in the terminal 1 described above may be realized by the processor 1001.

 また、プロセッサ1001は、プログラム(プログラムコード)、ソフトウェアモジュール、データなどを、ストレージ1003及び通信装置1004の少なくとも一方からメモリ1002に読み出し、これらに従って各種の処理を実行する。プログラムとしては、上述の実施の形態において説明した動作の少なくとも一部をコンピュータに実行させるプログラムが用いられる。例えば、端末1における各機能は、メモリ1002に格納され、プロセッサ1001において動作する制御プログラムによって実現されてもよい。上述の各種処理は、1つのプロセッサ1001によって実行される旨を説明してきたが、2以上のプロセッサ1001により同時又は逐次に実行されてもよい。プロセッサ1001は、1以上のチップによって実装されてもよい。なお、プログラムは、電気通信回線を介してネットワークから送信されても良い。 The processor 1001 also reads out programs (program codes), software modules, data, etc. from at least one of the storage 1003 and the communication device 1004 into the memory 1002, and executes various processes according to these. The programs used are those that cause a computer to execute at least some of the operations described in the above-mentioned embodiments. For example, each function in the terminal 1 may be realized by a control program stored in the memory 1002 and running on the processor 1001. Although the above-mentioned various processes have been described as being executed by one processor 1001, they may be executed simultaneously or sequentially by two or more processors 1001. The processor 1001 may be implemented by one or more chips. The programs may be transmitted from a network via a telecommunications line.

 メモリ1002は、コンピュータ読み取り可能な記録媒体であり、例えば、ROM(Read Only Memory)、EPROM(Erasable Programmable ROM)、EEPROM(Electrically Erasable Programmable ROM)、RAM(Random Access Memory)などの少なくとも1つによって構成されてもよい。メモリ1002は、レジスタ、キャッシュ、メインメモリ(主記憶装置)などと呼ばれてもよい。メモリ1002は、本開示の一実施の形態に係る情報処理を実施するために実行可能なプログラム(プログラムコード)、ソフトウェアモジュールなどを保存することができる。 Memory 1002 is a computer-readable recording medium, and may be composed of at least one of, for example, ROM (Read Only Memory), EPROM (Erasable Programmable ROM), EEPROM (Electrically Erasable Programmable ROM), RAM (Random Access Memory), etc. Memory 1002 may also be called a register, cache, main memory (primary storage device), etc. Memory 1002 can store executable programs (program codes), software modules, etc., for performing information processing related to one embodiment of the present disclosure.

 ストレージ1003は、コンピュータ読み取り可能な記録媒体であり、例えば、CD-ROM(Compact Disc ROM)などの光ディスク、ハードディスクドライブ、フレキシブルディスク、光磁気ディスク(例えば、コンパクトディスク、デジタル多用途ディスク、Blu-ray(登録商標)ディスク)、スマートカード、フラッシュメモリ(例えば、カード、スティック、キードライブ)、フロッピー(登録商標)ディスク、磁気ストリップなどの少なくとも1つによって構成されてもよい。ストレージ1003は、補助記憶装置と呼ばれてもよい。端末1が備える記憶媒体は、例えば、メモリ1002及びストレージ1003の少なくとも一方を含むデータベース、サーバその他の適切な媒体であってもよい。 Storage 1003 is a computer-readable recording medium, and may be composed of at least one of an optical disk such as a CD-ROM (Compact Disc ROM), a hard disk drive, a flexible disk, a magneto-optical disk (e.g., a compact disk, a digital versatile disk, a Blu-ray (registered trademark) disk), a smart card, a flash memory (e.g., a card, a stick, a key drive), a floppy (registered trademark) disk, a magnetic strip, etc. Storage 1003 may also be called an auxiliary storage device. The storage medium provided in terminal 1 may be, for example, a database, a server, or other appropriate medium that includes at least one of memory 1002 and storage 1003.

 通信装置1004は、有線ネットワーク及び無線ネットワークの少なくとも一方を介してコンピュータ間の通信を行うためのハードウェア(送受信デバイス)であり、例えばネットワークデバイス、ネットワークコントローラ、ネットワークカード、通信モジュールなどともいう。 The communication device 1004 is hardware (transmitting/receiving device) for communicating between computers via at least one of a wired network and a wireless network, and is also called, for example, a network device, a network controller, a network card, a communication module, etc.

 入力装置1005は、外部からの入力を受け付ける入力デバイス(例えば、キーボード、マウス、マイクロフォン、スイッチ、ボタン、センサなど)である。出力装置1006は、外部への出力を実施する出力デバイス(例えば、ディスプレイ、スピーカー、LEDランプなど)である。なお、入力装置1005及び出力装置1006は、一体となった構成(例えば、タッチパネル)であってもよい。 The input device 1005 is an input device (e.g., a keyboard, a mouse, a microphone, a switch, a button, a sensor, etc.) that accepts input from the outside. The output device 1006 is an output device (e.g., a display, a speaker, an LED lamp, etc.) that performs output to the outside. Note that the input device 1005 and the output device 1006 may be integrated into one structure (e.g., a touch panel).

 The devices such as the processor 1001 and the memory 1002 are connected by a bus 1007 for communicating information. The bus 1007 may be constituted by a single bus, or by different buses between the devices.

 The terminal 1 may also include hardware such as a microprocessor, a digital signal processor (DSP), an ASIC (Application Specific Integrated Circuit), a PLD (Programmable Logic Device), or an FPGA (Field Programmable Gate Array), and some or all of the functional blocks may be realized by such hardware. For example, the processor 1001 may be implemented using at least one of these pieces of hardware.

 The order of the processing procedures, sequences, flowcharts, and the like of the aspects/embodiments described in the present disclosure may be changed as long as no contradiction arises. For example, the methods described in the present disclosure present elements of various steps in an exemplary order, and are not limited to the specific order presented.

 Input and output information and the like may be stored in a specific location (for example, a memory), or may be managed using a management table. Input and output information and the like may be overwritten, updated, or appended. Output information and the like may be deleted. Input information and the like may be transmitted to another device.

 A determination may be made based on a value represented by one bit (0 or 1), based on a Boolean value (true or false), or based on a comparison of numerical values (for example, a comparison with a predetermined value).

 The aspects/embodiments described in the present disclosure may be used alone, may be used in combination, or may be switched in accordance with execution. Notification of predetermined information (for example, notification of "being X") is not limited to being performed explicitly, and may be performed implicitly (for example, by not notifying the predetermined information).

 Although the present disclosure has been described in detail above, it is apparent to those skilled in the art that the present disclosure is not limited to the embodiments described herein. The present disclosure can be implemented with modifications and alterations without departing from the spirit and scope of the present disclosure as defined by the claims. Accordingly, the description of the present disclosure is intended to be illustrative, and has no restrictive meaning with respect to the present disclosure.

 Software, whether referred to as software, firmware, middleware, microcode, hardware description language, or by another name, should be interpreted broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executable files, threads of execution, procedures, functions, and the like.

 Software, instructions, information, and the like may also be transmitted and received via a transmission medium. For example, when software is transmitted from a website, a server, or another remote source using at least one of a wired technology (such as a coaxial cable, an optical fiber cable, a twisted pair, or a digital subscriber line (DSL)) and a wireless technology (such as infrared or microwave), the at least one of the wired technology and the wireless technology is included within the definition of a transmission medium.

 The terms "system" and "network" used in the present disclosure are used interchangeably.

 The information, parameters, and the like described in the present disclosure may be represented using absolute values, may be represented using relative values from a predetermined value, or may be represented using other corresponding information.

 The terms "determining" and "deciding" used in the present disclosure may encompass a wide variety of operations. "Determining" and "deciding" may include, for example, regarding judging, calculating, computing, processing, deriving, investigating, looking up (searching, inquiring) (for example, looking up in a table, a database, or another data structure), or ascertaining as having "determined" or "decided". "Determining" and "deciding" may also include regarding receiving (for example, receiving information), transmitting (for example, transmitting information), input, output, or accessing (for example, accessing data in a memory) as having "determined" or "decided". Furthermore, "determining" and "deciding" may include regarding resolving, selecting, choosing, establishing, comparing, and the like as having "determined" or "decided". In other words, "determining" and "deciding" may include regarding some operation as having "determined" or "decided". "Determining (deciding)" may also be read as "assuming", "expecting", "considering", and the like.

 The terms "connected" and "coupled", and any variations thereof, mean any direct or indirect connection or coupling between two or more elements, and can include the presence of one or more intermediate elements between two elements that are "connected" or "coupled" to each other. The coupling or connection between elements may be physical, logical, or a combination thereof. For example, "connection" may be read as "access". As used in the present disclosure, two elements can be considered to be "connected" or "coupled" to each other using at least one of one or more electrical wires, cables, and printed electrical connections, and, as some non-limiting and non-exhaustive examples, using electromagnetic energy having wavelengths in the radio frequency region, the microwave region, and the optical (both visible and invisible) region.

 The phrase "based on" used in the present disclosure does not mean "based only on" unless otherwise specified. In other words, the phrase "based on" means both "based only on" and "based at least on".

 Any reference to elements using designations such as "first" and "second" used in the present disclosure does not generally limit the quantity or order of those elements. These designations may be used in the present disclosure as a convenient way of distinguishing between two or more elements. Accordingly, references to first and second elements do not mean that only two elements may be employed, or that the first element must precede the second element in some way.

 Where the terms "include" and "including" and variations thereof are used in the present disclosure, these terms, like the term "comprising", are intended to be inclusive. Furthermore, the term "or" used in the present disclosure is intended not to be an exclusive OR.

 In the present disclosure, where articles are added by translation, such as "a", "an", and "the" in English, the present disclosure may include the case where the nouns following these articles are plural.

 In the present disclosure, the phrase "A and B are different" may mean "A and B are different from each other". The phrase may also mean "A and B are each different from C". Terms such as "separated" and "coupled" may also be interpreted in the same manner as "different".

 1... terminal (authentication device), 11... eye detection unit, 12... gaze detection unit, 13... state detection unit, 14... display unit, 15... authentication unit, E... gaze, E1... eye, M... authentication pattern, M1... mark.

Claims (7)

1. An authentication device comprising:
an eye detection unit that detects an eye of a user;
a gaze detection unit that detects a gaze of the user based on a detection result of the eye detection unit;
a state detection unit that detects a state of the eye of the user based on a detection result of the eye detection unit;
a display unit that displays an authentication pattern including a mark for guiding the gaze for authentication; and
an authentication unit that performs authentication based on the state of the eye when the gaze is on the mark.
 前記認証模様は、複数の前記マークを含み、
 前記認証部は、一の前記マークから他の前記マークへ前記視線の先が移動するときの前記眼の状態に基づいて、認証を行う、
請求項1に記載の認証装置。
The authentication pattern includes a plurality of the marks,
the authentication unit performs authentication based on a state of the eye when the gaze moves from one of the marks to the other of the marks.
The authentication device according to claim 1 .
3. The authentication device according to claim 1, wherein the state detection unit detects blinking as the state of the eye.
4. The authentication device according to claim 3, wherein the authentication unit:
acquires at least one of a number of the blinks and a timing of the blinks based on the detected blinks; and
performs the authentication based on at least one of the acquired number of the blinks and the acquired timing of the blinks.
5. The authentication device according to claim 1, wherein the state detection unit detects a feature related to the eye as the state of the eye.
6. The authentication device according to claim 5, wherein the state detection unit detects, as the feature related to the eye, at least one of a state of wrinkles around the eye, a depth of focus of the eye, and a number of eyeballs.
7. The authentication device according to claim 2, wherein the authentication unit performs authentication based on a preset authentication pattern, which is the order of the marks to which the gaze is to be moved.
PCT/JP2023/043489 2023-12-05 2023-12-05 Authentication device Pending WO2025120738A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2025561558A JPWO2025120738A1 (en) 2023-12-05 2023-12-05
PCT/JP2023/043489 WO2025120738A1 (en) 2023-12-05 2023-12-05 Authentication device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2023/043489 WO2025120738A1 (en) 2023-12-05 2023-12-05 Authentication device

Publications (1)

Publication Number Publication Date
WO2025120738A1 true WO2025120738A1 (en) 2025-06-12

Family

ID=95980709

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/043489 Pending WO2025120738A1 (en) 2023-12-05 2023-12-05 Authentication device

Country Status (2)

Country Link
JP (1) JPWO2025120738A1 (en)
WO (1) WO2025120738A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014092940A (en) * 2012-11-02 2014-05-19 Sony Corp Image display device and image display method and computer program
WO2016088415A1 (en) * 2014-12-05 2016-06-09 ソニー株式会社 Information processing device, information processing method, and program

Also Published As

Publication number Publication date
JPWO2025120738A1 (en) 2025-06-12

Similar Documents

Publication Publication Date Title
US8873147B1 (en) Chord authentication via a multi-touch interface
CN110286944B (en) Method and apparatus for processing biometric information in an electronic device
US20190207985A1 (en) Authorization policy recommendation method and apparatus, server, and storage medium
EP3287922B1 (en) Unlocking control method and terminal device
US20150067827A1 (en) Apparatus and method for setting a user-defined pattern for an application
JP7653228B2 (en) Pedestrian re-identification device and method
US20150199504A1 (en) Multi-touch local device authentication
CN106203297A (en) An identification method and device
KR20160147515A (en) Method for authenticating user and electronic device supporting the same
CN109145558B (en) Unlocking control method and electronic device
KR102740560B1 (en) Human-machine verification methods, devices, apparatus and recording media
CN108459783A (en) Control method, device and the equipment of dummy keyboard, readable medium
US20170249450A1 (en) Device and Method for Authenticating a User
US10055564B2 (en) Biometric authentication, and near-eye wearable device
WO2025120738A1 (en) Authentication device
US20210264006A1 (en) Dynamic biometric updating
US12271453B2 (en) Augmented handwritten signature authentication method and electronic device supporting same
US10496882B2 (en) Coded ocular lens for identification
CN115600174A (en) Intelligent equipment control method and device
CN109543380A (en) Solve lock control method and electronic device
JP7562831B2 (en) Information processing device
CN109240559A (en) application program control method and electronic device
CN109583168B (en) Unlocking control method and electronic device
CN109240560A (en) application program control method and electronic device
WO2025163819A1 (en) Terminal and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23960753

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2025561558

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 2025561558

Country of ref document: JP