WO2018179723A1 - Facial authentication processing apparatus, facial authentication processing method, and facial authentication processing system - Google Patents
- Publication number
- WO2018179723A1 (PCT/JP2018/001863, JP2018001863W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- face
- unit
- authentication
- frame
- target person
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
Definitions
- the present disclosure relates to a face authentication processing device, a face authentication processing method, and a face authentication processing system that perform authentication using a face image of a target person.
- In order to authenticate a target person, a face authentication technique is known in which a face image of the target person is acquired and verified against a registrant's face image to determine whether the target person is that registrant.
- For this type of face authentication technology, various techniques have been proposed for detecting so-called "spoofing", in which someone fraudulently impersonates a legitimate registrant.
- For example, Patent Document 1 discloses an information processing apparatus that receives a face image sequence of a predetermined duration and determines, based on temporal changes in predetermined color information extracted from the face image sequence, whether the face contained in the sequence is a spoof. According to this conventional example, impersonation can be accurately detected when a fake face is presented during face authentication.
- In Patent Document 1, by acquiring information derived from blood-flow changes measured from the face image on the basis of temporal changes in predetermined color information extracted from the face image sequence, it is possible to detect impersonation even when the face image shows little change, such as when a disguise mask is worn. However, with conventional face authentication devices it has sometimes been difficult to detect fraud in the form of impersonation using a moving image of a face.
- The present disclosure has been devised in view of the conventional circumstances described above, and an object thereof is to provide a face authentication processing device, a face authentication processing method, and a face authentication processing system that can accurately detect impersonation during authentication.
- The present disclosure provides a face authentication processing device comprising: a face detection unit that acquires a captured image of a target person and detects the face of the target person in an input image of the captured image; a frame detection unit that detects a linear frame in the input image; an arrangement determination unit that determines, using face position information acquired by the face detection and frame position information acquired by the frame detection, whether a frame surrounding the face of the target person exists; and an authentication determination unit that determines the validity of the authentication result of the target person's face information based on the frame arrangement determination result.
- The present disclosure also provides a face authentication processing method in a face authentication processing device that performs face authentication of a target person, the method comprising: acquiring a captured image of the target person; detecting the face of the target person in an input image of the captured image; detecting a linear frame in the input image; determining, using face position information acquired by the face detection and frame position information acquired by the frame detection, whether a frame surrounding the face of the target person exists; and determining the validity of the authentication result of the target person's face information based on the frame arrangement determination result.
- The present disclosure further provides a face authentication processing system comprising: an imaging unit that captures the target person; a face detection unit that acquires the captured image of the target person and detects the face of the target person in an input image of the captured image; a frame detection unit that detects a linear frame in the input image; an arrangement determination unit that determines, using face position information acquired by the face detection and frame position information acquired by the frame detection, whether a frame surrounding the face of the target person exists; a face verification unit that verifies the face information of the captured image of the target person against registered face information; an authentication determination unit that determines the validity of the authentication result of the target person's face information based on the face verification result and the frame arrangement determination result; an authentication result output unit that outputs the authentication result to a control target; and a display unit that displays a processing result of at least one of the imaging unit, the face detection unit, the frame detection unit, the arrangement determination unit, and the authentication determination unit.
- According to the present disclosure, impersonation at the time of authentication can be detected with high accuracy.
- FIG. 1 is a diagram illustrating a first example of the configuration of the face authentication system according to the present embodiment.
- FIG. 2 is a block diagram illustrating a functional configuration of the face authentication apparatus according to the first embodiment.
- FIG. 3 is a diagram for explaining frame detection processing in the present embodiment.
- FIG. 4 is a diagram for explaining the background removal processing in the present embodiment.
- FIG. 5 is a diagram illustrating a first example of a functional configuration of the frame detection unit in the present embodiment.
- FIG. 6 is a diagram illustrating a second example of a functional configuration of the frame detection unit in the present embodiment.
- FIG. 7 is a diagram illustrating the arrangement determination process in the present embodiment.
- FIG. 8 is a flowchart illustrating a procedure of a first example of face authentication processing according to the first embodiment.
- FIG. 9 is a flowchart illustrating the procedure of a second example of the face authentication process in the first embodiment.
- FIG. 10 is a block diagram illustrating a functional configuration of the face authentication apparatus according to the second embodiment.
- FIG. 11 is a block diagram illustrating a functional configuration of the face authentication apparatus according to the third embodiment.
- FIG. 12 is a flowchart illustrating a procedure of a first example of face authentication processing according to the third embodiment.
- FIG. 13 is a flowchart illustrating a procedure of a second example of the face authentication process in the third embodiment.
- FIG. 14 is a block diagram illustrating a functional configuration of the face authentication apparatus according to the fourth embodiment.
- FIG. 15 is a diagram illustrating a second example of the configuration of the face authentication system according to the present embodiment.
- FIG. 16 is a diagram illustrating a third example of the configuration of the face authentication system according to the present embodiment.
- Hereinafter, an embodiment that specifically discloses a face authentication processing device, a face authentication processing method, and a face authentication processing system according to the present disclosure (hereinafter referred to as "the present embodiment") will be described in detail with reference to the drawings as appropriate.
- However, more detailed description than necessary may be omitted.
- For example, detailed descriptions of already well-known matters and repeated descriptions of substantially the same configuration may be omitted. This is to prevent the following description from becoming unnecessarily redundant and to facilitate understanding by those skilled in the art.
- the accompanying drawings and the following description are provided to enable those skilled in the art to fully understand the present disclosure, and are not intended to limit the claimed subject matter.
- FIG. 1 is a diagram illustrating a first example of the configuration of the face authentication system according to the present embodiment.
- In the present embodiment, a face authentication device and a face authentication system are shown that, when performing face authentication of a target person, authenticate the target person by matching face images from a moving image of the target person against the face image of a registrant.
- In particular, the present embodiment shows a configuration example capable of detecting fraud such as impersonation by playing back a moving image of a legitimate registrant on a portable display terminal.
- the face authentication system 100A is configured to include the face authentication device 10A.
- the face authentication system 100A images the target person 30, performs face authentication, and outputs an authentication result to the control target 40.
- As the control target 40, a service usage management device such as one for car sharing, an entry/exit management device for a security area, and the like are assumed.
- In the case of a car-sharing usage management device, based on the result of the user's face authentication, use of the vehicle is permitted if the user is a registered legitimate user (if the authentication result is approval), and the vehicle is disabled if the user is not a registered user (if the authentication result is denial).
- Control relating to permission to use the vehicle can be realized, for example, by inputting an authentication result to the control computer of the automobile and enabling or disabling the operation of the vehicle according to the authentication result.
- In the case of an entry/exit management device, based on the result of face authentication of the person entering or leaving, passage is permitted if the person is authorized (when the authentication result is approval); if the person is not authorized (when the authentication result is denial), passage is blocked, intruder detection is notified, and so on.
- The face authentication device 10A includes an imaging unit 11, a display unit 12, an ID reading unit 13, and a calculation unit 20.
- the imaging unit 11 is configured by a camera such as a surveillance camera, for example.
- the imaging unit 11 includes an imaging lens, an imaging device, an image signal processing circuit, a communication interface, and the like.
- the imaging unit 11 captures a moving image of an imaging region including the target person 30 and outputs the captured image to the calculation unit 20.
- The ID reading unit 13 is configured by an information reading device such as a card reader, for example; it reads the ID information of an ID card 35 possessed by the target person 30, such as a membership card, an identification card, or a driver's license, and outputs the ID information to the calculation unit 20.
- the calculation unit 20 is configured by a computer such as a PC.
- the calculation unit 20 includes a processor, a memory, a communication interface, and the like.
- The display unit 12 is configured by a display device such as a liquid crystal display, and displays various types of information at the time of processing in the calculation unit 20, such as an operation guidance screen, the input image of the captured image, the face detection result, the frame detection result, the verification result, and the authentication result of face authentication. Processing in the face authentication device of this embodiment, such as frame detection, will be described in detail later.
- the display unit 12 is disposed, for example, in a monitoring room of the face authentication system 100A, and is provided so as to be visible to an administrator or operator of the face authentication system 100A.
- the display unit 12 may be provided in the vicinity of the imaging unit 11 for guidance display to the target person 30.
- FIG. 2 is a block diagram illustrating a functional configuration of the face authentication apparatus according to the first embodiment.
- The calculation unit 20A of the face authentication device includes, as functional components, a face recognition unit 21, a frame detection unit 22, an arrangement determination unit 23, a face collation unit 24, and a validity determination unit 25.
- The calculation unit 20A implements the functions of the respective units by having the processor execute a predetermined program. In this embodiment, even when spoofing is attempted by playing back a moving image of a legitimate registrant on a portable display terminal, the outline of the display terminal is recognized by performing frame detection, so that fraud using a moving image of a face can be detected.
- the face recognition unit 21 has a function of a face detection unit.
- the face recognition unit 21 detects the face of the target person in the input image by face recognition processing using the input image of the captured image including the face image, and acquires face feature information and position information (face arrangement information).
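As an illustration of the kind of face detection and face arrangement information described above, the following sketch uses OpenCV's bundled Haar cascade detector; the patent does not specify which face recognition algorithm unit 21 uses, so this detector is only an assumed stand-in.

```python
# Minimal face-detection sketch (OpenCV Haar cascade assumed; the patent does
# not specify the algorithm used by the face recognition unit 21).
import cv2

def detect_faces(input_image_bgr):
    """Return face bounding boxes (x, y, w, h) as face arrangement information."""
    gray = cv2.cvtColor(input_image_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [tuple(map(int, f)) for f in faces]
```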
- The frame detection unit 22 uses the input image of the captured image and a background image to detect a linear frame in the input image after the background has been removed. If the captured image includes a frame such as the outline of a display terminal, the frame detection unit 22 acquires its position information (frame arrangement information).
- The arrangement determination unit 23 uses the face arrangement information and the frame arrangement information to determine whether the face is located within a frame such as that of a display terminal. This arrangement determination detects that a display terminal is present in the input image and that the face image is being displayed within the frame of that display terminal.
- The face collation unit 24 collates the face detected in the captured image by the face recognition unit 21, using the facial feature information acquired from the captured image and the collation data of the registrant's face information, and determines whether the face matches the registrant's face.
- As the collation data, image data of the registrant's face image, or feature data including feature information of the registrant's face, is used.
- the collation data is recorded as one of the ID information on the ID card possessed by the target person, and the collation data of the ID information read from the ID card is input to the face collation unit 24 of the arithmetic unit 20A.
- the validity determination unit 25 has functions of an authentication determination unit and an authentication result output unit.
- Based on the arrangement determination result from the arrangement determination unit 23 and the face collation result from the face collation unit 24, the validity determination unit 25 determines whether the face of the target person in the input image is valid, that is, whether the authentication result of face authentication is approval or denial, and outputs the authentication result.
- FIG. 3 is a diagram for explaining frame detection processing in the present embodiment.
- The illustrated example shows a case where the input image 51 of the captured image is obtained by imaging a situation in which a moving image showing the face of a legitimate registrant is being played back on a portable display terminal.
- In this case, the face of the registrant displayed on the display terminal is captured as the subject.
- The input image 51 therefore includes the housing 53 of the display terminal and a person 54 in the display image shown on the display unit of the display terminal.
- the frame detection unit 22 performs a known contour extraction (edge detection) process on the input image 51 to generate an edge image 52.
- the edge image 52 includes the outline 55 of the housing of the display terminal and the outline 56 of the person in the display unit of the display terminal.
- the frame detection unit 22 performs a straight line detection process on the edge image 52 and extracts a straight line portion (line segment) 57 in the image.
- For the straight line detection process, for example, straight line detection using the Hough transform may be used.
- The frame detection unit 22 determines whether the extracted straight line portions 57 contain two parallel straight lines and straight lines perpendicular to them, extracts a frame 58 surrounded by the four sides, and acquires the frame coordinates.
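A minimal sketch of this frame detection idea, assuming OpenCV's Canny edge detector and probabilistic Hough transform; the rectangle-grouping heuristic at the end is illustrative and is not the patented method.

```python
# Frame-detection sketch for Fig. 3: edge image -> straight-line detection ->
# group roughly parallel/perpendicular segments into a candidate frame.
import cv2
import numpy as np

def detect_display_frame(image_bgr, angle_tol_deg=5.0):
    """Return a candidate frame box (x, y, w, h), or None if no frame is found."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                       # contour (edge) image
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                               minLineLength=60, maxLineGap=10)
    if segments is None:
        return None
    horizontals, verticals = [], []
    for x1, y1, x2, y2 in segments[:, 0]:
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1))) % 180
        if angle < angle_tol_deg or angle > 180 - angle_tol_deg:
            horizontals.append((x1, y1, x2, y2))
        elif abs(angle - 90) < angle_tol_deg:
            verticals.append((x1, y1, x2, y2))
    # A frame needs at least two parallel sides plus two perpendicular sides.
    if len(horizontals) < 2 or len(verticals) < 2:
        return None
    xs = [x for x1, _, x2, _ in verticals for x in (x1, x2)]
    ys = [y for _, y1, _, y2 in horizontals for y in (y1, y2)]
    x, y = min(xs), min(ys)
    return (int(x), int(y), int(max(xs) - x), int(max(ys) - y))  # frame coordinates
```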
- FIG. 4 is a diagram for explaining the background removal processing in the present embodiment.
- The illustrated example shows a case where impersonation using a display terminal is not being performed, and an image obtained by directly capturing the face of the target person is input as the input image 61 of the captured image.
- the input image 61 includes a background 65 and a person 66.
- As the background image 62, an image that includes the background 65 of the same imaging area as the input image 61, captured in a state where no person is present, is used.
- The background image 62 is acquired, for example, as an image obtained by capturing only the background at a time when no person is present, or as an image obtained by extracting fixed points whose positions do not change (for which no amount of movement is detected) in the captured images.
- the frame detection unit 22 removes the background 65 in the input image 61 using the input image 61 and the background image 62, and generates an image 63 after the background is removed.
- The image 63 after background removal contains no background portion and shows only moving subjects, such as the person 66.
- When impersonation using a display terminal is being performed, the image after background removal includes the display terminal and the person displayed on it.
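A minimal sketch of this background removal, assuming a simple absolute-difference threshold against the person-free background image; the patent does not prescribe a specific subtraction method.

```python
# Background-removal sketch for Fig. 4: keep only pixels that differ from the
# stored background image (moving subjects such as a person or a handheld terminal).
import cv2

def remove_background(input_bgr, background_bgr, thresh=30):
    """Return the input image with static background pixels suppressed."""
    diff = cv2.absdiff(input_bgr, background_bgr)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    return cv2.bitwise_and(input_bgr, input_bgr, mask=mask)
```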
- FIG. 5 is a diagram illustrating a first example of a functional configuration of the frame detection unit in the present embodiment.
- the frame detection unit 22A of the first example includes contour extraction units 221, 222, a background removal unit 223, a straight line detection unit 224, and a frame extraction unit 225 as functional components.
- the first example is an example in which a contour is extracted for each of the input image and the background image, the contour of the background portion is removed from the contour of the captured image, and frame detection is performed on the edge image from which the background is removed.
- the contour extraction unit 221 performs edge detection on the input image of the captured image and extracts the contour of the captured image.
- the contour extracting unit 222 performs edge detection on the background image to extract the contour of the background image.
- the background removing unit 223 removes the contour of the background image from the contour of the captured image, and generates an edge image with a contour that does not include the background.
- the straight line detection unit 224 performs straight line detection processing such as Hough transform on the edge image, and extracts a straight line portion in the image.
- The frame extraction unit 225 extracts a frame surrounded by four line segments from the extracted straight line portions in the image, and outputs it as frame coordinates. In this way, by extracting the contour of the input image and the contour of the background image and then removing the background contour, background removal can be performed efficiently and frame detection can be executed with high accuracy.
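A sketch of this first ordering (contour extraction on both images, then removal of the background contours), assuming OpenCV primitives; the dilation step is an added tolerance for slight misalignment and is not taken from the patent.

```python
# Fig. 5 pipeline sketch: edge images of input and background, background edges
# removed at the edge level; line/frame detection would then run on the result.
import cv2

def edge_image_without_background(input_bgr, background_bgr):
    """Edge image of the input with edges also present in the background suppressed."""
    input_edges = cv2.Canny(cv2.cvtColor(input_bgr, cv2.COLOR_BGR2GRAY), 50, 150)
    bg_edges = cv2.Canny(cv2.cvtColor(background_bgr, cv2.COLOR_BGR2GRAY), 50, 150)
    bg_mask = cv2.dilate(bg_edges, None, iterations=2)   # tolerate slight misalignment
    return cv2.bitwise_and(input_edges, cv2.bitwise_not(bg_mask))
```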
- FIG. 6 is a diagram illustrating a second example of the functional configuration of the frame detection unit in the present embodiment.
- the frame detection unit 22B of the second example includes a background removal unit 226, a contour extraction unit 227, a straight line detection unit 224, and a frame extraction unit 225 as functional components.
- the second example is an example in which the background image is removed from the input image, the contour is extracted from the image after the background removal, and frame detection is performed on the edge image from which the background has been removed.
- the background removal unit 226 removes the background image from the captured image and generates a captured image that does not include the background.
- the contour extraction unit 227 performs edge detection on the captured image after background removal, and extracts the contour of the captured image.
- the straight line detection unit 224 performs straight line detection processing such as Hough transform on the edge image, and extracts a straight line portion in the image.
- The frame extraction unit 225 extracts a frame surrounded by four line segments from the extracted straight line portions in the image, and outputs it as frame coordinates. In this way, when conditions such as image brightness are similar between the captured image and the background image, the contour can be extracted from the image after the background is removed, and frame detection can be executed with high accuracy.
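For comparison, a sketch of this second ordering, reusing the remove_background() helper from the earlier background-removal sketch; again this is illustrative only, not the patented implementation.

```python
# Fig. 6 pipeline sketch: remove the background at the image level first, then
# extract contours from the background-free image for line/frame detection.
import cv2

def edge_image_background_first(input_bgr, background_bgr):
    foreground = remove_background(input_bgr, background_bgr)  # from the earlier sketch
    gray = cv2.cvtColor(foreground, cv2.COLOR_BGR2GRAY)
    return cv2.Canny(gray, 50, 150)
```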
- FIG. 7 is a diagram for explaining the arrangement determination processing in the present embodiment.
- the arrangement determination unit 23 acquires face arrangement information of the target person obtained as one of the face detection results by the face recognition unit 21.
- the face arrangement information includes a face frame 68 indicating the face contour position in the input image of the captured image.
- The arrangement determination unit 23 also acquires the frame arrangement information of the display terminal obtained as one of the frame detection results by the frame detection unit 22.
- As shown in the figure, the frame arrangement information includes the frame 58 indicating the contour position of the display terminal detected in the captured image.
- The arrangement determination unit 23 compares the positions of the face frame 68 and the frame 58 and determines whether the face frame 68 lies within the area inside the frame 58, that is, whether the target person's face is located within the frame of the display terminal.
- When the face frame 68 exists inside the frame 58, the arrangement determination unit 23 outputs an arrangement determination result indicating that a face image is present within the frame.
- Based on the arrangement determination result from the arrangement determination unit 23, when the face image exists within the frame, the validity determination unit 25 considers the face image detected in the captured image to be a face image displayed by a display terminal and determines the face collation result to be invalid. When the validity determination unit 25 makes an invalidity determination in this way, impersonation using a display terminal is detected. In the case of an invalidity determination, the validity determination unit 25 outputs denial as the authentication result of face authentication.
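The arrangement determination itself reduces to a rectangle-containment test between the face frame and the detected frame; a minimal sketch follows, with both boxes assumed to be in (x, y, w, h) format.

```python
# Arrangement-determination sketch for Fig. 7: is the face frame 68 inside frame 58?
def face_inside_frame(face_box, frame_box):
    """face_box and frame_box are (x, y, w, h); True if the face lies inside the frame."""
    fx, fy, fw, fh = face_box
    rx, ry, rw, rh = frame_box
    return (rx <= fx and ry <= fy and
            fx + fw <= rx + rw and fy + fh <= ry + rh)

# Example: a face box centered inside a larger frame box -> True (invalidity determination).
print(face_inside_frame((120, 80, 60, 60), (100, 50, 200, 300)))
```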
- FIG. 8 is a flowchart showing the procedure of a first example of face authentication processing in the first embodiment.
- the calculation unit 20A of the face authentication apparatus captures the input image in the face recognition unit 21 (S11), and performs face recognition processing of the target person in the input image (S12).
- the computing unit 20A determines whether or not the face recognition unit 21 has detected the face of the target person (S13), and continues the face recognition process until a face is detected in the input image.
- When the face of the target person is detected (S13: Yes), the calculation unit 20A performs face collation of the detected target person in the face collation unit 24 (S14).
- In the face collation by the face collation unit 24, the calculation unit 20A determines whether the facial feature information of the target person matches the collation data of the registrant's face (S15); if they match (S15: Yes), the processing continues. On the other hand, when the facial feature information of the target person does not match the collation data of the registrant's face (S15: No), the calculation unit 20A makes an invalidity determination in the validity determination unit 25 and outputs denial as the authentication result of face authentication.
- The calculation unit 20A then performs frame detection in the input image in the frame detection unit 22 (S16), and the arrangement determination unit 23 performs an arrangement determination between the face position obtained by face detection and the frame position obtained by frame detection (S17).
- In the arrangement determination, the calculation unit 20A determines whether a frame is detected in the input image and a face exists inside that frame, that is, whether a frame surrounding the target person's face exists (S18).
- When a frame surrounding the target person's face exists (S18: Yes), the calculation unit 20A makes an invalidity determination in the validity determination unit 25 and outputs denial as the authentication result of face authentication.
- When no frame surrounding the target person's face exists (S18: No), the calculation unit 20A makes a validity determination in the validity determination unit 25 and outputs approval as the authentication result of face authentication.
- The face detection and the frame detection may be executed in parallel, or one process may be executed first and then the other in series.
- The execution timing and order of these processes are not limited. Similarly, face collation and arrangement determination may be performed in parallel, or one may be performed first and then the other; their execution timing and order are likewise not limited. A combined sketch of this flow is shown below.
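Putting steps S11 to S18 together, the decision logic of this first example can be sketched as follows, building on the earlier sketches; face_matches() is a hypothetical stand-in for the collation against the registrant's data, and the background-removal step is omitted for brevity.

```python
# Illustrative flow for Fig. 8 (S11-S18); a sketch, not the patented implementation.
def authenticate(input_bgr, registrant_data):
    faces = detect_faces(input_bgr)                              # S12-S13: face detection
    if not faces:
        return None                                              # no face yet; keep capturing
    face_box = faces[0]
    if not face_matches(input_bgr, face_box, registrant_data):   # S14-S15 (hypothetical helper)
        return "deny"
    frame_box = detect_display_frame(input_bgr)                  # S16: frame detection
    if frame_box is not None and face_inside_frame(face_box, frame_box):  # S17-S18
        return "deny"                                            # face shown inside a display frame
    return "approve"
```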
- FIG. 9 is a flowchart showing the procedure of a second example of the face authentication process in the first embodiment.
- the second example of face authentication processing shows a processing example in the case where there are a plurality of faces in the input image of the captured image.
- In the image captured for face authentication, a plurality of face images may be detected, for example face images of several people, or a spoofed face image together with the face image of the target person or of another person.
- The calculation unit 20A of the face authentication device captures the input image in the face recognition unit 21 (S21) and performs face recognition processing on the target person in the input image (S22). At this time, the calculation unit 20A acquires the face detection number (N) as the number of face images detected by the face recognition unit 21. The calculation unit 20A determines whether the face detection number (N) is greater than 0, that is, whether a face including that of the target person has been detected (S23), and continues the face recognition processing until a face is detected in the input image. When one or more faces are detected and the face detection number (N) is 1 or more (S23: Yes), the calculation unit 20A performs frame detection in the input image in the frame detection unit 22 (S24).
- The calculation unit 20A then performs face collation for the k-th detected face in the face collation unit 24 (S25) and, in that face collation, determines whether the facial feature information of the face being processed matches the collation data of the registrant's face (S26).
- When the face does not match (S26: No) and an unprocessed face remains, the calculation unit 20A performs face collation for the next k-th face (S25, S26), and repeats the same processing until no unprocessed face remains.
- When no detected face matches the collation data and no unprocessed face remains, the calculation unit 20A makes an invalidity determination in the validity determination unit 25 and outputs denial as the authentication result of face authentication.
- When the facial feature information of the face being processed matches the collation data of the registrant's face (S26: Yes), the calculation unit 20A performs, in the arrangement determination unit 23, an arrangement determination between the position of that k-th face obtained by face detection and the position of the frame obtained by frame detection (S28). In this arrangement determination, the calculation unit 20A determines whether a frame is detected in the input image and the face exists inside that frame, that is, whether a frame surrounding the face being processed exists (S29).
- The face detection and the frame detection may be executed in parallel, or one process may be executed first and then the other in series.
- The execution timing and order of these processes are not limited. Face collation and arrangement determination may likewise be executed in parallel for each of the plurality of detected faces, or one process may be executed first and then the other; their execution timing and order are not limited.
- In the frame detection, a plurality of frames of the moving image may be used as input images, frame detection may be performed on each frame, and a frame detection result obtained by integrating the detection results over the plurality of frames may be acquired.
- For example, a frame may not be detectable from a single frame image, such as when a person's hand covers part of the display unit of the portable display terminal.
- Even in such a case, by integrating the detection results over multiple frames, frame detection can be executed with high accuracy and the stability of frame detection can be improved.
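A minimal sketch of such integration over several video frames, using a simple vote; the patent does not specify the integration rule, so the threshold below is an assumption.

```python
# Multi-frame integration sketch: accept a frame detection only if it appears
# in at least min_hits of the supplied video frames (per-frame detection reuses
# the detect_display_frame sketch above).
def integrate_frame_detections(video_frames, min_hits=2):
    hits = [detect_display_frame(f) for f in video_frames]
    hits = [h for h in hits if h is not None]
    return hits[0] if len(hits) >= min_hits else None
```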
- As described above, in the first embodiment, fraud such as impersonation using a portable display terminal can be detected by frame detection, so that impersonation at the time of authentication can be detected with high accuracy.
- In the face collation, biological information acquired from the moving image captured by the imaging unit may also be used, and the facial feature information may be combined with such biological information.
- the reliability of face authentication can be improved by combining biometric information and face image information.
- FIG. 10 is a block diagram illustrating a functional configuration of the face authentication apparatus according to the second embodiment.
- The calculation unit 20B of the face authentication device according to the second embodiment includes, as functional components, a face recognition unit 21, a frame detection unit 22, an arrangement determination unit 23, a face collation unit 24, a validity determination unit 25a, and an ID authentication unit 26.
- The functions and operations of the face recognition unit 21, the frame detection unit 22, the arrangement determination unit 23, and the face collation unit 24 are the same as those in the first embodiment shown in FIG. 2, so their description is omitted here and the explanation focuses on the differing parts.
- The ID authentication unit 26 uses the ID information acquired by the ID reading unit 13 to authenticate the ID information recorded on an ID card possessed by the target person, such as an identification card, a membership card, or a driver's license. Note that, instead of the ID information acquired from an ID card, the ID authentication unit 26 may perform authentication using, as the ID information, input information such as a password or identification code entered by the target person, or identification information unique to each person, such as the target person's iris or fingerprint.
- the validity determination unit 25a has functions of an authentication determination unit and an authentication result output unit.
- The validity determination unit 25a determines whether the face of the target person in the input image is valid, that is, whether the authentication result of face authentication is approval or denial, based on the arrangement determination result from the arrangement determination unit 23, the face collation result from the face collation unit 24, and the ID authentication result from the ID authentication unit 26, and outputs the authentication result.
- As described above, in the second embodiment as well, fraud such as impersonation using a portable display terminal can be detected by frame detection, so that impersonation at the time of authentication can be detected with high accuracy.
- FIG. 11 is a block diagram illustrating a functional configuration of the face authentication apparatus according to the third embodiment.
- the calculation unit 20C of the face authentication device according to the third embodiment includes a face recognition unit 21, a frame detection unit 22, an arrangement determination unit 23, and a face matching unit 24a as functional components.
- The functions and operations of the face recognition unit 21, the frame detection unit 22, and the arrangement determination unit 23 are the same as those in the first embodiment shown in FIG. 2, so their description is omitted here.
- the face matching unit 24a has functions of an authentication determination unit and an authentication result output unit.
- Based on the arrangement determination result from the arrangement determination unit 23, when no frame is detected in the captured image, or when a frame is detected but does not surround the face, the face collation unit 24a collates the face detected in the captured image and determines whether it matches the registrant's face.
- the face matching unit 24a determines whether the authentication result of face authentication is approval or not based on the result of face matching, and outputs the authentication result.
- FIG. 12 is a flowchart showing a procedure of a first example of face authentication processing in the third embodiment.
- the calculation unit 20C of the face authentication apparatus captures the input image in the face recognition unit 21 (S31), and performs face recognition processing of the target person in the input image (S32).
- the calculation unit 20C determines whether or not the face recognition unit 21 has detected the face of the target person (S33), and continues the face recognition processing until a face is detected in the input image.
- When the face of the target person is detected (S33: Yes), the calculation unit 20C performs frame detection in the input image in the frame detection unit 22 (S34), and the arrangement determination unit 23 performs an arrangement determination between the face position obtained by face detection and the frame position obtained by frame detection (S35).
- In the arrangement determination, the calculation unit 20C determines whether a frame is detected in the input image and a face exists inside that frame, that is, whether a frame surrounding the target person's face exists (S36). When a frame surrounding the target person's face exists (S36: Yes), the calculation unit 20C makes an invalidity determination in the face collation unit 24a and outputs denial as the authentication result of face authentication.
- When no frame surrounding the target person's face exists (S36: No), the calculation unit 20C performs face collation of the detected target person in the face collation unit 24a (S37).
- In the face collation by the face collation unit 24a, the calculation unit 20C determines whether the facial feature information of the target person matches the collation data of the registrant's face (S38); if it matches (S38: Yes), approval is output as the authentication result of face authentication.
- When the facial feature information does not match the collation data of the registrant's face (S38: No), the calculation unit 20C makes an invalidity determination in the face collation unit 24a and outputs denial as the authentication result of face authentication.
- The face detection and the frame detection may be executed in parallel, or one process may be executed first and then the other in series.
- The execution timing and order of these processes are not limited.
- FIG. 13 is a flowchart showing the procedure of the second example of the face authentication process in the third embodiment.
- the second example of face authentication processing shows a processing example in the case where there are a plurality of faces in the input image of the captured image.
- The calculation unit 20C of the face authentication device captures the input image in the face recognition unit 21 (S41) and performs face recognition processing on the target person in the input image (S42). At this time, the calculation unit 20C acquires the face detection number (N) as the number of face images detected by the face recognition unit 21. The calculation unit 20C determines whether the face detection number (N) is greater than 0, that is, whether a face including that of the target person has been detected (S43), and continues the face recognition processing until a face is detected in the input image. When one or more faces are detected and the face detection number (N) is 1 or more (S43: Yes), the calculation unit 20C performs frame detection in the input image in the frame detection unit 22 (S44).
- For the k-th detected face, the calculation unit 20C performs, in the arrangement determination unit 23, an arrangement determination between the position of the face obtained by face detection and the position of the frame obtained by frame detection (S45).
- In the arrangement determination, the calculation unit 20C determines whether a frame is detected in the input image and the face exists inside that frame, that is, whether a frame surrounding the face being processed exists (S46).
- When no frame surrounding the face being processed exists (S46: No), the calculation unit 20C performs face collation for the k-th face in the face collation unit 24a (S48).
- the calculation unit 20C determines whether or not the feature information of the face to be processed matches the collation data of the registrant's face in the face collation performed by the face collation unit 24a (S49).
- When there is an unprocessed face (S47: Yes), the calculation unit 20C performs the same arrangement determination and face collation for the next k-th face (S45, S46, S48, S49). When there is no unprocessed face (S47: No), the calculation unit 20C makes an invalidity determination in the face collation unit 24a and outputs denial as the authentication result of face authentication. On the other hand, when the feature information of the face being processed matches the collation data of the registrant's face (S49: Yes), the calculation unit 20C makes a validity determination in the face collation unit 24a and outputs approval as the authentication result of face authentication.
- The face detection and the frame detection may be executed in parallel, or one process may be executed first and then the other in series.
- The execution timing and order of these processes are not limited.
- As described above, in the third embodiment as well, fraud such as impersonation using a portable display terminal can be detected by frame detection, so that impersonation at the time of authentication can be detected with high accuracy.
- FIG. 14 is a block diagram illustrating a functional configuration of the face authentication apparatus according to the fourth embodiment.
- The calculation unit 20D of the face authentication device according to the fourth embodiment includes, as functional components, a face recognition unit 21, a frame detection unit 22, an arrangement determination unit 23, a face collation unit 24a, a validity determination unit 25b, and an ID authentication unit 26.
- The functions and operations of the face recognition unit 21, the frame detection unit 22, and the arrangement determination unit 23 are the same as those in the first embodiment shown in FIG. 2, and the functions and operations of the face collation unit 24a are the same as those in the third embodiment shown in FIG. 11.
- The ID authentication unit 26 uses the ID information acquired by the ID reading unit 13 to authenticate the ID information recorded on an ID card possessed by the target person, such as an identification card, a membership card, or a driver's license. Note that, instead of the ID information acquired from an ID card, the ID authentication unit 26 may perform authentication using, as the ID information, input information such as a password or identification code entered by the target person, or identification information unique to each person, such as the target person's iris or fingerprint.
- the validity determination unit 25b has functions of an authentication determination unit and an authentication result output unit.
- The validity determination unit 25b determines whether the face of the target person in the input image is valid, that is, whether the authentication result of face authentication is approval or denial, based on the face collation result from the face collation unit 24a and the ID authentication result from the ID authentication unit 26, and outputs the authentication result.
- As described above, in the fourth embodiment as well, fraud such as impersonation using a portable display terminal can be detected by frame detection, so that impersonation at the time of authentication can be detected with high accuracy.
- FIG. 15 is a diagram illustrating a second example of the configuration of the face authentication system according to the present embodiment.
- The face authentication system 100B includes a face authentication device 10B, captures an image of the target person 30, performs face authentication, and outputs the authentication result to the control target 40.
- The face authentication device 10B includes an imaging unit 11, a display unit 12, an input device 14, a storage device 15, and a calculation unit 20. Note that the functions and operations of the imaging unit 11, the display unit 12, and the calculation unit 20 are the same as those in the first example shown in FIG. 1, so their description is omitted here and different parts will be mainly described.
- the input device 14 has a function of an ID input unit for inputting ID information.
- The input device 14 includes input devices such as a touch panel, a keyboard, and a mouse. Based on an input operation by the target person 30 or another operator, ID information of the target person 30, such as a membership number, an identification number, or a driver's license number, is entered.
- the storage device 15 has a function of a storage unit that stores collation data.
- the storage device 15 includes a storage device such as a memory and a hard disk.
- The storage device 15 stores and accumulates the registrants' face collation data (face image data, facial feature information, and the like), and the collation data corresponding to the ID information of the target person 30 is read out and output to the calculation unit 20.
- the calculation unit 20 executes various calculation processes related to the face authentication described above and outputs an authentication result.
- the target person 30 can input the password from the input device 14 and perform password authentication, or face authentication combining face verification and password authentication may be performed.
- In the face authentication system 100B as well, face detection and frame detection are executed as in the face authentication system 100A of the first example described above, so fraud such as impersonation using a portable display terminal can be detected by the frame detection, and the accuracy of face authentication of the target person 30 can be improved.
- FIG. 16 is a diagram illustrating a third example of the configuration of the face authentication system according to the present embodiment.
- The face authentication system 100C includes a face authentication device 10C and a server device 80.
- The face authentication system 100C images the target person 30 to perform face authentication, and outputs the authentication result to the control target 40.
- The face authentication device 10C and the server device 80 are connected via a network 70 so that various types of information can be transmitted and received.
- The face authentication device 10C includes an imaging unit 11, a display unit 12, an ID reading unit 13, and a control unit 16. Note that the functions and operations of the imaging unit 11, the display unit 12, and the ID reading unit 13 are the same as those in the first example shown in FIG. 1, so their description is omitted here and different parts will be mainly described.
- the control unit 16 is configured by a computer such as a PC and has a communication unit having a communication function.
- the control unit 16 encodes information used for authentication, such as image information of a captured image acquired by the imaging unit 11 and ID information acquired by the ID reading unit 13, and communicates with the server device 80. Further, the control unit 16 inputs processing information such as an authentication result of face authentication transmitted from the server device 80, and transmits it to the display unit 12 for display together with an operation guidance screen and an input image of a captured image.
- the server device 80 includes an arithmetic device 81 and a storage device 82.
- the calculation device 81 is configured by a computer including a processor, a memory, a communication interface, and the like, and executes various calculation processes related to face authentication, like the calculation unit 20 of the face authentication device 10A of the first example.
- the storage device 82 has a function of a storage unit that stores collation data.
- the storage device 82 includes a storage device such as a memory and a hard disk.
- The storage device 82 stores and accumulates the registrants' face collation data (face image data, facial feature information, and the like), and the collation data corresponding to the ID information of the target person 30 is read out and output to the arithmetic device 81.
- the arithmetic device 81 executes various arithmetic processes related to the face authentication described above and outputs an authentication result.
- The various calculation processes related to face authentication may be performed in the control unit 16 of the face authentication device 10C in the same manner as in the calculation unit 20 of the face authentication device 10A of the first example, or may be distributed between the control unit 16 of the face authentication device 10C and the arithmetic device 81 of the server device 80.
- When performing the face authentication processing in the control unit 16, the face authentication device 10C acquires the collation data from the server device 80 via the network 70 and executes the face authentication processing. In addition, the authentication result may be transmitted via the network 70 and notified not only to the control target 40 and the server device 80 but also to a management device at another location.
- In the face authentication system 100C as well, face detection and frame detection are executed as in the face authentication system 100A of the first example, so fraud such as impersonation using a portable display terminal can be detected by the frame detection, and the accuracy of face authentication of the target person 30 can be improved. Further, by using the collation data accumulated in the storage device 82 of the server device 80, face authentication using a large amount of collation data becomes possible.
- As described above, the face authentication devices 10A, 10B, and 10C, as examples of the face authentication processing device according to the present embodiment, each include: a face recognition unit 21 as a face detection unit that acquires a captured image of the target person and detects the face of the target person in an input image of the captured image; a frame detection unit 22 that detects a linear frame in the input image; an arrangement determination unit 23 that determines, using the face position information acquired by face detection and the frame position information acquired by frame detection, whether a frame surrounding the face of the target person exists; and an authentication determination unit (the validity determination unit 25 or the face collation unit 24a) that determines the validity of the authentication result of the target person's face information based on the frame arrangement determination result.
- With this configuration, a frame in the captured image is detected, and the presence or absence of the frame and the arrangement of the frame relative to the face are determined. Based on the frame determination result, fraud such as impersonation using a portable display terminal can be detected, and impersonation at the time of authentication can be detected with high accuracy. Thereby, the reliability and stability of face authentication can be improved, and a highly accurate face authentication processing device and face authentication processing system can be realized.
- The calculation unit 20A of the face authentication device 10A includes a face collation unit 24 that collates the face information of the captured image of the target person with registered face information, and the validity determination unit 25 determines the validity of the authentication result of the target person's face information based on the face collation result of the face information and the frame arrangement determination result. Thereby, fraud such as impersonation using a portable display terminal can be detected, and impersonation at the time of authentication can be accurately detected.
- The calculation unit 20C of the face authentication device 10A includes a face collation unit 24a that collates the face information of the captured image of the target person with registered face information; when the frame arrangement determination result indicates that no frame surrounding the target person's face exists, the face collation unit 24a collates the face information and determines the validity of the authentication result of the target person's face information based on the face collation result of the face information. Thereby, fraud such as impersonation using a portable display terminal can be detected, and impersonation at the time of authentication can be accurately detected.
- The calculation units 20B and 20D of the face authentication device 10A include an ID authentication unit 26 that acquires and authenticates the ID information of the target person, and the validity determination units 25a and 25b further use the authentication result of the ID information to determine the validity of the authentication result of the target person's face information. Thereby, the reliability of face authentication can be improved by combining ID information and face image information.
- the frame detection units 22A and 22B of the face authentication device 10A have background removal units 223 and 226 that remove the background in the input image.
- The background removal units 223 and 226 of the face authentication device 10A remove the background of the input image using, as the background image, the background of the imaging area in which the target person is imaged, captured in a state where the target person is not present. Thereby, frame detection can be executed on an image that contains no background and shows only moving subjects, such as a person, and the accuracy of frame detection can be improved.
- The frame detection units 22A and 22B of the face authentication device 10A may use a plurality of frames of the moving image as input images and acquire a frame detection result obtained by integrating the detection results over the plurality of frames. As a result, even if a frame cannot be detected in a single frame image and some frame detection results are missing, frame detection can be performed with high accuracy by integrating the detection results over a plurality of frames, and the stability of frame detection can be improved.
- The face authentication systems 100A, 100B, and 100C each include: an imaging unit 11 that captures the target person; a face recognition unit 21 as a face detection unit that acquires the captured image of the target person and detects the face of the target person in an input image of the captured image; a frame detection unit 22 that detects a linear frame in the input image; an arrangement determination unit 23 that determines, using the face position information acquired by face detection and the frame position information acquired by frame detection, whether a frame surrounding the face of the target person exists; a face collation unit 24 that collates the face information of the captured image of the target person with registered face information; a validity determination unit 25 as an authentication determination unit that determines the validity of the authentication result of the target person's face information and as an authentication result output unit that outputs the authentication result to the control target; and a display unit 12 that displays a processing result of at least one of the imaging unit 11, the face recognition unit 21, the frame detection unit 22, the arrangement determination unit 23, and the validity determination unit 25.
- With this configuration, a frame in the captured image is detected, and when a frame is detected, the arrangement of the frame relative to the face is determined. Based on the frame arrangement determination result, fraud such as impersonation using a portable display terminal can be detected. Thereby, impersonation at the time of authentication can be detected with high accuracy, and the reliability and stability of face authentication can be improved.
- The face authentication systems 100A, 100B, and 100C may include an ID reading unit 13 that reads the ID information of the target person, or an input device 14 as an ID input unit for entering ID information, and an ID authentication unit 26 that authenticates the ID information, and the validity determination unit 25 further determines the validity of the authentication result of the target person's face information using the authentication result of the ID information. Thereby, the reliability of face authentication can be improved by combining ID information and face image information.
- The ID information may include the collation data of the target person's face, and the face collation unit 24 collates the face information of the captured image of the target person using the collation data included in the ID information.
- The face authentication systems 100B and 100C include storage devices 15 and 82 as storage units that store the collation data of the target person's face, and the face collation unit 24 acquires the collation data from the storage devices 15 and 82 and collates the face information of the captured image of the target person. Thereby, face collation can be performed using collation data stored in a storage device of the device itself or of an external server device, and face authentication using a large amount of collation data becomes possible.
- the present disclosure is useful as a face authentication processing device, a face authentication processing method, and a face authentication processing system that can accurately detect impersonation during authentication.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Collating Specific Patterns (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The present invention is provided with: a facial recognition unit (21) that acquires a captured image obtained by capturing a target person and detects the face of the target person in the input image of the captured image; a frame detection unit (22) that detects a linear frame in the input image; a disposition determination unit (23) that determines whether a frame surrounding the face of the target person exists by using face position information acquired through detection of the face and frame position information acquired through detection of the frame; and a validity determination unit (25) that determines the validity of the recognition result of the face information of the target person on the basis of a frame disposition determination result.
Description
The present disclosure relates to a face authentication processing device, a face authentication processing method, and a face authentication processing system that perform authentication using a face image of a target person.
対象人物の認証を行うために、対象人物の顔画像を取得し、登録者の顔画像と照合して本人であるかどうか認証を行う顔認証技術が知られている。この種の顔認証技術において、正規の登録者に成り代わって不正利用するいわゆる「なりすまし」行為を検知する技術が種々提案されている。
In order to authenticate a target person, a face authentication technique is known in which a face image of the target person is acquired and verified against the face image of the registrant to determine whether the person is the person. In this type of face authentication technology, various technologies for detecting a so-called “spoofing” act of improper use on behalf of a regular registrant have been proposed.
例えば、特許文献1には、所定の時間の顔画像列を入力し、顔画像列から抽出された所定の色情報の時間的な変化に基づいて、顔画像列に含まれる顔がなりすましであるか否かを判定する情報処理装置が開示されている。この従来例によれば、顔認証の際に、偽物の顔が提示された場合のなりすましを精度良く検知できる。
For example, in Patent Document 1, a face image sequence for a predetermined time is input, and a face included in the face image sequence is impersonated based on temporal changes in predetermined color information extracted from the face image sequence. An information processing apparatus for determining whether or not is disclosed. According to this conventional example, it is possible to accurately detect impersonation when a fake face is presented during face authentication.
上述した特許文献1では、顔画像列から抽出された所定の色情報の時間的な変化に基づき、顔画像から測定される血流の変化等に由来する情報を取得することにより、変装用マスクを装着した場合など、変化の少ない顔画像となる状態のなりすましを検知可能である。しかし、従来の顔認証装置では、顔画像の動画像を用いたなりすまし等に対して、不正の検知が困難な場合があった。
In Patent Document 1 described above, a mask for disguise is obtained by acquiring information derived from changes in blood flow measured from a face image based on temporal changes in predetermined color information extracted from the face image sequence. It is possible to detect impersonation of a face image with little change, such as when wearing. However, in the conventional face authentication device, it is sometimes difficult to detect fraud against impersonation using a moving image of a face image.
本開示は、上述した従来の事情に鑑みて案出され、認証時のなりすましを精度良く検出することができる顔認証処理装置、顔認証処理方法及び顔認証処理システムを提供することを目的とする。
The present disclosure is devised in view of the above-described conventional circumstances, and an object thereof is to provide a face authentication processing device, a face authentication processing method, and a face authentication processing system that can accurately detect impersonation during authentication. .
本開示は、対象人物を撮像した撮像画像を取得し、前記撮像画像の入力画像において前記対象人物の顔を検出する顔検出部と、前記入力画像において直線状の枠を検出する枠検出部と、前記顔の検出により取得した顔位置情報と、前記枠の検出により取得した枠位置情報とを用いて、前記対象人物の顔を囲む枠が存在するかどうかを判定する配置判定部と、前記枠の配置判定結果に基づき、前記対象人物の顔情報の認証結果の有効性を判定する認証判定部と、を備える顔認証処理装置を提供する。
The present disclosure acquires a captured image obtained by capturing a target person, detects a face of the target person in an input image of the captured image, and a frame detection unit detects a linear frame in the input image. An arrangement determining unit that determines whether a frame surrounding the face of the target person exists using the face position information acquired by detecting the face and the frame position information acquired by detecting the frame; There is provided a face authentication processing device comprising: an authentication determination unit that determines the validity of an authentication result of face information of the target person based on a frame arrangement determination result.
また、本開示は、対象人物の顔認証を行う顔認証処理装置における顔認証処理方法であって、前記対象人物を撮像した撮像画像を取得し、前記撮像画像の入力画像において前記対象人物の顔を検出し、前記入力画像において直線状の枠を検出し、前記顔の検出により取得した顔位置情報と、前記枠の検出により取得した枠位置情報とを用いて、前記対象人物の顔を囲む枠が存在するかどうかを判定し、前記枠の配置判定結果に基づき、前記対象人物の顔情報の認証結果の有効性を判定する、顔認証処理方法を提供する。
Further, the present disclosure is a face authentication processing method in a face authentication processing device that performs face authentication of a target person, obtains a captured image obtained by capturing the target person, and includes the face of the target person in an input image of the captured image , Detecting a straight frame in the input image, and surrounding the face of the target person using the face position information acquired by detecting the face and the frame position information acquired by detecting the frame Provided is a face authentication processing method for determining whether or not a frame exists and determining the validity of the authentication result of the face information of the target person based on the frame arrangement determination result.
The present disclosure also provides a face authentication processing system comprising: an imaging unit that captures an image of a target person; a face detection unit that acquires the captured image of the target person and detects the face of the target person in an input image of the captured image; a frame detection unit that detects a linear frame in the input image; an arrangement determination unit that determines whether a frame surrounding the face of the target person exists, using face position information acquired by the face detection and frame position information acquired by the frame detection; a face collation unit that collates face information of the captured image of the target person with registered face information; an authentication determination unit that determines the validity of the authentication result of the face information of the target person based on the face collation result of the face information and the arrangement determination result of the frame; an authentication result output unit that outputs the authentication result to a control target; and a display unit that displays at least one processing result of the imaging unit, the face detection unit, the frame detection unit, the arrangement determination unit, and the authentication determination unit.
According to the present disclosure, impersonation at the time of authentication can be detected with high accuracy.
Hereinafter, embodiments that specifically disclose a face authentication processing device, a face authentication processing method, and a face authentication processing system according to the present disclosure (hereinafter referred to as "the present embodiment") will be described in detail with reference to the drawings as appropriate. However, unnecessarily detailed description may be omitted. For example, detailed description of already well-known matters and redundant description of substantially the same configuration may be omitted. This is to avoid making the following description unnecessarily redundant and to facilitate understanding by those skilled in the art. The accompanying drawings and the following description are provided so that those skilled in the art can fully understand the present disclosure, and are not intended to limit the subject matter described in the claims.
(System configuration of face authentication system, first example)
FIG. 1 is a diagram illustrating a first example of the configuration of the face authentication system according to the present embodiment. The present embodiment shows an example of a face authentication device and a face authentication system that, when performing face authentication of a target person, authenticate the person by collating a face image taken from a moving image of the target person with the face image of a registrant.
By using a moving image of the target person's face as the image information for face authentication, biological information such as pulsation of the person to be authenticated can be acquired, and fraud such as impersonation using a photograph can be detected. However, when, for example, an unauthorized person uses a portable display terminal such as a tablet terminal to play back a moving image showing the face of a legitimate registrant and thereby impersonates the registrant, the fraud cannot be detected even by face authentication using moving images, and the authentication result may wrongly be determined as approval. In view of the above, the present embodiment shows a configuration example capable of detecting fraud such as impersonation by playback of a moving image of a legitimate registrant on a portable display terminal.
The face authentication system 100A includes the face authentication device 10A, captures an image of the target person 30, performs face authentication, and outputs the authentication result to the control target 40. Examples of the control target 40 include a usage management device for a service such as car sharing and an entrance/exit management device for a security area. When applied to a car-sharing usage management device, based on the result of the user's face authentication, use of the car is permitted if the user is a registered legitimate user (if the authentication result is approval), and use of the car is disabled if the user is not (if the authentication result is denial). Control of the permission to use the car can be realized, for example, by inputting the authentication result to the control computer of the car, which enables or disables the operation of the car according to the authentication result. When applied to an entrance/exit management device, based on the result of face authentication of a person entering or leaving, passage is permitted if the person is authorized (if the authentication result is approval), and passage is blocked and intruder detection is reported, for example, if the person is not authorized (if the authentication result is denial).
The face authentication device 10A includes an imaging unit 11, a display unit 12, an ID reading unit 13, and a calculation unit 20. The imaging unit 11 is configured by a camera such as a surveillance camera, includes an imaging lens, an imaging device, an image signal processing circuit, a communication interface, and the like, captures a moving image of an imaging area including the target person 30, and outputs the captured image to the calculation unit 20. The ID reading unit 13 is configured by an information reading device such as a card reader, reads ID information from an ID card 35 such as a membership card, an identification card, or a driver's license carried by the target person 30, and outputs the ID information to the calculation unit 20.
The calculation unit 20 is configured by a computer such as a PC, and includes a processor, a memory, a communication interface, and the like. The calculation unit 20 executes various calculation processes related to face authentication using the image information of the captured image acquired by the imaging unit 11 and the ID information acquired by the ID reading unit 13, and outputs the authentication result. The display unit 12 is configured by a display apparatus including a display device such as a liquid crystal display, and displays various kinds of information during processing in the calculation unit 20, such as an operation guidance screen, the input image of the captured image, the face detection result, the frame detection result, the face collation result, and the authentication result of face authentication. Processing in the face authentication device of the present embodiment, such as frame detection, will be described in detail later. The display unit 12 is disposed, for example, in a monitoring room of the face authentication system 100A so as to be visible to an administrator or operator of the face authentication system 100A. The display unit 12 may also be provided near the imaging unit 11 for guidance display to the target person 30.
(First Embodiment of Face Authentication Device)
FIG. 2 is a block diagram illustrating the functional configuration of the face authentication device according to the first embodiment. The calculation unit 20A of the face authentication device includes, as functional components, a face recognition unit 21, a frame detection unit 22, an arrangement determination unit 23, a face collation unit 24, and a validity determination unit 25. The calculation unit 20A implements the function of each unit by executing a predetermined program on the processor. In the present embodiment, even when impersonation is attempted by playing back a moving image of a legitimate registrant on a portable display terminal, the outline of the display terminal is recognized by performing frame detection, and fraud using a moving image of a face image is detected.
The face recognition unit 21 has the function of a face detection unit. Using the input image of the captured image including a face image, the face recognition unit 21 detects the face of the target person in the input image by face recognition processing, and acquires feature information and position information (face arrangement information) of the face. The frame detection unit 22 uses the input image of the captured image and a background image to detect a linear frame in the input image from which the background has been removed, and acquires position information of the frame (frame arrangement information) when a frame such as that of a display terminal is present in the captured image. The arrangement determination unit 23 uses the face arrangement information and the frame arrangement information to determine whether the face is located inside a frame such as that of a display terminal. This arrangement determination detects that a display terminal is present in the captured image and that a face image is being displayed by the display terminal inside the frame of the display terminal.
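As an illustrative sketch only (not the disclosed implementation), the face detection performed by the face recognition unit 21 could be realized with an off-the-shelf detector; the OpenCV Haar-cascade detector and its parameters below are assumptions chosen for brevity.

```python
import cv2

def detect_faces(input_bgr):
    """Detect faces in the input image and return bounding boxes (x, y, w, h).

    A stand-in for the face detection of the face recognition unit 21; any
    detector that yields face position information (face arrangement information)
    could be substituted.
    """
    gray = cv2.cvtColor(input_bgr, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # scaleFactor and minNeighbors are illustrative defaults, not values from the disclosure.
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [tuple(face) for face in faces]
```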
The face collation unit 24 collates the face in the captured image detected by the face recognition unit 21, using the face feature information acquired from the captured image and collation data of the registrant's face information, and determines whether it matches the registrant's face. As the collation data, image data of the registrant's face image or feature data including feature information of the registrant's face is used. The collation data is, for example, recorded as one piece of the ID information on the ID card carried by the target person, and the collation data of the ID information read from the ID card is input to the face collation unit 24 of the calculation unit 20A. The validity determination unit 25 has the functions of the authentication determination unit and the authentication result output unit. Based on the arrangement determination result from the arrangement determination unit 23 and the face collation result from the face collation unit 24, the validity determination unit 25 determines whether the face of the target person in the input image is valid, that is, whether the authentication result of face authentication is approval or denial, and outputs the authentication result.
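The disclosure does not fix the collation algorithm; as one possible sketch, feature vectors extracted from the captured face and from the registrant's collation data could be compared by cosine similarity against a threshold. The feature-vector representation and the threshold value below are assumptions for illustration.

```python
import numpy as np

def collate_face(face_feature, registrant_feature, threshold=0.6):
    """Return True when the face feature matches the registrant's collation data.

    Both arguments are assumed to be feature vectors produced by some face
    feature extractor (not specified by the disclosure); the similarity
    threshold of 0.6 is an illustrative assumption.
    """
    a = np.asarray(face_feature, dtype=np.float64)
    b = np.asarray(registrant_feature, dtype=np.float64)
    similarity = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return similarity >= threshold
```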
Here, an example of the functional configuration and operation of the frame detection unit 22 will be described in detail. FIG. 3 is a diagram illustrating the frame detection processing in the present embodiment. The illustrated example shows a case where the input image 51 of the captured image captures a situation in which impersonation is performed by playing back, on a portable display terminal, a moving image showing the face of a legitimate registrant. In this case, the face of the registrant displayed on the display terminal is captured as the subject, and the input image 51 includes the housing 53 of the display terminal and the person 54 in the image displayed on the display unit of the display terminal. The frame detection unit 22 performs known contour extraction (edge detection) processing on the input image 51 to generate an edge image 52. The edge image 52 includes the outline 55 of the housing of the display terminal and the outline 56 of the person in the display unit of the display terminal. The frame detection unit 22 then performs straight-line detection processing on the edge image 52 and extracts straight-line portions (line segments) 57 in the image. As the straight-line detection processing, for example, straight-line detection using the Hough transform may be used. Next, the frame detection unit 22 determines whether the extracted straight-line portions 57 include two parallel straight lines and straight lines perpendicular to them, extracts a frame 58 surrounded by four sides, and acquires the frame coordinates. Note that a set of roughly rectangular line segments whose four sides are not closed, or a set of orthogonal line segments forming only three or two sides, may also be determined to be a frame surrounded by line segments based on a predetermined condition and extracted as the frame 58.
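A minimal sketch of the frame detection described above: edge detection, Hough-based line detection, and extraction of a frame from near-horizontal and near-vertical segments. The thresholds, the angle tolerances, and the simplification of returning the bounding box of the detected segments are assumptions; the actual frame detection unit 22 may extract the frame 58 under different conditions.

```python
import cv2
import numpy as np

def detect_frame(image_bgr):
    """Detect a rectangular frame (e.g., a display terminal housing) and return
    its coordinates (x1, y1, x2, y2), or None when no frame is found."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                          # contour (edge) extraction
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=80, maxLineGap=10)  # straight-line detection
    if lines is None:
        return None
    horizontals, verticals = [], []
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        if angle < 10 or angle > 170:       # near-horizontal segment
            horizontals.append((x1, y1, x2, y2))
        elif 80 < angle < 100:              # near-vertical segment
            verticals.append((x1, y1, x2, y2))
    # Require at least two horizontal and two vertical segments (four sides).
    if len(horizontals) < 2 or len(verticals) < 2:
        return None
    xs = [x for x1, _, x2, _ in horizontals + verticals for x in (x1, x2)]
    ys = [y for _, y1, _, y2 in horizontals + verticals for y in (y1, y2)]
    return min(xs), min(ys), max(xs), max(ys)
```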
In the frame detection processing, the frame detection unit 22 may perform straight-line detection and frame extraction after background removal that removes the background image from the captured image. Removing the background portion from the captured image can improve the accuracy of frame detection. FIG. 4 is a diagram illustrating the background removal processing in the present embodiment. The illustrated example shows a case where impersonation using a display terminal is not performed and an image in which the face of the target person is captured directly is input as the input image 61 of the captured image. In this case, the input image 61 includes a background 65 and a person 66. As the background image 62, an image that includes the background 65 in a state where no person is present in the same imaging area as the input image 61 is used. The background image 62 is acquired, for example, as an image obtained by capturing only the background at a timing when no person is present, or as an image obtained by extracting fixed points whose positions do not change (for which no movement is detected) in the captured images. The frame detection unit 22 removes the background 65 in the input image 61 using the input image 61 and the background image 62, and generates a background-removed image 63. The background-removed image 63 has no background portion and contains only moving subjects such as the person 66. When impersonation using a display terminal is performed, the background-removed image includes the display terminal and the person.
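One possible sketch of the background removal of FIG. 4, using a simple difference between the input image and a previously captured background image; the binarization threshold is an assumption.

```python
import cv2

def remove_background(input_bgr, background_bgr, diff_threshold=30):
    """Return the input image with the static background suppressed.

    Pixels whose difference from the background image is small are set to zero,
    leaving only moving subjects such as a person or a hand-held display terminal.
    """
    diff = cv2.absdiff(input_bgr, background_bgr)
    diff_gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(diff_gray, diff_threshold, 255, cv2.THRESH_BINARY)
    return cv2.bitwise_and(input_bgr, input_bgr, mask=mask)
```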
FIG. 5 is a diagram illustrating a first example of the functional configuration of the frame detection unit in the present embodiment. The frame detection unit 22A of the first example includes, as functional components, contour extraction units 221 and 222, a background removal unit 223, a straight-line detection unit 224, and a frame extraction unit 225. In the first example, contours are extracted from each of the input image and the background image, the contours of the background portion are removed from the contours of the captured image, and frame detection is performed on the background-removed edge image. The contour extraction unit 221 performs edge detection on the input image of the captured image to extract the contours of the captured image. The contour extraction unit 222 performs edge detection on the background image to extract the contours of the background image. The background removal unit 223 removes the contours of the background image from the contours of the captured image, and generates an edge image consisting of contours that do not include the background. The straight-line detection unit 224 performs straight-line detection processing such as the Hough transform on the edge image and extracts straight-line portions in the image. The frame extraction unit 225 extracts a frame surrounded by four line segments from the extracted straight-line portions and outputs it as frame coordinates. In this way, by extracting the contours of the input image and the contours of the background image separately and then removing the contours of the background image, the background removal efficiency can be increased and frame detection can be executed with high accuracy.
FIG. 6 is a diagram illustrating a second example of the functional configuration of the frame detection unit in the present embodiment. The frame detection unit 22B of the second example includes, as functional components, a background removal unit 226, a contour extraction unit 227, a straight-line detection unit 224, and a frame extraction unit 225. In the second example, the background image is removed from the input image, contours are extracted from the background-removed image, and frame detection is performed on the background-removed edge image. The background removal unit 226 removes the background image from the captured image and generates a captured image that does not include the background. The contour extraction unit 227 performs edge detection on the background-removed captured image to extract its contours. The straight-line detection unit 224 performs straight-line detection processing such as the Hough transform on the edge image and extracts straight-line portions in the image. The frame extraction unit 225 extracts a frame surrounded by four line segments from the extracted straight-line portions and outputs it as frame coordinates. In this way, when conditions such as image brightness are similar between the captured image and the background image, contours can be extracted from the background-removed image and frame detection can be executed with high accuracy.
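To make the difference between the two configurations concrete, the sketch below contrasts the processing order of FIG. 5 (edge-detect the input and background separately, then subtract) with that of FIG. 6 (subtract the background first, then edge-detect). Both functions return an edge image that would then be passed to the straight-line detection and frame extraction sketched earlier; the Canny thresholds are assumptions.

```python
import cv2

def background_removed_edges_fig5(input_bgr, background_bgr):
    """FIG. 5 style: extract edges of input and background separately,
    then remove the background edges."""
    input_edges = cv2.Canny(cv2.cvtColor(input_bgr, cv2.COLOR_BGR2GRAY), 50, 150)
    background_edges = cv2.Canny(cv2.cvtColor(background_bgr, cv2.COLOR_BGR2GRAY), 50, 150)
    return cv2.subtract(input_edges, background_edges)   # edge image without background contours

def background_removed_edges_fig6(input_bgr, background_bgr):
    """FIG. 6 style: remove the background from the input image first, then extract edges."""
    diff = cv2.absdiff(input_bgr, background_bgr)
    return cv2.Canny(cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY), 50, 150)
```

As the description suggests, the FIG. 5 ordering tends to be more tolerant of small brightness differences between the input and background captures, while the FIG. 6 ordering is simpler when capture conditions are similar.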
FIG. 7 is a diagram illustrating the arrangement determination processing in the present embodiment. As in FIG. 3, the illustrated example shows a case where the captured image captures a situation in which impersonation is performed by playing back, on a portable display terminal, a moving image showing the face of a legitimate registrant. The arrangement determination unit 23 acquires the face arrangement information of the target person obtained as one of the face detection results of the face recognition unit 21. The face arrangement information includes a face frame 68 indicating the contour position of the face in the input image of the captured image. The arrangement determination unit 23 also acquires the frame arrangement information of the display terminal obtained as one of the frame detection results of the frame detection unit 22. As shown in FIG. 3, the frame arrangement information includes the frame 58 indicating the contour position of the display terminal detected in the captured image. As the arrangement determination processing, the arrangement determination unit 23 compares the positions of the face frame 68 and the frame 58 and determines whether the face frame 68 exists in the area inside the frame 58, that is, whether the face of the target person is located inside the frame of the display terminal. When the face frame 68 is inside the frame 58 as in FIG. 7, the arrangement determination unit 23 outputs an arrangement determination result indicating that a face image exists inside the frame. Based on the arrangement determination result from the arrangement determination unit 23, when a face image exists inside the frame, the validity determination unit 25 regards the face image detected in the captured image as a face image displayed by a display terminal and determines that the face collation result is invalid. When the determination result of the validity determination unit 25 is invalid in this way, the occurrence of impersonation using a display terminal has been detected. In the case of an invalid determination, the validity determination unit 25 outputs denial as the authentication result of face authentication.
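A minimal sketch of the arrangement determination by the arrangement determination unit 23: a rectangle-containment test between the face frame 68 and the detected frame 58. The coordinate conventions (x, y, w, h for the face; x1, y1, x2, y2 for the frame) follow the earlier sketches and are assumptions.

```python
def face_inside_frame(face_box, frame_box):
    """Return True when the detected face lies inside the detected frame,
    i.e., a frame surrounding the outside of the face exists.

    face_box:  (x, y, w, h) as returned by the face detection sketch.
    frame_box: (x1, y1, x2, y2) as returned by the frame detection sketch, or None.
    """
    if frame_box is None:
        return False
    fx, fy, fw, fh = face_box
    x1, y1, x2, y2 = frame_box
    return x1 <= fx and y1 <= fy and fx + fw <= x2 and fy + fh <= y2
```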
FIG. 8 is a flowchart showing the procedure of a first example of the face authentication processing in the first embodiment. In the calculation unit 20A of the face authentication device, the face recognition unit 21 captures the input image (S11) and performs face recognition processing of the target person in the input image (S12). The calculation unit 20A determines whether the face of the target person has been detected by the face recognition unit 21 (S13), and continues the face recognition processing until a face is detected in the input image. When the face of the target person is detected (S13: Yes), the face collation unit 24 performs face collation of the detected target person (S14). In the face collation by the face collation unit 24, the calculation unit 20A determines whether the feature information of the target person's face matches the collation data of the registrant's face (S15); if they match (S15: Yes), the face authentication processing is continued. On the other hand, if the feature information of the target person's face does not match the collation data of the registrant's face (S15: No), the validity determination unit 25 makes an invalid determination, and denial is output as the authentication result of face authentication.
The calculation unit 20A also performs frame detection in the input image at the frame detection unit 22 (S16), and the arrangement determination unit 23 performs arrangement determination between the face position from the face detection and the frame position from the frame detection (S17). In the arrangement determination by the arrangement determination unit 23, the calculation unit 20A determines whether a frame is detected in the input image and a face exists inside the frame, that is, whether a frame surrounding the outside of the target person's face exists (S18). When a frame surrounding the outside of the target person's face exists (S18: Yes), the validity determination unit 25 makes an invalid determination, and denial is output as the authentication result of face authentication. On the other hand, when there is no frame surrounding the outside of the target person's face (S18: No), the validity determination unit 25 makes a valid determination, and approval is output as the authentication result of face authentication.
In the face authentication processing described above, the face detection and the frame detection may be executed in parallel, or one may be executed before the other in series; the execution timing and order of the processing are not limited. Similarly, the face collation and the arrangement determination may be executed in parallel, or one may be executed before the other in series; the execution timing and order of the processing are not limited.
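Putting the pieces together, the first example of FIG. 8 could be sketched as follows, reusing the hypothetical helpers detect_faces, collate_face, detect_frame, remove_background, and face_inside_frame introduced above; the single-face assumption, the feature-extractor callable, and the return values "approval"/"denial" are illustrative only.

```python
def authenticate_single_face(input_bgr, background_bgr, face_feature_of, registrant_feature):
    """Sketch of the FIG. 8 flow: face detection, face collation, frame detection,
    arrangement determination, and validity determination for a single face.

    face_feature_of is assumed to be a callable that extracts a feature vector
    from a cropped face region (not specified by the disclosure).
    """
    faces = detect_faces(input_bgr)                        # S12-S13
    if not faces:
        return None                                        # keep waiting for a face
    face_box = faces[0]
    x, y, w, h = face_box
    feature = face_feature_of(input_bgr[y:y + h, x:x + w])
    if not collate_face(feature, registrant_feature):      # S14-S15
        return "denial"
    frame_box = detect_frame(remove_background(input_bgr, background_bgr))  # S16
    if face_inside_frame(face_box, frame_box):             # S17-S18: spoofing via a display terminal
        return "denial"
    return "approval"
```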
FIG. 9 is a flowchart showing the procedure of a second example of the face authentication processing in the first embodiment. The second example shows processing for a case where a plurality of faces exist in the input image of the captured image. In an image captured for face authentication, a plurality of face images may be detected, for example, when the face images of several persons are captured, or when both a spoofed face image and the face image of the person themselves or of another person are present.
In the calculation unit 20A of the face authentication device, the face recognition unit 21 captures the input image (S21) and performs face recognition processing of the target person in the input image (S22). At this time, the calculation unit 20A acquires the number of detected faces (N) as the number of face images detected by the face recognition unit 21. The calculation unit 20A determines whether the number of detected faces (N) is greater than 0, that is, whether a face including that of the target person has been detected (S23), and continues the face recognition processing until a face is detected in the input image. When one or more faces are detected and the number of detected faces (N) is 1 or more (S23: Yes), the frame detection unit 22 performs frame detection in the input image (S24). The calculation unit 20A also starts a face authentication processing counter k from the initial value k = 1, and the face collation unit 24 performs face collation for the k-th face among the detected faces (S25). In the face collation by the face collation unit 24, the calculation unit 20A determines whether the feature information of the face being processed matches the collation data of the registrant's face (S26).
Here, when the feature information of the face being processed does not match the collation data of the registrant's face (S26: No), the calculation unit 20A increments the face authentication processing counter to k = k + 1 and determines whether an unprocessed face exists, for example, whether the number of processed faces does not exceed the number of detected faces (whether k ≤ N) (S27). When there is an unprocessed face (S27: Yes), the calculation unit 20A performs face collation for the next k-th face (S25, S26) and repeats the same processing until there are no unprocessed faces. When the face collation results in a mismatch and no unprocessed face remains (S26: No, S27: No), the validity determination unit 25 makes an invalid determination, and denial is output as the authentication result of face authentication.
On the other hand, when the feature information of the face being processed matches the collation data of the registrant's face (S26: Yes), the arrangement determination unit 23 performs, for the k-th face, arrangement determination between the face position from the face detection and the frame position from the frame detection (S28). In the arrangement determination by the arrangement determination unit 23, the calculation unit 20A determines whether a frame is detected in the input image and a face exists inside the frame, that is, whether a frame surrounding the outside of the face being processed exists (S29). When a frame surrounding the outside of the face being processed exists (S29: Yes), the calculation unit 20A increments the face authentication processing counter to k = k + 1 and determines whether an unprocessed face exists (S27). When there is an unprocessed face (S27: Yes), the calculation unit 20A performs face collation and arrangement determination for the next k-th face in the same manner (S25, S26, S28, S29). When no unprocessed face remains (S27: No), the validity determination unit 25 makes an invalid determination, and denial is output as the authentication result of face authentication. On the other hand, when there is no frame surrounding the outside of the face being processed (S29: No), the validity determination unit 25 makes a valid determination, and approval is output as the authentication result of face authentication.
In the face authentication processing described above, the face detection and the frame detection may be executed in parallel, or one may be executed before the other in series; the execution timing and order of the processing are not limited. Similarly, the face collation and the arrangement determination for each of the plurality of detected faces may be executed in parallel, or one may be executed before the other in series; the execution timing and order of the processing are not limited.
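As a sketch of the multi-face procedure of FIG. 9 described above, each detected face is collated and, only when the collation succeeds, checked for a surrounding frame; approval is returned for the first face that both matches and is not framed. The helper names are the hypothetical ones from the earlier sketches.

```python
def authenticate_multiple_faces(input_bgr, background_bgr, face_feature_of, registrant_feature):
    """Sketch of the FIG. 9 flow for N detected faces (S21-S29)."""
    faces = detect_faces(input_bgr)                        # S22: number of detected faces N
    if not faces:
        return None                                        # S23: keep waiting for a face
    frame_box = detect_frame(remove_background(input_bgr, background_bgr))  # S24
    for face_box in faces:                                 # k = 1 .. N
        x, y, w, h = face_box
        feature = face_feature_of(input_bgr[y:y + h, x:x + w])
        if not collate_face(feature, registrant_feature):  # S25-S26
            continue                                       # S27: try the next face
        if face_inside_frame(face_box, frame_box):         # S28-S29: framed face is treated as spoofed
            continue
        return "approval"                                  # matched and not surrounded by a frame
    return "denial"                                        # no valid face remained
```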
In the frame detection processing, images of a plurality of frames of the moving image may be used as the input image, frame detection may be performed on each video frame, and a frame detection result obtained by integrating the detection results over the plurality of video frames may be acquired. For example, when a person's hand covers part of the display unit of the portable display terminal, there may be cases where a frame cannot be detected in the image of a single video frame. By integrating the detection results over a plurality of video frames, for example when a frame is detected in three out of five frames, frame detection can be executed with high accuracy and the stability of frame detection can be improved even when the frame detection result is missing for an individual video frame.
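The integration over several video frames could be sketched as a simple majority-style vote (for example, three detections out of five frames); the vote ratio and the choice of returning the most recent detected coordinates are assumptions.

```python
def integrate_frame_detections(per_frame_results, min_ratio=0.5):
    """Integrate per-video-frame frame-detection results from a moving image.

    per_frame_results: list of frame coordinates (x1, y1, x2, y2) or None,
    one entry per video frame. Returns the most recent detected coordinates
    when the frame was detected in at least min_ratio of the frames
    (e.g., 3 out of 5), otherwise None.
    """
    if not per_frame_results:
        return None
    detections = [r for r in per_frame_results if r is not None]
    if len(detections) / len(per_frame_results) < min_ratio:
        return None
    return detections[-1]
```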
As described above, according to the present embodiment, fraud such as impersonation using a portable display terminal can be detected by frame detection, and impersonation at the time of authentication can be detected with high accuracy. This makes it possible to realize a highly accurate face authentication device and face authentication system.
In the face authentication device, biological information acquired from the captured images of the moving image captured by the imaging unit may also be used, and collation may be performed in the face collation by combining the face feature information with the biological information. Combining biological information and face image information can improve the reliability of face authentication.
(Second Embodiment of Face Authentication Device)
FIG. 10 is a block diagram illustrating the functional configuration of the face authentication device according to the second embodiment. The calculation unit 20B of the face authentication device according to the second embodiment includes, as functional components, a face recognition unit 21, a frame detection unit 22, an arrangement determination unit 23, a face collation unit 24, a validity determination unit 25a, and an ID authentication unit 26. The functions and operations of the face recognition unit 21, the frame detection unit 22, the arrangement determination unit 23, and the face collation unit 24 are the same as those of the first embodiment shown in FIG. 2; their description is omitted here, and the description focuses on the differences.
The ID authentication unit 26 uses the ID information acquired by the ID reading unit 13 to authenticate the ID information recorded on an ID card carried by the target person, such as an identification card, a membership card, or a driver's license. Instead of ID information acquired from an ID card, the ID authentication unit 26 may authenticate ID information using input information such as a password or an identification code entered by the user who is the target person, or identification information unique to each person, such as the iris or fingerprint of the target person.
The validity determination unit 25a has the functions of the authentication determination unit and the authentication result output unit. Based on the arrangement determination result from the arrangement determination unit 23, the face collation result from the face collation unit 24, and the ID authentication result from the ID authentication unit 26, the validity determination unit 25a determines whether the face of the target person in the input image is valid, that is, whether the authentication result of face authentication is approval or denial, and outputs the authentication result.
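In the second embodiment the validity determination combines three inputs. A minimal boolean sketch follows; the function name and the strict AND combination are assumptions, since the disclosure only states that the ID authentication result is additionally used.

```python
def validity_with_id(face_matched, face_is_inside_frame, id_authenticated):
    """Sketch of the validity determination unit 25a: approve only when the face
    matches, no frame surrounds the face, and the ID information is authenticated."""
    if face_matched and id_authenticated and not face_is_inside_frame:
        return "approval"
    return "denial"
```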
As described above, according to the present embodiment, fraud such as impersonation using a portable display terminal can be detected by frame detection, and impersonation at the time of authentication can be detected with high accuracy. This makes it possible to realize a highly accurate face authentication device and face authentication system. In addition, combining ID information and face image information can improve the reliability of face authentication.
(Third Embodiment of Face Authentication Device)
FIG. 11 is a block diagram illustrating the functional configuration of the face authentication device according to the third embodiment. The calculation unit 20C of the face authentication device according to the third embodiment includes, as functional components, a face recognition unit 21, a frame detection unit 22, an arrangement determination unit 23, and a face collation unit 24a. The functions and operations of the face recognition unit 21, the frame detection unit 22, and the arrangement determination unit 23 are the same as those of the first embodiment shown in FIG. 2; their description is omitted here, and the description focuses on the differences.
The face collation unit 24a has the functions of the authentication determination unit and the authentication result output unit. Based on the arrangement determination result from the arrangement determination unit 23, when no frame is detected in the captured image, or when a frame is detected but does not surround the outside of the face, the face collation unit 24a collates the face in the captured image detected by the face recognition unit 21 and determines whether it matches the registrant's face. Based on the result of the face collation, the face collation unit 24a determines whether the authentication result of face authentication is approval or denial, and outputs the authentication result.
FIG. 12 is a flowchart showing the procedure of a first example of the face authentication processing in the third embodiment. In the calculation unit 20C of the face authentication device, the face recognition unit 21 captures the input image (S31) and performs face recognition processing of the target person in the input image (S32). The calculation unit 20C determines whether the face of the target person has been detected by the face recognition unit 21 (S33), and continues the face recognition processing until a face is detected in the input image. When the face of the target person is detected (S33: Yes), the frame detection unit 22 performs frame detection in the input image (S34), and the arrangement determination unit 23 performs arrangement determination between the face position from the face detection and the frame position from the frame detection (S35). In the arrangement determination by the arrangement determination unit 23, the calculation unit 20C determines whether a frame is detected in the input image and a face exists inside the frame, that is, whether a frame surrounding the outside of the target person's face exists (S36). When a frame surrounding the outside of the target person's face exists (S36: Yes), the face collation unit 24a makes an invalid determination, and denial is output as the authentication result of face authentication.
On the other hand, when there is no frame surrounding the outside of the target person's face (S36: No), the face collation unit 24a performs face collation of the detected target person (S37). In the face collation by the face collation unit 24a, the calculation unit 20C determines whether the feature information of the target person's face matches the collation data of the registrant's face (S38); if they match (S38: Yes), approval is output as the authentication result of face authentication. On the other hand, if the feature information of the target person's face does not match the collation data of the registrant's face (S38: No), the face collation unit 24a makes an invalid determination, and denial is output as the authentication result of face authentication.
In the face authentication processing described above, the face detection and the frame detection may be executed in parallel, or one may be executed before the other in series; the execution timing and order of the processing are not limited.
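Unlike the first embodiment, the third embodiment evaluates the arrangement determination before face collation, so collation is skipped when a frame surrounding the face is found. A sketch of the FIG. 12 ordering, reusing the hypothetical helpers introduced earlier:

```python
def authenticate_frame_check_first(input_bgr, background_bgr, face_feature_of, registrant_feature):
    """Sketch of the FIG. 12 flow: frame detection and arrangement determination (S34-S36)
    run first, and face collation (S37-S38) runs only when no surrounding frame exists."""
    faces = detect_faces(input_bgr)                        # S32-S33
    if not faces:
        return None
    face_box = faces[0]
    frame_box = detect_frame(remove_background(input_bgr, background_bgr))  # S34
    if face_inside_frame(face_box, frame_box):             # S35-S36
        return "denial"
    x, y, w, h = face_box
    feature = face_feature_of(input_bgr[y:y + h, x:x + w])
    return "approval" if collate_face(feature, registrant_feature) else "denial"  # S37-S38
```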
FIG. 13 is a flowchart showing the procedure of a second example of the face authentication processing in the third embodiment. The second example shows processing for a case where a plurality of faces exist in the input image of the captured image.
In the calculation unit 20C of the face authentication device, the face recognition unit 21 captures the input image (S41) and performs face recognition processing of the target person in the input image (S42). At this time, the calculation unit 20C acquires the number of detected faces (N) as the number of face images detected by the face recognition unit 21. The calculation unit 20C determines whether the number of detected faces (N) is greater than 0, that is, whether a face including that of the target person has been detected (S43), and continues the face recognition processing until a face is detected in the input image. When one or more faces are detected and the number of detected faces (N) is 1 or more (S43: Yes), the frame detection unit 22 performs frame detection in the input image (S44). The calculation unit 20C also starts a face authentication processing counter k from the initial value k = 1, and the arrangement determination unit 23 performs, for the k-th face among the detected faces, arrangement determination between the face position from the face detection and the frame position from the frame detection (S45). In the arrangement determination by the arrangement determination unit 23, the calculation unit 20C determines whether a frame is detected in the input image and a face exists inside the frame, that is, whether a frame surrounding the outside of the face being processed exists (S46).
Here, when a frame surrounding the outside of the face being processed exists (S46: Yes), the calculation unit 20C increments the face authentication processing counter to k = k + 1 and determines whether an unprocessed face exists, for example, whether the number of processed faces does not exceed the number of detected faces (whether k ≤ N) (S47). When there is an unprocessed face (S47: Yes), the calculation unit 20C performs arrangement determination for the next k-th face (S45, S46) and repeats the same processing until there are no unprocessed faces. When the arrangement determination indicates that a frame exists and no unprocessed face remains (S46: Yes, S47: No), the face collation unit 24a makes an invalid determination, and denial is output as the authentication result of face authentication.
On the other hand, when there is no frame surrounding the outside of the face being processed (S46: No), the face collation unit 24a performs face collation for the k-th face (S48). In the face collation by the face collation unit 24a, the calculation unit 20C determines whether the feature information of the face being processed matches the collation data of the registrant's face (S49). When the feature information of the face being processed does not match the collation data of the registrant's face (S49: No), the calculation unit 20C increments the face authentication processing counter to k = k + 1 and determines whether an unprocessed face exists (S47). When there is an unprocessed face (S47: Yes), the calculation unit 20C performs arrangement determination and face collation for the next k-th face in the same manner (S45, S46, S48, S49). When no unprocessed face remains (S47: No), the face collation unit 24a makes an invalid determination, and denial is output as the authentication result of face authentication. On the other hand, when the feature information of the face being processed matches the collation data of the registrant's face (S49: Yes), the face collation unit 24a makes a valid determination, and approval is output as the authentication result of face authentication.
In the above face authentication processing, the face detection and the frame detection may be executed in parallel, or one may be executed before the other in series; the timing and order of these processes are not limited.
As described above, according to this embodiment, fraud such as impersonation using a portable display terminal can be detected by frame detection, and impersonation at the time of authentication can be detected with high accuracy. This makes it possible to realize a highly accurate face authentication device and face authentication system.
(Fourth Embodiment of Face Authentication Device)
FIG. 14 is a block diagram illustrating the functional configuration of the face authentication apparatus according to the fourth embodiment. The computing unit 20D of the face authentication apparatus of the fourth embodiment includes, as functional components, a face recognition unit 21, a frame detection unit 22, a placement determination unit 23, a face matching unit 24a, a validity determination unit 25b, and an ID authentication unit 26. The functions and operations of the face recognition unit 21, the frame detection unit 22, and the placement determination unit 23 are the same as those in the first embodiment shown in FIG. 2, and the functions and operations of the face matching unit 24a are the same as those in the third embodiment shown in FIG. 11; their description is omitted here, and the explanation focuses on the differences.
The ID authentication unit 26 uses the ID information acquired by the ID reading unit 13 to authenticate ID information recorded on an ID card possessed by the target person, such as an identification card, a membership card, or a driver's license. Instead of the ID information acquired from an ID card, the ID authentication unit 26 may authenticate ID information using input information such as a password or identification code entered by the user who is the target person, or identification information unique to each person, such as the iris or fingerprint of the target person.
The validity determination unit 25b has the functions of an authentication determination unit and an authentication result output unit. Based on the face matching result from the face matching unit 24a and the ID authentication result from the ID authentication unit 26, the validity determination unit 25b determines whether the face of the target person in the input image is valid, that is, whether the authentication result of the face authentication is approval or denial, and outputs the authentication result.
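As a hedged illustration of how these two results might be combined (the patent does not fix the exact logic), the sketch below approves only when both the ID authentication and the face matching succeed:

```python
def decide_validity(face_match_ok: bool, id_auth_ok: bool) -> str:
    # Combine the face matching result (unit 24a) with the ID authentication
    # result (unit 26); requiring both to succeed is an assumed rule.
    return "approve" if (face_match_ok and id_auth_ok) else "deny"
```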
As described above, according to this embodiment, fraud such as impersonation using a portable display terminal can be detected by frame detection, and impersonation at the time of authentication can be detected with high accuracy. This makes it possible to realize a highly accurate face authentication device and face authentication system. Furthermore, combining ID information with face image information improves the reliability of the face authentication.
(Face authentication system configuration, second example)
FIG. 15 is a diagram illustrating a second example of the configuration of the face authentication system according to the present embodiment. The face authentication system 100B includes a face authentication device 10B, images the target person 30, performs face authentication, and outputs the authentication result to the control target 40.
The face authentication device 10B includes an imaging unit 11, a display unit 12, an input device 14, a storage device 15, and a computing unit 20. The functions and operations of the imaging unit 11, the display unit 12, and the computing unit 20 are the same as those in the first example shown in FIG. 1; their description is omitted here, and the explanation focuses on the differences. The input device 14 has the function of an ID input unit for entering ID information. The input device 14 is composed of input devices such as a touch panel, a keyboard, and a mouse, and accepts, through an input operation by the target person 30 or another operator, ID information of the target person 30 such as a membership number, identification number, or driver's license number. The storage device 15 has the function of a storage unit that stores collation data. The storage device 15 includes storage devices such as a memory and a hard disk, stores and accumulates collation data of registrants' faces (image data of face images, facial feature information, and the like), reads out the collation data corresponding to the ID information of the target person 30, and outputs it to the computing unit 20. The computing unit 20 executes the various arithmetic processes related to the face authentication described above and outputs the authentication result.
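Purely as an assumed sketch of the storage-unit role described above (the data layout is not taken from the patent), the storage device 15 can be modeled as a mapping from entered ID information to the registrant's collation data:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class CollationData:
    registrant_id: str
    face_features: List[float] = field(default_factory=list)  # assumed feature representation

# Assumed in-memory stand-in for the storage device 15 (placeholder entry for illustration)
collation_store: Dict[str, CollationData] = {
    "MEMBER-0001": CollationData("MEMBER-0001", [0.12, 0.87, 0.45]),
}

def fetch_collation_data(id_info: str) -> Optional[CollationData]:
    # Read out the collation data corresponding to the ID information entered
    # via the input device 14 and hand it to the computing unit 20.
    return collation_store.get(id_info)
```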
The target person 30 may also enter a password through the input device 14 for password authentication, and face authentication combining face matching with password authentication may be performed.
In the face authentication system 100B of this second example as well, face detection and frame detection are executed in the same way as in the face authentication system 100A of the first example described above, so that fraud such as impersonation using a portable display terminal can be detected by the frame detection and the accuracy of the face authentication of the target person 30 can be improved.
(Face authentication system configuration, third example)
FIG. 16 is a diagram illustrating a third example of the configuration of the face authentication system according to the present embodiment. The face authentication system 100C includes a face authentication device 10C and a server device 80, images the target person 30, performs face authentication, and outputs the authentication result to the control target 40. The face authentication device 10C and the server device 80 are connected via a network 70 so that various types of information can be transmitted and received.
The face authentication device 10C includes an imaging unit 11, a display unit 12, an ID reading unit 13, and a control unit 16. The functions and operations of the imaging unit 11, the display unit 12, and the ID reading unit 13 are the same as those in the first example shown in FIG. 1; their description is omitted here, and the explanation focuses on the differences. The control unit 16 is composed of a computer such as a PC and has a communication unit with a communication function. The control unit 16 encodes information used for authentication, such as the image information of the captured image acquired by the imaging unit 11 and the ID information acquired by the ID reading unit 13, and communicates with the server device 80. The control unit 16 also receives processing information such as the face authentication result transmitted from the server device 80 and sends it to the display unit 12 for display together with an operation guidance screen, the input image of the captured image, and the like.
The server device 80 includes an arithmetic device 81 and a storage device 82. The arithmetic device 81 is composed of a computer including a processor, a memory, a communication interface, and the like, and executes the various arithmetic processes related to face authentication in the same manner as the computing unit 20 of the face authentication device 10A of the first example. The storage device 82 has the function of a storage unit that stores collation data. The storage device 82 includes storage devices such as a memory and a hard disk, stores and accumulates collation data of registrants' faces (image data of face images, facial feature information, and the like), reads out the collation data corresponding to the ID information of the target person 30, and outputs it to the arithmetic device 81. The arithmetic device 81 executes the various arithmetic processes related to the face authentication described above and outputs the authentication result.
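As one possible (assumed) realization of the exchange described above, the control unit 16 might serialize the captured image and ID information before sending them to the server device 80; the sketch below uses only JSON and Base64 and is not the actual protocol of the system.

```python
import base64
import json
from typing import Tuple

def encode_authentication_request(image_bytes: bytes, id_info: str) -> str:
    # Control unit 16 side: package the captured image and the ID information
    # for transmission over the network 70 (message format is an assumption).
    return json.dumps({
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "id_info": id_info,
    })

def decode_authentication_request(payload: str) -> Tuple[bytes, str]:
    # Server device 80 side: recover the image and ID information before the
    # arithmetic device 81 runs the face authentication processing.
    data = json.loads(payload)
    return base64.b64decode(data["image"]), data["id_info"]
```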
The various arithmetic processes related to face authentication may be performed in the control unit 16 of the face authentication device 10C in the same manner as in the computing unit 20 of the face authentication device 10A of the first example, or may be distributed between the control unit 16 of the face authentication device 10C and the arithmetic device 81 of the server device 80. When the control unit 16 performs the face authentication processing, the face authentication device 10C acquires the collation data from the server device 80 via the network 70 and executes the face authentication processing. The authentication result may also be transmitted via the network 70 and notified not only to the control target 40 and the server device 80 but also to a management device at another location, for example.
In the face authentication system 100C of this third example as well, face detection and frame detection are executed in the same way as in the face authentication system 100A of the first example described above, so that fraud such as impersonation using a portable display terminal can be detected by the frame detection and the accuracy of the face authentication of the target person 30 can be improved. In addition, by using the collation data accumulated in the storage device 82 of the server device 80, face authentication using a large amount of collation data becomes possible.
As described above, the face authentication devices 10A, 10B, and 10C, as examples of the face authentication processing device of the present embodiment, include: a face recognition unit 21 as a face detection unit that acquires a captured image of the target person and detects the face of the target person in an input image of the captured image; a frame detection unit 22 that detects a linear frame in the input image; a placement determination unit 23 that determines whether a frame surrounding the face of the target person exists, using the face position information acquired by the face detection and the frame position information acquired by the frame detection; and a validity determination unit 25 or a face matching unit 24a serving as an authentication determination unit that determines the validity of the authentication result of the target person's face information based on the frame placement determination result.
With the above configuration, frame detection is performed on the captured image, the presence or absence of a frame in the captured image and the placement of the frame relative to the face are determined, and, based on the result of this frame determination, fraud such as impersonation using a portable display terminal can be detected, so that impersonation at the time of authentication can be detected with high accuracy. This improves the reliability and stability of face authentication, making it possible to realize a highly accurate face authentication processing device and face authentication processing system.
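The placement determination can be pictured as a bounding-box containment test; the following sketch is only an illustration under that assumption (the patent does not prescribe this particular geometry) and is consistent with the frame_surrounds_face helper assumed in the earlier flow sketch. Each detected face and frame is treated as an axis-aligned rectangle.

```python
from typing import NamedTuple

class Box(NamedTuple):
    x: float  # left
    y: float  # top
    w: float  # width
    h: float  # height

def frame_surrounds_face(frame: Box, face: Box, margin: float = 0.0) -> bool:
    # True when the face rectangle lies entirely inside the frame rectangle,
    # i.e. a detected straight-line frame encloses the face, suggesting that
    # the face is being shown on a portable display rather than presented live.
    return (face.x >= frame.x - margin
            and face.y >= frame.y - margin
            and face.x + face.w <= frame.x + frame.w + margin
            and face.y + face.h <= frame.y + frame.h + margin)
```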
The computing unit 20A of the face authentication device 10A includes a face matching unit 24 that matches the face information of the captured image of the target person against registered face information, and the validity determination unit 25 determines the validity of the authentication result of the target person's face information based on the face matching result of the face information and the frame placement determination result. This makes it possible to detect fraud such as impersonation using a portable display terminal and to accurately detect impersonation at the time of authentication.
The computing unit 20C of the face authentication device 10A includes a face matching unit 24a that matches the face information of the captured image of the target person against registered face information; when the frame placement determination result indicates that no frame surrounding the face of the target person exists, the face matching unit 24a performs the face information matching and determines the validity of the authentication result of the target person's face information from the face matching result. This makes it possible to detect fraud such as impersonation using a portable display terminal and to accurately detect impersonation at the time of authentication.
The computing units 20B and 20D of the face authentication device 10A include an ID authentication unit 26 that acquires and authenticates the ID information of the target person, and the validity determination units 25a and 25b further use the authentication result of the ID information to determine the validity of the authentication result of the target person's face information. Combining ID information with face image information in this way improves the reliability of the face authentication.
The frame detection units 22A and 22B of the face authentication device 10A have background removal units 223 and 226 that remove the background from the input image. By removing the background portion of the captured image, frame detection can be performed on an image containing only moving subjects such as people, which improves the accuracy of the frame detection.
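A common way to realize such background removal is differencing against a stored background image; the OpenCV-based sketch below is a minimal illustration under that assumption and does not reproduce the specific method of the background removal units 223 and 226.

```python
import cv2
import numpy as np

def remove_background(input_bgr: np.ndarray, background_bgr: np.ndarray,
                      diff_threshold: int = 30) -> np.ndarray:
    # Difference between the current input image and the stored background image
    diff = cv2.absdiff(input_bgr, background_bgr)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)

    # Pixels that changed: moving subjects such as the person and a held display
    _, mask = cv2.threshold(gray, diff_threshold, 255, cv2.THRESH_BINARY)

    # Keep only the foreground; the static background is suppressed to black
    return cv2.bitwise_and(input_bgr, input_bgr, mask=mask)
```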
The background removal units 223 and 226 of the face authentication device 10A remove the background of the input image using, as the background image, the background captured when the target person was imaged, or alternatively the background captured when the target person is not present. As a result, frame detection can be performed on an image containing only moving subjects such as people, with no background portion, which improves the accuracy of the frame detection.
The frame detection units 22A and 22B of the face authentication device 10A use multiple frames of a moving image as input images and obtain a frame detection result by integrating the detection results across the multiple frames. Even when a frame cannot be detected in a single frame image and the frame detection result would otherwise be missing, integrating the detection results of multiple frames allows the frame to be detected accurately and improves the stability of the frame detection.
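One simple way to integrate per-frame detections, offered here only as an assumed sketch rather than the patented method, is voting over a short window of video frames so that an occasional missed detection does not drop the frame from the final result:

```python
from collections import deque

class FrameDetectionIntegrator:
    """Integrates straight-line frame detections over the last `window` video frames."""

    def __init__(self, window: int = 10, min_hits: int = 3):
        self.history = deque(maxlen=window)  # one bool per processed video frame
        self.min_hits = min_hits

    def update(self, frame_detected: bool) -> bool:
        self.history.append(frame_detected)
        # Report "frame present" once enough recent images contained a frame,
        # even if some individual images missed it.
        return sum(self.history) >= self.min_hits
```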
The face authentication systems 100A, 100B, and 100C, as examples of the face authentication processing system of the present embodiment, include: an imaging unit 11 that images the target person; a face recognition unit 21 as a face detection unit that acquires the captured image of the target person and detects the face of the target person in an input image of the captured image; a frame detection unit 22 that detects a linear frame in the input image; a placement determination unit 23 that determines whether a frame surrounding the face of the target person exists, using the face position information acquired by the face detection and the frame position information acquired by the frame detection; a face matching unit 24 that matches the face information of the captured image of the target person against registered face information; a validity determination unit 25 serving as an authentication determination unit that determines the validity of the authentication result of the target person's face information based on the face matching result of the face information and the frame placement determination result, and as an authentication result output unit that outputs the authentication result to the control target; and a display unit 12 that displays the processing result of at least one of the imaging unit 11, the face recognition unit 21, the frame detection unit 22, the placement determination unit 23, and the validity determination unit 25.
With the above configuration, frame detection is performed on the captured image, the placement of the frame relative to the face is determined when a frame is detected, and, based on the result of this placement determination, fraud such as impersonation using a portable display terminal can be detected. As a result, impersonation at the time of authentication can be detected with high accuracy, and the reliability and stability of the face authentication can be improved.
The face authentication systems 100A, 100B, and 100C include an ID reading unit 13 that reads the ID information of the target person, or an input device 14 as an ID input unit that accepts the ID information, together with an ID authentication unit 26 that authenticates the ID information, and the validity determination unit 25 further uses the authentication result of the ID information to determine the validity of the authentication result of the target person's face information. Combining ID information with face image information in this way improves the reliability of the face authentication.
In the face authentication systems 100A and 100C, the ID information includes collation data of the target person's face, and the face matching unit 24 matches the face information of the captured image of the target person using the collation data contained in the ID information. Face matching is thus possible using the collation data included in the ID information, and combining the frame detection result, the face detection result, and the ID information improves the reliability of the face authentication.
The face authentication systems 100B and 100C include storage devices 15 and 82 as storage units that store the collation data of the target person's face, and the face matching unit 24 acquires the collation data from the storage devices 15 and 82 and matches the face information of the captured image of the target person against it. Face matching can therefore be performed using collation data stored in the device's own storage or in the storage of an external server device or the like, which enables face authentication with a large amount of collation data.
Although various embodiments have been described above with reference to the drawings, it goes without saying that the present disclosure is not limited to these examples. It will be apparent to those skilled in the art that various changes and modifications can be conceived within the scope described in the claims, and such changes and modifications naturally belong to the technical scope of the present disclosure. In addition, the constituent elements of the above embodiments may be combined arbitrarily without departing from the spirit of the invention.
The present disclosure is useful as a face authentication processing device, a face authentication processing method, and a face authentication processing system that can accurately detect impersonation at the time of authentication.
DESCRIPTION OF SYMBOLS
10A, 10B, 10C Face authentication device
11 Imaging unit
12 Display unit
13 ID reading unit
14 Input device
15 Storage device
16 Control unit
20, 20A, 20B, 20C, 20D Computing unit
21 Face recognition unit
22, 22A, 22B Frame detection unit
23 Placement determination unit
24, 24a Face matching unit
25, 25a, 25b Validity determination unit
26 ID authentication unit
30 Target person
35 ID card
40 Control target
70 Network
80 Server device
81 Arithmetic device
82 Storage device
100A, 100B, 100C Face authentication system
221, 222, 227 Outline extraction unit
223, 226 Background removal unit
224 Straight line detection unit
225 Frame extraction unit
Claims (13)
1. A face authentication processing device comprising:
a face detection unit that acquires a captured image obtained by imaging a target person and detects a face of the target person in an input image of the captured image;
a frame detection unit that detects a linear frame in the input image;
an arrangement determination unit that determines whether a frame surrounding the face of the target person exists, using face position information acquired by detecting the face and frame position information acquired by detecting the frame; and
an authentication determination unit that determines the validity of an authentication result of face information of the target person based on a result of the frame arrangement determination.

2. The face authentication processing device according to claim 1, further comprising a face matching unit that matches face information of the captured image of the target person against registered face information, wherein the authentication determination unit determines the validity of the authentication result of the face information of the target person based on a face matching result of the face information and the result of the frame arrangement determination.

3. The face authentication processing device according to claim 1, further comprising a face matching unit that matches face information of the captured image of the target person against registered face information, wherein the face matching unit performs the matching of the face information when the result of the frame arrangement determination indicates that no frame surrounding the face of the target person exists, and the authentication determination unit determines the validity of the authentication result of the face information of the target person from the face matching result of the face information.

4. The face authentication processing device according to any one of claims 1 to 3, further comprising an ID authentication unit that acquires and authenticates ID information of the target person, wherein the authentication determination unit further uses an authentication result of the ID information to determine the validity of the authentication result of the face information of the target person.

5. The face authentication processing device according to claim 1, wherein the frame detection unit has a background removal unit that removes a background from the input image.

6. The face authentication processing device according to claim 5, wherein the background removal unit removes the background of the input image using, as a background image, a background captured when the target person was imaged.

7. The face authentication processing device according to claim 5, wherein the background removal unit removes the background of the input image using, as a background image, a background in which the target person is not present.

8. The face authentication processing device according to claim 1, wherein the frame detection unit uses images of a plurality of frames of a moving image as the input image and acquires a frame detection result obtained by integrating detection results of the plurality of frames.

9. A face authentication processing method in a face authentication processing device that performs face authentication of a target person, the method comprising:
acquiring a captured image obtained by imaging the target person and detecting a face of the target person in an input image of the captured image;
detecting a linear frame in the input image;
determining whether a frame surrounding the face of the target person exists, using face position information acquired by detecting the face and frame position information acquired by detecting the frame; and
determining the validity of an authentication result of face information of the target person based on a result of the frame arrangement determination.

10. A face authentication processing system comprising:
an imaging unit that images a target person;
a face detection unit that acquires a captured image of the target person and detects a face of the target person in an input image of the captured image;
a frame detection unit that detects a linear frame in the input image;
an arrangement determination unit that determines whether a frame surrounding the face of the target person exists, using face position information acquired by detecting the face and frame position information acquired by detecting the frame;
a face matching unit that matches face information of the captured image of the target person against registered face information;
an authentication determination unit that determines the validity of an authentication result of the face information of the target person based on a face matching result of the face information and a result of the frame arrangement determination;
an authentication result output unit that outputs the authentication result to a control target; and
a display unit that displays a processing result of at least one of the imaging unit, the face detection unit, the frame detection unit, the arrangement determination unit, and the authentication determination unit.

11. The face authentication processing system according to claim 10, further comprising: an ID reading unit that reads ID information of the target person, or an ID input unit that accepts input of the ID information; and an ID authentication unit that authenticates the ID information, wherein the authentication determination unit further uses an authentication result of the ID information to determine the validity of the authentication result of the face information of the target person.

12. The face authentication processing system according to claim 11, wherein the ID information includes collation data of the face of the target person, and the face matching unit matches the face information of the captured image of the target person using the collation data of the ID information.

13. The face authentication processing system according to claim 11, further comprising a storage unit that stores collation data of the face of the target person, wherein the face matching unit acquires the collation data from the storage unit and matches the face information of the captured image of the target person against the collation data.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017068599A JP2018169943A (en) | 2017-03-30 | 2017-03-30 | Face authentication processing device, face authentication processing method and face authentication processing system |
JP2017-068599 | 2017-03-30 | | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018179723A1 (en) | 2018-10-04 |
Family
ID=63677721
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2018/001863 (WO2018179723A1) | 2017-03-30 | 2018-01-23 | Facial authentication processing apparatus, facial authentication processing method, and facial authentication processing system |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP2018169943A (en) |
WO (1) | WO2018179723A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP4379652A4 (en) * | 2021-07-27 | 2024-07-31 | Fujitsu Limited | DETERMINATION METHOD, DETERMINATION PROGRAM AND INFORMATION PROCESSING DEVICE |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7415796B2 (en) | 2020-05-25 | 2024-01-17 | Omron Corporation | Living body determination device and living body determination method |
JP7574023B2 (en) | 2020-09-24 | 2024-10-28 | Canon Inc. | Authentication processing device, authentication processing method, and program |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014219703A (en) * | 2013-04-30 | 2014-11-20 | Secom Co., Ltd. | Face authentication system |
JP2015026317A (en) * | 2013-07-29 | 2015-02-05 | Omron Corporation | Programmable display apparatus, control method, and program |
WO2015128961A1 (en) * | 2014-02-26 | 2015-09-03 | Hitachi, Ltd. | Face authentication system |
WO2017043314A1 (en) * | 2015-09-09 | 2017-03-16 | NEC Corporation | Guidance acquisition device, guidance acquisition method, and program |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5083164B2 (en) * | 2008-10-09 | 2012-11-28 | Sumitomo Electric Industries, Ltd. | Image processing apparatus and image processing method |
JP6657646B2 (en) * | 2015-08-06 | 2020-03-04 | Omron Corporation | Obstacle detection device, obstacle detection method, and obstacle detection program |
- 2017-03-30 JP JP2017068599A patent/JP2018169943A/en active Pending
- 2018-01-23 WO PCT/JP2018/001863 patent/WO2018179723A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
JP2018169943A (en) | 2018-11-01 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18776735; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 18776735; Country of ref document: EP; Kind code of ref document: A1 |