
CN110781833A - Authentication method and device and electronic equipment - Google Patents


Info

Publication number
CN110781833A
CN110781833A (application number CN201911028136.2A)
Authority
CN
China
Prior art keywords
safety helmet
picture
target
target person
person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911028136.2A
Other languages
Chinese (zh)
Inventor
赵拯
段魁
郑东
赵五岳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Pan Intelligent Technology Co Ltd
Original Assignee
Hangzhou Pan Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Pan Intelligent Technology Co Ltd filed Critical Hangzhou Pan Intelligent Technology Co Ltd
Priority to CN201911028136.2A
Publication of CN110781833A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/08Construction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/26Government or public services
    • G06Q50/265Personal security, identity or safety
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Tourism & Hospitality (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Strategic Management (AREA)
  • Primary Health Care (AREA)
  • Marketing (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Economics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Human Resources & Organizations (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • General Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Development Economics (AREA)
  • Computer Security & Cryptography (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the disclosure provide an authentication method, an authentication device, and electronic equipment. The method comprises the following steps: collecting a target picture containing the facial features of a target person; framing a face area in the target picture; framing a detection area in the target picture, taking the face area as a reference and following a preset frame-expansion direction and size; analyzing the feature points in the detection area with a preset safety helmet detection model to obtain the helmet-wearing state of the target person; and verifying the identity information of the target person. If the identity information passes verification and the target person is wearing a safety helmet, the authentication succeeds; if the identity information fails verification, or the target person is not wearing a safety helmet, the authentication fails. The scheme improves the efficiency and the accuracy of detecting the helmet-wearing state, and also improves the accuracy of the combined detection of the helmet-wearing state and the identity of the person to be detected.

Description

Authentication method and device and electronic equipment
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to an authentication method and apparatus, and an electronic device.
Background
Generally, the construction environment is relatively harsh and dangerous. If safety management is not in place or safety protection is improper, a construction site faces large potential safety hazards. Wearing a safety helmet is a basic requirement for workers entering a construction site; however, for various reasons, a constructor may not wear a helmet at all, or may not wear it properly on the head.
The existing safety helmet detection method collects image information of the construction site and relies on manual analysis of each constructor's helmet-wearing state, so both the efficiency and the accuracy of detecting the wearing state are low.
The existing method therefore suffers from the technical problems of low detection efficiency and low detection accuracy for the helmet-wearing state.
Disclosure of Invention
In view of this, embodiments of the present disclosure provide an authentication method, an authentication apparatus, and an electronic device, which at least partially solve the problems in the prior art.
In a first aspect, an embodiment of the present disclosure provides an authentication method, including:
collecting a target picture containing the facial features of a target person;
selecting a face area from the target picture;
framing a detection area in the target picture, taking the face area in the target picture as a reference and following a preset frame-expansion direction and size, wherein the detection area comprises the face area;
analyzing the feature points in the detection area with a preset safety helmet detection model to acquire the helmet-wearing state of the target person, wherein the wearing state comprises worn and not worn;
verifying the identity information of the target person;
if the identity information of the target person passes verification and the wearing state of the safety helmet of the target person is that the safety helmet is worn, the authentication is successful;
and if the identity information of the target person is not verified, or the wearing state of the safety helmet of the target person is that the safety helmet is not worn, the authentication fails.
According to a specific implementation manner of the embodiment of the present disclosure, the step of selecting a detection region from the target picture by using the face region in the target picture as a reference according to a preset frame expansion direction and size includes:
framing a first adjacent area of the face area along a first direction and framing a second adjacent area of the face area along a second direction, taking the face area in the target picture as a reference;
determining a combined region including the face region, the first adjacent region, and the second adjacent region as the detection region.
According to a specific implementation manner of the embodiment of the present disclosure, the first direction is perpendicular to a central line of the face region, and the second direction is coincident with the central line of the face region;
the length of the first adjacent area along the first direction ranges from 0 meter to 0.6 meter, and the length of the second adjacent area along the second direction ranges from 0 meter to 1 meter.
According to a specific implementation manner of the embodiment of the present disclosure, the step of analyzing the feature points in the detection area by using a preset safety helmet detection model to obtain the wearing state of the safety helmet of the target person includes:
acquiring positive sample data showing a test person wearing a safety helmet, and acquiring negative sample data showing a test person not wearing a safety helmet;
and training a multitask cascaded convolutional neural network with the positive sample data and the negative sample data to obtain the safety helmet detection model capable of detecting the helmet-wearing state of the test person.
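The training step above can be sketched as follows. The patent specifies a multitask cascaded convolutional neural network; the sketch below is a hedged stand-in that keeps the same supervised positive/negative data flow but substitutes a minimal NumPy logistic-regression classifier on toy feature vectors — the features, model, and hyperparameters are illustrative assumptions, not the patented architecture.

```python
import numpy as np

def train_helmet_classifier(pos_feats, neg_feats, lr=0.1, epochs=200):
    """Train a binary classifier on positive ('helmet worn') and
    negative ('helmet not worn') feature vectors.

    Logistic-regression stand-in for the patent's cascaded CNN:
    the positive/negative supervised training flow is the same.
    """
    X = np.vstack([pos_feats, neg_feats]).astype(float)
    y = np.concatenate([np.ones(len(pos_feats)), np.zeros(len(neg_feats))])
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid scores
        grad_w = X.T @ (p - y) / len(y)          # gradient of log-loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict_helmet(w, b, feat):
    """Return True ('helmet worn') when the score exceeds 0.5."""
    return 1.0 / (1.0 + np.exp(-(feat @ w + b))) > 0.5

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pos = rng.normal(loc=1.0, size=(50, 4))   # toy 'worn' features
    neg = rng.normal(loc=-1.0, size=(50, 4))  # toy 'not worn' features
    w, b = train_helmet_classifier(pos, neg)
    print(predict_helmet(w, b, np.ones(4)))   # prints True
```

In the real pipeline the feature vectors would come from the cascaded network itself; here they are stand-in inputs.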
According to a specific implementation manner of the embodiment of the disclosure, the step of obtaining positive sample data of the state that the tester wears the safety helmet includes:
crawling a preset number of first pictures containing features of a test safety helmet;
acquiring a second picture containing the facial features of a test person, wherein the head of the test person in the second picture wears neither a safety helmet nor a helmet-like object;
and synthesizing the first picture and the second picture into a third picture, in which the test person is in the state of wearing the test safety helmet.
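The positive-sample synthesis can be sketched as a minimal NumPy compositing routine, assuming RGB arrays and a known face rectangle. A real pipeline would add alpha blending and landmark-based alignment; the helmet sizing here is an illustrative assumption.

```python
import numpy as np

def composite_helmet(face_img, helmet_img, face_box):
    """Paste a helmet crop onto a bare-head picture directly above the
    framed face area, producing a synthetic 'helmet worn' positive sample.

    face_img   : HxWx3 uint8 array of a person with no helmet
    helmet_img : hxwx3 uint8 array of a cropped helmet
    face_box   : (x, y, w, h) face rectangle in face_img coordinates
    """
    x, y, w, h = face_box
    hh, hw = helmet_img.shape[:2]
    # Nearest-neighbour resize of the helmet to roughly (h // 2) x w,
    # so it spans the face width -- avoids any external dependency.
    rows = np.minimum(np.arange(max(h // 2, 1)) * hh // max(h // 2, 1), hh - 1)
    cols = np.minimum(np.arange(max(w, 1)) * hw // max(w, 1), hw - 1)
    scaled = helmet_img[np.ix_(rows, cols)]
    out = face_img.copy()
    top = max(y - scaled.shape[0], 0)          # sit the helmet on the forehead
    region = out[top:top + scaled.shape[0], x:x + scaled.shape[1]]
    region[:] = scaled[:region.shape[0], :region.shape[1]]
    return out
```

Usage: `composite_helmet(second_picture, first_picture_crop, face_box)` yields the third picture described above.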
According to a specific implementation manner of the embodiment of the disclosure, the step of obtaining negative sample data of the state that the test person does not wear the safety helmet includes:
crawling a preset number of fourth pictures containing feature information of helmet-like objects;
acquiring a fifth picture containing the facial features of a test person, wherein the head of the test person in the fifth picture wears neither a safety helmet nor a helmet-like object;
and synthesizing the fourth picture and the fifth picture into a sixth picture, in which the test person is in the state of wearing a helmet-like object rather than a safety helmet.
According to a specific implementation manner of the embodiment of the present disclosure, the positive sample data and the negative sample data are sample data with different simulation parameter types, where the simulation parameter types include at least one of a local gaussian brightness variation parameter, a helmet region color variation parameter, and a background color expansion parameter.
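The simulation-parameter variations named above can be sketched as a small augmentation routine: a local Gaussian brightness bump plus a random color shift confined to the helmet region. The specific ranges (a 40-level brightness peak, a ±30 channel shift) are illustrative assumptions, not values from the patent.

```python
import numpy as np

def augment_sample(img, helmet_box, seed=0):
    """Apply two of the simulation parameters to one sample:
    a local Gaussian brightness variation and a helmet-region
    color variation. Ranges are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    out = img.astype(float)
    h, w = out.shape[:2]
    # Local Gaussian brightness variation centred at a random point.
    cy, cx = rng.integers(0, h), rng.integers(0, w)
    yy, xx = np.mgrid[0:h, 0:w]
    sigma = 0.25 * max(h, w)
    bump = 40.0 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    out += bump[..., None]
    # Helmet-region color variation: shift each channel independently.
    x, y, bw, bh = helmet_box
    out[y:y + bh, x:x + bw] += rng.uniform(-30, 30, size=3)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Background-color expansion, the third parameter type, would be handled analogously on the region outside the person.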
According to a specific implementation manner of the embodiment of the present disclosure, the step of verifying the identity information of the target person includes:
acquiring identity identification information of a target person;
and verifying the identification information of the target person according to a pre-stored person information set.
According to a specific implementation manner of the embodiment of the disclosure, the identification information of the target person is a target picture containing facial features of the target person;
the step of verifying the identification information of the target person according to the pre-stored person information set comprises:
and verifying the identity information of the target person according to a pre-stored person facial feature set by using the target picture.
In a second aspect, an embodiment of the present disclosure provides an authentication apparatus, including:
the acquisition module is used for acquiring a target picture containing the facial features of a target person;
the first framing module is used for framing out a face area from the target picture;
the second framing module is used for framing a detection area from the target picture by taking a face area in the target picture as a reference according to a preset frame expansion direction and size, wherein the detection area comprises the face area;
the analysis module is used for analyzing the characteristic points in the detection area by using a preset safety helmet detection model so as to obtain the wearing state of the safety helmet of the target person, wherein the wearing state of the safety helmet comprises a wearing safety helmet and an unworn safety helmet;
the verification module is used for verifying the identity information of the target personnel;
an authentication module to:
if the identity information of the target person passes verification and the wearing state of the safety helmet of the target person is that the safety helmet is worn, the authentication is successful;
and if the identity information of the target person is not verified, or the wearing state of the safety helmet of the target person is that the safety helmet is not worn, the authentication fails.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, where the electronic device includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect or any implementation of the first aspect.
In the authentication method, the authentication device, and the electronic equipment of the embodiments of the disclosure, the method comprises: collecting a target picture containing the facial features of a target person; framing a face area in the target picture; framing a detection area in the target picture, taking the face area as a reference and following a preset frame-expansion direction and size, wherein the detection area comprises the face area; analyzing the feature points in the detection area with a preset safety helmet detection model to acquire the helmet-wearing state of the target person, wherein the wearing state comprises worn and not worn; and verifying the identity information of the target person. If the identity information passes verification and the target person is wearing a safety helmet, the authentication succeeds; if the identity information fails verification, or the target person is not wearing a safety helmet, the authentication fails. Through this scheme, the efficiency and accuracy of detecting the helmet-wearing state are improved, the identity of the target person is verified, and the accuracy of the combined detection for the person to be detected is further improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings needed to be used in the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present disclosure, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flowchart of an authentication method according to an embodiment of the present disclosure;
fig. 2 is a partial schematic flow chart of another authentication method provided in the embodiment of the present disclosure;
fig. 3 is a schematic diagram of a target picture involved in an authentication method provided by an embodiment of the present disclosure;
fig. 4 is a partial flow chart of another authentication method provided by the embodiment of the present disclosure;
fig. 5 is a partial flow chart of another authentication method provided by the embodiment of the present disclosure;
fig. 6 is a partial flow chart of another authentication method provided by the embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an authentication apparatus according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
The embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
The embodiments of the present disclosure are described below with specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from the disclosure in the specification. It is to be understood that the described embodiments are merely illustrative of some, and not restrictive, of the embodiments of the disclosure. The disclosure may be embodied or carried out in various other specific embodiments, and various modifications and changes may be made in the details within the description without departing from the spirit of the disclosure. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present disclosure, and the drawings only show the components related to the present disclosure rather than the number, shape and size of the components in actual implementation, and the type, amount and ratio of the components in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
The embodiment of the disclosure provides an authentication method. The authentication method provided by the present embodiment may be executed by a computing apparatus, which may be implemented as software, or implemented as a combination of software and hardware, and which may be integrally provided in a server, a terminal device, or the like.
Referring to fig. 1, an authentication method provided in an embodiment of the present disclosure includes:
S101: collecting a target picture containing the facial features of a target person;
the authentication method provided by the embodiment of the disclosure can be applied to the personnel authentication process in the scenes such as construction sites and the like. Before authentication, picture information of a person to be authenticated needs to be collected first. And defining the person to be authenticated as a target person, and defining the acquired picture containing the target person as a target picture. The "target person facial features" may be understood as facial features of the target person, which contain facial key point features. For example, a picture of a face of a person includes key point features of a face, eyes, nose, mouth, ears, and the like, and a set of these key point features is the face feature included in the picture. The target picture including the facial features of the target person may be understood as that the target picture may have other body part features of the target person besides the facial features of the target person, such as part features of the neck, limbs, and the like of the target person. Of course, besides, the target picture may also include environmental features of the target person, such as an environmental brightness feature, a live object background feature, and the like.
In the embodiment of the disclosure, the target picture containing the facial features of the target person may be acquired dynamically, by capturing video and extracting the target picture from it, or statically, by capturing a picture of the fixed region where the target person is located.
S102: selecting a face area from the target picture;
after the target picture containing the facial features of the target person is obtained through the steps, the face area containing the facial features of the target person can be selected from the target picture.
In this process, an image acquisition terminal such as a camera captures a target picture containing the facial features of the target person and sends an image-processing request for it to a server. The server receives the request, reads the target picture, and acquires a plurality of pieces of key-point feature information representing the facial features of the target person. It then frames a face area containing those features in the target picture according to preset frame-selection condition information.
Optionally, the boxes used for the framing operation may be rectangular, circular, or of other shapes. In operation, the face area containing the facial features of the target person is framed in the target picture with the preset selection box.
S103: framing a detection area in the target picture, taking the face area in the target picture as a reference and following a preset frame-expansion direction and size, wherein the detection area comprises the face area;
after a face region containing the facial features of a target person is selected from the target picture, a relatively large region is expanded outwards by taking the face region in the target picture as a reference according to the preset frame expansion direction and size for a subsequent detection step, and the large region selected from the frame can be defined as a detection region.
When the method is implemented, expansion is carried out according to the preset frame expansion direction and size. For example, the frame expansion is performed in a certain reference direction in the target picture, or the frame expansion is performed with reference to a direction in which a person is located in the target picture. In addition, the size of the expansion frame can be set according to the size of the human face, or the whole size of the human face when the safety helmet is worn. Optionally, the frame expansion direction and the frame expansion size may be set according to specific situations, and are not limited.
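The frame-expansion step can be sketched as follows. The expansion ratios (a full face height upward, 30% of the face width to each side, 10% downward) are illustrative assumptions — the patent only requires a preset direction and size.

```python
def expand_box(face_box, img_w, img_h, up=1.0, side=0.3, down=0.1):
    """Expand a face rectangle (x, y, w, h) into a detection region.

    up/side/down are fractions of the face height/width to add above,
    to each side, and below the face; the result is clipped to the
    picture bounds and always contains the original face area.
    """
    x, y, w, h = face_box
    nx = max(int(x - side * w), 0)
    ny = max(int(y - up * h), 0)
    nx2 = min(int(x + w + side * w), img_w)
    ny2 = min(int(y + h + down * h), img_h)
    return (nx, ny, nx2 - nx, ny2 - ny)

# Example: a 100x100 face at (200, 300) in a 1920x1080 picture.
print(expand_box((200, 300, 100, 100), 1920, 1080))
# prints (170, 200, 160, 210)
```

The upward expansion is the largest because the safety helmet sits above the face area.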
S104: analyzing the feature points in the detection area with a preset safety helmet detection model to acquire the helmet-wearing state of the target person, wherein the wearing state comprises worn and not worn;
the electronic device of the embodiment may be pre-stored with a helmet detection model, and the helmet detection model may analyze the wearing state of the personal helmet in the picture by receiving the picture containing the facial features of the person input by the user, where the wearing state includes wearing the helmet and non-wearing the helmet. Specifically, an analysis algorithm and a screening condition corresponding to the wearing state of the safety helmet are preset in the safety helmet detection model. The analysis algorithm can extract facial feature points of the person in the picture, judge whether the feature points in the picture meet the screening condition, determine that the wearing state of the safety helmet of the person in the picture is the wearing safety helmet when the feature points meet the screening condition, and otherwise determine that the wearing state of the safety helmet is the non-wearing safety helmet. In the embodiment of the disclosure, the safety helmet detection model is obtained through a built network framework, and the safety helmet detection model contains characteristic information of a worn safety helmet and characteristic information of an unworn safety helmet.
S105: verifying the identity information of the target person;
after the safety helmet detection step is completed, or before the safety helmet detection step is completed, the authentication method protected by the embodiment of the disclosure further adds a verification process aiming at the identity information of the target person, so as to further improve the accuracy of comprehensive detection for verifying the access state of the target person.
In a specific implementation, the step of verifying the identity information of the target person may include:
acquiring identity identification information of a target person;
and verifying the identification information of the target person according to a pre-stored person information set.
The electronic equipment/terminal can identify the target person by collecting the person's biometric information, or the identification of a portable device such as a mobile phone. The biometric information includes the target person's facial features, fingerprints, irises, and the like; the portable-device identification includes a work-badge identification, an identity-card identification, a mobile-phone display identification, and the like.
Further, the electronic device of this embodiment may store the identity identification information of all authorized persons in advance. It receives the identification information of the target person and compares it against the pre-stored person information set to verify the target person's identity.
Optionally, the person information set may be a database storing each person's identity identification information, in one-to-one correspondence with the identification information the person provides during authentication. The database may also store each person's personal information, related to the identification information acquired during authentication by inclusion or by mapping. In other words, the personal information in the database and the identification information presented during authentication only need to match; the form is not limited.
In one embodiment, the identification information of the target person may be the target picture containing the person's facial features. Since step S101 already acquires this picture, the image of the target person need not be captured again, and no other biometric information or portable-device identification is required. This saves acquisition and computation, improving both detection efficiency and detection precision.
When the identification information of the target person is a target picture containing facial features of the target person, the step of verifying the identification information of the target person according to a pre-stored person information set may include: and verifying the identity information of the target person according to a pre-stored person facial feature set by using the target picture.
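The facial-feature comparison can be sketched as follows, assuming each stored person is represented by an embedding vector extracted from their face. Cosine similarity and the 0.6 acceptance threshold are illustrative assumptions; the patent does not fix a metric.

```python
import numpy as np

def verify_identity(query_feat, stored_feats, threshold=0.6):
    """Compare the target picture's face embedding against the
    pre-stored facial-feature set; return the matched person id,
    or None when no stored person is similar enough.

    stored_feats: dict mapping person id -> embedding vector.
    """
    q = np.asarray(query_feat, float)
    q = q / np.linalg.norm(q)
    best_id, best_sim = None, threshold
    for pid, feat in stored_feats.items():
        f = np.asarray(feat, float)
        sim = float(q @ (f / np.linalg.norm(f)))  # cosine similarity
        if sim > best_sim:
            best_id, best_sim = pid, sim
    return best_id
```

A return value of None corresponds to the "identity information is not verified" branch of the method.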
S106: if the identity information of the target person passes verification and the wearing state of the safety helmet of the target person is that the safety helmet is worn, the authentication is successful;
S107: and if the identity information of the target person is not verified, or the wearing state of the safety helmet of the target person is that the safety helmet is not worn, the authentication fails.
In the embodiment of the disclosure, the authentication check has two aspects: detecting the helmet-wearing state of the target person on the one hand, and verifying the target person's identity information on the other, so as to prevent impersonation as far as possible. Authentication passes only when both checks meet the detection requirements at the same time; if either check fails, the authentication fails.
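The two-sided decision of steps S106/S107 reduces to a single conjunction:

```python
def authenticate(identity_verified, wearing_state):
    """Combine the two checks exactly as S106/S107 describe:
    authentication succeeds only when the identity is verified AND
    the helmet-wearing state is 'worn'; either failure fails it."""
    return identity_verified and wearing_state == "worn"
```

The string value "worn" mirrors the two wearing states defined in step S104.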
The processing after authentication may take various forms depending on the specific application scenario. For example, at a construction-site access gate, if authentication passes, the gate may be opened to allow the target person to enter the construction site; if authentication fails, the gate is not opened and the target person cannot enter. Alternatively, in other scenarios, the authentication method may be used only to screen out persons who satisfy the authentication conditions.
In a specific embodiment, as shown in fig. 2, the step of selecting a detection region from the target picture frame by taking the face region in the target picture as a reference according to the preset frame expansion direction and size in step S103 may include:
S201: framing a first adjacent area of the face area along a first direction and framing a second adjacent area of the face area along a second direction, by taking the face area in the target picture as a reference;
Referring to fig. 3, a two-dimensional coordinate system XOY is taken as a reference, and a first direction along the X-axis direction and a second direction along the Y-axis direction are predefined. Taking the face area selected by the abcd frame in the target picture as a reference, a preset length is expanded in the positive and negative directions of the X axis to obtain a new area, which is defined as the first adjacent area; similarly, taking the same face area selected by the abcd frame as a reference, a preset length is expanded in the positive and negative directions of the Y axis to obtain another new area, which is defined as the second adjacent area.
More specifically, the first direction may be perpendicular to the center line of the face area selected by the abcd frame, and the second direction may coincide with that center line. In detail, referring to fig. 3, when the X axis of the two-dimensional coordinate system XOY is perpendicular to the center line of the face area, the X axis may be selected as the first direction; when the Y axis of the two-dimensional coordinate system XOY coincides with the center line of the face area, the Y axis may be selected as the second direction.
Further, the length of the first adjacent area along the first direction ranges from 0 to 0.6 meter, and the length of the second adjacent area along the second direction ranges from 0 to 1 meter. A detection area framed in this way improves the accuracy of the safety helmet detection model and the accuracy with which the feature points of the detection area of the target picture are analyzed, so that safety helmet wearing state information of higher detection accuracy is obtained. In a specific embodiment of the present disclosure, the length of the first adjacent area along the first direction is 0.3 meter and the length of the second adjacent area along the second direction is 0.5 meter; of course, other ranges and values may be used in other embodiments.
S202: determining a combined region including the face region, the first adjacent region, and the second adjacent region as the detection region.
As described in step S201, the face area is the area enclosed by the abcd frame, and the combined area formed by the face area, the first adjacent area and the second adjacent area is the area enclosed by the efgh frame. The area enclosed by the efgh frame is defined as the detection area and is used in the subsequent authentication detection process.
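The frame expansion of steps S201–S202 can be sketched as simple box arithmetic: expand the abcd face box by a preset length along both directions of the X axis and both directions of the Y axis, clamped to the picture bounds. The function name and the clamping behavior are illustrative assumptions:

```python
def expand_face_box(face_box, dx, dy, img_w, img_h):
    """face_box = (x1, y1, x2, y2) in pixels (the abcd frame).
    Expand by dx along the positive and negative X directions and by dy
    along the positive and negative Y directions, clamped to the picture
    bounds; the result corresponds to the detection region (efgh frame)."""
    x1, y1, x2, y2 = face_box
    return (max(0, x1 - dx), max(0, y1 - dy),
            min(img_w, x2 + dx), min(img_h, y2 + dy))
```

Note that the disclosure states the expansion lengths in meters (up to 0.6 m horizontally, 1 m vertically), so in practice they would be converted to pixels using the camera's scale before this step.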
In a specific embodiment, as shown in fig. 4, before the step of analyzing feature points in the detection area by using a preset helmet detection model to obtain the wearing state of the helmet of the target person, the method further includes:
S401: acquiring positive sample data of a state in which a test person wears the safety helmet, and acquiring negative sample data of a state in which the test person does not wear the safety helmet;
the larger the number of positive and negative sample data is, the more the detection accuracy can be improved.
S402: and training a multitask cascaded Convolutional Neural Network (MTCNN) by using the positive sample data and the negative sample data to obtain the safety helmet detection model capable of detecting the safety helmet wearing state of the test person.
It should be noted that, during training, the safety helmet detection model retains the positive sample data of the state in which the test person wears the safety helmet and filters out the negative sample data of the state in which the test person does not wear the safety helmet, and these steps are repeated iteratively.
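The disclosure trains an MTCNN on the positive and negative samples; an MTCNN is far too large to reproduce here, so the following toy stand-in only illustrates the same positive/negative supervised scheme, using logistic regression on illustrative two-dimensional feature vectors. Every name and number in this sketch is an assumption, not the patented model:

```python
import math
import random

def train_binary_classifier(positives, negatives, lr=0.1, epochs=200):
    """Toy stand-in for helmet-state training: logistic regression fit by
    log-loss SGD on positive (helmet worn) and negative (no helmet)
    feature vectors."""
    dim = len(positives[0])
    w = [0.0] * dim
    b = 0.0
    data = [(x, 1.0) for x in positives] + [(x, 0.0) for x in negatives]
    for _ in range(epochs):
        random.shuffle(data)
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                      # gradient of log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Probability that feature vector x depicts a worn safety helmet."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))
```

The real model would of course operate on image patches (the detection regions framed above) rather than hand-made feature vectors.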
Further, as shown in fig. 5, the step of acquiring positive sample data of the state of the test person wearing the safety helmet in step S401 may include:
S501: crawling a preset number of first pictures containing the features of the test safety helmet;
A mask is made from the crawled first picture information and input into the safety helmet detection model for subsequent training and optimization of the model. Crawling means acquiring target content information from a network through a preset program, where the target content information includes data such as text, video and pictures. In this embodiment, the target content information is first picture information containing the features of the test safety helmet.
The test safety helmets may be of different colors and different numbers, such as red, yellow, blue, white and orange, and may appear in particular environment scenes, where an environment scene includes environmental features, on-site physical background features, and the like. Configuring safety helmets of different colors, different numbers and different scenes serves to ensure, as far as possible, the diversity of the test safety helmet features contained in the crawled first pictures, thereby improving and optimizing the detection accuracy.
S502: acquiring a second picture containing facial features of the test person, wherein the head of the test person in the second picture does not wear a safety helmet and/or a safety helmet-like;
A mask is made from the acquired second picture information and input into the safety helmet detection model for subsequent training and optimization of the model. Specifically, ordinary caps, clothes, hands, paper, scarves and the like are defined as "safety-helmet-like" objects. A second picture containing the facial features of the test person requires that the test person wear neither a safety helmet nor a safety-helmet-like object; for example, the test person in the second picture may wear nothing on the head at all.
S503: and synthesizing the first picture and the second picture to obtain a third picture, wherein the testing personnel in the third picture is in a state of wearing the testing safety helmet.
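Synthesizing the first and second pictures with a mask (steps S501–S503) amounts to pasting the helmet pixels selected by the mask onto the person picture. A hedged sketch using NumPy arrays as stand-ins for the decoded pictures; the function and argument names are illustrative assumptions:

```python
import numpy as np

def composite_helmet(person_img, helmet_img, mask, top_left):
    """Paste helmet_img onto person_img wherever mask == 1, with the
    helmet's upper-left corner placed at top_left = (row, col); returns
    the synthesized picture, leaving the inputs untouched."""
    out = person_img.copy()
    y, x = top_left
    h, w = helmet_img.shape[:2]
    region = out[y:y + h, x:x + w]
    # Broadcast the (h, w) mask over the color channels.
    out[y:y + h, x:x + w] = np.where(mask[..., None] == 1, helmet_img, region)
    return out
```

In a real pipeline, top_left would be derived from the detected head position so that the helmet lands plausibly on the head.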
In another embodiment, as shown in fig. 6, the step of acquiring negative sample data of the state of the test person not wearing the safety helmet in step S401 may include:
S601: crawling a preset number of fourth pictures containing feature information of safety-helmet-like objects;
A mask is made from the crawled fourth picture information and input into the safety helmet detection model for subsequent training and optimization of the model. As above, ordinary caps, clothes, hands, paper, scarves and the like are defined as "safety-helmet-like" objects. Crawling means acquiring target content information from a network through a preset program, where the target content information includes data such as text, video and pictures. In this embodiment, the target content information is fourth picture information containing safety-helmet-like features.
S602: acquiring a fifth picture containing facial features of the test person, wherein the head of the test person in the fifth picture wears neither a safety helmet nor a safety-helmet-like object;
A mask is made from the acquired fifth picture information and input into the safety helmet detection model for subsequent training and optimization of the model. A fifth picture containing the facial features of the test person requires that the test person wear neither a safety helmet nor a safety-helmet-like object; for example, the test person in the fifth picture may wear nothing on the head at all.
S603: and synthesizing the fourth picture and the fifth picture to obtain a sixth picture, wherein the test person in the sixth picture is in a state of wearing a safety-helmet-like object.
In a specific embodiment, the positive sample data and the negative sample data in the above steps are sample data with different simulation parameter types, where the simulation parameter types include at least one of a gaussian local brightness variation parameter, a helmet region color variation parameter, and a background color expansion parameter.
Understandably, the simulation parameter types are used for simulated training of lighting in the constructed convolutional neural network; for example, for simulated training of lighting in a pruned AlexNet network.
Specifically, the simulated lighting training may be performed when the positive sample data and the negative sample data are selected; or, after a preliminary model has been obtained, lighting scenes may continue to be trained so as to improve the simulated lighting handling while refining the model accuracy; or the simulated lighting training may be performed while the safety helmet detection model analyzes the feature points in the detection area to obtain the safety helmet wearing state of the target person.
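A Gaussian local brightness variation, one of the listed simulation parameter types, can be sketched as adding a brightness term with a Gaussian spatial falloff to the picture, simulating uneven lighting. The parameterization below is an illustrative assumption:

```python
import numpy as np

def gaussian_brightness(img, center, sigma, gain):
    """Apply a local brightness change with a Gaussian falloff centred at
    `center` = (row, col); `gain` is the peak brightness offset and
    `sigma` controls how quickly the effect decays with distance."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = center
    falloff = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))
    out = img.astype(np.float64) + gain * falloff[..., None]
    return np.clip(out, 0, 255).astype(np.uint8)
```

Similar transforms could perturb the helmet-region hue or extend the background colors, per the other parameter types named above.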
Referring to fig. 7, in correspondence with the above method embodiment, the present disclosure embodiment further provides an authentication apparatus 70, including:
an acquisition module 701, configured to acquire a target picture including facial features of a target person;
a first framing module 702, configured to frame a face region from the target picture;
a second framing module 703, configured to select a detection region from the target picture based on a face region in the target picture according to a preset frame expansion direction and size, where the detection region includes the face region;
an analysis module 704, configured to analyze feature points in the detection area by using a preset helmet detection model to obtain a helmet wearing state of the target person, where the helmet wearing state includes a worn helmet and an unworn helmet;
a verification module 705, configured to verify identity information of the target person;
an authentication module 706 to:
if the identity information of the target person passes verification and the wearing state of the safety helmet of the target person is that the safety helmet is worn, the authentication is successful;
and if the identity information of the target person is not verified, or the wearing state of the safety helmet of the target person is that the safety helmet is not worn, the authentication fails.
The apparatus shown in fig. 7 may correspondingly execute the content in the above method embodiment, and details of the part not described in detail in this embodiment refer to the content described in the above method embodiment, which is not described again here.
Referring to fig. 8, an embodiment of the present disclosure also provides an electronic device 80, which includes:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the authentication method of the preceding method embodiment.
Referring now to FIG. 8, a block diagram of an electronic device 80 suitable for use in implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 8, the electronic device 80 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 801 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)802 or a program loaded from a storage means 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data necessary for the operation of the electronic apparatus 80 are also stored. The processing apparatus 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to bus 804.
Generally, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, or the like; output devices 807 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage 808 including, for example, magnetic tape, hard disk, etc.; and a communication device 809. The communication means 809 may allow the electronic device 80 to communicate wirelessly or by wire with other devices to exchange data. While the figures illustrate an electronic device 80 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means 809, or installed from the storage means 808, or installed from the ROM 802. The computer program, when executed by the processing apparatus 801, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof.
The above description is only for the specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present disclosure should be covered within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (11)

1. An authentication method, comprising:
collecting a target picture containing the facial features of a target person;
selecting a face area from the target picture;
selecting a detection area from the target picture frame by taking a face area in the target picture as a reference according to a preset frame expansion direction and size, wherein the detection area comprises the face area;
analyzing the characteristic points in the detection area by using a preset safety helmet detection model to acquire the wearing state of the safety helmet of the target person, wherein the wearing state of the safety helmet comprises a worn safety helmet and an unworn safety helmet;
verifying the identity information of the target person;
if the identity information of the target person passes verification and the wearing state of the safety helmet of the target person is that the safety helmet is worn, the authentication is successful;
and if the identity information of the target person is not verified, or the wearing state of the safety helmet of the target person is that the safety helmet is not worn, the authentication fails.
2. The authentication method according to claim 1, wherein the step of framing the detection region from the target picture based on the face region in the target picture according to the preset frame expansion direction and size comprises:
selecting a first adjacent area of the face area along a first direction frame and selecting a second adjacent area of the face area along a second direction frame by taking the face area in the target picture as a reference;
determining a combined region including the face region, the first adjacent region, and the second adjacent region as the detection region.
3. The authentication method according to claim 2, wherein the first direction is perpendicular to a center line of the face region, and the second direction coincides with the center line of the face region;
the length of the first adjacent area along the first direction ranges from 0 meter to 0.6 meter, and the length of the second adjacent area along the second direction ranges from 0 meter to 1 meter.
4. The authentication method according to any one of claims 1 to 3, wherein before the step of analyzing feature points in the detection area by using a preset helmet detection model to obtain the helmet wearing state of the target person, the method further comprises:
acquiring positive sample data of a state that a tester wears the safety helmet, and acquiring negative sample data of a state that the tester does not wear the safety helmet;
and training a multitask cascade convolution neural network by using the positive sample data and the negative sample data to obtain the safety helmet detection model capable of detecting the wearing state of the safety helmet of the tester.
5. The authentication method of claim 4, wherein the step of obtaining positive sample data of the state of the test person wearing the safety helmet comprises:
crawling a preset number of first pictures containing the characteristics of the test safety helmet;
acquiring a second picture containing facial features of the test person, wherein the head of the test person in the second picture wears neither a safety helmet nor a safety-helmet-like object;
and synthesizing the first picture and the second picture to obtain a third picture, wherein the testing personnel in the third picture is in a state of wearing the testing safety helmet.
6. The authentication method of claim 5, wherein the step of obtaining negative sample data of the status of the test person without the safety helmet comprises:
crawling a preset number of fourth pictures containing characteristic information of similar safety helmets;
acquiring a fifth picture containing facial features of the test person, wherein the head of the test person in the fifth picture wears neither a safety helmet nor a safety-helmet-like object;
and synthesizing the fourth picture and the fifth picture to obtain a sixth picture, wherein the test person in the sixth picture is in a state of wearing a safety-helmet-like object.
7. The authentication method according to claim 4, wherein the positive sample data and the negative sample data are sample data having different simulation parameter types, wherein the simulation parameter types include at least one of a Gaussian local brightness variation parameter, a helmet region color variation parameter, and a background color expansion parameter.
8. The authentication method according to claim 1, wherein the step of verifying the identity information of the target person comprises:
acquiring identity identification information of a target person;
and verifying the identification information of the target person according to a pre-stored person information set.
9. The authentication method according to claim 8, wherein the identification information of the target person is a target picture containing facial features of the target person;
the step of verifying the identification information of the target person according to the pre-stored person information set comprises:
and verifying the identity information of the target person according to a pre-stored person facial feature set by using the target picture.
10. An authentication apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a target picture containing the facial features of a target person;
the first framing module is used for framing out a face area from the target picture;
the second framing module is used for framing a detection area from the target picture by taking a face area in the target picture as a reference according to a preset frame expansion direction and size, wherein the detection area comprises the face area;
the analysis module is used for analyzing the characteristic points in the detection area by using a preset safety helmet detection model so as to obtain the wearing state of the safety helmet of the target person, wherein the wearing state of the safety helmet comprises a worn safety helmet and an unworn safety helmet;
the verification module is used for verifying the identity information of the target personnel;
an authentication module to:
if the identity information of the target person passes verification and the wearing state of the safety helmet of the target person is that the safety helmet is worn, the authentication is successful;
and if the identity information of the target person is not verified, or the wearing state of the safety helmet of the target person is that the safety helmet is not worn, the authentication fails.
11. An electronic device, characterized in that the electronic device comprises:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the authentication method of any one of the preceding claims 1-9.
CN201911028136.2A 2019-10-28 2019-10-28 Authentication method and device and electronic equipment Pending CN110781833A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911028136.2A CN110781833A (en) 2019-10-28 2019-10-28 Authentication method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN110781833A true CN110781833A (en) 2020-02-11

Family

ID=69387051

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911028136.2A Pending CN110781833A (en) 2019-10-28 2019-10-28 Authentication method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN110781833A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111275058A (en) * 2020-02-21 2020-06-12 上海高重信息科技有限公司 A safety helmet wearing and color recognition method and device based on pedestrian re-identification
CN112183284A (en) * 2020-09-22 2021-01-05 上海钧正网络科技有限公司 Safety information verification and designated driving order receiving control method and device
CN114066155A (en) * 2021-10-14 2022-02-18 国网上海市电力公司 Field work method, apparatus, electronic equipment and storage medium
CN114495191A (en) * 2021-11-30 2022-05-13 珠海亿智电子科技有限公司 Combined safety helmet wearing real-time detection method based on end side

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107578041A (en) * 2017-10-27 2018-01-12 华润电力技术研究院有限公司 A kind of detecting system
CN108174165A (en) * 2018-01-17 2018-06-15 重庆览辉信息技术有限公司 Electric power safety operation and O&M intelligent monitoring system and method
CN108319934A (en) * 2018-03-20 2018-07-24 武汉倍特威视系统有限公司 Safety cap wear condition detection method based on video stream data
CN108537256A (en) * 2018-03-26 2018-09-14 北京智芯原动科技有限公司 A kind of safety cap wears recognition methods and device
CN109145789A (en) * 2018-08-09 2019-01-04 炜呈智能电力科技(杭州)有限公司 Power supply system safety work support method and system
CN208796293U (en) * 2018-09-06 2019-04-26 厦门路桥信息股份有限公司 Construction site information managing and control system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200211