
CN111414831B - Monitoring method and system, electronic device and storage medium

Info

Publication number
CN111414831B
Authority
CN
China
Prior art keywords
area
target object
living body
detection result
detection
Prior art date
Legal status (assumed; not a legal conclusion)
Active
Application number
CN202010177537.0A
Other languages
Chinese (zh)
Other versions
CN111414831A (en)
Inventor
方志军
李若岱
罗彬
田士民
Current Assignee (the listed assignee may be inaccurate)
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd
Priority to CN202010177537.0A
Publication of CN111414831A
Priority to JP2021536787A
Priority to PCT/CN2020/124151
Priority to SG11202106842P
Priority to TW110100321A
Application granted
Publication of CN111414831B
Current legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01K MEASURING TEMPERATURE; MEASURING QUANTITY OF HEAT; THERMALLY-SENSITIVE ELEMENTS NOT OTHERWISE PROVIDED FOR
    • G01K13/00 Thermometers specially adapted for specific purposes
    • G01K13/20 Clinical contact thermometers for use with humans or animals
    • G01K13/223 Infrared clinical thermometers, e.g. tympanic
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Emergency Alarm Devices (AREA)
  • Alarm Systems (AREA)
  • Radiation Pyrometers (AREA)
  • Signal Processing For Digital Recording And Reproducing (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to a monitoring method and system, an electronic device, and a storage medium, the method including: carrying out target area identification on the visible light image to obtain a first area where a target object is located and a second area where a preset part is located; according to the size of the first area and a third area corresponding to the first area in the infrared image, performing living body detection on the target object to obtain a living body detection result; when the living body detection result is a living body, performing identity recognition on the first area to obtain identity information, and performing temperature detection on a corresponding fourth area of the second area in the infrared image to obtain temperature information; and under the condition that the temperature information is greater than or equal to the preset threshold value, generating first early warning information according to the identity information. According to the monitoring method disclosed by the embodiment of the invention, living body detection and temperature detection can be carried out based on the infrared image, the body temperature of the target object in the image can be quickly detected, the body temperature detection efficiency is improved, and the method is suitable for places with large pedestrian flow.

Description

Monitoring method and system, electronic device and storage medium
Technical Field
The present disclosure relates to the field of computer vision technologies, and in particular, to a monitoring method and system, an electronic device, and a storage medium.
Background
In the control of epidemics caused by respiratory viruses, early detection and early isolation are the most basic and effective measures. Early detection means using medical diagnosis and measurement to screen suspicious persons as early as possible, which may include body temperature detection. Early isolation means reducing contact between virus-carrying patients and other people to lower the risk of cross infection; measures may include non-contact body temperature detection, wearing masks, and the like.
In the related art, body temperature can be measured with thermometers, thermal imagers, and the like, but the detection process is cumbersome and difficult to apply in places with heavy foot traffic, and when a suspected case with fever symptoms is detected, the identity information of the suspected case is difficult to verify.
Disclosure of Invention
The disclosure provides a monitoring method and system, an electronic device and a storage medium.
According to an aspect of the present disclosure, there is provided a monitoring method including: carrying out target area identification on a visible light image of a monitoring area to obtain a first area where a target object is located in the visible light image and a second area where a preset part of the target object is located; according to the size information of the first area and a third area corresponding to the first area in the infrared image of the monitoring area, performing living body detection on the target object to obtain a living body detection result; under the condition that the living body detection result is a living body, performing identity recognition processing on the first area to obtain identity information of the target object, and performing temperature detection processing on a fourth area corresponding to the second area in the infrared image to obtain temperature information of the target object; and generating first early warning information according to the target object and the identity information thereof under the condition that the temperature information is greater than or equal to a preset temperature threshold value.
According to the monitoring method disclosed by the embodiment of the invention, living body detection and temperature detection can be carried out based on the infrared image, the body temperature of the target object in the image can be quickly detected, the body temperature detection efficiency is improved, and the method is suitable for places with large pedestrian flow. And the identity information of the target object can be identified through the visible light image, which is beneficial to determining the identity information of suspected cases and improving the epidemic prevention efficiency.
In one possible implementation manner, performing living body detection on the target object according to the size information of the first region and a corresponding third region of the first region in the infrared image to obtain a living body detection result, includes: determining the distance between the target object and an image acquisition device for acquiring the visible light image according to the size information of the first area; determining a living body detection strategy according to the distance; determining the position of the third area in the infrared image according to the position of the first area in the visible light image; and performing living body detection processing on each third area in the infrared image according to the living body detection strategy to obtain a living body detection result.
In this way, the in-vivo detection strategy can be determined by the size of the first region, and the in-vivo detection can be performed based on the in-vivo detection strategy, and the in-vivo detection accuracy in the far-infrared image can be improved by isothermal analysis.
In one possible implementation, determining a liveness detection strategy according to the distance includes one of: determining the in-vivo detection strategy as an in-vivo detection strategy based on human body morphology if the distance is greater than or equal to a first distance threshold; determining the in vivo detection strategy as a head and neck morphology based in vivo detection strategy if the distance is greater than or equal to a second distance threshold and less than the first distance threshold; determining the liveness detection policy as a face morphology based liveness detection policy if the distance is less than the second distance threshold.
In one possible implementation manner, performing living body detection processing on the third area according to the living body detection strategy to obtain the living body detection result includes: according to the living body detection strategy, carrying out morphological detection processing on the third area to obtain a morphological detection result; when the form detection result is a living form, performing isothermal analysis processing on the face area in the third area to obtain an isothermal analysis result; determining a first weight of a morphological detection result and a second weight of the isothermal analysis result according to the in-vivo detection strategy; and determining the in-vivo detection result according to the first weight, the second weight, the morphological detection result and the isothermal analysis result.
In a possible implementation manner, the method further includes, in a case that a living body detection result is a living body, detecting whether a preset target is included in the first area to obtain the first detection result, where the preset target includes an article that blocks a partial area of a face, and in a case that the living body detection result is the living body, performing identification processing on the first area to obtain the identity information of the target object, including: and under the condition that the living body detection result is a living body, carrying out identity recognition processing on the first area according to the first detection result to obtain the identity information of the target object.
By the method, the identity recognition method can be selected according to whether the preset target exists or not, and the accuracy of identity recognition can be improved.
In a possible implementation manner, detecting whether a preset target is included in the first area, and obtaining the first detection result includes: detecting a face region in the first region, and determining a feature missing result of the face region; and under the condition that the feature missing result is preset feature missing, detecting whether a preset target is included in the face area or not, and obtaining the first detection result.
In a possible implementation manner, performing identity recognition processing on the first area according to the first detection result to obtain the identity information of the target object includes one of the following: under the condition that the first detection result is that the preset target does not exist, performing first identity recognition processing on a face area in the first area to obtain identity information of the target object; and under the condition that the first detection result is that the preset target exists, performing second identification processing on the face area in the first area to obtain the identity information of the target object, wherein the weight of the feature of the non-occluded area of the face in the second identification processing is greater than the weight of the feature of the corresponding area in the first identification processing.
In one possible implementation, the method further includes: and generating second early warning information under the condition that the first detection result indicates that the preset target does not exist.
In one possible implementation, the method further includes: acquiring gender information of the target object; and determining the preset temperature threshold according to the gender information.
In one possible implementation, the method further includes: and overlapping the position information of the first area or the third area, the temperature information of the target object and the identity information of the target object with the visible light image and/or the infrared image to obtain a detection image.
According to an aspect of the present disclosure, there is provided a monitoring system comprising: visible light image acquisition subassembly, infrared image acquisition subassembly, processing subassembly is used for: carrying out target area identification on a visible light image of a monitoring area acquired by a visible light image acquisition assembly to acquire a first area where a target object is located in the visible light image and a second area where a preset part of the target object is located; according to the size information of the first area and a third area corresponding to the first area in the infrared image acquired by the infrared image acquisition assembly, performing living body detection on the target object to obtain a living body detection result; under the condition that the living body detection result is a living body, performing identity recognition processing on the first area to obtain identity information of the target object, and performing temperature detection processing on a fourth area corresponding to the second area in the infrared image to obtain temperature information of the target object; and generating first early warning information according to the target object and the identity information thereof under the condition that the temperature information is greater than or equal to a preset temperature threshold value.
In one possible implementation, the processing component is further configured to: determining the distance between the target object and an image acquisition device for acquiring the visible light image according to the size information of the first area; determining a living body detection strategy according to the distance; determining the position of the third area in the infrared image according to the position of the first area in the visible light image; and performing living body detection processing on each third area in the infrared image according to the living body detection strategy to obtain the living body detection result.
In one possible implementation, the processing component is further configured to: determining the in-vivo detection strategy as an in-vivo detection strategy based on human body morphology if the distance is greater than or equal to a first distance threshold; or determining the in-vivo detection strategy as a head and neck morphology-based in-vivo detection strategy if the distance is greater than or equal to a second distance threshold and less than the first distance threshold; or determining the liveness detection policy as a face morphology based liveness detection policy if the distance is less than the second distance threshold.
In one possible implementation, the processing component is further configured to: according to the living body detection strategy, carrying out morphological detection processing on the third area to obtain a morphological detection result; when the form detection result is a living form, performing isothermal analysis processing on the face area in the third area to obtain an isothermal analysis result; determining a first weight of a morphological detection result and a second weight of the isothermal analysis result according to the in-vivo detection strategy; and determining the in-vivo detection result according to the first weight, the second weight, the morphological detection result and the isothermal analysis result.
In one possible implementation, the processing component is further configured to: in a case that the living body detection result is a living body, detecting whether a preset target is included in the first area, and obtaining the first detection result, wherein the preset target includes an article blocking a partial area of the face portion, and the processing component is further configured to: and under the condition that the living body detection result is a living body, carrying out identity recognition processing on the first area according to the first detection result to obtain the identity information of the target object.
In one possible implementation, the processing component is further configured to: detecting a face area in the first area, and determining a feature missing result of the face area; and under the condition that the feature missing result is preset feature missing, detecting whether a preset target is included in the face area or not, and obtaining the first detection result.
In one possible implementation, the processing component is further configured to: under the condition that the first detection result indicates that the preset target does not exist, performing first identity recognition processing on a face area in the first area to obtain identity information of the target object; or when the first detection result is that the preset target exists, performing second identification processing on the face area in the first area to obtain the identity information of the target object, wherein the weight of the feature of the non-occluded area of the face in the second identification processing is greater than the weight of the feature of the corresponding area in the first identification processing.
In one possible implementation, the processing component is further configured to: and overlapping the position information of the first area or the third area, the temperature information of the target object and the identity information of the target object with the visible light image and/or the infrared image to obtain a detection image.
In one possible implementation, the processing component is further configured to: acquiring gender information of the target object; and determining the preset temperature threshold according to the gender information.
In one possible implementation, the processing component is further configured to: and generating second early warning information under the condition that the first detection result indicates that the preset target does not exist.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to: the above monitoring method is performed.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described monitoring method.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 shows a flowchart of a monitoring method according to an embodiment of the present disclosure;
FIG. 2 shows a schematic diagram of in vivo testing in accordance with an embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of an identification process according to an embodiment of the present disclosure;
fig. 4A and 4B show application schematics of a monitoring method according to an embodiment of the present disclosure;
FIG. 5 shows a block diagram of a monitoring system according to an embodiment of the present disclosure;
FIG. 6 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure;
fig. 7 shows a block diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flowchart of a monitoring method according to an embodiment of the present disclosure. As shown in fig. 1, the method includes:
in step S11, performing target area recognition on a visible light image of a monitoring area to obtain a first area where a target object is located in the visible light image and a second area where a preset portion of the target object is located;
in step S12, performing living body detection on the target object according to the size information of the first region and a corresponding third region of the first region in the infrared image of the monitoring region, so as to obtain a living body detection result;
in step S13, when the living body detection result is a living body, performing identification processing on the first region to obtain identification information of the target object, and performing temperature detection processing on a corresponding fourth region of the second region in the infrared image to obtain temperature information of the target object;
in step S14, when the temperature information is greater than or equal to a preset temperature threshold, first warning information is generated according to the target object and the identity information thereof.
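To make the flow of steps S11 to S14 concrete, the following is a minimal Python sketch of how the four steps might be chained. Every injected callable (detect_regions, map_to_infrared, and so on) is a hypothetical placeholder for the detectors described below, not an API defined by this disclosure.

```python
# Hypothetical orchestration of steps S11-S14; the injected callables stand in
# for the detectors described in this disclosure and are not defined here.
def monitor_frame(visible_img, infrared_img,
                  detect_regions,      # S11: visible image -> (first_area, second_area)
                  map_to_infrared,     # visible-image box -> infrared-image box
                  liveness_check,      # (infrared_img, third_area, first_area) -> bool
                  identify,            # (visible_img, first_area) -> identity info
                  read_temperature,    # (infrared_img, fourth_area) -> temperature in deg C
                  temp_threshold=37.3):
    # S11: first area (target object) and second area (preset part, e.g. forehead)
    first_area, second_area = detect_regions(visible_img)

    # S12: third area in the infrared image, liveness via a size-dependent strategy
    third_area = map_to_infrared(first_area)
    if not liveness_check(infrared_img, third_area, first_area):
        return None  # prosthesis or non-human body: no further processing

    # S13: identity from the visible image, temperature from the fourth area
    identity = identify(visible_img, first_area)
    temperature = read_temperature(infrared_img, map_to_infrared(second_area))

    # S14: first early warning information when the threshold is met
    if temperature >= temp_threshold:
        return {"identity": identity, "temperature": temperature}
    return None
```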
According to the monitoring method disclosed by the embodiment of the invention, living body detection and temperature detection can be carried out based on the infrared image, the body temperature of the target object in the image can be quickly detected, the body temperature detection efficiency is improved, and the method is suitable for places with large pedestrian flow. And the identity information of the target object can be identified through the visible light image, which is beneficial to determining the identity information of suspected cases and improving the epidemic prevention efficiency.
In one possible implementation, the monitoring method may be performed by a terminal device or other processing device, where the terminal device may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. The other processing devices may be servers or cloud servers, etc. In some possible implementations, the monitoring method may be implemented by a processor calling computer readable instructions stored in a memory.
In an example, the monitoring method may be used in a front-end monitoring device, for example, the front-end monitoring device may be a device in which a processor, a camera and other components are integrated, the camera and other components may be controlled by the processor, an image of a monitoring area is acquired, and the monitoring method is executed by the processor. In an example, the monitoring method may also be used in a server (for example, a processor for executing the monitoring method is located in the server, and the processor and components such as the camera are not packaged into a whole, but distributed in a monitoring area and a back-end processor to form a monitoring system), and the server may receive an image shot by the camera in the monitoring area and execute the monitoring method on the image.
In one possible implementation, the processor may be a System on a Chip (SoC), and the cameras may include an infrared camera for acquiring infrared images and a camera for acquiring visible light images. In an example, a special sensor such as a vanadium oxide uncooled infrared focal plane detector can be used for capturing a space optical signal, and the sensor can be used in a far infrared camera to acquire life rays for temperature monitoring.
In one possible implementation, the camera may capture a video of the monitored area, and the video frames of the video are the visible light images and infrared images. The infrared camera and the visible light camera may be placed close together, for example, adjacent to each other, side by side, or integrated into a single unit; the present disclosure does not limit the arrangement. Therefore, in images captured by the two cameras at the same time, the position of a target object is approximately the same, that is, the position deviation of the same target object between the visible light image and the corresponding infrared image is small, and the deviation can be corrected according to the positional relationship of the two cameras. For example, the position of the third area of the target object in the infrared image can be determined according to the position of the first area (i.e., the area where the target object is located) in the visible light image.
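As a toy illustration of that correction, the sketch below maps a bounding box from visible-light pixel coordinates to infrared pixel coordinates through a pre-calibrated affine transform. The matrix values are invented for the example; in practice they would come from calibrating the fixed geometry of the two cameras.

```python
import numpy as np

# Hypothetical affine transform (2x3) from visible-light to infrared pixel
# coordinates, as could be calibrated once for two rigidly mounted cameras.
VIS_TO_IR = np.array([[0.98, 0.00, 12.0],
                      [0.00, 0.98, -8.0]])

def map_box_to_infrared(box, affine=VIS_TO_IR):
    """Map an (x1, y1, x2, y2) box in the visible image to the infrared image."""
    corners = np.array([[box[0], box[1], 1.0],
                        [box[2], box[3], 1.0]])
    (x1, y1), (x2, y2) = corners @ affine.T
    return (x1, y1, x2, y2)

# first area in the visible image -> third area in the infrared image
print(map_box_to_infrared((100, 50, 180, 150)))
```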
In a possible implementation manner, an image captured by the camera may have poor quality, for example, due to imaging blur, focus error, or contamination or occlusion of the lens. Image quality may therefore be checked first. For example, image quality detection may be performed on the visible light image, e.g., the sharpness of its textures and boundaries may be measured. If the sharpness is greater than or equal to a sharpness threshold, the image quality is considered good, and subsequent processing such as position detection can proceed. If the sharpness is smaller than the sharpness threshold, the image quality is considered poor, and the visible light image and its corresponding infrared image can both be discarded.
In one possible implementation manner, in step S11, a target area may be identified for the visible light image, and a first area where the target object is located may be obtained, where the first area may be a face area of the target object and/or a body area of the target object. The processing may obtain position coordinates of a first region where the target object is located, or position coordinates of a detection frame including the first region.
In a possible implementation manner, during the temperature detection process, the temperature of the forehead and other areas can be measured, for example, the temperature of the forehead area can be determined through the pixel values of the infrared image, and then the body temperature of the target object can be obtained. Therefore, the second region where the preset part (for example, the region such as the forehead) is located can be detected in the visible light image, the fourth region where the preset part is located in the infrared image can be determined, the corresponding relation between the pixel value of the infrared image and the temperature can be established in advance, and the body temperature of the target object can be determined according to the pixel value in the fourth region.
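For instance, with a linear calibration from raw pixel value to temperature, the body temperature could be read from the fourth area as sketched below. The gain and offset are made-up values standing in for the pre-established correspondence mentioned above; a real sensor would be calibrated, e.g., against a blackbody reference.

```python
import numpy as np

def region_temperature(ir_image, box, gain=0.02, offset=18.0):
    """Estimate the temperature (deg C) of a box in a raw far-infrared frame.

    gain/offset model the pre-established pixel-value-to-temperature
    correspondence described above; both values here are hypothetical.
    """
    x1, y1, x2, y2 = box
    patch = ir_image[y1:y2, x1:x2].astype(np.float32)
    # a high percentile is more robust to noise than the single hottest pixel
    return gain * float(np.percentile(patch, 95)) + offset

ir = np.random.randint(900, 980, size=(240, 320))   # fake raw sensor frame
print(region_temperature(ir, (140, 40, 180, 70)))   # fourth area (forehead)
```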
In one possible implementation, a living body is a biologically active, genuinely living form, typified by a human body with vital signs. A prosthesis is a model made to imitate the biological characteristics of a living body (e.g., a photograph, a mask, a head cover, etc.). In general, living body detection uses a near-infrared image of a human face, relying mainly on differences in imaging characteristics between a living body and a prosthesis under near-infrared light, such as differences in optical flow, texture, and color, to distinguish the living body from the prosthesis. However, in the temperature monitoring process, the image captured by the far-infrared camera is a far-infrared image (i.e., an image based on life rays), not the near-infrared image that is normally used; therefore, the living body detection strategy can be determined by the size of the first area where the target object is located, improving the accuracy of living body detection.
In one possible implementation, step S12 may include: determining the distance between the target object and an image acquisition device for acquiring the visible light image according to the size information of the first area; determining a living body detection strategy according to the distance; determining the position of the third area in the infrared image according to the position of the first area in the visible light image; and performing living body detection processing on each third area in the infrared image according to the living body detection strategy to obtain a living body detection result.
In a possible implementation manner, the first region is the region where a target object is located in the visible light image, for example, the face region of the target object, and the size of that region is inversely related to the distance between the target object and the image acquisition device (the visible light camera): the larger the region, the smaller the distance, and the smaller the region, the larger the distance. Further, a living body exhibits vital signs in an infrared image (for example, a far-infrared image based on life rays). Blood vessels are distributed over the whole human body (the torso, the face, the neck, the shoulders, and so on), but when the target object is close to the camera, only part of the body appears in the visible light image, such as only the face or the head and neck, while the facial features are clearer; the farther the target object, the fewer facial features and the more whole-body features are available. Combining different living body detection strategies for different distances therefore improves the living body detection accuracy and makes the method usable at different distances.
The corresponding relation between the reference size and the reference distance in the image can be established in advance, and the distance between the target object and the camera can be determined according to the proportional relation between the size of the area where the target object is located and the reference size and the reference distance.
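A minimal sketch of that proportionality follows, with made-up reference values (a face about 120 px wide when the subject stands 1 m from the camera):

```python
def estimate_distance(region_width_px, ref_width_px=120.0, ref_distance_m=1.0):
    # Pinhole-style approximation: apparent width scales inversely with
    # distance, so width * distance is roughly constant for a given camera.
    return ref_width_px * ref_distance_m / region_width_px

print(estimate_distance(60.0))   # half the reference width -> about 2.0 m
```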
Fig. 2 illustrates a schematic diagram of a live body detection according to an embodiment of the present disclosure, and as illustrated in fig. 2, a distance between a target object and a camera may be determined according to size information of a first area, and a live body detection strategy may be selected according to the distance. A corresponding third region in the infrared image may be determined based on the first region in the visible image. For example, if the first region is a face region of the target object a in the visible light image (for example, the target object a is closer to the camera, and only the face region of the target object a can be captured), the third region is a face region of the target object a in the infrared image. The first region is a head and neck region of the target object B in the visible light image (for example, a face, a shoulder and neck region, or an upper body region, etc. of the target object B can be photographed), and the third region is the head and neck region of the target object B in the infrared image. The first area is a human body area of the target object C in the visible light image (for example, the target object C is far away from the camera, and the human body area of the target object C can be photographed), and then the third area is a human body area of the target object C in the infrared image.
In one possible implementation, determining a liveness detection strategy according to the distance of the target object includes one of: determining the in-vivo detection strategy as an in-vivo detection strategy based on human body morphology if the distance is greater than or equal to a first distance threshold; determining the in-vivo detection strategy as an in-vivo detection strategy based on a head and neck morphology if the distance is greater than or equal to a second distance threshold and less than the first distance threshold; determining the liveness detection policy as a face morphology based liveness detection policy if the distance is less than the second distance threshold.
In an example, the first distance threshold and the second distance threshold may be determined according to parameters such as a range, a focal length, and the like of a camera detection area. For example, if the distance between the target object and the camera is greater than or equal to the first distance threshold, the whole body of the target object or most of the body may be included in the picture taken by the camera, and then a living body detection strategy based on the human body morphology may be selected. If the distance between the target object and the camera is smaller than the first distance threshold and larger than or equal to the second distance threshold, the whole body of the target object cannot be included in the picture shot by the camera, but the human face, the shoulder and neck region or the upper body region of the target object can be included, and then the living body detection strategy based on the head and neck form can be selected. And if the distance between the target object and the camera is smaller than the second distance threshold, only the face area of the target object is included in the picture shot by the camera, and the living body detection strategy based on the face morphology can be selected.
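Expressed as code, the selection could look like the sketch below. The 3 m and 1 m thresholds are placeholders, since the text only says the thresholds depend on the camera's detection range and focal length.

```python
def select_liveness_strategy(distance_m, first_threshold=3.0, second_threshold=1.0):
    """Pick a living body detection strategy from the distance; threshold values are hypothetical."""
    if distance_m >= first_threshold:
        return "body_morphology"        # whole body (or most of it) is visible
    if distance_m >= second_threshold:
        return "head_neck_morphology"   # face plus shoulder/neck or upper body
    return "face_morphology"            # only the face region is visible

print(select_liveness_strategy(2.0))    # -> head_neck_morphology
```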
In one possible implementation, the position of the third region is determined in the infrared image according to the position of the first region in the visible light image. In an example, the visible light image can include one or more target objects, that is, one or more first regions, and the position of each third region in the infrared image can be determined according to the positional relationship between the infrared camera and the visible light camera together with the position of the corresponding first region.
In an example, since the infrared camera acquires a far-infrared image that captures life rays rather than a near-infrared image suited to detecting image features, directly detecting the third area where the target object is located in the far-infrared image may have low accuracy. In addition, when there are multiple target objects, if the third areas were detected directly in the infrared image, it would be difficult to establish the correspondence between each third area and each first area in the visible light image. Therefore, the position of the third area corresponding to a first area can be obtained from the position of that first area in the visible light image and the positional relationship between the two cameras. Similarly, the position of the corresponding fourth area in the infrared image can be obtained from the position of the second area in the visible light image.
In one possible implementation, after the living body detection strategy is determined, living body detection processing can be performed on the third area in the infrared image using the selected strategy. Performing living body detection processing on the third area according to the living body detection strategy to obtain the living body detection result includes: according to the living body detection strategy, performing morphology detection processing on the third area to obtain a morphology detection result; when the morphology detection result is a living morphology, performing isothermal analysis processing on the face area in the third area to obtain an isothermal analysis result; determining a first weight of the morphology detection result and a second weight of the isothermal analysis result according to the living body detection strategy; and determining the living body detection result according to the first weight, the second weight, the morphology detection result, and the isothermal analysis result.
In one possible implementation, the morphology detection may be performed on the third area based on a live body detection strategy. In an example, in the body shape-based living body detection strategy, the third region may include a face region and a body region of the target object, the body feature amount is large, the feature extraction process may be performed by a neural network or the like to acquire the body feature amount, the shape detection process may be performed, and whether the body shape of the target object is a living body shape or not may be determined (for example, whether the shape of the target object is a living body shape or not may be determined from features such as the posture, the motion, and the shape of the target object).
In an example, in the head and neck configuration-based living body detection strategy, the third region may include a face and a shoulder and neck region of the target object, or an upper half body region, and feature extraction processing may be performed by a neural network or the like to obtain face and shoulder and neck region feature amounts to perform configuration detection processing, thereby determining whether the head and neck configuration of the target object is a living body configuration (for example, it may be determined whether the target object is a living body configuration according to features such as a posture, a motion, and a shape of the target object).
In an example, in the face shape-based living body detection strategy, the third area may include a face area of the target object, the human body feature quantity is small, the face feature quantity is large, the feature extraction process may be performed by a neural network or the like to obtain the face area feature quantity, the shape detection process may be performed, and whether the face shape of the target object is a living body shape or not may be determined (for example, whether the target object is a living body shape or not may be determined according to features such as an expression and a texture of the face of the target object).
In one possible implementation, the morphology detection result of the above-described morphology detection process may be a result in a fractional form. For example, when the score of the morphology detection is greater than or equal to the first score threshold, the morphology detection result may be considered as a living morphology. The form of the morphology detection results is not limited by the present disclosure.
In a possible implementation manner, in a case that the form detection result is a living form, isothermal analysis processing is performed on the face area in the third area, so as to obtain an isothermal analysis result. In an example, whether the face region is a living body may be determined through isothermal analysis, for example, if the real face of the target object has a uniform blood vessel distribution, and if the target object wears a prosthesis such as a mask, a headgear, etc., since the prosthesis has no blood vessel distribution, its isotherm is different from that of the real face, and whether the face region is a living body may be determined according to isothermal analysis. In an example, the isothermal analysis result may be a result in the form of a score, e.g., when the score of the isothermal analysis is greater than or equal to a second score threshold, the isothermal analysis result may be considered a living body. The present disclosure does not limit the form of the isothermal analysis results.
In one possible implementation manner, the proportion of the feature amount contributed by the face region differs among living body detection strategies (for example, in a living body detection strategy based on human body morphology, the third region includes both the face region and the human body region, so the proportion of the face region is small; whereas in a living body detection strategy based on face morphology, the third region includes only the face region, so the proportion of the face region is large). A first weight for the morphology detection result and a second weight for the isothermal analysis result can therefore be determined according to the living body detection strategy.
For example, in the living body detection strategy based on the human body morphology, since the proportion of the face region is small, the second weight based on the isothermal analysis result of the face region may be small, and the first weight of the morphology detection result may be large. For example, in the face morphology-based live body detection strategy, since the proportion of the face region is large, the second weight of the isothermal analysis result based on the face region may be large, and the first weight of the morphology detection result may be small. For example, in a head and neck morphology based biopsy strategy, the second weight and the first weight may be close or equal. The present disclosure does not limit the values of the first weight and the second weight.
In a possible implementation manner, the morphology detection result and the isothermal analysis result may be subjected to weighted summation processing according to the first weight and the second weight, respectively, to obtain the in vivo detection result. In an example, the in-vivo detection result may be a result in the form of a score, and when the in-vivo detection result is greater than or equal to a threshold in-vivo score, the target object may be considered as a living body, and when the in-vivo detection result is less than the threshold in-vivo score, the target object may be considered as a prosthesis, or a non-human body, without subsequent processing.
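A small sketch of this weighted fusion follows. The weight pairs and the live-score threshold are illustrative numbers chosen to match the qualitative rules above (morphology dominates at long range, isothermal analysis dominates at short range), not values from the disclosure.

```python
# (first weight for the morphology result, second weight for the isothermal
# result) per strategy; all numbers are hypothetical.
FUSION_WEIGHTS = {
    "body_morphology":      (0.7, 0.3),   # small face proportion at long range
    "head_neck_morphology": (0.5, 0.5),   # weights close or equal
    "face_morphology":      (0.3, 0.7),   # large face proportion at short range
}

def fuse_liveness(strategy, morphology_score, isothermal_score, live_threshold=0.6):
    w1, w2 = FUSION_WEIGHTS[strategy]
    score = w1 * morphology_score + w2 * isothermal_score
    return score >= live_threshold, score   # (is_living, fused score)

print(fuse_liveness("face_morphology", 0.55, 0.90))   # -> (True, 0.795)
```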
In this way, the in-vivo detection strategy can be determined by the size of the first region, and the in-vivo detection can be performed based on the in-vivo detection strategy, and the in-vivo detection accuracy in the far-infrared image can be improved by isothermal analysis.
In one possible implementation manner, in step S13, the target object may be subjected to an identification process in the case where the target object is a living body. In an example, in a medical and health place (e.g., a hospital, an operating room, etc.) or a public place during an epidemic situation, protective equipment (e.g., a mask, goggles, etc.) is generally required to be worn, but after the protective equipment is worn, a certain degree of shielding is caused on an area of the face, and therefore, it is required to monitor whether a target object wears a preset article such as a mask, and determine an identification manner according to a detection result.
Fig. 3 shows a schematic diagram of an identification process according to an embodiment of the present disclosure. The method further comprises the following steps: and under the condition that the living body detection result is a living body, detecting whether a preset target is included in the first area or not, and obtaining a first detection result, wherein the preset target includes an article for shielding a partial area of the face, and the first detection result is whether the target object wears a mask or other articles capable of shielding the partial area of the face. Further, the identification may be performed according to the first detection result, and the step S13 may include: and under the condition that the living body detection result is a living body, carrying out identity recognition processing on the first area according to the first detection result to obtain the identity information of the target object.
In one possible implementation, it may first be determined whether a preset target (e.g., an item such as a mask) is present on the face of the target object. Detecting whether a preset target is included in the first area and obtaining the first detection result includes: detecting the face region in the first region and determining a feature missing result of the face region; and in the case that the feature missing result is that a preset feature is missing, detecting whether a preset target is included in the face area to obtain the first detection result.
In one possible implementation, detection processing may be performed on the face area in the first area through a neural network. For example, a feature map of the face area may be obtained through a convolutional neural network. If the target object wears a mask or another article (a preset target) that blocks part of the face, features will be missing from the feature map of the face area: if the target object wears a mask, the mouth and nose features cannot be detected (the mouth and nose features are missing); if the target object wears sunglasses, the eye features cannot be detected (the eye features are missing).
In one possible implementation, if preset features (e.g., the mouth and nose features or the eye features) are missing, the target object may be wearing an article such as a mask or sunglasses, and further detection can be performed. For example, whether a preset target such as a mask is present on the face of the target object can be detected by a convolutional neural network, e.g., by detecting the shape, texture, and so on of the mask, so as to recognize articles such as masks and to exclude cases where the face is blocked by a hand or by another person.
In one possible implementation, if the preset features (e.g., the mouth and nose features or the eye features) are not missing (i.e., the facial features are complete), no preset target such as a mask is present. If the monitoring area is a medical or health facility such as a hospital, or a public place during an epidemic, not wearing a mask may violate regulations. The method further includes: generating second early warning information in the case that the first detection result indicates that the preset target does not exist. Further, the second warning information may be output through a warning device or a display to remind the target object to wear a mask. For example, the warning device may include a speaker, a warning lamp, or the like, and the second warning information may be output by sound, light, or the like, or information that the target object is not wearing a mask may be shown on the display. The present disclosure does not limit the output mode of the second early warning information.
In one possible implementation, the method of identification may also be determined according to the first detection result (i.e., whether the target object wears an item that can block a face). According to the first detection result, performing identity recognition processing on the first area to obtain identity information of the target object, wherein the identity information comprises one of the following items: under the condition that the first detection result is that the preset target does not exist, performing first identity recognition processing on a face area in the first area to obtain identity information of the target object; and under the condition that the first detection result is that the preset target exists, performing second identification processing on the face area in the first area to obtain the identity information of the target object, wherein the weight of the feature of the non-occluded area of the face in the second identification processing is greater than the weight of the feature of the corresponding area in the first identification processing.
In one possible implementation, the identification process may be performed using a neural network, wherein the neural network used for the first identification process is different from the neural network used for the second identification process. In an example, if the facial region of the target object is partially occluded, only the non-occluded region is available for the facial region for identification, and therefore, in the neural network training process, the weight occupied by the features of the non-occluded region is large, for example, the attention of the neural network is focused on the features of the non-occluded region, and identification is performed on the features of the non-occluded region.
In an example, if the face region of the target object is not occluded, the features of the face region can be used for identification processing, and the neural network can perform identification according to all the features of the face region.
For example, if the target object wears a mask, the neural network can only acquire the characteristics of the eyes and the eyebrows (the characteristics of the nose and mouth are missing), and the neural network can increase the weights of the characteristics of the eyes and the eyebrows in training so as to perform identity recognition according to the characteristics of the eyes and the eyebrows. If the target object does not wear a mask, the neural network can perform identification according to all the characteristics of the facial area.
For example, the neural network may determine a similarity between the features of the eye and the eyebrow and reference features in the database (reference features of the eye and the eyebrow), and determine, as the identity information of the target object, identity information corresponding to the reference features having the similarity greater than or equal to a similarity threshold. Similarly, the neural network may determine a similarity between the facial feature and a reference feature (facial reference feature) in the database, and determine, as the identity information of the target object, identity information corresponding to the reference feature having the similarity greater than or equal to a similarity threshold.
For example, if there is no reference feature in the database with a feature similarity greater than or equal to the similarity threshold (i.e., the target object does not match existing identity information in the database), the feature of the target object may be stored in the database and an identity (e.g., identity code, etc.) may be added to the target object. The method of identity recognition is not limited by this disclosure.
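A toy version of this matching step is sketched below, using cosine similarity over feature vectors. The database layout and the id_N naming scheme are invented for the example.

```python
import numpy as np

def identify(feature, database, sim_threshold=0.8):
    """Match a feature vector against reference features; register a new
    identity when nothing reaches the threshold, as described above.

    database maps identity -> reference feature vector (a hypothetical layout).
    """
    feature = feature / np.linalg.norm(feature)
    best_id, best_sim = None, -1.0
    for ident, ref in database.items():
        sim = float(feature @ (ref / np.linalg.norm(ref)))  # cosine similarity
        if sim > best_sim:
            best_id, best_sim = ident, sim
    if best_sim >= sim_threshold:
        return best_id
    new_id = f"id_{len(database)}"   # invented identity-code scheme
    database[new_id] = feature
    return new_id

db = {"alice": np.array([0.9, 0.1, 0.4]), "bob": np.array([0.1, 0.8, 0.6])}
print(identify(np.array([0.88, 0.12, 0.42]), db))   # -> alice
```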
By the method, the identity recognition method can be selected according to whether the preset target exists or not, and the accuracy of identity recognition can be improved.
In one possible implementation, the gender of the target object may also be identified. In an example, the average body temperatures of men and women differ, so the temperature threshold can be set according to gender; for example, the average body temperature of women is about 0.3 °C higher than that of men, so in temperature monitoring the threshold may be set to 37 °C for men and 37.3 °C for women. The method further includes: acquiring gender information of the target object; and determining the preset temperature threshold according to the gender information.
In an example, gender information of the target object may be determined according to facial features of the target object, for example, if the target object does not wear a mask, gender of the target object may be determined according to all features of the face of the target object, and if the target object wears a mask, gender of the target object may be determined according to eye and eyebrow features of the target object. Alternatively, if the facial features or the eye and eyebrow features of the target object match the reference features in the database, the gender of the target object may be determined from the gender recorded in the identity information corresponding to the reference features in the database. The method of determining gender is not limited by the present disclosure.
In one possible implementation, the preset temperature threshold may be determined according to the gender information, for example, if the average body temperature of the female is higher than the average body temperature of the male, the preset temperature threshold for the female may be higher than the preset temperature threshold for the male. The present disclosure does not limit the determination method of the preset temperature threshold.
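A one-function sketch of the gender-dependent threshold, using the example figures from the text (37 °C for men, 37.3 °C for women):

```python
def preset_temperature_threshold(gender, male_c=37.0, female_c=37.3):
    # Example values from the text; falling back to the lower male threshold
    # for an unknown gender is an assumption of this sketch.
    return female_c if gender == "female" else male_c

print(preset_temperature_threshold("female"))   # -> 37.3
```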
In a possible implementation manner, in step S13, a temperature detection process may be further performed on a fourth region (corresponding to the second region in which the preset portion is located in the visible light image) in which the preset portion (for example, the forehead) is located in the infrared image, so as to determine temperature information (for example, the body temperature) of the target object. For example, the pixel value of the pixel point in the infrared image may represent a color temperature value, and the color temperature value may be used to determine the temperature of the forehead area of the target object.
In a possible implementation manner, in step S14, in a case that the temperature information is greater than or equal to a preset temperature threshold, first early warning information is generated according to the identity information of the target object. For example, during an epidemic, a target object with fever symptoms is a suspected case and may need to be closely monitored. If the detected body temperature of the target object is greater than or equal to the preset temperature threshold, the target object is considered to have fever symptoms, and first early warning information can be generated according to the identity information of the target object. For example, the identity information and the body temperature of the target object can be displayed on a display so that the target object can be closely monitored, or the first early warning information can be played through a device such as a loudspeaker, prompting epidemic prevention personnel to pay close attention to the target object or to isolate the target object, etc. The output mode of the first early warning information is not limited in the present disclosure.
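Combining the two previous sketches, the generation of the first early warning information might look as follows; the returned dictionary layout is an assumption, since the disclosure leaves the output mode open:

```python
from typing import Optional

def first_warning(identity: str, temperature_c: float,
                  threshold_c: float) -> Optional[dict]:
    """Generate first early warning information from the target object's
    identity information when the temperature reaches the threshold."""
    if temperature_c >= threshold_c:
        return {
            "identity": identity,
            "temperature": temperature_c,
            "message": f"{identity}: suspected fever, {temperature_c:.1f} C",
        }
    return None  # below threshold: no warning
```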
In one possible implementation, the method further includes: and overlapping the position information of the first area or the third area, the temperature information of the target object and the identity information of the target object with the visible light image and/or the infrared image to obtain a detection image.
In an example, the visible light image and/or the infrared image may be displayed through the display, and the identity information and/or the temperature information of the target object may be displayed at a position where the target object is located in the visible light image and/or the infrared image. For example, the identity information and/or the temperature information of the target object may be superimposed on the first region in the visible light image where the target object is located, i.e., the identity information and/or the temperature information of the target object is displayed in the first region. Similarly, identity information and/or temperature information of the target object may be displayed in a third region of the infrared image. The present disclosure does not limit the display manner.
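As one possible realization of such superimposition (among many), an OpenCV sketch; the colors, font, and label layout are arbitrary choices:

```python
import cv2

def draw_detection(image, region, identity, temperature_c):
    """Superimpose the region box plus identity and temperature
    information onto a visible light or infrared frame (in place)."""
    x, y, w, h = region
    label = f"{identity}  {temperature_c:.1f}C"
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(image, label, (x, max(12, y - 8)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return image
```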
According to the monitoring method of the embodiments of the present disclosure, the living body detection strategy can be determined from the size of the first area, and living body detection can be performed based on that strategy; isothermal analysis can improve the accuracy of living body detection in the far infrared image; and the identity recognition method can be selected according to whether the preset target exists, improving the accuracy of identity recognition. Furthermore, temperature detection can be performed on the preset part, so that the body temperature of the target object in the image can be detected rapidly, improving body temperature detection efficiency and making the method suitable for places with high pedestrian traffic. In addition, the identity information of the target object can be recognized through the visible light image, which helps determine the identity information of suspected cases and improves epidemic prevention efficiency.
Fig. 4A and 4B are schematic diagrams illustrating an application of the monitoring method according to an embodiment of the present disclosure. As shown in Fig. 4A, the monitoring method may be used in a system for monitoring a monitored area. The monitoring system may include an infrared camera 1 for collecting infrared radiation to obtain an infrared image, a visible light camera 2 for obtaining a visible light image, and a processor 3 (e.g., a system on a chip, SoC). The system can also comprise a light supplementing unit 4 for use in environments with poor lighting conditions, a display 5 for displaying visible light images, infrared images and/or early warning information, a communication unit 6 for transmitting the early warning information, temperature information and other information, a warning unit 7 for outputting the early warning information, and an interaction unit 8 for inputting instructions, etc.
In one possible implementation, as shown in FIG. 4B, the infrared camera 1 and the visible light camera 2 may capture video frames of the monitored area, such as visible light images AF (AF1-AF25) and infrared images BF (BF1-BF25). Position deviation correction can be performed between the visible light images AF and the infrared images BF, that is, the position of the third area in the infrared image BF can be determined according to the position of the first area in the visible light image AF.
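The disclosure does not fix how the position deviation correction is computed; one common realization for a rigidly mounted camera pair is a pre-calibrated homography, as in the sketch below (the calibration itself is assumed to already exist):

```python
import numpy as np
import cv2

def visible_to_infrared_region(first_region, homography):
    """Map the first region (in the visible light image AF) to the third
    region (in the infrared image BF) through a pre-calibrated 3x3
    homography between the two cameras."""
    x, y, w, h = first_region
    corners = np.float32([[x, y], [x + w, y],
                          [x, y + h], [x + w, y + h]]).reshape(-1, 1, 2)
    mapped = cv2.perspectiveTransform(corners, homography).reshape(-1, 2)
    x0, y0 = mapped.min(axis=0)
    x1, y1 = mapped.max(axis=0)
    return int(x0), int(y0), int(x1 - x0), int(y1 - y0)
```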
In a possible implementation, the processor 3 may perform image quality detection on the visible light image AF, for example, detecting the texture and boundary definition (sharpness) of the visible light image, and delete visible light images with poor image quality together with the corresponding infrared images. Further, if there is no target object in a visible light image, for example, when there is no person in the monitored area, that visible light image and the corresponding infrared image may also be deleted.
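The disclosure names texture and boundary definition as quality cues without fixing a metric; a common stand-in is the variance of the Laplacian as a sharpness score, as sketched below (the threshold value is an assumption):

```python
import cv2

BLUR_THRESHOLD = 100.0  # hypothetical; depends on resolution and optics

def frame_is_usable(visible_bgr) -> bool:
    """Image-quality gate: low Laplacian variance indicates a blurred
    frame, which should be deleted together with the corresponding
    infrared frame."""
    gray = cv2.cvtColor(visible_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() >= BLUR_THRESHOLD
```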
In a possible implementation manner, the processor 3 may perform position detection processing on the visible light image AF to obtain a first area where the target object is located, and may also obtain a second area where the forehead of the target object is located. According to the positions of the first area and the second area in the visible light image, the position of a third area where the target object is located in the infrared image and the position of a fourth area where the forehead of the target object is located in the infrared image can be determined.
In one possible implementation, the processor 3 may determine the distance between the target object and the camera from the size information of the first area, and determine the living body detection strategy based on the distance. In an example, the living body detection strategy is determined to be a living body detection strategy based on the human body morphology in a case where the distance of the target object is greater than or equal to a first distance threshold. Morphology detection may be performed on the third region in the infrared image to obtain a morphology detection score, and isothermal analysis may be performed on the face area in the infrared image to obtain an isothermal analysis score. Further, according to the living body detection strategy based on the human body morphology, a larger first weight may be assigned to the morphology detection score and a smaller second weight to the isothermal analysis score; after weighted summation, the living body detection result can be obtained.
In an example, the living body detection strategy is determined to be a living body detection strategy based on the head and neck morphology in a case where the distance of the target object is less than the first distance threshold and greater than or equal to a second distance threshold. Morphology detection may be performed on the third region in the infrared image to obtain a morphology detection score, and isothermal analysis may be performed on the face area in the infrared image to obtain an isothermal analysis score. Further, according to the living body detection strategy based on the head and neck morphology, similar weights are assigned to the morphology detection score and the isothermal analysis score; after weighted summation, the living body detection result can be obtained.
In an example, the living body detection strategy is determined to be a living body detection strategy based on the face morphology in a case where the distance of the target object is less than the second distance threshold. Morphology detection may be performed on the third region in the infrared image to obtain a morphology detection score, and isothermal analysis may be performed on the face area in the infrared image to obtain an isothermal analysis score. Further, according to the living body detection strategy based on the face morphology, a smaller first weight may be assigned to the morphology detection score and a larger second weight to the isothermal analysis score; after weighted summation, the living body detection result can be obtained.
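Pulling the three cases together, the distance-dependent weighting could be sketched as follows; the distance thresholds, the concrete weights, and the decision threshold are illustrative, since the disclosure only fixes their relative ordering:

```python
FIRST_DISTANCE_THRESHOLD = 5.0   # metres, hypothetical calibration
SECOND_DISTANCE_THRESHOLD = 2.0  # metres, hypothetical calibration
LIVENESS_THRESHOLD = 0.5         # decision threshold on the fused score

# (morphology weight, isothermal weight) per strategy.
WEIGHTS = {
    "human_body": (0.7, 0.3),  # distance >= first threshold
    "head_neck":  (0.5, 0.5),  # second threshold <= distance < first
    "face":       (0.3, 0.7),  # distance < second threshold
}

def living_body_result(distance: float, morphology_score: float,
                       isothermal_score: float) -> bool:
    """Select the living body detection strategy from the distance and
    fuse the morphology and isothermal scores by weighted summation."""
    if distance >= FIRST_DISTANCE_THRESHOLD:
        strategy = "human_body"
    elif distance >= SECOND_DISTANCE_THRESHOLD:
        strategy = "head_neck"
    else:
        strategy = "face"
    w_morph, w_iso = WEIGHTS[strategy]
    fused = w_morph * morphology_score + w_iso * isothermal_score
    return fused >= LIVENESS_THRESHOLD
```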
In one possible implementation, the processor 3 may detect whether the target object wears a mask in a first region of the visible light image, may generate second warning information if the target object does not wear a mask, and may display the second warning information through the display 5 or play the second warning information through the warning unit 7.
In one possible implementation, the processor 3 may determine the identification method according to whether the target object wears a mask. For example, if the target object does not wear a mask, the identity information of the target object may be determined by comparing all facial features of the target object with the reference features in the database. If the target object wears a mask, the identity information of the target object can be determined by comparing the eye and eyebrow features of the target object with the reference features in the database. Further, the second warning information may also include the identity information of the target object; for example, "XXX (name) is not wearing a mask" may be displayed on the display 5.
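A sketch of this dispatch, reusing the hypothetical identify matcher from the earlier sketch; extract_full and extract_eyes stand in for whatever feature extractors a deployment actually uses:

```python
def identify_with_mask_awareness(face_crop, wears_mask: bool,
                                 extract_full, extract_eyes, database):
    """Select the features used for identification according to the mask
    detection result: full facial features when unmasked, eye and
    eyebrow features when a mask occludes the lower face."""
    feature = extract_eyes(face_crop) if wears_mask else extract_full(face_crop)
    return identify(feature, database)  # matcher sketched earlier
```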
In one possible implementation, the processor 3 may identify the gender of the target object, for example, the gender of the target object may be identified according to facial features of the target object, or the gender of the target object may be obtained according to identity information of the target object. Further, the preset temperature threshold of the male target and the preset temperature threshold of the female target may be respectively determined according to the gender of the target object. Alternatively, the temperature threshold may also be input through the interaction unit 8, and the setting manner of the temperature threshold is not limited in the present disclosure.
In one possible implementation, the temperature of the fourth region (i.e., the region where the forehead is located in the infrared image) may be monitored to obtain the body temperature of the target object. The processor 3 may generate the first warning information if the body temperature exceeds the preset temperature threshold, and display the first warning information through the display 5 or play it through the warning unit 7. Further, the identity information of the target object with fever, the body temperature, and the position information of the camera (i.e., the position where the target object with fever appears) can be transmitted to a background database or server through the communication unit 6, which facilitates tracking a suspected case and determining its activity track.
In a possible implementation manner, the identity information and the body temperature of the target object may be superimposed on the first region in the visible light image or the third region in the infrared image, and displayed on the display 5, so as to visually observe the temperature information and the identity information of each target object in the monitoring region, and the superimposed image may be stored.
In a possible implementation manner, the monitoring method can monitor temperature using visible light images and infrared images, reducing camera hardware cost and data processing pressure, and can also monitor whether each target object wears a mask. The method can be used in fields such as security monitoring and epidemic prevention monitoring in health and medical places. The present disclosure does not limit the field of application of the monitoring method.
Fig. 5 shows a block diagram of a monitoring system according to an embodiment of the present disclosure, as shown in fig. 5, the system comprising: an infrared image acquisition component 11, a visible light image acquisition component 12, a processing component 13,
the processing component 13 is configured to:
performing target area identification on a visible light image of a monitoring area acquired by a visible light image acquisition component 12 to obtain a first area where a target object is located in the visible light image and a second area where a preset part of the target object is located;
according to the size information of the first area and a third area corresponding to the first area in the infrared image acquired by the infrared image acquisition component 11, performing living body detection on the target object to obtain a living body detection result;
under the condition that the living body detection result is a living body, performing identity recognition processing on the first area to obtain identity information of the target object, and performing temperature detection processing on a fourth area corresponding to the second area in the infrared image to obtain temperature information of the target object;
and generating first early warning information according to the target object and the identity information thereof under the condition that the temperature information is greater than or equal to a preset temperature threshold value.
In one possible implementation, the processing component 13 may include a processor (e.g., the processor 3 in fig. 4A), the infrared image acquisition component 11 may include an infrared camera (e.g., the infrared camera 1 in fig. 4A), and the visible light image acquisition component 12 may include a visible light camera (e.g., the visible light camera 2 in fig. 4A).
In one possible implementation, the processing component is further configured to:
determining the distance between the target object and an image acquisition device for acquiring the visible light image according to the size information of the first area;
determining a living body detection strategy according to the distance;
determining the position of the third area in the infrared image according to the position of the first area in the visible light image;
and performing living body detection processing on each third area in the infrared image according to the living body detection strategy to obtain a living body detection result.
In one possible implementation, the processing component is further configured to:
determining the in-vivo detection strategy as an in-vivo detection strategy based on human body morphology if the distance is greater than or equal to a first distance threshold; or
Determining the in-vivo detection strategy as an in-vivo detection strategy based on a head and neck morphology if the distance is greater than or equal to a second distance threshold and less than the first distance threshold; or
Determining the liveness detection policy as a face morphology based liveness detection policy if the distance is less than the second distance threshold.
In one possible implementation, the processing component is further configured to:
performing morphology detection processing on the third area according to the living body detection strategy to obtain a morphology detection result;
when the form detection result is a living form, performing isothermal analysis processing on the face area in the third area to obtain an isothermal analysis result;
determining a first weight of a morphological detection result and a second weight of the isothermal analysis result according to the in-vivo detection strategy;
and determining the in-vivo detection result according to the first weight, the second weight, the morphological detection result and the isothermal analysis result.
In one possible implementation, the processing component is further configured to:
detecting whether a preset target is included in the first area or not when the living body detection result is a living body, and obtaining the first detection result, wherein the preset target includes an article for shielding a partial area of the face,
the processing component is further to:
and under the condition that the living body detection result is a living body, carrying out identity recognition processing on the first area according to the first detection result to obtain the identity information of the target object.
In one possible implementation, the processing component is further configured to:
detecting a face area in the first area, and determining a feature missing result of the face area;
and under the condition that the feature missing result is preset feature missing, detecting whether a preset target is included in the face area or not, and obtaining the first detection result.
In one possible implementation, the processing component is further configured to:
under the condition that the first detection result indicates that the preset target does not exist, performing first identity recognition processing on a face area in the first area to obtain identity information of the target object; or
And under the condition that the first detection result is that the preset target exists, performing second identification processing on the face area in the first area to obtain the identity information of the target object, wherein the weight of the feature of the non-occluded area of the face in the second identification processing is greater than the weight of the feature of the corresponding area in the first identification processing.
In one possible implementation, the processing component is further configured to:
and overlapping the position information of the first area or the third area, the temperature information of the target object and the identity information of the target object with the visible light image and/or the infrared image to obtain a detection image.
In one possible implementation, the processing component is further configured to:
acquiring gender information of the target object;
and determining the preset temperature threshold according to the gender information.
In one possible implementation, the processing component is further configured to:
and generating second early warning information under the condition that the first detection result indicates that the preset target does not exist.
It can be understood that the above method embodiments of the present disclosure may be combined with each other to form combined embodiments without departing from their principles and logic; due to space limitations, the details are not repeated in the present disclosure.
In addition, the present disclosure also provides a monitoring system, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any of the monitoring methods provided by the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding descriptions of the method portions, which are not repeated here.
It will be understood by those skilled in the art that, in the methods of the present disclosure, the order in which the steps are written does not imply a strict execution order or impose any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible inherent logic.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the methods described in the foregoing method embodiments; for specific implementation, reference may be made to the descriptions of the foregoing method embodiments, and for brevity, details are not described here again.
Embodiments of the present disclosure also provide a computer-readable storage medium, on which computer program instructions are stored, and when executed by a processor, the computer program instructions implement the above method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured as the above method.
The disclosed embodiments also provide a computer program product comprising computer readable code, which when run on a device, a processor in the device executes instructions for implementing the monitoring method provided in any of the above embodiments.
The embodiments of the present disclosure also provide another computer program product for storing computer readable instructions, which when executed cause a computer to perform the operations of the monitoring method provided in any of the above embodiments.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 6 is a block diagram illustrating an electronic device 800 in accordance with an example embodiment. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like terminal.
Referring to fig. 6, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800, the relative positioning of components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 7 is a block diagram illustrating an electronic device 1900 according to an example embodiment. For example, the electronic device 1900 may be provided as a server. Referring to fig. 7, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the methods described above.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the disclosure are implemented by personalizing an electronic circuit, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), with state information of computer-readable program instructions, which can execute the computer-readable program instructions.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK) or the like.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (15)

1. A method of monitoring, comprising:
carrying out target area identification on a visible light image of a monitoring area to obtain a first area where a target object is located in the visible light image and a second area where a preset part of the target object is located;
according to the size information of the first area and a third area corresponding to the first area in the infrared image of the monitoring area, performing living body detection on the target object to obtain a living body detection result;
under the condition that the living body detection result is a living body, performing identity recognition processing on the first area to obtain identity information of the target object, and performing temperature detection processing on a fourth area corresponding to the second area in the infrared image to obtain temperature information of the target object;
under the condition that the temperature information is greater than or equal to a preset temperature threshold value, generating first early warning information according to the target object and identity information thereof;
according to the size information of the first area and a third area corresponding to the first area in the infrared image, performing living body detection on the target object to obtain a living body detection result, including:
determining the distance between the target object and an image acquisition device for acquiring the visible light image according to the size information of the first area;
determining a living body detection strategy according to the distance;
determining the position of the third area in the infrared image according to the position of the first area in the visible light image;
according to the living body detection strategy, carrying out living body detection processing on each third area in the infrared image to obtain a living body detection result;
determining a liveness detection strategy comprising one of:
determining the in-vivo detection strategy as an in-vivo detection strategy based on human body morphology if the distance is greater than or equal to a first distance threshold;
determining the in-vivo detection strategy as an in-vivo detection strategy based on a head and neck morphology if the distance is greater than or equal to a second distance threshold and less than the first distance threshold;
determining the liveness detection policy as a face morphology based liveness detection policy if the distance is less than the second distance threshold.
2. The method according to claim 1, wherein performing a biopsy process on the third area according to the biopsy strategy to obtain the biopsy result comprises:
according to the living body detection strategy, carrying out morphological detection processing on the third area to obtain a morphological detection result;
when the form detection result is a living form, performing isothermal analysis processing on the face area in the third area to obtain an isothermal analysis result;
determining a first weight of a morphological detection result and a second weight of the isothermal analysis result according to the in-vivo detection strategy;
and determining the in-vivo detection result according to the first weight, the second weight, the morphological detection result and the isothermal analysis result.
3. The method of claim 1, further comprising,
detecting whether a preset target is included in the first area or not under the condition that the living body detection result is a living body, and obtaining a first detection result, wherein the preset target includes an article for shielding a partial area of the face,
when the living body detection result is a living body, the identity recognition processing is performed on the first area to obtain the identity information of the target object, and the method comprises the following steps:
and under the condition that the living body detection result is a living body, carrying out identity recognition processing on the first area according to the first detection result to obtain the identity information of the target object.
4. The method of claim 3, wherein detecting whether the first region includes a preset target and obtaining a first detection result comprises:
detecting a face region in the first region, and determining a feature missing result of the face region;
and under the condition that the feature missing result is preset feature missing, detecting whether a preset target is included in the face area or not, and obtaining the first detection result.
5. The method according to claim 3, wherein performing identification processing on the first area according to the first detection result to obtain the identity information of the target object includes one of:
under the condition that the first detection result indicates that the preset target does not exist, performing first identity recognition processing on a face area in the first area to obtain identity information of the target object;
and under the condition that the first detection result is that the preset target exists, performing second identification processing on the face area in the first area to obtain the identity information of the target object, wherein the weight of the feature of the non-occluded area of the face in the second identification processing is greater than the weight of the feature of the corresponding area in the first identification processing.
6. The method of claim 3, further comprising:
and generating second early warning information under the condition that the first detection result indicates that the preset target does not exist.
7. The method of claim 1, further comprising,
and overlapping the position information of the first area or the third area, the temperature information of the target object and the identity information of the target object with the visible light image and/or the infrared image to obtain a detection image.
8. A monitoring system, comprising: a visible light image acquisition component, an infrared image acquisition component and a processing component,
the processing component is to:
carrying out target area identification on a visible light image of a monitoring area acquired by a visible light image acquisition assembly to acquire a first area where a target object is located in the visible light image and a second area where a preset part of the target object is located;
according to the size information of the first area and a third area corresponding to the first area in the infrared image of the monitoring area acquired by the infrared image acquisition assembly, performing living body detection on the target object to acquire a living body detection result;
under the condition that the living body detection result is a living body, performing identity recognition processing on the first area to obtain identity information of the target object, and performing temperature detection processing on a fourth area corresponding to the second area in the infrared image to obtain temperature information of the target object;
under the condition that the temperature information is greater than or equal to a preset temperature threshold value, generating first early warning information according to the target object and identity information thereof;
the processing component is further to:
determining the distance between the target object and an image acquisition device for acquiring the visible light image according to the size information of the first area;
determining a living body detection strategy according to the distance;
determining the position of the third area in the infrared image according to the position of the first area in the visible light image;
according to the living body detection strategy, carrying out living body detection processing on each third area in the infrared image to obtain a living body detection result;
the processing component is further to:
determining the in-vivo detection strategy as an in-vivo detection strategy based on human body morphology if the distance is greater than or equal to a first distance threshold; or
Determining the in-vivo detection strategy as an in-vivo detection strategy based on a head and neck morphology if the distance is greater than or equal to a second distance threshold and less than the first distance threshold; or
Determining the liveness detection policy as a face morphology based liveness detection policy if the distance is less than the second distance threshold.
9. The system of claim 8, wherein the processing component is further configured to:
according to the living body detection strategy, carrying out morphological detection processing on the third area to obtain a morphological detection result;
when the form detection result is a living form, performing isothermal analysis processing on the face area in the third area to obtain an isothermal analysis result;
determining a first weight of a morphological detection result and a second weight of the isothermal analysis result according to the in-vivo detection strategy;
and determining the in-vivo detection result according to the first weight, the second weight, the morphological detection result and the isothermal analysis result.
10. The system of claim 8, wherein the processing component is further configured to:
detecting whether a preset target is included in the first area or not under the condition that the living body detection result is a living body, and obtaining a first detection result, wherein the preset target includes an article for shielding a partial area of the face,
the processing component is to:
and under the condition that the living body detection result is a living body, carrying out identity recognition processing on the first area according to the first detection result to obtain the identity information of the target object.
11. The system of claim 10, wherein the processing component is further configured to:
detecting a face region in the first region, and determining a feature missing result of the face region;
and under the condition that the feature missing result is preset feature missing, detecting whether a preset target is included in the face area or not, and obtaining the first detection result.
12. The system of claim 10, wherein the processing component is further configured to:
under the condition that the first detection result is that the preset target does not exist, performing first identity recognition processing on a face area in the first area to obtain identity information of the target object; or
And under the condition that the first detection result is that the preset target exists, performing second identification processing on the face area in the first area to obtain the identity information of the target object, wherein the weight of the feature of the non-occluded area of the face in the second identification processing is greater than the weight of the feature of the corresponding area in the first identification processing.
13. The system of claim 8, wherein the processing component is further configured to:
and overlapping the position information of the first area or the third area, the temperature information of the target object and the identity information of the target object with the visible light image and/or the infrared image to obtain a detection image.
14. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to: performing the method of any one of claims 1 to 7.
15. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 7.
CN202010177537.0A 2020-03-13 2020-03-13 Monitoring method and system, electronic device and storage medium Active CN111414831B (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN202010177537.0A CN111414831B (en) 2020-03-13 2020-03-13 Monitoring method and system, electronic device and storage medium
JP2021536787A JP2022526207A (en) 2020-03-13 2020-10-27 Monitoring methods and systems, electronic devices and storage media
PCT/CN2020/124151 WO2021179624A1 (en) 2020-03-13 2020-10-27 Monitoring method and system, electronic device, and storage medium
SG11202106842P SG11202106842PA (en) 2020-03-13 2020-10-27 Monitoring method and system, electronic equipment, and storage medium
TW110100321A TWI768641B (en) 2020-03-13 2021-01-05 Monitoring method, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010177537.0A CN111414831B (en) 2020-03-13 2020-03-13 Monitoring method and system, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN111414831A CN111414831A (en) 2020-07-14
CN111414831B true CN111414831B (en) 2022-08-12

Family

ID=71493042

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010177537.0A Active CN111414831B (en) 2020-03-13 2020-03-13 Monitoring method and system, electronic device and storage medium

Country Status (5)

Country Link
JP (1) JP2022526207A (en)
CN (1) CN111414831B (en)
SG (1) SG11202106842PA (en)
TW (1) TWI768641B (en)
WO (1) WO2021179624A1 (en)

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111414831B (en) * 2020-03-13 2022-08-12 深圳市商汤科技有限公司 Monitoring method and system, electronic device and storage medium
CN112057074A (en) * 2020-07-21 2020-12-11 北京迈格威科技有限公司 Respiration rate measuring method, respiration rate measuring device, electronic equipment and computer storage medium
CN111985377A (en) * 2020-08-13 2020-11-24 深圳市商汤科技有限公司 Temperature measurement method and device, electronic equipment and storage medium
CN112131976B (en) * 2020-09-09 2022-09-16 厦门市美亚柏科信息股份有限公司 Self-adaptive portrait temperature matching and mask recognition method and device
CN112215113A (en) * 2020-09-30 2021-01-12 张成林 Face recognition method and device
CN112232186B (en) * 2020-10-14 2024-02-27 盈合(深圳)机器人与自动化科技有限公司 Epidemic prevention monitoring method and system
CN114445869A (en) * 2020-10-21 2022-05-06 中企网络通信技术有限公司 Temperature monitoring system and related operations
CN112287798A (en) * 2020-10-23 2021-01-29 深圳市商汤科技有限公司 Temperature measuring method and device, electronic equipment and storage medium
CN112525355A (en) * 2020-12-17 2021-03-19 杭州海康威视数字技术股份有限公司 Image processing method, device and equipment
CN112750278A (en) * 2021-01-18 2021-05-04 上海燊义环保科技有限公司 Full-intelligent network nursing system
CN112883856B (en) * 2021-02-05 2024-03-29 浙江华感科技有限公司 Monitoring method, monitoring device, electronic equipment and storage medium
CN113158877A (en) * 2021-04-16 2021-07-23 上海云从企业发展有限公司 Imaging deviation analysis and biopsy method, imaging deviation analysis and biopsy device, and computer storage medium
CN113496564A (en) * 2021-06-02 2021-10-12 山西三友和智慧信息技术股份有限公司 Park artificial intelligence management and control system
CN113420629B (en) * 2021-06-17 2023-04-28 浙江大华技术股份有限公司 Image processing method, device, equipment and medium
CN113420667B (en) * 2021-06-23 2022-08-02 工银科技有限公司 Face living body detection method, device, equipment and medium
CN113432720A (en) * 2021-06-25 2021-09-24 深圳市迈斯泰克电子有限公司 Temperature detection method and device based on human body recognition and temperature detection instrument
CN113687370A (en) * 2021-08-05 2021-11-23 上海炬佑智能科技有限公司 Detection method, device, electronic device and storage medium
CN113989695B (en) * 2021-09-18 2022-05-20 北京远度互联科技有限公司 Target tracking method and device, electronic equipment and storage medium
CN114166358B (en) * 2021-11-19 2024-04-16 苏州超行星创业投资有限公司 Robot inspection method, system, equipment and storage medium for epidemic prevention inspection
CN114427915A (en) * 2022-01-21 2022-05-03 深圳市商汤科技有限公司 Temperature control method, temperature control device, storage medium and electronic equipment
CN114742974A (en) * 2022-04-26 2022-07-12 北京市商汤科技开发有限公司 Player determination method and device, electronic equipment and storage medium
CN114894337B (en) * 2022-07-11 2022-09-27 深圳市大树人工智能科技有限公司 Temperature measurement method and device for outdoor face recognition
CN115471984B (en) * 2022-07-29 2023-09-15 青岛海尔科技有限公司 Alarm event execution method and device, storage medium and electronic device
CN116849613B (en) * 2023-07-12 2024-07-26 北京鹰之眼智能健康科技有限公司 A trigeminal nerve function status monitoring system
CN116628560A (en) * 2023-07-24 2023-08-22 四川互慧软件有限公司 Method and device for identifying snake damage case data based on clustering algorithm and electronic equipment
CN117297550B (en) * 2023-10-30 2024-05-03 北京鹰之眼智能健康科技有限公司 Information Processing System

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105912908A (en) * 2016-04-14 2016-08-31 苏州优化智能科技有限公司 Infrared-based real person living body identity verification method
CN108288028A (en) * 2017-12-29 2018-07-17 佛山市幻云科技有限公司 Campus fever monitoring method, device and server
CN110411570A (en) * 2019-06-28 2019-11-05 武汉高德智感科技有限公司 Infrared human body temperature screening method based on human testing and human body tracking technology
CN110647955A (en) * 2018-06-26 2020-01-03 义隆电子股份有限公司 Authentication method

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005100193A (en) * 2003-09-26 2005-04-14 King Tsushin Kogyo Kk Invasion monitoring device
CN101793562B (en) * 2010-01-29 2013-04-24 中山大学 Face detection and tracking algorithm of infrared thermal image sequence
JP5339476B2 (en) * 2011-05-09 2013-11-13 九州日本電気ソフトウェア株式会社 Image processing system, fever tracking method, image processing apparatus, control method thereof, and control program
CN102622588B (en) * 2012-03-08 2013-10-09 无锡中科奥森科技有限公司 Double verification face anti-counterfeiting method and device
CN102855496B (en) * 2012-08-24 2016-05-25 苏州大学 Block face authentication method and system
CN106372601B (en) * 2016-08-31 2020-12-22 上海依图信息技术有限公司 Living body detection method and device based on infrared visible binocular images
US9693695B1 (en) * 2016-09-23 2017-07-04 International Business Machines Corporation Detecting oral temperature using thermal camera
CN107595254B (en) * 2017-10-17 2021-02-26 黄晶 Infrared health monitoring method and system
CN108710841B (en) * 2018-05-11 2021-06-15 杭州软库科技有限公司 Human face living body detection device and method based on MEMs infrared array sensor
CN111414831B (en) * 2020-03-13 2022-08-12 深圳市商汤科技有限公司 Monitoring method and system, electronic device and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105912908A (en) * 2016-04-14 2016-08-31 苏州优化智能科技有限公司 Infrared-based real person living body identity verification method
CN108288028A (en) * 2017-12-29 2018-07-17 佛山市幻云科技有限公司 Campus fever monitoring method, device and server
CN110647955A (en) * 2018-06-26 2020-01-03 义隆电子股份有限公司 Authentication method
CN110411570A (en) * 2019-06-28 2019-11-05 武汉高德智感科技有限公司 Infrared human body temperature screening method based on human testing and human body tracking technology

Also Published As

Publication number Publication date
CN111414831A (en) 2020-07-14
TW202134940A (en) 2021-09-16
JP2022526207A (en) 2022-05-24
TWI768641B (en) 2022-06-21
WO2021179624A1 (en) 2021-09-16
SG11202106842PA (en) 2021-10-28

Similar Documents

Publication Publication Date Title
CN111414831B (en) Monitoring method and system, electronic device and storage medium
WO2020062969A1 (en) Action recognition method and device, and driver state analysis method and device
EP4011274B1 (en) Eye tracking using eyeball center position
KR102296396B1 (en) Apparatus and method for improving accuracy of contactless thermometer module
US8973149B2 (en) Detection of and privacy preserving response to observation of display screen
CN104699250B (en) Display control method and device, electronic equipment
US20180081427A1 (en) Eye and Head Tracking
CN110287671B (en) Verification method and device, electronic equipment and storage medium
WO2020020022A1 (en) Method for visual recognition and system thereof
US9924090B2 (en) Method and device for acquiring iris image
CN110705365A (en) Human body key point detection method and device, electronic equipment and storage medium
WO2023005403A1 (en) Respiratory rate detection method and apparatus, and storage medium and electronic device
CN114140616A (en) Heart rate detection method and device, electronic equipment and storage medium
WO2020015145A1 (en) Method and electronic device for detecting open and closed states of eyes
HK40026866A (en) Monitoring method and system, electronic equipment and storage medium
JP5242827B2 (en) Face image processing apparatus, face image processing method, electronic still camera, digital image processing apparatus, and digital image processing method
EP4465112A1 (en) Anti-snooping prompt method and apparatus, and electronic device and storage medium
US20220225936A1 (en) Contactless vitals using smart glasses
CN114596096A (en) Multi-face payment method and device, electronic equipment and storage medium
CN117130468A (en) Eye movement tracking method, device, equipment and storage medium
JP5017476B2 (en) Facial image processing apparatus, facial image processing method, and electronic still camera
CN115766927A (en) Lie detection method, device, mobile terminal and storage medium
CN116797984A (en) Information processing method and device, electronic equipment and storage medium
JP2009169975A (en) Facial image processing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40026866

Country of ref document: HK

GR01 Patent grant