
CN118154829B - Imaging system integrating cat eye function - Google Patents


Info

Publication number
CN118154829B
CN118154829B (application number CN202410572068.0A; earlier publication CN118154829A)
Authority
CN
China
Prior art keywords
image
light
imaging
module
floodlight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410572068.0A
Other languages
Chinese (zh)
Other versions
CN118154829A (en)
Inventor
张思曼
郑周胜
李安
陈驰
张莉萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Angstrong Technology Co ltd
Original Assignee
Shenzhen Angstrong Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Angstrong Technology Co ltd filed Critical Shenzhen Angstrong Technology Co ltd
Priority to CN202410572068.0A priority Critical patent/CN118154829B/en
Publication of CN118154829A publication Critical patent/CN118154829A/en
Application granted granted Critical
Publication of CN118154829B publication Critical patent/CN118154829B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 Fingerprints or palmprints
    • G06V40/13 Sensors therefor
    • G06V40/1324 Sensors therefor by using geometrical optics, e.g. using prisms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Optics & Photonics (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

An embodiment of the invention discloses an imaging system integrating a cat eye (door peephole) function. In the imaging system, in a cat eye imaging mode, a main control module controls a receiving camera to collect ambient light and obtain an ambient image. In a biometric imaging mode, it controls a light source module to project floodlight and/or structured light and controls the receiving camera to collect at least the reflected floodlight, optionally also the reflected structured light, correspondingly obtaining a floodlight image and/or a structured light image. The field of view at which the receiving camera collects the ambient image is greater than or equal to the fields of view at which it collects the structured light image and the floodlight image, so that the main control module can display the environment according to the ambient image and perform biometric recognition according to the structured light image and the floodlight image. The embodiment not only enriches the functions of the imaging system, giving it the cat eye function together with a face-scanning or palm-scanning function, but also reduces the number of optical elements, lowering material cost and assembly difficulty and making compact integration possible.

Description

Imaging system integrating cat eye function
Technical Field
The invention relates to the field of biometric recognition, and in particular to an imaging system integrating a cat eye function.
Background
In existing biometric recognition technology, imaging systems integrating face-scanning and palm-scanning functions are widely used in fields such as mobile payment and smart door locks. As application requirements develop, imaging systems that further integrate a cat eye function on top of face-scanning and/or palm-scanning functions are becoming a mainstream solution. An imaging system integrating the cat eye function can perform identity verification through the cat eye as well as real-time or remote monitoring of the scene, and has broad application prospects.
Fig. 1 is a system architecture diagram of an imaging system integrating the cat eye function in the related art. As shown in Fig. 1, in a typical prior-art system of this kind, the imaging module includes a floodlight illumination module 101', a floodlight image acquisition module 103', and a cat eye module 105', and may also include a structured light projection module 102' and a 3D structured light image acquisition module 104'. The floodlight image acquisition module 103', the 3D structured light image acquisition module 104', and the cat eye module 105' are typically three independent units, each with its own imaging chip, imaging lens, and optical filter, serving floodlight image acquisition, structured light image acquisition, and the cat eye function, respectively. The floodlight illumination module 101' and the structured light projection module 102' are likewise typically two separate units, each with its own light source and beam-shaping components. Such an imaging system has high structural complexity, many optical elements, high material cost, and high assembly difficulty, and is not conducive to compact integration.
Disclosure of Invention
The invention provides an imaging system integrating a cat eye function, which integrates the cat eye, face-scanning, and palm-scanning functions into the same camera, enriching the functions of the imaging system, reducing its optical elements, and realizing a miniaturized, integrated imaging system with a cat eye function.
The embodiment of the invention provides an imaging system integrating a cat eye function, which comprises an imaging module and a main control module: the imaging module comprises a light source module and a receiving camera;
The light source module is used for at least projecting floodlight;
The receiving camera is used for collecting and imaging reflected light of the target object after floodlight is projected to the target object or reflected light of the target object under ambient light;
The main control module is respectively and electrically connected with the light source module and the receiving camera;
The imaging system has a cat eye imaging mode and a biometric imaging mode;
In the cat eye imaging mode, the main control module is used for controlling the receiving camera to collect reflected light of the target object under ambient light to obtain an ambient image, or for controlling the light source module to project floodlight and the receiving camera to collect the reflected light of the floodlight projected onto the target object to obtain the ambient image;
In the biometric imaging mode, the main control module is used for controlling the light source module to project at least floodlight, and the receiving camera to collect at least the reflected light of the floodlight projected onto the target object, correspondingly obtaining a floodlight image; the ambient image and the floodlight image satisfy the condition FOV1 ≥ FOV2, where FOV1 is the field of view at which the receiving camera collects the ambient image and FOV2 is the field of view at which it collects the floodlight image;
the main control module is also used for controlling the output of the environment image for display;
The main control module is also used for performing biometric recognition according to the floodlight image.
In the technical scheme of the embodiment of the invention, the imaging system integrating the cat eye function includes an imaging module and a main control module, the imaging module including a light source module and a receiving camera. In the cat eye imaging mode, the main control module controls the receiving camera to collect reflected light of the target object under ambient light to obtain an ambient image, or controls the light source module to project floodlight and the receiving camera to collect the floodlight reflected from the target object to obtain the ambient image. In the biometric imaging mode, it controls the light source module to project floodlight and the receiving camera to collect at least the floodlight reflected from the target object, correspondingly obtaining a floodlight image. The ambient image and the floodlight image satisfy the condition FOV1 ≥ FOV2, where FOV1 is the field of view at which the receiving camera collects the ambient image and FOV2 is the field of view at which it collects the floodlight image. The main control module also controls output of the ambient image for display and performs biometric recognition according to the floodlight image.
In summary, the imaging system can realize one or both of the face-scanning and palm-scanning functions while integrating the cat eye function. The image acquisition module of the imaging module and the cat eye module are combined into one unit, and the cat eye, face-scanning, and/or palm-scanning functions are integrated in the same camera. This enriches the functions of the imaging system, reduces the number of optical elements, lowers material cost and assembly difficulty, and makes compact integration possible.
Drawings
FIG. 1 is a system architecture diagram of a related art cat eye function integrated imaging system;
FIG. 2 is a system architecture diagram of an integrated cat eye function imaging system according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an imaging module according to an embodiment of the present invention;
FIG. 4 is a block diagram of another imaging module according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a structured light projection module according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of another embodiment of a structured light projection module;
fig. 7 is a schematic structural diagram of a light source module according to an embodiment of the present invention;
fig. 8 and 9 are schematic structural views of two other light source modules according to an embodiment of the present invention;
Fig. 10 is a schematic structural diagram of a receiving camera according to an embodiment of the present invention;
FIG. 11 is a graph of an optical distortion design for an imaging lens according to an embodiment of the present invention;
FIG. 12 is a schematic view of an imaging lens according to an embodiment of the present invention;
FIG. 13 is a graph of an optical distortion design of the imaging lens of FIG. 12;
FIGS. 14-16 are graphs of three transmission spectra provided by embodiments of the present invention;
FIG. 17 is a graph of photoelectric conversion efficiency of an RGB sensor according to an embodiment of the present invention;
FIG. 18 is a graph of photoelectric conversion efficiency of an RGB-IR sensor according to an embodiment of the present invention;
fig. 19 and 20 are schematic structural views of two further receiving cameras according to an embodiment of the present invention;
FIGS. 21-25 are schematic side or cross-sectional views of various imaging modules provided in accordance with embodiments of the present invention;
FIG. 26 is a system architecture diagram of another cat eye function integrated imaging system provided in accordance with an embodiment of the present invention;
FIG. 27 is a flowchart illustrating an exemplary embodiment of an integrated cat eye function imaging system;
Fig. 28 and 29 are schematic diagrams illustrating the operation of two cat eye function integrated imaging systems according to embodiments of the present invention;
in the figure:
The system comprises a 101' -floodlight illumination module, a 102' -structured light projection module, a 103' -floodlight image acquisition module, a 104' -3D structured light image acquisition module and a 105' -cat eye module;
The system comprises a 101-imaging module, a 102-main control module and a 103-external equipment module;
1010-light source module, 1011-floodlight illumination module, 1012-structured light projection module, 1013-receiving camera, 1014-main board, 10141-fixing hole, 1015-ambient light sensor, 1016-proximity light sensor;
10111-a first light source, 10112-a second light source, 10112 (a) -a first sub-light source, 10112 (b) -a second sub-light source, 10112 (c) -a third sub-light source;
10121-a third light source, 10122-a collimating mirror, 10123-a diffraction optical element, 10124-a lens barrel, 101241-a first mounting groove, 101242-a second mounting groove, 101243-a first metal terminal, 101244-a second metal terminal, 10125-a collimating diffraction integrated optical element and 10120-a raising block;
10131-imaging chip, 10132-imaging lens, 10133-filter component, 10134-bracket;
1021-a living body detection unit, 1022-a depth processing unit, 1023-an authentication unit, 1024-an image detection unit, 1025-an image preprocessing unit.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. It should be noted that, the terms "upper", "lower", "left", "right", and the like in the embodiments of the present invention are described in terms of the angles shown in the drawings, and should not be construed as limiting the embodiments of the present invention. In addition, in the context, it will also be understood that when an element is referred to as being formed "on" or "under" another element, it can be directly formed "on" or "under" the other element or be indirectly formed "on" or "under" the other element through intervening elements. The terms "first," "second," and the like, are used for descriptive purposes only and not for any order, quantity, or importance, but rather are used to distinguish between different components. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
The term "comprising" and variants thereof as used herein is intended to be open ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment".
It should be noted that the terms "first," "second," and the like herein are merely used for distinguishing between corresponding contents and not for defining a sequential or interdependent relationship.
It should be noted that the modifiers "one" and "a plurality" in this disclosure are illustrative rather than limiting, and those skilled in the art will appreciate that they should be construed as "one or more" unless the context clearly indicates otherwise.
Fig. 2 is a system architecture diagram of an imaging system integrated with a cat eye function according to an embodiment of the present invention, and referring to fig. 2, the imaging system integrated with a cat eye function includes an imaging module 101 and a main control module 102: the imaging module 101 includes a light source module 1010 and a receiving camera 1013; the light source module 1010 is used for at least projecting floodlight; the receiving camera 1013 is configured to collect and image reflected light of the target object after flood light is projected onto the target object or reflected light of the target object under ambient light.
The main control module 102 is electrically connected with the light source module 1010 and the receiving camera 1013, respectively, and has a cat eye imaging mode and a biometric imaging mode. In the cat eye imaging mode, the main control module 102 is configured to control the receiving camera 1013 to collect reflected light of the target object under ambient light to obtain an ambient image, or to control the light source module 1010 to project floodlight and the receiving camera 1013 to collect the floodlight reflected from the target object to obtain the ambient image. In the biometric imaging mode, the main control module 102 is configured to control the light source module to project at least floodlight and the receiving camera 1013 to collect at least the floodlight reflected from the target object, correspondingly obtaining a floodlight image. The ambient image and the floodlight image satisfy FOV1 ≥ FOV2, where FOV1 is the field of view at which the receiving camera 1013 collects the ambient image and FOV2 is the field of view at which it collects the floodlight image.
The main control module 102 is also used for controlling the output of the environment image for display; the main control module 102 is further configured to perform biometric identification according to the floodlight image.
First, the name of the imaging system indicates that it can realize the cat eye function and can also realize a biometric recognition function through imaging. In embodiments of the invention, the biometric function may specifically include one or both of a face-scanning function and a palm-scanning function. The main differences between the cat eye function and the biometric function are these: the cat eye function collects and displays an ambient image, while the biometric function collects and recognizes biometric information in a floodlight image. Furthermore, the cat eye function needs a larger field of view, since it must present the user with a wide view of the environment, whereas biometric recognition such as face scanning or palm scanning mainly needs to capture a face or palm image and therefore requires a relatively smaller field of view. Accordingly, the cat eye imaging mode may be understood as the imaging system performing the cat eye function, and the biometric imaging mode as the system performing one or both of the face-scanning and palm-scanning functions. In the embodiment of the present invention, the main control module 102 has a cat eye imaging mode and a biometric imaging mode; in these two modes it controls the light source module and the receiving camera 1013 to collect images under different lighting states. In essence, the same light source module and the same receiving camera 1013 collect images at two viewing angles, so that one or both of the face-scanning and palm-scanning functions are realized alongside the cat eye function.
Specifically, in the cat eye imaging mode, the receiving camera 1013 is controlled by the main control module 102 and collects the ambient image for cat eye display; in the biometric imaging mode, it is controlled by the main control module 102 and collects at least floodlight images, through which face recognition or palmprint recognition is performed. Accordingly, the field of view FOV1 of the ambient image is generally larger than, or the same as, the field of view FOV2 of the floodlight image, i.e., FOV1 ≥ FOV2.
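The mode-dependent control logic described above can be sketched in code. This is a minimal illustrative sketch only: the names (`ImagingMode`, `plan_capture`) and the concrete field-of-view and threshold values are assumptions for illustration; the patent only requires FOV1 ≥ FOV2.

```python
from enum import Enum, auto

class ImagingMode(Enum):
    CAT_EYE = auto()    # peephole/monitoring: wide field of view
    BIOMETRIC = auto()  # face or palm scanning: narrower field of view

# Illustrative field-of-view values in degrees (hypothetical; the patent
# only states the relation FOV1 >= FOV2).
FOV1_AMBIENT = 120.0  # cat eye ambient image
FOV2_FLOOD = 75.0     # biometric floodlight image

def plan_capture(mode: ImagingMode, ambient_lux: float,
                 lux_threshold: float = 50.0):
    """Return (flood_on, fov_deg) for one capture, mirroring the
    main control module's behavior as described in the embodiment."""
    if mode is ImagingMode.CAT_EYE:
        # Flood fill light only when the scene is too dark.
        flood_on = ambient_lux < lux_threshold
        return flood_on, FOV1_AMBIENT
    # Biometric mode always projects floodlight.
    return True, FOV2_FLOOD

# The FOV1 >= FOV2 condition from the claims.
assert FOV1_AMBIENT >= FOV2_FLOOD
```

A single camera thus serves both modes; only the illumination and the effective field of view change per capture.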
With the above technical scheme, one or both of the face-scanning and palm-scanning functions can be realized while the cat eye function is integrated. The image acquisition module and the cat eye module of the imaging module are combined into one unit, and the cat eye, face-scanning, and/or palm-scanning functions are integrated in the same camera. This enriches the functions of the imaging system, reduces the number of optical elements, lowers material cost and assembly difficulty, and makes compact integration possible.
With continued reference to fig. 2, the construction of the imaging system integrating the cat eye function is described in more detail. Optionally, the imaging system further includes a main board; the imaging module 101 and the main control module 102 are disposed on the same side or different sides of the main board and are electrically connected to it. Concentrating the imaging system on a single main board improves the integration level of the system, reduces its volume, and facilitates miniaturization of the whole device; the main board carries all the modules. In addition, with continued reference to fig. 2, the imaging system may further include an external device module 103, electrically connected to the main control module 102 through the main board and used for responding to, displaying, and outputting signals and information.
The imaging module 101 projects infrared floodlight under the cat eye function, or projects infrared or visible floodlight toward the target object under the face-scanning and palm-scanning functions. After reflection by the target object, the cat eye function can acquire an original infrared floodlight image formed by the reflected infrared floodlight, or an original visible-light image formed by the target object's reflection of ambient light, i.e., the ambient image. The face-scanning or palm-scanning function can acquire an original infrared floodlight image formed by the reflected infrared floodlight, or an original visible-light image formed by the reflected visible floodlight, i.e., the floodlight image. The acquired imaging information and images are then transmitted to the main control module 102 for subsequent processing. The main control module 102 controls the imaging module 101, performs subsequent processing on the imaging information and images acquired under the cat eye, face-scanning, and palm-scanning functions, then controls modules such as the external device to respond, and transmits the information and images to an upper computer, a user terminal, the cloud, or the outside.
It should be noted that in the above scheme the light source module projects floodlight, so in the biometric imaging mode, that is, under the face-scanning or palm-scanning function, the image collected by the imaging module 101 is a two-dimensional floodlight image, and the subsequent processing by the main control module 102 uses this two-dimensional image for palmprint or face recognition; the specific recognition principles and working processes are described below. To improve the reliability and security of recognition, embodiments of the invention also provide a scheme in which the light source module projects structured light. When structured light is projected under the face-scanning or palm-scanning function and reflected from the face or palm, a three-dimensional structured light image carrying depth information of the face or palm can be acquired. The structured light image can then be converted into a depth image for liveness judgment, improving the recognition accuracy of the face-scanning and palm-scanning functions and effectively preventing the recognition system from being deceived or attacked.
Based on this, with continued reference to fig. 2, in an embodiment the light source module is optionally also used to project structured light, and the receiving camera 1013 is further used to collect and image the reflected light after the structured light is projected onto the target object. In the biometric imaging mode, the main control module 102 is further used to control the light source module to project the structured light and the receiving camera 1013 to collect its reflection from the target object, correspondingly obtaining a structured light image. The ambient image and the structured light image satisfy the condition FOV1 ≥ FOV3, where FOV3 is the field of view at which the receiving camera 1013 collects the structured light image. The main control module 102 is further used to perform biometric recognition according to the structured light image and the floodlight image.
Again, since the biometric imaging mode, exemplified by face scanning or palm scanning, mainly requires image acquisition of a local area such as the face or palm, it needs a relatively small field of view, whereas the cat eye imaging mode must provide the user with a wide view of the environment and therefore uses a relatively larger one. The field of view FOV3 of the structured light image acquired in the biometric imaging mode is accordingly relatively small, less than or equal to the field of view FOV1 of the ambient image in the cat eye imaging mode.
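For background on how a structured light image yields the depth information used for liveness judgment: structured-light depth cameras generally recover depth by triangulation between the projector and the camera. The patent does not specify this math, so the following is an illustrative sketch of the standard relation only, with hypothetical parameter values.

```python
def depth_from_disparity(disparity_px: float, baseline_mm: float,
                         focal_px: float) -> float:
    """Standard triangulation used by structured-light depth cameras:
    depth = focal_length * baseline / disparity.
    A pattern feature shifted by `disparity_px` pixels between its
    expected and observed positions maps to a depth in millimetres."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

# Hypothetical numbers: a 2 px shift with a 20 mm projector-camera
# baseline and a 600 px focal length gives a depth of 6000 mm.
```

Per-pixel depths computed this way form the depth image against which a flat photograph or screen (near-constant depth) can be rejected during liveness detection.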
Fig. 3 is a schematic structural diagram of an imaging module according to an embodiment of the present invention. Referring to figs. 2 and 3, the imaging module further includes a main board 1014; the main board 1014 includes fixing holes 10141, through which it can be fixed on a structural member. In order to realize the cat eye, face-scanning, and/or palm-scanning functions of the imaging system with the same imaging module 101, the invention provides the following schemes for the light source module 1010 and the receiving camera 1013 in the imaging module 101.
First, optionally, the light source module 1010 includes a floodlight illumination module 1011. The floodlight illumination module 1011 includes a first light source for emitting light in an infrared band; alternatively, it includes both this first light source and a second light source for emitting light in at least one visible band and/or at least one infrared band, where the infrared band of the second light source differs from that of the first light source.
Embodiments of the floodlight illumination module 1011 are described below. The floodlight illumination module 1011 projects floodlight and serves the cat eye, face-scanning, and palm-scanning functions. Because the light source band and fill-light frequency required by these three functions are not necessarily the same, the invention provides the following schemes so that one and the same floodlight illumination module can serve all three functions.
In one embodiment, the first light source selected for the floodlight illumination module 1011 may be a light source with a center wavelength of 930-950 nm (hereinafter, the 940 nm light source) or one with a center wavelength of 840-860 nm (hereinafter, the 850 nm light source). Taking the 940 nm light source as an example: under the face-scanning or palm-scanning function, the uniform 940 nm floodlight projected by the floodlight illumination module 1011 provides infrared fill light, and starting either function turns the module on. Under the cat eye function, when the ambient illuminance is below a certain threshold, the floodlight illumination module 1011 is turned on and its uniform 940 nm floodlight provides infrared fill light; when the ambient illuminance is not below the threshold, the module is turned off. The threshold is determined by factors such as the low-light imaging quality and frame rate of the imaging chip in the receiving camera 1013 and the actual application scene, and is generally set in the range of 5-100 Lux.
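The illuminance-threshold behavior above can be captured in a few lines. This sketch is illustrative only; the function name and default value are assumptions, while the 5-100 Lux setting range is the one stated in the embodiment.

```python
# Setting range for the fill-light threshold given in the embodiment.
LUX_MIN, LUX_MAX = 5.0, 100.0

def flood_needed(ambient_lux: float, threshold_lux: float = 50.0) -> bool:
    """Under the cat eye function, enable the 940 nm floodlight module
    only when ambient illuminance falls below the configured threshold."""
    if not (LUX_MIN <= threshold_lux <= LUX_MAX):
        raise ValueError(
            f"threshold {threshold_lux} Lux outside {LUX_MIN}-{LUX_MAX} Lux")
    return ambient_lux < threshold_lux
```

In practice the threshold would be tuned per device against the imaging chip's low-light quality and frame rate, as the text notes.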
Fig. 4 is a block diagram of another imaging module according to an embodiment of the present invention. Referring to fig. 4, optionally, in another embodiment, the floodlight illumination module 1011 may further include, in addition to the first light source 10111, a second light source 10112 emitting in a visible or infrared band. That the second light source 10112 emits at least one visible band or at least one infrared band means it may contain sources of one or more colors, for example white light (a mixture of multiple colors), red light (600-800 nm), green light (530-560 nm), blue light (430-470 nm), yellow light (570-600 nm), or infrared light at a wavelength different from the first light source (for example 850 nm, 905 nm, 940 nm, or 1550 nm). Specifically, as illustrated in fig. 4, the second light source 10112 may include a first sub-light source 10112(a), a second sub-light source 10112(b), and a third sub-light source 10112(c). Here the first light source 10111 projects infrared light, the first sub-light source 10112(a) projects white light, the second sub-light source 10112(b) projects infrared light, and the third sub-light source 10112(c) projects green light. In the figure, the four light sources are arranged side by side; those skilled in the art may adopt other arrangements, such as spacing them along the horizontal axis, and no particular arrangement or order is required.
Under the face brushing or palm brushing function, the flood lighting module 1011 may then project uniform floodlight from the first light source 10111 and the second light source 10112 respectively: the first light source 10111 projects uniform infrared floodlight, and the second light source 10112 projects at least one kind of uniform visible floodlight or another kind of uniform infrared floodlight, thereby implementing multispectral fill light. Under the cat eye function, when the ambient illuminance is below the threshold, the flood lighting module 1011 is turned on and either uses the first light source 10111 to project infrared floodlight for infrared fill light, or uses the second light source 10112 to project at least one kind of visible floodlight or another infrared floodlight for multispectral fill light; when the ambient illuminance is not below the threshold, the flood lighting module 1011 is turned off. Based on this embodiment, images in different bands can be obtained, especially under the face brushing or palm brushing function, which increases the types of acquired images available for living body detection and thereby improves the security of the system's identity recognition.
Alternatively, in some embodiments, the flood lighting module 1011 may consist of an LED light source and a lens disposed on its light emitting side, the lens expanding the field angle of the beam emitted by the LED light source. Alternatively, in some embodiments, the flood lighting module 1011 may consist of a laser light source and a diffuser (Diffuser) disposed on its light emitting side, the diffuser performing beam shaping and light homogenization.
The LED light source and lens, or the laser light source and Diffuser, are packaged on a ceramic substrate or a circuit board of another material; the lower surface of the package carries positive and negative electrodes and is soldered onto the main board 1014. In some cases, the LED or laser light source may also be raised on a raising block, which on the one hand prevents the emitted light from being blocked by surrounding devices and on the other hand makes it convenient to arrange a window corresponding to the light source.
Next, optionally, the light source module may further include a structured light projection module for projecting structured light; the structured light projection module comprises a third light source, and the third light source is used for emitting infrared band light.
Specifically, fig. 5 is a schematic structural diagram of a structured light projection module according to an embodiment of the present invention. Referring to fig. 5, the structured light projection module 1012 is configured to project structured light and is used only under the face brushing and palm brushing functions; it includes a third light source 10121 configured to emit infrared-band light. In order to realize both the face brushing and palm brushing functions with the same structured light projection module, the invention provides the following structured light projection module schemes.
The third light source 10121 in the structured light projection module 1012 may be a laser light source, which may be a laser light source such as a VCSEL (vertical cavity surface emitting laser), an EEL (edge emitting laser), a HCSEL (horizontal cavity surface emitting laser), or the like. The laser light source band is an infrared band, and a light source with a center wavelength of 930-950nm (hereinafter, referred to as 940nm light source) or a light source with a center wavelength of 840-860nm (hereinafter, referred to as 850nm light source) can be selected.
Further, the structured light projection module is provided with a dimming element, where the dimming element is a collimating and diffracting integrated optical element, a super-surface lens, or a combination of a collimating lens and a diffractive optical element. The dimming element is located on the light emitting side of the light source in the structured light projection module and modulates the outgoing light of that light source so as to project the corresponding structured light.
The following describes possible implementations of the above dimming element. First, with continued reference to fig. 5, in one embodiment, the dimming element may be a collimating mirror 10122 and a diffractive optical element 10123. The collimating mirror 10122 may be composed of one or more lens pieces fixed in the lens barrel 10124, and the diffractive optical element 10123 is fixed to a stepped surface of the lens barrel 10124. The light emitting surface of the laser light source is located at the focal plane of the collimating mirror 10122. The light emitted by the laser source is collimated into a parallel beam by the collimating mirror 10122 and then diffracted and replicated by the diffractive optical element 10123 into structured light carrying certain characteristic information, such as common speckle structured light.
Fig. 6 is a schematic structural diagram of another structured light projection module according to an embodiment of the present invention. Referring to fig. 6, optionally, the dimming element may instead be a collimating and diffracting integrated optical element 10125 replacing the combination of the collimating mirror 10122 and the diffractive optical element 10123. The collimating and diffracting integrated optical element 10125 includes a substrate and a microstructure surface that together provide the collimating and diffracting functions; the substrate may be glass (such as quartz) or plastic (such as PC or PMMA). The element may integrate a collimating microstructure surface and a diffracting microstructure surface on a single optical element, or use one microstructure surface to realize both functions as described above; for example, the microstructure surface may be a grating microstructure surface designed on the diffraction principle, or a super-surface microstructure surface designed on the generalized Snell's law. The light emitting surface of the laser light source is located at the focal plane of the collimating and diffracting integrated optical element 10125. The element is fixed in the mounting groove of the lens barrel 10124 by low-fluidity glue, and the lens barrel 10124 is fixed on the main board 1014 by glue.
In this embodiment, the collimating and diffracting integrated optical element 10125 replaces both the collimating function of the collimating mirror 10122 and the diffractive replication function of the diffractive optical element 10123 in the traditional combination of the collimating mirror 10122 and the diffractive optical element 10123. By integrating the collimating and diffractive replication functions on one optical element, one collimating lens is eliminated, reducing material cost and assembly difficulty.
Alternatively, in still another embodiment, a light source module for an imaging module may be provided in which a structured light projection module and a floodlight module may be integrated in the same barrel, in which case the structured light projection module and the floodlight module share one dimming element. The dimming element is positioned at the light emitting side of the light source in the floodlight module and the structural light projection module, and is used for respectively modulating the emergent light of the light source in the floodlight module and the structural light projection module so as to correspondingly project floodlight and structural light.
Specifically, fig. 7 is a schematic structural diagram of a light source module according to an embodiment of the present invention. Referring to fig. 7, the lens barrel 10124 has two mounting grooves, a first mounting groove 101241 and a second mounting groove 101242, and two metal terminals, a first metal terminal 101243 and a second metal terminal 101244. The collimating mirror 10122 and the diffractive optical element 10123 are fixed in the first mounting groove 101241 by low-fluidity glue, and the flood lighting module 1011 is fixed in the second mounting groove 101242 by low-temperature-cured conductive glue. The lower surface of the flood lighting module 1011 carries an anode and a cathode, each connected to a different metal terminal through the conductive glue. The upper end surfaces of the first metal terminal 101243 and the second metal terminal 101244 are flush with the groove surface of the second mounting groove 101242, while their lower end surfaces protrude from the lower surface of the second mounting groove 101242 and are electrically connected to the main board 1014 by soldering or conductive glue. The first metal terminal 101243 and the second metal terminal 101244 are embedded in the barrel 10124 by an in-mold injection molding process. In this embodiment, the lens barrel 10124 and the metal terminals are integrally formed by in-mold injection molding, and the structured light projection module 1012 and the flood lighting module 1011 are then integrated in the same lens barrel 10124, which further simplifies the module, eases manufacture and production, and facilitates miniaturization.
Optionally, for the light source module of the imaging module, the structured light projection module and the flood lighting module may also be integrated in one projector. Figs. 8 and 9 are schematic structural views of two other light source modules provided in the embodiment of the present invention. Referring to figs. 8 and 9, in an alternative embodiment, the third light source 10121 of the structured light projection module 1012 and the light source of the flood lighting module 1011 (only the first light source 10111 is included in the figured example) may be disposed in the same barrel 10124; both are electrically connected with the main board 1014, the first light source 10111 through a raising block 10120.
As can be seen from comparing fig. 8 and fig. 9, the two alternative embodiments differ in that, in the embodiment of fig. 8, the structured light projection module 1012 and the flood lighting module 1011 share a group of dimming elements, specifically a combination of a collimating mirror 10122 and a diffractive optical element 10123, whereas in the embodiment of fig. 9 the shared dimming element is a collimating and diffracting integrated optical element 10125. The outgoing light of the third light source 10121 of the structured light projection module 1012 passes in turn through the collimating mirror 10122 and the diffractive optical element 10123, or through the collimating and diffracting integrated optical element 10125, to project structured light onto the target scene; the light of the light source in the flood lighting module 1011 (the first light source 10111 in the example) passes along the same path to project floodlight onto the target scene. The third light source 10121 and the first light source 10111 emit light alternately, so that the different fill light functions can be realized.
It should be noted that the raising block 10120 may be a printed circuit board (Printed Circuit Board, PCB), a ceramic substrate, or a metal conductive block. The raising block 10120 may be disposed below the first light source 10111 or below another light source of the flood lighting module 1011. On the one hand it raises the light source so that a certain height difference exists between the light emitting surface of one light source and that of the other; in this embodiment, the light emitting surface of the second light source 10112 is located at the virtual focal plane of the collimating mirror 10122 or the collimating and diffracting integrated optical element 10125, and the light emitting surface of the first light source 10111 is located at the focal plane of the collimating mirror 10122 or the collimating and diffracting integrated optical element 10125. On the other hand it electrically connects the light source to the main board 1014. This embodiment essentially integrates the structured light projection module 1012 and the flood lighting module 1011 into one projector, further simplifying the module, easing manufacture and production, and facilitating smaller integration. In some embodiments, the light source in the flood lighting module 1011 may include both the first light source 10111 and the second light source 10112 described above, i.e. an added visible-band or infrared-band second light source, which may contain sub-light sources of one or more colors, for example white, red, green, blue or yellow light, so as to implement multispectral floodlight.
Next, fig. 10 is a schematic structural diagram of a receiving camera according to an embodiment of the present invention. Referring to fig. 10, the receiving camera 1013 is used for imaging and may include an imaging chip 10131, an imaging lens 10132 and a filter assembly 10133, where the imaging lens 10132 and the filter assembly 10133 are each located on the light receiving path of the imaging chip 10131. In the cat eye imaging mode, the filter assembly 10133 transmits at least visible-band or at least infrared-band light; in the biometric imaging mode, the filter assembly 10133 transmits at least infrared-band light. In addition, the receiving camera 1013 may further be provided with a holder 10134 for carrying and accommodating the imaging chip 10131, the imaging lens 10132 and the filter assembly 10133.
For the receiving camera of the embodiment of the invention: in an imaging system integrating the cat eye function, the target field of view (FOV) and target imaging distance range required by the cat eye, face brushing and palm brushing functions differ, and the target light source bands are not necessarily the same, so the imaging area, imaging resolution and imaging image height of the target scene on the imaging chip 10131 differ, and the imaging bands are not necessarily the same. In general, the target field angles under the three functions are ordered: cat eye function > palm brushing function > face brushing function. The target imaging distance ranges under the three functions are: generally 4-25cm under the palm brushing function, generally 0.3-1m under the face brushing function, and generally 0.4-2.5m under the cat eye function. As for the target light source bands: under the cat eye function, when the ambient illuminance is above the threshold, the target images by reflecting ambient light, and when it is below the threshold, the flood lighting module 1011 is turned on for infrared fill light and the target images by reflecting infrared light; under the face brushing and palm brushing functions, the structured light projection module 1012 and the flood lighting module 1011 are turned on alternately: when the structured light module is on, the face or palm images by reflecting infrared light, and when the flood lighting module 1011 is on, the face or palm images by reflecting infrared and/or visible light.
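For illustration only (not part of the claimed subject matter), the per-function parameters stated above can be tabulated as follows; the distance values are taken from the text, while the structure and function names are illustrative assumptions:

```python
# Per-function imaging parameters quoted in the description.
# Distance ranges are from the text; names and structure are illustrative.

MODE_PARAMS = {
    "palm":    {"distance_m": (0.04, 0.25)},  # 4-25 cm
    "face":    {"distance_m": (0.3, 1.0)},    # 0.3-1 m
    "cat_eye": {"distance_m": (0.4, 2.5)},    # 0.4-2.5 m
}

# Target field angle ordering, largest to smallest, per the text:
FOV_ORDER = ["cat_eye", "palm", "face"]

def in_working_range(mode: str, distance_m: float) -> bool:
    """Check whether a target distance falls in the mode's working range."""
    lo, hi = MODE_PARAMS[mode]["distance_m"]
    return lo <= distance_m <= hi
```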
In order to realize the cat eye, face brushing and/or palm brushing functions with the same receiving camera, the invention provides the following schemes for each structural member of the receiving camera.
Optionally, the present invention provides various schemes for the imaging lens 10132 in the receiving camera 1013. First, the imaging lens 10132 focuses the light reflected from the target object onto the imaging chip 10131 for imaging. In the prior art, the field angle required under the cat eye function keeps growing, generally above 155 degrees, so the imaging lenses used have large optical distortion, generally above 60 percent. Under the face brushing and palm brushing functions, a lens with excessive optical distortion impairs the accuracy of identity recognition, while prior-art lenses with small optical distortion generally have field angles below 120 degrees and cannot meet the cat eye field-of-view requirement. The prior art therefore generally uses two cameras: one for the cat eye function and one for the face brushing or palm brushing function.
Based on the above problems, the embodiment of the present invention specially designs the imaging lens 10132 in the receiving camera 1013. In the imaging system of the embodiment of the invention, the biometric imaging mode may specifically include a face brushing imaging mode and a palm brushing imaging mode; the imaging target in the face brushing imaging mode is a face image, and in the palm brushing imaging mode a palm image; the image output field angle is at most 155 degrees in the palm brushing imaging mode and at most 120 degrees in the face brushing imaging mode. Accordingly, the imaging lens 10132 in the receiving camera of the embodiment of the present invention satisfies: when the FOV is at most 155 degrees, the maximum distortion value is at most 70 percent; when the FOV is at most 120 degrees, the maximum distortion value is at most 5 percent; and the increase of the distortion value with the imaging image height is greater in the range 120° < FOV ≤ 155° than in the range FOV ≤ 120°. The imaging lens 10132 can thus meet the field angle and optical distortion requirements of the three application scenes of the cat eye, face brushing and palm brushing functions, image clearly, and facilitate effective identity recognition.
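For illustration only, the lens specification just stated can be expressed as a simple acceptance check; the function name and return convention are illustrative assumptions:

```python
# Illustrative check of the stated lens specification:
#   FOV <= 120 deg  -> max distortion <= 5%
#   FOV <= 155 deg  -> max distortion <= 70%
#   FOV  > 155 deg  -> no distortion limit (cat eye region)

def meets_distortion_spec(fov_deg: float, max_distortion_pct: float) -> bool:
    if fov_deg <= 120:
        return max_distortion_pct <= 5.0
    if fov_deg <= 155:
        return max_distortion_pct <= 70.0
    return True
```

The example lens described later in the text (105°/2.53%, 140°/35.35%, 170°/83.62%) falls inside each of these bounds.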
Optionally, for the above-mentioned imaging lens design, the present invention proposes a multi-stage distortion-distributed imaging lens scheme, so that the same imaging lens is used to satisfy both the requirement of a large field of view of the cat eye function and the requirement of small distortion of the face brushing and palm brushing function.
Specifically, fig. 11 is a graph of an optical distortion design of an imaging lens according to an embodiment of the present invention. Referring to fig. 11, the ordinate represents the imaging image height and the abscissa represents the distortion design value of the imaging lens. The figure takes an imaging lens design with a three-segment distortion distribution as an example: the first segment is a-b (point a corresponds to the central field point), the second segment is b-c, and the third segment is c-d. The increase of the distortion value with imaging height in segment a-b is smaller than that in segment b-c, which in turn is smaller than that in segment c-d. The optical distortion requirements of the three functions satisfy: face brushing function < palm brushing function < cat eye function. Therefore the image output within the Y1 image height in the figure is used for the face brushing function, that within the Y2 image height for the palm brushing function, and that within the Y3 image height for the cat eye function.
Preferably, under the face brushing function the diagonal field of view of the imaging area is generally below 120 degrees, i.e. the field angle corresponding to the Y1 image height is below 120 degrees; the optical distortion requirement is strict, generally below 5 percent. Under the palm brushing function the diagonal field of view of the imaging area is generally below 155 degrees, i.e. the field angle corresponding to the Y2 image height is below 155 degrees; the optical distortion requirement is also relatively strict, generally below 70 percent. Under the cat eye function the field angle requirement is larger, so the diagonal field angle of the imaging area exceeds 155 degrees, and there is no special requirement on the distortion design value of the third segment. Based on this distortion design of the imaging lens, the same imaging lens can satisfy both the large field-of-view requirement of the cat eye function and the small distortion requirement of the face brushing and palm brushing functions.
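For illustration only, the mapping from function to read-out image height implied by the three-segment design can be sketched as follows; the function is hypothetical, with Y1 < Y2 < Y3 passed in as design parameters:

```python
# Illustrative mapping from the active function to the half image height
# read out on the sensor, per the three-segment design (Y1 < Y2 < Y3).
# Function and parameter names are assumptions for illustration.

def crop_half_image_height(function: str, y1: float, y2: float, y3: float) -> float:
    """Return the half image height to read out for the given function."""
    return {"face": y1, "palm": y2, "cat_eye": y3}[function]
```

With the concrete design values given later in the text (2mm, 2.65mm, 2.85mm), the face brushing crop is the innermost, lowest-distortion region.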
Fig. 12 is a schematic structural diagram of an imaging lens according to an embodiment of the present invention. Specifically, referring to fig. 12, a lens designed according to the above scheme is taken as an example: it contains two groups of light-transmitting components arranged along the horizontal direction, three lenses per group and six lenses in total. The front group comprises a convex lens, a biconcave lens and a concave lens arranged in sequence along the light incidence direction; the rear surface of the biconcave lens and the front and rear surfaces of the concave lens are quadric surfaces, and the spacings between the three lenses along the light incidence direction are 4.694mm and 2.054mm in turn. The rear group comprises a biconvex lens, a biconcave lens and a convex lens arranged in sequence along the light incidence direction; the front and rear surfaces of the biconcave lens and the convex lens are quadric surfaces, and the spacings between the three lenses along the light incidence direction are 0.501mm and 0.309mm in turn. The distance between the front and rear groups is 6.159mm. The lenses are made of glass. A diaphragm is arranged between the concave lens of the front group and the biconvex lens of the rear group, at a distance of 1mm from the biconvex lens of the rear group. The focal length of the lens is 1.5mm, and the total half image height is 2.85mm.
Fig. 13 is a graph of the optical distortion design of the imaging lens of fig. 12. As shown in fig. 13, the imaging area within a half image height of 2mm is used for the face brushing application, with a diagonal field angle of 105° and a maximum absolute distortion value of 2.53%; the imaging area within a half image height of 2.65mm is used for the palm brushing application, with a diagonal field angle of 140° and a maximum absolute distortion value of 35.35%; the imaging area within a half image height of 2.85mm is used for the cat eye application, with a diagonal field angle of 170° and a maximum absolute distortion value of 83.62%. The distortion increment is 1.265%/mm over 0-2mm, 50.49%/mm over 2-2.65mm, and 241.4%/mm over 2.65-2.85mm; that is, the imaging lens of this example meets the design requirements of the embodiment of the invention for the receiving camera.
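The quoted increments follow directly from the segment endpoints; for illustration only, the arithmetic can be verified as:

```python
# Verifying the distortion-increment arithmetic for the example lens:
#   increment = (distortion at segment end - at segment start) / height span
# Segment tuples: (y_start_mm, y_end_mm, distortion_start_%, distortion_end_%)

segments = [
    (0.0, 2.0, 0.0, 2.53),      # face brushing region
    (2.0, 2.65, 2.53, 35.35),   # palm brushing region
    (2.65, 2.85, 35.35, 83.62), # cat eye region
]

increments = [(d1 - d0) / (y1 - y0) for y0, y1, d0, d1 in segments]
# -> approximately [1.265, 50.49, 241.35] %/mm, matching the stated values
```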
Alternatively, in some embodiments, if the system supports cat eye, palm, face brushing functions simultaneously, an imaging lens of the three-stage distortion profile described above may be employed. Alternatively, in some embodiments, if the system integrates the cat eye function while only supporting one of the face brushing and palm brushing functions, the distortion design of the imaging lens described above requires only two segments.
Optionally, in some embodiments, the imaging lens may also be a common refractive lens, with the distortion under the cat eye, face brushing or palm brushing function corrected in the main control module by an algorithm, for example the polynomial distortion correction algorithm built into the ISP chip. Optionally, because the distortion requirement under the cat eye function is looser, the main control module may correct distortion only for the face brushing or palm brushing function and leave the cat eye image uncorrected.
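A minimal sketch of polynomial radial distortion correction, for illustration only: this is not the ISP chip's actual algorithm, and the two-coefficient model r_d = r_u·(1 + k1·r_u² + k2·r_u⁴) and the fixed-point inversion are assumptions chosen for simplicity:

```python
# A minimal sketch (not the ISP's actual algorithm) of inverting a
# two-coefficient polynomial radial distortion model for one point in
# normalized image coordinates: r_d = r_u * (1 + k1*r_u^2 + k2*r_u^4).

def undistort_point(xd: float, yd: float, k1: float, k2: float,
                    iterations: int = 10) -> tuple:
    """Iteratively solve for the undistorted point via fixed-point iteration."""
    xu, yu = xd, yd  # initial guess: undistorted == distorted
    for _ in range(iterations):
        r2 = xu * xu + yu * yu
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = xd / scale, yd / scale
    return xu, yu
```

In a real pipeline this mapping would be applied per pixel (usually via a precomputed remap table) only for the face brushing or palm brushing output, leaving the cat eye image unprocessed as the text suggests.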
Alternatively, the imaging lens 10132 may be a unitary lens structure, where the imaging lens 10132 is fixed to the bracket 10134 or the main board 1014 by AA glue. Alternatively, the imaging lens 10132 may also be fixed in a barrel, the barrel being screwed in a bracket 10134, the bracket 10134 being fixed on the main plate 1014 by glue.
Optionally, the present invention provides various schemes for the filter assembly 10133 in the receiving camera 1013. The filter assembly 10133 cuts off non-effective light so that only the desired beam passes through to image on the imaging chip 10131, thereby resisting ambient light interference and yielding the desired image. The non-effective bands to be cut off under the cat eye, face brushing and palm brushing functions are not necessarily the same. For example, under the cat eye function, when illumination is sufficient, the visible band must be transmitted and other bands cut off, and when illumination is insufficient, the infrared band must be transmitted; the palm/face brushing function must under some conditions cut off the visible band and transmit the infrared band, and under other conditions cut off the infrared band and transmit the visible band. In order to implement the cat eye, face brushing and/or palm brushing functions with the same receiving camera 1013, the present invention provides the following filter assembly schemes. Figs. 14-16 are three transmission spectrum diagrams provided in the embodiments of the present invention. Referring to figs. 14-16, the explanation below takes the filtering scheme in which a 940nm light source is selected for the structured light projection module 1012 and the flood lighting module 1011. The spectrum curves are schematic and do not limit the actual spectrum specification of the filter assembly of the invention; if the light source uses other bands, the principle is the same.
In a first aspect, the filter assembly 10133 includes a first filter, a second filter, and a switching mechanism; the first optical filter transmits infrared band light rays, and the second optical filter transmits infrared band and visible band light rays; the main control module 102 is configured to control the switching mechanism to drive the second optical filter to move onto the light receiving path of the imaging chip 10131 in the cat eye imaging mode, and control the switching mechanism to drive the first optical filter to move onto the light receiving path of the imaging chip 10131 in the biometric imaging mode.
The scheme essentially adopts an IR CUT comprising two filters. For example, one filter is a 940nm narrowband filter with high transmittance only for beams in the band near 940nm, whose spectrum curve is shown in fig. 14; the other is a 940nm and visible light dual-pass filter with high transmittance only for visible light and beams in the band near 940nm, whose spectrum curve is shown in fig. 16. The filters are switched by a power-driven mechanical structure, i.e. the switching mechanism. Under the face brushing or palm brushing function, the 940nm narrowband filter is engaged so that the imaging chip 10131 obtains the 940nm original speckle infrared image and original flood infrared image of the target face or palm. Under the cat eye function, the 940nm and visible light dual-pass filter is engaged: when the ambient illuminance is below the threshold, the imaging chip 10131 acquires the 940nm original flood infrared image of the target object, and when it is not below the threshold, the imaging chip 10131 acquires the original visible light image of the target object.
In a second aspect, the filter assembly 10133 includes a first filter, a second filter, and a switching mechanism; the first filter transmits infrared-band light, and the second filter transmits visible-band light. The main control module 102 controls the switching mechanism to move the second filter onto the light receiving path of the imaging chip 10131 when, in the cat eye imaging mode, the ambient light intensity is above the preset light intensity threshold; to move the first filter onto the light receiving path when, in the cat eye imaging mode, the ambient light intensity is below the preset threshold; and to move the first filter onto the light receiving path in the biometric imaging mode.
The solution is essentially still an IR CUT solution, for example, where one filter is a 940nm narrowband filter, and the spectral curve is shown in fig. 14; one filter is an infrared cut-off filter, has high transmittance only for light beams in a visible light band, and a spectrum curve of the filter is shown in fig. 15; the optical filter is still switched by a power driven mechanical structure, namely a switching mechanism. Under the function of face brushing or palm brushing, a 940nm narrow-band filter is started, so that the imaging chip 10131 obtains a 940nm original speckle infrared image and an original floodlight infrared image of a target face or palm. Under the function of a cat eye, when the ambient illuminance is lower than the threshold value, starting a 940nm narrow-band filter, so that an imaging chip 10131 obtains an original floodlight infrared image of 940nm of a target object; when the ambient illuminance is not lower than the threshold, an infrared cut filter is activated, so that the imaging chip 10131 acquires the original visible light map of the target object.
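For illustration only (an assumption, not the patent's control code), the filter selection of this second IR CUT scheme can be sketched as:

```python
# Illustrative selection logic for the second IR CUT scheme: a 940 nm
# narrowband filter and an infrared cut (visible-pass) filter switched by
# the main control module 102. Names and return strings are assumptions.

def select_filter(mode: str, ambient_lux: float, threshold_lux: float) -> str:
    """Return which filter the switching mechanism should move into the path."""
    if mode in ("face", "palm"):
        return "940nm_narrowband"
    if mode == "cat_eye":
        if ambient_lux < threshold_lux:
            return "940nm_narrowband"   # dim scene: image the 940nm fill light
        return "ir_cut_visible_pass"    # bright scene: image visible light
    raise ValueError(f"unknown mode: {mode}")
```

The first scheme differs only in that the bright-scene cat eye branch selects a 940nm and visible dual-pass filter instead of the infrared cut filter.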
In a third aspect, the filter assembly 10133 includes a first filter; the first filter is an electrochromic filter. The main control module 102 is configured to adjust the first filter to transmit visible-band or infrared-band light in the cat eye imaging mode, and to adjust the first filter to transmit infrared-band light in the biometric imaging mode.
This solution essentially adopts an electrochromic filter: the filter is made of an electrochromic material, and a driving circuit applies different voltages to it to switch its transmission. The electrochromic material, such as tungsten trioxide or viologen, undergoes an electrochemical oxidation-reduction reaction under the applied voltage, changing the color of the material. For example, under the face-brushing or palm-brushing function, the electrochromic filter enables a 940 nm single-pass mode, so that the imaging chip 10131 acquires the 940 nm original speckle infrared image and original floodlight infrared image of the target face or palm; the corresponding spectral curve is shown in fig. 14. Under the cat eye function, when the ambient illuminance is lower than the threshold, an all-pass mode is enabled, so that the imaging chip 10131 acquires the 940 nm original floodlight infrared image of the target object; when the ambient illuminance is not lower than the threshold, a visible-light single-pass mode is enabled, so that the imaging chip 10131 acquires the original visible light image of the target object, with the spectral curve shown in fig. 15.
In a fourth aspect, the filter assembly includes a first filter; the first filter transmits both infrared-band and visible-band light.
This solution essentially uses a single dual-pass filter, for example one passing both 940 nm and visible light. Under the face-brushing or palm-brushing function, the 940 nm structured light and floodlight pass through the filter, so that the imaging chip 10131 acquires the 940 nm original speckle infrared image and original floodlight infrared image of the target face or palm. Under the cat eye function, when the ambient illuminance is higher than the threshold, the ambient light passes through the filter and the imaging chip 10131 acquires the original visible light image of the target scene; when the ambient illuminance is lower than the threshold, the 940 nm floodlight passes through the filter and the imaging chip 10131 acquires the original floodlight infrared image of the target scene.
Optionally, in some embodiments, if the floodlight module 1011 further adds a light source emitting visible-band light, the above filter schemes are extended as follows:
In the first scheme, the main control module 102 may further control the switching mechanism to move the second filter onto the light-receiving path of the imaging chip 10131 in the biometric imaging mode; that is, under the palm-brushing or face-brushing function, the 940 nm and visible-light dual-pass filter may be enabled, so that the imaging chip 10131 acquires the original visible light image of the target face or palm (the original visible light image may be captured under white-light supplementary lighting or under monochromatic supplementary lighting of multiple wavebands);
in the second scheme, the main control module 102 may likewise control the switching mechanism to move the second filter onto the light-receiving path of the imaging chip 10131 in the biometric imaging mode; that is, under the palm-brushing or face-brushing function, the visible-light single-pass filter may be enabled, so that the imaging chip 10131 acquires the original visible light image of the target face or palm;
in the third scheme, the main control module 102 may further adjust the first filter to transmit visible-band light in the biometric imaging mode; that is, a visible-light single-pass filtering mode may be enabled under the palm-brushing or face-brushing function, so that the imaging chip 10131 acquires the original visible light image of the target face or palm;
in the fourth scheme, under the palm-brushing or face-brushing function, the visible light passes through the filter, so that the imaging chip 10131 acquires the original visible light image of the target face or palm.
Alternatively, in some embodiments, the filter assembly 10133 may be fixed directly in the holder 10134, the holder 10134 being glued to the main board 1014. Optionally, in some embodiments, the filter assembly 10133 may instead be integrated within the barrel 10124 of the imaging lens 10132, the barrel 10124 being threaded into the holder 10134.
Optionally, the present invention provides various schemes for the imaging chip 10131 in the receiving camera 1013, the imaging chip 10131 being an RGB sensor or an RGB-IR sensor.
Specifically, because the three functions of cat eye, face brushing, and palm brushing require the imaging chip 10131 in the camera to respond to and image light beams in multiple wavebands, the chip must output different types of images; and because the target fields of view required by the three functions differ, the chip must also output images of different image heights. To meet these different output requirements with the same imaging chip, the invention provides the following imaging chip schemes. Fig. 17 is a photoelectric conversion efficiency curve of an RGB sensor according to an embodiment of the present invention, and fig. 18 is a photoelectric conversion efficiency curve of an RGB-IR sensor according to an embodiment of the present invention; the imaging chip schemes are described below with reference to figs. 17 and 18.
In the first scheme, an RGB sensor is used. The RGB sensor consists of Red, Green, and Blue pixels.
Referring to fig. 17, all three pixel types also have some response to infrared light, so the RGB sensor can output the original visible light image of the target scene under ambient illumination, the original floodlight infrared image or original visible light image under the supplementary lighting of the floodlight illumination module 1011, and the original speckle infrared image under the projection of the structured light projection module 1012. Because the required imaging image heights are ordered cat eye function > palm-brushing function > face-brushing function, the maximum output image height of the RGB sensor must be no less than the imaging image height required by the cat eye function. The images collected by the RGB sensor correspond to the projection schemes of the structured light projection module 1012 and the floodlight illumination module 1011. Taking the case where both modules use 940 nm light sources as an example, the RGB sensor obtains the 940 nm original floodlight infrared image and the original visible light image of the target scene under the cat eye function, the 940 nm original speckle infrared image and original floodlight infrared image of the face under the face-brushing function, and the 940 nm original speckle infrared image and original floodlight infrared image of the palm under the palm-brushing function, and sends the imaging information and images to the main control module 102 for subsequent processing.
Optionally, in some embodiments, the floodlight module 1011 further adds a visible-band light source, so that the RGB sensor can also obtain the original visible light image of the face or palm under the face-brushing or palm-brushing function.
In the second scheme, an RGB-IR sensor is used. The RGB-IR sensor consists of Red, Green, Blue, and IR pixels.
Referring to fig. 18, the IR pixels mainly respond to infrared light, so the RGB-IR sensor can image the original visible light image as well as the original speckle infrared image and original floodlight infrared image. Because the required imaging image heights are ordered cat eye function > palm-brushing function > face-brushing function, the maximum output image height of the RGB-IR sensor must be no less than the imaging image height required by the cat eye function. The images collected by the RGB-IR sensor correspond to the projection schemes of the structured light projection module 1012 and the floodlight illumination module 1011. Again taking the case where both modules use 940 nm light sources as an example, the RGB-IR sensor obtains the 940 nm original floodlight infrared image and the original visible light image under the cat eye function, the 940 nm original speckle infrared image and original floodlight infrared image under the face-brushing function, and the 940 nm original speckle infrared image and original floodlight infrared image under the palm-brushing function, and sends the imaging information and images to the main control module 102 for subsequent processing.
On the one hand, compared with the R, G, and B pixels in an RGB sensor, the IR pixels in an RGB-IR sensor have higher photoelectric conversion efficiency for infrared light, and their imaging is less affected by ambient light, so the infrared images obtained are better in infrared supplementary lighting scenes. On the other hand, since the image or information output by the IR pixels can be extracted separately in subsequent image processing, the optical elements can be further simplified when applied in infrared supplementary lighting scenes, preferably using a common dual-pass filter to filter out non-effective light in the visible band.
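The separate extraction of the IR pixel outputs mentioned above can be sketched as follows. This is a hypothetical illustration only: real RGB-IR sensors use vendor-specific mosaic layouts (often 4x4 cells), so the 2x2 cell with the IR pixel at a fixed position is an assumption for demonstration.

```python
def extract_ir_plane(raw, ir_pos=(1, 0)):
    """Extract the quarter-resolution IR sub-image from a mosaic readout.

    raw: 2D list of pixel values from a (hypothetical) RGB-IR sensor whose
         repeating 2x2 cell contains one IR pixel.
    ir_pos: (row, col) of the IR pixel inside each 2x2 cell.
    Returns a 2D list containing only the IR pixel values."""
    r0, c0 = ir_pos
    # Stride-2 slicing picks out one pixel per 2x2 cell.
    return [row[c0::2] for row in raw[r0::2]]
```

Subsequent processing (e.g., speckle matching or liveness detection on the infrared channel) would then operate on this IR plane independently of the visible channels.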
Optionally, in some embodiments, the floodlight module 1011 further employs a visible-band light source, so that the RGB-IR sensor can also obtain the original visible light image of the face or palm under the face-brushing or palm-brushing function.
Optionally, in some embodiments, the imaging chip 10131 is electrically connected to the main board 1014 by soldering via solder balls on its lower surface. Optionally, in some embodiments, the lower surface of the imaging chip 10131 is adhered to the main board 1014 with red glue, and the upper surface of the imaging chip 10131 is electrically connected to the main board 1014 by metal wires.
Alternatively, in some embodiments, the receiving camera 1013 may employ an auto-focusing (AF) module. The AF module adopts an auto-focusing lens, and an auto-focusing algorithm adjusts the distance between the lens and the imaging chip 10131 under the three functions, so that clear imaging is achieved over the different imaging distance ranges of the three functions. Under this scheme, the three system functions have different target imaging distance ranges: generally 4-25 cm for the palm-brushing function, 0.3-1 m for the face-brushing function, and 0.4-2.5 m for the cat eye function. Matched depth-of-field ranges can thus be realized respectively, so that the receiving camera 1013 images objects at the target distances clearly within the corresponding depth-of-field range.
Alternatively, in some embodiments, the receiving camera 1013 may employ a two-stage focusing AF module. This module adopts a two-stage focusing lens, and performs two-stage focusing between two positions A and B by adjusting the distance between the lens and the imaging chip 10131. Focusing at position A serves the palm-brushing function, to achieve clear imaging within the imaging distance range of that function; focusing at position B serves the face-brushing or cat eye function, to achieve clear imaging within the imaging distance ranges of those functions. Compared with the first scheme, the two-stage focusing AF module needs no auto-focusing algorithm, which helps improve the reliability and service life of the motor in the receiving camera 1013, saves computing resources, and reduces power consumption. Under this scheme, the three system functions again have different target imaging distance ranges: generally 4-25 cm for the palm-brushing function, 0.3-1 m for the face-brushing function, and 0.4-2.5 m for the cat eye function. A relatively near depth-of-field range is set for the palm-brushing function, and a relatively far one for the face-brushing and cat eye functions, so that the receiving camera 1013 images objects at the target distances clearly within the corresponding depth-of-field range.
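The two-stage policy above replaces a full auto-focus search with a fixed mapping from system function to focus position. A minimal sketch, where the motor position codes and mode strings are purely illustrative assumptions:

```python
# Hypothetical motor position codes for the two fixed focus positions.
FOCUS_A = 120   # near focus: palm-brushing (~4-25 cm object distance)
FOCUS_B = 40    # far focus: face-brushing (~0.3-1 m) and cat eye (~0.4-2.5 m)

def focus_position(function: str) -> int:
    """Map a system function directly to one of the two focus positions,
    avoiding an auto-focus algorithm entirely."""
    return FOCUS_A if function == "palm" else FOCUS_B
```

Because the mapping is a constant lookup, the focusing motor only ever moves between two positions, which is the source of the reliability and power benefits described above.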
Optionally, in some embodiments, the receiving camera 1013 may employ a fixed-focus (FF) module. The FF module adopts a fixed-focus lens; with the focus fixed, clear imaging over the full range of all three functions must be satisfied, so the FF module requires a lens optimized for large depth of field. The depth of field of the lens can be increased by reducing its focal length and/or increasing its F-number.
Compared with the first two schemes, the receiving camera of the fixed-focus FF module satisfies the following condition in the cat eye imaging mode, the palm-brushing imaging mode, and the face-brushing imaging mode: z = f²l/(f² − 2aFl) − f²l/(f² + 2aFl); where z is the depth of field of the receiving camera 1013, f is the focal length of the receiving camera 1013, l is the target object distance of the receiving camera 1013 in the cat eye, palm-brushing, or face-brushing imaging mode, F is the aperture value of the receiving camera 1013, and a is the size of a single pixel unit of the imaging chip 10131 in the receiving camera 1013. Further, f < 2 mm, F > 1.7, and a > 1 μm.
As described above, the camera satisfying the above formula in all three imaging modes means that it can image clearly at the object distances of the three modes. With the focusing object distance l fixed, the smaller the focal length f, the larger the aperture value F, and the larger the pixel size a of the matched imaging chip, the larger the depth of field z of the camera; the fixed-focus FF module therefore preferably uses a large depth-of-field lens matched with an imaging chip of large pixel size.
In some embodiments, optionally, the matched lens has a focal length f < 2 mm and an aperture value F > 1.7, and the single pixel size of the imaging chip is a > 1 μm. Preferably, a lens focal length f < 1.5 mm is combined with an aperture value F > 2 and a single pixel size a > 2 μm.
For example, with a focal length of 0.9 mm, an aperture value of 2.4, and a matched chip pixel size of 3 μm, when the lens is focused at a distance of 9 cm the corresponding depth-of-field range for clear imaging extends from about 4 cm to infinity, which covers the required imaging distance ranges of the palm-brushing, face-brushing, and cat eye functions, from the nearest 4 cm to the farthest 2.5 m.
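The near and far limits implied by the depth-of-field formula above can be computed directly. The sketch below evaluates the two terms of the formula (circle of confusion taken as 2a, matching the formula's 2aFl factor) and reports an infinite far limit when the focus distance is at or beyond the hyperfocal distance; with the worked numbers above (f = 0.9 mm, F = 2.4, a = 3 μm, l = 9 cm) it gives a near limit of roughly 3.5 cm and an infinite far limit, consistent with the approximate 4 cm-to-infinity range cited.

```python
import math

def dof_limits(f_mm: float, F: float, a_mm: float, l_mm: float):
    """Near and far limits of sharp imaging for a fixed-focus camera.

    f_mm: lens focal length, F: aperture value, a_mm: single pixel size
    (circle of confusion taken as 2*a_mm), l_mm: focus object distance.
    Returns (near_mm, far_mm); far_mm is inf past the hyperfocal distance."""
    k = 2.0 * a_mm * F * l_mm
    near = f_mm**2 * l_mm / (f_mm**2 + k)
    far = f_mm**2 * l_mm / (f_mm**2 - k) if f_mm**2 > k else math.inf
    return near, far
```

The depth of field z in the patent's formula is simply `far - near` when the far limit is finite.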
In some embodiments, since the palm is close to the camera during palm brushing, the camera can still resolve palm print or palm vein information even when the palm is slightly outside the camera's depth of field; the near limit of the depth of field can therefore be relaxed to a larger distance value, for example from 4 cm to 8 cm.
Optionally, in some embodiments, the following solutions are further provided for the imaging lens and filter assembly of the receiving camera; fig. 19 and fig. 20 are schematic structural diagrams of two further receiving cameras provided by embodiments of the present invention. Specifically, referring to fig. 19, in yet another embodiment of the present invention, the imaging lens 10132 may be a super-surface lens, a diffractive lens, a refractive lens, or any hybrid of these. Further, referring to fig. 20, in still another embodiment of the present invention, the filter assembly may employ a filter film (not shown) attached to a surface of the imaging lens 10132.
In the receiving camera 1013 shown in fig. 19, a super-surface lens, a diffractive lens, or any combination of super-surface, diffractive, and refractive lenses essentially replaces the conventional refractive lens. Based on the generalized Snell's law, a super-surface lens produces abrupt phase changes by introducing sub-wavelength unit structures on its surface, giving its two-dimensional planar structure special electromagnetic properties; it can flexibly regulate the amplitude, phase, polarization, and other properties of incident light, with strong light-field control capability. A super-surface lens is generally formed by patterning, on a highly transmissive substrate such as quartz, SiO2, polymer material, or PC, a microstructured surface composed of many sub-wavelength units arranged according to a certain rule. The phase distribution required at each position of the super surface can be obtained by defining the light-field distribution and light-field information (amplitude, polarization, and the like) of the input and output fields, and the structural layout of the microstructured surface can then be calculated from that phase distribution together with the material choices of the substrate and the microstructured surface.
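As a concrete instance of deriving a phase distribution from the desired input and output fields, the standard hyperbolic phase profile for a flat focusing metalens can be evaluated numerically. This is a generic textbook profile, not taken from the patent; the 940 nm design wavelength is chosen only to match the light sources discussed above.

```python
import math

def metalens_phase(r_um: float, focal_um: float, wavelength_um: float = 0.94) -> float:
    """Required phase (radians, wrapped to [0, 2*pi)) at radial position r_um
    on a flat metalens that focuses plane-wave light of wavelength_um at
    focal distance focal_um. Standard hyperbolic profile:
        phi(r) = (2*pi/lambda) * (f - sqrt(r^2 + f^2))"""
    phi = (2.0 * math.pi / wavelength_um) * (focal_um - math.hypot(r_um, focal_um))
    return phi % (2.0 * math.pi)
```

Sampling this profile at each unit-cell position, then mapping each target phase to a sub-wavelength pillar geometry from a precomputed library, is the usual design flow the paragraph above alludes to.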
In the receiving camera 1013 shown in fig. 20, the imaging lens is the imaging lens 10132 described above, with a filter film attached to its surface to filter light. This solution thus essentially uses an imaging lens 10132 that integrates both the imaging and filtering functions; the imaging lens 10132 is fixed on the step surface of the holder 10134.
Based on the above schemes for the light source module and the receiving camera in the imaging module, the present invention further provides a plurality of combined embodiments of the light source module and the receiving camera; figs. 21 to 25 are schematic side or cross-sectional views of the imaging modules provided in embodiments of the present invention, and various possible combinations are described below with reference to the drawings.
In an alternative embodiment, referring to fig. 21, in the imaging module, for the light source module 1010, a floodlight module 1011 may be used, where the floodlight module 1011 includes a first light source 10111, and the first light source 10111 emits infrared light. In short, the light source module 1010 in this embodiment employs an infrared floodlight projector.
For the receiving camera 1013, this embodiment further provides several combinations: ① the imaging lens 10132 adopts a common imaging lens matched with a distortion correction algorithm, the filter assembly 10133 adopts a dual-pass filter, and the imaging chip 10131 adopts an RGB sensor; ② the imaging lens 10132 still adopts a common imaging lens matched with a distortion correction algorithm, the filter assembly 10133 adopts an IR-CUT filter or an electrochromic filter, and the imaging chip 10131 still adopts an RGB sensor. For both the ① and ② combinations of the receiving camera 1013, the imaging lens 10132 may be replaced with a three-stage anamorphic lens, and the imaging chip 10131 may be replaced with an RGB-IR sensor.
In an alternative embodiment, referring to fig. 22, in the imaging module, for the light source module 1010, a floodlight module 1011 is also used, and the floodlight module 1011 includes a first light source 10111, and the first light source 10111 emits infrared light. In short, the light source module 1010 in this embodiment employs an infrared floodlight projector.
For the receiving camera 1013, the imaging lens 10132 in this embodiment adopts a super-surface lens, a diffraction lens, a refraction lens, or any mixed form of the above lenses; the filter assembly 10133 employs a dual pass filter or an IR CUT filter or an electrochromic filter; the imaging chip 10131 employs an RGB sensor or RGB-IR sensor.
In an alternative embodiment, referring to fig. 23, in the imaging module, for the light source module 1010, a floodlight module 1011 is used, where the floodlight module 1011 includes a first light source 10111 and a second light source 10112, and the first light source 10111 emits light in the infrared band, and the second light source 10112 emits light in at least one visible band or at least one infrared band. In short, the light source module 1010 in this embodiment adopts a multispectral floodlight projector.
In this embodiment, for the receiving camera 1013, the imaging lens 10132 can still use a common imaging lens with distortion correction algorithm; the filter assembly 10133 employs a dual pass filter and the imaging chip 10131 employs an RGB sensor. In other implementations of this embodiment, the imaging lens 10132 in the receiving camera may be replaced with a three-stage anamorphic lens, a super-surface lens, a diffractive lens, a refractive lens, or any hybrid form of the foregoing; the filter assembly 10133 may be replaced with an IR CUT filter or an electrochromic filter; the imaging chip 10131 may be replaced with an RGB-IR sensor.
It should be noted that, in the above three embodiments, the light source module 1010 may be provided with a structured light projection module in addition to the floodlight module 1011, which is not limited herein.
In an alternative embodiment, referring to fig. 24, in the imaging module, for the light source module 1010, it includes a floodlight lighting module 1011 and a structured light projection module 1012, where the floodlight lighting module 1011 includes at least a first light source 10111, and the first light source 10111 emits infrared band light; the structured light projection module 1012 includes a third light source 10121 and a dimming element, wherein the third light source 10121 emits infrared light, and the dimming element modulates the emitted light of the third light source 10121 to project structured light.
In other implementations of this embodiment, the dimming element of the structured light projection module may be a collimating lens plus a diffractive optical element, a collimating-and-diffractive integrated optical element, a super-surface lens, or a reasonable combination of these. In addition, the structured light projection module and the floodlight illumination module may be integrated in the same lens barrel or the same projector.
In an alternative embodiment, referring to fig. 25, the imaging module 101 further includes an ambient light sensor 1015, where the ambient light sensor 1015 is configured to detect illuminance of the environment;
The main control module 102 is further configured to control the light source module to project floodlight in the cat eye imaging mode and when the illuminance of the environment is lower than a preset threshold.
Specifically, the ambient light sensor 1015 is configured to sense whether the illuminance of the surrounding environment is lower than a certain threshold, which is determined by factors such as the low-light imaging quality and frame rate of the imaging chip 10131 and the actual application scene; the threshold is generally set in the range of 5-100 Lux. When the ambient illuminance rises or falls, a signal is transmitted to the main control module 102, and the main control module 102 controls the other modules to respond; for example, it turns on the floodlight illumination module 1011 to project uniform floodlight onto the target object only when the ambient illuminance is below the threshold.
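The floodlight gating described above reduces to a simple predicate the main control module can evaluate on each sensor update. A hedged sketch; the function name, mode string, and the 30 Lux default are illustrative assumptions within the patent's stated 5-100 Lux range:

```python
def floodlight_enabled(mode: str, ambient_lux: float, threshold_lux: float = 30.0) -> bool:
    """In cat eye mode the flood projector is turned on only when the
    ambient light sensor reading falls below the configured threshold."""
    return mode == "cat_eye" and ambient_lux < threshold_lux
```

In biometric (face/palm) modes the floodlight is driven by the imaging sequence itself rather than by ambient illuminance, which is why the predicate returns False for those modes.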
With continued reference to fig. 25, in other implementations of this embodiment, a proximity light sensor 1016 may also be provided in the imaging module 101; the proximity light sensor 1016 is configured to generate a trigger signal when a target object approaches, and the main control module 102 controls the other modules to respond according to the trigger signal. Specifically, the main control module 102 may start the cat eye imaging mode or the biometric imaging mode according to the trigger signal: the proximity light sensor 1016 senses whether a target object is within the working distance range of the imaging module, and if so, the imaging module and the other modules respond.
Furthermore, it should be added that, for the various embodiments described above, the structured light projection module 1012, the floodlight illumination module 1011, and the receiving camera 1013 in the imaging module 101 may be connected to the main board 1014 as separate devices through connectors; alternatively, the structured light projection module 1012, the floodlight illumination module 1011, and the receiving camera 1013 may be attached directly to the main board 1014, i.e., share a common substrate. All such embodiments fall within the scope of the present invention.
Optionally, based on the above imaging module, a system architecture of the imaging system integrated with the cat eye function provided in the embodiment of the present invention is as follows. Fig. 26 is a system architecture diagram of another imaging system integrated with a cat eye function according to an embodiment of the present invention, and referring to fig. 26, the system may specifically include an imaging module 101, a main control module 102, an external device module 103, and a motherboard (not shown in the figure), where each module is electrically connected to the motherboard, and is electrically connected to other modules through the motherboard.
The imaging module 101 at least includes: the flood lighting module 1011, the receiving camera 1013, and the like may also include a structured light projection module 1012.
Referring now to fig. 26, a scheme of the internal structure of the main control module 102 in the imaging system of the present invention will be described.
First, optionally, the main control module 102 includes a living body detection unit 1021. The living body detection unit 1021 is configured to perform living body detection on the floodlight image, or on both the floodlight image and the environment image, using living body detection models, and to determine that the object to be detected is a living body when all of the living body detection models output positive results; the environment image includes at least one infrared-band image and/or at least one visible-band image, and the floodlight image includes at least one infrared-band image.
Further, in the previous embodiments, the light source module 1010 may also be used to project structured light; the receiving camera 1013 is also configured to collect and image reflected light after the structured light is projected onto the target object. In the biometric imaging mode, the main control module 102 is further configured to control the light source module 1010 to project the structured light, and control the receiving camera 1013 to collect the reflected light of the structured light projected onto the target object, so as to obtain the structured light image correspondingly.
Based on this, in an alternative embodiment, the main control module 102 may further include a depth processing unit 1022, configured to calculate a depth map from the structured light image and a preset reference structured light image. The living body detection unit 1021 is further configured to perform living body detection on the structured light image and/or the depth map using living body detection models, and to determine that the object to be detected is a living body when all the living body detection models output positive results; the structured light image includes an infrared-band image.
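The core step a depth processing unit performs after block-matching the captured speckle image against the reference speckle image is triangulating depth from the per-pixel disparity. A minimal sketch using a common structured-light triangulation model; the model, parameter names, and the baseline/focal values in the test are illustrative assumptions, not the patent's specific algorithm.

```python
def depth_from_disparity(d_px: float, z_ref_mm: float,
                         baseline_mm: float, focal_px: float) -> float:
    """Triangulate depth from the pixel disparity d_px between the captured
    and reference speckle patterns (reference recorded at distance z_ref_mm,
    projector-camera baseline baseline_mm, focal length focal_px in pixels).

    Uses the common model 1/Z = 1/Z_ref + d/(baseline * focal), so zero
    disparity maps back to the reference distance."""
    return 1.0 / (1.0 / z_ref_mm + d_px / (baseline_mm * focal_px))
```

Running this per pixel over the disparity map yields the depth map that the living body detection unit can then inspect for genuine three-dimensional surface structure.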
The environment image, structured light image, and floodlight image obtained by the main control module 102 in the biometric imaging mode are two-dimensional images carrying two-dimensional structure information of the target object, from which the target object can be identified as a living target such as a face or palm. However, because these images carry only two-dimensional structure information, their characterization of the target object's surface features is limited. In this embodiment, converting the structured light image into a depth map yields three-dimensional structure information of the target object's surface, so that living targets such as faces and palms can be determined more accurately when the depth map is used for the living body judgment. Further, performing living body detection with both two-dimensional and three-dimensional images increases the number and variety of detection samples, thereby improving the accuracy of living body detection.
Still optionally, in an embodiment, the main control module 102 further includes an identity verification unit 1023, configured to perform identity verification on at least one of the floodlight image and the environment image using an identity verification model, and to determine that the verification succeeds when the model outputs a positive result; the environment image includes at least one infrared-band image and/or at least one visible-band image, and the floodlight image includes at least one infrared-band image.
As also described in the previous embodiments, the biometric imaging mode includes a face-brushing imaging mode and a palm-brushing imaging mode; the imaging target of the face-brushing imaging mode is a face image, and that of the palm-brushing imaging mode is a palm image. Based on this, the main control module 102 further includes an image detection unit 1024, configured to determine the target object in an image using a single image detection model in the biometric imaging mode, and to determine from that target object whether the mode is the face-brushing imaging mode or the palm-brushing imaging mode.
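The dispatch performed by such an image detection unit reduces to mapping the detection model's output label to a sub-mode. A hypothetical sketch; the label strings and mode names are assumptions, since the patent does not specify the model's class vocabulary:

```python
def biometric_mode(detected_label: str) -> str:
    """Map the single image detection model's output label to the
    biometric imaging sub-mode the system should enter."""
    if detected_label == "face":
        return "face_brushing"
    if detected_label == "palm":
        return "palm_brushing"
    return "unknown"   # no recognized biometric target in the frame
```

Using one shared detection model for both targets, as the paragraph describes, keeps only a single network resident in memory and defers the face/palm decision to this label-based dispatch.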
The internal structure of the main control module 102 and its operation in the above alternative embodiments are described in detail below with reference to fig. 26.
As shown in fig. 26, the main control module 102 includes at least an image preprocessing unit 1025, an image detection unit 1024, a living body detection unit 1021, and an identity verification unit 1023, and may further include other modules such as a depth processing unit 1022. Optionally, the image preprocessing unit 1025 may be implemented by an ISP processor, and the depth processing unit 1022, image detection unit 1024, living body detection unit 1021, and identity verification unit 1023 may be implemented by a CPU processor and an NPU processor. The main control module 102 may be implemented by a single application processor chip (Application Processor, AP), by an application processor chip together with a microcontroller chip (Microcontroller Unit, MCU), or by an application processor chip, a microcontroller chip, and an algorithm chip together.
The main control module 102 is used for controlling the imaging module 101 to acquire images under the cat eye, face brushing and palm brushing functions; the imaging module 101 transmits the acquired imaging information and images to the main control module 102 for subsequent processing; the main control module 102 controls the external equipment module 103 and other modules to respond, and meanwhile transmits information and images to an upper computer, a user terminal, a cloud or the outside according to scene requirements. In the main control module 102, the chips are all fixed on the motherboard by soldering, electrically connected with the motherboard, and electrically connected with other modules through the motherboard.
Optionally, in order to enable the user to perform unlocking operations in various manners, such as face brushing unlocking, palm vein unlocking, fingerprint unlocking and password unlocking, the system may respond to various wake-up scenarios, such as the user approaching the door lock, a target object being detected within the working distance range, a person staying in front of the door, the touch panel or the display screen being touched, or a mobile phone remotely waking the lock over a wifi signal, and implement a response in combination with the control and processing of the imaging system.
Optionally, in some embodiments, the system may further include an external device module 103, including a mechanical door lock, a display screen, a touch panel, a speaker, a condenser microphone, and the like, for responding, displaying, and outputting signals and information.
Optionally, in some embodiments, the system may further include a motherboard for carrying all of the above modules.
Optionally, in some embodiments, the system may further include an interface module including all interfaces for communication between the modules. The main control module 102 is connected with the imaging module 101, the external equipment module 103 and other modules through interfaces such as MIPI, IIC, USB, UART serial communication, MIPI-DSI and SPI interfaces. The interface module may also be provided with a DC jack, an RJ45 port, etc. The interface module is fixed on the motherboard by soldering, electrically connected with the motherboard, and electrically connected with other modules through the motherboard.
Optionally, in some embodiments, the system may further include a wireless module, where the wireless module includes WIFI, bluetooth, 4G, 5G, and the like, for network connection and communication.
Optionally, in some embodiments, the system may further include a power supply module for supplying power to all the modules.
Optionally, in some embodiments, the above system may further include a storage module, for example, including DDR, flash memory, for storing various types of data.
Optionally, in some embodiments, the system may further include a security module for monitoring system security.
Alternatively, in some embodiments, the imaging module 101 may be connected to the motherboard as a separate device through a connector while the main control module 102 is directly soldered to the motherboard; the imaging module 101 and the main control module 102 may share one substrate; or the imaging module 101, the main control module 102 and the other modules in the system may all share one substrate in a common-substrate design. Related embodiments are all within the protection scope of the invention.
Fig. 27 is a flowchart of an imaging system integrating a cat eye function provided by an embodiment of the present invention. Referring to figs. 2, 3, 19 and 27, the working procedure of the imaging system integrating the cat eye function and the working schemes of the cat eye function, the face brushing function and the palm brushing function provided by the embodiment of the invention are introduced below.
The working flow of the imaging system integrating the cat eye function is as follows: when one of the above-mentioned various wake-up scenarios occurs, such as when the user approaches the door lock, the door lock is woken up and the system is powered up. Next, optionally, in some scenarios, the imaging system turns on the cat eye function based on the cat eye function turning on instruction sent by the host computer. Or alternatively, in some situations, the imaging system starts the face brushing or palm brushing function based on the face brushing/palm brushing starting function instruction sent by the upper computer.
The working scheme under the cat eye function is as follows: after the imaging system turns on the cat eye function, the main control module 102 optionally controls at least one of the light source module 1010, the receiving camera 1013 and the ambient light sensor 1015 in the imaging module 101 to start working. When the ambient light sensor 1015 senses that the ambient illuminance is lower than the threshold, it transmits a signal to the main control module 102, and the main control module 102 controls the floodlight illumination module 1011 of the light source module 1010 to turn on and project uniform floodlight to the target object; when the ambient light sensor 1015 senses that the ambient illuminance is not lower than the threshold, it transmits a signal to the main control module 102, and the main control module 102 controls the floodlight illumination module 1011 to remain off, so that only the ambient light in the surrounding environment illuminates the target object. The floodlight illumination module 1011 typically employs infrared floodlight, and alternatively may employ multispectral floodlight. After the uniform floodlight projected by the floodlight illumination module 1011 or the ambient light in the surrounding environment reaches the target object, the light is reflected by the target object and focused by the imaging lens 10132 onto the imaging chip 10131 for imaging, and the imaging module 101 captures the current image in real time. If the ambient illuminance is not lower than the threshold, the imaging chip 10131 outputs an original visible light map of the target object under the ambient light; if the ambient illuminance is lower than the threshold, the imaging chip 10131 outputs an original flood infrared map of the target object under flood illumination.
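The illuminance-threshold control described in this flow can be sketched as follows; the function name, return labels and the threshold value are illustrative assumptions, not taken from the embodiment.

```python
def select_illumination(ambient_lux, threshold_lux=50.0):
    """Decide the flood illumination state and the expected image type
    from the ambient light sensor reading, per the cat eye flow above.

    Returns (flood_on, image_type)."""
    if ambient_lux < threshold_lux:
        # Illuminance below threshold: the main control module turns the
        # infrared floodlight on; the chip outputs an original flood infrared map.
        return True, "flood_infrared"
    # Illuminance at or above threshold: the floodlight stays off and only
    # ambient light reaches the target; an original visible light map is output.
    return False, "visible"
```

With these assumed labels, a dark scene such as `select_illumination(10.0)` selects flood infrared imaging, while a well-lit one selects visible-light imaging.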
Optionally, if the flood illumination module is multispectral flood light, the imaging chip 10131 outputs an original visible light map and/or an original flood infrared map of the target object under flood illumination. Wherein the image can be a still image, i.e. a single frame picture, or a video image. The imaging chip 10131 then transmits the imaging information and the image to the main control module 102 for subsequent processing.
Next, optionally, the collected image is subjected to denoising, color restoration and other processing by the image preprocessing unit 1025 of the main control module 102. Next, optionally, the main control module 102 controls the external device module 103 and the like to respond: external equipment such as a display screen, a touch panel, a loudspeaker and a condenser microphone is connected through interfaces and controlled to respond, for example the display screen is controlled to display pictures and the loudspeaker is controlled to play sounds; meanwhile, optionally, information and images are transmitted to an upper computer, a user terminal, a cloud or the outside according to scene requirements. Finally, functions such as real-time picture monitoring, real-time video intercom, remote picture monitoring and video review are realized.
Optionally, when the system is in the cat eye function, if the instruction of opening the face or palm brushing function sent by the upper computer is received in some scenes, the system can switch to the face or palm brushing function.
The working scheme under the function of brushing the face or brushing the palm is as follows: after the imaging system starts the face brushing or palm brushing function, the main control module 102 optionally controls at least one of the light source module 1010 and the receiving camera 1013 in the imaging module 101 to start working. The main control module 102 controls the floodlighting module 1011 of the light source module 1010 in the imaging module 101 to be turned on and to project uniform floodlight to the target face or palm. The flood lighting module 1011 typically employs infrared flood light, alternatively, may also employ multispectral flood light. Alternatively, if the light source module 1010 further includes a structured light projection module 1012, the main control module 102 controls the structured light projection module 1012 and the floodlight module 1011 to be turned on at intervals and project a light beam toward the target face or palm. After being reflected by a target face or palm, the light emitted by the floodlight illumination module 1011 is focused on the imaging chip 10131 through the imaging lens 10132 to be imaged, and the imaging chip 10131 outputs an original floodlight infrared image with face or palm information. Optionally, if the floodlighting module 1011 is multispectral floodlight, the imaging chip 10131 outputs an original floodlight infrared map and/or an original visible light map with face or palm information. Optionally, if the light source module 1010 further includes a structured light projection module 1012, the light emitted by the structured light projection module 1012 is reflected by the target face or palm and focused by the imaging lens 10132 onto the imaging chip 10131 for imaging, and the imaging chip 10131 outputs an original speckle infrared image with face or palm information. 
The imaging chip 10131 then transmits the imaging information and the image to the main control module 102 for subsequent processing.
Next, optionally, the original floodlight infrared image is subjected to denoising, correction, color restoration and other processing by the image preprocessing unit 1025 in the main control module 102 to obtain a target floodlight infrared image. Optionally, if the floodlight illumination module 1011 is multispectral floodlight, the original floodlight infrared image and/or the original visible light image is subjected to denoising, correction, color restoration and other processing by the image preprocessing unit 1025 to obtain the target floodlight infrared image and/or the target color image. Meanwhile, optionally, if the light source module further includes a structured light projection module, the original speckle infrared image is subjected to denoising, correction and other processing by the image preprocessing unit 1025 to obtain the target speckle infrared image. Further, optionally, under the face brushing function only, the depth processing unit 1022 in the main control module 102 obtains the target depth map from the target speckle infrared map according to the triangulation principle, using a reference map captured at a known distance and stored in the system.
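The triangulation step can be illustrated with the standard structured-light disparity relation between the speckle map and a reference map captured at a known distance; the focal length, baseline, pixel pitch and reference distance below are illustrative values, and the embodiment does not specify the exact computation.

```python
def speckle_depth(disparity_px, z_ref_mm=400.0, focal_mm=2.0,
                  baseline_mm=30.0, pixel_pitch_mm=0.003):
    """Depth from the shift of a speckle relative to its position in a
    reference map captured at known distance z_ref, using the common
    triangulation relation 1/Z = 1/Z_ref + d*p / (f*b).

    Positive disparity corresponds to an object nearer than the
    reference plane; d is in pixels, all lengths in millimeters."""
    fb = focal_mm * baseline_mm
    return fb * z_ref_mm / (fb + disparity_px * pixel_pitch_mm * z_ref_mm)
```

Zero disparity recovers the reference distance exactly; with these illustrative parameters, a 50-pixel shift halves the distance to 200 mm.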
Next, optionally, the obtained original image or target image is output to an image detection unit 1024, and compared with a face or palm detection model stored in the system, to detect whether there is a face or palm in the image. Here the face detection model and the palm detection model are typically two different models. If a face or palm is detected, optionally, image processing such as clipping is performed on the obtained original image or target image.
Next, optionally, the obtained original image or target image (typically a target floodlight infrared image) is output to the living body detection unit 1021 for living body detection and compared with the face or palm living body detection model stored in the system. If the image is judged to be a fake, the image is re-acquired for the next round of living body detection, or, after the detection time is exceeded, the system stops capturing images and waits for the next imaging instruction; if a living body is judged, the next step of identity verification is performed. Optionally, if the floodlight illumination module 1011 is multispectral floodlight, the obtained target color map may also be output to the living body detection unit 1021 for living body detection, so that the number and variety of samples for living body detection are increased and its accuracy is improved. Optionally, if the light source module 1010 further includes a structured light projection module 1012, the obtained target speckle infrared map and/or target depth map may be output to the living body detection unit 1021 for living body detection, further increasing the number and variety of samples and improving accuracy.
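The re-acquire-and-retry logic with a detection timeout can be sketched as follows; `capture_fn` and `liveness_fn` stand in for the imaging module and the stored living body detection model, and are assumptions for illustration.

```python
import time

def run_liveness(capture_fn, liveness_fn, timeout_s=10.0):
    """Repeat capture and living body detection until a live subject is
    found or the detection window expires, per the flow above."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        image = capture_fn()
        if liveness_fn(image):
            return True   # judged a living body: proceed to identity verification
    return False          # window exceeded: stop capture, await next imaging instruction
```

A caller would pass the camera read and model inference as the two callables; the deadline check implements "after the detection time is exceeded, the system stops capturing images".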
Next, optionally, the main control module 102 outputs the obtained original image or target image (typically the face target floodlight infrared image or the palm target floodlight infrared image) to the identity verification unit 1023 for identity verification. Face or palm feature information is extracted from the image, wherein the palm feature information includes palm print and palm vein information; the feature information is compared and matched with a standard authentication model of the face or palm stored in the system. If the matching fails, an authentication failure is prompted and the picture is re-acquired for the next round of detection; after the detection time is exceeded, the system stops acquiring pictures and waits for the next imaging instruction. If the matching succeeds, the authentication succeeds (the image used for authentication need not be identical to that used for living body detection). Optionally, if the floodlight illumination module is multispectral floodlight, the identity verification unit 1023 may further use the obtained face target color map or palm target color map in the above identity verification process, so as to increase the number and variety of samples for identity verification and improve its accuracy. After the authentication succeeds, optionally, the main control module 102 controls the external equipment module 103 to respond, for example the mechanical door lock is connected through an interface and its motor is driven to rotate, thereby realizing unlocking. Meanwhile, optionally, information and images are transmitted to an upper computer, a user terminal, a cloud or the outside according to scene requirements.
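The compare-and-match step can be sketched as a similarity test between an extracted feature vector and the enrolled template; cosine similarity and the 0.8 threshold are illustrative choices, not specified by the embodiment.

```python
import numpy as np

def verify_identity(feature_vec, enrolled_vec, threshold=0.8):
    """Match extracted face/palm features against a stored standard
    authentication template; returns (matched, score)."""
    a = np.asarray(feature_vec, dtype=float)
    b = np.asarray(enrolled_vec, dtype=float)
    # Cosine similarity of the two feature vectors, in [-1, 1].
    score = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return score >= threshold, score
```

In practice the feature extractor and template format come from the recognition model; only the thresholded comparison is sketched here.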
Optionally, when the system is in the face brushing or palm brushing function, if the instruction of opening the cat eye function sent by the upper computer is received in some scenes, the system can be switched to the cat eye function.
Next, the present invention provides the following various software processing schemes for the above-described imaging system, workflow and scheme integrating cat eye functions.
First, optionally, a master control module in the imaging system may employ a framing and/or framing processing scheme.
Fig. 28 and fig. 29 are schematic diagrams of the operation of two imaging systems integrating a cat eye function according to embodiments of the present invention. To explain the specific processing manner of framing and frame changing, the image frame rate of the camera is assumed to be 30 fps, that is, 30 frames of images are output per second, and in these examples the first image acquisition period and the second image acquisition period each include 6 frames of images. Referring to fig. 28, the main control module 102 is configured to execute a plurality of first image acquisition periods T1, and in any one of the first image acquisition periods T1, execute the cat eye imaging mode and the biometric imaging mode alternately according to a first preset time ratio.
Further, referring to fig. 29, the main control module 102 is further configured to execute a plurality of second image acquisition periods T2 after identifying the target object in the structured light image or the floodlight image obtained in the biometric imaging mode, and in any one of the second image acquisition periods T2, execute the cat eye imaging mode and the biometric imaging mode alternately according to a second preset time ratio; the duration of the biometric imaging mode in the second image acquisition period T2 is longer than the duration of the biometric imaging mode in the first image acquisition period T1 in the second preset time proportion.
Still further, the main control module 102 is further configured to determine that the preset wake-up signal is acquired before executing any one of the first image acquisition periods.
The following describes a specific operation of the cat eye function integrated imaging system according to the embodiment of the present invention with reference to fig. 28 and 29.
First, as shown in fig. 28, the main control module 102 alternately executes the cat eye imaging mode and the biometric imaging mode according to a first preset time ratio, that is, the imaging system alternately realizes the cat eye function and the face or palm brushing function through the framing mode. As shown in fig. 28 and 29, the main control module 102 is converted from executing the first image acquisition period to executing the second image acquisition period, which essentially means that the imaging system alternately realizes the cat eye function and the face or palm brushing function through a frame-changing mode.
The framing and frame changing processing scheme of the main control module in the imaging system is as follows. After the main control module 102 collects imaging images from the imaging module 101, the multi-frame images are divided: part of the image output frames work in the cat eye imaging mode, and part of the image output frames work in the biometric imaging mode. Optionally, the number of image output frames working in the biometric imaging mode can be raised or lowered by the main control module 102 according to scene requirements, thereby realizing framing and frame changing processing. Specifically, as shown in fig. 28, within 1 s the main control module 102 controls the receiving camera 1013 so that in each period 5 frames operate in the cat eye imaging mode and 1 frame operates in the biometric imaging mode, the time ratio of the biometric imaging mode in the first image acquisition period being 1/6; the image output frames are thus separated for different uses. Of the 30 frames of images, the 25 frames working in the cat eye imaging mode are taken as a first group of frames; the 5 frames alternately output in the biometric imaging mode may be called a second group of frames; the first and second groups of frames achieve framing. Next, optionally, the receiving camera 1013 may be controlled again by the main control module 102 according to scene requirements to redistribute the 30 frames of output images, that is, the frames operating in the biometric imaging mode are subjected to frame changing processing. As shown in fig. 29, within 1 s the receiving camera 1013 outputs, in each period, 4 frames operating in the cat eye imaging mode and 2 frames operating in the biometric imaging mode, the time ratio of the biometric imaging mode in the second image acquisition period being 1/3.
In the 30 frames of images, 20 frames of images working in the cat eye imaging mode are taken as a third group of frames; images operating in the biometric imaging mode for a total of 10 frames, which may be referred to as a fourth set of frames; and changing the first group of frames into a third group of frames and simultaneously changing the second group of frames into a fourth group of frames by adjusting the frame dividing quantity, so as to realize frame changing.
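The 5+1 and 4+2 splits above can be reproduced with a small schedule generator; the mode labels and function name are illustrative.

```python
def one_second_schedule(fps=30, period_len=6, biometric_per_period=1):
    """Build the per-frame mode sequence for one second: each acquisition
    period outputs (period_len - biometric_per_period) cat eye frames
    followed by the interleaved biometric frames."""
    period = (["cat_eye"] * (period_len - biometric_per_period)
              + ["biometric"] * biometric_per_period)
    return period * (fps // period_len)

t1 = one_second_schedule(biometric_per_period=1)  # first image acquisition period, ratio 1/6
t2 = one_second_schedule(biometric_per_period=2)  # after frame changing, ratio 1/3
```

`t1` contains 25 cat eye frames and 5 biometric frames, matching the first and second groups of frames; `t2` contains 20 and 10, matching the third and fourth groups after frame changing.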
Based on the control of framing and frame changing, different frames can be operated in different modes, and different subsequent processing can be carried out on different frame images, and the working flow is as follows:
When one of the above-mentioned wake-up scenarios occurs, for example when the user approaches the door lock, the door lock is woken up and the system is powered up. Next, optionally, in some scenarios, based on the instruction sent by the upper computer, the main control module 102 controls the light source module in the imaging module 101 and/or the receiving camera 1013 to start working. At this time, the main control module 102 starts the framing processing of the first group of frames and the second group of frames, where the first group of frames operates in the cat eye imaging mode and the second group of frames operates in the biometric imaging mode. The system can keep the images collected by the imaging module in both the first and second groups of frames always available for cat eye display. Meanwhile, if the current frame belongs to the first group and a face or palm brushing instruction sent by the upper computer is received, face or palm brushing is started at the second group of frames immediately following the current first group; if the current frame belongs to the second group and such an instruction is received, face or palm brushing is started at the current second group of frames or at the second group of frames in the next image acquisition period. Therefore, the system can brush the face or palm while maintaining the cat eye function. Optionally, according to scene requirements, the main control module can adjust the frame allocation and switch to the frame changing processing of the third group of frames and the fourth group of frames, wherein the third group of frames works in the cat eye imaging mode and the fourth group of frames works in the biometric imaging mode.
Optionally, the main control module may further control the second group of frames and the fourth group of frames to be directly output to the subsequent processing module after image preprocessing or without image preprocessing. Thus, in the imaging system, the cat eye, face and/or palm brushing functions are realized by the same receiving camera 1013 of the imaging module 101, and the system can also brush the face or palm while maintaining the cat eye functions.
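The per-frame routing described above — every frame kept for cat eye display, biometric-group frames additionally fed to recognition once a face or palm brushing instruction arrives — can be sketched as follows; the sink names are illustrative.

```python
def dispatch_frame(frame_mode, brushing_requested):
    """Route one output frame: all frames feed the cat eye display; a
    biometric-mode frame also feeds the recognition pipeline when a
    face or palm brushing instruction has been received."""
    sinks = ["cat_eye_display"]
    if frame_mode == "biometric" and brushing_requested:
        sinks.append("biometric_pipeline")
    return sinks
```

This is why the cat eye picture stays continuous while brushing runs: the biometric frames are forked, not diverted.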
Compared with existing integrated cat eye imaging system schemes, the advantages of this scheme are as follows. First, through framing processing, the system can perform face brushing or palm brushing while maintaining the cat eye function. Second, through framing processing, the image output by the system is more stable, without phenomena such as flickering, stuttering, frame disorder, or some frames changing from color to black and white. Third, through framing and/or frame changing processing, different image processing can be applied to different frames and part of the image processing flow can be simplified, further improving the response speed of the cat eye, face brushing and palm brushing functions. Fourth, through frame changing processing according to scene requirements, the number of frames devoted to the face brushing or palm brushing function, and hence the recognition speed, can be increased, further improving the response speed of those functions. Fifth, the imaging system in this scheme can choose not to adopt an optical filter switched by a physical structure, so that switching between the cat eye function and the face brushing or palm brushing function is faster, improving the response speed of these functions. Sixth, by choosing not to adopt an optical filter switched by a physical structure, phenomena such as frame disorder and some frames changing from color to black and white caused by slow or stuck filter switching are further avoided, the system picture is smooth without stuttering, and the user does not perceive the switching of functions.
Next, optionally, the imaging system may be modified for image detection, in-vivo detection, and authentication procedures of the master control module.
Alternatively, multi-modal living body detection based on multispectral images may be employed. Specifically, on the basis of the living body detection flow, the original visible light image collected under the cat eye function is further used, after image preprocessing, for living body detection under the face/palm brushing function. Taking the above framing and frame changing scheme as an example, referring to fig. 27, when the second group of frames is acquired, the original visible light map collected under the cat eye function in the first group of frames is further subjected to image preprocessing, compared with the living body detection model stored in the system, and a living body detection result is output. The advantages of this scheme are as follows. First, since the FOV under the cat eye function is large, using the original visible light map collected under the cat eye function can further increase the number and variety of samples for living body detection and improve its accuracy. Second, performing living body detection on the preprocessed original visible light image can resist attacks with infrared images, while performing living body detection on the preprocessed original floodlight image can resist attacks with color images; using both simultaneously further improves the accuracy of living body detection. This embodiment does not limit whether the imaging system uses ambient light or multispectral floodlight under the cat eye function, or whether the imaging system uses the framing and frame changing scheme.
Alternatively, multi-modal identity verification based on multispectral images may also be employed. Specifically, the original visible light map collected under the cat eye function may, after image preprocessing, be used for identity verification under the face/palm brushing function, or may supplement the existing identity verification flow. Taking the above framing and frame changing scheme as an example, referring to fig. 27, when the second group of frames is acquired, the original visible light image collected under the cat eye function in the first group of frames is preprocessed, face or palm feature information is extracted from it, and the feature information is compared and matched with the standard authentication model of the face or palm stored in the system, outputting an identity verification result. The number and variety of samples for identity verification are thereby further increased, improving its accuracy. This embodiment likewise does not limit whether the imaging system uses ambient light or multispectral floodlight under the cat eye function, or whether the imaging system uses the framing and frame changing scheme.
Alternatively, the same image detection model can be used in image detection, without distinguishing a face detection model from a palm detection model. In general, in image detection the main control module obtains an image and outputs it to the image detection unit 1024, which compares the image first with the face detection model and then with the palm detection model, or first with the palm detection model and then with the face detection model, to obtain a detection result. With the same detection model, a single comparison between the image and the model can detect whether a face or palm is present, which saves memory and further improves the response speed of the face brushing and palm brushing functions.
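The single-model detection can be sketched as one inference call whose label distinguishes face from palm; the `model` callable and the 0.5 confidence cutoff are illustrative assumptions.

```python
def detect_target(image, model, min_confidence=0.5):
    """One comparison with a unified detection model returns the target
    class directly, instead of running a face model and then a palm
    model in sequence.  model(image) -> (label, confidence)."""
    label, confidence = model(image)
    if label in ("face", "palm") and confidence >= min_confidence:
        return label
    return None
```

The returned label then selects the face brushing or palm brushing imaging mode without a second inference pass.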
In summary, the imaging system integrating the cat eye function provided by the embodiment of the invention has one or two of a face brushing function and a palm brushing function while integrating the cat eye function. In the imaging system, based on structural member design of the imaging module, an image acquisition module, a floodlight image acquisition module and a cat eye module of the imaging module are combined into a unit, and cat eye, face brushing and/or palm brushing functions are integrated in the same camera.
Meanwhile, in the imaging system, the structural light projection module and the floodlight illumination module of the imaging module can be two independent units or can be combined into one unit, so that the functions of the imaging system can be enriched, the system can integrate the cat eye function, and meanwhile, the imaging system also has one or two of the face brushing function and the palm brushing function, optical elements can be reduced, the material cost and the assembly difficulty are reduced, and the small-size integration is possible.
In addition, the imaging system of the invention can optionally take the floodlight image, the environment image, the structured light image and the depth image obtained according to the structured light image calculation as detection samples for living body detection, wherein the floodlight image, the environment image and the structured light image can comprise at least one infrared band image, and the floodlight image and the environment image can also comprise at least one visible band image, thereby realizing the multispectral image samples for living body detection, effectively increasing the types and the quantity of the samples for living body detection, and further improving the accuracy of living body detection. Similarly, in the identity verification process, the floodlight image and the environment image are used as samples, so that the multispectral image samples of the identity verification can be realized, the types and the number of the samples of the identity verification are effectively increased, and the accuracy of the identity verification is improved.
Moreover, the imaging system can optionally identify the object in the image by using the same image detection model, automatically execute the corresponding palm brushing function or face brushing function according to the identified palm or face, and realize the operation of automatically identifying the verification object before the identity verification, so that the system is more intelligent and the use experience of the user can be better improved. Meanwhile, two different types of authenticators of palms or faces are identified by adopting one image detection model, so that the authenticators can be directly determined to carry out corresponding authentication operation, the palms are not required to be identified and the faces are not required to be identified in sequence, the authentication step is simplified, the user authentication process is quickened, and the use experience of a user is improved.
Finally, the imaging system of the invention can optionally handle frame selection and frame switching in the main control module, so that switching between the cat eye function and the face brushing or palm brushing function is efficient and responsive: the system runs smoothly without stalling, and the user does not perceive the switch between functions. Meanwhile, this scheme does not need to run face or palm identification on every output cat eye frame, which saves computing resources.
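A minimal sketch of such frame routing, assuming a callback-style camera pipeline (the class and mode names are invented for illustration): cat eye frames go straight to the display, and recognition runs only while the biometric mode is active.

```python
# Hypothetical sketch of main-control frame routing. Cat eye frames are
# forwarded to the display without recognition; biometric frames go to
# the recognizer. Switching is a state change only, so it takes effect
# on the very next frame with no pipeline rebuild.

class FrameRouter:
    def __init__(self, display, recognizer):
        self.display = display          # callable: show a frame
        self.recognizer = recognizer    # callable: run biometric recognition
        self.mode = "cat_eye"

    def switch(self, mode):
        self.mode = mode

    def on_frame(self, frame):
        if self.mode == "cat_eye":
            return self.display(frame)   # no per-frame face/palm detection
        return self.recognizer(frame)
```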
Note that the above is only a preferred embodiment of the present invention and the technical principles applied. Those skilled in the art will understand that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, combinations, and substitutions can be made without departing from the scope of the invention. Therefore, while the invention has been described in detail through the above embodiments, it is not limited to them and may encompass many other equivalent embodiments without departing from its concept, the scope of which is defined by the appended claims.

Claims (14)

1. An imaging system integrating a cat eye function, characterized by comprising an imaging module and a main control module, wherein the imaging module comprises a light source module and a receiving camera;
the light source module is used for at least projecting floodlight;
The receiving camera is used for collecting and imaging the reflected light from the target object after the floodlight is projected onto the target object, or the reflected light from the target object under ambient light;
the main control module is respectively and electrically connected with the light source module and the receiving camera;
The imaging system has a cat eye imaging mode and a biometric imaging mode;
in the cat eye imaging mode, the main control module is used for controlling the receiving camera to collect reflected light of a target object under the ambient light to obtain an ambient image, or controlling the light source module to project floodlight and controlling the receiving camera to collect the reflected light of the floodlight projected to the target object to obtain an ambient image;
In the biometric imaging mode, the main control module is used for controlling the light source module to project at least the floodlight and controlling the receiving camera to at least collect reflected light of the floodlight projected to a target object, so as to correspondingly obtain a floodlight image; the environment image and the floodlight image satisfy the following condition: FOV1 ≥ FOV2, wherein FOV1 is the angle of view at which the receiving camera collects the environment image, and FOV2 is the angle of view at which the receiving camera collects the floodlight image;
the main control module is also used for controlling the output of the environment image to be displayed;
the main control module is also used for carrying out biological recognition according to the floodlight image;
The biological recognition imaging mode comprises a face brushing imaging mode and a palm brushing imaging mode; the imaging target of the face brushing imaging mode is a face image, and the imaging target of the palm brushing imaging mode is a palm image;
the main control module further comprises an identity verification unit;
the identity verification unit is used for performing identity verification on at least one of the floodlight image and the environment image by using an identity verification model, and judging that the identity verification is successful when the identity verification model outputs a positive result; wherein the environment image comprises at least one image in the infrared band and/or at least one image in the visible band, and the floodlight image comprises at least one image in the infrared band.
2. The imaging system of claim 1, wherein:
The light source module is also used for projecting structural light;
the receiving camera is also used for collecting and imaging the reflected light after the structured light is projected to the target object;
In the biological recognition imaging mode, the main control module is further used for controlling the light source module to project the structured light and controlling the receiving camera to collect reflected light of the structured light projected to a target object, and correspondingly obtaining a structured light image;
the environment image and the structured light image satisfy the following condition: FOV1 ≥ FOV3, wherein FOV3 is the angle of view at which the receiving camera collects the structured light image;
The main control module is also used for performing biological recognition according to the structured light image and the floodlight image.
3. The imaging system of claim 1, wherein the receiving camera comprises an imaging chip, an imaging lens, and a filter assembly, the imaging lens and the filter assembly being located on a light receiving path of the imaging chip, respectively;
In the cat eye imaging mode, the optical filter component transmits at least visible light wave band light rays or at least infrared wave band light rays;
In the biometric imaging mode, the filter assembly transmits at least infrared band light;
The optical filtering component comprises a first optical filter, a second optical filter and a switching mechanism; the first optical filter transmits infrared band light, and the second optical filter transmits infrared band and visible band light; the main control module is used for controlling the switching mechanism to drive the second optical filter to move to a light receiving path of the imaging chip in the cat eye imaging mode, and controlling the switching mechanism to drive the first optical filter to move to the light receiving path of the imaging chip in the biological identification imaging mode;
Or the optical filter assembly comprises a first optical filter, a second optical filter and a switching mechanism; the first optical filter transmits infrared band light, and the second optical filter transmits visible band light; the main control module is used for controlling the switching mechanism to drive the second optical filter to move to a light receiving path of the imaging chip in the cat eye imaging mode when the environmental light intensity is higher than a preset light intensity threshold value, controlling the switching mechanism to drive the first optical filter to move to the light receiving path of the imaging chip in the cat eye imaging mode when the environmental light intensity is lower than the preset light intensity threshold value, and controlling the switching mechanism to drive the first optical filter to move to the light receiving path of the imaging chip in the biological identification imaging mode;
or the filter assembly comprises a first filter; the main control module is used for controlling the first optical filter to be adjusted to transmit visible light wave band light rays or infrared wave band light rays in the cat eye imaging mode, and controlling the first optical filter to be adjusted to transmit infrared wave band light rays in the biological identification imaging mode;
Or the filter assembly comprises a first filter; the first optical filter transmits infrared band light and visible band light.
4. The imaging system of claim 2, wherein the image output field angle in the palm brushing imaging mode is less than or equal to 155°, and the image output field angle in the face brushing imaging mode is less than or equal to 120°;
the receiving camera includes an imaging lens that satisfies:
when FOV ≤ 155°, the maximum distortion value is ≤ 70%;
when FOV ≤ 120°, the maximum distortion value is ≤ 5%;
the increase of the distortion value with the imaging image height is greater in the range 120° < FOV ≤ 155° than in the range FOV ≤ 120°.
5. The imaging system of claim 2, wherein the receiving camera comprises an imaging lens comprising at least one of a refractive lens, a super surface lens, and a diffractive lens.
6. The imaging system of claim 1, wherein the light source module comprises a flood lighting module; the floodlight module is used for projecting floodlight;
The floodlight module comprises a first light source, wherein the first light source is used for emitting infrared band light rays, or the floodlight module comprises a first light source and a second light source, the first light source is used for emitting infrared band light rays, the second light source is used for emitting at least one visible light band light rays and/or at least one infrared band light rays, and the infrared band light rays emitted by the second light source are different from the infrared band light rays emitted by the first light source.
7. The imaging system of claim 6, wherein the light source module further comprises a structured light projection module for projecting structured light;
The structured light projection module comprises a third light source, wherein the third light source is used for emitting infrared band light rays.
8. The imaging system of claim 7, wherein a dimming element is further disposed in the flood lighting module and the structured light projection module, the dimming element being a collimating and diffracting integrated optical element, or the dimming element being a super-surface lens, or the dimming element being a combination of a collimating mirror and a diffracting optical element;
the dimming element is positioned on the light emitting side of the light source in the floodlight module and the structural light projection module, and is used for respectively modulating the emergent light of the light source in the floodlight module and the structural light projection module so as to correspondingly project floodlight and structural light.
9. The imaging system of claim 1, wherein the imaging module further comprises an ambient light sensor for detecting illuminance of an environment; the main control module is also used for controlling the light source module to project floodlight in the cat eye imaging mode when the illuminance of the environment is lower than a preset threshold value;
and/or the imaging module further comprises a proximity light sensor, wherein the proximity light sensor is used for generating a trigger signal when a target object approaches; the main control module is used for starting the cat eye imaging mode or the biological identification imaging mode according to the trigger signal.
10. The imaging system of claim 1, wherein the main control module comprises a living body detection unit;
the living body detection unit is used for performing living body detection on the floodlight image, or on the floodlight image and the environment image, by using a living body detection model, and judging that the object to be detected is a living body when the living body detection model outputs a positive result; wherein the environment image comprises at least one image in the infrared band and/or at least one image in the visible band, and the floodlight image comprises at least one image in the infrared band.
11. The imaging system of claim 10, wherein the light source module is further configured to project structured light;
the receiving camera is also used for collecting and imaging the reflected light after the structured light is projected to the target object;
In the biological recognition imaging mode, the main control module is further used for controlling the light source module to project the structured light and controlling the receiving camera to collect reflected light of the structured light projected to a target object, and correspondingly obtaining a structured light image;
The main control module further comprises a depth processing unit, wherein the depth processing unit is used for calculating and obtaining a depth map according to the structured light image and a preset reference structured light image;
the living body detection unit is further used for performing living body detection on the structured light image and/or the depth map by using a living body detection model, and judging that the object to be detected is a living body when the living body detection model outputs a positive result; wherein the structured light image comprises an image in the infrared band.
12. The imaging system of claim 1, wherein the main control module further comprises an image detection unit; the image detection unit is used for determining a target object in an image in the biometric imaging mode by using a single image detection model, and determining, according to the target object, whether the biometric imaging mode is the face brushing imaging mode or the palm brushing imaging mode.
13. The imaging system of claim 1, wherein the main control module is configured to perform a plurality of first image acquisition cycles, and the cat eye imaging mode and the biometric imaging mode are alternately performed at a first predetermined time ratio during any one of the first image acquisition cycles.
14. The imaging system of claim 13, wherein the main control module is further configured to perform a plurality of second image acquisition cycles, and the cat eye imaging mode and the biometric imaging mode are alternately performed at a second predetermined time ratio during any one of the second image acquisition cycles; the time occupied by the biometric imaging mode in the second image acquisition cycle under the second predetermined time ratio differs from the time occupied by the biometric imaging mode in the first image acquisition cycle under the first predetermined time ratio.
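As a non-authoritative illustration of the time-ratio alternation in claims 13 and 14 (the slot granularity, ratio values, and function name below are invented for the example), a per-cycle schedule that interleaves the two imaging modes at a fixed ratio could be sketched as:

```python
# Hypothetical sketch: build a per-cycle slot schedule that alternates
# the cat eye and biometric imaging modes at a given time ratio.

def build_cycle_schedule(cycle_slots, biometric_ratio):
    """Return a list of mode names, one per time slot of the cycle.

    `biometric_ratio` is the fraction of the cycle spent in the
    biometric imaging mode; the remainder is spent in cat eye mode.
    Biometric slots are spread evenly so the two modes alternate
    within the cycle rather than clustering at one end.
    """
    biometric_slots = round(cycle_slots * biometric_ratio)
    schedule = ["cat_eye"] * cycle_slots
    if biometric_slots:
        step = cycle_slots / biometric_slots
        for i in range(biometric_slots):
            schedule[int(i * step)] = "biometric"
    return schedule

# Claim 14's second cycle type simply uses a different ratio, e.g. a
# first cycle at 1/5 biometric and a second cycle at 2/5 biometric.
```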
CN202410572068.0A 2024-05-10 2024-05-10 Imaging system integrating cat eye function Active CN118154829B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410572068.0A CN118154829B (en) 2024-05-10 2024-05-10 Imaging system integrating cat eye function


Publications (2)

Publication Number Publication Date
CN118154829A (en) 2024-06-07
CN118154829B (en) 2024-07-19

Family

ID=91295254


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114495335A (en) * 2022-02-21 2022-05-13 湖南智虹视界科技有限公司 3D structured light face recognition door lock integrated with cat eye and recognition method thereof
CN117452747A (en) * 2023-12-21 2024-01-26 深圳市安思疆科技有限公司 3D structured light system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117809342A (en) * 2023-12-30 2024-04-02 深圳阜时科技有限公司 Biological characteristic recognition module, control method thereof and electronic equipment




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant