CN112069864B - 3D vein image determination method, device and system - Google Patents
- Publication number
- CN112069864B (application CN201910503284.9A)
- Authority
- CN
- China
- Prior art keywords
- light intensity
- depth information
- reflected light
- model
- incident light
- Prior art date
- Legal status
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/14—Vascular patterns
Abstract
Embodiments of the invention provide a method, device and system for determining a 3D vein image. The value of a specified parameter is acquired from a vein image of a measured object; because the specified parameter describes the reflected light intensity of a pixel point, the depth information corresponding to that value can be determined from a trained mapping between reflected light intensity and depth information. The depth information and the vein image together form a 3D vein image used to identify the measured object. Because the depth information also serves as a biometric feature for identification, recognition accuracy can be improved.
Description
Technical Field
The invention relates to the field of biological feature acquisition, in particular to a 3D vein image determining method, device and system.
Background
With the development of the information age, the importance of identity authentication has become ever more prominent. Traditional authentication methods struggle to meet modern demands for convenience, anti-counterfeiting and high security, so biometric recognition technologies, represented by fingerprint recognition and face recognition, have been studied in depth and widely deployed.
Biometric identification authenticates a person's identity by combining physiological or behavioral characteristics inherent to the human body with a computer information system; examples include palm print, signature, fingerprint, vein and iris recognition. Vein recognition, being innate, invariant and unique, plays an important role in identity authentication: it is a biometric technology that uses the vein pattern of blood vessels under the epidermis of, for example, the fingers or palm as an individual identifier. It is applied wherever personal identity must be authenticated, such as member-identification kiosks, bank ATMs, access control systems, PC login, safe-deposit management and electronic payment.
Existing vein recognition technology generally uses a monocular camera to capture a two-dimensional near-infrared image of the measured object. The biometric information in such an image is limited, and because the information compared during recognition is limited, recognition accuracy may be low.
Disclosure of Invention
In order to overcome the problems in the related art, the invention provides a 3D vein image determination method, a device and a system.
According to a first aspect of an embodiment of the present invention, there is provided a 3D vein image determination method, the method including:
acquiring the value of a specified parameter from a vein image of a measured object, wherein the specified parameter describes the reflected light intensity of a pixel point;
determining depth information corresponding to the value of the specified parameter based on a trained mapping between reflected light intensity and depth information, wherein the depth information is the distance from a vein to the epidermis;
and using the depth information together with the vein image as a 3D vein image to identify the measured object.
According to a second aspect of embodiments of the present invention, there is provided a 3D vein image determination system, the system comprising: a light source, an image sensor and an information processing module;
the light source irradiates the measured object with near-infrared light waves;
the image sensor acquires a vein image of the measured object by collecting its reflected light intensity information;
the information processing module acquires the value of a specified parameter from the vein image, wherein the specified parameter describes the reflected light intensity of a pixel point; determines depth information corresponding to that value based on a trained mapping between reflected light intensity and depth information, the depth information being the distance from a vein to the epidermis; and uses the depth information together with the vein image as a 3D vein image to identify the measured object.
According to a third aspect of embodiments of the present invention, there is provided a 3D vein image determination apparatus, the apparatus including:
the data acquisition module is used for acquiring the value of a specified parameter from a vein image of a measured object, wherein the specified parameter describes the reflected light intensity of a pixel point;
the depth determination module is used for determining depth information corresponding to the value of the specified parameter based on a trained mapping between reflected light intensity and depth information, wherein the depth information is the distance from a vein to the epidermis;
and the image determination module is used for taking the depth information together with the vein image as a 3D vein image to identify the measured object.
The technical scheme provided by the embodiment of the invention can comprise the following beneficial effects:
According to embodiments of the invention, a specified parameter is acquired from a vein image of the measured object. Because the specified parameter describes the reflected light intensity of a pixel point, depth information corresponding to its value can be determined from a trained mapping between reflected light intensity and depth information, and the depth information and the vein image together serve as a 3D vein image for identifying the measured object. Because the depth information is also used as a biometric feature for identification, recognition accuracy can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1A is a schematic diagram of incident light and reflected light without encountering a vein for a near infrared light wave according to an exemplary embodiment of the invention.
Fig. 1B is a schematic diagram showing a comparison of incident light and reflected light in a case where a near infrared light wave encounters a vein and the distance of the vein to the epidermis is different according to an exemplary embodiment of the present invention.
Fig. 2 is a flowchart illustrating a 3D vein image determination method according to an exemplary embodiment of the present invention.
Fig. 3A and 3B are schematic diagrams showing reflected light intensity as a function of depth information for two different incident light intensities according to an exemplary embodiment of the present invention.
Fig. 4 is a block diagram of a 3D vein image determination system according to an exemplary embodiment of the present invention.
Fig. 5 is a block diagram of another 3D vein image determination system according to an exemplary embodiment of the present invention.
Fig. 6 is a hardware configuration diagram of a computer device in which the 3D vein image determination apparatus of the present invention is located.
Fig. 7 is a block diagram of a 3D vein image determination apparatus according to an exemplary embodiment of the present invention.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the invention. Rather, they are merely examples of apparatus and methods consistent with aspects of the invention as detailed in the accompanying claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms; they are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the invention. The word "if" as used herein may be interpreted as "when", "upon" or "in response to determining", depending on the context.
Biometric recognition is developing rapidly, and vein recognition has broad application prospects: its features are difficult to copy, it is less susceptible to damage than fingerprints, and it changes less than behavioral biometrics such as signature and voiceprint. When infrared light with a wavelength of 700-1100 nm irradiates the epidermis of the measured object, hemoglobin in the vein vessels under the epidermis absorbs part of the infrared light, so a vein image can be drawn from the reflected light. Near-infrared light is electromagnetic radiation between the visible (Vis) and mid-infrared (MIR) regions, defined by the American Society for Testing and Materials (ASTM) as the region of 780-2526 nm. In clinical medicine the greatest advantage of the near-infrared spectrum is its good tissue permeability, allowing nondestructive, non-invasive analysis in vitro or in vivo, for example measuring the oxygen-carrying capacity of hemoglobin, pH, glucose or urea in whole blood or serum. In the related art, a monocular camera is generally used for two-dimensional near-infrared imaging of the measured object; the biometric information contained in a two-dimensional vein image is limited, so the information compared during vein recognition is limited and recognition accuracy may be low.
Certain substances in veins absorb near-infrared light, while other tissues do not absorb it fully, so some light is reflected and transmitted. Fig. 1A is a schematic diagram of incident and reflected light when the near-infrared wave does not encounter a vein, according to an exemplary embodiment of the invention: for example, irradiating human skin with near-infrared light at 100% incident intensity may return 80% of the light after it passes through human tissue. The applicant found that the vein-to-epidermis distance may differ between measured objects; when objects are irradiated with near-infrared waves of the same intensity, the reflected light intensity differs with that distance, and the vein-to-epidermis distance is positively correlated with the reflected light intensity. Fig. 1B is a schematic comparison of incident and reflected light when the near-infrared wave encounters veins at different distances from the epidermis, according to an exemplary embodiment of the invention. For example, in Fig. 1B(1) the vein-to-epidermis distance may be p millimeters, such as 5 mm, and in Fig. 1B(2) it may be q millimeters, such as 1 mm. The applicant found that if p is greater than q, the reflected light intensity for a vein p mm below the skin is greater than that for a vein q mm below the skin, i.e., m% > n%.
In view of this, embodiments of the invention provide a 3D vein image determination solution. A specified parameter is acquired from a vein image of the measured object; because it describes the reflected light intensity of a pixel point, the corresponding depth information can be determined from a trained mapping between reflected light intensity and depth information, and the depth information and the vein image together serve as a 3D vein image for identifying the measured object. Because the depth information is also used as a biometric feature, recognition accuracy can be improved.
Embodiments of the present invention are illustrated in the following drawings.
As shown in fig. 2, there is a flowchart of a 3D vein image determination method according to an exemplary embodiment of the present invention, the method including:
in step 202, the value of a specified parameter is acquired from a vein image of a measured object, the specified parameter describing the reflected light intensity of a pixel point;
in step 204, depth information corresponding to the value of the specified parameter is determined based on a trained mapping between reflected light intensity and depth information, the depth information being the distance from a vein to the epidermis;
in step 206, the depth information and the vein image are used together as a 3D vein image to identify the measured object.
The 3D vein image determination method provided in this embodiment may be implemented in software, in a combination of software and hardware, or in hardware; the hardware may consist of one physical entity or of two or more physical entities. The method can be applied to any electronic device with processing capability, such as a PC, tablet computer, notebook computer, desktop computer, PDA (Personal Digital Assistant) and the like.
The measured object may be any object having veins, for example a person's palm, back of the hand, fingers or face. The vein image may be acquired by the acquisition module after the measured object is irradiated with near-infrared waves. This embodiment acquires from the vein image the value of a specified parameter describing the reflected light intensity of a pixel point. In one embodiment, the value of the specified parameter may be the pixel value of the pixel point: the reflected light intensity is represented directly by the pixel value without data conversion. In another embodiment, the value of the specified parameter may be the reflected light intensity itself, obtained for example by converting the pixel value.
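As a sketch of the second embodiment above, converting a pixel value into a reflected light intensity might look as follows; the linear sensor response and the 8-bit pixel range are illustrative assumptions, not details given in the patent.

```python
def pixel_to_reflected_intensity(pixel_value, incident_intensity, max_pixel=255):
    """Map an 8-bit pixel value to a reflected light intensity.

    Assumes a linear sensor response (an assumption for illustration);
    a real conversion would use the sensor's calibration curve.
    """
    return incident_intensity * (pixel_value / max_pixel)
```

A real system would replace the linear scaling with the sensor's measured response.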
After the value of the specified parameter is obtained, since the specified parameter describes the reflected light intensity of the pixel point, the corresponding depth information can be determined from the trained mapping between reflected light intensity and depth information. Taking the pixel value as the specified parameter, for example, the depth information can be determined from the trained mapping together with the relationship between pixel value and reflected light intensity.
The mapping between reflected light intensity and depth information can be obtained by training on sample data containing reflected light intensity and depth information.
In one embodiment, the mapping may be obtained by training on reflected light intensity and depth information under a single incident light intensity. For example, multiple sets of corresponding sample data under the same incident intensity can be acquired, and the mapping between reflected light intensity and depth information trained from them. In practical application, a near-infrared wave of that incident intensity irradiates the measured object to obtain a vein image; the vein image is then one obtained under irradiation at that incident intensity.
In this embodiment, each time a 3D vein image is needed, the measured object is irradiated with a near-infrared wave of the incident intensity used when training the mapping; under that incident intensity a two-dimensional vein image is acquired first, and the depth information determination then follows.
In some scenes, different vein images may be obtained by irradiating the measured object with near-infrared waves of different incident intensities. In another embodiment, the mappings may therefore correspond one-to-one with incident intensities: there are at least two incident intensities, each with its own mapping trained on the reflected light intensity and depth information under that incident intensity. After the incident intensity corresponding to a vein image is determined, the mapping for that intensity is selected to determine the depth information corresponding to the value of the specified parameter.
For example, different incident light intensities correspond to different models describing the mapping between reflected light intensity and depth information. Step 204 may use such a model to determine the depth information corresponding to the value of the specified parameter; the model is chosen based on the incident intensity illuminating the measured object. Correspondingly, the method further comprises: determining the corresponding model according to the incident light intensity irradiating the measured object.
In this embodiment, each incident light intensity corresponds to a model, so the depth information of each pixel point can be determined in vein images acquired under different incident intensities.
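The correspondence between incident intensities and trained models could be sketched as a simple lookup; the registry name `MODELS` and the stand-in lambda models below are illustrative assumptions, not part of the patent.

```python
# Hypothetical registry mapping each calibrated incident light intensity
# to its trained depth model (the entries are illustrative stand-ins).
MODELS = {
    100: lambda y: 0.5 * y,   # stand-in trained model for L = 100
    200: lambda y: 0.25 * y,  # stand-in trained model for L = 200
}

def select_model(incident_intensity):
    """Return the depth model trained for the given incident intensity."""
    try:
        return MODELS[incident_intensity]
    except KeyError:
        raise ValueError(
            f"no model trained for incident intensity {incident_intensity}"
        )
```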
For a given incident light intensity, in the corresponding mapping between reflected light intensity and depth information, the reflected light intensity may change too quickly or too slowly with the depth information. Fig. 3A and Fig. 3B are schematic diagrams of reflected light intensity as a function of depth information under two different incident intensities, according to an exemplary embodiment of the invention. The figures merely illustrate the mapping; the abscissa (depth information) and ordinate (reflected light intensity) values need not have practical significance. The incident intensity in Fig. 3A is smaller than that in Fig. 3B. In Fig. 3A, along the curve segment corresponding to region B, the reflected light intensity changes only slightly as the depth information changes. In Fig. 3B, along the curve segment of the first region B the change is large, while along the curve segment of the second region B the change is small. Whether the reflected light intensity changes too quickly or too slowly with the depth information, the relationship there is poorly suited to determining depth from intensity, and the region corresponding to such curve segments may be called an unusable range. For curve segments where depth can be well resolved from the reflected light intensity, the corresponding region may be called a usable range.
Therefore, in the depth-versus-reflected-intensity curve corresponding to each incident light intensity, one or more curve segments can be taken to construct the mapping between reflected light intensity and depth information under that incident intensity. Because reflected light intensities and models correspond one-to-one, in one embodiment each model may correspond to at least one value range of reflected light intensity. The value range of a model can be determined from the slope of the model's function curve: within the range, the slope lies in a preset interval. The preset interval is a preset slope band ensuring that the reflected light intensity changes neither too quickly nor too slowly with the depth information; for example, it may be built around slopes near the tangent of 45 degrees, i.e. around 1.
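Deriving a model's usable value range from the slope of its curve might be sketched as follows; the numeric slope interval and the discrete sampling of the curve are assumptions for illustration.

```python
def usable_intensity_range(model_y, depths, slope_interval=(0.5, 2.0)):
    """Return (y_min, y_max) of reflected intensity over the depth samples
    where the curve's slope dy/dx lies inside the preset interval.

    model_y: reflected intensity as a function of depth (the model curve).
    slope_interval: hypothetical preset bounds keeping the curve neither
    too flat nor too steep (the values are illustrative).
    """
    lo, hi = slope_interval
    kept = []
    for i in range(len(depths) - 1):
        dx = depths[i + 1] - depths[i]
        slope = abs(model_y(depths[i + 1]) - model_y(depths[i])) / dx
        if lo <= slope <= hi:
            # Both endpoints of an in-interval segment are usable.
            kept.append(model_y(depths[i]))
            kept.append(model_y(depths[i + 1]))
    if not kept:
        return None
    return (min(kept), max(kept))
```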
To be able to measure depth information for every pixel, in one embodiment the method further includes: if the reflected light intensity described by the specified parameter is not within the value range of the determined model, adjusting the incident light intensity irradiating the measured object so as to acquire a new specified parameter.
In this embodiment, since each model has a corresponding value range of reflected light intensity, and a pixel whose reflected intensity falls outside that range would otherwise have no determinable depth, the incident light intensity can be adjusted to acquire new specified parameters so that depth information is obtained for every pixel. Each time the measured object is irradiated with a near-infrared wave of some incident intensity and a vein image is obtained, the pixels whose reflected intensity lies within the value range for that incident intensity are identified and their depth information is calculated and recorded. The incident intensity is then adjusted to obtain new specified parameters for the unrecorded pixels, and the depth calculation repeats until the reflected intensity of every pixel falls into some value range and its depth information is obtained, so that all (reflected intensity, depth) coordinate points lie in usable ranges.
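The multi-exposure procedure just described, record in-range pixels, adjust the intensity, repeat, can be sketched as a loop; all callables (`capture`, `model_for`, `range_for`) are hypothetical stand-ins for the hardware and model layers, not APIs from the patent.

```python
def acquire_depth_map(pixels, intensities, capture, model_for, range_for):
    """Iterate over candidate incident intensities until every pixel's
    reflected intensity has fallen into some model's usable range.

    capture(L) -> {pixel: reflected intensity} captured at intensity L;
    model_for(L) -> depth model for intensity L;
    range_for(L) -> (y_min, y_max) usable range for intensity L.
    """
    depth = {}
    for L in intensities:
        y_min, y_max = range_for(L)
        readings = capture(L)
        for p in pixels:
            # Record only still-missing pixels whose reading is usable here.
            if p not in depth and y_min <= readings[p] <= y_max:
                depth[p] = model_for(L)(readings[p])
        if len(depth) == len(pixels):
            break
    return depth
```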
The adjustment of the incident light intensity can be determined from the relationship between the reflected light intensity described by the specified parameter and the maximum or minimum of the value range. For example, if the reflected intensity described by the specified parameter exceeds the maximum of the range, the adjustment amount may be determined from the difference between them and the incident intensity reduced. Conversely, if it is below the minimum of the range, the adjustment amount may be determined from that difference and the incident intensity increased.
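A minimal sketch of this adjustment rule, assuming a step proportional to the overshoot (the `gain` parameter is an illustrative assumption not given in the patent):

```python
def adjust_incident_intensity(current_L, reflected_y, y_min, y_max, gain=1.0):
    """Lower the incident intensity when the reflected intensity overshoots
    the usable range, raise it when it undershoots, with the step
    proportional to the difference (a hypothetical proportional rule)."""
    if reflected_y > y_max:
        return current_L - gain * (reflected_y - y_max)
    if reflected_y < y_min:
        return current_L + gain * (y_min - reflected_y)
    return current_L  # already in range; no adjustment needed
```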
As to how the mapping between reflected light intensity and depth information is determined: in one embodiment the tissue may be abstracted into layers, each layer having an attenuation coefficient. Since the tissue can be treated as a homogeneous whole, the attenuation coefficient is the same for each layer, e.g. α. Accordingly, the mapping between reflected light intensity and depth information can be determined using the following formula:
y = L(1 − α^(2x))
where y represents the reflected light intensity, x the depth information, and L the incident light intensity.
In this embodiment, an initial model may be trained on known depth information and reflected light intensity data (training data) at the same incident intensity to obtain the attenuation coefficient α, and hence the trained model. The attenuation coefficients under different incident intensities may be the same or differ: if they differ, each incident intensity has its own model; if they are the same, the incident intensity can be treated as a variable and one model shared. Determining the mapping directly with the formula improves calculation efficiency.
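Under the model y = L(1 − α^(2x)), fitting the attenuation coefficient α from training pairs at a fixed incident intensity and then inverting for depth might be sketched as follows; the log-domain averaging is one simple estimator chosen for illustration, not necessarily the training method the patent intends.

```python
import math

def fit_alpha(samples, L):
    """Estimate the per-layer attenuation coefficient alpha from
    (depth, reflected_intensity) training pairs collected at a single
    incident intensity L, under the model y = L * (1 - alpha**(2*x)).

    Rearranging gives ln(alpha) = ln(1 - y/L) / (2*x); averaging this
    quantity over the samples is one simple estimator.
    """
    logs = [math.log(1 - y / L) / (2 * x) for x, y in samples if x > 0]
    return math.exp(sum(logs) / len(logs))

def depth_from_intensity(y, L, alpha):
    """Invert y = L * (1 - alpha**(2*x)) to recover the depth x."""
    return math.log(1 - y / L) / (2 * math.log(alpha))
```

On synthetic data generated from the formula, the fit recovers α and the inversion recovers the original depths.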
After the depth information of each pixel point is obtained, it can be combined with any vein image from the processing procedure to obtain a 3D vein image. The 3D vein image can be used to identify the object to be measured and can therefore be applied in various identity authentication scenarios, such as screen unlocking after identity authentication, door lock unlocking, PC login, and payment operations.
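Combining the per-pixel depth with the 2D vein image can be sketched as below; the point-cloud representation and the row-major 2D-list inputs are illustrative assumptions, since the patent does not fix a data layout:

```python
def to_3d_vein_image(vein_image, depth_map):
    """Attach per-pixel depth to a 2D vein image, yielding a 3D
    representation as (row, col, depth, intensity) points.
    Both inputs are assumed equal-shape row-major 2D lists."""
    assert len(vein_image) == len(depth_map)
    points = []
    for r, (img_row, dep_row) in enumerate(zip(vein_image, depth_map)):
        for c, (intensity, depth) in enumerate(zip(img_row, dep_row)):
            points.append((r, c, depth, intensity))
    return points
```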
In the case of multiple light intensities, each incident light intensity has a corresponding value range of reflected light intensity, and the ranges corresponding to different intensities may be discontinuous. For example, if the model corresponding to A has the reflected-light-intensity range [Amin, Amax] and the model corresponding to B has the range [Bmin, Bmax], then [Amin, Amax] and [Bmin, Bmax] may overlap or may be separated by a gap. In one embodiment, the depth information is determined directly using the model corresponding to the incident light intensity applied to the object to be measured. In another embodiment, considering that an error between the actual and expected incident light intensity may cause the actually collected reflected light intensity to deviate from the target reflected light intensity, so that the depth information is mistakenly determined with the model of an adjacent incident light intensity and therefore becomes wrong, the method further includes:
If depth information of different pixel points is determined by using at least two models corresponding to the incident light intensity, and in a value range of reflected light intensity of the models corresponding to the two adjacent incident light intensities, a difference value between a maximum value of one value range and a minimum value of the other value range is within a preset critical value interval range, the depth information determined by the models is converted, so that the converted depth information is used as final depth information of the pixel points, and the conversion process is as follows:
if the difference value between Amin and Bmax is within the preset critical value interval range, the depth information determined by the model corresponding to A is converted by adopting the following formula:
If the difference between Bmin and Cmax is within the preset critical value interval range, the depth information determined by the model corresponding to C is converted by adopting the following formula:
Wherein, for the incident light intensities A > B > C: the value range of the reflected light intensity of the model corresponding to A is [Amin, Amax], that of the model corresponding to B is [Bmin, Bmax], and that of the model corresponding to C is [Cmin, Cmax]; Depth_A (before conversion) represents the depth information determined by the model corresponding to A, and Depth_C (before conversion) represents the depth information determined by the model corresponding to C.
In this embodiment, the depth information determined under the other light intensities is calibrated with the B light intensity as the standard to obtain the converted depth information. The embodiment applies when the difference between the maximum value of one value range and the minimum value of the other is within the preset critical value interval, which can be set to a small interval as needed. By treating the value ranges of reflected light intensity corresponding to adjacent incident light intensities as one continuous range, this embodiment avoids the situation in which the actually collected reflected light intensity deviates from the target reflected light intensity, so that the depth information would be mistakenly determined with the model of an adjacent incident light intensity and become wrong.
The various technical features of the above embodiments may be arbitrarily combined as long as there is no conflict or contradiction between the features, but are not described in detail, and therefore, the arbitrary combination of the various technical features of the above embodiments is also within the scope of the present disclosure.
Corresponding to the foregoing embodiment of the 3D vein image determination method, the present invention further provides a 3D vein image determination system, as shown in fig. 4, which is a structural diagram of a 3D vein image determination system according to an exemplary embodiment of the present invention. The system comprises: a light source 42, an image sensor 44, an information processing module 46;
the light source 42 irradiates a near infrared light wave to the object to be measured;
The image sensor 44 obtains a vein image of the measured object by collecting reflected light intensity information of the measured object;
The information processing module 46 obtains the numerical value of a specified parameter from the vein image of the detected object, wherein the specified parameter is used for describing the reflection light intensity of the pixel point; determining depth information corresponding to the numerical value of the specified parameter based on the mapping relation between the reflected light intensity and the depth information obtained by training, wherein the depth information is distance information from vein to epidermis; and taking the depth information and the vein image as a 3D vein image to identify the object to be tested.
The processing procedure of the information processing module in fig. 4 is the same as the 3D vein image determination procedure in fig. 2 and is not described in detail here. The number of light sources may be one or more.
The light source may comprise an infrared LED or a laser, whose central wavelength may be selected from 700nm to 1100nm, preferably 850nm. The image sensor may be a CMOS or CCD sensor that converts optical signals into electrical signals, with a spectral response consistent with the selected light source wavelength.
In one embodiment, the light source is a tunable light source, and the tunable light source may be connected to the information processing module. Different incident light intensities correspond to different models, and the models are used for describing the mapping relation between the reflected light intensity and the depth information;
The information processing module is further configured to: and if the reflected light intensity described by the specified parameters is not in the range of the reflected light intensity of the determined model, controlling the adjustable light source to adjust the incident light intensity of the measured object irradiated so as to acquire new specified parameters.
In this embodiment, the information processing module actively adjusts the adjustable light source, so that the incident light intensity irradiating the measured object is switched automatically; the process is fully automated and the processing efficiency is improved.
Further, the value range of the reflected light intensity of the model is determined based on the slope of the function curve of the model, and the slope of the function curve of the model is within a preset interval.
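Under the layered-attenuation reading y = L(1 − α)^(2x) (an interpretive assumption), the slope dy/dx = 2·ln(1 − α)·y is proportional to y itself, so a preset slope interval translates directly into a reflected-intensity range. A sketch:

```python
import math

def usable_intensity_range(L, alpha, slope_min, slope_max):
    """Derive the model's usable reflected-intensity range from the
    slope of y = L*(1-alpha)**(2*x). Since |dy/dx| = k*y with
    k = -2*ln(1-alpha), a bound on the slope magnitude becomes a
    bound on y. The formula reading is an assumption."""
    k = -2.0 * math.log(1.0 - alpha)   # positive for 0 < alpha < 1
    y_lo = slope_min / k
    y_hi = slope_max / k
    # Clamp to physically possible reflected intensities [0, L].
    return max(0.0, y_lo), min(L, y_hi)
```

Intensities outside this range lie on the flat (or overly steep) part of the curve, where small intensity errors translate into large depth errors, which motivates the range check.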
In one embodiment, the system further comprises a lens group. The lens group converges the light signal reflected by the measured object onto the image sensor, and its working wavelength is consistent with that of the light source. By way of example, the lens group may be a single camera lens group.
To filter out stray light, in one example a filter, such as a narrow band-pass filter, may also be configured. The optical filter is arranged in front of the adjustable light source and in front of the lens group, so that stray light is filtered out, interference in imaging is avoided, and the imaging quality is improved. The passband center should be consistent with the selected light source wavelength, and the bandwidth can be made as narrow as practical.
Fig. 5 is a block diagram illustrating another 3D vein image determination system according to an exemplary embodiment of the present invention. In this schematic diagram, the system includes at least two narrow band-pass filters, an adjustable light source, a lens group, an image sensor, and an information processing module. The use of the information processing module may be similar to that of fig. 4 and will not be described in detail here. The information processing module can output the 3D vein image to the identity recognition module for identity authentication. Compared with identification based on a 2D vein image, identification based on a 3D vein image uses one more dimension of features and can therefore improve identification accuracy. In some scenarios, the vein recognition result of the 3D vein image can also be combined with a face recognition result to realize applications such as two-factor authentication.
In this system, the adjustable light source may transmit near infrared light waves at a default incident light intensity; the measured object is irradiated and reflects part of the light. After the reflected light converges through the lens group, the image sensor converts the optical signal into an electrical signal. The information processing module may determine whether the electrical signal value of each point (or a designated point) falls in the unavailable or the available range area, and record the information of the pixel points in the available range area. For the pixels in the unavailable range area, the adjustable light source is controlled to reduce or increase the light source intensity so as to acquire a new vein image, until all pixels are in the available range and a 3D vein image can be obtained.
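The closed-loop acquisition described above can be sketched as follows; `capture` and `in_range` are hypothetical stand-ins for the sensor readout and the model's range check:

```python
def acquire_all_pixels(capture, intensities, in_range):
    """Try each candidate incident intensity (default first, then
    adjusted values) until every pixel has a usable reading, keeping
    the first usable reading per pixel. `capture(I)` returns a dict
    pixel -> reflected intensity for incident intensity I."""
    usable = {}
    for intensity in intensities:
        frame = capture(intensity)
        for px, y in frame.items():
            if px not in usable and in_range(intensity, y):
                usable[px] = (intensity, y)   # pixel now in available range
        if len(usable) == len(frame):
            break                             # all pixels usable -> done
    return usable
```

Each recorded (intensity, reflected) pair then selects the model for that intensity when converting to depth.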
Corresponding to the foregoing embodiments of the 3D vein image determination method, the present invention further provides embodiments of a 3D vein image determination apparatus and an electronic device to which the same is applied.
The embodiment of the 3D vein image determination apparatus of the present invention may be applied to a computer device. The apparatus embodiments may be implemented by software, or by hardware or a combination of hardware and software. Taking software implementation as an example, the apparatus in a logical sense is formed by the processor of the computer device where it is located reading the corresponding computer program instructions from nonvolatile memory into memory and running them. In terms of hardware, fig. 6 shows a hardware configuration diagram of a computer device where the 3D vein image determination apparatus of the present invention is located; in addition to the processor 610, the network interface 620, the memory 630, and the nonvolatile memory 640 shown in fig. 6, the computer device where the 3D vein image determination apparatus 631 is located generally includes other hardware according to the actual function of the device, which is not described here again.
As shown in fig. 7, there is a block diagram of a 3D vein image determination apparatus according to an exemplary embodiment of the present invention, the apparatus including:
a data acquisition module 72, configured to acquire a value of a specified parameter from a vein image of a measured object, where the specified parameter is used to describe a reflected light intensity of a pixel point;
the depth determining module 74 is configured to determine depth information corresponding to the numerical value of the specified parameter based on the mapping relationship between the reflected light intensity and the depth information obtained by training, where the depth information is distance information from the vein to the epidermis;
The image determining module 76 is configured to take the depth information and the vein image as a 3D vein image, so as to identify the object to be tested.
In an alternative embodiment, the value of the specified parameter is a pixel value.
In an alternative embodiment, different incident light intensities correspond to different models describing the mapping of reflected light intensity to depth information;
The depth determining module 74 is further configured to determine a corresponding model according to the intensity of the incident light of the object to be measured.
In an alternative embodiment, the range of the reflected light intensity of the model is determined based on the slope of the function curve of the model, and the slope of the function curve of the model is within a preset interval.
In an alternative embodiment, the apparatus further comprises a light intensity adjustment module (not shown in fig. 7) for:
And if the reflected light intensity described by the specified parameters is not in the range of the reflected light intensity of the determined model, adjusting the incident light intensity of the irradiated object to be measured so as to acquire new specified parameters.
In an alternative embodiment, the depth determination module 74 is further configured to:
If depth information of different pixel points is determined by using at least two models corresponding to the incident light intensity, and in a value range of reflected light intensity of the models corresponding to the two adjacent incident light intensities, a difference value between a maximum value of one value range and a minimum value of the other value range is within a preset critical value interval range, the depth information determined by the models is converted, so that the converted depth information is used as final depth information of the pixel points, and the conversion process is as follows:
if the difference value between Amin and Bmax is within the preset critical value interval range, the depth information determined by the model corresponding to A is converted by adopting the following formula:
If the difference between Bmin and Cmax is within the preset critical value interval range, the depth information determined by the model corresponding to C is converted by adopting the following formula:
Wherein, for the incident light intensities A > B > C: the value range of the reflected light intensity of the model corresponding to A is [Amin, Amax], that of the model corresponding to B is [Bmin, Bmax], and that of the model corresponding to C is [Cmin, Cmax]; Depth_A (before conversion) represents the depth information determined by the model corresponding to A, and Depth_C (before conversion) represents the depth information determined by the model corresponding to C.
For the device embodiments, reference is made to the description of the method embodiments for the relevant points, since they essentially correspond to the method embodiments. The apparatus embodiments described above are merely illustrative, wherein the modules illustrated as separate components may or may not be physically separate, and the components shown as modules may or may not be physical, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purposes of the present invention. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
Correspondingly, the embodiment of the invention also provides computer equipment, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes any one of the 3D vein image determining methods when executing the program.
The embodiments of the present invention are described in a progressive manner, and the same and similar parts of the embodiments are all referred to each other, and each embodiment is mainly described in the differences from the other embodiments. In particular, for the apparatus embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments in part.
Correspondingly, the embodiment of the invention also provides a computer storage medium, wherein the storage medium stores program instructions, and the program instructions are used for implementing any one of the 3D vein image determining methods.
Embodiments of the invention may take the form of a computer program product embodied on one or more storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having program code embodied therein. Computer-usable storage media include both permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to: phase change memory (PRAM), static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, may be used to store information that may be accessed by the computing device.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It is to be understood that the invention is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the invention is limited only by the appended claims.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather to enable any modification, equivalent replacement, improvement or the like to be made within the spirit and principles of the invention.
Claims (9)
1. A method of 3D vein image determination, the method comprising:
Acquiring the numerical value of a specified parameter from a vein image of a measured object, wherein the specified parameter is used for describing the reflection light intensity of a pixel point;
Determining a corresponding model according to the incident light intensity of the irradiated object to be measured; different incident light intensities correspond to different models, any model corresponding to the incident light intensity is obtained by training the reflected light intensity and depth information under the incident light intensity, and the model is used for describing the mapping relation between the reflected light intensity and the depth information;
determining depth information corresponding to the numerical value of the specified parameter based on the mapping relation between the reflected light intensity and the depth information obtained by training, wherein the depth information is distance information from vein to epidermis;
and taking the depth information and the vein image as a 3D vein image to identify the object to be tested.
2. The method of claim 1, wherein the value of the specified parameter is a pixel value.
3. The method of claim 1, wherein the range of values of the reflected light intensity of the model is determined based on a slope of a function curve of the model, the slope of the function curve of the model being within a predetermined interval.
4. A method according to claim 3, characterized in that the method further comprises:
And if the reflected light intensity described by the specified parameters is not in the range of the reflected light intensity of the determined model, adjusting the incident light intensity of the irradiated object to be measured so as to acquire new specified parameters.
5. A method according to claim 3, characterized in that the method further comprises:
If depth information of different pixel points is determined by using at least two models corresponding to the incident light intensity, and in a value range of reflected light intensity of the models corresponding to the two adjacent incident light intensities, a difference value between a maximum value of one value range and a minimum value of the other value range is within a preset critical value interval range, the depth information determined by the models is converted, so that the converted depth information is used as final depth information of the pixel points, and the conversion process is as follows:
If the difference between Amin and Bmax is within the preset critical value interval range, the depth information determined by the model corresponding to A is converted by adopting the following formula:
If the difference between Bmin and Cmax is within the preset critical value interval range, the depth information determined by the model corresponding to C is converted by using the following formula:
Wherein, for the incident light intensities A > B > C: the value range of the reflected light intensity of the model corresponding to A is [Amin, Amax], that of the model corresponding to B is [Bmin, Bmax], and that of the model corresponding to C is [Cmin, Cmax]; Depth_A (before conversion) represents the depth information determined by the model corresponding to A, and Depth_C (before conversion) represents the depth information determined by the model corresponding to C.
6. A 3D vein image determination system, the system comprising: a light source, an image sensor and an information processing module;
the light source irradiates near infrared light waves to the measured object;
The image sensor acquires vein images of the detected object by collecting reflected light intensity information of the detected object;
The information processing module acquires the numerical value of a specified parameter from a vein image of a measured object, wherein the specified parameter is used for describing the reflection light intensity of a pixel point; determining a corresponding model according to the incident light intensity of the irradiated object to be measured; different incident light intensities correspond to different models, any model corresponding to the incident light intensity is obtained by training the reflected light intensity and depth information under the incident light intensity, and the model is used for describing the mapping relation between the reflected light intensity and the depth information; determining depth information corresponding to the numerical value of the specified parameter based on the mapping relation between the reflected light intensity and the depth information obtained by training, wherein the depth information is distance information from vein to epidermis; and taking the depth information and the vein image as a 3D vein image to identify the object to be tested.
7. The system of claim 6, wherein the light source is a tunable light source;
The information processing module is further configured to: and if the reflected light intensity described by the specified parameters is not in the range of the reflected light intensity of the determined model, controlling the adjustable light source to adjust the incident light intensity of the measured object irradiated so as to acquire new specified parameters.
8. A 3D vein image determination apparatus, the apparatus comprising:
The data acquisition module is used for acquiring the numerical value of a specified parameter from the vein image of the measured object, wherein the specified parameter is used for describing the reflection light intensity of the pixel point;
The depth determining module is used for determining a corresponding model according to the incident light intensity of the irradiated object to be measured; different incident light intensities correspond to different models, any model corresponding to the incident light intensity is obtained by training the reflected light intensity and depth information under the incident light intensity, and the model is used for describing the mapping relation between the reflected light intensity and the depth information; determining depth information corresponding to the numerical value of the specified parameter based on the mapping relation between the reflected light intensity and the depth information obtained by training, wherein the depth information is distance information from vein to epidermis;
And the image determining module is used for taking the depth information and the vein image as 3D vein images so as to identify the object to be detected.
9. The apparatus of claim 8, wherein the range of values of the reflected light intensity of the model is determined based on a slope of a function curve of the model, the slope of the function curve of the model being within a preset interval;
the device also comprises a light intensity adjusting module for: and if the reflected light intensity described by the specified parameters is not in the range of the reflected light intensity of the determined model, adjusting the incident light intensity of the irradiated object to be measured so as to acquire new specified parameters.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910503284.9A CN112069864B (en) | 2019-06-11 | 2019-06-11 | 3D vein image determination method, device and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112069864A CN112069864A (en) | 2020-12-11 |
CN112069864B true CN112069864B (en) | 2024-09-24 |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107392088A (en) * | 2017-06-01 | 2017-11-24 | 燕南国创科技(北京)有限公司 | Three-dimensional vein identification device and method, switch, mobile terminal |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101198320B1 (en) * | 2010-06-21 | 2012-11-06 | (주)아이아이에스티 | Method and apparatus for converting 2d image into 3d image |
CN109740561A (en) * | 2019-01-11 | 2019-05-10 | 重庆工商大学 | 3D Finger Vein Imaging System Based on Monocular Camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||