Search Results (78)

Search Parameters:
Keywords = finger vein

23 pages, 9520 KiB  
Article
Visual Feature-Guided Diamond Convolutional Network for Finger Vein Recognition
by Qiong Yao, Dan Song, Xiang Xu and Kun Zou
Sensors 2024, 24(18), 6097; https://doi.org/10.3390/s24186097 - 20 Sep 2024
Viewed by 394
Abstract
Finger vein (FV) biometrics have garnered considerable attention due to their inherent non-contact nature and high security, exhibiting tremendous potential in identity authentication and beyond. Nevertheless, challenges pertaining to the scarcity of training data and inconsistent image quality continue to impede the effectiveness of finger vein recognition (FVR) systems. To tackle these challenges, we introduce the visual feature-guided diamond convolutional network (dubbed 'VF-DCN'), a uniquely configured multi-scale and multi-orientation convolutional neural network. VF-DCN showcases three pivotal innovations. Firstly, it tunes the convolutional kernels through multi-scale Log-Gabor filters. Secondly, it implements a distinctive diamond-shaped convolutional kernel architecture inspired by human visual perception: more orientational filters are allocated to the medium scales, which inherently carry richer information, while at the extreme scales the use of orientational filters is minimized to simulate the natural blurring of objects at extreme focal lengths. Thirdly, the network uses a deliberate three-layer configuration and a fully unsupervised training process, prioritizing simplicity and optimal performance. Extensive experiments were conducted on four FV databases: MMCBNU_6000, FV_USM, HKPU, and ZSC_FV. The results show that VF-DCN achieves equal error rates (EERs) of 0.17%, 0.19%, 2.11%, and 0.65%, and accuracy rates (ACCs) of 100%, 99.97%, 98.92%, and 99.36%, respectively. These results indicate that, compared with some existing FVR approaches, the proposed VF-DCN not only achieves notable recognition accuracy but also requires fewer parameters and lower model complexity. Moreover, VF-DCN exhibits superior robustness across diverse FV databases.
(This article belongs to the Section Sensing and Imaging)
Show Figures

Figure 1. Radial filters under different values of σ_r.
Figure 2. Angular filters under different angular scaling factors T.
Figure 3. Bank of Log-Gabor filters. Each row in (c) contains filters computed at the same scale; for each scale, 10 orientations are sampled.
Figure 4. Illustration of the framework of VF-DCN.
Figure 5. Diamond convolutional structure of VF-DCN.
Figure 6. Adaptive orientational filter learning strategy for the convolutional kernels across different scales.
Figure 7. ROI images of the four FV databases; the ROIs in (a,b) are provided by the datasets themselves, while the ROIs in (c,d) are extracted by the 3σ criterion [1].
Figure 8. Trend of EER at varying parameters.
Figure 9. ROC curves of various diamond-shaped convolutional structures on the four finger vein databases.
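As an editor's illustration of the Log-Gabor kernel tuning described in this abstract, the following minimal NumPy sketch builds a multi-scale, multi-orientation Log-Gabor bank with a diamond-shaped allocation (more orientations at the medium scales). The filter parameters and per-scale orientation counts are illustrative assumptions, not the paper's values.

```python
import numpy as np

def log_gabor_bank(size=64, f0=0.25, sigma_r=0.55, sigma_theta=0.4,
                   orients_per_scale=(4, 10, 10, 4)):
    """Frequency-domain 2-D Log-Gabor bank with a 'diamond' allocation:
    more orientations at the medium scales, fewer at the extremes."""
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2] / size
    radius = np.hypot(x, y)
    radius[size // 2, size // 2] = 1.0      # avoid log(0) at the DC term
    theta = np.arctan2(-y, x)
    bank = []
    for s, n_orient in enumerate(orients_per_scale):
        f_s = f0 / (2 ** s)                 # center frequency halves per scale
        radial = np.exp(-np.log(radius / f_s) ** 2 / (2 * np.log(sigma_r) ** 2))
        radial[size // 2, size // 2] = 0.0  # zero DC response
        for o in range(n_orient):
            angle = o * np.pi / n_orient
            # wrap-safe angular distance to the filter orientation
            d = np.arctan2(np.sin(theta - angle), np.cos(theta - angle))
            bank.append(radial * np.exp(-d ** 2 / (2 * sigma_theta ** 2)))
    return np.stack(bank)                   # (4 + 10 + 10 + 4, size, size)

filters = log_gabor_bank()
print(filters.shape)  # (28, 64, 64)
```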
24 pages, 4262 KiB  
Article
DDP-FedFV: A Dual-Decoupling Personalized Federated Learning Framework for Finger Vein Recognition
by Zijie Guo, Jian Guo, Yanan Huang, Yibo Zhang and Hengyi Ren
Sensors 2024, 24(15), 4779; https://doi.org/10.3390/s24154779 - 23 Jul 2024
Viewed by 688
Abstract
Finger vein recognition methods, as emerging biometric technologies, have attracted increasing attention in identity verification due to their high accuracy and live-detection capabilities. However, as privacy protection awareness increases, traditional centralized finger vein recognition algorithms face privacy and security issues. Federated learning, a distributed training method that protects data privacy without sharing data across endpoints, is gradually being promoted and applied, yet its performance is severely limited by heterogeneity among datasets. To address these issues, this paper proposes a dual-decoupling personalized federated learning framework for finger vein recognition (DDP-FedFV) that combines generalization and personalization. In the first stage, DDP-FedFV implements a dual-decoupling mechanism involving model and feature decoupling to optimize feature representations and enhance the generalizability of the global model. In the second stage, it implements a personalized weight aggregation method, federated personalization weight ratio reduction (FedPWRR), to optimize the parameter aggregation process based on data distribution information, thereby enhancing the personalization of the client models. To evaluate the performance of DDP-FedFV, theoretical analyses and experiments were conducted on six public finger vein datasets. The experimental results indicate that the proposed algorithm outperforms centralized training models without increasing communication costs or privacy leakage risks.
(This article belongs to the Special Issue Biometrics Recognition Systems)
Show Figures

Figure 1. Example diagram illustrating the necessity of decoupling. Finger vein images can be divided into background and foreground information, referred to here as domain-specific and domain-invariant information, respectively. The former reflects the personalized information of specific datasets, such as exposure and imaging conditions for different clients; the latter is the core vein texture, which is common across clients and is the key feature for finger vein recognition.
Figure 2. DDP-FedFV framework for finger vein recognition. Training proceeds in two stages. In the first (generalization) stage, the model is decoupled into a feature extractor and a classifier; to further decouple the data into foreground and background information, each client is given two feature extractors, a domain-invariant one and a domain-specific one, which split the local dataset's feature representations accordingly. During this stage, each client's domain-invariant model parameters are uploaded to the server, allowing the training of a model with strong generalizability. In the second (personalization) stage, the server uses the size and distribution-similarity information of each client's dataset to determine the aggregation weight matrix W, customizing the model for each client to enhance personalization.
Figure 3. Boxplot of this experiment.
Figure 4. Boxplot depicting the experimental comparison with the baseline.
Figure 5. Grad-CAM [41] visualization of the regions focused on by the domain-invariant (DI) and domain-specific (DS) models. The first row shows sample images for each client; the second row shows heatmaps obtained with the baseline method; the third and fourth rows show heatmaps of the areas the DI and DS models focus on, respectively.
Figure 6. Boxplot of the ablation experiment.
Figure 7. Boxplot of the experiment.
Figure A1. Visual example of the personalized parameter aggregation process.
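The personalized aggregation stage can be pictured with a short sketch: each client receives its own aggregate of every client's parameters, weighted by dataset size and distribution similarity. This is only a schematic reading of FedPWRR; the actual weighting rule in the paper may differ.

```python
import numpy as np

def personalized_aggregate(client_states, sizes, similarity):
    """For each client k, build a personalized model as a weighted average of
    all clients' parameters. Weights combine dataset size with pairwise
    distribution similarity (an n-by-n matrix), row-normalized to sum to 1.
    client_states: list of dicts mapping parameter name -> np.ndarray."""
    sizes = np.asarray(sizes, dtype=float)
    w = np.asarray(similarity) * sizes[None, :]   # row k: weights client k assigns
    w = w / w.sum(axis=1, keepdims=True)          # normalize each row
    names = client_states[0].keys()
    return [{name: sum(w[k, j] * client_states[j][name]
                       for j in range(len(client_states)))
             for name in names}
            for k in range(len(client_states))]
```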
23 pages, 2355 KiB  
Article
Two-Layered Multi-Factor Authentication Using Decentralized Blockchain in an IoT Environment
by Saeed Bamashmos, Naveen Chilamkurti and Ahmad Salehi Shahraki
Sensors 2024, 24(11), 3575; https://doi.org/10.3390/s24113575 - 1 Jun 2024
Viewed by 832
Abstract
Internet of Things (IoT) technology is evolving alongside smart infrastructure, with IoT devices participating in a wide range of applications. Traditional IoT authentication methods are vulnerable to threats due to wireless data transmission, yet IoT devices are resource- and energy-constrained, so building lightweight security that provides stronger authentication is essential. This paper proposes a novel two-layered multi-factor authentication (2L-MFA) framework using blockchain to enhance the security of IoT devices and users. The first layer authenticates IoT devices, considering secret keys, geographical location, and a physically unclonable function (PUF); proof-of-authentication (PoAh) and elliptic curve Diffie–Hellman are employed for lightweight, low-latency support. The second layer authenticates IoT users and is sub-categorized into four levels, each defined by specific factors such as identity, password, and biometrics: the first level involves a matrix-based password; the second level utilizes the elliptic curve digital signature algorithm (ECDSA); and levels 3 and 4 are secured with iris and finger vein biometrics, providing comprehensive and robust authentication. Fuzzy logic is deployed to validate the authentication and make the system more robust. The 2L-MFA model significantly improves performance, reducing registration, login, and authentication times by up to 25%, 50%, and 25%, respectively, facilitating quicker cloud access post-authentication and enhancing overall efficiency.
(This article belongs to the Section Internet of Things)
Show Figures

Figure 1. Flow chart of the proposed work.
Figure 2. The designed 2L-MFA system.
Figure 3. Matrix-based password panel: (a) intersection point 'I'; (b) intersection point 'o'; (c) intersection point 'T'.
Figure 4. Workflow of the proposed 2L-MFA.
Figure 5. Pipeline structure of the proposed 2L-MFA.
Figure 6. Comparison of registration time, authentication time, and login time.
Figure 7. Comparison of authentication success rate.
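The device-level handshake leans on standard elliptic curve Diffie–Hellman; a minimal sketch with the Python cryptography package is shown below. The PUF, location factor, and PoAh consensus steps are omitted, and the curve choice and key-derivation label are assumptions, not values from the paper.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side generates an ephemeral key pair on a NIST P-256 curve.
device_priv = ec.generate_private_key(ec.SECP256R1())
gateway_priv = ec.generate_private_key(ec.SECP256R1())

# After exchanging public keys, both sides derive the same shared secret.
shared_device = device_priv.exchange(ec.ECDH(), gateway_priv.public_key())
shared_gateway = gateway_priv.exchange(ec.ECDH(), device_priv.public_key())
assert shared_device == shared_gateway

# Derive a symmetric session key from the raw shared secret.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"2l-mfa-session").derive(shared_device)
```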
14 pages, 2920 KiB  
Article
Zero-FVeinNet: Optimizing Finger Vein Recognition with Shallow CNNs and Zero-Shuffle Attention for Low-Computational Devices
by Nghi C. Tran, Bach-Tung Pham, Vivian Ching-Mei Chu, Kuo-Chen Li, Phuong Thi Le, Shih-Lun Chen, Aufaclav Zatu Kusuma Frisky, Yung-Hui Li and Jia-Ching Wang
Electronics 2024, 13(9), 1751; https://doi.org/10.3390/electronics13091751 - 1 May 2024
Viewed by 1106
Abstract
In the context of increasing reliance on mobile devices, robust personal security solutions are critical. This paper presents Zero-FVeinNet, an innovative, lightweight convolutional neural network (CNN) tailored for finger vein recognition on mobile and embedded devices, which are typically resource-constrained. The model integrates cutting-edge features such as Zero-Shuffle Coordinate Attention and a blur pool layer, enhancing architectural efficiency and recognition accuracy under various imaging conditions. A notable reduction in computational demands is achieved through an optimized design involving only 0.3 M parameters, enabling faster processing and reduced energy consumption, which is essential for mobile applications. An empirical evaluation on several leading public finger vein datasets demonstrates that Zero-FVeinNet not only outperforms traditional biometric systems in speed and efficiency but also establishes new standards in biometric identity verification, achieving a Correct Identification Rate (CIR) of 99.9% on the FV-USM dataset and similarly high accuracy on other datasets. These results underscore the potential of Zero-FVeinNet to significantly enhance security features on mobile devices by merging high accuracy with operational efficiency, paving the way for advanced biometric verification technologies.
(This article belongs to the Special Issue Emerging Artificial Intelligence Technologies and Applications)
Show Figures

Figure 1. The finger vein recognition system proposed in this research.
Figure 2. Different levels of features extracted by CNNs for human faces, vehicles, and finger veins.
Figure 3. The shallow CNN network with diverse branch block (DBB).
Figure 4. Detailed structure of the ZeroBlur-DBB and ZeroBlur-ConvBlock: (a) the ZeroBlur-DBB architecture; (b) the ZeroBlur-ConvBlock architecture.
Figure 5. Architecture of different attention mechanisms: (a) coordinate attention (CA) block architecture; (b) ZSCA block architecture.
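The blur pool idea (anti-aliased downsampling in the style of Zhang, 2019) can be sketched in a few lines of PyTorch. The 3x3 binomial kernel and reflect padding are common defaults, assumed here rather than taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurPool2d(nn.Module):
    """Anti-aliased downsampling: low-pass blur with a fixed binomial kernel,
    then stride-2 subsampling, applied per channel (depthwise)."""
    def __init__(self, channels, stride=2):
        super().__init__()
        k = torch.tensor([1., 2., 1.])
        k = torch.outer(k, k)
        k = (k / k.sum())[None, None].repeat(channels, 1, 1, 1)
        self.register_buffer("kernel", k)   # (channels, 1, 3, 3), not trained
        self.stride = stride
        self.channels = channels

    def forward(self, x):
        x = F.pad(x, (1, 1, 1, 1), mode="reflect")
        return F.conv2d(x, self.kernel, stride=self.stride, groups=self.channels)
```

In practice a module like `BlurPool2d(64)` would replace a strided convolution or pooling layer, trading a tiny amount of compute for better shift robustness.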
23 pages, 10179 KiB  
Article
A Degraded Finger Vein Image Recovery and Enhancement Algorithm Based on Atmospheric Scattering Theory
by Dingzhong Feng, Peng Feng, Yongbo Mao, Yang Zhou, Yuqing Zeng and Ye Zhang
Sensors 2024, 24(9), 2684; https://doi.org/10.3390/s24092684 - 24 Apr 2024
Viewed by 1033
Abstract
With the development of biometric identification technology, finger vein identification has received increasingly widespread attention for its security, efficiency, and stability. However, because of the limited performance of current standard finger vein image acquisition devices and the complex internal structure of the finger, acquired images are often heavily degraded and lose their texture characteristics, making the topology of the finger veins inconspicuous or even difficult to distinguish and greatly affecting identification accuracy. This paper therefore proposes a finger vein image recovery and enhancement algorithm based on atmospheric scattering theory. Firstly, to normalize locally over-bright and over-dark regions of finger vein images within a certain threshold, the Gamma transform method is improved to correct and measure the gray values of a given image. Then, the image is reconstructed based on atmospheric scattering theory, and a pixel mutation filter is designed to segment the venous and non-venous contact zones. Finally, the degraded finger vein images are recovered and enhanced by global gray-value normalization. Experiments on the SDUMLA-HMT and ZJ-UVM datasets show that the proposed method effectively recovers and enhances degraded finger vein images, and that it performs well in finger vein recognition using traditional methods, machine learning, and deep learning: the recognition accuracy of the processed images is improved by more than 10% compared to the original images.
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
Show Figures

Figure 1. Finger vein image recognition system: (a) system schematic; (b) finger vein recognition device; (c) finger vein image.
Figure 2. Finger vein scattering model: (a) main internal structures of the finger; (b) schematic of the scattering of near-infrared light inside the finger.
Figure 3. Flowchart of the overall structure of the proposed algorithm.
Figure 4. Mean gray values of different regions of the venous image.
Figure 5. Degraded finger vein images and their gray-value change maps: (a) degraded finger vein image I; (b) gray-value changes in the specified area of image I; (c) the same area after atmospheric scattering model restoration; (d) degraded finger vein image II; (e) gray-value changes in the specified area of image II; (f) the same area after restoration.
Figure 6. Gray values of finger vein images: (a) finger vein image III; (b) processed image III; (c) finger vein image IV; (d) processed image IV.
Figure 7. (a) Raw finger vein image; (b) image after processing in Section 4.2; (c) image after processing in Section 4.3.
Figure 8. Selected finger vein images from the ZJ-UVM dataset.
Figure 9. Results of the ablation experiment: (a) original image; (b) image after the atmospheric-scattering-based enhancement alone; (c) image after homomorphic-filtering-based processing; (d) image after the full method of this paper.
Figure 10. Finger vein images processed by different image enhancement algorithms: (a) contrast-limited adaptive histogram equalization (CLAHE) [16]; (b) fusion of image descriptors and global histogram equalization [19]; (c) 2D Gabor filter [23]; (d) adaptive triple Gaussian filter [24].
Figure 11. Image recognition DET curves: (a) DET curves for the SIFT-FLANN recognition method; (b) DET curves for the LBP-SVM recognition method.
Figure 12. Comparison with other methods: (a) original image; (b) dark-channel prior algorithm [10]; (c) bi-local adaptive contrast enhancement [13]; (d) MSRCP [14]; (e) AWRM [15]; (f) the method of this paper.
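The recovery step follows the atmospheric scattering model I = J·t + A·(1 − t), where I is the observed image, J the scene radiance, t the transmission, and A the airlight. A toy inversion with a gamma pre-correction and global normalization might look as follows; the transmission, gamma, and airlight estimate are all illustrative assumptions, not the paper's estimated quantities.

```python
import numpy as np

def recover_vein_image(img, t=0.7, gamma=0.8):
    """Toy recovery of a degraded vein image via the scattering model."""
    img = img.astype(np.float64) / 255.0
    img = img ** gamma                          # gamma-style correction of bright/dark regions
    k = max(1, img.size // 1000)                # brightest 0.1% of pixels
    A = np.mean(np.sort(img.ravel())[-k:])      # crude airlight estimate
    J = (img - A) / max(t, 0.1) + A             # invert I = J*t + A*(1 - t)
    J = (J - J.min()) / (J.max() - J.min() + 1e-9)  # global gray normalization
    return (J * 255).astype(np.uint8)
```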
16 pages, 2880 KiB  
Article
Customizable Presentation Attack Detection for Improved Resilience of Biometric Applications Using Near-Infrared Skin Detection
by Tobias Scheer, Markus Rohde, Ralph Breithaupt, Norbert Jung and Robert Lange
Sensors 2024, 24(8), 2389; https://doi.org/10.3390/s24082389 - 9 Apr 2024
Viewed by 857
Abstract
Due to their user-friendliness and reliability, biometric systems have taken a central role in everyday digital identity management for all kinds of private, financial and governmental applications with increasing security requirements. A central security aspect of unsupervised biometric authentication systems is the presentation attack detection (PAD) mechanism, which defines the robustness to fake or altered biometric features. Artifacts like photos, artificial fingers, face masks and fake iris contact lenses are a general security threat for all biometric modalities. The Biometric Evaluation Center of the Institute of Safety and Security Research (ISF) at the University of Applied Sciences Bonn-Rhein-Sieg has specialized in the development of a near-infrared (NIR)-based contactless detection technology that can distinguish between human skin and most artifact materials. This technology is highly adaptable and has already been successfully integrated into fingerprint scanners, face recognition devices and hand vein scanners. In this work, we introduce a cutting-edge, miniaturized near-infrared presentation attack detection (NIR-PAD) device. It includes an innovative signal processing chain and an integrated distance measurement feature to boost both reliability and resilience. We detail the device's modular configuration and conceptual decisions, highlighting its suitability as a versatile platform for sensor fusion and seamless integration into future biometric systems. This paper elucidates the technological foundations and conceptual framework of the NIR-PAD reference platform, alongside an exploration of its potential applications and prospective enhancements.
(This article belongs to the Section Optical Sensors)
Show Figures

Figure 1. Remission spectra of human skin and exemplary artifact materials.
Figure 2. Multispectral sensor working principle.
Figure 3. Simplified multispectral sensor implementation schema.
Figure 4. Images of the PCB stack: (a) CAD; (b) photo.
Figure 5. Analog filter chain.
Figure 6. The optical shielding of the proposed sensor.
Figure 7. Sensor images: (a) closed variant, front (CAD); (b) focused variant (photo); (c,d) closed variant, front and back (photo).
Figure 8. LED remission/NIR-protection transmission spectra.
Figure 9. Normalized differences of human skin and exemplary artifact materials: (a) fixed distance at 20 cm; (b) full range from 5 cm to 30 cm.
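At its core, the skin/artifact decision compares remission at a few NIR wavelengths, as Figure 9's normalized differences suggest. A deliberately simplified two-wavelength check is sketched below; the wavelength pair and thresholds are invented for illustration, and the real device fuses more wavelengths plus a distance measurement.

```python
def normalized_difference(r_a, r_b):
    """Normalized remission difference between two NIR wavelengths."""
    return (r_a - r_b) / (r_a + r_b + 1e-9)

def looks_like_skin(r_a, r_b, low=0.1, high=0.5):
    """Toy decision: human skin falls inside a characteristic band of the
    normalized difference; most artifact materials fall outside it."""
    return low < normalized_difference(r_a, r_b) < high

print(looks_like_skin(0.62, 0.40))  # a skin-like remission pair -> True
```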
18 pages, 8506 KiB  
Article
FV-MViT: Mobile Vision Transformer for Finger Vein Recognition
by Xiongjun Li, Jin Feng, Jilin Cai and Guowen Lin
Sensors 2024, 24(4), 1331; https://doi.org/10.3390/s24041331 - 19 Feb 2024
Cited by 3 | Viewed by 1506
Abstract
In addressing challenges related to high parameter counts and limited training samples for finger vein recognition, we present the FV-MViT model, a lightweight deep learning solution emphasizing high accuracy, portable design, and low latency. FV-MViT introduces two key components. The Mul-MV2 Block utilizes a dual-path inverted residual connection structure for multi-scale convolutions, extracting additional local features. The Enhanced MobileViT Block eliminates the large-scale convolution block at the beginning of the original MobileViT Block, converts the Transformer's self-attention into separable self-attention with linear complexity, and optimizes the back end of the original block with depth-wise separable convolutions, aiming to extract global features while effectively reducing parameter counts and feature extraction times. Additionally, we introduce a soft target center cross-entropy loss function to enhance generalization and increase accuracy. Experimental results indicate that FV-MViT achieves recognition accuracies of 99.53% and 100.00% on the Shandong University (SDU) and Universiti Teknologi Malaysia (USM) datasets, with equal error rates of 0.47% and 0.02%, respectively. The model has 5.26 million parameters and exhibits a latency of 10.00 milliseconds from sample input to recognition output. Comparison with state-of-the-art (SOTA) methods reveals competitive performance for FV-MViT.
(This article belongs to the Special Issue Biometrics Recognition Systems)
Show Figures

Figure 1. Feature extraction of Mini-ROI in the USM dataset: (a) original image; (b) image processed with Mini-ROI feature extraction.
Figure 2. Sampling and preprocessing results: (a) original sampled image; (b) image processed with CLAHE.
Figure 3. Cropping results: (a) equalized image; (b) first cropping; (c) second cropping. The red lines mark the upper and lower boundaries, and the yellow line marks the Mini-ROI.
Figure 4. Pixel accumulation images: (a) row direction; (b) column direction.
Figure 5. Examples of extracted Mini-ROIs; the upper row shows the original finger vein images and the lower row the extracted Mini-ROIs for (a) SDU and (b) USM.
Figure 6. Finger vein feature extraction network.
Figure 7. (a) MV2 Block; (b) Mul-MV2 Block. Similar structures share the same color.
Figure 8. (a) Original MobileViT Block structure; (b) Transformer-as-Convolution Block structure; (c) Enhanced MobileViT Block structure. The first dimension H represents height, the second dimension W represents width, and the third dimension d/C represents depth; similar structures share the same color.
Figure 9. (a) SDU FAR/FRR curve; (b) USM FAR/FRR curve.
Figure 10. (a) SDU DET curve; (b) USM DET curve.
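The separable self-attention mentioned in the abstract replaces quadratic token-to-token attention with a linear-cost context vector. A generic MobileViTv2-style module, not the paper's exact code, can be sketched in PyTorch as follows.

```python
import torch
import torch.nn as nn

class SeparableSelfAttention(nn.Module):
    """Linear-complexity separable self-attention: a softmax over per-token
    importance scores builds one global context vector, which then gates the
    value projections, costing O(N) rather than O(N^2) in the token count."""
    def __init__(self, dim):
        super().__init__()
        self.to_i = nn.Linear(dim, 1)     # per-token importance scores
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, x):                 # x: (batch, tokens, dim)
        scores = self.to_i(x).softmax(dim=1)            # (B, N, 1)
        context = (scores * self.to_k(x)).sum(dim=1)    # (B, dim), global summary
        return self.out(torch.relu(self.to_v(x)) * context.unsqueeze(1))
```

The linear growth in token count is precisely what makes this style of block practical on mobile hardware.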
18 pages, 3683 KiB  
Article
Finger Vein Identification Based on Large Kernel Convolution and Attention Mechanism
by Meihui Li, Yufei Gong and Zhaohui Zheng
Sensors 2024, 24(4), 1132; https://doi.org/10.3390/s24041132 - 9 Feb 2024
Cited by 1 | Viewed by 1009
Abstract
FV (finger vein) identification is a biometric identification technology that extracts the features of FV images for identity authentication. To address the limitations of CNN-based FV identification, particularly the challenge of small receptive fields and difficulty in capturing long-range dependencies, an FV identification method named Let-Net (large kernel and attention mechanism network) is introduced, which combines local and global information. Firstly, Let-Net employs large kernels to capture a broader spectrum of spatial contextual information, utilizing deep convolution in conjunction with residual connections to curtail the volume of model parameters. Subsequently, an integrated attention mechanism is applied to augment information flow within the channel and spatial dimensions, effectively modeling global information for the extraction of crucial FV features. Experimental results on nine public datasets show that Let-Net has excellent identification performance: the EER and accuracy on the FV_USM dataset reach 0.04% and 99.77%, respectively. With only 0.89 M parameters and 0.25 G FLOPs, the training and inference costs of the model are low, making it easier to deploy and integrate into various applications.
(This article belongs to the Section Biomedical Sensors)
Show Figures

Figure 1. Method flow and overall structure of Let-Net: (a) identification process; (b) overall architecture of Let-Net.
Figure 2. Structural design of large kernels: (a) direct connection; (b) funnel connection; (c) funnel connection; (d) taper connection.
Figure 3. Channel attention mechanism and spatial attention mechanism.
Figure 4. How images are combined, where 1, 2, 3, …, N are the sequence numbers of individual finger vein images and N is the number of finger vein images.
Figure 5. ROC curves of Let-Net on the nine FV datasets.
Figure 6. Bar charts of the results of classical models on the nine FV datasets: (a) bar charts of the EERs; (b) bar charts of the ACC.
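The large-kernel-plus-residual recipe that keeps Let-Net's parameter count low is easy to sketch: a depthwise convolution with a big kernel costs k·k·C weights instead of the k·k·C·C of a dense convolution. The kernel size and block layout below are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class LargeKernelBlock(nn.Module):
    """Depthwise large-kernel convolution for a wide receptive field, a 1x1
    pointwise convolution for channel mixing, and a residual connection."""
    def __init__(self, channels, kernel_size=13):
        super().__init__()
        self.dw = nn.Conv2d(channels, channels, kernel_size,
                            padding=kernel_size // 2, groups=channels)
        self.pw = nn.Conv2d(channels, channels, 1)   # pointwise mixing
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        return x + self.bn(self.pw(self.dw(x)))      # residual connection
```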
19 pages, 4144 KiB  
Article
Finger Vein Recognition Using DenseNet with a Channel Attention Mechanism and Hybrid Pooling
by Nikesh Devkota and Byung Wook Kim
Electronics 2024, 13(3), 501; https://doi.org/10.3390/electronics13030501 - 25 Jan 2024
Cited by 2 | Viewed by 1314
Abstract
This paper proposes SE-DenseNet-HP, a novel finger vein recognition model that integrates DenseNet with a squeeze-and-excitation (SE)-based channel attention mechanism and a hybrid pooling (HP) mechanism. To distinctly separate the finger vein patterns from their background, original finger vein images are preprocessed using region-of-interest (ROI) extraction, contrast enhancement, median filtering, adaptive thresholding, and morphological operations. The preprocessed images are then fed to SE-DenseNet-HP for robust feature extraction and recognition. The DenseNet-based backbone improves information flow by enhancing feature propagation and encouraging feature reuse through feature map concatenation. The SE module utilizes a channel attention mechanism to emphasize the important features related to finger vein patterns while suppressing less important ones. The HP architecture used in the transitional blocks concatenates average pooling with max pooling to preserve both the most discriminative and the contextual information. SE-DenseNet-HP achieved recognition accuracies of 99.35% and 93.28% on the good-quality FVUSM and HKPU datasets, respectively, surpassing the performance of existing methodologies. Additionally, it demonstrated better generalization performance on the FVUSM, HKPU, UTFVP, and MMCBNU_6000 datasets, achieving remarkably low equal error rates (EERs) of 0.03%, 1.81%, 0.43%, and 1.80%, respectively.
(This article belongs to the Special Issue Revolutionizing Medical Image Analysis with Deep Learning)
Show Figures

Figure 1. Block diagram of the image preprocessing stage used in finger vein recognition.
Figure 2. ROI extraction removing unnecessary background in a sample image from the FVUSM dataset.
Figure 3. Image preprocessing steps to obtain clear finger vein patterns from an original finger image in the HKPU dataset.
Figure 4. Dense block architecture inside SE-DenseNet-HP: (a) feature map concatenation inside a dense block; (b) detailed structure of the bottleneck layer; (c) detailed structure of the SE module.
Figure 5. Block diagram of the transitional block used in the SE-DenseNet-HP architecture.
Figure 6. SE-DenseNet-HP network architecture for finger vein feature extraction and recognition. The colored circles represent feature map concatenation between subsequent layers inside a dense block.
Figure 7. Original images along with their corresponding preprocessed images: (a) original HKPU; (b) preprocessed HKPU; (c) original FVUSM; (d) preprocessed FVUSM; (e) original UTFVP; (f) preprocessed UTFVP; (g) original MMCBNU_6000; (h) preprocessed MMCBNU_6000. Labels (i) and (ii) denote images from two separate classes within each dataset.
Figure 8. Comparison of good- and poor-quality finger vein patterns from the HKPU and FVUSM datasets: (a) good-quality original HKPU; (b) its preprocessing; (c) poor-quality original HKPU; (d) its preprocessing; (e) good-quality original FVUSM; (f) its preprocessing; (g) poor-quality original FVUSM; (h) its preprocessing. Labels (i) and (ii) denote images from two separate classes within each quality category.
Figure 9. Comparison of preprocessed images from SE-DenseNet-HP and FVR-Net [22]: (a) original HKPU samples; (b) HKPU preprocessed by SE-DenseNet-HP; (c) HKPU preprocessed by FVR-Net; (d) original FVUSM samples; (e) FVUSM preprocessed by SE-DenseNet-HP; (f) FVUSM preprocessed by FVR-Net. Labels (i)-(iii) indicate images from three distinct classes within each dataset.
Figure 10. Comparison of EERs (%) obtained from SE-DenseNet-HP and FVR-Net.
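The hybrid pooling transition is simple to sketch: concatenate an average-pooled branch (contextual information) with a max-pooled branch (the most discriminative responses) and fuse them with a 1x1 convolution. The exact layout in the paper's transitional blocks may differ from this assumption.

```python
import torch
import torch.nn as nn

class HybridPoolTransition(nn.Module):
    """Downsampling transition that keeps both context (average pooling)
    and salience (max pooling), fused back to out_ch channels."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.avg = nn.AvgPool2d(2)
        self.max = nn.MaxPool2d(2)
        self.fuse = nn.Conv2d(2 * in_ch, out_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([self.avg(x), self.max(x)], dim=1))
```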
25 pages, 5346 KiB  
Article
An Improved Multimodal Biometric Identification System Employing Score-Level Fuzzification of Finger Texture and Finger Vein Biometrics
by Syed Aqeel Haider, Shahzad Ashraf, Raja Masood Larik, Nusrat Husain, Hafiz Abdul Muqeet, Usman Humayun, Ashraf Yahya, Zeeshan Ahmad Arfeen and Muhammad Farhan Khan
Sensors 2023, 23(24), 9706; https://doi.org/10.3390/s23249706 - 8 Dec 2023
Cited by 1 | Viewed by 5230
Abstract
This research work focuses on a near-infrared (NIR) finger-image-based multimodal biometric system built on Finger Texture and Finger Vein biometrics. The individual results of the two biometric characteristics are fused using a fuzzy system to reach the final identification result. Experiments are performed on three databases: the Near-Infra-Red Hand Images (NIRHI), Hong Kong Polytechnic University (HKPU), and University of Twente Finger Vein Pattern (UTFVP) databases. First, the Finger Texture biometric employs an efficient texture feature extraction algorithm, the Local Binary Pattern, and classification is performed using a Support Vector Machine, a proven machine learning classification algorithm. Second, transfer learning of pre-trained convolutional neural networks (CNNs) is performed for the Finger Vein biometric using two approaches; the three selected CNNs are AlexNet, VGG16, and VGG19. In Approach 1, the necessary preprocessing of NIR images is performed before feeding them to the CNN for training. In Approach 2, image intensity optimization is additionally employed before the preprocessing step to regularize the image intensity. NIRHI outperforms HKPU and UTFVP for both modalities of focus, in unimodal as well as multimodal setups. The proposed multimodal biometric system demonstrates a better overall identification accuracy of 99.62%, compared with 99.51% and 99.50% reported in recent state-of-the-art systems.
(This article belongs to the Section Biosensors)
Show Figures

Figure 1. Block diagram of the proposed multimodal biometric system.
Figure 2. Sample images from the NIRHI, HKPU, and UTFVP databases: (a) center finger, NIRHI; (b) ring finger, NIRHI; (c) index finger, NIRHI; (d) HKPU; (e) UTFVP.
Figure 3. Details of images in the employed databases.
Figure 4. Finger Texture identification algorithm for (a) NIRHI, (b) HKPU, and (c) UTFVP.
Figure 5. Finger Vein identification algorithm for the NIRHI, HKPU, and UTFVP databases.
Figure 6. Original and optimized images.
Figure 7. Sample transformed images for the three databases: (a) NIRHI center finger; (b) NIRHI index finger; (c) NIRHI ring finger; (d) HKPU sample fingers; (e) UTFVP sample fingers.
Figure 8. Fuzzy rule-based inference system.
Figure 9. Fuzzification of input and output variables: (a) Finger Texture biometric; (b) Finger Vein biometric; (c) confidence level; (d) surface of the fuzzy inference system.
Figure 10. Output confidence scores against individual input confidence scores for the fuzzy inference system (red: low confidence; green: high confidence).
Figure 11. Sample ROC curves for the Finger Texture (a-c) and Finger Vein (d-i) biometrics.
Figure 12. Training time versus accuracy scatter plots: (a) SVM, NIRHI; (b) SVM, HKPU and UTFVP; (c) CNN, all databases.
Figure 13. Testing time versus accuracy scatter plots: (a) testing candidate image time versus accuracy for SVM; (b) the same for CNN.
Figure 14. Equal error rate ROC curves for the multimodal system using NIRHI: (a) Approach 1 for AlexNet; (b) Approach 2 for AlexNet.
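Score-level fuzzification of the two matchers can be illustrated with a toy Mamdani-style system; the membership functions and rule base below are invented for illustration and are far simpler than the paper's tuned inference system.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_fuse(ft_score, fv_score):
    """Map each modality's matcher score in [0, 1] to low/high memberships,
    apply AND (min) rules, and defuzzify to a single confidence value."""
    low = lambda s: tri(s, -0.01, 0.0, 0.6)
    high = lambda s: tri(s, 0.4, 1.0, 1.01)
    accept = min(high(ft_score), high(fv_score))        # both high -> accept
    reject = max(min(low(ft_score), high(fv_score)),    # any low -> reject
                 min(high(ft_score), low(fv_score)),
                 min(low(ft_score), low(fv_score)))
    return accept / (accept + reject + 1e-9)

print(fuzzy_fuse(0.9, 0.8))  # strong agreement -> confidence near 1
```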
17 pages, 3949 KiB  
Article
A Finger Vein Liveness Detection System Based on Multi-Scale Spatial-Temporal Map and Light-ViT Model
by Liukui Chen, Tengwen Guo, Li Li, Haiyang Jiang, Wenfu Luo and Zuojin Li
Sensors 2023, 23(24), 9637; https://doi.org/10.3390/s23249637 - 5 Dec 2023
Cited by 1 | Viewed by 1298
Abstract
Prosthetic attacks are a problem that must be prevented in current finger vein recognition applications. To this end, a finger vein liveness detection system was established in this study. The system begins by capturing short-term static finger vein videos under uniform near-infrared lighting. It then employs Gabor filters without a direct-current (DC) component for vein area segmentation, and divides the vein area into blocks to compute a multi-scale spatial-temporal map (MSTmap), which facilitates the extraction of coarse liveness features. Finally, these features are refined through training and used to predict liveness with the proposed Light Vision Transformer (Light-ViT) model, whose enhanced backbone is built by interleaving multiple MN blocks and Light-ViT blocks. This architecture effectively balances the learning of local image features, controls network parameter complexity, and substantially improves the accuracy of liveness detection. The accuracy of the Light-ViT model was verified to be 99.63% on a self-made living/prosthetic finger vein video dataset. After the model is made lightweight, the proposed system can also be applied directly on finger vein recognition terminals.
(This article belongs to the Special Issue AI-Driven Sensing for Image Processing and Recognition)
Show Figures

Figure 1. Flowchart of the finger vein static short-term video liveness detection system.
Figure 2. Hardware equipment: (a) housing; (b) internal structure diagram; (c) acquisition process.
Figure 3. Flow chart of the finger vein static short-term video liveness detection system.
Figure 4. Construction of the multi-scale spatial-temporal map.
Figure 5. The structure of the MN block.
Figure 6. L-ViT block structure diagram.
Figure 7. Light-ViT network structure.
Figure 8. Prostheses made of three different materials: (a) A4 paper; (b) PVC plastic; (c) laser film.
Figure 9. The captured in vivo and prosthesis images: (a) living sample; (b) A4 paper; (c) PVC plastic; (d) laser printing film.
Figure 10. Loss curve.
Figure 11. Ablation experiment loss curves of Light-ViT: (a) loss curve of the ablation experiment; (b) MSTmap experimental loss curve.
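The MSTmap construction, which tracks each vein block's mean intensity across the short video, can be sketched as follows. The block grid and the use of plain means are assumptions; the paper additionally aggregates multiple scales.

```python
import numpy as np

def mst_map(frames, grid=(4, 4)):
    """Build a (n_blocks, n_frames) spatial-temporal map: divide each frame's
    vein region into blocks and record each block's mean intensity over time.
    A live finger exhibits a temporal signal that a static prosthesis lacks."""
    n_frames = len(frames)
    h, w = frames[0].shape
    gh, gw = grid
    out = np.zeros((gh * gw, n_frames))
    for t, f in enumerate(frames):
        for i in range(gh):
            for j in range(gw):
                block = f[i * h // gh:(i + 1) * h // gh,
                          j * w // gw:(j + 1) * w // gw]
                out[i * gw + j, t] = block.mean()
    return out
```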
14 pages, 1701 KiB  
Article
Improved Lightweight Convolutional Neural Network for Finger Vein Recognition System
by Chih-Hsien Hsia, Liang-Ying Ke and Sheng-Tao Chen
Bioengineering 2023, 10(8), 919; https://doi.org/10.3390/bioengineering10080919 - 3 Aug 2023
Cited by 3 | Viewed by 1812
Abstract
Computer vision (CV) technology and convolutional neural networks (CNNs) demonstrate superior feature extraction capabilities in the field of bioengineering. However, translation during the capture of finger vein images can degrade the accuracy of a model, making it challenging to apply CNNs to real-time, highly accurate finger vein recognition in various real-world environments. Moreover, despite their high accuracy, CNNs require many parameters, and existing research has confirmed their lack of shift-invariant features. Based on these considerations, this study introduces an improved lightweight convolutional neural network (ILCNN) for finger vein recognition. The proposed model incorporates a diverse branch block (DBB), adaptive polyphase sampling (APS), and a coordinate attention mechanism (CoAM) with the aim of improving the model's performance in accurately identifying finger vein features. To evaluate its effectiveness, we employed the Finger Vein Universiti Sains Malaysia (FV-USM) and PLUSVein dorsal-palmar finger vein (PLUSVein-FV3) public databases for analysis and comparative evaluation against recent research methodologies. The experimental results indicate that the proposed model achieves impressive recognition accuracy rates of 99.82% and 95.90% on the FV-USM and PLUSVein-FV3 public databases, respectively, while utilizing just 1.23 million parameters. Moreover, compared to the finger vein recognition approaches proposed in previous studies, the ILCNN introduced in this work demonstrated superior performance.
Show Figures

Figure 1. Framework of the finger vein recognition system proposed in this study.
Figure 2. Architecture of the diverse branch block: (a) architecture for the training phase; (b) architecture for the testing phase.
Figure 3. Architecture of different attention mechanisms: (a) CBAM architecture; (b) CoAM architecture.
Figure 4. APS sampling on a single-channel feature map [20]: (a) feature map before and after translation; (b) candidate sampling results selected by APS; (c) the candidate with the highest value after L_p normalization; (d) result computed using the traditional downsampling method.
Figure 5. Diverse branch residual block: (a) DBRB proposed in this work; (b) basic residual block proposed by ResNet [23].
Figure 6. ILCNN architecture proposed in this work.
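Adaptive polyphase sampling, the component that restores shift invariance to downsampling, picks whichever stride-2 polyphase component has the largest l_p norm, so a one-pixel input shift selects the same component. A simplified PyTorch sketch of the cited APS idea follows; it assumes even spatial dimensions.

```python
import torch

def adaptive_polyphase_downsample(x, p=2):
    """Of the four stride-2 polyphase components of x (B, C, H, W), keep the
    one with the largest l_p norm per sample, making downsampling invariant
    to single-pixel shifts. Simplified from the cited APS formulation."""
    candidates = [x[..., i::2, j::2] for i in (0, 1) for j in (0, 1)]
    norms = torch.stack([c.abs().pow(p).sum(dim=(1, 2, 3)) for c in candidates],
                        dim=1)                      # (B, 4)
    best = norms.argmax(dim=1)                      # per-sample choice
    stacked = torch.stack(candidates, dim=1)        # (B, 4, C, H/2, W/2)
    return stacked[torch.arange(x.size(0)), best]
```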
26 pages, 2046 KiB  
Article
A Novel Finger Vein Verification Framework Based on Siamese Network and Gabor Residual Block
by Qiong Yao, Chen Chen, Dan Song, Xiang Xu and Wensheng Li
Mathematics 2023, 11(14), 3190; https://doi.org/10.3390/math11143190 - 20 Jul 2023
Cited by 1 | Viewed by 1158
Abstract
The evolution of deep learning has promoted the performance of finger vein verification systems, but it also brings new issues to be resolved, including high computational burden, massive training sample demand, and adaptability and generalization to various image acquisition equipment. In this paper, we propose a novel, lightweight network architecture for finger vein verification, constructed on a Siamese framework and embedded with a pair of eight-layer tiny ResNets as the backbone branch networks; it can therefore maintain good verification accuracy with a small-scale training set. Moreover, to further reduce the number of parameters, Gabor orientation filters (GoFs) were introduced to modulate the conventional convolutional kernels, so that fewer convolutional kernels are required after Gabor modulation, and multi-scale, orientation-insensitive kernels are obtained simultaneously. The proposed Siamese Gabor residual network (SGRN) embeds two parameter-sharing Gabor residual subnetworks (GRNs) for contrastive learning; the inputs are paired image samples (a reference image with a positive/negative image), and the outputs are the probabilities of acceptance or rejection. Subject-independent experiments on two benchmark finger vein datasets reveal that the proposed SGRN model enhances inter-class discrepancy and intra-class similarity. Compared with some existing deep network models applied to finger vein verification, our proposed SGRN achieved an ACC of 99.74% and an EER of 0.50% on the FV-USM dataset and an ACC of 99.55% and an EER of 0.52% on the MMCBNU_6000 dataset. In addition, SGRN is smaller, with only 0.21 × 10^6 parameters and 1.92 × 10^6 FLOPs, outperforming some state-of-the-art FV verification models; it therefore better facilitates real-time finger vein verification.
Show Figures

Figure 1. Illustration of the appearance of finger vein imaging.
Figure 2. Illustration of the basic processing flow of a finger vein verification system.
Figure 3. Illustration of a Siamese network framework.
Figure 4. Overall architectures of the tiny ResNet and Gabor residual network models.
Figure 5. Overall architecture of the proposed Siamese Gabor residual network.
Figure 6. Sample images from the two finger vein datasets.
Figure 7. Visualization of the convolutional kernels in the first three convolutional layers.
Figure 8. ACC, EER, and Params of different CNN-based network models on the FV-USM dataset.
Figure 9. ACC, EER, and Params of different CNN-based network models on the MMCBNU_6000 dataset.
Figure 10. DET curves of the compared models on the two FV datasets.
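Gabor modulation of convolutional kernels can be sketched by element-wise multiplying a small set of learned kernels with fixed Gabor orientation filters (GoFs), so each learned kernel spawns several orientation-tuned copies and the parameter count stays low. The GoF construction below is a simplified assumption, not the paper's exact formulation.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaborModulatedConv(nn.Module):
    """Learned kernels (out_ch, in_ch, k, k) are modulated by n_orient fixed
    Gabor filters, yielding out_ch * n_orient orientation-tuned kernels."""
    def __init__(self, in_ch, out_ch, k=3, n_orient=4):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.1)
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, k),
                                torch.linspace(-1, 1, k), indexing="ij")
        gofs = []
        for o in range(n_orient):
            th = o * math.pi / n_orient
            xr = xs * math.cos(th) + ys * math.sin(th)
            yr = -xs * math.sin(th) + ys * math.cos(th)
            gofs.append(torch.exp(-(xr**2 + 0.5 * yr**2) * 4) * torch.cos(4 * xr))
        self.register_buffer("gofs", torch.stack(gofs))   # fixed, not trained

    def forward(self, x):
        # (out_ch, 1, in_ch, k, k) * (1, n_orient, 1, k, k) -> modulated bank
        w = self.weight.unsqueeze(1) * self.gofs[:, None][None]
        w = w.flatten(0, 1)                                # merge orientations
        return F.conv2d(x, w, padding=self.weight.shape[-1] // 2)
```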
17 pages, 5835 KiB  
Article
The Impact of High Temperatures in the Field on Leaf Tissue Structure in Different Grape Cultivars
by Jiuyun Wu, Riziwangguli Abudureheman, Haixia Zhong, Vivek Yadav, Chuan Zhang, Yaning Ma, Xueyan Liu, Fuchun Zhang, Qian Zha and Xiping Wang
Horticulturae 2023, 9(7), 731; https://doi.org/10.3390/horticulturae9070731 - 21 Jun 2023
Cited by 5 | Viewed by 1529
Abstract
Global warming will significantly affect grapevine growth and development. To analyze the effects of high temperature on the leaf tissue structure of grapevines in the field, 19 representative cultivars were selected from the grapevine germplasm resources garden at the Turpan Research Institute of Agricultural Sciences, XAAS. Twelve tissue structure indexes of grapevine leaves, including the thickness of the upper epidermis (TUE), the thickness of palisade tissue (TPT), leaf vein (LV), the thickness of spongy tissue (TST), the thickness of the lower epidermis (TLE), stoma (St), guard cell (GC), cuticle (Cu), leaf tissue compactness (CTR) and leaf tissue porosity (SR), were measured during the natural high-temperature period in Turpan. The results showed significant differences in the leaf tissue structure of the 19 grapevine cultivars under natural high temperature. Based on the comprehensive comparative analysis of the leaf phenotype in the field, we identified that the leaves of some cultivars, including 'Zaoxia Wuhe', 'Centennial Seedless' and 'Kyoho', showed strong heat tolerance, whereas the grapevine cultivars 'Golden Finger', 'Shine Muscat', 'Flame Seedless', 'Bixiang Wuhe' and 'Thompson Seedless' showed sensitivity to high temperature. We further evaluated the heat tolerance of the different grapevine cultivars by principal component analysis and the optimal segmentation clustering of ordered samples. These findings provide a theoretical basis for adopting appropriate cultivation management measures to reduce the effect of high temperatures and offer fundamental knowledge for future breeding strategies for heat-tolerant grapevine varieties.
(This article belongs to the Special Issue Horticulture Plants Stress Physiology)
Show Figures

Figure 1. Air temperature in the viticultural region of XAAS, Turpan, in July 2022.
Figure 2. Leaf phenotypes of different grapevine cultivars after exposure to high field temperatures. 1: 'Golden Finger'; 2: 'Zhengyan Wuhe'; 3: 'Flame Seedless'; 4: 'Jumeigui'; 5: 'Kyoho'; 6: 'Cardinal'; 7: 'Bixiang Wuhe'; 8: 'Qingfeng'; 9: 'Jintian Meigui'; 10: 'Centennial Seedless'; 11: 'Thompson Seedless'; 12: 'Summer Black'; 13: 'Xinyu'; 14: 'Shine Muscat'; 15: 'Zhengmei'; 16: 'Zitian Wuhe'; 17: 'Zuijinxiang'; 18: 'Zaoxia Wuhe'; 19: 'Brilliant Seedless'.
Figure 3. Anatomical structure of grapevine leaves and response to heat stress: (a-c) different views of the leaf cross-section. AS: abaxial surface; Ca: cambium; CU: cuticle; Ep: epidermis; LE: lower epidermis; LV: lateral vein; Me: mesophyll; MV: main vein; PC: parenchymal cells; PT: palisade tissue; ST: spongy tissue; VB: vascular bundles; Ve: vein; VS: ventral surface; Xy: xylem. Scale bars: (a,c) 50 μm; (b) 100 μm.
Figure 4. The CTR and SR values of grapevine leaf anatomy; different letters indicate means that differ significantly at p < 0.05. (a) CTR values of the 19 grapevines; (b) SR values of the 19 grapevines. The x-axis lists the grape genotypes, numbered as in Figure 2.
Figure 5. Correlation analysis diagram. ** indicates a highly significant correlation at the 0.01 level and * a significant correlation at the 0.05 level. CTR: palisade tissue/leaf thickness ratio; Cu: cuticle; GC: guard cell; P/S: ratio of palisade to spongy tissue; LV: leaf vein; SR: spongy tissue/leaf thickness; St: stoma; TL: thickness of leaf; TLE: thickness of lower epidermis; TPT: thickness of palisade tissue; TST: thickness of spongy tissue; TUE: thickness of upper epidermis.
Figure 6. Principal component analysis of leaf cell structure components. Vectors indicate the direction and strength of each variable's contribution to the overall distribution; colored symbols correspond to leaf structure components. Cluster 1: CTR, P/S, TPT, TL, LV, St, SR; cluster 2: Cu, TLE, GC, TST, TUE.
Full article ">
19 pages, 1545 KiB  
Article
Deep Learning-Based Wrist Vascular Biometric Recognition
by Felix Marattukalam, Waleed Abdulla, David Cole and Pranav Gulati
Sensors 2023, 23(6), 3132; https://doi.org/10.3390/s23063132 - 15 Mar 2023
Cited by 6 | Viewed by 2871
Abstract
The need for contactless vascular biometric systems has significantly increased. In recent years, deep learning has proven to be efficient for vein segmentation and matching. Palm and finger vein biometrics are well researched; however, research on wrist vein biometrics is limited. Wrist vein biometrics is promising because the skin surface at the wrist has no finger or palm patterns, which makes the image acquisition process easier. This paper presents a novel, low-cost, end-to-end contactless wrist vein biometric recognition system based on deep learning. The FYO wrist vein dataset was used to train a novel U-Net CNN structure to extract and segment wrist vein patterns effectively; the extracted images were evaluated to have a Dice coefficient of 0.723. A CNN and a Siamese neural network were implemented to match wrist vein images, obtaining a highest F1-score of 84.7%. The average matching time is less than 3 s on a Raspberry Pi. All the subsystems were integrated with the help of a designed GUI to form a functional end-to-end deep learning-based wrist biometric recognition system.
(This article belongs to the Special Issue Biometrics Recognition Based on Sensor Technology)
Show Figures

Figure 1. Wrist vein example images: (a) input; (b) enhanced vein image; (c) vein features.
Figure 2. Wrist vein recognition system flowchart.
Figure 3. U-Net architecture.
Figure 4. Generated mask images for an original image from FYO: (a) original image; (b) generated mask; (c) mask generated with U-Net.
Figure 5. CNN matching network architecture.
Figure 6. Siamese neural network architecture with sub-network.
Figure 7. Sub-network Siamese feature network architecture.
Figure 8. Wrist vein GUI segmentation process.
Figure 9. Wrist vein GUI matching process.
Figure 10. The developed (filed-to-patent) image acquisition device.
Figure 11. U-Net training evaluation.
Figure 12. Convolutional network training evaluation: loss and accuracy.
Figure 13. Convolutional network training evaluation: F1-score.
Figure 14. Siamese network training evaluation: loss and accuracy.
Figure 15. Siamese network training evaluation: F1-score.
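The reported Dice coefficient of 0.723 for the U-Net segmentation is the standard overlap measure between a predicted mask and a ground-truth mask; for reference:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice coefficient 2|A ∩ B| / (|A| + |B|) over binary masks, as used to
    evaluate the U-Net's wrist vein segmentation."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + 1e-9)
```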