This repository contains supplementary material for the conference paper "Edge AI-Based Vein Detector for Efficient Venipuncture in the Antecubital Fossa" (MICAI 2023 Oral session). Authors: Edwin Salcedo and Patricia Peñaloza
[Project page] [Dataset] [arXiv]
Assessing vein condition and visibility is crucial before obtaining intravenous access in the antecubital fossa, a common site for blood draws and intravenous therapy. However, medical practitioners often struggle with patients who have less visible veins due to factors such as fluid retention, age, obesity, dark skin tone, or diabetes. Current research explores the use of near-infrared (NIR) imaging and deep learning (DL) for forearm vein segmentation, achieving high precision. However, a research gap remains in recognizing veins specifically in the antecubital fossa. Additionally, most studies rely on stationary computers, limiting portability for medical personnel during venipuncture procedures. To address these challenges, we propose a portable vein finder for the antecubital fossa based on the Raspberry Pi 4B.
We implemented several vein semantic segmentation models in `Deep_Learning_based_Segmentation.ipynb` and selected the best-performing one, a U-Net model. We then extended it in `Inference_Multi_task_U_Net.ipynb` with an additional head that predicts the coordinates of the antecubital fossa and the arm angle. The final computer vision system deployed in the vein finder is shown below:
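The multi-task extension can be sketched roughly as follows. This is an illustrative PyTorch sketch, not the repository's actual architecture: the class name `MultiTaskVeinNet` and the tiny convolutional trunk standing in for the full U-Net encoder/decoder are assumptions; only the output semantics (three-class segmentation plus (x, y, angle) regression) come from the description above.

```python
import torch
import torch.nn as nn

class MultiTaskVeinNet(nn.Module):
    """Toy stand-in for the multi-task U-Net: a shared convolutional
    trunk feeding (a) a per-pixel segmentation head (background / arm /
    vein) and (b) a regression head predicting the antecubital fossa
    coordinates (x, y) and the arm angle."""

    def __init__(self, in_ch: int = 1, n_classes: int = 3):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        # Segmentation head: 1x1 conv producing per-pixel class logits.
        self.seg_head = nn.Conv2d(16, n_classes, 1)
        # Regression head: global average pool, then a linear layer
        # emitting (x, y, angle).
        self.reg_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 3)
        )

    def forward(self, x):
        feats = self.trunk(x)
        return self.seg_head(feats), self.reg_head(feats)

model = MultiTaskVeinNet()
batch = torch.randn(2, 1, 128, 128)  # two single-channel NIR crops
seg_logits, fossa_params = model(batch)
```

Both heads share the trunk's features, so the fossa/angle prediction adds almost no cost on top of the segmentation forward pass, which matters on a Raspberry Pi 4B.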
The device was designed using the 3D CAD software SolidWorks; the full assembly can be viewed by opening `Ensamblaje.SLDASM`. We also provide a detailed list of its components and visuals of the final 3D-printed prototype.
| Component | Specifications | CAD design |
|---|---|---|
| Power bank | Xiaomi Mi Power Bank 3 | Power bank |
| NIR camera | Raspberry Pi Camera Module 2 NoIR | Holder Picam Noir Leds Matrix |
| LCD display | Waveshare 3.5-inch Touch Screen | Screen LCD Assembly |
| Processing unit | Raspberry Pi 4 Model B | Raspberry Pi 4B |
| Relay module | DC 5 V 1-channel relay module with optocoupler | Relay |
| LED matrix | Perforated phenolic plate (5×7 cm) + 12 infrared (940 nm) IR emitter LED diodes | LED Matrix |
| On/off switch | ON-OFF switch, 19×13 mm, KCD1-101 | - |
| Case | - | Base Cover Charger |
| 9 V battery holder | - | Case Holder Battery |
| Frontal view | Back view | Side view | Inner view |
|---|---|---|---|
| ![]() | ![]() | ![]() | ![]() |
To collect the dataset, we captured 2016 NIR images of 1008 young individuals with low-visibility veins. Each individual placed one arm at a time on a table, allowing us to capture an NIR image with a preliminary version of the device. The dataset, available here, comes in four versions:
- A: `final_dataset.zip` → Base version with complete annotations. Three samples are shown below.
- B: `final_augmented_dataset.zip` → Version A after applying data augmentation.
- C: `square_final_dataset512x512.zip` → Version A with images resized to 512×512 pixels to match the input requirements of the semantic segmentation models.
- D: `square_augmented_final_dataset512x512.zip` → Version B likewise resized to 512×512.
Below, you can see the original NIR samples, their preprocessed versions (after grayscale conversion and CLAHE), and their annotations: a grayscale mask overlay (shown with a different colormap for visualization), a dot marking the x and y coordinates of the antecubital fossa, and a floating-point number giving the arm angle. Furthermore, we provide a detailed explanation of the file `final_dataset.zip`, which contains the base version of the dataset.
| NIR images | Preprocessed images | Annotations |
|---|---|---|
| ![]() | ![]() | ![]() |
| ![]() | ![]() | ![]() |
| ![]() | ![]() | ![]() |
```
final_dataset/
├── dataset.csv             # Demographic data per sample: age, complexion, gender, observations, NIR image path, preprocessed image path, mask path, antecubital fossa coordinates, and arm angle. Each subject contributed two samples, one per arm.
├── masks/                  # Grayscale masks with pixel values 0, 1, and 2 for background, arm, and vein, respectively.
├── nir_images/             # Raw NIR images.
└── preprocessed_images/    # The same NIR images after grayscale conversion and CLAHE.
```
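Given that encoding, a mask can be turned into a color overlay for inspection as follows; `colorize_mask` is a hypothetical helper and the palette is an arbitrary visualization choice, not the colormap used in the repository.

```python
import numpy as np

# Class indices used in masks/ (per the directory listing above).
BACKGROUND, ARM, VEIN = 0, 1, 2

# Illustrative RGB palette; the repository may use a different colormap.
PALETTE = np.array([
    [0, 0, 0],        # background -> black
    [128, 128, 128],  # arm -> gray
    [255, 0, 0],      # vein -> red
], dtype=np.uint8)

def colorize_mask(mask: np.ndarray) -> np.ndarray:
    """Map a {0, 1, 2}-valued grayscale mask to an RGB image."""
    return PALETTE[mask]

# Tiny demo mask: an arm patch with a single vein pixel.
demo_mask = np.zeros((4, 4), dtype=np.uint8)
demo_mask[1:3, 1:3] = ARM
demo_mask[2, 2] = VEIN
rgb = colorize_mask(demo_mask)
```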
Initial results from implementations of U-Net, SegNet, PSPNet, Pix2Pix, and DeepLabv3+ on the dataset (version C) are presented. The results indicate that U-Net achieved the highest accuracy. As a result, we focused further research on this method for antecubital fossa detection.
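A standard way to score such a comparison is mean intersection-over-union (mIoU) across the three classes. The sketch below shows the metric on toy masks; it is an assumed illustration of the metric, not the repository's evaluation code.

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, n_classes: int = 3) -> float:
    """Mean intersection-over-union over classes present in pred or target."""
    ious = []
    for c in range(n_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both masks; skip it
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

# Toy 2x4 masks with classes 0 (background), 1 (arm), 2 (vein).
target = np.array([[0, 0, 1, 1],
                   [0, 2, 2, 1]])
pred = np.array([[0, 0, 1, 1],
                 [0, 2, 1, 1]])
score = mean_iou(pred, target)  # per-class IoUs: 1.0, 0.75, 0.5
```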
To validate the device, we asked three certified nurses to indicate where they would perform venipuncture on 384 samples. We saved this information in image format and share it in the validation folder inside the dataset. We have also included the documents signed by the nurses confirming their consent to share this information. The annotated images can be used to compare the model's inference against the nurses' chosen venipuncture locations. Using these image subsets to evaluate the proposed device, we found an 83% agreement between the regions identified by the nurses and those identified by the U-Net vein segmentation algorithm.
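The exact agreement criterion is not spelled out here; one plausible check is whether a nurse-marked point falls on (or near) a predicted vein pixel. In the sketch below, `nurse_agrees` and the 5-pixel tolerance are pure assumptions for illustration.

```python
import numpy as np

def nurse_agrees(pred_mask: np.ndarray, nurse_xy: tuple,
                 vein_class: int = 2, tolerance: int = 5) -> bool:
    """True if the nurse-marked venipuncture point (x, y) lies within
    `tolerance` pixels (Chebyshev distance) of a predicted vein pixel."""
    x, y = nurse_xy
    y0, y1 = max(0, y - tolerance), y + tolerance + 1
    x0, x1 = max(0, x - tolerance), x + tolerance + 1
    return bool((pred_mask[y0:y1, x0:x1] == vein_class).any())

# Synthetic prediction: a vein blob in a 100x100 mask.
pred = np.zeros((100, 100), dtype=np.uint8)
pred[40:60, 45:55] = 2
agreement = [nurse_agrees(pred, p) for p in [(50, 50), (10, 10), (44, 38)]]
rate = sum(agreement) / len(agreement)
```

Averaging such per-sample booleans over the 384 validation samples would yield an aggregate agreement rate of the kind reported above.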
In this repository, we provide a pretrained multi-task U-Net model, embedded within a complete pipeline for performing inference on NIR images. You can run the pipeline by following these steps:
```shell
# Clone the repository and enter it
git clone git@github.com:EdwinTSalcedo/CUBITAL.git cubital
cd cubital

# Create and activate a new conda environment
conda create -n new_env python=3.10.12
conda activate new_env

# Install the dependencies
pip install -r requirements.txt

# Execute the inference script
python inference.py
```
The pretrained, serialized model files are stored in `edge/models`. The Jupyter notebooks (`.ipynb` files) in the `notebooks` directory contain the code used to train and evaluate these models, along with their architectural definitions.
If you find CUBITAL useful in your project, please consider citing the following paper:
@inproceedings{salcedo2023,
title={Edge AI-Based Vein Detector for Efficient Venipuncture in the Antecubital Fossa},
author={Salcedo, Edwin and Pe{\~n}aloza, Patricia},
booktitle={Mexican International Conference on Artificial Intelligence},
pages={297--314},
year={2023},
organization={Springer}
}