
Systematic Review

Implementation of Thermal Camera for Non-Contact Physiological Measurement: A Systematic Review

by Martin Clinton Tosima Manullang 1,2, Yuan-Hsiang Lin 1,*, Sheng-Jie Lai 1 and Nai-Kuan Chou 3,*
1 Department of Electronic and Computer Engineering, National Taiwan University of Science and Technology, Taipei 10607, Taiwan
2 Department of Informatics, Institut Teknologi Sumatera, South Lampung Regency 35365, Indonesia
3 Department of Cardiovascular Surgery, National Taiwan University Hospital, Taipei 10002, Taiwan
* Authors to whom correspondence should be addressed.
Sensors 2021, 21(23), 7777; https://doi.org/10.3390/s21237777
Submission received: 10 October 2021 / Revised: 6 November 2021 / Accepted: 19 November 2021 / Published: 23 November 2021
(This article belongs to the Special Issue Neurophysiological Monitoring)

Abstract:
Non-contact physiological measurements based on image sensors have developed rapidly in recent years. Among them, thermal cameras have the advantage of measuring temperature in environments without light and have the potential for developing physiological measurement applications. Various studies have used thermal cameras to measure physiological signals such as respiratory rate, heart rate, and body temperature. In this paper, we provide a general overview of the existing studies by examining the physiological signals measured, the platforms used, the thermal camera models and specifications, the use of camera fusion, the image and signal processing steps (including the algorithms and tools used), and the performance evaluation. The advantages and challenges of thermal camera-based physiological measurement are also discussed. Several suggestions and prospects, such as healthcare applications, machine learning, multi-parameter measurement, and image fusion, are proposed to improve thermal camera-based physiological measurement in the future.

1. Introduction

1.1. Research Motivation

The use of thermal cameras has become widespread in recent years, as they can be applied in various fields. Thermal cameras have the advantages of operating in environments without light and not being affected by changes in lighting. Existing studies illustrate that thermal cameras can be used to monitor respiratory rate (RR), heart rate (HR), and body temperature, while other studies apply them to breast cancer diagnosis [1], evaluating physical condition [2], stress level [3], neonates' health condition [4], sleep posture [5], and many more applications not mentioned here.
Meanwhile, vital sign data, such as blood pressure, temperature, respiratory rate, and heart rate, are critical for patient care and diagnosis. They enable physicians and other healthcare workers to make informed decisions about a patient's treatment options and overall well-being. However, existing medical instruments still rely on physical contact to gather data about patients' health. Most techniques for determining respiratory and heart rate involve physical contact with the patient, such as pulse oximeters, electrocardiogram (ECG) monitoring systems using electrodes, or piezoelectric sensors.
During the COVID-19 pandemic, the whole world drastically reduced the amount of direct contact. People are reluctant to visit health and medical institutions for fear of infection. Based on existing studies [6], there has been a change in medical services since the outbreak of COVID-19. Contactless services have been implemented during the pandemic and will become commonplace [7] even after it ends. Several developments, such as measuring RR and HR using non-contact methods with radar sensors [8], blood volume pulse and vasomotion measurements [9] using radio frequency, and the Doppler effect to monitor vital signs [10], have been tested and researched.
Body temperature is an excellent indicator of a patient's health [11]. Human body temperature can be categorized into two types: skin temperature and core body temperature. Skin temperature is the temperature of the outermost surface of the body. Average human skin temperature varies between 33.5 and 36.9 °C (92.3 and 98.4 °F), while healthy core body temperature falls between 37 °C (98.6 °F) and 37.8 °C (100 °F) [12]. According to a study [13], extreme body temperature can negatively affect how the human body and vital organs work. To compensate, the body has thermoregulation processes that enable it to maintain a standard core internal temperature.
Non-contact systems can detect temperature through infrared thermography, which detects the electromagnetic waves produced by anything with a temperature above absolute zero. This phenomenon underlies the development of thermal cameras. Thermal cameras measure temperature using infrared radiation in the 1–14 µm spectral range [14,15]. This measurement procedure is known as infrared thermography (IRT). IRT is a non-invasive technique that remotely measures the energy emitted by an entity (i.e., human body, industrial machine, engine, and many other objects). IRT is applied as an indirect technique for capturing changes in surface body temperature and can be used to measure other physiological signals [16]. The thermal camera most commonly used for medical purposes is the long-wavelength infrared (LWIR) type, with a 7–14 µm spectral range [17].
According to research in the United States by McKinsey between March and April 2020, a large migration to telemedicine occurred, coinciding with an over 80% drop in in-person visits. The use of telemedicine by physicians and healthcare organizations has also expanded by 50–175 times since the COVID-19 outbreak [18]. The increase in public demand for indirect healthcare during the pandemic led to the rapid development of non-contact healthcare practice and emphasized its importance. There is also a trend for non-contact measurement technology to replace current conventional methods without any compromise on performance or accuracy. However, the use of thermal cameras to capture vital signs has its challenges in terms of image and signal processing.

1.2. Research Objective

This paper aims to systematically evaluate the use and development of thermal cameras in applications for measuring vital signs, using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method to select relevant papers from 2012 to 2021. More specifically, this paper evaluates system capabilities, thermal camera types, signal processing steps, and system platforms, and highlights system performance along with the validation methods. This systematic review contributes an evaluation of recent progress in non-contact physiological measurement with LWIR thermal cameras that can serve as a reference for other related research.

1.3. Comparison with Existing Reviews

Other previous systematic review papers that also discussed the applications of thermal cameras are listed in Table 1.
There are some significant differences between this systematic review and the existing ones, which give this review its novelty. The studies [19,20] are similar to this one in their research objectives but cover different years. Some studies have a scope that focuses on specific aspects such as sports [21], psychophysiology [22], breast cancer [1], neonates [4], vein finders [24], and human core temperature [26]. Other studies have a broader scope; for example, the study conducted by He et al. [27] covered all aspects outside the health field.

2. System Architecture in General

The image processing carried out by each study requires a platform on which the processing takes place. Most of the studies carried out signal processing in software, using Python as the programming language and OpenCV as the library framework. Python is a cross-platform programming language that runs on hardware ranging from personal computers to mini-PC boards such as the Jetson [28] and Raspberry Pi. OpenCV stands for Open-Source Computer Vision, a library that provides various image processing functions which can be used for real-time processing directly from the camera or on pre-recorded image data.
Although each study that uses a thermal camera for physiological measurement included in this systematic review is very diverse in various vital signs, the processing stages of each study can generally be summed up in one process flow that can be seen in Figure 1.

2.1. Thermal Camera Model and Specification

The image processing stage begins with the acquisition of a thermal image from a thermal camera. The resolution and the number of images captured per second, known as frames per second (FPS), are essential in signal processing, primarily for image-related processing. Several cameras are used more than once by the studies included in this systematic review, i.e., the A315 and A325 from Teledyne FLIR LLC and the MAG62 from Magnity Electronics Co., Ltd., Shanghai, China.
There are several important specifications related to thermal cameras. In general, the cameras used in the included studies record video from 8.7 to 60 FPS at resolutions between 160 × 120 and 1024 × 768 pixels. FPS and resolution are closely related to the quality of the resulting signal [29,30]. FPS refers to the number of thermal image frames captured in one second. The more image frames obtained, the more thermal information and variation are captured. Therefore, the FPS can be interpreted as the sampling rate of the system. Likewise, the image's dimensions determine the number of measurement points made by the thermal camera. The larger the image dimensions, the easier it is for the system to detect the region of interest (ROI).
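Interpreting the FPS as a sampling rate, the Nyquist criterion bounds the highest periodic physiological rate a thermal video can resolve. A minimal sketch (values below are illustrative, not from any surveyed study):

```python
# Sketch: the camera frame rate acts as the sampling rate, so the
# Nyquist criterion bounds the highest periodic rate the video can
# resolve without aliasing.

def max_detectable_rate_bpm(fps: float) -> float:
    """Highest periodic rate (in cycles per minute) resolvable at a given FPS."""
    nyquist_hz = fps / 2.0   # Nyquist frequency in Hz
    return nyquist_hz * 60.0  # convert cycles/s -> cycles/min

# Even the slowest camera among the surveyed studies (8.7 FPS) is far
# above what breathing (~12-20 bpm) or heart rate (~60-100 bpm) requires.
print(max_detectable_rate_bpm(8.7))   # ~261 cycles/min
print(max_detectable_rate_bpm(60.0))  # ~1800 cycles/min
```

In practice the margin matters because noise and motion artifacts make signals near the Nyquist limit unreliable, which is one reason higher-FPS cameras yield cleaner respiratory and cardiac signals.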
Apart from thermal image specifications, two variables are often quoted regarding the performance of thermal cameras: temperature accuracy and thermal sensitivity. Temperature accuracy indicates how close a measurement from a thermal imager is to the actual absolute value. Meanwhile, thermal sensitivity refers to the noise equivalent temperature difference (NETD). This value specifies the smallest temperature difference that the camera can detect.
Some research shows that NETD is a critical aspect of thermal camera performance [31,32,33,34]. The NETD value is also an essential variable when using low-cost thermal cameras for applications in the medical field. For example, a thermal camera with an NETD value of less than 50 mK is ideal for medical applications [35]. In studies related to the measurement of respiratory rate and heart rate, the NETD is an essential aspect because the measurement considers changes in temperature rather than the temperature value itself. Meanwhile, the study conducted by Pan et al. [36] applied correction variables to body temperature readings, while a study conducted by Rao et al. [37] developed an automatic temperature correction algorithm to calibrate the camera against a blackbody reference.
The correction value from the thermal camera is obtained by calibrating it. This calibration process is performed using a radiometric calibration method, which establishes the relationship between the pixel signal and the temperature of the target object [38,39]. Calibrated thermal cameras minimize temperature readings that differ significantly from reference devices and become more reliable for medical applications.
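As a minimal illustration of radiometric calibration, one can fit a mapping from raw pixel counts to blackbody reference temperatures. Real calibrations use Planck-curve models; over the narrow clinical range a linear fit sketches the idea. The data points and counts below are synthetic, not from any of the cited studies:

```python
import numpy as np

# Synthetic calibration data: raw sensor counts recorded while imaging
# a blackbody at known setpoint temperatures.
raw_counts = np.array([7800.0, 8100.0, 8400.0, 8700.0, 9000.0])
ref_temp_c = np.array([33.0, 34.5, 36.0, 37.5, 39.0])  # blackbody setpoints

# Least-squares linear fit: T = a * counts + b
a, b = np.polyfit(raw_counts, ref_temp_c, 1)

def counts_to_celsius(counts: float) -> float:
    """Convert a raw pixel value to temperature with the fitted model."""
    return a * counts + b

print(counts_to_celsius(8550.0))  # ~36.75 degC for these synthetic data
```

The same fit, repeated against the blackbody at regular intervals, is what keeps low-cost sensors within a clinically acceptable error band.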
More complete details regarding the use of thermal cameras in each study are summarized in Table 2.

2.2. Image Pre-Processing and Feature Matching

The second stage is the pre-processing of the thermal image. Pre-processing is carried out on the entire image frame. During pre-processing, a Gaussian filter, resizing of the image, conversion of the FPS, and altering of the color channels to grayscale, bitmap, or pseudocolor can be applied.
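The pre-processing steps above can be sketched as follows; SciPy equivalents stand in for the OpenCV calls the surveyed studies typically use, and the input frame, sigma, and scale factor are illustrative choices, not values from any study:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

# Synthetic 16-bit-style thermal frame (raw counts).
rng = np.random.default_rng(0)
frame = rng.integers(7000, 9000, size=(120, 160)).astype(np.float64)

# 1. Gaussian filter to suppress sensor noise.
smoothed = gaussian_filter(frame, sigma=1.0)

# 2. Resize (here: upscale 2x, e.g. to match an RGB camera's grid).
resized = zoom(smoothed, 2.0, order=1)

# 3. Normalize raw counts to an 8-bit grayscale image for display.
lo, hi = resized.min(), resized.max()
gray8 = ((resized - lo) / (hi - lo) * 255).astype(np.uint8)

print(gray8.shape, gray8.dtype)  # (240, 320) uint8
```

With OpenCV the same chain would be `cv2.GaussianBlur`, `cv2.resize`, and `cv2.normalize`; the operations are interchangeable.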
A feature matching stage is required for studies that use more than one camera, which is known as image fusion. The fused cameras are varied: some use a combination of near-infrared (NIR) and LWIR cameras [40,53], while others use a combination of LWIR and RGB cameras [42,47,48,52]. Most studies combined two images, generally a thermal image and an RGB image. The RGB image is used for ROI detection, and the ROI coordinates in the RGB image are then matched with the coordinates in the thermal image. Diverse methods are used to determine these coordinates, ranging from multispectral localization using the dlib algorithm [55,56] to pre-trained machine learning models. These points are then correlated with the thermal image. Some pre-processing may be required, such as frame-rate synchronization and adjustment of the image dimensions (in general, thermal images are often smaller than RGB images). This cross-correlation process also has several algorithms, including the affine transformation [57], the Oriented FAST and Rotated BRIEF feature [58,59], and others. The cross-correlation produces an equation matrix called a homography matrix (some studies call it a transformation matrix or correlation matrix). An illustrative simplification of this process can be seen in Figure 2.
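Once a homography matrix has been estimated from matched features, applying it is a small linear-algebra step. The matrix below is a made-up example (a pure scale plus offset, as when a thermal sensor is smaller than and slightly offset from the RGB sensor), not taken from any study:

```python
import numpy as np

# Hypothetical homography mapping RGB pixel coordinates into
# thermal-image coordinates: the thermal frame is ~4x smaller and
# shifted by a small offset.
H = np.array([[0.25, 0.0, 10.0],
              [0.0, 0.25,  8.0],
              [0.0, 0.0,   1.0]])

def map_points(H: np.ndarray, pts: np.ndarray) -> np.ndarray:
    """Project Nx2 pixel coordinates through a 3x3 homography."""
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    projected = homog @ H.T
    return projected[:, :2] / projected[:, 2:3]       # back to 2-D

# Nose-ROI bounding-box corners detected in a 640x480 RGB frame.
rgb_roi = np.array([[300.0, 200.0], [340.0, 240.0]])
thermal_roi = map_points(H, rgb_roi)
print(thermal_roi)  # the same ROI corners in thermal coordinates
```

In an OpenCV pipeline the matrix itself would come from `cv2.findHomography` on ORB matches; the projection step is identical.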

2.3. Determining and Tracking of ROI

Determining the ROI in a two-dimensional image is an important stage in thermal camera image processing. Several methods are used by the studies selected in this systematic review. The first is the Viola-Jones framework [60]. Paul Viola and Michael Jones developed this framework using Haar features, and it is commonly used to detect facial parts. All studies by Negishi [42,47,48] included in this systematic review used Viola-Jones to determine the ROI. However, Hu et al. [51] claimed that their ROI detection algorithm performs better than Viola-Jones, with 98.46% accuracy versus 87.69%. Chen et al. [52] also argue that Viola-Jones in OpenCV can only determine coarse face locations and is not precise enough for respiratory rate measurement; they therefore use deep learning to determine the ROI.
Movement between frames, especially when the camera is set at a low FPS, is often a problem in signal acquisition from the thermal camera. For this reason, optical flow is used to solve this problem. The study conducted by Lyra et al. [28] used optical flow on the thermal image so that subtle motion in the chest area is compensated before the respiratory signal is extracted. Furthermore, the study conducted by Scebba et al. [40] used the dense optical flow algorithm developed by Farneback to reduce the periodic motion of the torso.
Extracting signals from moving objects is a challenge, and therefore tracking methods are needed, one of which is the Kanade-Lucas-Tomasi (KLT) algorithm used by several studies [40,51,52]. This algorithm uses a linear coordinate mapping that determines the corresponding region in the thermal video. The tracker extracts feature points from the ROI using the minimum eigenvalue algorithm and follows those points with a single-point tracker.
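The core idea of following a feature patch between consecutive frames can be sketched with a brute-force sum-of-squared-differences (SSD) search; note this is a simplified stand-in, since real KLT uses gradient-based iterative alignment rather than exhaustive search. The frames below are synthetic:

```python
import numpy as np

def track_patch(prev, curr, top, left, size=8, radius=4):
    """Find where the (size x size) patch from `prev` moved in `curr`
    by minimizing the sum of squared differences within a search radius."""
    template = prev[top:top + size, left:left + size]
    best, best_pos = np.inf, (top, left)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > curr.shape[0] or x + size > curr.shape[1]:
                continue  # candidate window falls outside the frame
            ssd = np.sum((curr[y:y + size, x:x + size] - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

rng = np.random.default_rng(1)
frame0 = rng.normal(size=(64, 64))
frame1 = np.roll(frame0, shift=(2, 3), axis=(0, 1))  # scene shifted 2 down, 3 right

print(track_patch(frame0, frame1, top=20, left=20))  # (22, 23)
```

In OpenCV the equivalent production-grade call is `cv2.calcOpticalFlowPyrLK`, seeded with corners from `cv2.goodFeaturesToTrack` (the minimum eigenvalue / Shi-Tomasi detector mentioned above).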

2.4. Signal Extraction, Feature Extraction, and Classification

Signal extraction is an advanced stage that is carried out once the system has succeeded in identifying the ROI and tracking its movements. Vital physiological signals are plotted in units of time, known as time-series signals. The general method for extracting this signal is by comparing pixel-per-pixel motion in thermal images [61]. Often, the extracted signal requires post-processing with filters before it exhibits a characteristic change (signature). These changes contain data that must be extracted at the feature extraction stage. Several algorithms can be used for extracting features, i.e., peak detection [62,63], fuzzy rules [64], one-dimensional CNNs [65], power spectral density [66], and various other methods. There is also a Python toolkit for quickly extracting features [67], available as a Python package.
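Peak detection, the first of the feature-extraction methods listed, can be sketched on a synthetic nostril-temperature time series; the sampling rate, breathing rate, and thresholds below are illustrative choices, not values from any study:

```python
import numpy as np
from scipy.signal import find_peaks

fps = 30.0                        # camera frame rate = sampling rate
window_s = 60.0                   # analyze a 60 s window
t = np.arange(0, window_s, 1 / fps)

# Simulate 15 breaths per minute plus mild sensor noise.
rr_hz = 15.0 / 60.0
rng = np.random.default_rng(2)
signal = np.sin(2 * np.pi * rr_hz * t) + 0.1 * rng.normal(size=t.size)

# Require a minimum peak spacing (max plausible RR ~40 bpm) and a
# minimum height so noise ripples near the troughs are ignored.
peaks, _ = find_peaks(signal, distance=fps * 60 / 40, height=0.5)

rr_bpm = len(peaks) * 60.0 / window_s  # peaks per window -> breaths/min
print(rr_bpm)  # ~15 breaths per minute
```

The `distance` and `height` constraints play the role of the post-processing filters mentioned above: they encode physiological priors that reject spurious extrema.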
The results of this feature extraction yield a value that can be turned into the system's output. In some studies, this output cannot be easily interpreted directly. Using machine learning or deep learning [43,45], the signals generated by the previous stages are classified based on a model trained beforehand. This output is a form of classification that users can easily interpret. Each input is directed to two or more outputs, either anomaly detection or a multi-class output, using classification and machine learning.

3. Thermal Camera for Physiological Measurement

This systematic review examines the use of thermal cameras for physiological measurement, broken down into three subsections: respiratory rate, heart rate, and body temperature.

3.1. Respiratory Rate

3.1.1. Overview of Respiratory Rate Measurement

Monitoring RR and its variations is critical for determining an individual's health status [68]. Moreover, research [69] states that the RR can be used as one of the most efficient indicators of whether a person is healthy. An anomalous RR is critical for detecting significant health problems and can also be used to forecast potentially serious clinical outcomes such as influenza classification [42], lung airflow limitation [70], and sleep apnea screening. Additionally, monitoring variations in RR can assist in identifying a high-risk intensive care patient up to 24 h before a medical emergency.
RR is defined clinically as the number of breaths recorded within a minute (in breaths per minute, or bpm). In general, the RR is normal if it is 12–20 bpm for adult humans. A study [71] shows that a slight increase to 24–28 breaths per minute correlates to an increased risk of mortality of 5%. During the COVID-19 pandemic, RR counting was essential, as RR is also a vital sign that determines the severity of COVID-19 infection [72]. This viral outbreak has caused many ICUs to be at full capacity, with high bed occupancy rates throughout the world. Medical equipment, including instruments for measuring vital signs, is insufficient in some countries [73]. The thermal camera is undoubtedly one potential solution for developing an RR counter that works without direct contact.
While RR is a significant clinical predictor of severe events, it is often measured manually, yielding erroneous findings. RR was often not regularly recorded, even when the patient’s main complaint was a respiratory illness [68].
There are several well-known methods used for RR monitoring [68]. The first is the manual human counting method; however, it can be inaccurate and time-consuming. The second uses a spirometer. This method is considered accurate and also measures some other respiratory parameters; however, it can interfere with natural breathing and is difficult to use for continuous RR monitoring. The third approach employs capnometry, a highly accurate, simple, and continuous monitoring technique; however, it is a contact approach that is not particularly pleasant and requires analysis using specialized equipment. The last approach is impedance pneumography, which is precise, continuous, and concurrent; however, this procedure is challenging to conduct and requires specialized tools for analysis.
In addition to standard medical measurements used as a reference in hospitals, there are also several affordable, contact-based methods to measure RR, such as using a nasal temperature probe near the nostril [74] or a microphone located near the nostril that records the inhale-exhale sound [75]. The primary drawbacks of these methods stem from their intrusive nature: they may be unpleasant and potentially disruptive to sleep, which may alter the results. Additionally, patient movement and other signal noise may dislodge the sensors or skew the data.
Noninvasive methods for detecting breathing include non-contact audio analysis, vibration sensors, thermal imaging, and Doppler radar sensors. The extraction of breathing sounds from sensor data polluted by ambient noise is a significant challenge for non-contact audio analysis. Vibration sensors impose positional and postural limitations and need costly specialized hardware. By detecting the breath as it is exhaled, thermal imaging methods can record a breathing signal. Thermal imaging can be achieved using a thermal camera. One of the advantages of a thermal camera is that it can be used reliably regardless of light intensity, even in a completely dark room. In general, the challenge in using a thermal camera is processing the thermal images and extracting features from them.

3.1.2. Summary of Thermal Camera Usage Related to Respiratory

Next, Table 3 summarizes all the characteristics of the studies with respiration as the main objective that were used in this systematic review.

3.1.3. Deep Learning for RR Monitoring

Several studies use deep learning to classify the breathing pattern and to determine the ROI from the image. The machine learning and deep learning algorithms used in the studies listed above include CSPDarknet [28], FlowNet 2.0 [76] (an algorithm based on deep networks), k-nearest neighbors (k-NN) [43,45], and the cascade convolutional neural network (CCNN) [40].
CSPDarknet is the backbone running in YOLOv4. Lyra et al., in their research [28], used it to determine four classes that would be used as ROIs, namely head, chest, patient, and clinician. The head ROI is used to measure body temperature, the chest ROI is used for RR estimation, and the clinician ROI is used to count the number of clinicians near the patient. After the chest ROI was determined, respiratory movement was tracked using a pixel-wise temporal mean algorithm that compares movement between frames. A neural network was also used by Scebba et al. [40] to determine five facial landmarks, utilizing a CCNN on NIR images.
Meanwhile, Jagadev et al., in both of their studies [43,45], used k-NN to decide whether the human volunteer had normal or abnormal respiration. First, the breath detection algorithm (BDA), which they also developed, was used to extract respiratory movements. In simple terms, the BDA calculates the number of peaks and valleys based on the movement of the ROI at the nostrils. Finally, the output of the BDA is forwarded to the k-NN to classify the breathing as normal, bradypnea, or tachypnea (abnormal). The output of this system is compared with the support-vector machine (SVM) method to determine the accuracy achieved.

3.1.4. Camera Sensor Fusion: Usability and Image Fusion Method

Several studies on the list combined two types of cameras for different respiratory-related measurements. Most use a combination of an LWIR thermal camera with a CMOS RGB camera, the color camera we commonly find in smartphones, webcams, or point-and-shoot cameras. Merging these two cameras aims to gain the advantages of each camera and eliminate their weaknesses. Light significantly affects RGB cameras, and this type of camera cannot be used in low-light or no-light conditions. In contrast, thermal cameras can capture objects even without light because they work using the principle of radiation emitted by objects. Table 4 summarizes each study involving fusion cameras and their characteristics.
The spectral video fusion study [40] combines two types of infrared cameras, NIR and LWIR. Scebba et al. introduced a new algorithm to calculate the RR based on multispectral data fusion from the two cameras. The multispectral ROI localization analyzes footage from the LWIR and NIR cameras. The localized ROIs are used to extract the thermal airflow (TA) signal from the nose ROI and the respiratory motion signal from the chest ROIs in the LWIR and NIR images. The RR and signal-to-noise ratio (SNR) are calculated in the signal quality-based fusion (SQb Fusion) using the frequency analysis of the TA and respiratory motion signals from both the LWIR and NIR cameras. A weighted median then generates an RR estimate by combining all RR estimations and weighting them by their SNR. Temporal and frequency characteristics of the TA and respiratory motion signals from the NIR and LWIR cameras are used as input to an ensemble of support-vector machines to determine whether apnea occurs. The intelligent signal quality-based fusion (S2Fusion) algorithm combines the findings of the SQb Fusion with the apnea classifier (h) to produce an apnea-sensitive signal.
There is also the use of two different cameras to measure two different physiological quantities. Negishi et al. used the same two-camera configuration in all three of their studies [42,47,48]. The LWIR camera is used to measure RR, while the RGB camera is used to measure HR. Image fusion determines the ROI based on the RGB image rather than the thermal image. To determine the ROI of the nose and mouth, a feature matching analysis was performed with a homography matrix between the RGB and thermal images based on the contours of the human face. GrabCut is used for facial contour extraction, the Oriented FAST and Rotated BRIEF (ORB) algorithm is used for feature matching, and dlib is used for determining the region of the nose and mouth. All these algorithms and tools are available as open-source libraries.
Similarly, Hu et al., in their study [51], also use camera fusion, with facial objects detected using the RGB camera while the ROI is determined in the thermal image. To register the thermal and visual images, an affine transformation is needed. The first step is to pick the most correlated points in the first frame of the bimodal videos to determine the fixed points in the thermal image and the corresponding points in the RGB image. Following that, cross-correlation is used to adjust these points in order to produce the transformation matrix. After mapping between the RGB image and the thermal image, the bounding boxes are determined for the ROI objects (the face, nose, and mouth) using the Viola-Jones algorithm. The Shi-Tomasi corner detection algorithm is used to extract the interest points to calculate the covariance matrix, while the KLT algorithm is used to track the ROI under movement.
As before, the use of an RGB camera for face detection is also employed by Chen et al. [52]. They provide an alternative method for measuring RR if no face is detected, by tracking sticky markers placed on the body. Meanwhile, to combine the RGB and thermal images, they use an affine transformation to align the two different geometries. The Viola-Jones algorithm is used to detect faces, while the KLT algorithm is used for tracking the ROI.

3.1.5. RR Signal Extraction Process

A thermal image has only a single information channel: a temperature representation converted into an image matrix. It is by utilizing this single channel that a respiratory signal can be generated. In general, as shown in Figure 3, two changes can be observed: the first is a change in the temperature value and the second is a change in movement. Each of these characteristics is exploited by each study according to the methods and algorithms it uses.
The study [44] conducted by Mutlu et al. used the temperature change around the nostril to indicate respiration. After defining the ROI and excluding non-varying pixels, the decreasing segments are identified using experimentally established criteria for a minimal frame-to-frame decline. If a single frame exists between two possible decrease segments, they are combined. The process of identifying temperature changes at the pixel level by comparing frames one by one was also used in other studies.
Another way of extracting the RR signal is to consider the movement of pixels between frames without making the nose or mouth the ROI. This method was used in the following study [54] and is more reliable in real time and with patients in the frame using a blanket or in a position not facing the camera. Breathing motion detection uses a background subtraction technique to identify motion by computing the difference between the current and previous frames. To be precise, the absolute difference between the current frame I(x, y, f) and the previous frame I(x, y, f − 1) is computed for all pixels, where x, y are the coordinates on the x and y axes, respectively, and f is the frame index. Then, employing thresholding, erosion, and dilation procedures, parts of the relocated region are removed. The parameters utilized in these procedures are a threshold of 5, which discards pixel differences smaller than 5, and a 5 × 5 kernel for opening (i.e., erosion followed by dilation). Following that, bounding boxes are determined using contour detection and noise filtering. Finally, the RR is determined from the number of bounding boxes. The concept of comparing pixel movement between consecutive frames is also used by other studies [77].
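The frame-differencing pipeline just described can be sketched as follows, with SciPy morphology standing in for the OpenCV calls and purely synthetic frames (the threshold of 5 and 5 × 5 kernel match the description above; everything else is illustrative):

```python
import numpy as np
from scipy.ndimage import binary_opening, label

# Synthetic consecutive frames: a 20x20 "chest" patch moves/warms,
# plus sub-threshold sensor noise everywhere.
prev = np.zeros((120, 160), dtype=np.float64)
curr = prev.copy()
curr[40:60, 70:90] += 20.0
curr += 2.0 * np.random.default_rng(3).random(curr.shape)  # noise < 5

# |I(x, y, f) - I(x, y, f-1)| > 5 keeps only meaningful change.
diff = np.abs(curr - prev)
mask = diff > 5

# 5x5 opening (erosion then dilation) removes isolated noise pixels.
mask = binary_opening(mask, structure=np.ones((5, 5)))

# Label connected regions (a stand-in for contour detection); the
# region count corresponds to the number of bounding boxes.
_, n_regions = label(mask)
print(n_regions)  # 1 moving region -> one bounding box
```

Counting labeled regions per frame over time yields the oscillating motion signal from which the RR is read off.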
Negishi et al., in three of their studies [42,47,48], use the multiple signal classification (MUSIC) algorithm to calculate RR estimates. This algorithm has been shown to be more accurate than the FFT on time-series data with a shorter window. Using this algorithm, the correlation matrix of the time-series data is calculated and its eigenvectors are obtained.
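For context, a simpler spectral stand-in is the FFT periodogram estimate that MUSIC improves upon: pick the dominant frequency bin within the plausible breathing band. MUSIC replaces this periodogram with an eigendecomposition of the signal's correlation matrix; the sketch below uses a synthetic series and illustrative parameters:

```python
import numpy as np

fps = 30.0
t = np.arange(0, 20.0, 1 / fps)             # a short 20 s window
signal = np.sin(2 * np.pi * (18 / 60) * t)  # simulate 18 bpm breathing

# Periodogram of the de-meaned signal.
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(signal.size, d=1 / fps)

# Restrict to a plausible breathing band (6-60 bpm = 0.1-1.0 Hz).
band = (freqs >= 0.1) & (freqs <= 1.0)
rr_bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
print(rr_bpm)  # ~18 breaths per minute
```

The 20 s window gives a frequency resolution of 0.05 Hz (3 bpm), which illustrates why short windows favor subspace methods such as MUSIC over the raw FFT.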

3.1.6. Performance Validation Method on RR

Testing of RR is carried out by comparing the system output with reference equipment such as apnea monitors, sleep diagnostic equipment, respiratory belts, and other respiratory rate measuring devices. For example, in one study [52], the GY-6620 (South China Medical and Electrical Technology Co., Ltd., Zhengzhou, China) was used as the comparison. The GY-6620 is used in polysomnography, or sleep tests, and can record body activity during sleep, including RR. In another study [50], the SOMNOlab2 (Weinman GmbH, Hamburg, Germany) was used as the reference. This device measures body activity and records thoracic movements based on piezo plethysmography. In their three studies [42,47,48], Negishi et al. also used the same validation system, namely a respiratory effort belt; however, the model is listed in only one study [48], namely the DL-231 (S&ME, Tokyo, Japan). A similar force-sensor-based respiratory belt, the Go Direct Respiration Belt, was also used in another study [54]; this belt was set to record ten respiration samples per second for 5400 s. The study by Mutlu et al. [44] also uses a respiratory belt to obtain the reference value. Unfortunately, not all studies compare their results with standard medical equipment; others use statistical calculations as the performance test, such as scatter plots, Bland-Altman plots, or other statistical methods.
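The Bland-Altman statistics mentioned above reduce to two numbers: the mean bias between paired readings and the 95% limits of agreement. A minimal sketch with synthetic paired RR values (not data from any study):

```python
import numpy as np

# Hypothetical paired readings: thermal-camera RR vs. reference belt RR.
camera = np.array([14.0, 16.5, 12.0, 18.0, 15.5, 13.0, 17.0, 15.0])
reference = np.array([14.5, 16.0, 12.5, 18.5, 15.0, 13.5, 16.5, 15.5])

diff = camera - reference
bias = diff.mean()             # systematic offset between devices
loa = 1.96 * diff.std(ddof=1)  # half-width of 95% limits of agreement

print(f"bias = {bias:+.2f} bpm")
print(f"limits of agreement = {bias - loa:.2f} to {bias + loa:.2f} bpm")
```

In a Bland-Altman plot, `diff` is plotted against the pairwise means, with horizontal lines at the bias and at bias ± 1.96 SD; devices agree well when most points fall inside those limits.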

3.2. Heart Rate

Studies that use thermal cameras to measure heart rate tend to be less common than those measuring respiratory rate. Many researchers use RGB cameras rather than thermal cameras for heart rate measurement. Heart rate measurement with an RGB camera utilizes changes in skin color that can be observed in one of the three color channels of the RGB image. With thermal cameras, discriminant characteristics can be obtained using one of the two most popular methods: the blood-perfusion temperature changes of particular pixels [49,78,79] or analysis of head movement based on ballistocardiography (BCG) [80,81].
Similar to respiratory rate measurement, obtaining a heart rate signal from a thermal camera begins with capturing the images, pre-processing them, detecting the ROI, and tracking the ROI so that it remains stable under movement.
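Once an ROI is fixed, the raw physiological signal is simply the per-frame mean temperature inside that region. A simplified sketch with synthetic frames and a stationary subject (real systems add the ROI detection and tracking described above):

```python
import numpy as np

def roi_time_series(frames, roi):
    """Mean temperature inside a fixed ROI for each frame.

    frames : (T, H, W) array of radiometric temperature values
    roi    : (top, bottom, left, right) pixel bounds of the region of interest
    """
    top, bottom, left, right = roi
    return frames[:, top:bottom, left:right].mean(axis=(1, 2))

# Synthetic stack: a 1 Hz perfusion-like modulation inside the ROI, 10 FPS
T, H, W = 100, 64, 64
ts = np.arange(T) / 10.0
frames = np.full((T, H, W), 30.0)
frames[:, 20:30, 25:40] += 0.1 * np.sin(2 * np.pi * 1.0 * ts)[:, None, None]
signal = roi_time_series(frames, (20, 30, 25, 40))
```

The resulting one-dimensional series is what the filtering and spectral-analysis steps below operate on.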
Some studies describe the camera specifications used in their research. For example, Bennett et al. [79] used a FLIR-A camera with 640 × 480 pixels at a 60 FPS frame rate, whereas Kim et al. [49] used a FLIR T430sc camera with 320 × 240 pixels at 12 FPS. Gault et al. [78], however, used pre-recorded thermal videos of ten subjects without further detail about the camera specification.
For studies using temperature-change methods, the ROI is selected as a region of high blood-vessel temperature on the skin [49,78] or on the chest [79]. In contrast, the head-movement-based methods use the entire head as the ROI and track its motion [80,81].
Some studies reported that the unprocessed signal was very noisy and its discriminant characteristics almost imperceptible, so multiple filters were applied to enhance the signal quality. For example, Kim et al. [49] converted the time-series signal into the frequency domain using the Fourier transform, while others [79,80,81] applied a bandpass filter to extract the heartbeat signal and count the heart rate.
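A minimal sketch of the band-pass-plus-spectral-analysis approach described here (not any specific study's implementation; the band limits and the synthetic test signal are illustrative assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate(signal, fs, low=0.75, high=3.0):
    """Band-pass to the cardiac band (0.75-3 Hz, i.e. 45-180 bpm),
    then take the dominant FFT frequency as the heart rate in bpm."""
    b, a = butter(2, [low, high], btype="band", fs=fs)
    filtered = filtfilt(b, a, signal)
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fs)
    band = (freqs >= low) & (freqs <= high)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic 1.2 Hz (72 bpm) pulse buried in noise: 30 s at 30 FPS
fs = 30.0
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)
raw = 0.1 * np.sin(2 * np.pi * 1.2 * t) + 0.3 * rng.standard_normal(t.size)
hr = estimate_heart_rate(raw, fs)
```

With 30 s of data, the FFT bin spacing is 1/30 Hz, i.e. 2 bpm, which bounds the resolution of this spectral estimate.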
Studies relying on the head-movement method use almost identical processing steps. Both Li et al. [80] and Balakrishnan et al. [81] applied temporal filtering to the signals obtained from the head-movement trajectories, then used principal component analysis (PCA) to isolate the periodic component caused by the heartbeat. Finally, peak detection determines the heart rate.
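The sequence of trajectories, PCA, and peak detection can be sketched as follows. This is a simplified illustration with synthetic trajectories, not the code of [80] or [81], and the temporal band-pass step is omitted for brevity:

```python
import numpy as np
from scipy.signal import find_peaks

def hr_from_trajectories(tracks, fs):
    """tracks: (T, N) vertical positions of N tracked head points.
    PCA (via SVD) picks the strongest common oscillation across points;
    the mean spacing between detected peaks gives the heart rate."""
    centered = tracks - tracks.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    component = centered @ vt[0]            # first principal component
    # Peaks at least 0.4 s apart (max ~150 bpm), above the zero baseline
    peaks, _ = find_peaks(component, distance=int(0.4 * fs), height=0)
    periods = np.diff(peaks) / fs           # seconds between beats
    return 60.0 / periods.mean()

# Synthetic data: 8 head points sharing a 1 Hz (60 bpm) oscillation, 20 s
fs = 30.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(1)
pulse = np.sin(2 * np.pi * 1.0 * t)
tracks = pulse[:, None] * rng.uniform(0.5, 1.0, 8) \
         + 0.02 * rng.standard_normal((t.size, 8))
hr = hr_from_trajectories(tracks, fs)
```

PCA can return the component with either sign; because peak spacing is what matters, the estimated rate is unaffected.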
Each study reports promising results close to the reference ground truth: Kim et al. [49] obtained an average accuracy of 95.48%, Gault et al. [78] reached 90% accuracy, Li et al. [80] had a mean error of 2.7%, and Balakrishnan et al. [81] achieved a mean error of 1.5%. These results were obtained by comparison against contact-based devices such as pulse sensors and ECG; unfortunately, no detailed information about the reference equipment model or specification is given.

3.3. Body Temperature

In most cases, body temperature is measured using an LWIR-type thermal camera. For example, Pan et al. [36] used the P384-20 thermal camera, Rao et al. [37] used the Mobotix 16TR camera, which combines an RGB and an LWIR thermal camera, and Lewicki et al. [41] used the FLIR Lepton 3.5.
Several steps are needed to process the image and compute a body temperature value. The process starts with obtaining the image from the thermal camera, followed by determining the ROI. Some studies [37,41] used a combination of RGB and thermal cameras, which requires a calibration process between the two images; the RGB camera's role is to provide facial-landmark information that helps select the ROI. Rao et al. [37] also implemented advanced algorithms in their system, such as a prioritization algorithm to decide which person should be measured and background removal to separate the person from the background. For tracking, Rao et al. [37] used a neural-network-based head tracker, while another study [36] used an elliptical head-tracking method.
Matching the two images from the RGB and thermal cameras also requires a series of processes. Instead of an affine transformation, Lewicki et al. [41] used a coarse-grained field-of-view method, while Rao et al. [37] used a manual-offset approach with dynamic frame alignment.
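For reference, the affine-transformation alternative mentioned here can be estimated by least squares from a handful of landmark correspondences. A generic sketch with hypothetical point pairs, not the alignment actually used in [37] or [41]:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine map (2x3 matrix) from point pairs so that
    dst ~= A @ [x, y, 1]. Needs >= 3 non-collinear correspondences."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    X = np.hstack([src, np.ones((len(src), 1))])   # (N, 3) homogeneous coords
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A.T                                     # (2, 3) affine matrix

def apply_affine(A, pts):
    pts = np.asarray(pts, float)
    return pts @ A[:, :2].T + A[:, 2]

# Hypothetical landmarks: RGB pixels mapped into thermal-image pixels
rgb_pts     = [(100, 120), (400, 118), (250, 300), (120, 280)]
thermal_pts = [(40, 55), (190, 54), (115, 145), (50, 135)]
A = fit_affine(rgb_pts, thermal_pts)
mapped = apply_affine(A, rgb_pts)
```

An affine model captures translation, rotation, scaling, and shear between the two sensors but not lens distortion, which is one reason some systems fall back on simpler offset-based alignment.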
Some authors also mention challenges related to accuracy. Rao et al. [37] implemented a temperature-correction algorithm to compensate for the effect of measurement distance, using regression and a multi-layer perceptron. The camera also needs to be calibrated in order to achieve a minimal error [82].
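A distance-compensation model in the spirit of this correction can be as simple as a linear regression of apparent temperature against distance. The calibration numbers below are hypothetical, and [37] additionally used a multi-layer perceptron:

```python
import numpy as np

# Hypothetical calibration data: apparent forehead temperature drops
# as the subject moves away from the camera.
distance_m = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
measured_c = np.array([36.4, 36.1, 35.8, 35.5, 35.2])

# Linear fit of the distance-induced bias (slope in deg C per metre)
slope, intercept = np.polyfit(distance_m, measured_c, 1)

def compensate(measured, distance, ref_distance=0.5):
    """Correct a reading back to what it would read at the reference distance."""
    return measured - slope * (distance - ref_distance)
```

With this toy calibration, a reading of 35.2 °C taken at 2.5 m is corrected back to the 36.4 °C that would have been read at 0.5 m.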
Each study has its own procedure for deriving the body temperature value. Pan et al. [36] took the highest temperature among all points on the face, Rao et al. [37] used algorithms that prioritize the eye and forehead regions followed by the face and head, and Lewicki et al. [41] used the average of all points on the face.
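The two simplest conventions, maximum [36] versus average [41] over the face ROI, reduce to a one-line difference. A trivial sketch on a made-up 3 × 3 ROI:

```python
import numpy as np

def face_temperature(thermal_roi, method="max"):
    """Reduce a face ROI of radiometric temperatures to a single
    body-temperature value, using either convention from the studies."""
    roi = np.asarray(thermal_roi, float)
    return roi.max() if method == "max" else roi.mean()

face = np.array([[36.1, 36.4, 36.2],
                 [36.5, 36.9, 36.6],   # warmest pixels, e.g. near the canthi
                 [36.0, 36.3, 36.1]])
```

The maximum is more robust to cool regions such as hair or glasses inside the ROI, while the mean is less sensitive to single-pixel noise spikes.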
The eye’s inner canthus (medial canthus) is known as the most accurate facial region for measuring body temperature [83,84]. Its temperature is almost identical to that of the eardrum [85,86] and is also close to rectal temperature, the most widely used reference for inner core body temperature [87]. However, because selecting the inner-canthus ROI in thermal images is challenging and requires advanced methods (for example, machine learning or cross-correlation [88]), several studies [87,89] instead chose the highest temperature on the face or forehead as the body temperature, which is also reasonably accurate.
Several methods were used to validate the system results against a reference. Rao et al. [37] used black-body-referenced temperature measurements as the ground truth and obtained 100% sensitivity and 96.9% specificity with 105 subjects. Pan et al. [36] used an infrared ear thermometer as the reference and achieved a CAND value above 0.9.

4. Discussion

This systematic review provides an overview of studies that use thermal cameras to measure and monitor vital signs of the human body. This discussion presents the advantages, challenges, and future trends and works.

4.1. Advantages of Thermal Camera-Based Physiological Measurement

A thermal camera can measure several parameters such as RR, HR, and body temperature, so it has potential for non-invasive, continuous measurement in applications such as neonatal ICU monitoring, long-term monitoring, fitness, and health screening. During the COVID-19 pandemic in particular, non-contact body temperature measurement offers a hygienic alternative to contact-based methods. Moreover, a thermal camera can operate in low-light conditions. The thermal camera-based non-contact approach therefore offers more flexibility and convenience for the patient.
Since thermal cameras overcome the drawbacks of contact-based sensors, several clinical applications already use this method, for instance continuous monitoring of neonates [4], classification of affective states [90], and exercise monitoring [91]. The thermal camera is also effective for sleep monitoring [54] and for estimating movement during sleep [5]. Other implementations include human thermal-comfort modeling [92], lie detection [93,94], assessment of mood and stress-related disorders [95], sober/drunk classification [96], and many more.

4.2. Challenges of Thermal Camera-Based Physiological Measurement

A significant challenge of any camera-based non-contact method is its susceptibility to movement, both of objects in the frame and of the camera itself; research on ROI tracking is needed to overcome this shortcoming. Additionally, separating partially obscured individuals, or objects of the same temperature, can be challenging in thermal images because their pixels have the same intensity [97]. In these cases, including depth information or color edges can aid disambiguation.
Regarding applicability, there is a concern about the accuracy and reliability of non-contact thermal-camera measurement compared with current standard medical methods. Although the prototypes in this systematic review show good accuracy, they do not yet exceed the performance of existing standard medical equipment, because the developed systems have not been tested under real health care conditions and have not been standardized or medically certified. Measurement under realistic scenarios is therefore an important direction to take. Several studies emphasize the importance of further clinical trials to ensure the reliability of the developed systems. These tests need to combine several scenarios to assess reliability in measuring RR, for example the use of various bed covers or blankets [54], the involvement of diverse patient demographics [40], and further clinical trials in health institutions such as hospitals or clinics.

4.3. Future Trends and Works

Several suggestions and prospects need to be considered to improve the future of thermal cameras for physiological signal measurements.

4.3.1. Healthcare Applications

In healthcare, the telemedicine revolution is shifting illness prediction, prevention, and treatment from a hospital-centered, reactive paradigm to a person-centered one. Driven by the COVID-19 pandemic and a growing need for home healthcare, telemedicine has the potential to transform and enhance healthcare delivery and accessibility. Non-contact physiological signal monitoring is a significant development supporting current telemedicine technology [98], including the use of thermal cameras for non-contact monitoring. Thermal cameras also offer hygiene, since no components are attached to the body and contact with the user is minimized.
Another critical challenge for the wide adoption of such systems in health care is low-cost implementation. Most of the studies in this systematic review do not detail development costs. Research on low-cost development is essential, because cost is a key consideration when deploying medical devices in developing countries [99,100].

4.3.2. Machine Learning

Machine learning can be used to enhance thermal images. As discussed in the previous section, thermal cameras generally have low resolution; the resulting image can be up-scaled to a higher resolution using the Thermal Image Enhancement method based on a convolutional neural network (TEN-CNN) [101]. A further challenge in applying machine learning is increasing the variety of the datasets used, to improve inference-engine capability and accuracy. This is also noted by Lyra et al. [28]: the YOLOv4 model they use relies on large-scale datasets to enhance its inference ability. Dataset improvements also need to consider variation across races, ages, genders, weights and heights, and other aspects.
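To give a sense of the resolution gap such learned methods address, the naive alternative is plain interpolation. The sketch below upscales a small synthetic frame with bicubic interpolation; this is merely the baseline that learned approaches such as TEN-CNN are designed to outperform, not the method itself:

```python
import numpy as np
from scipy.ndimage import zoom

# A synthetic 60x80 frame, typical of small LWIR sensors, upscaled 4x by
# bicubic spline interpolation (order=3); 'nearest' avoids edge artifacts.
rng = np.random.default_rng(2)
lowres = 30.0 + rng.random((60, 80))
upscaled = zoom(lowres, 4, order=3, mode="nearest")
```

Interpolation can only smooth existing pixels; a learned super-resolution model instead hallucinates plausible high-frequency detail from patterns seen in its training data, which is why dataset variety matters so much.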

4.3.3. Multi-Parameter and Data Fusion

Multi-parameter measurement could be future work for thermal camera-based physiological measurement systems. Measuring several parameters with one system will benefit patients, especially for vital signs such as body temperature, blood pressure, heart rate, and respiratory rate. One study [102] that evaluated a multi-parameter wireless wearable sensor for vital-sign monitoring showed excellent and effective results; however, few similar systems are available in a non-invasive form.
Multi-parameter measurement focuses on the variety of parameters measured. In contrast, data fusion combines several input data streams within the same process to produce a single system output. In this systematic review, several studies [40,42,47] use data fusion in the form of combining several types of cameras. The aim of combining various sensors and data is to cancel each sensor's or method's disadvantages while exploiting its strengths. Further work on combining various sensors should be pursued to achieve better signal measurement.
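A minimal form of such fusion is a quality-weighted average of per-camera estimates. This is an illustrative sketch with hypothetical numbers; the cited studies use more elaborate fusion schemes:

```python
import numpy as np

def fuse_estimates(estimates, qualities):
    """Quality-weighted average of per-sensor rate estimates; a sensor
    with zero quality (e.g. its ROI was lost) contributes nothing."""
    estimates = np.asarray(estimates, float)
    weights = np.asarray(qualities, float)
    return float(np.sum(weights * estimates) / np.sum(weights))

# Hypothetical RR estimates (breaths/min) from thermal, RGB and NIR channels,
# each paired with a signal-quality score in [0, 1]
rr = fuse_estimates([15.2, 14.6, 18.0], [0.9, 0.8, 0.1])
```

Here the low-quality third channel barely influences the fused value, which stays close to the two trusted estimates.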

5. Conclusions

In this paper, we have reviewed the existing literature on using thermal cameras to measure the human body's respiratory rate, heart rate, and body temperature. The general processing stages for deriving physiological signal measurements from thermal images were discussed, compared, and evaluated to identify advantages and challenges. The advantages of using a thermal camera to measure physiological signals include comfort and convenience for the patient due to its non-invasive nature, hygiene, and the ability to capture images in low-light conditions. The challenges include reducing motion artifacts and increasing the accuracy and reliability of the physiological signal measurement system.
This systematic review contributes a comprehensive overview by highlighting current methodological concerns in using thermal cameras to measure physiological signals, and it can serve as an initial reference for researchers to identify existing research gaps. In addition, we outline several future development directions, including integrating multi-parameter systems to improve functionality, using data fusion and machine learning to improve measurement accuracy and reliability, and developing low-cost thermal imaging applications to increase adoption.

Author Contributions

Conceptualization: M.C.T.M. and Y.-H.L.; studies screening: M.C.T.M. and S.-J.L.; original draft preparation: M.C.T.M., Y.-H.L. and S.-J.L.; contextual review: Y.-H.L. and N.-K.C.; proofreading: S.-J.L., Y.-H.L. and N.-K.C.; editing: M.C.T.M., Y.-H.L. and N.-K.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by a research grant from the Ministry of Science and Technology, Taiwan for academic research under Grant MOST 109-2637-E-011-002- and Grant MOST 110-2221-E-011-123-, and financially supported by the Taiwan Building Technology Center from The Featured Areas Research Center Program within the framework of the Higher Education Sprout Project by the Ministry of Education in Taiwan.

Data Availability Statement

The data used in this review are from published primary studies available in the public domain.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zadeh, H.G.; Masoumzadeh, S.; Nour, S.; Kianersi, S.; Eyvazi Zadeh, Z.; Joneidi Shariat Zadeh, F.; Haddadnia, J.; Khamseh, F.; Ahmadinejad, N. Breast cancer diagnosis by thermal imaging in the fields of medical and artificial intelligence sciences: Review article. Tehran Univ. Med. J. 2016, 74, 377–385. [Google Scholar]
  2. Balaji, A.S.; Makaram, N.; Balasubramanian, S.; Swaminathan, R. Analysis of pre- and post- fatigue thermal profiles of the dominant hand using infrared imaging. ACM Int. Conf. Proceeding Ser. 2017, 1, 53–57. [Google Scholar]
  3. Cardone, D.; Perpetuini, D.; Filippini, C.; Spadolini, E.; Mancini, L.; Chiarelli, A.M.; Merla, A. Driver stress state evaluation by means of thermal imaging: A supervised machine learning approach based on ECG signal. Appl. Sci. 2020, 10, 5673. [Google Scholar] [CrossRef]
  4. Topalidou, A.; Ali, N.; Sekulic, S.; Downe, S. Thermal imaging applications in neonatal care: A scoping review. BMC Pregnancy Childbirth 2019, 19, 381. [Google Scholar] [CrossRef]
  5. Mohammadi, S.M.; Enshaeifar, S.; Hilton, A.; Dijk, D.-J.; Wells, K. Transfer Learning for Clinical Sleep Pose Detection Using a Single 2D IR Camera. IEEE Trans. Neural Syst. Rehabil. Eng. 2021, 29, 290–299. [Google Scholar] [CrossRef] [PubMed]
  6. Mann, D.M.; Chen, J.; Chunara, R.; Testa, P.A.; Nov, O. COVID-19 transforms health care through telemedicine: Evidence from the field. J. Am. Med. Inform. Assoc. 2020, 27, 1132–1135. [Google Scholar] [CrossRef]
  7. Lee, S.M.; Lee, D. Opportunities and challenges for contactless healthcare services in the post-COVID-19 Era. Technol. Forecast. Soc. Chang. 2021, 167, 337–339. [Google Scholar] [CrossRef] [PubMed]
  8. Kim, H.; Jeong, J. Non-Contact Measurement of Human Respiration and Heartbeat Using W-band Doppler Radar Sensor. Sensors 2020, 20, 5209. [Google Scholar] [CrossRef]
  9. McDuff, D.; Nishidate, I.; Nakano, K.; Haneishi, H.; Aoki, Y.; Tanabe, C.; Niizeki, K.; Aizu, Y. Non-contact imaging of peripheral hemodynamics during cognitive and psychological stressors. Sci. Rep. 2020, 10, 10884. [Google Scholar] [CrossRef]
  10. Hall, T.; Lie, D.Y.C.; Nguyen, T.Q.; Mayeda, J.C.; Lie, P.E.; Lopez, J.; Banister, R.E. Non-Contact Sensor for Long-Term Continuous Vital Signs Monitoring: A Review on Intelligent Phased-Array Doppler Sensor Design. Sensors 2017, 17, 2632. [Google Scholar] [CrossRef] [Green Version]
  11. Cheshire, W.P. Thermoregulatory disorders and illness related to heat and cold stress. Auton. Neurosci. 2016, 196, 91–104. [Google Scholar] [CrossRef] [Green Version]
  12. Fu, M.; Weng, W.; Chen, W.; Luo, N. Review on modeling heat transfer and thermoregulatory responses in human body. J. Therm. Biol. 2016, 62, 189–200. [Google Scholar] [CrossRef] [PubMed]
  13. Ivanov, K. The development of the concepts of homeothermy and thermoregulation. J. Therm. Biol. 2006, 31, 24–29. [Google Scholar] [CrossRef]
  14. Barr, E.S. The Infrared Pioneers—II. Macedonio Melloni. Infrared Phys. 1962, 2, 67–74. [Google Scholar] [CrossRef]
  15. Wikipedia Contributors. Thermographic Camera. Available online: https://en.wikipedia.org/w/index.php?title=Thermographic_camera&oldid=1052657772 (accessed on 30 October 2021).
  16. Tattersall, G.J. Infrared thermography: A non-invasive window into thermal physiology. Comp. Biochem. Physiol. Part A Mol. Integr. Physiol. 2016, 202, 78–98. [Google Scholar] [CrossRef] [PubMed]
  17. Howell, K.J.; Smith, R.E. Guidelines for specifying and testing a thermal camera for medical applications. Thermol. Int. 2009, 19, 5–12. [Google Scholar]
  18. Bestsennyy, O.; Gilbert, G.; Harris, A.; Rost, J. Telehealth: A Quarter-Trillion-Dollar Post-COVID-19 Reality? McKinsey 2020. Available online: https://www.mckinsey.com/industries/healthcare-systems-and-services/our-insights/telehealth-a-quarter-trillion-dollar-post-covid-19-reality (accessed on 4 November 2021).
  19. Mikulska, D. Contemporary applications of infrared imaging in medical diagnostics. Ann. Acad. Med. Stetin. 2006, 52, 35–40. [Google Scholar]
  20. Lahiri, B.; Bagavathiappan, S.; Jayakumar, T.; Philip, J. Medical applications of infrared thermography: A review. Infrared Phys. Technol. 2012, 55, 221–235. [Google Scholar] [CrossRef]
  21. El, E.N.; Una, D. Applications of Infrared Thermography in Sports: A Review. Rev. Int. Med. Cienc. Act. Física Deporte 2015, 15, 805–824. [Google Scholar]
  22. Znamenskaya, V.V.S.I.A.; Koroteeva, E.Y.; Khakhalin, A.V. Thermographic visualization and remote control of dynamical processes around a facial area. Sci. Vis. 2016, 8, 122–131. [Google Scholar]
  23. Moreira, D.G.; Costello, J.T.; Brito, C.J.; Adamczyk, J.G.; Ammer, K.; Bach, A.J.; Costa, C.M.; Eglin, C.; Fernandes, A.A.; Fernández-Cuevas, I.; et al. Thermographic imaging in sports and exercise medicine: A Delphi study and consensus statement on the measurement of human skin temperature. J. Therm. Biol. 2017, 69, 155–162. [Google Scholar] [CrossRef]
  24. Pan, C.-T.; Francisco, M.D.; Yen, C.-K.; Wang, S.-Y.; Shiue, Y.-L. Vein Pattern Locating Technology for Cannulation: A Review of the Low-Cost Vein Finder Prototypes. Sensors 2019, 19, 3573. [Google Scholar] [CrossRef] [Green Version]
  25. Aggarwal, N.; Garg, M.; Dwarakanathan, V.; Gautam, N.; Kumar, S.S.; Jadon, R.S.; Gupta, M.; Ray, A. Diagnostic accuracy of non-contact infrared thermometers and thermal scanners: A systematic review and meta-analysis. J. Travel Med. 2020, 27, taaa193. [Google Scholar] [CrossRef] [PubMed]
  26. Foster, J.; Lloyd, A.B.; Havenith, G. Non-contact infrared assessment of human body temperature: The journal Temperature toolbox. Temperature 2021, 1–14. Available online: https://www.tandfonline.com/action/showAxaArticles?journalCode=ktmp20 (accessed on 4 November 2021). [CrossRef]
  27. He, Y.; Deng, B.; Wang, H.; Cheng, L.; Zhou, K.; Cai, S.; Ciampa, F. Infrared machine vision and infrared thermography with deep learning: A review. Infrared Phys. Technol. 2021, 116, 103754. [Google Scholar] [CrossRef]
  28. Lyra, S.; Mayer, L.; Ou, L.; Chen, D.; Timms, P.; Tay, A.; Chan, P.; Ganse, B.; Leonhardt, S.; Antink, C.H. A Deep Learning-Based Camera Approach for Vital Sign Monitoring Using Thermography Images for ICU Patients. Sensors 2021, 21, 1495. [Google Scholar] [CrossRef]
  29. Nowara, E.M.; Duff, D.M. Combating the Impact of Video Compression on Non-Contact Vital Sign Measurement Using Supervised Learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Seoul, Korea, 27–28 October 2019. [Google Scholar]
  30. Kirimtat, A.; Krejcar, O.; Selamat, A.; Herrera-Viedma, E. FLIR vs SEEK thermal cameras in biomedicine: Comparative diagnosis through infrared thermography. BMC Bioinform. 2020, 21 (Suppl. S2), 22. [Google Scholar]
  31. Khare, S.; Singh, M.; Kaushik, B.K. Development and validation of a quantitative model for the subjective and objective minimum resolvable temperature difference of thermal imaging systems. Opt. Eng. 2019, 58, 104111. [Google Scholar] [CrossRef]
  32. Kim, D.-I.; Kim, G.; Kim, G.-H.; Chang, K.S. Responsivity and Noise Evaluation of Infrared Thermal Imaging Camera. J. Korean Soc. Nondestruct. Test. 2013, 33, 342–348. [Google Scholar] [CrossRef] [Green Version]
  33. Da-xing, S.M.-M.P. Evaluation of Performance of Infrared Systems Using Noise Equivalent Temperature Difference. Infrared 2010, 31, 22–25. [Google Scholar]
  34. Li, Y.; Pan, D.; Yang, C.; Luo, Y. NETD test of high-sensitivity infrared camera. In Proceedings of the 3rd International Symposium on Advanced Optical Manufacturing and Testing Technologies: Optical Test and Measurement Technology and Equipment, Chengdu, China, 8–12 July 2007; Volume 6723, pp. 836–840. [Google Scholar]
  35. Villa, E.; Arteaga-Marrero, N.; Ruiz-Alzola, J. Performance Assessment of Low-Cost Thermal Cameras for Medical Applications. Sensors 2020, 20, 1321. [Google Scholar] [CrossRef] [Green Version]
  36. Pan, C.Y.; Huang, C.S.; Horng, G.J.; Peng, P.L.; Jong, G.J. Infrared Image Processing for a Physiological Information Telemetry System. Wirel. Pers. Commun. 2015, 83, 3181–3208. [Google Scholar] [CrossRef]
  37. Rao, K.; Coviello, G.; Feng, M.; Debnath, B.; Hsiung, W.-P.; Sankaradas, M.; Yang, Y.; Po, O.; Drolia, U.; Chakradhar, S. F3S: Free Flow Fever Screening. In Proceeding of the 7th IEEE International Conference on Smart Computing, Irvine, CA, USA, 23–27 August 2021; pp. 276–285. [Google Scholar]
  38. Budzier, H.; Gerlach, G. Calibration of uncooled thermal infrared cameras. J. Sens. Sens. Syst. 2015, 4, 187–197. [Google Scholar] [CrossRef] [Green Version]
  39. König, S.; Gutschwager, B.; Taubert, R.D.; Hollandt, J. Metrological characterization and calibration of thermographic cameras for quantitative temperature measurement. J. Sens. Sens. Syst. 2020, 9, 425–442. [Google Scholar] [CrossRef]
  40. Scebba, G.; Da Poian, G.; Karlen, W. Multispectral Video Fusion for Non-Contact Monitoring of Respiratory Rate and Apnea. IEEE Trans. Biomed. Eng. 2021, 68, 350–359. [Google Scholar] [CrossRef] [PubMed]
  41. Lewicki, T.; Liu, K. AI thermometer for temperature screening: Demo abstract. In Proceedings of the 18th Conference on Embedded Networked Sensor Systems, Yokohama, Japan, 16–19 November 2020; pp. 597–598. [Google Scholar]
  42. Negishi, T.; Abe, S.; Matsui, T.; Liu, H.; Kurosawa, M.; Kirimoto, T.; Sun, G. Contactless Vital Signs Measurement System Using RGB-Thermal Image Sensors and Its Clinical Screening Test on Patients with Seasonal Influenza. Sensors 2020, 20, 2171. [Google Scholar] [CrossRef] [Green Version]
  43. Jagadev, P.; Giri, L.I. Non-contact monitoring of human respiration using infrared thermography and machine learning. Infrared Phys. Technol. 2020, 104, 103117. [Google Scholar] [CrossRef]
  44. Mutlu, K.; Rabell, J.E.; del Olmo, P.M.; Haesler, S. IR thermography-based monitoring of respiration phase without image segmentation. J. Neurosci. Methods 2018, 301, 1–8. [Google Scholar] [CrossRef] [PubMed]
  45. Jagadev, P.; Giri, L.I. Human respiration monitoring using infrared thermography and artificial intelligence. Biomed. Phys. Eng. Express 2020, 6, 35007. [Google Scholar] [CrossRef]
  46. Goldman, L.J. Nasal airflow and thoracoabdominal motion in children using infrared thermographic video processing. Pediatr. Pulmonol. 2012, 47, 476–486. [Google Scholar] [CrossRef]
  47. Negishi, T.; Sun, G.; Liu, H.; Sato, S.; Matsui, T.; Kirimoto, T. Stable Contactless Sensing of Vital Signs Using RGB-Thermal Image Fusion System with Facial Tracking for Infection Screening. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2018, 2018, 4371–4374. [Google Scholar]
  48. Negishi, T.; Sun, G.; Sato, S.; Liu, H.; Matsui, T.; Abe, S.; Nishimura, H.; Kirimoto, T. Infection screening system using thermography and CCD camera with good stability and swiftness for non-contact vital-signs measurement by feature matching and MUSIC algorithm. In Proceedings of the 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019. [Google Scholar]
  49. Kim, Y.; Park, Y.; Kim, J.; Lee, E.C. Remote heart rate monitoring method using infrared thermal camera. Int. J. Eng. Res. Technol. 2018, 11, 493–500. [Google Scholar]
  50. Pereira, C.B.; Yu, X.; Goos, T.; Reiss, I.; Orlikowsky, T.; Heimann, K.; Venema, B.; Blazek, V.; Leonhardt, S.; Teichmann, D. Noncontact Monitoring of Respiratory Rate in Newborn Infants Using Thermal Imaging. IEEE Trans. Biomed. Eng. 2018, 66, 1105–1114. [Google Scholar] [CrossRef]
  51. Hu, M.-H.; Zhai, G.-T.; Li, D.; Fan, Y.-Z.; Chen, X.-H.; Yang, X.-K. Synergetic use of thermal and visible imaging techniques for contactless and unobtrusive breathing measurement. J. Biomed. Opt. 2017, 22, 1. [Google Scholar] [CrossRef]
  52. Chen, L.; Hu, M.; Liu, N.; Zhai, G.; Yang, S.X. Collaborative use of RGB and thermal imaging for remote breathing rate measurement under realistic conditions. Infrared Phys. Technol. 2020, 111, 103504. [Google Scholar] [CrossRef]
  53. Hu, M.; Zhai, G.; Li, D.; Fan, Y.; Duan, H.; Zhu, W.; Yang, X. Combination of near-infrared and thermal imaging techniques for the remote and simultaneous measurements of breathing and heart rates under sleep situation. PLoS ONE 2018, 13, e0190466. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  54. Jakkaew, P.; Onoye, T. Non-contact respiration monitoring and body movements detection for sleep using thermal imaging. Sensors 2020, 20, 6307. [Google Scholar] [CrossRef]
  55. Boyko, N.; Basystiuk, O.; Shakhovska, N. Performance Evaluation and Comparison of Software for Face Recognition, Based on Dlib and Opencv Library. In Proceedings of the 2018 IEEE Second International Conference on Data Stream Mining & Processing (DSMP), Lviv, Ukraine, 21–25 August 2018; pp. 478–482. [Google Scholar]
  56. King, D.E. Dlib-ml: A machine learning toolkit. J. Mach. Learn. Res. 2009, 10, 1755–1758. [Google Scholar]
  57. Weisstein, E.W. Affine Transformation. Available online: https://mathworld.wolfram.com/ (accessed on 25 July 2021).
  58. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571. [Google Scholar]
  59. Abdulmajeed, M.; Seyfi, L. Object recognition system based on oriented FAST and rotated BRIEF. In Proceedings of the 2nd International Symposium on Innovative Approaches in Scientific Studies, Konya, Turkey, 30 November–2 December 2018. [Google Scholar]
  60. Viola, P.; Jones, M. Rapid object detection using a boosted cascade of simple features. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, HI, USA, 8–14 December 2001. [Google Scholar]
  61. Bennett, S.L.; Goubran, R.; Knoefel, F. Comparison of motion-based analysis to thermal-based analysis of thermal video in the extraction of respiration patterns. In Proceedings of the 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Jeju, Korea, 11–15 July 2017; pp. 3835–3839. [Google Scholar]
  62. Scholkmann, F.; Boss, J.; Wolf, M. An Efficient Algorithm for Automatic Peak Detection in Noisy Periodic and Quasi-Periodic Signals. Algorithms 2012, 5, 588–603. [Google Scholar] [CrossRef] [Green Version]
  63. Palshikar, G. Simple algorithms for peak detection in time-series. In Proceedings of the 1st Int. Conf. Advanced Data Analysis, Business Analytics and Intelligence, Ahmedabad, India, 6–7 June 2009; Volume 122. [Google Scholar]
  64. Sandya, H.B.; Hemanth, K.P.; Himanshi, P. Fuzzy rule based feature extraction and classification of time series signal. Int. J. Soft Comput. Eng. 2013, 3, 2231–2307. [Google Scholar]
  65. Huang, S.; Tang, J.; Dai, J.; Wang, Y. Signal Status Recognition Based on 1DCNN and Its Feature Extraction Mechanism Analysis. Sensors 2019, 19, 2018. [Google Scholar] [CrossRef] [Green Version]
  66. Laguna, P.; Moody, G.B.; Mark, R.G. Power spectral density of unevenly sampled data by least-square analysis: Performance and application to heart rate signals. IEEE Trans. Biomed. Eng. 1998, 45, 698–715. [Google Scholar] [CrossRef]
  67. Barandas, M.; Folgado, D.; Fernandes, L.; Santos, S.; Abreu, M.; Bota, P.; Liu, H.; Schultz, T.; Gamboa, H. TSFEL: Time Series Feature Extraction Library. SoftwareX 2020, 11, 100456. [Google Scholar] [CrossRef]
  68. Liu, H.; Allen, J.; Zheng, D.; Chen, F. Recent development of respiratory rate measurement technologies. Physiol. Meas. 2019, 40, 07TR01.
  69. Carol, K. Respiratory rate 1: Why measurement and recording are crucial. Nurs. Times 2018, 114, 23–24.
  70. Takamoto, H.; Nishine, H.; Sato, S.; Sun, G.; Watanabe, S.; Seokjin, K.; Asai, M.; Mineshita, M.; Matsui, T. Development and Clinical Application of a Novel Non-contact Early Airflow Limitation Screening System Using an Infrared Time-of-Flight Depth Image Sensor. Front. Physiol. 2020, 11, 552942.
  71. Flenady, T.; Dwyer, T.; Applegarth, J. Accurate respiratory rates count: So should you! Australas. Emerg. Nurs. J. 2017, 20, 45–47.
  72. Sartini, C.; Tresoldi, M.; Scarpellini, P.; Tettamanti, A.; Carcò, F.; Landoni, G.; Zangrillo, A. Respiratory Parameters in Patients With COVID-19 After Using Noninvasive Ventilation in the Prone Position Outside the Intensive Care Unit. JAMA 2020, 323, 2338–2340.
  73. Total Hospital Bed Occupancy (COVID-19)|SCDHEC. Available online: https://scdhec.gov/covid19/hospital-bed-capacity-covid-19 (accessed on 25 July 2021).
  74. Storck, K.; Karlsson, M.; Ask, P.; Loyd, D. Heat transfer evaluation of the nasal thermistor technique. IEEE Trans. Biomed. Eng. 1996, 43, 1187–1191.
  75. Hunsaker, D.H.; Riffenburgh, R.H. Snoring Significance in Patients Undergoing Home Sleep Studies. Otolaryngol. Head Neck Surg. 2006, 134, 756–760.
  76. Akbarian, S.; Ghahjaverestan, N.M.; Yadollahi, A.; Taati, B. Distinguishing Obstructive Versus Central Apneas in Infrared Video of Sleep Using Deep Learning: Validation Study. J. Med. Internet Res. 2020, 22, e17252.
  77. Wang, C.W.; Hunter, A.; Gravill, N.; Matusiewicz, S. Unconstrained video monitoring of breathing behavior and application to diagnosis of sleep apnea. IEEE Trans. Biomed. Eng. 2014, 61, 396–404.
  78. Gault, T.; Farag, A. A fully automatic method to extract the heart rate from thermal video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Louisville, KY, USA, 23–28 June 2013; pp. 336–341.
  79. Bennett, S.L.; Goubran, R.; Knoefel, F. Adaptive eulerian video magnification methods to extract heart rate from thermal video. In Proceedings of the 2016 IEEE International Symposium on Medical Measurements and Applications (MeMeA), Benevento, Italy, 15–18 May 2016; pp. 1–5.
  80. Li, F.; Zhao, Y.; Kong, L.; Dong, L.; Liu, M.; Hui, M.; Liu, X. A camera-based ballistocardiogram heart rate measurement method. Rev. Sci. Instrum. 2020, 91, 054105.
  81. Balakrishnan, G.; Durand, F.; Guttag, J. Detecting Pulse from Head Motions in Video. In Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 3430–3437.
  82. Netinant, P.; Vasprasert, P.; Rukhiran, M. Evaluations of Effective on LWIR Micro Thermal Camera IoT and Digital Thermometer for Human Body Temperatures. In Proceedings of the 2021 The 5th International Conference on E-Commerce, E-Business and E-Government, New York, NY, USA, 28 April 2021; pp. 20–24.
  83. Ring, E.F.J.; McEvoy, H.; Jung, A.; Zuber, J.; Machin, G. New standards for devices used for the measurement of human body temperature. J. Med. Eng. Technol. 2010, 34, 249–253.
  84. Mercer, J.B.; Ring, E.F.J. Fever screening and infrared thermal imaging: Concerns and guidelines. Thermol. Int. 2009, 19, 67–69.
  85. Kim, N.W.; Zhang, H.Y.; Yoo, J.-H.; Park, Y.S.; Song, H.J.; Yang, K.H. The Correlation Between Tympanic Membrane Temperature and Specific Region of Face Temperature. Quant. InfraRed Thermogr. Asia 2017, 16, 1–7.
  86. Yeoh, W.K.; Lee, J.K.W.; Lim, H.Y.; Gan, C.W.; Liang, W.; Tan, K.K. Re-visiting the tympanic membrane vicinity as core body temperature measurement site. PLoS ONE 2017, 12, e0174120.
  87. Dell’Isola, G.B.; Cosentini, E.; Canale, L.; Ficco, G.; Dell’Isola, M. Noncontact Body Temperature Measurement: Uncertainty Evaluation and Screening Decision Rule to Prevent the Spread of COVID-19. Sensors 2021, 21, 346.
  88. Strąkowska, M.; Strąkowski, R. Automatic eye corners detection and tracking algorithm in sequence of thermal medical images. Meas. Autom. Monit. 2015, 61, 199–202.
  89. Chen, H.-Y.; Chen, A.; Chen, C. Investigation of the Impact of Infrared Sensors on Core Body Temperature Monitoring by Comparing Measurement Sites. Sensors 2020, 20, 2885.
  90. Nhan, B.R.; Chau, T. Classifying Affective States Using Thermal Infrared Imaging of the Human Face. IEEE Trans. Biomed. Eng. 2010, 57, 979–987.
  91. Merla, A.; Mattei, P.A.; Di Donato, L.; Romani, G.L. Thermal Imaging of Cutaneous Temperature Modifications in Runners During Graded Exercise. Ann. Biomed. Eng. 2009, 38, 158–163.
  92. Tejedor, B.; Casals, M.; Gangolells, M.; Macarulla, M.; Forcada, N. Human comfort modelling for elderly people by infrared thermography: Evaluating the thermoregulation system responses in an indoor environment during winter. Build. Environ. 2020, 186, 107354.
  93. Pavlidis, I.T. Lie detection using thermal imaging. Def. Secur. 2004, XXVI, 270–279.
  94. Warmelink, L.; Vrij, A.; Mann, S.; Leal, S.; Forrester, D.; Fisher, R.P. Thermal imaging as a lie detection tool at airports. Law Hum. Behav. 2011, 35, 40–48.
  95. Engert, V.; Merla, A.; Grant, J.; Cardone, D.; Tusche, A.; Singer, T. Exploring the Use of Thermal Infrared Imaging in Human Stress Research. PLoS ONE 2014, 9, e90782.
  96. Koukiou, G.; Anastassopoulos, V. Neural networks for identifying drunk persons using thermal infrared imagery. Forensic Sci. Int. 2015, 252, 69–76.
  97. Gade, R.; Moeslund, T.B. Thermal cameras and applications: A survey. Mach. Vis. Appl. 2014, 25, 245–262.
  98. Zhao, F.; Li, M.; Tsien, J.Z. Technology platforms for remote monitoring of vital signs in the new era of telemedicine. Expert Rev. Med. Devices 2015, 12, 411–429.
  99. Weber, M.; Hiete, M.; Lauer, L.; Rentz, O. Low cost country sourcing and its effects on the total cost of ownership structure for a medical devices manufacturer. J. Purch. Supply Manag. 2010, 16, 4–16.
  100. Balsam, J.; Ossandon, M.; Bruck, H.A.; Lubensky, I.; Rasooly, A. Low-cost technologies for medical diagnostics in low-resource settings. Expert Opin. Med. Diagn. 2013, 7, 243–255.
  101. Choi, Y.; Kim, N.; Hwang, S.; Kweon, I.S. Thermal Image Enhancement using Convolutional Neural Network. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 223–230.
  102. Welch, J.; Kanter, B.; Skora, B.; McCombie, S.; Henry, I.; McCombie, D.; Kennedy, R.; Soller, B. Multi-parameter vital sign database to assist in alarm optimization for general care units. J. Clin. Monit. 2015, 30, 895–900.
Figure 1. The general processing stages of studies that use a thermal camera for physiological measurement. Stages drawn with dotted boxes apply only to some of the reviewed studies.
Figure 2. An overview of how an RGB camera assists the thermal camera in determining the ROI, and of the coordinate transformation between the two views.
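The RGB-to-thermal coordinate transfer in Figure 2 is typically expressed as a planar homography (or an affine transform) estimated once at calibration time and then applied to every detected ROI. A minimal sketch, assuming a precomputed 3 × 3 homography `H`; the matrix values and ROI coordinates below are purely illustrative, not taken from any reviewed study:

```python
import numpy as np

def map_roi_to_thermal(roi_corners_rgb, H):
    """Map ROI corner points from RGB image coordinates into thermal
    image coordinates using a 3x3 homography H."""
    pts = np.asarray(roi_corners_rgb, dtype=float)       # (N, 2) pixel coords
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])     # homogeneous coords
    mapped = (H @ pts_h.T).T                             # (N, 3)
    return mapped[:, :2] / mapped[:, 2:3]                # dehomogenize

# Illustrative homography: thermal view is half-scale and shifted,
# as when a low-resolution LWIR sensor is mounted beside an RGB camera.
H = np.array([[0.5, 0.0, 10.0],
              [0.0, 0.5, 20.0],
              [0.0, 0.0, 1.0]])
roi_rgb = [(100, 200), (300, 200), (300, 400), (100, 400)]  # face box corners
roi_thermal = map_roi_to_thermal(roi_rgb, H)
```

In practice `H` would be estimated from matched calibration points (e.g., a heated checkerboard visible to both cameras) rather than written by hand.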
Figure 3. An overview of the signal-extraction process from thermal images. Two methods are common: (1) measuring temperature changes in the area around the nostrils and mouth, and (2) tracking movement from frame-to-frame pixel changes.
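The first method in Figure 3 — the temperature oscillation around the nostrils — reduces to averaging the ROI pixels in each frame and finding the dominant frequency of the resulting time series. A minimal sketch on synthetic data; the ROI layout, frequency band, and frame rate are illustrative assumptions, not any particular study's parameters:

```python
import numpy as np

def respiratory_rate(frames, roi, fps):
    """Estimate breaths/min from the mean ROI temperature per frame.
    frames: (T, H, W) thermal sequence; roi: (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = roi
    signal = frames[:, y0:y1, x0:x1].mean(axis=(1, 2))  # one sample per frame
    signal = signal - signal.mean()                     # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.1) & (freqs <= 0.85)             # ~6-51 breaths/min
    peak = freqs[band][np.argmax(spectrum[band])]       # dominant frequency
    return peak * 60.0

# Synthetic check: 0.25 Hz breathing (15 breaths/min) at 10 FPS for 60 s,
# modulating an 8x8 nostril ROI around a 30 degC baseline.
fps = 10
t = np.arange(600) / fps
frames = 30.0 + 0.5 * np.sin(2 * np.pi * 0.25 * t)[:, None, None] * np.ones((1, 8, 8))
rate = respiratory_rate(frames, (0, 8, 0, 8), fps)  # ~15 breaths/min
```

Real thermal sequences would additionally need ROI tracking (e.g., the KLT tracking used in several Table 3 studies) so that the averaging window follows the nostrils as the head moves.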
Table 1. Existing systematic reviews.

| Author | Published Year | Difference |
|---|---|---|
| Mikulska D. [19] | 2006 | Covered studies before 2006 |
| Lahiri et al. [20] | 2012 | Published in 2012 and covered studies before 2012 |
| El et al. [21] | 2015 | Only covered applications related to sports |
| Znamenskaya et al. [22] | 2016 | Limited to human psychophysiological conditions based on thermographic video |
| Zadeh et al. [1] | 2016 | Only covered breast cancer diagnostics using thermal imaging |
| Moreira et al. [23] | 2017 | Developed checklist guidelines to assess skin temperature for sports and exercise medicine |
| Topalidou et al. [4] | 2019 | Databases limited to EMBASE, MEDLINE, and MIDIRS; only covered thermal camera usage in neonatal care |
| Pan et al. [24] | 2019 | Focused on vein finders using near-infrared (NIR) |
| Aggarwal et al. [25] | 2020 | Focused on reviewing the accuracy of handheld thermal cameras |
| Foster et al. [26] | 2021 | Focused on assessing human core temperature using infrared thermometry |
| He et al. [27] | 2021 | Not focused on human vital signs |
Table 2. List of thermal cameras used in the reviewed studies, with their specifications.

| Manufacturer | Model | Spectral Range | Temperature Accuracy | Thermal Sensitivity (NETD) | Maximum FPS and Resolution | Used by |
|---|---|---|---|---|---|---|
| Flir Systems Inc., Wilsonville, OR, USA | Lepton 3.5 | 8 to 14 µm | ±5 °C | <50 mK | 8.7 FPS, 160 × 120 pixels | [40,41] |
| Flir Systems Inc. | A325 | 7 to 13.5 µm | ±5 °C | <50 mK | 60 FPS, 320 × 240 pixels | [42,43,44,45] |
| Flir Systems Inc. | ThermoVision A40M | 7 to 13.5 µm | ±2 °C | <50 mK | 60 FPS, 320 × 240 pixels | [46] |
| Flir Systems Inc. | A315 | 7.5 to 13 µm | ±2 °C | <50 mK | 60 FPS, 320 × 240 pixels | [47,48] |
| Flir Systems Inc. | P384-20 | 8 to 14 µm | ±2 °C | <50 mK | 50 FPS, 384 × 288 pixels | [36] |
| Flir Systems Inc. | T430sc | 7.5 to 13 µm | ±2 °C | <30 mK | 12 FPS, 320 × 240 pixels | [49] |
| InfraTec GmbH, Dresden, Germany | VarioCAM HD 820S | 7.5 to 14 µm | ±1 °C | <55 mK | 30 FPS, 1024 × 768 pixels | [50] |
| Magnity Electronics Co., Ltd., Shanghai, China | MAG 62 | 7.5 to 14 µm | ±2 °C | <60 mK | 50 FPS, 640 × 480 pixels | [51,52,53] |
| Optris GmbH, Berlin, Germany | Optris PI 450i | 8 to 14 µm | ±2 °C | <75 mK | 80 FPS, 382 × 288 pixels | [28] |
| Seek Thermal Inc., Santa Barbara, CA, USA | Compact PRO | 7.5 to 14 µm | — | <70 mK | >15 FPS, 320 × 240 pixels | [54] |
| Mobotix AG, Winnweiler, Germany | M16 TR | 7.5 to 13 µm | ±10 °C | <50 mK | 9 FPS, 336 × 252 pixels | [37] |
Table 3. Summary of thermal camera usage related to respiratory measurement.

| Author | Objectives | Thermal Camera Model, FPS, and Resolution | Image and Signal Processing Tools | Algorithms Used | Validation Method | Performance |
|---|---|---|---|---|---|---|
| Chen et al. [52] | RR measurement | MAG 62, 10 FPS, 640 × 480 pixels | OpenCV (image processing) | KLT (coordinate mapping); RSQI_dtw (scoring each ROI) | Compared with the GY-6620 sleep monitor | RMSE: 0.71 breaths/min and 0.76 breaths/min |
| Goldman et al. [46] | RR measurement | ThermoVision A40, 50 FPS, 320 × 240 pixels | MATLAB (signal processing) | n/a | Compared with standard nasal pressure measurements | Intraclass correlation: 0.978 (95% CI 0.954–0.991) |
| Hu et al. [51] | RR measurement | MAG 62, 640 × 480 pixels | MATLAB R2014a (all analysis) | Viola–Jones cascade object detector; Shi–Tomasi corner detection | Compared with human observers (manual counting) | Accuracy for face, nose, and mouth: 98.46%, 95.38%, 84.62% |
| Hu et al. [53] | RR and HR measurement | MAG 62, 30 FPS, 640 × 480 pixels | MATLAB R2014a (image processing) | Affine transformation (image registration) | Compared with human observers (manual counting) | Coefficient of determination: 0.831 |
| Jagadev et al. [45] | RR measurement | Flir A325, 25 FPS, 320 × 240 pixels | — | k-nearest neighbors (k-NN) classifier; t-stochastic neighbor embedding (t-SNE) | Statistical calculation of sensitivity, precision, spurious cycle rate, and missed cycle rate | Sensitivity: 98.76%; precision: 99.07%; spurious cycle rate: 0.92%; missed cycle rate: 1.23% |
| Jagadev et al. [43] | RR measurement and classification | Flir A325, 25 FPS, 320 × 240 pixels | — | Breath detection algorithm (RR counting); k-NN and SVM (abnormality classification) | Statistical calculation of sensitivity, precision, spurious cycle rate, and missed cycle rate | Sensitivity: 97.2%; precision: 98.6%; spurious cycle rate: 1.4%; missed cycle rate: 2.8% |
| Jakkaew et al. [54] | RR measurement and body movement detection | Compact PRO, 17 FPS, 640 × 480 pixels | OpenCV (image processing); minMaxLoc (ROI detection); findContour (movement detection) | — | Compared with the Go Direct respiratory belt | RMSE: 1.82 ± 0.75 bpm |
| Lyra et al. [28] | RR measurement | Optris PI 450i, 4 FPS, 382 × 288 pixels | YOLO_mark (labeling) | YOLOv4 with CSPDarknet53 backbone (training); YOLOv4-Tiny (real-time classification) | Compared with a thoracic bioimpedance-based patient monitor (Philips, Amsterdam, The Netherlands) | IoU: 0.70; IoU (Tiny): 0.75; MAE: 2.79 bpm, 2.69 bpm (Tiny) |
| Mutlu et al. [44] | RR measurement | Flir A325, 60 FPS, 320 × 240 pixels | FLIR ResearchIRMax (video recording); LabVIEW (camera triggering); MATLAB (analysis) | — | Compared with a piezoelectric respiratory belt transducer | Median error rate: 6.2% |
| Negishi et al. [47] | RR measurement | Flir A315, 15 FPS, 320 × 240 pixels | LabVIEW (image recording and analysis); dlib (ROI detection); OpenCV (image processing) | GrabCut (contour extraction); Oriented FAST and Rotated BRIEF (ORB) (feature matching) | Compared with a respiratory effort belt (DL-231, S&ME, Japan) | RMSE: 2.52 RPM; correlation coefficient: 0.77 |
| Negishi et al. [48] | RR and HR measurement | Flir A315, 15 FPS, 320 × 240 pixels | LabVIEW (image recording and analysis); dlib (ROI detection); OpenCV (image processing) | GrabCut (contour extraction); Oriented FAST and Rotated BRIEF (ORB) (feature matching) | Compared with a respiratory effort belt (DL-231, S&ME, Japan) | RMSE: 1.13 RPM; correlation coefficient: 0.92 |
| Negishi et al. [42] | RR and HR measurement | Flir A325, 15 FPS, 320 × 240 pixels | dlib (ROI detection); OpenCV (image processing) | Multiple signal classification (MUSIC) (signal estimation); homography matrix (facial landmarking) | Compared with a respiratory effort belt (DL-231, S&ME, Japan) | Sensitivity: 85.7%; specificity: 90.1% |
| Pereira et al. [50] | RR measurement for infants | VarioCAM HD 820S, 30 FPS, 1024 × 768 pixels | MATLAB 2017 (evaluation and signal processing) | — | Compared with a thoracic effort piezo plethysmography belt (SOMNOlab2) | RMSE: 0.31 ± 0.09 breaths/min |
| Scebba et al. [40] | RR measurement for apnea detection | NIR: See3CAM_CU40 MV, 15 FPS, 336 × 190 pixels; LWIR: Flir Lepton 3.5, 8.7 FPS, 160 × 120 pixels | — | Smart Signal Quality Fusion (S2Fusion) (RR estimation); cascade convolutional neural network (CCNN) (facial landmarking); KLT (tracking) | Compared with the piezo-resistive ezRIP module (Philips Respironics) | Median RMSE: 1.17 breaths/min |
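Most studies in Table 3 validate their camera-based estimates against a contact reference (a respiratory belt, nasal pressure, or manual counting) and report RMSE, MAE, or a correlation coefficient. A sketch of how these agreement metrics are computed from paired per-window rate estimates; the rate values below are hypothetical:

```python
import numpy as np

def agreement_metrics(estimated, reference):
    """RMSE, MAE, and Pearson correlation between camera-based
    estimates and a contact reference (e.g., a respiratory belt)."""
    est = np.asarray(estimated, dtype=float)
    ref = np.asarray(reference, dtype=float)
    err = est - ref
    rmse = np.sqrt(np.mean(err ** 2))       # penalizes large deviations
    mae = np.mean(np.abs(err))              # average absolute deviation
    r = np.corrcoef(est, ref)[0, 1]         # linear agreement
    return rmse, mae, r

# Hypothetical per-window respiratory rates (breaths/min)
est = [14.8, 16.2, 15.5, 18.1, 12.9]
ref = [15.0, 16.0, 16.0, 18.0, 13.0]
rmse, mae, r = agreement_metrics(est, ref)
```

A high correlation alone does not imply small errors (a constant offset preserves r), which is why the reviewed studies typically report an error metric alongside it.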
Table 4. List of studies involving camera fusion and their characteristics.

| Authors | Fusion Camera Combination | Characteristic |
|---|---|---|
| Scebba et al. [40] | NIR and LWIR camera | LWIR camera used for nostril and chest ROIs; NIR camera used for chest ROI |
| Negishi et al. [42,47,48] | RGB and LWIR camera | RGB camera used for determining the ROI and extracting PPG signals; LWIR camera used for extracting the respiratory signal |
| Hu et al. [51] | RGB and LWIR camera | RGB camera used for determining the ROI; LWIR camera used for extracting the respiratory signal |
| Chen et al. [52] | RGB and LWIR camera | RGB camera used for determining the ROI, and as an alternative method to extract the respiratory signal when no face is detected; LWIR camera used to extract the respiratory signal when a face is detected |
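The fallback behavior attributed to Chen et al. [52] in Table 4 can be sketched as a simple per-frame decision rule. All function names here are hypothetical placeholders standing in for the study's detector and signal extractors, not the authors' code:

```python
def extract_respiratory_sample(rgb_frame, lwir_frame, detect_face,
                               nostril_roi_from_face,
                               lwir_temperature_signal, rgb_motion_signal):
    """Fusion decision in the style of Chen et al. [52]:
    if the RGB camera finds a face, sample the LWIR nostril ROI;
    otherwise fall back to RGB motion-based extraction."""
    face_box = detect_face(rgb_frame)
    if face_box is not None:
        roi = nostril_roi_from_face(face_box)        # map face box to nostrils
        return lwir_temperature_signal(lwir_frame, roi)
    return rgb_motion_signal(rgb_frame)              # no face: motion fallback
```

The same skeleton generalizes to the other fusion schemes in Table 4, with the RGB branch supplying ROI coordinates (or PPG signals) and the LWIR branch supplying the temperature-based respiratory signal.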