
Next Issue
Volume 21, July-1
Previous Issue
Volume 21, June-1
 
 

Sensors, Volume 21, Issue 12 (June-2 2021) – 311 articles

Cover Story (view full-size image): Compared to other methods, electrochemical sensors are sensitive, portable, fast, inexpensive, and suitable for online and in situ measurements. In this paper, we provide a survey of electrochemical sensors for the detection of water contaminants, i.e., pesticides, nitrate, nitrite, phosphorus, water hardeners, disinfectants, and other emerging contaminants. We focus on the influence of surface modification of working electrodes by carbon nanomaterials, metallic nanostructures, and imprinted polymers and evaluate the corresponding sensing performance. Especially for pesticides, we highlight biosensors, such as enzymatic sensors, immunobiosensors, aptasensors, and biomimetic sensors. We discuss the sensors' overall performance, especially concerning real-sample performance and the capability for actual field application. View this paper.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
21 pages, 9812 KiB  
Article
CMPC: An Innovative Lidar-Based Method to Estimate Tree Canopy Meshing-Profile Volumes for Orchard Target-Oriented Spray
by Chenchen Gu, Changyuan Zhai, Xiu Wang and Songlin Wang
Sensors 2021, 21(12), 4252; https://doi.org/10.3390/s21124252 - 21 Jun 2021
Cited by 18 | Viewed by 3142
Abstract
Canopy characterization detection is essential for target-oriented spray, which minimizes pesticide residues in fruits, pesticide wastage, and pollution. In this study, a novel canopy meshing-profile characterization (CMPC) method based on light detection and ranging (LiDAR) point-cloud data was designed for high-precision canopy volume calculations. First, the accuracy and viability of this method were tested using a simulated canopy. The results show that the CMPC method can accurately characterize the 3D profiles of the simulated canopy. These simulated canopy profiles were similar to those obtained from manual measurements, and the measured canopy volume achieved an accuracy of 93.3%. Second, the feasibility of the method was verified by a field experiment where the canopy 3D stereogram and cross-sectional profiles were obtained via CMPC. The results show that the 3D stereogram exhibited a high degree of similarity with the tree canopy, although there were some differences at the edges, where the canopy was sparse. The CMPC-derived cross-sectional profiles matched the manually measured results well. The CMPC method achieved an accuracy of 96.3% when the tree canopy was detected by LiDAR at a moving speed of 1.2 m/s. The accuracy of the LiDAR system was virtually unchanged when the moving speed was reduced to 1 m/s. No detection lag was observed when comparing the start and end positions of the cross-section. Different CMPC grid sizes were also evaluated. Small grid sizes (0.01 m × 0.01 m and 0.025 m × 0.025 m) were suitable for characterizing the finer details of a canopy, whereas grid sizes of 0.1 m × 0.1 m or larger can be used for characterizing its overall profile and volume. The results of this study can be used as a technical reference for the development of a LiDAR-based target-oriented spray system. Full article
(This article belongs to the Collection Sensors in Agriculture and Forestry)
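A minimal sketch of the meshing-profile idea described in the abstract (not the authors' implementation): LiDAR returns are binned on a 2D grid, the outermost point per cell defines the local profile depth, and cell prisms are summed into a volume. The coordinate layout, grid size, and point data are assumptions.

```python
import numpy as np

def canopy_volume_mesh(points, cell=0.1, x_ref=0.0):
    """Rough meshing-profile volume estimate from one-sided LiDAR points.

    points : (N, 3) array of (x, y, z) canopy returns, with x the horizontal
             distance from the sensor plane toward the canopy interior.
    cell   : grid size in metres for the (y, z) meshing plane.
    x_ref  : x of the reference plane (e.g., the trunk axis); the profile
             depth of each cell is measured from this plane.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    iy = np.floor(y / cell).astype(int)
    iz = np.floor(z / cell).astype(int)
    volume = 0.0
    # For every occupied (y, z) grid cell, keep the outermost return and
    # add a rectangular prism: cell area times profile depth.
    for key in set(zip(iy, iz)):
        mask = (iy == key[0]) & (iz == key[1])
        depth = max(x[mask].max() - x_ref, 0.0)
        volume += cell * cell * depth
    return volume

# Hypothetical usage: 10,000 random points filling a 1 m-deep, 2 m x 2 m slab.
pts = np.random.rand(10000, 3) * np.array([1.0, 2.0, 2.0])
print(f"approximate volume: {canopy_volume_mesh(pts):.2f} m^3")
```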
Show Figures
Figures 1–17: LiDAR system structure; manual measurement system; CMPC method (canopy profile mesh and extraction of the outermost points); canopy point-cloud calculations; calculation of the canopy's cross-sectional areas; simulated canopy and manual measurement system; target orchard tree; 3D volumes of the simulated canopy (LiDAR vs. manual); comparison of LiDAR and manually measured cross-sections of the simulated canopy; comparison of cross-sectional areas for an orchard tree canopy; 3D point cloud of the orchard tree canopy; point clouds of the canopy (front and left views); 3D and 2D color maps of the canopy; horizontal and longitudinal cross-sections of the canopy; comparison of cross-sectional areas at different canopy positions; canopy profile measurements at different moving speeds; 3D maps and cross-sectional profiles at different grid sizes (0.01–0.1 m).
17 pages, 3127 KiB  
Article
Modelling and Evaluation of the Absorption of the 866 MHz Electromagnetic Field in Humans Exposed near to Fixed I-RFID Readers Used in Medical RTLS or to Monitor PPE
by Patryk Zradziński, Jolanta Karpowicz, Krzysztof Gryz, Grzegorz Owczarek and Victoria Ramos
Sensors 2021, 21(12), 4251; https://doi.org/10.3390/s21124251 - 21 Jun 2021
Cited by 8 | Viewed by 3122
Abstract
The aim of this study was to model and evaluate the Specific Energy Absorption Rate (SAR) values in humans in proximity to fixed multi-antenna I-RFID readers of passive tags under various scenarios mimicking exposure when they are incorporated in Real-Time Location Systems (RTLS), or used to monitor Personal Protective Equipment (PPE). The sources of the electromagnetic field (EMF) in the modelled readers were rectangular microstrip antennas at a resonance frequency in free space of 866 MHz from the ultra-high frequency (UHF) RFID frequency range of 865–868 MHz. The results of the numerical modelling showed that the SAR values in a body 5 cm away from the UHF RFID readers need consideration with respect to the exposure limits set by international guidelines to prevent adverse thermal effects of EMF exposure: with respect to the general public/unrestricted environments limits when the effective radiated power exceeds 5.5 W, and with respect to the occupational/restricted environments limits when it exceeds 27.5 W. Full article
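Because localized SAR scales linearly with radiated power, a simulated SAR value at a reference power can be rescaled to find the power at which a given guideline limit would be reached. The values below are placeholders for illustration only, not values from the paper.

```python
# Illustrative only: SAR scales linearly with radiated power, so a SAR value
# simulated at 1 W can be rescaled to the power at which a limit is reached.
sar_per_watt = 0.4           # W/kg of localized SAR per 1 W ERP (assumed)
limit_public = 2.0           # W/kg, example localized limit, general public (assumed)
limit_occupational = 10.0    # W/kg, example localized limit, occupational (assumed)

erp_public = limit_public / sar_per_watt
erp_occupational = limit_occupational / sar_per_watt
print(f"limit reached at {erp_public:.1f} W (public), "
      f"{erp_occupational:.1f} W (occupational)")
```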
Show Figures
Figures 1–7: examples of RFID systems in RTLS; principle of the automatic PPE identification and management system; S11 parameter of the modelled microstrip patch antenna; exposure scenarios near UHF I-RFID multi-antenna readers (PF, PS, EF, ES); SAR distributions near a three-antenna reader at 5 cm; SAR distributions near a two-antenna reader at 5 cm; SAR distributions at 5, 20 and 40 cm (scenario PF).
24 pages, 6068 KiB  
Article
Low-Cost Ultrasonic Range Improvements for an Assistive Device
by David Abreu, Jonay Toledo, Benito Codina and Arminda Suárez
Sensors 2021, 21(12), 4250; https://doi.org/10.3390/s21124250 - 21 Jun 2021
Cited by 7 | Viewed by 8795
Abstract
To achieve optimal mobility, visually impaired people have to deal with obstacle detection and avoidance challenges. Aside from the broadly adopted white cane, electronic aids have been developed. However, available electronic devices are not extensively used due to their complexity and price. As an effort to improve the existing ones, this work presents the design of a low-cost aid for blind people. A standard low-cost HC-SR04 ultrasonic range sensor is modified by adding phase modulation in the ultrasonic pulses, allowing it to detect the origin of emission, thus discriminating if the echo pulses come from the same device and avoiding false echoes due to interference from other sources. This improves accuracy and security in areas where different ultrasonic sensors are working simultaneously. The final device, whose design is based on user and trainer feedback, works with the user’s own mobile phone, easing utilization and lowering manufacturing costs. The device was tested with a set of twenty blind persons carrying out a travel experiment and satisfaction survey. The main results showed a change in total involuntary contacts with unknown obstacles and high user satisfaction. Hence, we conclude that the device can fill a gap in the mobility aids and reduce feelings of insecurity amongst the blind. Full article
(This article belongs to the Special Issue Sensors in Low-Cost Applications)
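A rough sketch of the idea of tagging ultrasonic bursts with a device-specific code via phase (BPSK-style) modulation and accepting only echoes that correlate with the device's own template. The carrier frequency, sample rate, chip length, and code are assumptions, not the authors' design.

```python
import numpy as np

FS = 1_000_000   # sample rate (Hz), assumed
F0 = 40_000      # ultrasonic carrier (Hz), typical for HC-SR04-class sensors
CHIP = 200       # samples per code bit, assumed

def bpsk_burst(code):
    """Build a burst whose phase flips by 180 degrees for each '1' bit of the code."""
    t = np.arange(len(code) * CHIP) / FS
    phase = np.repeat(np.where(np.array(code) == 1, np.pi, 0.0), CHIP)
    return np.sin(2 * np.pi * F0 * t + phase)

def is_own_echo(received, code, threshold=0.5):
    """Correlate the echo against this device's template to reject foreign pulses."""
    template = bpsk_burst(code)
    corr = np.correlate(received, template, mode="valid")
    return corr.max() / (np.linalg.norm(template) ** 2) >= threshold

code = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical device ID
echo = 0.8 * bpsk_burst(code) + 0.05 * np.random.randn(len(code) * CHIP)
print(is_own_echo(echo, code))    # True for our own (attenuated, noisy) echo
```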
Show Figures
Figures 1–18: measurement noise to a static obstacle (HC-SR04, Parallax Ping, SRF08); the HC-SR04 ultrasonic sensor; speed of sound vs. temperature and humidity; Trig/Echo signals for a flat object at 30 cm; histogram of 10,000 distance measurements at one metre; laboratory setup for testing the sensor aperture; box plots for a 75 mm cylinder; box plots for a 120 mm cylinder; polar plots of median values per angle; detection of the 120 mm cylinder at different wrist angles; HC-SR04 circuit diagram; modified sine wave modulation; received signal after obstacle detection; signal demodulation and code extraction; block diagram of the eBAT working process; corridor diagram with obstacles; a volunteer wearing the eBAT; histograms of involuntary contacts with and without the eBAT.
27 pages, 506 KiB  
Review
Intelligent Sensing Technologies for the Diagnosis, Monitoring and Therapy of Alzheimer’s Disease: A Systematic Review
by Nazia Gillani and Tughrul Arslan
Sensors 2021, 21(12), 4249; https://doi.org/10.3390/s21124249 - 21 Jun 2021
Cited by 25 | Viewed by 6215
Abstract
Alzheimer’s disease is a lifelong progressive neurological disorder. It is associated with high disease management and caregiver costs. Intelligent sensing systems have the capability to provide context-aware adaptive feedback. These can assist Alzheimer’s patients, for whom continuous monitoring, functional support and timely therapeutic interventions are of paramount importance. This review aims to present a summary of such systems reported in the extant literature for the management of Alzheimer’s disease. Four databases were searched, and 253 English-language articles published between 2015 and 2020 were identified. Through a series of filtering mechanisms, 20 articles were found suitable to be included in this review. This study gives an overview of the depth and breadth of the efficacy as well as the limitations of these intelligent systems proposed for Alzheimer’s. Results indicate two broad categories of intelligent technologies, distributed systems and self-contained devices. Distributed systems base their outcomes mostly on long-term monitoring of individuals’ activity patterns, whereas handheld devices give quick assessments through touch, vision and voice. The review concludes by discussing the potential of these intelligent technologies for clinical practice while highlighting future considerations for improvements in the design of these solutions for Alzheimer’s disease. Full article
(This article belongs to the Collection Sensing Technologies for Diagnosis, Therapy and Rehabilitation)
Show Figures
Figure 1: PRISMA flow diagram for the search methodology.
18 pages, 8041 KiB  
Communication
Dual Memory LSTM with Dual Attention Neural Network for Spatiotemporal Prediction
by Teng Li and Yepeng Guan
Sensors 2021, 21(12), 4248; https://doi.org/10.3390/s21124248 - 21 Jun 2021
Cited by 1 | Viewed by 2412
Abstract
Spatiotemporal prediction is challenging due to inefficient representation extraction and the lack of rich contextual dependences. A novel approach is proposed for spatiotemporal prediction using a dual memory LSTM with dual attention neural network (DMANet). A new dual memory LSTM (DMLSTM) unit is proposed to extract the representations by leveraging differencing operations between the consecutive images and adopting a dual memory transition mechanism. To make full use of historical representations, a dual attention mechanism is designed to capture long-term spatiotemporal dependences by computing the correlations between the current hidden representations and the historical hidden representations from the temporal and spatial dimensions, respectively. Then, the dual attention is embedded into the DMLSTM unit to construct a DMANet, which gives the model greater modeling power for short-term dynamics and long-term contextual representations. An apparent resistivity map (AR Map) dataset is proposed in this paper. The B-spline interpolation method is utilized to enhance the AR Map dataset and make the apparent resistivity trend curve continuously differentiable in the time dimension. The experimental results demonstrate that the developed method has excellent prediction performance in comparison with some state-of-the-art methods. Full article
(This article belongs to the Section Communications)
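A minimal sketch of the kind of B-spline temporal interpolation mentioned in the abstract, using SciPy's cubic B-spline on a made-up apparent-resistivity series; the sampling times and values are invented.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Hypothetical: apparent resistivity at one map pixel, sampled every 6 hours.
t = np.arange(0, 48, 6, dtype=float)                 # hours
rho = np.array([112.0, 110.5, 109.8, 111.2, 113.0, 112.4, 111.0, 110.2])

# Cubic (k=3) B-spline: smooth and continuously differentiable in time,
# so intermediate frames can be generated between the measured maps.
spline = make_interp_spline(t, rho, k=3)
t_dense = np.arange(0, 42.5, 0.5)
rho_dense = spline(t_dense)
print(rho_dense[:5])
```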
Show Figures
Figures 1–16: flow chart of DMANet in long-term prediction; DMLSTM unit; dual attention module; DMA unit; DMANet; example apparent resistivity maps from the Yungang Grottoes; example B-spline curve; prediction performance for different α and γ; performance for different numbers of channels in the bottom layer; performance for different numbers of hidden layers; whisker-plot comparisons of the different models on Moving MNIST, Caltech and AR Map; frame-by-frame quantitative results on Moving MNIST, Caltech and AR Map.
18 pages, 9238 KiB  
Communication
A New Fracture Detection Algorithm of Low Amplitude Acoustic Emission Signal Based on Kalman Filter-Ripple Voltage
by Seong-Min Jeong, Seokmoo Hong and Jong-Seok Oh
Sensors 2021, 21(12), 4247; https://doi.org/10.3390/s21124247 - 21 Jun 2021
Cited by 2 | Viewed by 2662
Abstract
In this study, an acoustic emission (AE) sensor was utilized to predict fractures that occur in a product during the sheet metal forming process. AE activity was analyzed, presuming that AE occurs when plastic deformation and fracturing of metallic materials occur. For the analysis, a threshold voltage is set to distinguish the AE signal from the ripple voltage signal and noise. If the amplitude of the AE signal is small, it is difficult to distinguish the AE signal from the ripple voltage signal and the noise signal. Hence, there is a limitation in predicting fractures using the AE sensor. To overcome this limitation, the Kalman filter was used in this study to remove the ripple voltage signal and noise signal and then analyze the activity. However, it was difficult to filter out the ripple voltage signal using a conventional low-pass filter or Kalman filter because the ripple voltage signal is a high-frequency component governed by the switch-mode of the power supply. Therefore, a Kalman filter that has a low Kalman gain was designed to extract only the ripple voltage signal (the Kalman filter-ripple voltage, KF-RV, algorithm). Based on the KF-RV algorithm, the measured ripple voltage and noise signal were reduced by 97.3% on average. Subsequently, the AE signal was extracted appropriately using the difference between the measured value and the extracted ripple voltage signal. The activity of the extracted AE signal was analyzed using the ring-down count among various AE parameters to determine if there was a fracture in the test specimen. Full article
(This article belongs to the Special Issue Acoustic Emission Sensors for Structural Health Monitoring)
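An illustrative sketch of the ripple-extraction idea: a Kalman filter with a deliberately low gain (built here on an assumed harmonic model of the switching-supply ripple) tracks the steady ripple but barely reacts to short AE bursts, so subtracting its estimate exposes the AE signal, whose ring-down count can then be thresholded. All frequencies, noise levels, and thresholds are invented; this is not the paper's KF-RV implementation.

```python
import numpy as np

FS = 200_000        # sample rate (Hz), assumed
F_RIPPLE = 10_000   # switching-supply ripple frequency (Hz), assumed

def kf_ripple_estimate(z, q=1e-6, r=1.0):
    """Scalar-output Kalman filter with a 2-state harmonic-oscillator model of the
    ripple. A small process noise q keeps the Kalman gain low, so the estimate
    follows the steady ripple but barely reacts to short AE bursts."""
    w = 2 * np.pi * F_RIPPLE / FS
    A = np.array([[np.cos(w), np.sin(w)], [-np.sin(w), np.cos(w)]])  # rotation model
    H = np.array([[1.0, 0.0]])
    x, P = np.zeros((2, 1)), np.eye(2)
    est = np.empty_like(z)
    for k, zk in enumerate(z):
        x = A @ x                                   # predict
        P = A @ P @ A.T + q * np.eye(2)
        K = P @ H.T / (H @ P @ H.T + r)             # Kalman gain (kept small)
        x = x + K * (zk - H @ x)                    # update
        P = (np.eye(2) - K @ H) @ P
        est[k] = x[0, 0]
    return est

def ring_down_count(ae, threshold):
    """Count upward threshold crossings of the extracted AE signal."""
    above = ae > threshold
    return int(np.sum(above[1:] & ~above[:-1]))

# Hypothetical measurement: ripple + noise + one decaying AE burst.
n = 5000
t = np.arange(n) / FS
z = 0.2 * np.sin(2 * np.pi * F_RIPPLE * t) + 0.01 * np.random.randn(n)
burst = np.zeros(n)
burst[2000:2600] = 0.5 * np.exp(-np.arange(600) / 150) * np.sin(2 * np.pi * 30_000 * t[:600])
z = z + burst

ae = z - kf_ripple_estimate(z)                      # measured minus ripple estimate
print("ring-down count:", ring_down_count(ae, threshold=0.25))
```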
Show Figures
Figures 1–15: AE parameters; AE signal amplitude according to fracture characteristics; recursive structure of the Kalman filter; simulation results of the classical Kalman filter; simulation results of KF-RV; hole expansion test using the AE sensor; test specimens after the hole expansion test; driving circuit for the AE sensor; AE raw signal without and with the press process; FFT of the AE filtered signal; spectrogram of AE raw data with the press process; AE filtered signal without and with the press process; STFT of the AE filtered data.
25 pages, 2338 KiB  
Review
RGB-D Data-Based Action Recognition: A Review
by Muhammad Bilal Shaikh and Douglas Chai
Sensors 2021, 21(12), 4246; https://doi.org/10.3390/s21124246 - 21 Jun 2021
Cited by 53 | Viewed by 8838
Abstract
Classification of human actions is an ongoing research problem in computer vision. This review aims to scope the current literature on data fusion and action recognition techniques and to identify gaps and future research directions. Success in producing cost-effective and portable vision-based sensors has dramatically increased the number and size of datasets. The increase in the number of action recognition datasets intersects with advances in deep learning architectures and computational support, both of which offer significant research opportunities. Naturally, each action-data modality—such as RGB, depth, skeleton, and infrared (IR)—has distinct characteristics; therefore, it is important to exploit the value of each modality for better action recognition. In this paper, we focus solely on data fusion and recognition techniques in the context of vision with an RGB-D perspective. We conclude by discussing research challenges, emerging trends, and possible future research directions. Full article
(This article belongs to the Collection Multi-Sensor Information Fusion)
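A toy sketch of score-level (late) fusion of per-modality action scores, one of the fusion schemes discussed in the review; the modalities, scores, and weights are placeholders.

```python
import numpy as np

def late_fusion(scores_by_modality, weights=None):
    """Score-level (late) fusion: combine per-modality class probabilities
    by a weighted average and pick the most likely action class."""
    mods = list(scores_by_modality)
    w = np.ones(len(mods)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    stacked = np.stack([scores_by_modality[m] for m in mods])   # (modalities, classes)
    fused = (w[:, None] * stacked).sum(axis=0)
    return int(np.argmax(fused)), fused

# Hypothetical softmax outputs for a 4-class problem.
scores = {
    "rgb":      np.array([0.10, 0.60, 0.20, 0.10]),
    "depth":    np.array([0.05, 0.35, 0.50, 0.10]),
    "skeleton": np.array([0.05, 0.55, 0.30, 0.10]),
}
label, fused = late_fusion(scores, weights=[0.4, 0.3, 0.3])
print(label, fused.round(3))
```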
Show Figures
Figures 1–7: structure of the paper; example data from an RGB-D sensor (NTU RGB-D dataset); various RGB-D sensors (Microsoft Kinect, Intel RealSense L515, Orbbec Astra Pro); hierarchy of action recognition techniques based on handcrafted features; deep learning techniques for RGB-D data (CNN, LSTM, GCN); a possible LRCN architecture with RGB-D input; early, slow, and late fusion in HAR.
21 pages, 4477 KiB  
Article
Wireless Body Area Network Control Policies for Energy-Efficient Health Monitoring
by Yair Bar David, Tal Geller, Ilai Bistritz, Irad Ben-Gal, Nicholas Bambos and Evgeni Khmelnitsky
Sensors 2021, 21(12), 4245; https://doi.org/10.3390/s21124245 - 21 Jun 2021
Cited by 3 | Viewed by 2487
Abstract
Wireless body area networks (WBANs) have strong potential in the field of health monitoring. However, the energy consumption required for accurate monitoring determines the time between battery charges of the wearable sensors, which is a key performance factor (and can be critical in the case of implantable devices). In this paper, we study the inherent trade-off between the power consumption of the sensors and the probability of misclassifying a patient’s health state. We formulate this trade-off as a dynamic problem, in which at each step, we can choose to activate a subset of sensors that provide noisy measurements of the patient’s health state. We assume that the (unknown) health state follows a Markov chain, so our problem is formulated as a partially observable Markov decision problem (POMDP). We show that all the past measurements can be summarized as a belief state on the true health state of the patient, which allows tackling the POMDP problem as an MDP on the belief state. Then, we empirically study the performance of a greedy one-step look-ahead policy compared to the optimal policy obtained by solving the dynamic program. For that purpose, we use an open-source Continuous Glucose Monitoring (CGM) dataset of 232 patients over six months and extract the transition matrix and sensor accuracies from the data. We find that the greedy policy saves ≈50% of the energy costs while reducing the misclassification costs by less than 2% compared to the most accurate policy possible that always activates all sensors. Our sensitivity analysis reveals that the greedy policy remains nearly optimal across different cost parameters and a varying number of sensors. The results also have practical importance, because while the optimal policy is too complicated, a greedy one-step look-ahead policy can be easily implemented in WBAN systems. Full article
(This article belongs to the Special Issue Wireless Body Area Sensor Networks)
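A minimal sketch of the belief-state update and the greedy one-step look-ahead described in the abstract: for each candidate sensor subset, the expected immediate cost (activation plus expected misclassification under the Bayes posterior) is computed and the cheapest subset is chosen. The transition matrix, sensor confusion matrices, and costs below are toy values, not the paper's data.

```python
import itertools
import numpy as np

T = np.array([[0.90, 0.08, 0.02],        # toy health-state transition matrix
              [0.10, 0.80, 0.10],
              [0.02, 0.08, 0.90]])
# Toy sensor confusion matrices A[i][s, o] = P(observe o | true state s).
A = [np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]]),
     np.array([[0.7, 0.2, 0.1], [0.2, 0.6, 0.2], [0.1, 0.2, 0.7]])]
C_ACT, C_MIS = 1.0, 10.0                 # per-sensor activation / misclassification cost

def greedy_action(belief):
    """One-step look-ahead: pick the sensor subset with the lowest expected
    immediate cost (activation plus expected misclassification)."""
    b_pred = T.T @ belief                # predicted belief for the next epoch
    best = (None, np.inf)
    for r in range(len(A) + 1):
        for subset in itertools.combinations(range(len(A)), r):
            exp_mis = 0.0
            for obs in itertools.product(range(3), repeat=len(subset)):
                lik = np.ones(3)
                for i, o in zip(subset, obs):
                    lik *= A[i][:, o]
                joint = b_pred * lik
                p_obs = joint.sum()
                if p_obs > 0:            # classify as the MAP state; pay for errors
                    post = joint / p_obs
                    exp_mis += p_obs * C_MIS * (1.0 - post.max())
            cost = C_ACT * len(subset) + exp_mis
            if cost < best[1]:
                best = (subset, cost)
    return best

print(greedy_action(np.array([0.6, 0.3, 0.1])))
```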
Show Figures
Figures 1–13: schematic of the Markov chain; order of events throughout sensing epochs; glucose levels of randomly selected patients; activation and misclassification cost breakdown per policy; greedy vs. value-iteration cost ratio for varying ω; activation vs. misclassification costs per ω; greedy vs. value-iteration cost ratio for varying k; the unknown and most likely health states over 20 epochs; number of sensors activated under the greedy policy; greedy sensing policy over the belief state space; performance vs. discretization level; total costs over different sensor output probability matrices; total costs over different transition matrices.
16 pages, 4700 KiB  
Article
A System Using Artificial Intelligence to Detect and Scare Bird Flocks in the Protection of Ripening Fruit
by Petr Marcoň, Jiří Janoušek, Josef Pokorný, Josef Novotný, Eliška Vlachová Hutová, Anna Širůčková, Martin Čáp, Jana Lázničková, Radim Kadlec, Petr Raichl, Přemysl Dohnal, Miloslav Steinbauer and Eva Gescheidtová
Sensors 2021, 21(12), 4244; https://doi.org/10.3390/s21124244 - 21 Jun 2021
Cited by 11 | Viewed by 8232
Abstract
Flocks of birds may cause major damage to fruit crops in the ripening phase. This problem is addressed by various methods for bird scaring; in many cases, however, the birds become accustomed to the distraction, and the applied scaring procedure loses its purpose. To help eliminate the difficulty, we present a system to detect flocks and to trigger an actuator that will scare the objects only when a flock passes through the monitored space. The actual detection is performed with artificial intelligence utilizing a convolutional neural network. Before training the network, we employed video cameras and a differential algorithm to detect all items moving in the vineyard. Such objects revealed in the images were labeled and then used in training, testing, and validating the network. The detection algorithm was assessed in terms of precision, recall, and F1 score. In terms of function, the algorithm is implemented in a module consisting of a microcomputer and a connected video camera. When a flock is detected, the microcontroller will generate a signal to be wirelessly transmitted to the module, whose task is to trigger the scaring actuator. Full article
(This article belongs to the Special Issue Sensors for Object Detection, Classification and Tracking)
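A sketch of the frame-differencing stage described in the abstract, using OpenCV: an exponentially accumulated background is subtracted from each frame, the difference is thresholded, and moving regions are returned as bounding boxes (which would then be cropped and passed to the CNN classifier). All parameters are illustrative, not the authors' values.

```python
import cv2
import numpy as np

def detect_moving_objects(frame, background, alpha=0.02, thresh=25, min_area=50):
    """Frame-differencing step: compare the current frame against an exponentially
    accumulated background, then return bounding boxes of moving regions."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    cv2.accumulateWeighted(gray, background, alpha)            # update background in place
    diff = cv2.absdiff(gray, background)
    _, mask = cv2.threshold(diff.astype(np.uint8), thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

# Hypothetical usage with a video stream (file name assumed):
# cap = cv2.VideoCapture("vineyard.mp4")
# ok, first = cap.read()
# background = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY).astype(np.float32)
# while True:
#     ok, frame = cap.read()
#     if not ok:
#         break
#     boxes = detect_moving_objects(frame, background)   # crops go to the CNN classifier
```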
Show Figures
Figures 1–11: block diagram of the detection setup; optical detection and early warning unit; complete setup at a vineyard in Bořetice, Czech Republic; flow chart of the differential method; binary differential image and highlighted detection region; example images of vineyard activity; 3D map of the optical detector placement; categorization of the collected images (individual birds, insects, helicopters, flocks); machine learning with the cloud-based Google AutoML Vision service; overlap between the predicted ground truth and a detected object; detection outcomes for individual objects.
12 pages, 4169 KiB  
Communication
A Schottky-Type Metal-Semiconductor-Metal Al0.24Ga0.76N UV Sensor Prepared by Using Selective Annealing
by Byeong-Jun Park, Jeong-Hoon Seol and Sung-Ho Hahm
Sensors 2021, 21(12), 4243; https://doi.org/10.3390/s21124243 - 21 Jun 2021
Cited by 3 | Viewed by 2317
Abstract
Asymmetric metal-semiconductor-metal (MSM) aluminum gallium nitride (AlGaN) UV sensors with 24% Al were fabricated using a selective annealing technique that dramatically reduced the dark current density and improved the ohmic behavior and performance compared to a non-annealed sensor. Its dark current density at a bias of −2.0 V and UV-to-visible rejection ratio (UVRR) at a bias of −7.0 V were 8.5 × 10−10 A/cm2 and 672, respectively, which are significant improvements over a non-annealed sensor with a dark current density of 1.3 × 10−7 A/cm2 and UVRR of 84, respectively. The results of a transmission electron microscopy analysis demonstrate that the annealing process caused interdiffusion between the metal layers; the contact behavior between Ti/Al/Ni/Au and AlGaN changed from rectifying to ohmic behavior. The findings from an X-ray photoelectron spectroscopy analysis revealed that the O 1s binding energy peak intensity associated with Ga oxide, which causes current leakage from the AlGaN surface, decreased from around 846 to 598 counts/s after selective annealing. Full article
(This article belongs to the Section Optical Sensors)
Show Figures
Figures 1–10: schematics of the asymmetric MSM AlGaN UV sensor; PL spectra of the epitaxial wafer; photomicroscope top views before and after selective annealing; dark-state and UV photoresponsive I-V curves before and after annealing; O 1s XPS spectra before and after annealing; spectral responsivity before and after annealing; TEM and HAADF STEM images of weakly annealed area 1; elemental content in weakly annealed area 1; TEM and HAADF STEM images of strongly annealed area 2; elemental content in strongly annealed area 2.
14 pages, 3546 KiB  
Communication
Influence of Temperature on the Natural Vibration Characteristics of Simply Supported Reinforced Concrete Beam
by Yanxia Cai, Kai Zhang, Zhoujing Ye, Chang Liu, Kaiji Lu and Linbing Wang
Sensors 2021, 21(12), 4242; https://doi.org/10.3390/s21124242 - 21 Jun 2021
Cited by 20 | Viewed by 3795
Abstract
Natural vibration characteristics serve as one of the crucial references for bridge monitoring. However, temperature-induced changes in the natural vibration characteristics of bridge structures may exceed the impact of structural damage, thus causing some interference in damage identification. This study analyzed the influence of temperature on the natural vibration characteristics of simply supported beams, which is the most widely used bridge structure. The theoretical formula for the variation of the natural frequency of simply supported beams with temperature was proposed. The elastic modulus of simply supported beams in the range of −40 °C to 60 °C was acquired by means of the falling ball test and the theoretical formula and was compared with the elastic modulus obtained by the three-point bending test at room temperature (20 °C). In addition, the Midas/Civil finite-element simulation was carried out for the natural frequency of simply supported beams at different temperatures. The results showed that temperature was the main factor causing the variation of the natural frequency of simply supported beams. A linear negative correlation between the natural frequency of simply supported beams and their temperature was observed. The natural frequency of simply supported beams decreased by 0.148% for every 1 °C increase. This research contributed to the further understanding of the natural vibration characteristics of simply supported beams under the influence of temperature so as to provide references for natural frequency monitoring and damage identification of beam bridges. Full article
(This article belongs to the Special Issue Piezoelectric Energy Harvesting Sensors and Their Applications)
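For reference, the textbook Euler–Bernoulli relation for a simply supported beam links the natural frequencies to a temperature-dependent elastic modulus E(T); this is the standard form, and the paper's own derivation may differ in detail.

```latex
% n-th natural frequency of a simply supported beam of span L, second moment of
% area I and mass per unit length \bar{m}; temperature enters through E(T).
f_n(T) = \frac{n^{2}\pi}{2L^{2}}\sqrt{\frac{E(T)\,I}{\bar{m}}},
\qquad
\frac{\Delta f_n}{f_n} \approx \frac{1}{2}\,\frac{\Delta E}{E}.
```

Under this relation, the reported sensitivity of about −0.148% per °C in frequency would correspond to roughly −0.3% per °C in the effective elastic modulus.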
Show Figures
Figures 1–8: diagram of the simply supported beam; relationship between Δf_n and ΔT; test scheme; first-grade natural frequency of beams No. 1 and No. 2 vs. temperature; theoretical vs. experimental first-grade natural frequency vs. temperature; three-point bending test; testing beam model; n-grade natural frequencies (n = 1–4) vs. temperature, theoretical and simulation results.
14 pages, 7866 KiB  
Article
Blue as an Underrated Alternative to Green: Photoplethysmographic Heartbeat Intervals Estimation under Two Temperature Conditions
by Evgeniia Shchelkanova, Liia Shchapova, Alexander Shchelkanov and Tomohiro Shibata
Sensors 2021, 21(12), 4241; https://doi.org/10.3390/s21124241 - 21 Jun 2021
Cited by 3 | Viewed by 2912
Abstract
Since photoplethysmography (PPG) sensors are usually placed on open skin areas, temperature interference can be an issue. Currently, green light is the most widely used in the reflectance PPG for its relatively low artifact susceptibility. However, it has been known that hemoglobin absorption peaks at the blue part of the spectrum. Despite this fact, blue light has received little attention in the PPG field. Blue wavelengths are commonly used in phototherapy. Combining blue light-based treatments with simultaneous blue PPG acquisition could be potentially used in patients monitoring and studying the biological effects of light. Previous studies examining the PPG in blue light compared to other wavelengths employed photodetectors with inherently lower sensitivity to blue, thereby biasing the results. The present study assessed the accuracy of heartbeat intervals (HBIs) estimation from blue and green PPG signals, acquired under baseline and cold temperature conditions. Our PPG system is based on the TCS3472 color sensor with equal sensitivity to both parts of the light spectrum to ensure an unbiased comparison. The accuracy of the HBIs estimates, calculated with five characteristic points (PPG systolic peak, maximum of the first PPG derivative, maximum of the second PPG derivative, minimum of the second PPG derivative, and intersecting tangents) on both PPG signal types, was evaluated based on the electrocardiographic values. The statistical analyses demonstrated that in all cases, the HBIs estimation accuracy of blue PPG was nearly equivalent to that of the green PPG irrespective of the characteristic point and measurement condition. Therefore, blue PPG can be used for cardiovascular parameter acquisition. This paper is an extension of work originally presented at the 42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society. Full article
(This article belongs to the Collection Medical Applications of Sensor Systems and Devices)
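A minimal sketch of heartbeat-interval extraction from one of the five characteristic points (the PPG systolic peak), with a simple comparison against reference intervals; the sampling rate, peak-detection parameters, and signals are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

FS = 100  # sampling rate in Hz, assumed

def heartbeat_intervals(ppg, fs=FS, max_rate_hz=3.0):
    """Estimate heartbeat intervals (s) from PPG systolic peaks (PPGmax).
    Only one of the five characteristic points from the paper is sketched here."""
    peaks, _ = find_peaks(ppg, distance=int(fs / max_rate_hz),
                          prominence=0.3 * np.std(ppg))
    return np.diff(peaks) / fs

def mean_absolute_error(hbi_ppg, hbi_ecg):
    n = min(len(hbi_ppg), len(hbi_ecg))
    return float(np.mean(np.abs(hbi_ppg[:n] - hbi_ecg[:n])))

# Hypothetical 30 s recording with a 75 bpm pulse (0.8 s reference intervals).
t = np.arange(0, 30, 1 / FS)
ppg = np.sin(2 * np.pi * 1.25 * t) + 0.05 * np.random.randn(t.size)
hbi = heartbeat_intervals(ppg)
print(hbi[:5], mean_absolute_error(hbi, np.full(hbi.shape, 0.8)))
```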
Show Figures
Figures 1–6 and A1: system block diagram; circuit diagram; signal acquisition modules for the left and right hands; frequency response plot; example ECG and PPG waveforms with first and second derivatives and the five characteristic points; alignment of the ECG and PPG signals with the test impulse; Bland-Altman plots for 720 ECG-PPG data pairs.
9 pages, 1181 KiB  
Article
Measurement of Ankle Joint Movements Using IMUs during Running
by Byong Hun Kim, Sung Hyun Hong, In Wook Oh, Yang Woo Lee, In Ho Kee and Sae Yong Lee
Sensors 2021, 21(12), 4240; https://doi.org/10.3390/s21124240 - 21 Jun 2021
Cited by 17 | Viewed by 5459
Abstract
Gait analysis has historically been implemented in laboratory settings only with expensive instruments; yet, recently, efforts to develop and integrate wearable sensors into clinical applications have been made. A limited number of previous studies have been conducted to validate inertial measurement units (IMUs) for measuring ankle joint kinematics, especially with small movement ranges. Therefore, the purpose of this study was to validate the ability of available IMUs to accurately measure the ankle joint angles by comparing the ankle joint angles measured using a wearable device with those obtained using a motion capture system during running. Ten healthy subjects participated in the study. The intraclass correlation coefficient (ICC) and standard error of measurement were calculated for reliability, whereas the Pearson coefficient correlation was performed for validity. The results showed that the day-to-day reliability was excellent (0.974 and 0.900 for sagittal and frontal plane, respectively), and the validity was good in both sagittal (r = 0.821, p < 0.001) and frontal (r = 0.835, p < 0.001) planes for ankle joints. In conclusion, we suggest that the developed device could be used as an alternative tool for the 3D motion capture system for assessing ankle joint kinematics. Full article
(This article belongs to the Special Issue Wearables for Movement Analysis in Healthcare)
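A small sketch of the two statistics used in the validation: an ANOVA-based ICC(2,1) for day-to-day reliability and a Pearson correlation against the motion-capture reference; the data below are random placeholders, not the study's measurements.

```python
import numpy as np
from scipy.stats import pearsonr

def icc_2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.
    Y is an (n subjects x k sessions) matrix."""
    n, k = Y.shape
    grand = Y.mean()
    ms_rows = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)   # between subjects
    ms_cols = n * np.sum((Y.mean(axis=0) - grand) ** 2) / (k - 1)   # between sessions
    ss_err = np.sum((Y - Y.mean(axis=1, keepdims=True)
                       - Y.mean(axis=0, keepdims=True) + grand) ** 2)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)

# Hypothetical data: peak ankle angles for 10 subjects on two days (degrees).
rng = np.random.default_rng(0)
day1 = 15 + 3 * rng.standard_normal(10)
day2 = day1 + 0.5 * rng.standard_normal(10)
print("ICC(2,1):", round(icc_2_1(np.column_stack([day1, day2])), 3))

# Validity: IMU vs. motion-capture angle curves (placeholder signals).
imu = rng.standard_normal(101)
vicon = imu + 0.3 * rng.standard_normal(101)
r, p = pearsonr(imu, vicon)
print(f"Pearson r = {r:.3f}, p = {p:.3g}")
```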
Show Figures
Figures 1–4: participants' setup; sensor placement; comparison of ankle angles between VICON and IMUs in the sagittal plane; comparison of ankle angles in the frontal plane.
28 pages, 1381 KiB  
Article
Development of an Intelligent Data-Driven System to Recommend Personalized Fashion Design Solutions
by Shukla Sharma, Ludovic Koehl, Pascal Bruniaux, Xianyi Zeng and Zhujun Wang
Sensors 2021, 21(12), 4239; https://doi.org/10.3390/s21124239 - 21 Jun 2021
Cited by 29 | Viewed by 5113
Abstract
In the context of fashion/textile innovations towards Industry 4.0, a variety of digital technologies, such as 3D garment CAD, have been proposed to automate and optimize design and manufacturing processes in the organizations of involved enterprises and supply chains as well as services such as marketing and sales. However, the current digital solutions rarely deal with key elements used in the fashion industry, including professional knowledge, as well as fashion and functional requirements of the customer and their relations with product technical parameters. In particular, product design plays an essential role in the whole fashion supply chain and should be paid more attention to in the process of digitalization and intelligentization of fashion companies. In this context, we originally developed an interactive fashion and garment design system by systematically integrating a number of data-driven services of garment design recommendation, 3D virtual garment fitting visualization, design knowledge base, and design parameters adjustment. This system enables close interactions between the designer, consumer, and manufacturer around the virtual product corresponding to each design solution. In this way, the complexity of the product design process can drastically be reduced by directly integrating the consumer’s perception and professional designer’s knowledge into the garment computer-aided design (CAD) environment. Furthermore, for a specific consumer profile, the related computations (design solution recommendation and design parameters adjustment) are performed by using a number of intelligent algorithms (BIRCH, adaptive Random Forest algorithms, and association mining) and matching with a formalized design knowledge base. The proposed interactive design system has been implemented and then exposed through a REST API for designing garments meeting the consumer’s personalized fashion requirements by repeatedly running the cycle of design recommendation—virtual garment fitting—online evaluation of designer and consumer—design parameters adjustment—design knowledge base creation and updating. The effectiveness of the proposed system has been validated through a business case of personalized men’s shirt design. Full article
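A minimal sketch of one building block named in the abstract, BIRCH clustering of consumer body-measurement profiles with silhouette-score evaluation over candidate thresholds (scikit-learn); the measurement data and threshold grid are placeholders.

```python
import numpy as np
from sklearn.cluster import Birch
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# Placeholder body-measurement profiles (e.g., chest, waist, hip, height in cm).
rng = np.random.default_rng(1)
X = StandardScaler().fit_transform(
    rng.normal([98, 84, 100, 176], [8, 9, 7, 7], (300, 4)))

# Sweep the BIRCH threshold and keep the clustering with the best silhouette score,
# mirroring the threshold-vs-score evaluation described for the design system.
best = (None, -1.0, None)
for threshold in (0.3, 0.5, 0.7, 0.9, 1.1):
    labels = Birch(threshold=threshold, n_clusters=None).fit_predict(X)
    if 1 < len(set(labels)) < len(X):
        score = silhouette_score(X, labels)
        if score > best[1]:
            best = (threshold, score, labels)
print(f"best threshold = {best[0]}, silhouette = {best[1]:.3f}")
```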
Show Figures

Figure 1: Data-driven service architecture.
Figure 2: The structure of the proposed garment knowledge base.
Figure 3: The flow chart of the data-driven interactive design system.
Figure 4: Cluster evaluation corresponding to different threshold.
Figure 5: Cluster formation with silhouette score 0.3846.
Figure 6: Architecture of ease prediction model.
Figure 7: Architecture of RBFNN [37].
Figure 8: RBFNN accuracy curve.
Figure 9: BPNN accuracy curve.
Figure 10: 3D ease prediction service module.
Figure 11: General architecture of the garment co-design interactive system.
Figure 12: Shirt before adjustment.
Figure 13: Shirt after adjustment.
Figure 14: Shirt fitting before adjustment.
Figure 15: Shirt fitting after adjustment.
14 pages, 5115 KiB  
Article
Photonic Integrated Interrogator for Monitoring the Patient Condition during MRI Diagnosis
by Mateusz Słowikowski, Andrzej Kaźmierczak, Stanisław Stopiński, Mateusz Bieniek, Sławomir Szostak, Krzysztof Matuk, Luc Augustin and Ryszard Piramidowicz
Sensors 2021, 21(12), 4238; https://doi.org/10.3390/s21124238 - 21 Jun 2021
Cited by 10 | Viewed by 3392
Abstract
In this work, we discuss the idea and practical implementation of an integrated photonic circuit-based interrogator of fiber Bragg grating (FBG) sensors dedicated to monitoring the condition of patients undergoing Magnetic Resonance Imaging (MRI) diagnosis. The presented solution is based on an Arrayed Waveguide Grating (AWG) demultiplexer fabricated in generic indium phosphide technology. We demonstrate the consecutive steps of development of the device, from design to a demonstrator version of the system with confirmed functionality for monitoring the respiratory rate of the patient. The results, compared with those obtained using a commercially available bulk interrogator, confirmed both the general concept and the proper operation of the device. Full article
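Although the paper's processing chain is implemented in hardware and firmware, the respiratory-rate read-out it demonstrates can be mimicked offline; the sketch below, with an entirely synthetic photodiode-current trace and an assumed sampling rate, simply locates the dominant low-frequency FFT peak.

```python
# Rough sketch (assumed, not the authors' code): estimating respiratory rate
# from a slowly varying photodiode-current trace via its dominant FFT peak.
import numpy as np

fs = 50.0                                    # assumed sampling rate in Hz
t = np.arange(0, 60, 1 / fs)                 # one minute of data
current = 1e-6 * (1 + 0.05 * np.sin(2 * np.pi * 0.25 * t))  # synthetic trace, 15 breaths/min

x = current - current.mean()                 # remove the DC photocurrent level
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
band = (freqs > 0.05) & (freqs < 1.0)        # plausible breathing band: 3-60 breaths/min
f_breath = freqs[band][np.argmax(spectrum[band])]
print(f"Estimated respiratory rate: {60 * f_breath:.1f} breaths/min")
```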
Show Figures

Figure 1: The general idea of the system for monitoring the vital functions of a patient undergoing an MRI procedure.
Figure 2: Readout system for FBG sensor network: (a) Broadband light source, (b) circulator, (c) optical fiber with FBG sensors, (d) spectrometer, (e) array of photodiodes.
Figure 3: (a) Layout of PIC containing two integrated interrogators based on 36-channel AWGs having 75 GHz (top) and 50 GHz (bottom) channel spacing; (b) optical micrograph of a fabricated optical interrogator circuit.
Figure 4: Transmission spectrum of (a) 36 channels; (b) two adjacent channels of the 50 GHz channel-spaced AWG-based interrogator.
Figure 5: Transmission spectrum of 16 channels of the 50 GHz channel-spaced AWG-based interrogator with an active SOA.
Figure 6: Schematic representation of the measurement system incorporating the PIC-based interrogator.
Figure 7: Comparison of the shape of the FBG reflection spectrum collected with an optical spectrum analyzer with the response of 14 consecutive photodiodes of the 50 GHz channel-spaced AWG-based interrogator.
Figure 8: (a) Photodiode names described on the layout; (b) photodiode current measured during the experimental two cycles of fiber tensioning and loosening.
Figure 9: (a) Close up on the experiment shown in Figure 8; (b) corresponding reflection spectra recorded with the reference interrogator at given tensions applied.
Figure 10: (a) Functional block diagram of the dedicated electronic driver; (b) assembly of the packaged interrogator chip.
Figure 11: (a) Influence of temperature on FBG and PIC interrogator response; (b) response of the reference Ibsen I-MON 256.
Figure 12: Regular breathing and the effect of FBG heating up captured by (a) the PIC interrogator; (b) the Ibsen I-MON 256 interrogator.
Figure 13: Previous measurement close-up with breath results captured by (a) the PIC interrogator; (b) the Ibsen I-MON 256 interrogator.
Figure 14: Result of measurement with a person lying on the mattress and changing breathing pace: regular breathing (R), apnea (A), fast breathing (F) and regular breathing (R), captured by (a) the PIC interrogator; (b) the Ibsen I-MON 256 interrogator. The effect of FBG heating up is also visible.
11 pages, 604 KiB  
Communication
Influence of Features on Accuracy of Anomaly Detection for an Energy Trading System
by Hoon Ko, Kwangcheol Rim and Isabel Praça
Sensors 2021, 21(12), 4237; https://doi.org/10.3390/s21124237 - 21 Jun 2021
Cited by 5 | Viewed by 2376
Abstract
The biggest problem with conventional anomaly signal detection based on features is that it is difficult to use in real time and it requires processing of network signals. Furthermore, analyzing network signals in real time demands a vast amount of processing for each signal, as each protocol contains various pieces of information. This paper proposes anomaly detection based on analyzing the relationships among features and feeding them to an anomaly detection model. The model analyzes the anomaly of network signals based on anomaly feature detection. The features selected for anomaly detection do not require constant network signal updates or real-time processing of these signals. When the selected features are found in a received signal, the signal is registered as a potential anomaly signal and is then monitored steadily until it is determined to be either an anomaly or a normal signal. In terms of results, the model determined anomalies with 99.7% (0.997) accuracy for f(4)(S0), and for f(4)(REJ) it received 11,233 signals, including 171 anomalies, and judged them as normal or anomaly with 98.7% (0.987) accuracy. Full article
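The feature-relationship analysis the abstract describes can be approximated with a simple pairwise-correlation pass over the recorded features; the column names and data below are placeholders, not the paper's actual protocol fields.

```python
# Hypothetical sketch of the feature-relationship step: pairwise correlations
# among candidate features of network records (synthetic data, invented names).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
records = pd.DataFrame({
    "duration": rng.exponential(1.0, 1000),
    "src_bytes": rng.poisson(500, 1000),
    "dst_bytes": rng.poisson(300, 1000),
    "count": rng.integers(1, 50, 1000),
})
corr = records.corr()        # relationship among features
print(corr.round(2))
# Features strongly related to known anomaly labels would be kept, so that a
# signal only needs deeper inspection when those features appear in it.
```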
(This article belongs to the Collection Intelligent Security Sensors in Cloud Computing)
Show Figures

Figure 1: Normal Signal (NS) vs. Anomaly Signal (AS).
Figure 2: Energy Network for an Energy Trade Market.
Figure 3: ADM (Anomaly Detection Model).
Figure 4: Collection of the Network Signal.
Figure 5: Correlation of each feature.
Figure 6: Analysis between service and flag.
Figure 7: Analysis between features.
15 pages, 2100 KiB  
Article
Measurement of Ex Vivo Liver, Brain and Pancreas Thermal Properties as Function of Temperature
by Ahad Mohammadi, Leonardo Bianchi, Somayeh Asadi and Paola Saccomandi
Sensors 2021, 21(12), 4236; https://doi.org/10.3390/s21124236 - 21 Jun 2021
Cited by 41 | Viewed by 4257
Abstract
The ability to predict heat transfer during hyperthermal and ablative techniques for cancer treatment relies on understanding the thermal properties of biological tissue. In this work, the thermal properties of ex vivo liver, pancreas and brain tissues are reported as a function of temperature. The thermal diffusivity, thermal conductivity and volumetric heat capacity of these tissues were measured in the temperature range from 22 to around 97 °C. Concerning the pancreas, a phase change occurred around 45 °C; therefore, its thermal properties were investigated only until this temperature. Results indicate that the thermal properties of the liver and brain have a non-linear relationship with temperature in the investigated range. In these tissues, the thermal properties were almost constant until 60 to 70 °C and then gradually changed until 92 °C. In particular, the thermal conductivity increased by 100% for the brain and 60% for the liver up to 92 °C, while thermal diffusivity increased by 90% and 40%, respectively. However, the heat capacity did not significantly change in this temperature range. The thermal conductivity and thermal diffusivity were dramatically increased from 92 to 97 °C, which seems to be due to water vaporization and state transition in the tissues. Moreover, the measurement uncertainty, determined at each temperature, increased after 92 °C. In the temperature range of 22 to 45 °C, the thermal properties of pancreatic tissue did not change significantly, in accordance with the results for the brain and liver. For the three tissues, the best fit curves are provided with regression analysis based on measured data to predict the tissue thermal behavior. These curves describe the temperature dependency of tissue thermal properties in a temperature range relevant for hyperthermia and ablation treatments and may help in constructing more accurate models of bioheat transfer for optimization and pre-planning of thermal procedures. Full article
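The temperature-dependent best-fit curves mentioned above can be reproduced in spirit with an ordinary polynomial regression; the sketch below uses synthetic conductivity values and an arbitrarily chosen cubic, so the numbers are illustrative only.

```python
# Illustrative only: fitting a polynomial curve of thermal conductivity versus
# temperature. The data points are synthetic placeholders, not measured values.
import numpy as np

T = np.array([22, 30, 40, 50, 60, 70, 80, 85, 90, 92])                       # degC (synthetic)
k = np.array([0.52, 0.52, 0.53, 0.53, 0.54, 0.58, 0.68, 0.75, 0.85, 1.04])   # W/(m K) (synthetic)

coeffs = np.polyfit(T, k, deg=3)     # cubic chosen purely as an example
fit = np.poly1d(coeffs)
print("k(37 degC) ~", round(float(fit(37.0)), 3), "W/(m K)")
print("k(90 degC) ~", round(float(fit(90.0)), 3), "W/(m K)")
```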
(This article belongs to the Special Issue Sensing for Biomedical Applications)
Show Figures

Figure 1: (a) Schematic view of the experimental setup; (b) picture of the container filled with liver, immersed in the temperature-controlled water bath, and including TEMPOS’s probe and needle housing FBG sensors.
Figure 2: Temperature distribution across the tissue depths. This measurement refers to one of the experiments performed in the brain.
Figure 3: (a) Thermal conductivity, (b) thermal diffusivity and (c) volumetric heat capacity for ex vivo porcine livers as a function of temperature and their associated uncertainty.
Figure 4: (a) Thermal conductivity, (b) thermal diffusivity and (c) volumetric heat capacity for ex vivo calf brains during heating.
Figure 5: (a) Thermal conductivity, (b) thermal diffusivity and (c) volumetric heat capacity for ex vivo porcine pancreases during heating.
20 pages, 4820 KiB  
Article
A Simple Neural Network for Collision Detection of Collaborative Robots
by Michał Czubenko and Zdzisław Kowalczuk
Sensors 2021, 21(12), 4235; https://doi.org/10.3390/s21124235 - 21 Jun 2021
Cited by 19 | Viewed by 5324
Abstract
Due to the epidemic threat, more and more companies decide to automate their production lines. Given the lack of adequate security or space, in most cases, such companies cannot use classic production robots. The solution to this problem is the use of collaborative robots (cobots). However, the required equipment (force sensors) or alternative methods of detecting a threat to humans are usually quite expensive. The article presents the practical aspect of collision detection with the use of a simple neural architecture. A virtual force and torque sensor, implemented as a neural network, may be useful in a team of collaborative robots. Four different approaches are compared in this article: auto-regressive (AR), recurrent neural network (RNN), convolutional long short-term memory (CNN-LSTM) and mixed convolutional LSTM network (MC-LSTM). These architectures are analyzed at different levels of input regression (motor current, position, speed, control velocity). This sensor was tested on the original CURA6 robot prototype (Cooperative Universal Robotic Assistant 6) by Intema. The test results indicate that the MC-LSTM architecture is the most effective with the regression level set at 12 samples (at 24 Hz). The mean absolute prediction error obtained by the MC-LSTM architecture was approximately 22 Nm. The conducted external test (72 different signals with collisions) shows that the presented architecture can be used as a collision detector. The MC-LSTM collision detection f1 score with the optimal threshold was 0.85. A well-developed virtual sensor based on such a network can be used to detect various types of collisions of cobot or other mobile or stationary systems operating on the basis of human-machine interaction. Full article
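A residual-based detector in the spirit of the virtual sensor described above (flagging samples whose current-prediction error leaves a γσ band) can be sketched as follows; the predicted current, noise level and γ value are assumptions, not the CURA6 data or the paper's tuned thresholds.

```python
# Sketch under stated assumptions: flag a collision when the current-prediction
# residual exceeds a gamma*sigma band estimated from collision-free operation.
import numpy as np

def detect_collisions(i_meas, i_pred, gamma=5.0, calib_len=500):
    """Boolean mask of samples whose residual leaves the gamma*sigma band."""
    residual = i_meas - i_pred
    mu = residual[:calib_len].mean()           # statistics from collision-free motion
    sigma = residual[:calib_len].std()
    return np.abs(residual - mu) > gamma * sigma

rng = np.random.default_rng(7)
i_pred = np.sin(np.linspace(0, 20, 2000))      # hypothetical predicted motor current
i_meas = i_pred + rng.normal(0, 0.05, 2000)    # measured current = prediction + noise
i_meas[1500:1520] += 1.0                       # injected disturbance standing in for a collision
alarms = detect_collisions(i_meas, i_pred)
print("alarm samples:", int(alarms.sum()), "- first alarm at index", int(np.argmax(alarms)))
```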
(This article belongs to the Collection Smart Robotics for Automation)
Show Figures

Figure 1: The CURA6 robot.
Figure 2: Example of 1000 samples (about 0.7 s) of measurement data for motor/joint #1 (bright red stripe shows the stop period while light green shows movement).
Figure 3: Structures of tested models predicting motor current (Î) on the output layer (6 neuron nodes), with a linear activation function.
Figure 4: Experimental results for the mean absolute error—MAE (solid lines) and the root mean-square error—RMS (dashed lines) as a function of the regression order, where the descriptive legend is common.
Figure 5: Current trajectories in ampere obtained using the MC-LSTM model for regr° = 12 for example joint #1.
Figure 6: Diagnostic signals using the MC-LSTM(12) model for motor #1, including motor current I (blue line) and its prediction Î (red line), mean motor current μ_I (green line along with a green confidence area of width γσ_I), and prediction bias I − Î (dark blue) relative to its mean (purple confidence area around zero of width γσ_ΔÎ). The statistical detection moments (j) are denoted by steel blue dots, while the prediction moments (jj) are marked as scarlet dots and the optimal threshold detector (jjj) is denoted by orange dots.
Figure 7: Receiver operation characteristic (ROC) for the MC-LSTM-12 model with thresholds (inscribed in tags) taken from the range (1, 2; 2, 1).
19 pages, 1667 KiB  
Article
A Semantic and Knowledge-Based Approach for Handover Management
by Fulvio Yesid Vivas, Oscar Mauricio Caicedo and Juan Carlos Nieves
Sensors 2021, 21(12), 4234; https://doi.org/10.3390/s21124234 - 21 Jun 2021
Cited by 5 | Viewed by 2609
Abstract
Handover Management (HM) is pivotal for providing service continuity, high reliability, and extremely low latency, and for meeting very high data rates in wireless communications. Current HM approaches based on a single criterion may lead to unnecessary and frequent handovers because of a partial network view that is constrained to information about link quality. In turn, HM approaches based on multiple criteria may lead to handover failures and wrong network selection, decreasing throughput and increasing packet loss in the network. This paper proposes SIM-Know, an approach for improving HM. SIM-Know improves HM by including a Semantic Information Model (SIM) that enables context-aware and multicriteria handover decisions. SIM-Know also introduces a SIM-based distributed Knowledge Base Profile (KBP) that provides local and global intelligence for making contextual and proactive handover decisions. We evaluated SIM-Know in an emulated wireless network. When the end-user device moves at low and moderate speeds, the results show that our approach outperforms Signal Strong First (SSF, a single-criterion approach) and behaves similarly to the Analytic Hierarchy Process combined with the Technique for Order Preference by Similarity to the Ideal Solution (AHP-TOPSIS, a multicriteria approach) regarding the number of handovers and the number of throughput drops. SSF outperforms SIM-Know and AHP-TOPSIS on the handover latency metric because SSF runs a straightforward process for making handover decisions. At high speeds, SIM-Know outperforms SSF and AHP-TOPSIS regarding the number of handovers and the number of throughput drops and, further, improves the throughput, delay, jitter, and packet loss in the network. Considering the obtained results, we conclude that SIM-Know is a practical and attractive solution for cognitive HM. Full article
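As a reference for the multicriteria baseline named in the abstract, the following is a minimal TOPSIS ranking sketch; the candidate networks, criteria and weights are invented, and the AHP weighting step is omitted.

```python
# Minimal TOPSIS sketch (the multicriteria baseline compared in the paper);
# candidate networks, criteria and weights below are placeholders.
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: alternatives x criteria; benefit[j] is True when larger is better."""
    v = matrix / np.linalg.norm(matrix, axis=0) * weights    # normalise, then weight
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    return d_minus / (d_plus + d_minus)                       # closeness coefficient

# columns (all invented): link quality, throughput (Mbps), delay (ms)
candidates = np.array([[60.0, 40.0, 30.0],
                       [70.0, 80.0, 20.0],
                       [55.0, 20.0, 60.0]])
scores = topsis(candidates,
                weights=np.array([0.3, 0.4, 0.3]),
                benefit=np.array([True, True, False]))
print("closeness scores:", scores.round(3), "-> select network", int(np.argmax(scores)))
```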
(This article belongs to the Special Issue Green Communications under Delay Tolerant Networking)
Show Figures

Figure 1: Semantic Information Model.
Figure 2: Knowledge Base Profile.
Figure 3: SIM-Know Operation.
Figure 4: KBP_S Data Format.
Figure 5: SIM-Know in 5G Intra-AMF/UPF Handover.
Figure 6: Test Environment.
Figure 7: Throughput Drops.
Figure 8: Proactivity.
Figure 9: Handover Latency.
Figure 10: Impact on VoIP Traffic.
Figure 11: Impact on TCP Traffic.
18 pages, 5821 KiB  
Article
Utterance Level Feature Aggregation with Deep Metric Learning for Speech Emotion Recognition
by Bogdan Mocanu, Ruxandra Tapu and Titus Zaharia
Sensors 2021, 21(12), 4233; https://doi.org/10.3390/s21124233 - 20 Jun 2021
Cited by 15 | Viewed by 3555
Abstract
Emotion is a form of high-level paralinguistic information that is intrinsically conveyed by human speech. Automatic speech emotion recognition is an essential challenge for various applications, including mental disease diagnosis, audio surveillance, human behavior understanding, e-learning and human–machine/robot interaction. In this paper, we introduce a novel speech emotion recognition method based on the Squeeze and Excitation ResNet (SE-ResNet) model fed with spectrogram inputs. In order to overcome the limitations of state-of-the-art techniques, which fail to provide a robust feature representation at the utterance level, the CNN architecture is extended with a trainable discriminative GhostVLAD clustering layer that aggregates the audio features into a compact, single-utterance vector representation. In addition, an end-to-end neural embedding approach is introduced, based on an emotionally constrained triplet loss function. The loss function integrates the relations between the various emotional patterns and thus improves the latent space data representation. The proposed methodology achieves 83.35% and 64.92% global accuracy rates on the publicly available RAVDESS and CREMA-D datasets, respectively. When compared with the results provided by human observers, the gains in global accuracy scores exceed 24%. Finally, an objective comparative evaluation against state-of-the-art techniques demonstrates accuracy gains of more than 3%. Full article
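The embedding objective builds on the classical triplet loss; the sketch below shows only that standard formulation in PyTorch, since the paper's emotion-dependent constraint and margins are not reproduced here.

```python
# Standard triplet loss on utterance-level embeddings (the paper extends this
# with an emotion constraint, which is not reproduced in this sketch).
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    d_ap = F.pairwise_distance(anchor, positive)   # anchor vs same-emotion sample
    d_an = F.pairwise_distance(anchor, negative)   # anchor vs different-emotion sample
    return torch.clamp(d_ap - d_an + margin, min=0.0).mean()

emb = torch.randn(16, 128)                         # placeholder embedding batch
loss = triplet_loss(emb, emb + 0.01 * torch.randn_like(emb), torch.randn(16, 128))
print(float(loss))
```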
(This article belongs to the Special Issue Emotion Monitoring System Based on Sensors and Data Analysis)
Show Figures

Figure 1: The proposed methodological framework with the main steps involved: audio-stream preprocessing, feature extraction and utterance level aggregation, system training with emotion metric learning and SVM training.
Figure 2: Audio signal pre-processing: voice activity detection, silence removal and spectrogram image computation.
Figure 3: SE-ResNet CNN extension with a GhostVLAD layer for feature aggregation.
Figure 4: Emotion representation in the valence-arousal space using Mikel’s wheel of emotions.
Figure 5: Difference between the triplet loss function and the emotion constraint. (a) The triplet loss function, and (b) the proposed emotion metric.
Figure 6: Visualization of the feature embedding using t-SNE on the CREMA-D dataset: (a) softmax loss, (b) triplet loss, and (c) emotion metric learning.
Figure 7: Visualization of the feature embedding using t-SNE on the RAVDESS dataset: (a) softmax loss, (b) triplet loss, and (c) emotion metric learning.
Figure 8: The statistical experimental results of the considered databases: (a) RAVDESS, (b) CREMA-D.
Figure 9: The confusion matrixes on the evaluation dataset (a) RAVDESS and (b) CREMA-D. (S1) The baseline method; (S2) SE-ResNet with multi-stage training; (S3) SE-ResNet with GhostVLAD layer; (S4) SE-ResNet with the GhostVLAD layer and the classical triplet loss function; (S5) The proposed framework, which involves SE-ResNet with the GhostVLAD aggregation layer and emotion constraint loss.
Figure 10: The system performance evaluation on RAVDESS and CREMA-D datasets with the different parameters involved: (a) the number of NetVLAD clusters (K) and (b) the number of GhostVLAD clusters (G).
Figure 11: The system performance evaluation for different values of the control margins α and β on (a) the RAVDESS dataset; (b) the CREMA-D dataset.
13 pages, 4681 KiB  
Article
The Accuracy of Patient-Specific Instrumentation with Laser Guidance in a Dynamic Total Hip Arthroplasty: A Radiological Evaluation
by Andrea Ferretti, Ferdinando Iannotti, Lorenzo Proietti, Carlo Massafra, Attilio Speranza, Andrea Laghi and Raffaele Iorio
Sensors 2021, 21(12), 4232; https://doi.org/10.3390/s21124232 - 20 Jun 2021
Cited by 10 | Viewed by 3172
Abstract
The functional positioning of components in a total hip arthroplasty (THA) and its relationship with individual lumbopelvic kinematics and a patient’s anatomy are being extensively studied. Patient-specific kinematic planning could be a game-changer; however, it should be accurately delivered intraoperatively. The main purpose of this study was to verify the reliability and accuracy of a patient-specific instrumentation (PSI) and laser-guided technique to replicate preoperative dynamic planning. Thirty-six patients were prospectively enrolled and received dynamic hip preoperative planning based on three functional lateral spinopelvic X-rays and a low dose CT scan. Three-dimensional (3D) printed PSI guides and laser-guided instrumentation were used intraoperatively. The orientation of the components, osteotomy level and change in hip length and offset were measured on postoperative CT scans and compared with the planned preoperative values. The length of surgery was compared with that of a matched group of thirty-six patients who underwent a conventional THA. The mean absolute deviation from the planned inclination and anteversion was 3.9° and 4.4°, respectively. In 92% of cases, both the inclination and anteversion were within +/− 10° of the planned values. Regarding the osteotomy level, offset change and limb length change, the mean deviation was, respectively, 1.6 mm, 2.6 mm and 2 mm. No statistically significant difference was detected when comparing the planned values with the achieved values. The mean surgical time was 71.4 min in the PSI group and 60.4 min in the conventional THA group (p < 0.05). Patient-specific and laser-guided instrumentation is safe and accurately reproduces dynamic planning in terms of the orientation of the components, osteotomy level, leg length and offset. Moreover, the increase in surgical time is negligible. Full article
(This article belongs to the Special Issue Smart Sensors Applications in Total Joint Arthroplasty)
Show Figures

Figure 1: Nine different cup orientations are provided to the surgeon with an indication of the risk of impingement.
Figure 2: (a) Patient-specific femoral guide fitted to the proximal femur. (b) Femoral guide fitted to the proximal femur after the osteotomy cut.
Figure 3: (a) 3D acetabular model with the patient-specific guide in place. (b) The 3D printed form of the patient’s acetabulum is used as a model by the surgeon to guide them in the positioning of the laser guide.
Figure 4: Acetabular guide adapted inside the acetabulum. Once the guide is positioned, the curved handle is attached with a laser that will indicate the reference point for the positioning of the acetabular component.
Figure 5: (a) Pelvic reference pin and acetabular guide introducer topped with the laser pointer. The lights of both lasers were made to converge on the wall of the theater. (b) Demonstration of the removable laser adapted also to the top of the impaction handle; the coincidence of the lights is checked again to guide the cup impaction.
Figure 6: Measurement of the acetabular cup anteversion angle on a postoperative CT scan.
Figure 7: Scatter plot showing the position of the acetabular component within 5° (hashed line box) and 10° (solid line box) of deviation from the planned inclination and anteversion.
23 pages, 33650 KiB  
Article
Pavement Quality Index Rating Strategy Using Fracture Energy Analysis for Implementing Smart Road Infrastructure
by Samuel Abejide, Mohamed M. H. Mostafa, Dillip Das, Bankole Awuzie and Mujib Rahman
Sensors 2021, 21(12), 4231; https://doi.org/10.3390/s21124231 - 20 Jun 2021
Cited by 3 | Viewed by 6521
Abstract
Developing a responsive pavement-management infrastructure system is of paramount importance, accentuated by the quest for sustainability through adoption of the Road Traffic Management System. Technological advances have been witnessed in developed countries concerning the development of smart, sustainable transportation infrastructure. However, the same cannot be said of developing countries. In this study, the development of a pavement management system at the network level was examined to contribute towards a framework for evaluating a Pavement Quality Index and service-life capacity. Environmental surface response models in the form of temperature and moisture variations within the pavement were applied, using sensor devices connected to a data cloud system, to carry out mathematical analysis using a distinctive mesh-analysis deformation model. The results indicated variation in the Resilient Modulus of the pavement with increasing moisture content. An increase in moisture propagation increased the saturation of the unbound granular base, which reduced the elastic modulus of the sub-base and base layers and reduced the strength of the pavement, resulting in bottom-up cracks and cracking failure. The horizontal deformation decreased, indicating that the material was experiencing work hardening and that further stress would not result in significant damage. An increasing temperature gradient resulted in reduced stiffness of the asphalt layer. In tropical regions, this can result in rutting failure which, over time, results in top-down cracks and potholes, coupled with increasing moisture content. Full article
(This article belongs to the Section Communications)
Show Figures

Figure 1: Pictorial representation of H—Gauge manufactured by Kyowa.
Figure 2: Proposed Arduino Block Diagram for Determining Moisture and Temperature Values.
Figure 3: Proposed Technology for Moisture Instrumentation of Pavement Sub-Grade Layer.
Figure 4: Smart Instrumentation of Pavement using Moisture Instrumentation Sensor for Pavement Analysis. (a) A top view of the sensor at the instrumentation site; (b) Pictorial view of the sensor embedded into the road before refill and compaction.
Figure 5: Strains under Cyclic Loading [37].
Figure 6: Experimental Layout for Smart Pavement-Instrumentation and Modelling.
Figure 7: Pavement Instrumentation Humidity Values Obtained from Sensor Probes.
Figure 8: Pavement Instrumentation Humidity Values Obtained from Sensor Probes.
Figure 9: Pavement Instrumentation Temperature Values Obtained from Sensor Probes.
Figure 10: Asphalt-Concrete Fracture Energy against Temperature Sensor 1.
Figure 11: Asphalt-Concrete Fracture Energy against Temperature Sensor 2.
Figure 12: Relationship between AC Fracture Energy and Resilient Modulus E1.
Figure 13: Relationship between AC Fracture Energy and Resilient Modulus E2.
Figure A1: Pavement Instrumentation Humidity Values Obtained from Sensor Probe.
Figure A2: Pavement Instrumentation Temperature Values Obtained from Sensor Probe.
14 pages, 1311 KiB  
Article
Near-Infrared Reflectance Spectroscopy for Predicting the Phospholipid Fraction and the Total Fatty Acid Composition of Freeze-Dried Beef
by Guillermo Ripoll, Sebastiana Failla, Begoña Panea, Jean-François Hocquette, Susana Dunner, Jose Luis Olleta, Mette Christensen, Per Ertbjerg, Ian Richardson, Michela Contò, Pere Albertí, Carlos Sañudo and John L. Williams
Sensors 2021, 21(12), 4230; https://doi.org/10.3390/s21124230 - 20 Jun 2021
Cited by 5 | Viewed by 3422
Abstract
Research on fatty acids (FA) is important because their intake is related to human health. NIRS can be a useful tool for estimating the FA composition of beef, but the high moisture content and the high absorbance of water make it difficult to calibrate the analyses. This work evaluated near-infrared reflectance spectroscopy as a tool to assess the total fatty acid composition and the phospholipid fraction of fatty acids of beef using freeze-dried meat. An average of 22 unrelated purebred young bulls from each of 15 European breeds were reared on a common concentrate-based diet. A total of 332 longissimus thoracis steaks were analysed for fatty acid composition, and a freeze-dried sample was subjected to near-infrared spectral analysis. 220 samples (67%) were used as a calibration set, with the remaining 110 (33%) used for validation of the models obtained. There was a large variation in the total FA concentration across the animals, giving a good data set for the analysis; the coefficient of variation was nearly 68% for the monounsaturated FA but only 27% for the polyunsaturated fatty acids (PUFA). The PLS method was used to develop the prediction models. The models for the phospholipid fraction had a low R2p and a high standard error, while the models for neutral lipids had the best performance in general. It was not possible to obtain a good prediction for many individual PUFA, which were present at low concentrations and were less variable than other FA. The best models were developed for total FA, saturated FA, 9c18:1 and 16:1, with R2p greater than 0.76. This study indicates that NIRS is a feasible and useful tool for screening purposes and has the potential to predict most of the FA of freeze-dried beef. Full article
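The calibration workflow (SNV pre-treatment followed by PLS regression with a 67%/33% split) can be sketched as follows; the spectra and reference values are synthetic stand-ins for the NIR and reference chemistry data.

```python
# Illustrative sketch, not the paper's calibration: SNV pre-treatment of NIR
# spectra followed by PLS regression with a 67%/33% calibration/validation split.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
spectra = rng.normal(size=(330, 700))                 # placeholder log(1/R) spectra
fa_total = spectra[:, 100:110].sum(axis=1) + rng.normal(0, 0.5, 330)  # synthetic FA reference

# Standard normal variate: centre and scale each spectrum individually.
snv = (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)
X_cal, X_val, y_cal, y_val = train_test_split(snv, fa_total, test_size=0.33, random_state=0)

pls = PLSRegression(n_components=10)                  # number of latent variables is arbitrary here
pls.fit(X_cal, y_cal)
print("R2 on validation set:", round(r2_score(y_val, pls.predict(X_val).ravel()), 3))
```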
(This article belongs to the Special Issue Using Vis-NIR Spectroscopy for Predicting Quality Compounds in Foods)
Show Figures

Figure 1: (a) Average (bold line), maximum and minimum (thin lines) of raw NIR spectra. (b) Standard normal variate (SNV) and standard normal variate plus detrending (SNVD) pre-treatments. (c) SNVD and first-order derivative pre-treatments. Spectra were recorded as log(1/R).
Figure 2: Regression coefficients of each wavelength for the model of total fatty acids.
Figure 3: Scatter plots of models of 16:1, 11c18:1, total FA, SFA and MUFA of the total fraction.
17 pages, 12492 KiB  
Article
A Novel Dentary Bone Conduction Device Equipped with Laser Communication in DSP
by Jau-Woei Perng, Tung-Li Hsieh and Cheng-Yan Guo
Sensors 2021, 21(12), 4229; https://doi.org/10.3390/s21124229 - 20 Jun 2021
Cited by 3 | Viewed by 3717
Abstract
In this study, we designed a dentary bone conduction system that transmits and receives audio by laser. The main objective of this research was to propose a complete hardware design method, including a laser audio transmitter and receiver and digital signal processor (DSP) based digital signal processing system. We also present a digital filter algorithm that can run on a DSP in real time. This experiment used the CMU ARCTIC databases’ human-voice reading audio as the standard audio. We used a piezoelectric sensor to measure the vibration signal of the bone conduction transducer (BCT) and separately calculated the signal-to-noise ratio (SNR) of the digitally filtered audio output and the unfiltered audio output using DSP. The SNR of the former was twice that of the latter, and the BCT output quality significantly improved. From the results, we can conclude that the dentary bone conduction system integrated with a DSP digital filter enhances sound quality. Full article
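As a rough stand-in for the DSP filtering stage, the sketch below applies a zero-phase digital low-pass filter to a noisy tone and compares the SNR before and after; the cutoff, filter order and signals are assumptions, not the paper's filter design.

```python
# Assumed illustration (not the authors' firmware): low-pass filtering of a
# noisy voice-band signal and SNR comparison against the clean reference.
import numpy as np
from scipy import signal

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
clean = np.sin(2 * np.pi * 440 * t)                              # placeholder "voice" tone
noisy = clean + 0.3 * np.random.default_rng(0).normal(size=t.size)

# 6th-order Butterworth low-pass at an assumed voice-band cutoff, applied
# forward-backward (zero phase) so the before/after comparison stays aligned.
sos = signal.butter(6, 3400, btype="low", fs=fs, output="sos")
filtered = signal.sosfiltfilt(sos, noisy)

def snr_db(reference, x):
    noise = x - reference
    return 10 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

print("SNR before filtering:", round(snr_db(clean, noisy), 1), "dB")
print("SNR after filtering: ", round(snr_db(clean, filtered), 1), "dB")
```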
(This article belongs to the Section Communications)
Show Figures

Figure 1: Structure of the human ear.
Figure 2: The schematic diagram of a dentary bone conduction system applied to humans.
Figure 3: The PCB layout of the laser transmission device.
Figure 4: The PCB layout of the laser receiving device.
Figure 5: Laser audio transmission device. (a) displays the switch: up is open, down is closed. (b) shows the voltage switch, set to a voltage of 4.2 V, with the jumper changed to 5 V (not recommended). (c) displays the audio input: one may use a 3.5-mm earphone plug to input the computer audio, and the laser will output the audio signal. (d) shows the audio output: one can confirm that the input audio is used normally and directly connects to a 3.5-mm earphone. (e) displays the switch between the microphone and audio input: it can be set to the left for audio input or to the right for microphone input.
Figure 6: Laser audio signal receiver. (a) shows the PAM8403 output earphone, which can drive bone conduction. (b) shows the trans-impedance amplifier (TIA) output earphone, which can be used to listen to the sound transmitted by the solar panel or photodiode. (c) displays the power switch, which turns on and off. (d) displays the input of the piezoelectric sensor.
Figure 7: System architecture.
Figure 8: The audio signal input circuit of the laser audio transmission device.
Figure 9: The microphone circuit of the laser audio transmission device.
Figure 10: The trans-impedance amplifier of the laser audio signal receiver circuit.
Figure 11: The voltage follower of the laser audio signal receiver circuit.
Figure 12: The feed-forward amplifier of the laser audio signal receiver circuit.
Figure 13: The Sallen–Key low-pass filter of the laser audio signal receiver circuit.
Figure 14: STM32F4-Discovery and 12-bit ADC.
Figure 15: Digital signal processing flowchart.
Figure 16: Connection between the laser audio receiver and the STM32F407 processor.
Figure 17: Time-domain signal of TIA and DSP output.
Figure 18: Time-domain signal of vibration of BCT.
22 pages, 3092 KiB  
Article
A Standard-Based Internet of Things Platform and Data Flow Modeling for Smart Environmental Monitoring
by Tércio Filho, Luiz Fernando, Marcos Rabelo, Sérgio Silva, Carlos Santos, Maria Ribeiro, Ian A. Grout, Waldir Moreira and Antonio Oliveira-Jr
Sensors 2021, 21(12), 4228; https://doi.org/10.3390/s21124228 - 20 Jun 2021
Cited by 6 | Viewed by 3857
Abstract
The environment consists of the interaction between the physical, biotic, and anthropic means. As this interaction is dynamic, environmental characteristics tend to change naturally over time, requiring continuous monitoring. In this scenario, the Internet of Things (IoT), together with traditional sensor networks, allows various environmental aspects, such as air, water, atmospheric, and soil conditions, to be monitored and the data to be sent to different users and remote applications. This paper proposes a Standard-based Internet of Things Platform and Data Flow Modeling for Smart Environmental Monitoring. The platform consists of an IoT network based on the IEEE 1451 standard, which has a network capable application processor (NCAP) node (coordinator) and multiple wireless transducer interface module (WTIM) nodes. A WTIM node consists of one or more transducers, a data transfer interface, and a processing unit. With the developed network, it is possible to collect environmental data at different points within a city landscape, to analyse the communication distance between the WTIM nodes, and to monitor the number of bytes transferred by each network node. In addition, a dynamic model of data flow is proposed in which the performance of the NCAP and WTIM nodes is described through state variables, relating directly to the information exchange dynamics between the communicating nodes in the mesh network. The modeling results showed stability in the network. Such stability means that the network can preserve its flow of information for a long period of time without losing frames or packets due to congestion. Full article
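The MQTT read-out shown in the figures can be sketched with a generic subscriber; the broker address, topic tree and the paho-mqtt library (1.x callback API) are assumptions, since the paper does not specify the client implementation.

```python
import paho.mqtt.client as mqtt  # assumes the paho-mqtt 1.x callback API

def on_message(client, userdata, message):
    # Each WTIM reading forwarded by the NCAP arrives as one published message.
    print(f"{message.topic}: {message.payload.decode()}")

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.org", 1883)   # placeholder broker address
client.subscribe("city/environment/#")       # placeholder topic tree
client.loop_forever()                        # block and print readings as they arrive
```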
(This article belongs to the Section Internet of Things)
Show Figures

Figure 1: Network topology developed.
Figure 2: Network model developed.
Figure 3: Flowchart of the logical part of the NCAP.
Figure 4: WTIM developed in the laboratory.
Figure 5: Signal conditioning circuit diagram.
Figure 6: Flowchart of the logical part of WTIM.
Figure 7: Generic protocol with AT command.
Figure 8: Request command.
Figure 9: Response command.
Figure 10: Network topology applied in the city.
Figure 11: Reading of data via MQTT.
Figure 12: Architecture of the sensor networks.
Figure 13: Quasi-characteristic polynomial.
Figure 14: Performance graph of S1 and S2 nodes.
Figure 15: S3 node performance graph.
Figure 16: Phase picture of the sensor network.
27 pages, 7697 KiB  
Article
Unmanned Aerial Vehicles (UAVs) for Physical Progress Monitoring of Construction
by Nicolás Jacob-Loyola, Felipe Muñoz-La Rivera, Rodrigo F. Herrera and Edison Atencio
Sensors 2021, 21(12), 4227; https://doi.org/10.3390/s21124227 - 20 Jun 2021
Cited by 37 | Viewed by 4990
Abstract
The physical progress of a construction project is monitored by an inspector responsible for verifying and backing up progress information, usually through site photography. Progress monitoring has improved thanks to advances in image acquisition, computer vision, and the development of unmanned aerial vehicles (UAVs). However, no comprehensive and simple methodology exists to guide practitioners and facilitate the use of these methods. This research provides recommendations for periodically recording the physical progress of a construction site through the manual operation of UAVs and the use of point clouds obtained with photogrammetric techniques. The scheduled progress is then compared with the actual progress in a 4D BIM environment. This methodology was applied to the construction of a reinforced concrete residential building. The results showed that the methodology is effective for UAV operation on the work site, for using the photogrammetric visual records to monitor physical progress, and for communicating the work performed to the project stakeholders. Full article
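One simple way to compare the captured point cloud against the as-planned model, in the spirit of the superposition shown in the figures, is a nearest-neighbour distance check; the sketch below uses synthetic points, and the tolerance value is an arbitrary assumption.

```python
# Rough sketch (an assumption, not the authors' exact workflow): measure how
# far each as-built point lies from points sampled on the as-planned geometry.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(5)
planned_pts = rng.uniform(0, 10, size=(5000, 3))       # placeholder: sampled as-planned surface
asbuilt_pts = planned_pts + rng.normal(0, 0.02, size=planned_pts.shape)  # photogrammetric cloud

tree = cKDTree(planned_pts)
dist, _ = tree.query(asbuilt_pts)                       # distance to nearest planned point
tol = 0.05                                              # assumed acceptance tolerance in metres
built_ratio = np.mean(dist < tol)
print(f"{100 * built_ratio:.1f}% of captured points lie within {tol} m of the planned model")
```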
(This article belongs to the Section Remote Sensors)
Show Figures

Figure 1: Research methodology.
Figure 2: Flow chart of the proposed work progress monitoring inspection methodology.
Figure 3: (a) Definition of the top face and lateral faces according to the shape of the structure. (b) Flight paths in front of each face, horizontal and vertical steps.
Figure 4: Tools used in the application of the methodology to the case study.
Figure 5: Selected locations for complete inspection of the structure.
Figure 6: Key points detected in the image set during aerotriangulation.
Figure 7: Pose of each photograph rectified by aerotriangulation.
Figure 8: Superposition of point cloud and as-planned model on each inspection day—overlap at (a) day 0, (b) day 12, (c) day, (d) day 24, (e) day 30, and (f) day 41.
Figure 9: Identification and verification of construction conditions.
Figure 10: Different aspects observed in the reconstructed model: (a) Reality capture of the workspace of the case study, (b) stockpiling points and location of tools, (c) verification of concreting of elements, (d) potentially hazardous areas.
Figure 11: Visualization of the work schedule in the Naviswork Manage software.
17 pages, 4120 KiB  
Article
Signal Expansion Method in Indoor FMCW Radar Systems for Improving Range Resolution
by Seongmin Baek, Yunho Jung and Seongjoo Lee
Sensors 2021, 21(12), 4226; https://doi.org/10.3390/s21124226 - 20 Jun 2021
Cited by 14 | Viewed by 3366
Abstract
As various unmanned autonomous driving technologies, such as autonomous vehicles and autonomous drones, are being developed, research on FMCW radar, a sensor related to these technologies, is being actively conducted. The range resolution, a parameter for accurately detecting an object in an FMCW radar system, depends on the modulation bandwidth. Expensive radars have a large modulation bandwidth, use bands above 77 GHz, and are mainly used as in-vehicle radar sensors. However, such high-performance radars are costly and burdensome to use in areas that require precise sensing, such as indoor motion detection and autonomous drones. In this paper, the range resolution is improved beyond the limit of the available modulation bandwidth by extending the beat-frequency signal in the time domain through the proposed Adaptive Mirror Padding and Phase Correct Padding. The proposed algorithms perform similarly to the existing Zero Padding and Mirror Padding in terms of range RMSE, but improved results were confirmed through ρs, which indicates the size of the side lobe relative to the main lobe, and through the accurate detection rate of the OS-CFAR. For ρs with single targets, Adaptive Mirror Padding improved by about 3 times and Phase Correct Padding by about 6 times compared with the existing algorithms. The OS-CFAR results were divided into single and multiple targets to confirm the performance. For single targets, Adaptive Mirror Padding improved by about 10% and Phase Correct Padding by about 20% compared with the existing algorithms. For multiple targets, Phase Correct Padding improved by about 20% compared with the existing algorithms. The proposed algorithms were verified using the MATLAB tool and an actual FMCW radar. As the results were similar in the two experimental environments, it was verified that the algorithms work in a real radar as well. Full article
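The core idea, extending the beat signal before the FFT so the range spectrum is sampled on a finer grid, can be illustrated with plain zero padding and a naive mirror extension; the proposed Adaptive Mirror and Phase Correct Padding add further steps not reproduced here.

```python
# Simplified illustration of signal extension before the range FFT; the
# sampling rate, beat frequency and extension factor are arbitrary choices.
import numpy as np

fs, n = 1.0e6, 256
t = np.arange(n) / fs
beat = np.cos(2 * np.pi * 39.3e3 * t)            # synthetic beat tone between two FFT bins

def spectrum(x, total_len):
    return np.abs(np.fft.rfft(x, n=total_len))

zero_padded = spectrum(beat, 4 * n)              # classic zero padding
mirrored = np.concatenate([beat, beat[::-1], beat, beat[::-1]])  # naive mirror extension
mirror_padded = spectrum(mirrored, 4 * n)

print(f"bin spacing: {fs / n:.0f} Hz original vs {fs / (4 * n):.1f} Hz after 4x extension")
print("peak bin (zero pad):  ", int(np.argmax(zero_padded)))
print("peak bin (mirror pad):", int(np.argmax(mirror_padded)))
```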
(This article belongs to the Section Electronic Sensors)
Show Figures

Figure 1: FMCW radar system architecture.
Figure 2: Transmit and receive frequency signals over time in FMCW radar.
Figure 3: FMCW radar DSP architecture.
Figure 4: Proposed FMCW radar DSP architecture.
Figure 5: Proposed Adaptive Mirror Padding algorithm flowchart.
Figure 6: Adaptive Mirror Padding according to the index position of the pole.
Figure 7: Proposed FMCW radar DSP architecture.
Figure 8: Proposed Phase Correct algorithm flowchart.
Figure 9: Process of obtaining the Phase Correct Padding algorithm index.
Figure 10: FFT results by algorithm.
Figure 11: Algorithm simulation flowchart.
Figure 12: Optimal Threshold Value α, β.
Figure 13: MAE result of the proposed algorithm according to Sample Point.
Figure 14: Single Target Mean Absolute Error (Normalized Spectrum).
Figure 15: Multiple Target Mean Absolute Error (Normalized Spectrum).
Figure 16: Algorithm Test Flowchart.
16 pages, 3332 KiB  
Communication
A Novel Feature Extraction and Fault Detection Technique for the Intelligent Fault Identification of Water Pump Bearings
by Muhammad Irfan, Abdullah Saeed Alwadie, Adam Glowacz, Muhammad Awais, Saifur Rahman, Mohammad Kamal Asif Khan, Mohammad Jalalah, Omar Alshorman and Wahyu Caesarendra
Sensors 2021, 21(12), 4225; https://doi.org/10.3390/s21124225 - 20 Jun 2021
Cited by 15 | Viewed by 3468
Abstract
The reliable and cost-effective condition monitoring of the bearings installed in water pumps is a real challenge in industry. This paper presents a novel strong feature selection and extraction algorithm (SFSEA) to extract fault-related features from the instantaneous power spectrum (IPS). The three features extracted from the IPS using the SFSEA are fed to an extreme gradient boosting (XGB) classifier to reliably detect and classify minor bearing faults. The experiments performed on a lab-scale test setup demonstrated classification accuracy up to 100%, which is better than the previously reported fault classification accuracies and indicates the effectiveness of the proposed method. Full article
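The classification stage can be sketched directly with the XGBoost library; the three features and labels below are synthetic stand-ins for the SFSEA outputs and bearing classes.

```python
# Minimal sketch of the classification stage: an XGBoost classifier fed with
# three features per sample (synthetic placeholders for the SFSEA outputs).
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(11)
X = rng.normal(size=(300, 3))                  # three IPS-derived features per record
y = rng.integers(0, 3, size=300)               # 0 = healthy, 1 = type-1 defect, 2 = type-2 defect
X[y == 1] += 2.0                               # make the synthetic classes separable
X[y == 2] -= 2.0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = XGBClassifier(n_estimators=100, max_depth=3)
clf.fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```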
(This article belongs to the Special Issue Artificial Intelligence for Fault Diagnostics and Prognostics)
Show Figures

Figure 1: The distribution of maintenance costs in the chemical industry.
Figure 2: The important components of the centrifugal pump.
Figure 3: The flow chart of the SFSEA.
Figure 4: The photo of the experiment test rig.
Figure 5: Fault types simulated in the bearing.
Figure 6: The NL plots for the (a) normal bearing, (b) type 1 defect and (c) type 2 defect.
Figure 7: The ML plots for the defective bearing: (a) normal bearing, (b) type 1 defect and (c) type 2 defect.
Figure 8: The FL plots for the defective bearing: (a) normal bearing, (b) type 1 defect and (c) type 2 defect.
10 pages, 1993 KiB  
Communication
Hybrid Fiber-Optic Sensing Integrating Brillouin Optical Time-Domain Analysis and Fiber Bragg Grating for Long-Range Two-Parameter Measurement
by Shien-Kuei Liaw, Chi-Wen Liao, Meng-Hsuan Tsai, Dong-Chang Li, Shu-Ming Yang, Zhu-Yong Xia, Chien-Hung Yeh and Wen-Fung Liu
Sensors 2021, 21(12), 4224; https://doi.org/10.3390/s21124224 - 20 Jun 2021
Cited by 4 | Viewed by 2588
Abstract
Distributed fiber sensing (DFS) can provide real-time signals and warnings. The entire length of a fiber optic cable can act as a sensing element, but the accuracy is sometimes limited. On the other hand, point-to-point fiber sensing (PPFS) is usually implemented using one or more fiber Bragg gratings (FBGs) at specific positions along the fiber for monitoring specific parameters (temperature, strain, pressure, and so on). However, the cost becomes expensive when the number of FBGs increases. A hybrid fiber sensing scheme is thus proposed, combining the advantages of DFS and PPFS. It is based on a Brillouin optical time-domain analysis (BOTDA) fiber system with additional FBGs embedded at certain positions where specific parameters need to be detected. The hybrid fiber sensing system has the advantage of full sensing coverage at essential locations that need to be carefully monitored. In our work, the test results showed that the proposed system could achieve a sensing distance of 16 km over single-mode fiber with a 2 m spatial resolution. For the FBG parameter measurements using two FBGs, the temperature variation was 52 °C, from 25 °C to 77 °C, with a temperature sensitivity of 23 pm/°C, and the strain ranged from 0 to 400 µε, with a strain sensitivity of 0.975 pm/µε. Full article
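Using only the sensitivities quoted above, a wavelength shift read from either FBG can be converted back to a physical change; the shift values in the example are chosen to match the reported 52 °C and 400 µε ranges.

```python
# Back-of-envelope sketch using the sensitivities quoted in the abstract:
# converting a measured Bragg-wavelength shift into temperature or strain.
K_TEMP = 23e-3       # nm per degC (23 pm/degC, from the abstract)
K_STRAIN = 0.975e-3  # nm per microstrain (0.975 pm/ue, from the abstract)

def temp_change(d_lambda_nm):
    return d_lambda_nm / K_TEMP        # degC

def strain_change(d_lambda_nm):
    return d_lambda_nm / K_STRAIN      # microstrain

print(f"1.196 nm shift on the temperature FBG -> {temp_change(1.196):.1f} degC change")
print(f"0.390 nm shift on the strain FBG      -> {strain_change(0.390):.0f} microstrain")
```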
Figure 1. Hybrid system combining fiber grating and BOTDA system.
Figure 2. Analyses of Brillouin frequency versus fiber distance and optical output power using MATLAB software: (a) 3D graphic; (b) 2D graphic.
Figure 3. (a) Two FBGs located on both sides of the DSF; (b) before the strain and temperature changes were applied to the FBGs; (c) after the strain and temperature changes were applied to the FBGs.
Figure 4. The fiber Brillouin frequency changes due to temperature variation.
Figure 5. The optical intensity against fiber distance in BOTDA: (a) at 10.88 GHz and (b) at 10.57 GHz, addressing the reproducibility issue; (c) at 10.88 GHz and (d) at 10.57 GHz, addressing the repeatability issue.
27 pages, 4089 KiB  
Article
The Deep Learning Solutions on Lossless Compression Methods for Alleviating Data Load on IoT Nodes in Smart Cities
by Ammar Nasif, Zulaiha Ali Othman and Nor Samsiah Sani
Sensors 2021, 21(12), 4223; https://doi.org/10.3390/s21124223 - 20 Jun 2021
Cited by 28 | Viewed by 7723
Abstract
Networking is crucial for smart city projects nowadays, as it offers an environment where people and things are connected. This paper presents a chronology of the factors in the development of smart cities, including IoT technologies as network infrastructure. Increasing numbers of IoT nodes lead to increasing data flow, which is a potential source of failure for IoT networks. The biggest challenge for IoT networks is that IoT nodes may have insufficient memory to handle all the transaction data within the network. In this paper, we aim to propose a potential compression method for reducing IoT network data traffic. Therefore, we investigate various lossless compression algorithms, such as entropy- or dictionary-based algorithms, and general compression methods to determine which algorithm or method adheres to the IoT specifications. Furthermore, this study conducts compression experiments using entropy-based (Huffman, adaptive Huffman) and dictionary-based (LZ77, LZ78) algorithms on five different types of IoT data traffic datasets. Although all of the above algorithms can alleviate IoT data traffic, adaptive Huffman achieved the best compression. Therefore, we propose a conceptual compression method for IoT data traffic by improving adaptive Huffman using deep learning concepts, namely weights, pruning, and pooling in the neural network. The proposed algorithm is believed to obtain a better compression ratio. Additionally, we discuss the challenges of applying the proposed algorithm to IoT data compression, given the limitations of IoT memory and processing power, so that it can later be implemented in IoT networks. Full article
(This article belongs to the Special Issue AI for IoT)
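To make the compression-ratio comparison concrete, the sketch below estimates how a small, repetitive sensor-style payload shrinks under a static Huffman code versus zlib (an LZ77-based codec), as a rough proxy for the entropy-based versus dictionary-based comparison; the payload and the choice of codecs are simplifying assumptions, and the paper's adaptive-Huffman variant and deep-learning extensions are not reproduced here.

# Hypothetical sketch: compare a static Huffman code against zlib (LZ77-based)
# on a synthetic IoT-style payload. Not the paper's adaptive-Huffman method.
import heapq
import zlib
from collections import Counter

def huffman_code_lengths(data):
    """Return the Huffman code length (in bits) for each byte value in data."""
    freq = Counter(data)
    if len(freq) == 1:  # degenerate single-symbol case
        return {next(iter(freq)): 1}
    # Heap entries: (frequency, tie-breaker, {symbol: code length so far}).
    heap = [(f, i, {sym: 0}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        # Merging two subtrees adds one bit to every symbol in both of them.
        merged = {s: l + 1 for s, l in {**a, **b}.items()}
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

def huffman_compressed_bits(data):
    lengths = huffman_code_lengths(data)
    freq = Counter(data)
    return sum(freq[sym] * lengths[sym] for sym in freq)

if __name__ == "__main__":
    # Synthetic, repetitive sensor readings compress well under both schemes.
    payload = b"temp=21.5;hum=40;" * 64
    original_bits = len(payload) * 8
    huff_bits = huffman_compressed_bits(payload)       # ignores code-table overhead
    zlib_bits = len(zlib.compress(payload, 9)) * 8     # includes container overhead
    print(f"Huffman ratio: {original_bits / huff_bits:.2f}")
    print(f"zlib (LZ77-based) ratio: {original_bits / zlib_bits:.2f}")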
Figure 1. Chronology of factors on the development of smart cities.
Figure 2. The IoT network architecture.
Figure 3. Three memory types for the IoT.
Figure 4. Multi-sensor to one IoT node architecture.
Figure 5. Compression results and ratios for all datasets.