Sensors, Volume 23, Issue 4 (February-2 2023) – 618 articles

Cover Story: Monitoring the coastal environment is a crucial factor in ensuring its proper management. This paper presents a low-cost multiparametric probe that can be integrated into a wireless sensor network to send data to a marine observatory. The probe comprises physical sensors capable of measuring water temperature, salinity, and total suspended solids (TSS). The physical sensors for salinity and TSS are created and calibrated. The results indicate that temperature affects neither sensor and that salinity does not interfere with the TSS measurement, or vice versa. The calibration model obtained for salinity has a correlation coefficient of 0.9 and a mean absolute error (MAE) of 0.74 g/L; the model for TSS has a correlation coefficient of 0.99 and an MAE of 12 mg/L. The price of the proposed device is EUR 100. View this paper
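The MAE figures quoted for the calibration models follow the standard definition of mean absolute error. A minimal pure-Python illustration (the readings below are hypothetical, not data from the paper):

```python
def mean_absolute_error(predicted, actual):
    """MAE: mean of absolute differences between model output and reference."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

# Hypothetical salinity readings (g/L): probe model vs. laboratory reference.
model = [30.2, 31.0, 29.5, 33.1]
reference = [30.0, 31.5, 30.0, 32.5]
mae = mean_absolute_error(model, reference)
```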
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official version. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
17 pages, 25768 KiB  
Article
SwimmerNET: Underwater 2D Swimmer Pose Estimation Exploiting Fully Convolutional Neural Networks
by Nicola Giulietti, Alessia Caputo, Paolo Chiariotti and Paolo Castellini
Sensors 2023, 23(4), 2364; https://doi.org/10.3390/s23042364 - 20 Feb 2023
Cited by 12 | Viewed by 2918
Abstract
Professional swimming coaches make use of videos to evaluate their athletes’ performances. Specifically, the videos are manually analyzed in order to observe the movements of all parts of the swimmer’s body during the exercise and to give indications for improving swimming technique. This operation is time-consuming, laborious and error-prone. In recent years, alternative technologies have been introduced in the literature, but they still have severe limitations that make their correct and effective use impossible. In fact, the currently available techniques based on image analysis only apply to certain swimming styles; moreover, they are strongly influenced by disturbing elements (i.e., the presence of bubbles, splashes and reflections), resulting in poor measurement accuracy. The use of wearable sensors (accelerometers or photoplethysmographic sensors) or optical markers, although they can guarantee high reliability and accuracy, disturbs the performance of the athletes, who tend to dislike these solutions. In this work, we introduce SwimmerNET, a new marker-less 2D swimmer pose estimation approach based on the combined use of computer vision algorithms and fully convolutional neural networks. By using a single 8 Mpixel wide-angle camera, the proposed system is able to estimate the pose of a swimmer during exercise while guaranteeing adequate measurement accuracy. The method has been successfully tested on several athletes (i.e., different physical characteristics and different swimming techniques), obtaining an average error and a standard deviation (worst-case scenario for the dataset analyzed) of approximately 1 mm and 10 mm, respectively. Full article
(This article belongs to the Section Sensing and Imaging)
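As the figure captions below describe, each body-part target is localized as the centre of gravity of the binary mask produced by a semantic segmentation model. A minimal pure-Python sketch of that step (the mask here is a toy example, not the paper's data):

```python
def mask_centroid(mask):
    """Centre of gravity (row, col) of a binary segmentation mask; this pixel
    position is taken as the location of the detected body-part target."""
    rows = [r for r, line in enumerate(mask) for c, v in enumerate(line) if v]
    cols = [c for line in mask for c, v in enumerate(line) if v]
    if not rows:
        return None  # target not visible in this frame
    return sum(rows) / len(rows), sum(cols) / len(cols)

# A tiny 2x4 mask with four foreground pixels.
mask = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
]
cy, cx = mask_centroid(mask)
```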
Figure 1
<p>A single camera was fixed sideways underwater, and only the submerged athlete’s body is framed.</p>
Figure 2
<p>Skeleton that represents a model of a human body. The black dots represent the manually annotated targets.</p>
Figure 3
<p>Example of wrist trajectories. (<b>a</b>): Curves originally labeled by the models. Labels are highly mixed due to the symmetry of human body with respect to the sagittal plane. (<b>b</b>): Curves corrected by the proposed algorithm. In the example, the red curve represents the right wrist’s trajectory and the blue curve represents the left wrist’s trajectory.</p>
Figure 4
<p>The athlete is identified by the model and used to define a fixed-size ROI of 1024 × 576 pixel in order to obtain small images with the swimmer always in the center of the frame. If the identified ROI falls outside of the input image, the frame is discarded and the next one is used.</p>
Figure 5
<p>The binary mask identified by the semantic segmentation model for target 1 (i.e., the swimmer’s head) is superimposed on the input image. The position of the target, in pixel, is taken as the position of the center of gravity from the area identified by the model.</p>
Figure 6
<p>Through iterative application of the developed semantic segmentation models, a coordinate is assigned to each targeted body part that is visible within the frame.</p>
Figure 7
<p>Examples of the different videos used for testing the SwimmerNET method: one athlete swimming freestyle for training (<b>a</b>) and two new athletes in different pools for the test phase—specifically, a female athlete performing freestyle (<b>c</b>) and a male athlete performing dolphin (<b>b</b>) and backstroke (<b>d</b>). Finally, there are frames extrapolated from videos gathered from the public Sports Videos in the Wild (SVW) repository [<a href="#B27-sensors-23-02364" class="html-bibr">27</a>] showing a male athlete during a freestyle swimming session (<b>e</b>,<b>f</b>).</p>
Figure 8
<p>Examples of trajectories of the right parts of the body during a swimming stroke: a 1.90 m tall male athlete during freestyle (<b>a</b>) and a 1.80 m tall male athlete during dolphin style (<b>b</b>).</p>
Figure 9
<p>Percentage of targets not recognized by the proposed method divided by body part.</p>
Figure 10
<p>The presence of outliers in the target location increases the standard deviation of the error. Once eliminated, the standard deviation remains below 5 pixels for each target.</p>
Figure 11
<p>Mean error in locating body parts.</p>
Figure 12
<p>Standard deviation of mean error in locating body parts.</p>
22 pages, 2726 KiB  
Article
LP-MAB: Improving the Energy Efficiency of LoRaWAN Using a Reinforcement-Learning-Based Adaptive Configuration Algorithm
by Benyamin Teymuri, Reza Serati, Nikolaos Athanasios Anagnostopoulos and Mehdi Rasti
Sensors 2023, 23(4), 2363; https://doi.org/10.3390/s23042363 - 20 Feb 2023
Cited by 13 | Viewed by 2897
Abstract
In the Internet of Things (IoT), Low-Power Wide-Area Networks (LPWANs) are designed to provide low energy consumption while maintaining a long communication range for End Devices (EDs). LoRa is a communication protocol that can cover a wide range with low energy consumption. To evaluate the efficiency of the LoRa Wide-Area Network (LoRaWAN), three criteria can be considered, namely, the Packet Delivery Rate (PDR), Energy Consumption (EC), and coverage area. A set of transmission parameters has to be configured to establish a communication link. These parameters can affect the data rate, noise resistance, receiver sensitivity, and EC. The Adaptive Data Rate (ADR) algorithm is a mechanism for configuring the transmission parameters of EDs with the aim of improving the PDR. In this work, we introduce a new algorithm that uses the Multi-Armed Bandit (MAB) technique to configure the EDs’ transmission parameters in a centralized manner on the Network Server (NS) side, while also improving the EC. The performance of the proposed algorithm, the Low-Power Multi-Armed Bandit (LP-MAB), is evaluated through simulation and compared with other approaches in different scenarios. The simulation results indicate that LP-MAB outperforms other algorithms in terms of EC while maintaining a relatively high PDR in various circumstances. Full article
(This article belongs to the Special Issue Intelligent IoT and Wireless Communications)
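LP-MAB itself is a specific centralized bandit scheme described in the paper; purely as a rough illustration of the underlying multi-armed-bandit idea (not the paper's exact algorithm), an epsilon-greedy learner choosing among candidate transmission configurations might look like this, with the configurations and reward semantics hypothetical:

```python
import random

def select_action(counts, rewards, epsilon=0.1):
    """Epsilon-greedy choice over candidate transmission configurations.
    counts[i] / rewards[i]: number of tries and cumulative reward of config i."""
    if random.random() < epsilon or not any(counts):
        return random.randrange(len(counts))  # explore
    means = [r / c if c else 0.0 for r, c in zip(rewards, counts)]
    return means.index(max(means))            # exploit the best config so far

def update(counts, rewards, action, reward):
    counts[action] += 1
    rewards[action] += reward

# Three hypothetical (spreading factor, TX power) configurations; a reward of
# 1.0 could stand for "frame acknowledged at low energy cost".
counts, rewards = [0, 0, 0], [0.0, 0.0, 0.0]
update(counts, rewards, 1, 1.0)
best = select_action(counts, rewards, epsilon=0.0)  # greedy: picks config 1
```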
Figure 1
<p>Range of wireless protocols, according to [<a href="#B10-sensors-23-02363" class="html-bibr">10</a>] and our own knowledge and experience.</p>
Figure 2
<p>LoRaWAN network architecture.</p>
Figure 3
<p>The assumed working mode sequence for each ED, adopted from [<a href="#B18-sensors-23-02363" class="html-bibr">18</a>].</p>
Figure 4
<p>The initialization of LP-MAB for the <span class="html-italic">u</span>th ED.</p>
Figure 5
<p>Possible first round of the LP-MAB exploration phase for the <span class="html-italic">u</span>th ED.</p>
Figure 6
<p>Possible next rounds of the LP-MAB exploration phase for the <span class="html-italic">u</span>th ED.</p>
Figure 7
<p>Possible LP-MAB exploitation phase for the <span class="html-italic">u</span>th ED, with <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>V</mi> <mo>)</mo> </mrow> </semantics></math> representing the final round of the exploitation phase for this transmission period of the <span class="html-italic">u</span>th ED. In this extreme case used as an example, action <math display="inline"><semantics> <msubsup> <mi>a</mi> <mi>q</mi> <mi>u</mi> </msubsup> </semantics></math> has been selected to be performed in all rounds.</p>
Figure 8
<p>PDR &amp; EC versus different numbers of static EDs in Scenario 1.</p>
Figure 9
<p>PDR &amp; EC versus different values of channel saturation in Scenario 2.</p>
Figure 10
<p>PDR &amp; EC versus different numbers of mobile EDs in Scenario 3.</p>
Figure 11
<p>PDR &amp; EC versus different values for speed for mobile EDs in Scenario 4.</p>
Figure 12
<p>PDR &amp; EC of the LP-MAB scheme versus varying network sizes and different mobility speeds in Scenario 5.</p>
Figure 13
<p>PDR &amp; EC versus different numbers of simulation days in Scenario 6.</p>
Figure 14
<p>PDR &amp; EC versus different values for number of sending message per day in Scenario 7.</p>
Figure 15
<p>PDR &amp; EC versus different values for number of total actions in Scenario 8.</p>
19 pages, 568 KiB  
Review
Unsupervised Learning of Disentangled Representation via Auto-Encoding: A Survey
by Ikram Eddahmani, Chi-Hieu Pham, Thibault Napoléon, Isabelle Badoc, Jean-Rassaire Fouefack and Marwa El-Bouz
Sensors 2023, 23(4), 2362; https://doi.org/10.3390/s23042362 - 20 Feb 2023
Cited by 6 | Viewed by 3732
Abstract
In recent years, the rapid development of deep learning approaches has paved the way to explore the underlying factors that explain the data. In particular, several methods have been proposed to learn to identify and disentangle these underlying explanatory factors in order to improve the learning process and model generalization. However, extracting this representation with little or no supervision remains a key challenge in machine learning. In this paper, we provide a theoretical outlook on recent advances in the field of unsupervised representation learning with a focus on auto-encoding-based approaches and on the most well-known supervised disentanglement metrics. We cover the current state-of-the-art methods for learning disentangled representation in an unsupervised manner while pointing out the connection between each method and its added value on disentanglement. Further, we discuss how to quantify disentanglement and present an in-depth analysis of associated metrics. We conclude by carrying out a comparative evaluation of these metrics according to three criteria, (i) modularity, (ii) compactness and (iii) informativeness. Finally, we show that only the Mutual Information Gap score (MIG) meets all three criteria. Full article
(This article belongs to the Section Physical Sensors)
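The survey concludes that only the Mutual Information Gap (MIG) satisfies all three criteria. As a rough sketch of how MIG is computed for discrete data (real evaluations bin or sample continuous latents; the toy data here is invented), the score is the normalized gap between the two latent codes most informative about a generative factor:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in nats for two discrete sequences of equal length."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def entropy(xs):
    n = len(xs)
    return -sum((c / n) * math.log(c / n) for c in Counter(xs).values())

def mig(latents, factor):
    """Mutual Information Gap for a single generative factor: the gap between
    the two most informative latent codes, normalized by the factor's entropy."""
    mis = sorted((mutual_information(z, factor) for z in latents), reverse=True)
    return (mis[0] - mis[1]) / entropy(factor)

# Toy example: z0 encodes the factor perfectly, z1 carries no information,
# so the factor is captured by exactly one latent and MIG is maximal.
factor = [0, 1, 0, 1, 0, 1]
z0 = list(factor)
z1 = [0] * len(factor)
score = mig([z0, z1], factor)
```

A high MIG thus rewards representations where each factor is concentrated in a single latent dimension, which is why it captures both modularity and compactness.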
Figure 1
<p>An illustration of the notation used in this paper. For <math display="inline"><semantics> <mrow> <mi>X</mi> <mo>=</mo> <mfenced separators="" open="{" close="}"> <msub> <mi>x</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi>x</mi> <mn>2</mn> </msub> <mo>,</mo> <mo>…</mo> <mo>,</mo> <msub> <mi>x</mi> <mi>N</mi> </msub> </mfenced> </mrow> </semantics></math>, a set of <span class="html-italic">N</span> observations. Disentangled representation learning is expected to identify the distinct generative factors <math display="inline"><semantics> <mrow> <mi>V</mi> <mo>=</mo> <mfenced separators="" open="{" close="}"> <msub> <mi>ν</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi>ν</mi> <mn>2</mn> </msub> <mo>…</mo> <mo>,</mo> <msub> <mi>ν</mi> <mi>n</mi> </msub> </mfenced> </mrow> </semantics></math> that explain these observations and encode them with independent latent variables <math display="inline"><semantics> <mrow> <mi>Z</mi> <mo>=</mo> <mfenced separators="" open="{" close="}"> <msub> <mi>z</mi> <mn>1</mn> </msub> <mo>,</mo> <mo>…</mo> <mo>,</mo> <msub> <mi>z</mi> <mi>n</mi> </msub> </mfenced> </mrow> </semantics></math> in latent space.</p>
Figure 2
<p>The structure of the variational auto-encoder (VAE). The stochastic encoder <math display="inline"><semantics> <mrow> <msub> <mi>q</mi> <mi>ϕ</mi> </msub> <mrow> <mo>(</mo> <mi>z</mi> <mo>|</mo> <msub> <mi>x</mi> <mi>i</mi> </msub> <mo>)</mo> </mrow> </mrow> </semantics></math>, also called the inference model, learns stochastic mappings between an observed <span class="html-italic">X</span>-space (input data) and a latent <span class="html-italic">Z</span>-space (hidden representation). The generative model <math display="inline"><semantics> <mrow> <msub> <mi>p</mi> <mi>θ</mi> </msub> <mrow> <mo>(</mo> <mi>z</mi> <mo>|</mo> <msub> <mi>x</mi> <mi>i</mi> </msub> <mo>)</mo> </mrow> </mrow> </semantics></math>, a stochastic decoder, reconstructs the data given the hidden representation.</p>
Figure 3
<p>Schematic overview of different choices of augmenting the evidence lower bound of VAE. To improve disentanglement, most approaches focus on regularizing the original VAE objective by (i) up-weighting the ELBO with an adjustable hyperparameter <math display="inline"><semantics> <mi>β</mi> </semantics></math>, resulting in a <math display="inline"><semantics> <mi>β</mi> </semantics></math>-VAE approach. (ii) Adding different terms to the ELBO, such as mutual information, total correlation or covariance, resulting in InfoMax-VAE, <math display="inline"><semantics> <mi>β</mi> </semantics></math>-TCVAE, Factor-VAE and DIP-VAE approaches, respectively.</p>
20 pages, 7564 KiB  
Article
Design and Performance Verification of a Novel RCM Mechanism for a Minimally Invasive Surgical Robot
by Hu Shi, Zhixin Liang, Boyang Zhang and Haitao Wang
Sensors 2023, 23(4), 2361; https://doi.org/10.3390/s23042361 - 20 Feb 2023
Cited by 4 | Viewed by 2526
Abstract
Minimally invasive surgical robots have the advantages of high positioning accuracy, good stability, and flexible operation, which can effectively improve the quality of surgery and reduce the difficulty of operating for surgeons. However, to realize translation with existing remote-center-of-motion (RCM) mechanisms, it is often necessary to add a mobile unit, which tends to be bulky and to occupy much of the space above the patient’s body, thus interfering with the operation. In this paper, a new type of planar RCM mechanism is proposed. Based on this mechanism, a 3-DOF robotic arm is designed that can complete the motion required for surgery without an additional mobile unit. The geometric model of the mechanism is first introduced, and it is proven that the RCM point remains fixed throughout the motion. Then, based on the geometric model, a kinematic analysis of the mechanism is carried out: the singularities, the Jacobian matrix, and the kinematic performance are analyzed, and the workspace is verified from the kinematic equations. Finally, a prototype of the RCM mechanism was built, and its functionality was tested using a master–slave control strategy. Full article
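The singularity analysis mentioned in the abstract rests on the Jacobian determinant vanishing at certain configurations. As a generic textbook example (a planar two-revolute-joint arm, not the paper's specific mechanism, with hypothetical link lengths):

```python
import math

def jacobian_det_2r(l1, l2, q1, q2):
    """Determinant of the planar 2R-arm Jacobian, det(J) = l1*l2*sin(q2);
    a zero determinant marks a kinematic singularity. Note det(J) does not
    depend on q1 for this arm."""
    return l1 * l2 * math.sin(q2)

# The arm is singular when fully extended (q2 = 0), regardless of q1.
det_extended = jacobian_det_2r(0.2, 0.15, 0.5, 0.0)
det_bent = jacobian_det_2r(0.2, 0.15, 0.5, math.pi / 2)
```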
Figure 1
<p>Deduction of planar 2-DOF RCM mechanism.</p>
Figure 2
<p>A 3-DOF RCM mechanism.</p>
Figure 3
<p>Dimensioning for kinematic analysis.</p>
Figure 4
<p>Mechanism motion performance index. (<bold>a</bold>) <italic>η</italic> varies with <italic>c</italic>. (<bold>b</bold>) The distributions of 1/k when <italic>c</italic> = 230 mm.</p>
Figure 5
<p>D-H coordinate system of surgical robotic arm.</p>
Figure 6
<p>The 3D workspace and 3 plane projections.</p>
Figure 7
<p>Prototype.</p>
Figure 8
<p>Master manipulator Touch.</p>
Figure 9
<p>Master–slave coordinate system definition.</p>
Figure 10
<p>Master–slave control algorithm flow.</p>
Figure 11
<p>Tracking experiment process.</p>
Figure 12
<p>Master–slave trajectory. (<bold>a</bold>) Master manipulator trajectory. (<bold>b</bold>) End trajectory of the slave manipulator.</p>
Figure 13
<p>Trajectories of master and slave in the <italic>xyz</italic> directions. (<bold>a</bold>) Master x direction. (<bold>b</bold>) Slave z direction. (<bold>c</bold>) Master y direction. (<bold>d</bold>) Slave x direction. (<bold>e</bold>) Master z direction. (<bold>f</bold>) Slave y direction.</p>
Figure 14
<p>The experimental setup of the robotic arm gripping and handling.</p>
Figure 15
<p>The experimental process of gripping and handling.</p>
Figure 16
<p>Master–slave following trajectory. (<bold>a</bold>) The tracking effect at the end of the manipulator. (<bold>b</bold>) Enlargement of the tracking effect at the end of the manipulator.</p>
Figure 17
<p>Master–slave following error in gripping and handling experiments. (<bold>a</bold>) X direction. (<bold>b</bold>) Y direction. (<bold>c</bold>) Z direction. (<bold>d</bold>) Resultant.</p>
Figure 18
<p>Robotic arm joint tracking effect where zoom in takes place in red boxes. (<bold>a</bold>) Revolute joint tracking effect. (<bold>b</bold>) Enlargement of revolute joint tracking effect. (<bold>c</bold>) Mobile joint I tracking effect. (<bold>d</bold>) Enlargement of mobile joint I tracking effect. (<bold>e</bold>) Mobile joint II tracking effect. (<bold>f</bold>) Enlargement of mobile joint II tracking effect.</p>
15 pages, 4645 KiB  
Article
High Accuracy and Cost-Effective Fiber Optic Liquid Level Sensing System Based on Deep Neural Network
by Erfan Dejband, Yibeltal Chanie Manie, Yu-Jie Deng, Mekuanint Agegnehu Bitew, Tan-Hsu Tan and Peng-Chun Peng
Sensors 2023, 23(4), 2360; https://doi.org/10.3390/s23042360 - 20 Feb 2023
Cited by 6 | Viewed by 2910
Abstract
In this paper, a novel liquid level sensing system is proposed to enhance the capacity of the sensing system, as well as reduce the cost and increase the sensing accuracy. The proposed sensing system can monitor the liquid level of several points at the same time in the sensing unit. Additionally, for cost efficiency, the proposed system employs only one sensor at each spot and all the sensors are multiplexed. In multiplexed systems, when changing the liquid level inside the container, the float position is changed and leads to an overlap or cross-talk between two sensors. To solve this overlap problem and to accurately predict the liquid level of each container, we proposed a deep neural network (DNN) approach to properly identify the water level. The performance of the proposed DNN model is evaluated via two different scenarios and the result proves that the proposed DNN model can accurately predict the liquid level of each point. Furthermore, when comparing the DNN model with the conventional machine learning schemes, including random forest (RF) and support vector machines (SVM), the DNN model exhibits the best performance. Full article
(This article belongs to the Special Issue Novel Optoelectronic Sensors)
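The paper's DNN handles the overlapping-spectra case; for a single unobstructed sensor, the figure captions below describe a linear shift of the reflected peak wavelength with water level, which amounts to an ordinary linear calibration. A sketch with hypothetical calibration points (the wavelengths and levels below are invented, not the paper's data):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b, in closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical FBG calibration points: reflected peak wavelength (nm)
# against water level (cm), assuming a linear wavelength shift.
wavelengths = [1550.00, 1550.10, 1550.20, 1550.30]
levels = [0.5, 1.0, 1.5, 2.0]
a, b = fit_line(wavelengths, levels)
predicted_level = a * 1550.15 + b  # level for a newly observed peak
```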
Figure 1
<p>Scheme diagram of the proposed multiple-point liquid level sensing system, which consists of four main blocks: (i) central office, (ii) sensing unit which shows the IDWM liquid level sensing system, (iii) preprocessing unit, and (iv) deep neural network structure.</p>
Figure 2
<p>(<b>a</b>) The scheme of water level sensor structure that an FBG sensor connected to the indicated float. The water level can be controlled by the controlling gate. (<b>b</b>) The power versus wavelength of the reflected spectra of 10 different water level steps, and (<b>c</b>) the linear shift in wavelength when the water level change from 9.5 cm to 0.5 cm with the steps of 0.5 cm.</p>
Figure 3
<p>The DNN model performance in terms of loss and accuracy for training and validation dataset.</p>
Figure 4
<p>(<b>a</b>) The experimental result of the reflected spectrum of two water level sensors in five different water level steps of the first scenario. (<b>b</b>) The unmeasurable overlapping gap due to the overlap of two sensor spectra in the first scenario.</p>
Figure 5
<p>(<b>a</b>) The experimental result of the reflected spectrum of two water level sensors in 5 different water level steps of the second scenario. (<b>b</b>) The unmeasurable overlapping gap due to the overlap of two sensor spectra in the second scenario.</p>
Figure 6
<p>The predicted water level versus the actual water level for the (<b>a</b>) first scenario and (<b>b</b>) second scenario.</p>
Figure 7
<p>The water level prediction error versus the water level for the (<b>a</b>) first scenario and (<b>b</b>) second scenario.</p>
Figure 8
<p>Depiction of the mean absolute error (MAE) of experimental results for the support vector machine (SVM), random forest (RF), and deep neural network (DNN) algorithm.</p>
Figure 9
<p>(<b>a</b>) The simulation results of reflected spectra of five water level sensors in five different water level steps, and (<b>b</b>) the predicted water level versus the actual water level of five sensors.</p>
10 pages, 2883 KiB  
Communication
Highly Elastically Deformable Coiled CNT/Polymer Fibers for Wearable Strain Sensors and Stretchable Supercapacitors
by Jin Hyeong Choi, Jun Ho Noh and Changsoon Choi
Sensors 2023, 23(4), 2359; https://doi.org/10.3390/s23042359 - 20 Feb 2023
Cited by 7 | Viewed by 2369
Abstract
Stretchable yarn/fiber electronics with conductive features are optimal components for various wearable devices. This paper presents the construction of coil-structured carbon nanotube (CNT)/polymer fibers with adjustable piezoresistivity. The composite unit fiber is prepared by wrapping a conductive CNT sheath onto an elastic spandex core. Owing to the helical coil structure, the resultant CNT/polymer composite fibers are highly stretchable (up to approximately 300%) without a noticeable electrical breakdown. More specifically, based on the difference in the coil index (the ratio of the coil diameter to the diameter of the fiber within the coil) according to the polymeric core fiber (spandex or nylon), the composite fiber can be used for two different applications (i.e., as strain sensors or supercapacitors), both of which are presented in this paper. The coiled CNT/spandex composite fiber sensor responds sensitively to tensile strain. The coiled CNT/nylon composite fiber can be employed as an elastic supercapacitor with excellent capacitance retention at 300% strain. Full article
(This article belongs to the Special Issue The State-of-the-Art of Smart Materials Sensors and Actuators)
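The piezoresistive sensitivity referred to as GF in the figure captions below is the gauge factor: the relative resistance change per unit applied strain. A minimal sketch (the resistance values are hypothetical, not measurements from the paper):

```python
def gauge_factor(r0, r, strain):
    """Gauge factor GF = (delta_R / R0) / strain: relative resistance
    change per unit applied tensile strain."""
    return ((r - r0) / r0) / strain

# Hypothetical readings: resistance rises from 1000 ohm to 1500 ohm
# at 100% tensile strain (strain = 1.0).
gf = gauge_factor(r0=1000.0, r=1500.0, strain=1.0)
```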
Figure 1
<p>Morphology and electron transfer mechanism for the coiled carbon nanotube (CNT)/polymer fiber (<b>a</b>) Photograph showing Vorticella before (<b>left</b>) and after body stretching (<b>right</b>) via the loop opening of the coil-structured tail. (<b>b</b>) Schematic of biomimetic strategy for the fabrication of coil-structured CNT/polymer fibers. (1 and 2) The CNT sheath is being wrapped around the core fiber; (3) the fiber is twisted many times to coil the fiber. The electron transfer mechanisms in the coil fiber-based sensor are compared: (<b>c</b>) (initial state) direct electron transfer through the closed loops, and (<b>d</b>) (stretched state) indirect electron transfer along the coil helix through the opened loops. (<b>e</b>) Photograph shows the bending of coiled CNT/polymer fiber sensor with tweezers. (<b>f</b>) Photographs of coiled CNT/polymer fiber sensor (initial length: 1 cm) before (<b>above</b>) and after (<b>below</b>) 300% strain application (final length: 4 cm).</p>
Figure 2
<p>Electromechanical response and coil loop opening characteristics (<b>a</b>) Schematic (<b>above</b>) and photograph (<b>below</b>) of the definition of the coil index, which can be calculated by dividing the average diameter of the coil fiber (<span class="html-italic">D</span>) by the diameter of the precursor fiber (<span class="html-italic">d</span>). (<b>b</b>) Scanning electron microscopy (SEM) images of coiled CNT/polymer fiber sensors with low (<b>above</b>: scale bar = 200 µm) and high indexes (<b>below</b>: scale bar = 300 µm). Resistance changes versus strain of coiled CNT/polymer fiber with (<b>c</b>) high (2.9) and (<b>d</b>) low index (1.9). The slope of the blue lines represents the piezoresistive sensitivity (i.e., GF). Photographs and magnified SEM images show progressive coil loop opening at (<b>e</b>) 0% and 100%, and surface buckles unfolding at (<b>f</b>) 200% and 300% strains. (<b>g</b>) Photograph of three coiled CNT/polymer fiber sensors (length: 1.2 cm). The table in (<b>h</b>) lists the performance characteristics of the three fiber sensors.</p>
Figure 3
<p>Demonstration of torque stability and weavability of the coiled fiber sensors for wearable applications. (<b>a</b>) Schematic shows SEBS package-coated coiled CNT/polymer fiber sensor, and (<b>b</b>) its resistance with respect to tensile strain. The inset of (<b>b</b>) shows the resistance change ratio during 1000 repeated loading/unloading with 300% strain. Photographs show the SEBS package-coated, coiled CNT/spandex fiber sensor at different strain values: (<b>c</b>) 0%, (<b>d</b>) 100%, (<b>e</b>) 200%, and (<b>f</b>) 300%. The photographs in (<b>g</b>) present seven coiled CNT/spandex fiber sensors woven into the wristband part of a mock ribbed glove to replace the mock rib fibers; (<b>h</b>) shows the magnified image. Photographs present finger movements (<b>i</b>) before and (<b>j</b>) after bending and (<b>k</b>) corresponding resistance change versus time at different finger bending angles ((i) 25°; (ii) 50°; (iii) 75°; and (iv) 100°).</p>
Figure 4
<p>Electrochemical double layer (EDL) charge storage performance of coiled CNT/polymer fiber and capacitance retention during fiber stretching. (<b>a</b>) Schematic of the three-electrode system for measurement of the capacitance of coiled CNT/polymer fiber sensor (i.e., the working electrode); Ag/AgCl and Pt mesh were used as reference and counter electrodes in 0.1M Na<sub>2</sub>SO<sub>4</sub> electrolyte, respectively. (<b>b</b>) Measured cyclic voltammogram (CV) curve (at 300 mV/s) of the three-electrode system. (<b>c</b>) CV curves of coiled CNT/polymer fiber supercapacitor measured at 10–100 mV s<sup>−1</sup>. (<b>d</b>) Galvanostatic charge–discharge curves measured from 12.5 µA cm<sup>−2</sup> to 50.0 µA cm<sup>−2</sup> current densities. (<b>e</b>) Calculated linear and areal-specific capacitance as a function of voltage scan rate. (<b>f</b>) Nyquist curve for the frequency range from 0.2 to 100 kHz (the inset shows the high-frequency region). The CV curves of the undeformed coiled CNT/polymer fiber are compared with the CV curves measured at a (<b>g</b>) statically and (<b>h</b>) dynamically applied strain (up to 300%).</p>
14 pages, 2008 KiB  
Article
An IoT Enable Anomaly Detection System for Smart City Surveillance
by Muhammad Islam, Abdulsalam S. Dukyil, Saleh Alyahya and Shabana Habib
Sensors 2023, 23(4), 2358; https://doi.org/10.3390/s23042358 - 20 Feb 2023
Cited by 13 | Viewed by 4022
Abstract
Since the advent of visual sensors, smart cities have generated massive amounts of surveillance video data, which can be intelligently inspected to detect anomalies. Computer vision-based automated anomaly detection replaces traditional video surveillance, which relies on tedious and inaccurate human inspection. However, due to the diverse nature and complexity of anomalous events, detecting them automatically in real-world scenarios is very challenging. Using the Artificial Intelligence of Things (AIoT), this research work presents an efficient and robust framework for detecting anomalies in large volumes of surveillance video. A hybrid model integrating a 2D-CNN and an ESN is proposed for smart surveillance, an important application of AIoT. The CNN extracts features from the input videos; these features are refined by an autoencoder and then passed to the ESN for sequence learning and anomalous-event detection. The proposed model is lightweight and is implemented on edge devices to ensure its applicability in AIoT environments within a smart city. Compared with other methods on challenging surveillance datasets, the proposed model significantly enhances performance. Full article
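The ESN stage consumes the refined per-frame features as a sequence. As a generic illustration of an echo-state-network state update (not the paper's trained model; reservoir sizes, weights, and feature values here are random/hypothetical):

```python
import math
import random

def esn_step(x, u, w_res, w_in, leak=0.5):
    """One leaky echo-state update: x' = (1-a)*x + a*tanh(W_in u + W_res x)."""
    pre = [sum(w_in[i][j] * u[j] for j in range(len(u))) +
           sum(w_res[i][j] * x[j] for j in range(len(x)))
           for i in range(len(x))]
    return [(1 - leak) * x[i] + leak * math.tanh(pre[i]) for i in range(len(x))]

random.seed(0)
n_res, n_in = 4, 3  # reservoir size and per-frame feature size (hypothetical)
w_in = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_res)]
w_res = [[random.uniform(-0.3, 0.3) for _ in range(n_res)] for _ in range(n_res)]

state = [0.0] * n_res
for features in ([0.1, 0.2, 0.3], [0.0, 0.1, 0.0]):  # refined per-frame features
    state = esn_step(state, features, w_res, w_in)
```

Only a readout layer on top of such reservoir states is trained in a typical ESN, which keeps the sequence-learning stage lightweight for edge deployment.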
Show Figures

Figure 1
<p>General pipeline of anomaly detection.</p>
Figure 2">
Figure 2
<p>The proposed framework for anomaly detection.</p>
Figure 3">
Figure 3
<p>Internal architecture of autoencoder.</p>
Figure 4">
Figure 4
<p>Performance comparison of the proposed model with ablation study methods.</p>
">
14 pages, 21652 KiB  
Article
H-Shaped Radial Phononic Crystal for High-Quality Factor on Lamb Wave Resonators
by Weitao He, Lixia Li, Zhixue Tong, Haixia Liu, Qian Yang and Tianhang Gao
Sensors 2023, 23(4), 2357; https://doi.org/10.3390/s23042357 - 20 Feb 2023
Cited by 3 | Viewed by 1594
Abstract
In this paper, a novel H-shaped radial phononic crystal (H-RPC) structure with ultra-high-frequency (UHF) and ultra-wideband gap characteristics is proposed to suppress the anchor loss of a Lamb wave resonator (LWR). Compared to previously studied phononic crystal (PC) structures aimed at suppressing anchor loss, the radial phononic crystal (RPC) structure is better suited to suppressing the anchor loss of the LWR. Using the finite element method, analysis of the complex energy bands and frequency response shows that elastic waves propagating in the H-RPC structure exhibit an ultra-wideband gap with a relative bandwidth of up to 80.2% in the UHF range. Furthermore, the influence of the geometric parameters on the ultra-wideband gap is analyzed. The H-RPC structure is then introduced into the LWR. Analysis of the resonant frequency shows that an LWR formed with the H-RPC structure can effectively reduce the vibration energy radiated through the anchor: the anchor quality factor increased by 505,560.4% compared with the conventional LWR. In addition, analysis of the LWR under load shows that the LWR with the H-RPC structure increases the loaded quality factor by 249.9% and reduces the insertion loss by 93.1%, while the electromechanical coupling coefficient is less affected. Full article
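The relative bandwidth of 80.2% quoted above is the usual fractional measure of a band gap: gap width divided by gap center frequency. A minimal helper; the band edges used in the check are hypothetical, not values from the paper:

```python
def relative_bandwidth(f_low, f_high):
    """Fractional band-gap width in percent: (f_high - f_low) / f_center * 100."""
    f_center = (f_low + f_high) / 2.0
    return (f_high - f_low) / f_center * 100.0
```

For example, a hypothetical gap from 1.0 to 1.5 GHz has a center of 1.25 GHz and thus a relative bandwidth of 40%.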
(This article belongs to the Special Issue High-Performance MEMS Sensors)
Show Figures

Figure 1
<p>(<b>a</b>) The cross section of H-RPC unit cell; (<b>b</b>) The formation of RPC; (<b>c</b>) The three-dimensional model formed by the rotation of 4-cycle H-RPC structure 180°.</p>
Figure 2">
Figure 2
<p>Complex energy band curve of the H-RPC structure. The real wave vector energy band curve in (Г-R) direction is represented by solid lines of different colors on the right side, and the imaginary wave vector energy band curve in (Г-R) direction is represented by red dotted lines on the left side.</p>
Figure 3">
Figure 3
<p>Frequency response model. (<b>a</b>) is a traditional model; (<b>b</b>) is a 4-period H-RPC model.</p>
Figure 4">
Figure 4
<p>H-RPC frequency response curve.</p>
Figure 5">
Figure 5
<p>The influence of geometric parameters on the ultra-wideband gap. (<b>a</b>) the influence of the geometric parameter ratios (<i>h</i><sub>1</sub>/<i>h</i>, <i>a</i><sub>1</sub>/<i>a</i>) on bandwidth; (<b>b</b>) the influence of the geometric parameter ratios (<i>h</i><sub>1</sub>/<i>h</i>, <i>a</i><sub>1</sub>/<i>a</i>) on center frequency.</p>
Figure 6">
Figure 6
<p>LWR model. (<b>a</b>) conventional simplified 1/4 model; (<b>b</b>) LWR 1/4 simplified model after adding three cycles of H-RPC.</p>
Figure 7">
Figure 7
<p>Finite element simulation of resonance modes. (<b>a</b>) Conventional LWR 1/4 mode; (<b>b</b>) LWR 1/4 mode with three cycles of H-RPC and R = 0. <i>f</i><sub>r</sub> is the resonant frequency and <i>Q</i><sub>anc</sub> is the anchor quality factor.</p>
Figure 8">
Figure 8
<p>(<b>a</b>) The vibration mode of the resonator in the resonant mode; (<b>b</b>) the Z-direction displacement diagram along A-A′.</p>
Figure 9">
Figure 9
<p>(<b>a</b>) admittance curve Y11; (<b>b</b>) insertion loss curve.</p>
">
47 pages, 2662 KiB  
Review
A Survey on 5G Coverage Improvement Techniques: Issues and Future Challenges
by Chilakala Sudhamani, Mardeni Roslee, Jun Jiat Tiang and Aziz Ur Rehman
Sensors 2023, 23(4), 2356; https://doi.org/10.3390/s23042356 - 20 Feb 2023
Cited by 26 | Viewed by 9273
Abstract
Fifth generation (5G) is a recent wireless communication technology for mobile networks. Its key parameters are enhanced coverage, ultra-reliable low latency, high data rates, massive connectivity, and better support for mobility. Enhanced coverage is one of the major issues in 5G and beyond-5G networks, affecting overall system performance and the end-user experience. Increasing the number of base stations may extend coverage, but it leads to interference between cell-edge users, which in turn degrades coverage. Therefore, enhanced coverage is one of the challenging issues for future cellular networks. In this survey, coverage enhancement techniques are explored to improve overall system performance, throughput, coverage capacity, spectral efficiency, outage probability, data rates, and latency. The main aim of this article is to highlight the recent developments and deployments made toward enhanced network coverage and to discuss future research challenges. Full article
(This article belongs to the Section Communications)
Show Figures

Figure 1
<p>Structure of the article.</p>
Figure 2">
Figure 2
<p>(<b>a</b>) Single cell, (<b>b</b>) cell structure.</p>
Figure 3">
Figure 3
<p>5G coverage enhancement techniques.</p>
Figure 4">
Figure 4
<p>Small cell techniques.</p>
Figure 5">
Figure 5
<p>(<b>a</b>) The mmWave signal is blocked by obstacles. (<b>b</b>) Small cells used to avoid the multipath fading.</p>
Figure 6">
Figure 6
<p>Primary carrier components and secondary carrier components in a network.</p>
Figure 7">
Figure 7
<p>Types of carrier aggregations.</p>
Figure 8">
Figure 8
<p>Illustration of DR-OC communication.</p>
Figure 9">
Figure 9
<p>Illustration of DC-OC communication.</p>
Figure 10">
Figure 10
<p>Illustration of DR-DC communication.</p>
Figure 11">
Figure 11
<p>Illustration of DC-DC communication.</p>
Figure 12">
Figure 12
<p>NOMA with a SIC receiver.</p>
Figure 13">
Figure 13
<p>Massive MIMO with UL and DL.</p>
Figure 14">
Figure 14
<p>Statistical analysis of 5G key parameters.</p>
">
17 pages, 6534 KiB  
Article
A Fiber-Optic Non-Invasive Swallowing Assessment Device Based on a Wearable Pressure Sensor
by Masanori Maeda, Miyuki Kadokura, Ryoko Aoki, Noriko Komatsu, Masaru Kawakami, Yuya Koyama, Kazuhiro Watanabe and Michiko Nishiyama
Sensors 2023, 23(4), 2355; https://doi.org/10.3390/s23042355 - 20 Feb 2023
Cited by 9 | Viewed by 1996
Abstract
We developed a wearable swallowing assessment device that uses a hetero-core fiber-optic pressure sensor to detect laryngeal movement during swallowing. The proposed pressure sensor, comfortably attached to the skin of the neck, demonstrated a high sensitivity of 0.592 dB/kPa and a linearity of R2 = 0.995 within a 14 kPa pressure band, a range suitable for detecting laryngeal movement. In addition, since the fabricated hetero-core fiber-optic pressure sensor maintains appreciable sensitivity across its whole surface, the proposed wearable swallowing assessment device can accurately track the subtle pressure changes induced by laryngeal movements during swallowing. Sixteen male subjects and one female subject, in age groups ranging from 30 to 60 years old, were evaluated. For all subjects, characteristic swallowing waveforms, with two valleys arising from the upward, forward, backward, and downward laryngeal displacements, were acquired using the proposed device. Since the time of the first valley in the acquired waveform reflects the effect of aging, significant differences in swallowing function among the different age groups were ultimately determined based on that time. Additionally, with p-values consistently less than 0.05, swallowing times were found to exhibit statistically significant differences between groups. Full article
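Sensitivity (0.592 dB/kPa) and linearity (R2 = 0.995) figures of this kind come from a least-squares line fit of sensor output against applied pressure. A stdlib sketch of such a calibration fit; the data used in the check are synthetic, not the paper's measurements:

```python
def linear_fit_r2(x, y):
    """Least-squares slope and intercept plus the coefficient of determination R^2."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot if ss_tot else 1.0
    return slope, intercept, r2
```

The fitted slope is the sensitivity (here, dB per kPa) and R2 quantifies linearity over the fitted pressure band.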
(This article belongs to the Special Issue Optical Fibre Sensing Technology in Biomedical Applications)
Show Figures

Figure 1
<p>Hetero-core fiber-optic bending sensor: (<b>a</b>) sensor structure; (<b>b</b>) bending curvature characteristics in the optical loss change.</p>
Figure 2">
Figure 2
<p>Pressure sensor using hetero-core fiber optics based on conversion mechanism from the pressure to the bending on the hetero-core portion: (<b>a</b>) structure of the pressure sensor using hetero-core fiber optics; (<b>b</b>) the hetero-core fiber-optic pressure sensor; (<b>c</b>) cross-section of the pressure sensor before and after pressurization.</p>
Figure 2 Cont.">
Figure 3">
Figure 3
<p>Wearable swallowing test device: (<b>a</b>) photograph of the wearable swallowing assessment device; (<b>b</b>) appearance when a user attaches the wearable swallowing assessment device.</p>
Figure 4">
Figure 4
<p>Experimental setup: (<b>a</b>) for measuring the pressure characteristics of the pressure sensor; (<b>b</b>) for evaluating the sensitivity characteristic with respect to the axial displacement of the applied pressure from the center to the edges of the pressure sensor’s surface; (<b>c</b>) for measuring the dynamic characteristics of the pressure sensor; (<b>d</b>) for evaluating the response of the proposed wearable swallowing assessment device to laryngeal movement.</p>
Figure 5">
Figure 5
<p>Characteristics of the hetero-core fiber-optic pressure sensor: (<b>a</b>) without the puff and the supporting part; (<b>b</b>) with the puff; (<b>c</b>) with the puff and the supporting part pressurized on the center; (<b>d</b>) with a puff and a supporting part pressurized on edge.</p>
Figure 5 Cont.">
Figure 6">
Figure 6
<p>The sensitivity characteristic of the hetero-core fiber-optic pressure sensor with the puff and the supporting part to the axial displacement of the applied pressure from the center to the edges of the pressure sensor’s surface.</p>
Figure 7">
Figure 7
<p>Dynamic characteristics of the hetero-core fiber-optic pressure sensor with the puff applied to sinusoidally time-varying pressure at frequency: (<b>a</b>) 1, (<b>b</b>) 2, and (<b>c</b>) 3 Hz.</p>
Figure 7 Cont.">
Figure 8">
Figure 8
<p>Responses of the wearable swallowing assessment device and EMG during swallowing.</p>
Figure 9">
Figure 9
<p>Response by laryngeal movement during swallowing.</p>
Figure 10">
Figure 10
<p>Sensor response with laryngeal movements for different age groups: (<b>a</b>) optical loss values for males in their 30s; (<b>b</b>) differential values of males in their 30s; (<b>c</b>) optical loss values for males in their 60s; (<b>d</b>) differential values of males in their 60s; (<b>e</b>) optical loss values of female subjects; (<b>f</b>) differential values of female subjects.</p>
Figure 10 Cont.">
Figure 11">
Figure 11
<p>Change in swallowing times versus age in male subjects.</p>
Figure 12">
Figure 12
<p>Significant differences in swallowing times between male subjects: (<b>a</b>) 30s; (<b>b</b>) 40s; (<b>c</b>) 50s; (<b>d</b>) 60s.</p>
Figure 12 Cont.">
">
13 pages, 1456 KiB  
Article
Validity and Reliability of Kinvent Plates for Assessing Single Leg Static and Dynamic Balance in the Field
by Hugo Meras Serrano, Denis Mottet and Kevin Caillaud
Sensors 2023, 23(4), 2354; https://doi.org/10.3390/s23042354 - 20 Feb 2023
Cited by 6 | Viewed by 2827
Abstract
The objective of this study was to validate PLATES for assessing unipodal balance in the field, for example, to monitor ankle instabilities in athletes or patients. PLATES is a pair of lightweight, connected force platforms that measure only vertical forces. In 14 healthy women, we measured ground reaction forces during Single Leg Balance and Single Leg Landing tests, first under laboratory conditions (with PLATES and with a 6-DOF reference force platform), then during a second test session in the field (with PLATES). We found that for these simple unipodal balance tests, PLATES was reliable in the laboratory and in the field: PLATES gives results comparable with those of a reference force platform with 6-DOF for the key variables in the tests (i.e., Mean Velocity of the Center of Pressure and Time to Stabilization). We conclude that health professionals, physical trainers, and researchers can use PLATES to conduct Single Leg Balance and Single Leg Landing tests in the laboratory and in the field. Full article
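Mean Velocity of the Center of Pressure (MVcop), one of the two key variables above, is conventionally computed as the total sway-path length of the COP divided by trial duration. A minimal implementation under that standard definition (not necessarily the authors' exact processing; the data in the check are synthetic):

```python
import math

def mean_cop_velocity(xs, ys, fs):
    """Mean COP velocity (mm/s): total sway-path length / trial duration.

    xs, ys are center-of-pressure coordinates in mm sampled at fs Hz.
    """
    # Sum the Euclidean distances between consecutive COP samples.
    path = sum(math.hypot(x1 - x0, y1 - y0)
               for x0, x1, y0, y1 in zip(xs, xs[1:], ys, ys[1:]))
    duration = (len(xs) - 1) / fs
    return path / duration
```

Time to Stabilization (TTS), the other key variable, is instead derived from the vertical force trace after landing, so it needs the force signal rather than the COP path.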
(This article belongs to the Special Issue Advances in Biomedical Sensing, Instrumentation and Systems)
Show Figures

Figure 1
<p>MVcop during the Single Leg Balance test. Upper panel: AMTI measures in the laboratory. Middle panel: PLATES measures in the laboratory. Lower panel: PLATES measures in the field. In each panel, the boxplot represents MVcop in the 4 experimental conditions: standing on the left/right leg with the eyes open/closed. Each blue circle represents the average of the 3 repetitions performed by each participant (<span class="html-italic">n</span> = 14). Paired Wilcoxon test, <span class="html-italic">ns</span>: no significant difference, <span class="html-italic">*** p</span> &lt; 0.001, <span class="html-italic">**** p</span> &lt; 0.0001. The figures replicate the classical results that stability is lower with the eyes closed for both legs.</p>
Figure 2">
Figure 2
<p>Comparison of MVcop and TTS obtained with the PLATES vs. AMTI in the Laboratory. Upper row: MVcop for the Single Leg Balance test (mm/s). Lower row: TTS for the Single Leg Landing test (s). Left column: Bland and Altman plot. The central horizontal line indicates the mean of the differences (systematic bias), which is 0 for perfect agreement. The horizontal lines above and below represent the 95% limits of agreement. Right column: Linear regression. Data from all conditions are reported (i.e., 14 participants × 4 conditions for SLB, 14 participants × 2 conditions for SLL). Each dot corresponds to the average over the 3 repetitions in each condition. The figures show that MVcop and TTS are reliably assessed in the laboratory with PLATES, although with a small underestimation (negative bias in Bland and Altman) that is proportional to the value (slope lower than 1 in the regression).</p>
Figure 3">
Figure 3
<p>Comparison of MVcop and TTS obtained in the Laboratory vs. in the Field with the PLATES. Upper row: MVcop for the Single Leg Balance test (mm/s). Lower row: TTS for the Single Leg Landing test (s). Left column: Bland and Altman plot. The central horizontal line indicates the mean of the differences (systematic bias), which is 0 for perfect agreement. The horizontal lines above and below represent the 95% limits of agreement. Right column: Linear regression. Data from all conditions are reported (i.e., 14 participants × 4 conditions for SLB, 14 participants × 2 conditions for SLL). Each dot corresponds to the average over the 3 repetitions in each condition. The figures show that MVcop and TTS are strongly correlated between PLATES in laboratory and field settings. Indeed, negligible biases are observed in Bland and Altman diagrams that are proportional to the values (slope near 1 in the regression).</p>
Figure 4">
Figure 4
<p>Comparison of MVcop and TTS obtained with PLATES-Field vs. AMTI-Laboratory. Upper row: MVcop for the Single Leg Balance test (mm/s). Lower row: TTS for the Single Leg Landing test (s). Left column: Bland and Altman plot. The central horizontal line indicates the mean of the differences (systematic bias), which is 0 for perfect agreement. The horizontal lines above and below represent the 95% limits of agreement. Right column: Linear regression. Data from all conditions are reported (i.e., 14 participants × 4 conditions for SLB, 14 participants × 2 conditions for SLL). Each dot corresponds to the average over the 3 repetitions in each condition. The figures show that MVcop and TTS are reliably assessed in the field with PLATES, although with a small underestimation (negative bias in Bland and Altman) that is proportional to the value (slope lower than 1 in the regression).</p>
">
22 pages, 5162 KiB  
Article
Intrusion Detection System for IoT: Analysis of PSD Robustness
by Lamoussa Sanogo, Eric Alata, Alexandru Takacs and Daniela Dragomirescu
Sensors 2023, 23(4), 2353; https://doi.org/10.3390/s23042353 - 20 Feb 2023
Cited by 1 | Viewed by 1396
Abstract
The security of Internet of Things (IoT) devices remains a major concern. These devices are highly vulnerable because their limited memory, computing power, and available energy make it impossible to implement traditional security mechanisms. Consequently, researchers are looking for new security mechanisms adapted to these devices and the networks they belong to. One of the most promising new approaches is fingerprinting, which aims to identify a given device by associating it with a unique signature built from its intrinsic characteristics, i.e., the inherent imperfections introduced by the manufacturing processes of its hardware. According to state-of-the-art studies, however, the main challenge fingerprinting faces is the lack of relevance of the features extracted from hardware imperfections. Since these hardware imperfections can be reflected in the RF signal of a wirelessly communicating device, this study investigates whether or not the power spectral density (PSD) of a device's RF signal can be a relevant fingerprinting feature, given that a relevant feature should remain stable regardless of environmental conditions, over time, and under the influence of any other parameter. Through experiments, we identify the limits and possibilities of the PSD as a fingerprinting feature. Full article
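A PSD comparison of this kind reduces to a per-frequency distance between amplitude-normalized spectra, d(f) = |PSD1*(f) − PSD2*(f)|. A sketch in which peak normalization stands in for the paper's normalization, which this summary does not spell out:

```python
def normalize_psd(psd):
    """Scale a PSD so its peak equals 1 (peak normalization is an assumption)."""
    peak = max(psd)
    return [v / peak for v in psd]

def psd_distance(psd_a, psd_b):
    """Per-bin distance d(f) = |PSD_a*(f) - PSD_b*(f)| between normalized PSDs."""
    a = normalize_psd(psd_a)
    b = normalize_psd(psd_b)
    return [abs(x - y) for x, y in zip(a, b)]
```

Restricting the comparison to the 2 MHz BLE channel band, as the authors do, amounts to slicing the PSD arrays to those frequency bins before computing the distance.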
(This article belongs to the Special Issue Cybersecurity in the Internet of Things)
Show Figures

Figure 1
<p>Comparison of different PSDs before and after normalization. Parameter <i>d</i> is frequency-dependent and is given by <i>d</i> = |PSD<sub>1</sub><sup>*</sup>(<i>f</i>) − PSD<sub>2</sub><sup>*</sup>(<i>f</i>)|, where * denotes the normalized amplitude. In this figure, <i>d</i> is shown at <i>f</i> = 2.4255 GHz as an example.</p>
Figure 2">
Figure 2
<p>The BLE devices used in our experiments. These devices were designed in the context of the same project; they are different versions of the same product whose hardware and software architectures have evolved over time. Thus, v1 (the board at the left end) refers to the first ever version; v2 (the two boards in the middle) and v3 (the two boards at the right end) refer to the second and third versions, respectively.</p>
Figure 3">
Figure 3
<p>Experimental setup in the anechoic chamber. The BLE device emits the signal at 2 m from the RSA306B. The latter is equipped with a BLE antenna and driven by the Tektronix SignalVu-PC software. This way, we can capture the BLE signal in real time and record it as IQ samples on the PC in order to be used later in the script for estimating the power spectral density using Welch’s average periodogram method.</p>
Figure 4">
Figure 4
<p>Experiment schematic.</p>
Figure 5">
Figure 5
<p>Profile of the power consumption of the BLE device during advertising, i.e., the broadcast of the advertising packet on each of the three primary advertising channels, one after another. Each peak corresponds to the advertising on a channel.</p>
Figure 6">
Figure 6
<p>Twenty PSDs of a same BLE device in a static experimental setting. The region between the red lines is the band <span class="html-italic">B</span> where the PSDs are compared; we chose <span class="html-italic">B</span> = 2 MHz, which is the channel bandwidth of the BLE. So, we ignored the background noise to have a more relevant analysis.</p>
Figure 7">
Figure 7
<p>(<b>a</b>) PSDs of anechoic chamber experiment (<b>b</b>) PSDs of open-space experiment.</p>
Figure 8">
Figure 8
<p>PSDs of the same BLE device but measured with two different identifiers.</p>
Figure 9">
Figure 9
<p>(<b>a</b>) PSDs of four different BLE devices using the same identifier (the one of dev48) and, thus, always transmitting exactly the same data. Graph (<b>a</b>) is an overlay of the four groups of plots shown separately in graph (<b>b</b>).</p>
Figure 10">
Figure 10
<p>PSDs of dev48 (green plots) and devCA (brown plots) devices using the same identifier (the one of dev48) and, thus, always transmitting exactly the same data.</p>
">
12 pages, 2154 KiB  
Article
Characterization of the Kinetyx SI Wireless Pressure-Measuring Insole during Benchtop Testing and Running Gait
by Samuel Blades, Matt Jensen, Trent Stellingwerff, Sandra Hundza and Marc Klimstra
Sensors 2023, 23(4), 2352; https://doi.org/10.3390/s23042352 - 20 Feb 2023
Cited by 4 | Viewed by 2600
Abstract
This study characterized the absolute pressure measurement error and reliability of a new fully integrated plantar-pressure measurement system (PPMS; Kinetyx SI) versus an industry-standard PPMS (F-Scan, Tekscan) in an established benchtop testing protocol, as well as against a research-grade instrumented treadmill (Bertec) during a running protocol. In benchtop testing, both the SI (Pearson's correlation coefficient, PCC = 0.86–0.97) and the F-Scan (PCC = 0.87–0.92) showed strong positive linearity on a progressive loading step test, with the SI having a lower mean root mean squared error (RMSE = 9.17 ± 2.02) than the F-Scan (RMSE = 15.96 ± 9.49). The SI and F-Scan had comparable linearity and hysteresis on a sinusoidal loading test (PCC = 0.92–0.99 and 5.04 ± 1.41 vs. PCC = 0.94–0.99 and 6.15 ± 1.39, respectively). The SI had a lower mean RMSE (6.19 ± 1.38) than the F-Scan (8.66 ± 2.31) on the sinusoidal test and a lower absolute error (4.08 ± 3.26) than the F-Scan (16.38 ± 12.43) on a static test. Both the SI (intraclass correlation coefficient, ICC = 0.97–1.00) and the F-Scan (ICC = 0.96–1.00) had near-perfect between-day reliability. During running, the SI pressure output had near-perfect linearity and low RMSE compared to the force measurement from the Bertec treadmill. However, the SI pressure output had a mean hysteresis of 7.67% with a 28.47% maximum hysteresis, which may have implications for the accurate quantification of kinetic gait measures during running. Full article
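The RMSE and percent-hysteresis figures above can be reproduced from paired loading curves. One common formulation is sketched below; referencing hysteresis to full-scale output is an assumption, since the paper's exact definition is not given in this summary:

```python
import math

def rmse(ref, meas):
    """Root mean squared error between a reference and a measured series."""
    return math.sqrt(sum((r - m) ** 2 for r, m in zip(ref, meas)) / len(ref))

def hysteresis_pct(ascending, descending):
    """Max |ascending - descending| gap as a percent of full-scale output."""
    full_scale = (max(max(ascending), max(descending))
                  - min(min(ascending), min(descending)))
    gap = max(abs(a - d) for a, d in zip(ascending, descending))
    return gap / full_scale * 100.0
```

Here `ascending` and `descending` are the sensor outputs sampled at matching load levels on the loading and unloading halves of a cycle.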
(This article belongs to the Collection Sensors for Gait, Human Movement Analysis, and Health Monitoring)
Show Figures

Figure 1
<p>Expanded view of the Kinetyx SI System displaying the main components of the system contained within the fully integrated design, including a pressure-sensing layer (green) which contains 32 resistive pressure-sensing elements distributed across the rearfoot and forefoot regions.</p>
Figure 2">
Figure 2
<p>Part 1 testing equipment: (<b>a</b>) linear force testing device with 30 mm diameter actuator disk and force plate used for the sinusoidal testing (<b>b</b>) pneumatic bladder pressure tester used for step test, static test, and reliability test (<b>c</b>) Tekscan F-scan system. Part 2 testing equipment: (<b>d</b>) Bertec force-instrumented treadmill.</p>
Figure 3">
Figure 3
<p>Normalized vGRF signal from the Bertec force-instrumented treadmill (black) and the filtered version of this signal (black dashed). Normalized pressure sum signal from the Kinetyx SI system (blue). Detection of signal onset (green), signal max (blue) and signal offset (red) for each loading cycle from a given foot from the Bertec vGRF data.</p>
Figure 4">
Figure 4
<p>Mean loading plots from the running data. (<b>a</b>) shows the mean normalized loading data from the ascending part of the signal from the Bertec (black) and SI (blue); (<b>b</b>) shows the mean normalized loading data from the descending part of the signal from the Bertec (black) and SI (blue); (<b>c</b>) shows the mean hysteresis plot of SI vs. Bertec.</p>
">
15 pages, 4713 KiB  
Article
Investigation of Microwave Electromagnetic Fields in Open and Shielded Areas and Their Possible Effects on Biological Structure
by Filip Vaverka, Milan Smetana, Daniela Gombarska and Zuzana Psenakova
Sensors 2023, 23(4), 2351; https://doi.org/10.3390/s23042351 - 20 Feb 2023
Cited by 3 | Viewed by 2077
Abstract
The article investigates electromagnetic fields (EMFs) in the microwave frequency band in a typical human living environment, especially in shielded areas. With the rapid increase in the electromagnetic background level, the presence of EMFs in the environment is currently an essential issue for protecting the population against their potential adverse effects. The authors focus on actual measurements, especially in shielded spaces frequently used in everyday life, such as elevator cabins and cars. The goal is a quantitative evaluation of the distribution of specific vector quantities of the EM field and a comparison with the currently valid hygiene standards. Measured values in shielded spaces show elevated levels in contrast to open space; however, the values do not exceed the limits set with regard to the thermal effect on living tissues. Full article
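Hygiene limits of the kind compared against here are stated either as electric field strength (V/m) or as power density (W/m2); in the far field the two are linked by the plane-wave impedance of free space, S = E2/Z0. A minimal converter; the sample value in the check is illustrative, not a measured one:

```python
import math

Z0 = 376.73  # free-space wave impedance, ohms

def e_field_from_power_density(s):
    """Far-field E-field strength (V/m) from power density s (W/m^2): E = sqrt(s * Z0)."""
    return math.sqrt(s * Z0)

def power_density_from_e_field(e):
    """Inverse relation: S = E^2 / Z0 (W/m^2)."""
    return e * e / Z0
```

This plane-wave relation only holds in the far field; close to a handset antenna, or inside a reflective cavity such as an elevator, the E and H fields must be assessed separately.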
(This article belongs to the Section Electronic Sensors)
Show Figures

Figure 1
<p>ICNIRP Comparison of local <span class="html-italic">SAR</span> and local absorbed power density <span class="html-italic">S</span><sub>ab</sub> [<a href="#B9-sensors-23-02351" class="html-bibr">9</a>].</p>
Figure 2">
Figure 2
<p>ICNIRP Whole- body and local exposure thresholds [<a href="#B9-sensors-23-02351" class="html-bibr">9</a>].</p>
Figure 3">
Figure 3
<p>Used spectrum analyzer BK PRECISION 2650A and dipole antenna M40x.</p>
Figure 4">
Figure 4
<p>Example of calibration curves from both devices: digital spectral analyzer (<b>a</b>) and analog spectral analyzer (<b>b</b>).</p>
Figure 5">
Figure 5
<p>Measurement setup description: (<b>a</b>) representation of measurement points inside the empty elevator and in front of the elevator; (<b>b</b>) location of measurement points and position of the person on the phone in the elevator area.</p>
Figure 6">
Figure 6
<p>Human head phantom (made using rapid prototyping technology) for microwave EMF measurements usage.</p>
Figure 7">
Figure 7
<p>Experimental results: EM field received power spectrum in region of interest, individual sources.</p>
Figure 8">
Figure 8
<p>Experimental results: electric field strength spectrum in region of interest: <span class="html-italic">f</span> = 800 MHz. The line shows the border corridor/elevator.</p>
Figure 9">
Figure 9
<p>Experimental results: EM field received power spectrum in region of interest.</p>
Figure 10">
Figure 10
<p>Experimental results: electric field strength spectrum in region of interest during one person on the phone: <span class="html-italic">f</span> = 1744 MHz.</p>
Full article ">Figure 11
<p>Experimental setup: measurement point (red cross) inside the car (<b>a</b>) CAR A, and (<b>b</b>) CAR B.</p>
Full article ">Figure 12
<p>Example of EMF received power of whole spectrum (upper graph) and spectrum in measured frequency band (lower graph) (<b>a</b>) CAR A, and (<b>b</b>) CAR B.</p>
Full article ">Figure 12 Cont.
<p>Example of EMF received power of whole spectrum (upper graph) and spectrum in measured frequency band (lower graph) (<b>a</b>) CAR A, and (<b>b</b>) CAR B.</p>
Full article ">
14 pages, 2133 KiB  
Article
Trajectory Planning of Autonomous Underwater Vehicles Based on Gauss Pseudospectral Method
by Wenyang Gan, Lixia Su and Zhenzhong Chu
Sensors 2023, 23(4), 2350; https://doi.org/10.3390/s23042350 - 20 Feb 2023
Cited by 3 | Viewed by 2023
Abstract
This paper aims to address the obstacle avoidance problem of autonomous underwater vehicles (AUVs) in complex environments by proposing a trajectory planning method based on the Gauss pseudospectral method (GPM). According to the kinematics and dynamics constraints, and the obstacle avoidance requirement in [...] Read more.
This paper aims to address the obstacle avoidance problem of autonomous underwater vehicles (AUVs) in complex environments by proposing a trajectory planning method based on the Gauss pseudospectral method (GPM). According to the kinematics and dynamics constraints and the obstacle avoidance requirement in AUV navigation, a multi-constraint trajectory planning model is established. The model takes energy consumption and sailing time as optimization objectives. The optimal control problem is transformed into a nonlinear programming problem by the GPM. The trajectory satisfying the optimization objective can be obtained by solving the problem with a sequential quadratic programming (SQP) algorithm. For the optimization of the calculation parameters, the cubic spline interpolation method is proposed to generate the initial value. Finally, through comparison with the linear fitting method, the faster solution of the cubic spline interpolation method is verified. The simulation results show that the cubic spline interpolation method improves the operation performance by 49.35% compared with the linear fitting method, which verifies the effectiveness of the cubic spline interpolation method in solving the optimal control problem. Full article
(This article belongs to the Special Issue Sensors, Modeling and Control for Intelligent Marine Robots)
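The warm-start step described in this abstract, seeding the SQP solver with a trajectory interpolated through coarse waypoints and evaluated at the Gauss collocation points, can be sketched in a few lines. The waypoints, node count, and function name below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.interpolate import CubicSpline

def spline_initial_guess(t_waypoints, x_waypoints, n_nodes=20):
    """Evaluate a cubic spline through coarse waypoints at the
    Legendre-Gauss collocation nodes used by a GPM transcription."""
    t0, tf = t_waypoints[0], t_waypoints[-1]
    spline = CubicSpline(t_waypoints, x_waypoints)
    tau, _ = leggauss(n_nodes)                    # nodes in (-1, 1)
    t = 0.5 * (tf - t0) * tau + 0.5 * (tf + t0)   # map to [t0, tf]
    return t, spline(t)

# Illustrative waypoints: a path bulging around an obstacle
t_wp = np.array([0.0, 5.0, 10.0])
x_wp = np.array([0.0, 3.0, 4.0])
t_nodes, x_guess = spline_initial_guess(t_wp, x_wp)
```

The `(t_nodes, x_guess)` pair would be stacked for each state and control variable to form the initial decision vector of the nonlinear program; swapping `CubicSpline` for a straight-line fit would reproduce the linear-fitting baseline the abstract compares against.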
Figure 1: Result of each state in scenario 1: (a) trajectory planning of the AUV; (b) variation diagram of thrust; (c) diagram of velocity change in the x and y directions.
Figure 2: Result of each state in scenario 2: (a) trajectory planning of the AUV; (b) variation diagram of thrust; (c) diagram of velocity change in the x and y directions.
Figure 3: Average value of control variables and state variables in scenarios 1 and 2: (a) average value of thrust and torque; (b) average value of speed in the x and y directions.
Figure 4: Result of each state in scenario 3: (a) trajectory planning of the AUV; (b) variation diagram of thrust; (c) diagram of velocity change in the x and y directions.
Figure 5: Comparison diagram of control variables in scenarios 1 and 3.
27 pages, 5305 KiB  
Article
Proposal of Mapping Digital Twins Definition Language to Open Platform Communications Unified Architecture
by Salvatore Cavalieri and Salvatore Gambadoro
Sensors 2023, 23(4), 2349; https://doi.org/10.3390/s23042349 - 20 Feb 2023
Cited by 10 | Viewed by 2278
Abstract
The concept of the Digital Twin is of fundamental importance in meeting the main requirements of Industry 4.0. Among the standards currently available to realize Digital Twins is the Digital Twins Definition Language. A Digital Twin requires the exchange of data with the real system [...] Read more.
The concept of the Digital Twin is of fundamental importance in meeting the main requirements of Industry 4.0. Among the standards currently available to realize Digital Twins is the Digital Twins Definition Language. A Digital Twin requires the exchange of data with the real system it models and with other applications that use the digital replica of the system. In the context of Industry 4.0, a reference standard for the interoperable exchange of information between applications is the Open Platform Communications Unified Architecture. The authors believe that interoperability between Digital Twins and the Open Platform Communications Unified Architecture communication standard should be enabled. For this reason, the main goal of this paper is to allow a Digital Twin based on the Digital Twins Definition Language to exchange data with any application compliant with the Open Platform Communications Unified Architecture. A proposal for mapping the Digital Twins Definition Language to the Open Platform Communications Unified Architecture is presented. To verify the feasibility of the proposal, an implementation was made by the authors, and its description is introduced in the paper. Furthermore, the main results of the validation process accomplished on the basis of this implementation are given. Full article
Figure 1: Graphical representation of the proposal.
Figure 2: Example of OPC UA graphical representation.
Figure 3: Example of OPC UA AddIn and HasAddIn Reference.
Figure 4: OPC UA DTDLInterfaceType.
Figure 5: OPC UA DTDL TelemetryType.
Figure 6: OPC UA DTDL PropertyType.
Figure 7: OPC UA DTDLCommandType.
Figure 8: OPC UA DTDLRelationshipType.
Figure 9: OPC UA DTDLHasTarget Reference.
Figure 10: OPC UA DTDLComponentType.
Figure 11: Example of the DTDL Interface: model of a Room (left) and model of a MeetingRoom (right).
Figure 12: Graphical representation of Digital Twin MeetingRoomA in Azure DT Explorer.
Figure 13: Example of mapping using the OPC UA DTDLInterfaceType.
Figure 14: OPC UA Nodes representing MeetingRoomA in the OPC UA domain.
Figure 15: Initializations needed in the OPC UA Server in NodeJS.
Figure 16: Automatic update of the tempValue property.
20 pages, 4425 KiB  
Article
Voltammetric Sensor Based on the Poly(p-aminobenzoic Acid) for the Simultaneous Quantification of Aromatic Aldehydes as Markers of Cognac and Brandy Quality
by Guzel Ziyatdinova, Tatyana Antonova and Rustam Davletshin
Sensors 2023, 23(4), 2348; https://doi.org/10.3390/s23042348 - 20 Feb 2023
Cited by 1 | Viewed by 1793
Abstract
Cognac and brandy quality control is a topical issue in food analysis. Aromatic aldehydes, particularly syringaldehyde and vanillin, are among the markers used for these purposes. Therefore, simple and rapid methods for their simultaneous determination are required. The voltammetric sensor based on [...] Read more.
Cognac and brandy quality control is a topical issue in food analysis. Aromatic aldehydes, particularly syringaldehyde and vanillin, are among the markers used for these purposes. Therefore, simple and rapid methods for their simultaneous determination are required. The voltammetric sensor based on the layer-by-layer combination of multi-walled carbon nanotubes (MWCNTs) and electropolymerized p-aminobenzoic acid (p-ABA) provides full resolution of the syringaldehyde and vanillin oxidation peaks. Optimized conditions for p-ABA electropolymerization (100 µM monomer in Britton–Robinson buffer pH 2.0, twenty cycles in the polarization window of −0.5 to 2.0 V with a potential scan rate of 100 mV·s⁻¹) were found. The poly(p-ABA)-based electrode was characterized by scanning electron microscopy (SEM), cyclic voltammetry, and electrochemical impedance spectroscopy (EIS). Electrooxidation of syringaldehyde and vanillin is an irreversible two-electron diffusion-controlled process. In the differential pulse mode, the sensor allows quantification of aromatic aldehydes in the ranges of 0.075–7.5 and 7.5–100 µM for syringaldehyde and 0.50–7.5 and 7.5–100 µM for vanillin, with detection limits of 0.018 and 0.19 µM, respectively. The sensor was applied to cognac and brandy samples, and the results were compared with those of chromatography. Full article
(This article belongs to the Special Issue Sensing Platforms for Food Quality and Safety Monitoring)
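Linear ranges and detection limits like those quoted above are conventionally obtained from a least-squares calibration of peak current against concentration and a 3σ criterion. The slope, blank noise, and data points below are illustrative assumptions, not values from the paper; the sketch only shows the arithmetic:

```python
import numpy as np

def calibrate(conc_uM, peak_current_uA):
    """Least-squares calibration line: i = slope * c + intercept."""
    slope, intercept = np.polyfit(conc_uM, peak_current_uA, 1)
    return slope, intercept

def detection_limit(slope, sd_blank, k=3.0):
    """Classic k-sigma criterion: LOD = k * s_blank / slope."""
    return k * sd_blank / slope

# Hypothetical responses over the low concentration range (uM -> uA)
c = np.array([0.075, 0.5, 1.0, 2.5, 5.0, 7.5])
i = 0.42 * c + 0.01              # idealized, noise-free response
slope, intercept = calibrate(c, i)
lod = detection_limit(slope, sd_blank=0.0025)
```

With these assumed numbers the detection limit comes out near 0.018 µM, illustrating how a steep calibration slope and low blank noise translate into a sub-tenth-micromolar limit.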
Figure 1: Baseline-corrected differential pulse voltammograms of 10 µM syringaldehyde, vanillin, and their mixture: (a) at the bare GCE; (b) at the MWCNTs/GCE. The supporting electrolyte is Britton–Robinson buffer pH 2.0. Pulse amplitude is 50 mV, pulse time is 50 ms, and potential scan rate is 10 mV·s⁻¹.
Figure 2: Electropolymerization of 100 µM p-ABA at the MWCNTs/GCE in Britton–Robinson buffer pH 2.0: (a) first cycle of electropolymerization; (b) twenty cycles of electropolymerization. The inset is the enlarged scale of the voltammograms in the potential range of 0.1–1.2 V. Potential scan rate is 100 mV·s⁻¹.
Figure 3: Changes in the oxidation currents of a 10 µM mixture of syringaldehyde and vanillin at the polymer-modified electrode depending on: (a) number of cycles at υ = 100 mV·s⁻¹; (b) number of cycles at υ = 150 mV·s⁻¹; (c) supporting electrolyte pH; (d) monomer concentration; (e) polarization window used for the poly(p-ABA) layer electrodeposition. The response of aromatic aldehydes is recorded in Britton–Robinson buffer pH 2.0 using cyclic voltammetry at a potential scan rate of 100 mV·s⁻¹.
Figure 4: SEM images of: (a) bare GCE; (b) MWCNTs/GCE; (c) poly(p-ABA)/MWCNTs/GCE.
Figure 5: (a) Cyclic voltammograms of 1.0 mM ferrocyanide ions in 0.1 M KCl at the bare GCE, MWCNTs/GCE, and poly(p-ABA)/MWCNTs/GCE; potential scan rate is 100 mV·s⁻¹. (b) Nyquist plot (experimental (points) and fitted (lines)) for the bare GCE, MWCNTs/GCE, and poly(p-ABA)/MWCNTs/GCE in the presence of a 1.0 mM mixture of ferro-/ferricyanide ions in 0.1 M KCl; polarization potential is 0.21 V, frequency range is 10 kHz–0.04 Hz, amplitude is 5 mV.
Figure 6: Baseline-corrected differential pulse voltammograms of a 10 µM mixture of syringaldehyde and vanillin at the MWCNTs/GCE and poly(p-ABA)/MWCNTs/GCE in Britton–Robinson buffer pH 2.0. Pulse amplitude is 50 mV, pulse time is 50 ms, and potential scan rate is 10 mV·s⁻¹.
Figure 7: Voltammetric characteristics of 100 µM aromatic aldehydes at the poly(p-ABA)-modified electrode depending on the Britton–Robinson buffer pH: (a) oxidation potentials of syringaldehyde; (b) oxidation currents of syringaldehyde; (c) oxidation potentials of vanillin; (d) oxidation currents of vanillin.
Figure 8: Cyclic voltammograms of aromatic aldehydes at the poly(p-ABA)-modified electrode in Britton–Robinson buffer pH 2.0 at various potential scan rates: (a) 100 µM syringaldehyde; (b) 100 µM vanillin.
Figure 9: Baseline-corrected differential pulse voltammograms of equimolar mixtures of the aromatic aldehydes at various concentrations at the poly(p-ABA)-modified electrode in Britton–Robinson buffer pH 2.0: (a) concentration range of 0.075–7.5 µM; (b) concentration range of 7.5–100 µM. Pulse amplitude is 50 mV, pulse time is 25 ms, potential scan rate is 10 mV·s⁻¹.
Scheme 1: Electropolymerization of p-ABA in acidic medium.
Scheme 2: Electrooxidation of aromatic aldehydes.
16 pages, 710 KiB  
Review
Protocols Targeting Afferent Pathways via Neuromuscular Electrical Stimulation for the Plantar Flexors: A Systematic Review
by Anastasia Papavasileiou, Anthi Xenofondos, Stéphane Baudry, Thomas Lapole, Ioannis G. Amiridis, Dimitrios Metaxiotis, Themistoklis Tsatalas and Dimitrios A. Patikas
Sensors 2023, 23(4), 2347; https://doi.org/10.3390/s23042347 - 20 Feb 2023
Cited by 1 | Viewed by 2003
Abstract
This systematic review documents the protocol characteristics of studies that used neuromuscular electrical stimulation protocols (NMES) on the plantar flexors [through triceps surae (TS) or tibial nerve (TN) stimulation] to stimulate afferent pathways. The review was conducted according to the Preferred Reporting Items [...] Read more.
This systematic review documents the protocol characteristics of studies that used neuromuscular electrical stimulation (NMES) protocols on the plantar flexors [through triceps surae (TS) or tibial nerve (TN) stimulation] to stimulate afferent pathways. The review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) statement, was registered with PROSPERO (ID: CRD42022345194), and was funded by the Greek General Secretariat for Research and Technology (ERA-NET NEURON JTC 2020). Included were original research articles on healthy adults, with NMES interventions applied to the TN, the TS, or both. Four databases (Cochrane Library, PubMed, Scopus, and Web of Science) were systematically searched, in addition to a manual search using the citations of included studies. Quality assessment was conducted on 32 eligible studies by estimating the risk of bias with the checklist of the Effective Public Health Practice Project Quality Assessment Tool. Eighty-seven protocols were analyzed with descriptive statistics. Compared to TS, TN stimulation has been reported over a wider range of frequencies (5–100 vs. 20–200 Hz) and normalization methods for the contraction intensity. The pulse duration ranged from 0.2 to 1 ms for both TS and TN protocols. It is concluded that, with the increasing popularity of NMES protocols in intervention and rehabilitation, future studies may use a wider range of stimulation attributes to stimulate motor neurons via afferent pathways, while additional studies may explore new protocols targeting greater effectiveness. Furthermore, future studies should consider methodological issues, such as stimulation efficacy (e.g., positioning over the motor point) and reporting of the level of discomfort during the application of NMES protocols, to reduce the inherent variability of the results. Full article
Figure 1: Flow diagram of the study selection process via databases and other sources, according to PRISMA 2020 for new systematic reviews.
Figure 2: Incidence of stimulation frequency (A), train duration (B), stimulation intensity (C), and pulse duration (D), expressed as a percentage of the total number of WPHF protocols applied on the triceps surae (TS) and tibial nerve (TN). * Burst represents protocols with variable frequency comprised of one WPHF burst (80–100 Hz) between stimulation trains of lower frequency (20–30 Hz). ** Progressive represents one protocol with variable stimulation frequency ramping up from 4 to 100 Hz and ramping down to 4 Hz. Stimulation intensity (C) has been adjusted based on the produced force [expressed as a percentage of maximal voluntary contraction (MVC)], the M-wave [expressed as a percentage of maximum M-wave (Mmax)], or a percentage of the motor threshold (MT).
16 pages, 7397 KiB  
Article
Selective Deeply Supervised Multi-Scale Attention Network for Brain Tumor Segmentation
by Azka Rehman, Muhammad Usman, Abdullah Shahid, Siddique Latif and Junaid Qadir
Sensors 2023, 23(4), 2346; https://doi.org/10.3390/s23042346 - 20 Feb 2023
Cited by 6 | Viewed by 2240
Abstract
Brain tumors are among the deadliest forms of cancer, characterized by abnormal proliferation of brain cells. While early identification of brain tumors can greatly aid in their therapy, the process of manual segmentation performed by expert doctors, which is often time-consuming, tedious, and [...] Read more.
Brain tumors are among the deadliest forms of cancer, characterized by abnormal proliferation of brain cells. While early identification of brain tumors can greatly aid in their therapy, the process of manual segmentation performed by expert doctors, which is often time-consuming, tedious, and prone to human error, can act as a bottleneck in the diagnostic process. This motivates the development of automated algorithms for brain tumor segmentation. However, accurately segmenting the enhanced and core tumor regions is complicated due to high levels of inter- and intra-tumor heterogeneity in terms of texture, morphology, and shape. This study proposes a fully automatic method called the selective deeply supervised multi-scale attention network (SDS-MSA-Net) for segmenting brain tumor regions using a multi-scale attention network with novel selective deep supervision (SDS) mechanisms for training. The method utilizes a 3D input composed of five consecutive slices, in addition to a 2D slice, to maintain sequential information. The proposed multi-scale architecture includes two encoding units to extract meaningful global and local features from the 3D and 2D inputs, respectively. These coarse features are then passed through attention units to filter out redundant information by assigning lower weights. The refined features are fed into a decoder block, which upscales the features at various levels while learning patterns relevant to all tumor regions. The SDS block is introduced to immediately upscale features from intermediate layers of the decoder, with the aim of producing segmentations of the whole, enhanced, and core tumor regions. The proposed framework was evaluated on the BraTS2020 dataset and showed improved performance in brain tumor region segmentation, particularly in the segmentation of the core and enhancing tumor regions, demonstrating the effectiveness of the proposed approach. Our code is publicly available. Full article
(This article belongs to the Special Issue Advances in Biomedical Sensing, Instrumentation and Systems)
Figure 1: Illustrations of brain tumor regions in an MRI slice from the BraTS 2020 database. From left to right: FLAIR, T1, T1ce, and T2 slices.
Figure 2: Illustration of the preprocessing stage, which includes scan refinement and image enhancement using cropping and histogram equalization, respectively.
Figure 3: The selective deeply supervised multi-scale attention network (SDS-MSA-Net) takes 2D and 3D inputs to segment three types of brain tumor regions. SDS-MSA-Net produces four outputs, which enable it to be trained with the selective deep supervision technique.
Figure 4: Illustration of the architecture of the Res block, Conv block, bridge block, DeConv block, and auxiliary block. (a) Res blocks and (c) bridge blocks are used in the 3D encoding unit to extract meaningful features and to downscale their dimensions, respectively; (b) Conv blocks are employed in the 2D encoding unit; (d) the DeConv block is used in the decoder block to upscale the refined features; finally, (e) the auxiliary block is employed in the SDS block to immediately upscale the features from intermediate layers of the decoder block to produce the segmentation mask of the selected brain tumor region(s).
Figure 5: Schematic of the attention unit (AU) that uses additive attention. The AU is utilized in the decoder block of the proposed SDS-MSA-Net (Figure 3). The input features (x) are scaled with attention coefficients (α) computed in the AU. Spatial regions are selected by analyzing both the activations and the contextual information provided by the gating signal (g), which is collected from a coarser scale. AUs are employed at the decoder block of the proposed MSA-Net to refine the coarse features coming from the encoder block.
Figure 6: Learning curves for different training schemes.
Figure 7: Results of SDS-MSA-Net compared with three downgraded variants (attention UNet, MS-CNN, and MSA-CNN). Note: red, blue, and green indicate the whole, core, and enhanced tumor regions, respectively.
25 pages, 796 KiB  
Article
Sensor Clustering Using a K-Means Algorithm in Combination with Optimized Unmanned Aerial Vehicle Trajectory in Wireless Sensor Networks
by Thanh-Nam Tran, Thanh-Long Nguyen, Vinh Truong Hoang and Miroslav Voznak
Sensors 2023, 23(4), 2345; https://doi.org/10.3390/s23042345 - 20 Feb 2023
Cited by 5 | Viewed by 2328
Abstract
We examine a general wireless sensor network (WSN) model which incorporates a large number of sensors distributed over a large and complex geographical area. The study proposes solutions for flexible deployment, low cost, and high reliability in a wireless sensor network. To [...] Read more.
We examine a general wireless sensor network (WSN) model which incorporates a large number of sensors distributed over a large and complex geographical area. The study proposes solutions for flexible deployment, low cost, and high reliability in a wireless sensor network. To achieve these aims, we propose the application of an unmanned aerial vehicle (UAV) as a flying relay to receive and forward signals, employing nonorthogonal multiple access (NOMA) for high spectral sharing efficiency. To obtain an optimal number of subclusters and optimal UAV positioning, we apply a sensor clustering method based on K-means unsupervised machine learning in combination with the gap statistic method. The study proposes an algorithm to optimize the trajectory of the UAV, i.e., the centroid-to-next-nearest-centroid (CNNC) path. Because a subcluster containing multiple sensors produces cochannel interference, which affects the signal decoding performance at the UAV, we propose a diagonal matrix as a phase-shift framework at the UAV to separate and decode the messages received from the sensors. The study examines the outage probability performance of an individual WSN and provides results based on Monte Carlo simulations and analyses. The results verified the benefits of the K-means algorithm in deploying the WSN. Full article
(This article belongs to the Special Issue Advanced Applications of WSNs and the IoT)
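The centroid-to-next-nearest-centroid (CNNC) trajectory mentioned above is a greedy tour over the K-means centroids: from the current centroid, the UAV always flies to the nearest centroid not yet visited. A minimal sketch under an assumed four-centroid layout (the coordinates below are illustrative, not the paper's data):

```python
import numpy as np

def cnnc_path(centroids, start=0):
    """Greedy CNNC ordering: from the current centroid, always fly
    to the nearest centroid that has not been visited yet."""
    remaining = set(range(len(centroids)))
    order = [start]
    remaining.remove(start)
    while remaining:
        current = centroids[order[-1]]
        nearest = min(remaining,
                      key=lambda j: np.linalg.norm(centroids[j] - current))
        order.append(nearest)
        remaining.remove(nearest)
    return order

# Assumed centroid layout from a K = 4 clustering (illustrative only)
C = np.array([[0.0, 0.0], [9.0, 9.0], [1.0, 2.0], [5.0, 4.0]])
route = cnnc_path(C)
```

For this layout the greedy rule yields the visiting order `[0, 2, 3, 1]`, i.e., C1 → C3 → C4 → C2 in 1-based labels, the same pattern of sequence the paper's Algorithm 1 produces for its centroids.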
Figure 1: Random positioning of sensor nodes, where 20 ≤ N ≤ 50.
Figure 2: Optimized number of subclusters using the gap statistic method; the optimal number of clusters, K = 4, satisfies the first maximum of the standard error.
Figure 3: Sensor clustering using the K-means algorithm, with the optimal number of clusters K = 4.
Figure 4: Joint UAV trajectory and the shortest path based on the centroid-to-next-nearest-centroid distance given by Algorithm 1 (i.e., C1 → C3 → C4 → C2).
Figure 5: Joint schedule.
Figure 6: Procedure of processing data at the UAV.
Figure 7: Outage probability at the UAV for the UAV's subcluster trajectory sequence (a) C1, (b) C3, (c) C4, and (d) C2.
Figure 8: Outage probability at the mobile base station for the UAV's subcluster trajectory sequence (a) C1, (b) C3, (c) C4, and (d) C2.
Figure 9: Improved outage probability at the mobile base station equipped with A_B = 32 antennae for the UAV trajectory subcluster sequence (a) C1, (b) C3, (c) C4, and (d) C2.
Figure A1: The randomly distributed WSN (a), determined optimal number of subclusters K (b), division into subclusters (c), and centroid-to-next-nearest-centroid trajectory (d).
21 pages, 1363 KiB  
Article
An Adaptable and Unsupervised TinyML Anomaly Detection System for Extreme Industrial Environments
by Mattia Antonini, Miguel Pincheira, Massimo Vecchio and Fabio Antonelli
Sensors 2023, 23(4), 2344; https://doi.org/10.3390/s23042344 - 20 Feb 2023
Cited by 25 | Viewed by 5230
Abstract
Industrial assets often feature multiple sensing devices to keep track of their status by monitoring certain physical parameters. These readings can be analyzed with machine learning (ML) tools to identify potential failures through anomaly detection, allowing operators to take appropriate corrective actions. Typically, [...] Read more.
Industrial assets often feature multiple sensing devices to keep track of their status by monitoring certain physical parameters. These readings can be analyzed with machine learning (ML) tools to identify potential failures through anomaly detection, allowing operators to take appropriate corrective actions. Typically, these analyses are conducted on servers located in data centers or the cloud. However, this approach increases system complexity and is susceptible to failure in cases where connectivity is unavailable. Furthermore, this communication restriction limits the approach’s applicability in extreme industrial environments where operating conditions affect communication and access to the system. This paper proposes and evaluates an end-to-end adaptable and configurable anomaly detection system that uses the Internet of Things (IoT), edge computing, and Tiny-MLOps methodologies in an extreme industrial environment such as submersible pumps. The system runs on an IoT sensing Kit, based on an ESP32 microcontroller and MicroPython firmware, located near the data source. The processing pipeline on the sensing device collects data, trains an anomaly detection model, and alerts an external gateway in the event of an anomaly. The anomaly detection model uses the isolation forest algorithm, which can be trained on the microcontroller in just 1.2 to 6.4 s and detect an anomaly in less than 16 milliseconds with an ensemble of 50 trees and 80 KB of RAM. Additionally, the system employs blockchain technology to provide a transparent and irrefutable repository of anomalies. Full article
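One reason the isolation forest fits on a microcontroller is that the whole algorithm is a few dozen lines with no dependencies. The sketch below is a generic re-implementation of the standard algorithm (random dimension, random split, path-length score), not the authors' MicroPython firmware, and the vibration data are simulated:

```python
import math
import random

def _c(n):
    """Average path length of an unsuccessful BST search (normalizer)."""
    if n <= 1:
        return 0.0
    return 2.0 * (math.log(n - 1) + 0.5772156649) - 2.0 * (n - 1) / n

def _fit_tree(data, depth, max_depth, rnd):
    if depth >= max_depth or len(data) <= 1:
        return ('leaf', len(data))
    dim = rnd.randrange(len(data[0]))
    lo = min(p[dim] for p in data)
    hi = max(p[dim] for p in data)
    if lo == hi:
        return ('leaf', len(data))
    split = rnd.uniform(lo, hi)
    left = [p for p in data if p[dim] < split]
    right = [p for p in data if p[dim] >= split]
    return ('node', dim, split,
            _fit_tree(left, depth + 1, max_depth, rnd),
            _fit_tree(right, depth + 1, max_depth, rnd))

def fit_forest(data, n_trees=50, seed=0):
    rnd = random.Random(seed)
    max_depth = math.ceil(math.log2(len(data)))
    return [_fit_tree(data, 0, max_depth, rnd) for _ in range(n_trees)]

def _path_len(tree, p, depth=0):
    if tree[0] == 'leaf':
        return depth + _c(tree[1])       # correct for unsplit leaves
    _, dim, split, left, right = tree
    return _path_len(left if p[dim] < split else right, p, depth + 1)

def anomaly_score(forest, p, n):
    """Score in (0, 1]; values close to 1 indicate anomalies."""
    avg = sum(_path_len(t, p) for t in forest) / len(forest)
    return 2.0 ** (-avg / _c(n))

# Simulated vibration features: a tight normal cluster plus one spike
rnd = random.Random(1)
normal = [(rnd.gauss(0, 0.1), rnd.gauss(0, 0.1)) for _ in range(127)]
data = normal + [(2.0, 2.0)]
forest = fit_forest(data)                # ensemble of 50 trees
```

Anomalies are isolated by fewer random splits, so their average path length over the 50-tree ensemble is short and the score `2^(-E[h]/c(n))` approaches 1, which is what lets the alert decision run in milliseconds on-device.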
Show Figures

Figure 1

Figure 1
<p>Overlapping of technological areas. Image inspired by <a href="#sensors-23-02344-f001" class="html-fig">Figure 1</a> in [<a href="#B31-sensors-23-02344" class="html-bibr">31</a>].</p>
Full article ">Figure 2
<p>Schema of the proposed system including the Tiny-MLOps pipeline. Green blocks are the newly introduced blocks, while yellow blocks have been improved since [<a href="#B13-sensors-23-02344" class="html-bibr">13</a>].</p>
Full article ">Figure 3
<p>Schema of the proposed system: the rotating pump where the sensor is installed, the power line used for energy and communication, the gateway, and the cloud/external world.</p>
Full article ">Figure 4
<p>Test-bed devices built. (<b>a</b>) is the test-bed built in [<a href="#B13-sensors-23-02344" class="html-bibr">13</a>] to validate the proposed solution comprising an IMU to sample vibrations, a temperature sensor to sample the fan engine temperature, a PC 12v DC fan, and a suspended structure over springs that function asa vibrating simulator. (<b>b</b>) is the PCB board under test in this work before the deployment inside the underwater pump. The board carries the ESP32-ROVER-IE board with the MCU, the ST7540 PLC Modem, the ICM-20948 IMU mounted over a break-out board by Adafruit, the temperature sensor placed below the IMU board, the power-line connector, and the power supply unit (PSU), missing in this picture.</p>
Figure 5
<p>Inference delay for IF trained (training size 100) with a subsampling pool of 50 (<b>a</b>) and 100 instances (<b>b</b>). Straight lines are the trend line showing the linear dependence between the ensemble size and inference time.</p>
Figure 6
<p>Loading memory footprint with subsampling pool of 50 (<b>a</b>) and 100 instances (<b>b</b>).</p>
Figure 7
<p>Model loading time with subsampling pool of 50 (<b>a</b>) and 100 instances (<b>b</b>).</p>
20 pages, 35574 KiB  
Article
Point Cloud Instance Segmentation with Inaccurate Bounding-Box Annotations
by Yinyin Peng, Hui Feng, Tao Chen and Bo Hu
Sensors 2023, 23(4), 2343; https://doi.org/10.3390/s23042343 - 20 Feb 2023
Viewed by 2431
Abstract
Most existing point cloud instance segmentation methods require accurate and dense point-level annotations, which are extremely laborious to collect. While incomplete and inexact supervision has been exploited to reduce labeling efforts, inaccurate supervision remains under-explored. This kind of supervision is almost inevitable in practice, especially in complex 3D point clouds, and it severely degrades the generalization performance of deep networks. To this end, we propose the first weakly supervised point cloud instance segmentation framework with inaccurate box-level labels. A novel self-distillation architecture is presented to boost the generalization ability while leveraging the cheap but noisy bounding-box annotations. Specifically, we employ consistency regularization to distill self-knowledge from data perturbation and historical predictions, which prevents the deep network from overfitting the noisy labels. Moreover, we progressively select reliable samples and correct their labels based on the historical consistency. Extensive experiments on the ScanNet-v2 dataset were used to validate the effectiveness and robustness of our method in dealing with inexact and inaccurate annotations. Full article
(This article belongs to the Special Issue Intelligent Point Cloud Processing, Sensing and Understanding)
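Among the components summarized above, the temporal consistency regularization (an exponential moving average of past predictions serving as soft targets) is easy to state concretely. The numpy sketch below is a generic illustration of that idea, not the authors' code; the momentum value of 0.9 and the toy logits are assumptions.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def update_ema(ema_logits, new_logits, momentum=0.9):
    # running history of predictions; older epochs decay geometrically
    return momentum * ema_logits + (1.0 - momentum) * new_logits

def tcr_loss(current_logits, ema_logits):
    # KL divergence from the historical soft targets to the current prediction
    p = softmax(ema_logits)
    q = softmax(current_logits)
    return float(np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)))

# two points, three classes
logits_now = np.array([[2.0, 0.5, -1.0], [0.0, 1.5, 0.3]])
logits_flipped = np.array([[-1.0, 2.0, 0.5], [1.5, 0.0, 0.3]])
ema = update_ema(logits_now.copy(), logits_now)   # history agrees with current
consistent = tcr_loss(logits_now, ema)            # near zero
inconsistent = tcr_loss(logits_flipped, ema)      # penalized
```

Predictions that agree with the historical average incur essentially no penalty, while class flips are penalized, which is what discourages overfitting to the noisy box-level labels.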
Show Figures

Figure 1
<p>Illustration of various weak supervision methods for point cloud segmentation. (<b>a</b>) Incomplete point-level labels denote the classes to which a small fraction of points belong. (<b>b</b>) Scene-level (subcloud-level) labels indicate all of the classes appearing in the scene (subcloud). (<b>c</b>) Box-level labels indicate the class and location of each object. (<b>d</b>) Inaccurate box-level labels indicate the portion of boxes that are mislabeled. For example, a “chair” is mislabelled as a “sofa”.</p>
Figure 2
<p>The training framework of self-distillation based on perturbation and history. We first generate pseudo-labels according to the point–box association (c.f. <a href="#sec3dot2-sensors-23-02343" class="html-sec">Section 3.2</a>) and train a 3D sparse convolutional network with two types of consistency regularization, namely, PCR (c.f. <a href="#sec3dot4dot1-sensors-23-02343" class="html-sec">Section 3.4.1</a>) and TCR (c.f. <a href="#sec3dot4dot3-sensors-23-02343" class="html-sec">Section 3.4.3</a>). With the help of regularization, the model is able to perform label refurbishment (HLR, c.f. <a href="#sec3dot4dot2-sensors-23-02343" class="html-sec">Section 3.4.2</a>) with higher precision. Note that the noisy loss is used only in the warm-up stage, and afterward, it is replaced by the clean loss, since the cleaned (i.e., refurbished) labels are available.</p>
Figure 3
<p>Illustration of the perturbation-based consistency regularization (PCR) module. We construct a parallel branch through data perturbation and force the output predictions of the two branches to be consistent. Note that the predictions include both semantics and geometry.</p>
Figure 4
<p>Illustration of the history-guided label refurbishment (HLR) module. We use a historical queue to store the past predictions and correct the previously generated pseudo-labels with consistently predicted classes while keeping the unreliable samples unchanged instead of directly dropping them. Compared with other methods, we take a more conservative strategy, as regularization decreases the overfitting risk.</p>
Figure 5
<p>Illustration of the temporal consistency regularization (TCR) module. We record the exponential moving average of the past predicted distributions (logits), which serve as the soft targets for the current prediction.</p>
Figure 6
<p>Visualization of different noise rates affecting the semantic labels. From left to right are the input scene, the ground-truth semantics, and the pseudo-labels of noise rates of 20%, 40%, and 60%. The higher the noise rate, the more chaotic the semantics.</p>
Figure 7
<p>Qualitative comparison at a noise rate of 40% on ScanNet-v2. The legend is employed to distinguish among different semantic meanings, while the individual instances are randomly colored. The key differences are marked out with red dashed rectangles.</p>
Figure 8
<p>Qualitative comparison at a noise rate of 40% on ScanNet-v2. The legend is employed to distinguish among different semantic meanings, and the key differences are marked out with red dashed rectangles.</p>
Figure 9
<p>Bad cases on ScanNet-v2 in the noise-free setting. The first two rows show that refrigerators could be misclassified as cabinets, doors, and other furniture. We use “?” to represent this complicated situation. The last two rows show that windows could be misclassified as curtains, which lowered both categories’ performance. The legend is employed to distinguish among different semantic meanings, and the key differences are marked out with red dashed rectangles.</p>
Figure 10
<p>Trend of statistics in history-guided label refurbishment.</p>
Figure 11
<p>Qualitative demonstration of history-guided label refurbishment. From left to right are the input point clouds, the corresponding noisy pseudo-labels, the refurbished labels in epochs 40, 80, and 200, and the ground-truth semantic labels.</p>
22 pages, 11919 KiB  
Article
Flight Controller as a Low-Cost IMU Sensor for Human Motion Measurement
by Artur Iluk
Sensors 2023, 23(4), 2342; https://doi.org/10.3390/s23042342 - 20 Feb 2023
Cited by 1 | Viewed by 2824
Abstract
Human motion analysis requires information about the position and orientation of different parts of the human body over time. Optical methods such as the VICON system are widely used, as are sets of wired and wireless IMU sensors (e.g., Xsens) that estimate the absolute orientation angles of the extremities. Both methods require expensive measurement devices and have disadvantages such as a limited rate of position and angle acquisition. In this paper, the adaptation of a drone flight controller was proposed as a low-cost and relatively high-performance device for human body pose estimation and acceleration measurements. The test setup using flight controllers was described, and the performance of the flight controller sensor was compared with that of commercial sensors. The practical usability of the sensors in human motion measurement was presented, and issues related to the dynamic response of IMU-based sensors during acceleration measurement were discussed. Full article
(This article belongs to the Special Issue Human Activity Recognition in Smart Sensing Environment)
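The impulse-based synchronization of multiple recordings described above (the pulses of Figures 12 through 15) amounts to estimating a lag between signals, which can be illustrated with a simple cross-correlation peak search. The sample rate, pulse shape, and 25-sample lag below are invented for the illustration and do not come from the paper.

```python
import numpy as np

def estimate_lag(reference, signal):
    # lag (in samples) that best aligns `signal` with `reference`,
    # found at the peak of the full cross-correlation
    ref = reference - reference.mean()
    sig = signal - signal.mean()
    corr = np.correlate(sig, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

fs = 1000                                             # Hz, assumed sample rate
t = np.arange(0, 1.0, 1.0 / fs)
pulse = np.exp(-((t - 0.3) ** 2) / (2 * 0.005 ** 2))  # impulse-like excitation
delayed = np.roll(pulse, 25)                          # a sensor lagging by 25 samples
lag = estimate_lag(pulse, delayed)                    # recovers 25
aligned = np.roll(delayed, -lag)
```

Dividing the recovered sample lag by the sample rate gives the time shift Δt annotated in the pulse magnifications.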
Show Figures

Figure 1
<p>The set of sensors from the optical motion tracking system: VICON (<b>left</b>) and IMU-based sensors mounted on the human body (<b>right</b>).</p>
Figure 2
<p>MTw Awinda sensor 47 × 30 × 13 mm, mass 16 g (<b>left</b>); and Noraxon Ultium Motion sensor 44.5 × 33 × 12.2 mm, mass 19 g (<b>right</b>).</p>
Figure 3
<p>RedShift Labs UM7, 28 × 28 × 11 mm, mass 7.5 g (<b>left</b>); and DFRobot SEN0386 sensor, 51.3 × 36 × 10 mm, mass 18 g (<b>right</b>).</p>
Figure 4
<p>Layout and connectivity ports of the flight controller.</p>
Figure 5
<p>The SD card slot on the bottom side of the flight controller.</p>
Figure 6
<p>View of the cabled sensor secured by a transparent shrink tube, with a 32 GB micro-SD card in the slot for local recording.</p>
Figure 7
<p>The layout of the measurement system.</p>
Figure 8
<p>Remote control receiver used to trigger the sensor system.</p>
Figure 9
<p>The stages of recording synchronization.</p>
Figure 10
<p>The example output of the synchronization procedure in Python.</p>
Figure 11
<p>The reference Xsens MTi G-700 sensor (<b>left</b>); and the flight controller sensors for synchronization measurement (<b>right</b>).</p>
Figure 12
<p>Raw measurement of impulse excitation (acceleration on the Z axis) of 5 flight controller sensors (FC1–FC5) before synchronization.</p>
Figure 13
<p>The raw measurement of impulse excitation (acceleration in Z-axis) of 5 flight controller sensors (FC1–FC5) before synchronization. Magnifications of the first pulse (<b>a</b>), the middle pulse (<b>b</b>), and the last pulse (<b>c</b>).</p>
Figure 14
<p>Measurement of impulse excitation of five flight controller sensors (FC1–FC5) after synchronization and reference signal measured with the Xsens MTi-G700 sensor (MTi).</p>
Figure 15
<p>Measurement of impulse excitation by five flight controller sensors (FC1–FC5) after the synchronization and reference signal measured with the Xsens MTi-G700 sensor (MTi). Magnification of the initial (<b>a</b>), middle (<b>b</b>), and last pulse (<b>c</b>). ∆t: shift between the reference pulse and the FC pulse.</p>
Figure 16
<p>Comparison of dynamic angle measurement using the flight controller sensor (FC) and reference Xsens MTi G-700 sensor (MTi).</p>
Figure 17
<p>The angle deviation between the FC sensor and the reference MTi G-700 sensor. The red lines represent the standard deviation of the FC signal according to the reference MTi signal.</p>
Figure 18
<p>The magnified view at the time of maximum error from <a href="#sensors-23-02342-f017" class="html-fig">Figure 17</a>, the signals measured by the reference sensor and the FC sensor (<b>top</b>) and error of estimation (<b>bottom</b>). The red dashed lines in the bottom view represent the standard deviation of the FC signal according to the reference MTi signal.</p>
Figure 19
<p>The MTi G-700 sensor and the set of sensors connected to the palm: MTi-G700 sensor, Xsens Awinda wireless sensor, and flight controller sensor.</p>
Figure 20
<p>Synchronized measurement of pitch angle: MTi: reference signal, Xsens MTi-G700 sensor; MTw: Xsens Awinda wireless sensor; FC: flight controller sensor.</p>
Figure 21
<p>Synchronized measurement of the pitch angle, magnification of the single movement: MTi: reference signal, Xsens MTi-G700 sensor; MTw: Xsens Awinda wireless sensor; FC: flight controller sensor.</p>
Figure 22
<p>Location of sensors on the human body (<b>left</b>), location of the sensor on the head (<b>middle</b>) and on the neck (<b>right</b>).</p>
Figure 23
<p>Angles measured on the left foot during walking by the flight controller.</p>
Figure 24
<p>Angles measured on the head during walking by the flight controller.</p>
Figure 25
<p>Vertical component of acceleration of each sensor during the passage with bare feet.</p>
Figure 26
<p>Vertical component of acceleration of each sensor during the passage in sport shoes.</p>
Figure 27
<p>Vertical component of the head, neck, and tailbone during passage with bare feet.</p>
Figure 28
<p>The vertical component of the acceleration of head, neck, and tailbone during the passage in sports shoes.</p>
Figure 29
<p>The vertical component of the head acceleration during 10 subsequent steps of the single passage.</p>
Figure 30
<p>The vertical component of the head acceleration during 10 subsequent steps of the single passage with bare feet.</p>
Figure 31
<p>The vertical component of the head acceleration during 10 subsequent steps of the single passage in sports shoes.</p>
Figure 32
<p>The vertical response of the sensors during the single impact of the bare right foot onto a hard surface.</p>
22 pages, 6163 KiB  
Article
Hyperspectral and Multispectral Image Fusion with Automated Extraction of Image-Based Endmember Bundles and Sparsity-Based Unmixing to Deal with Spectral Variability
by Salah Eddine Brezini and Yannick Deville
Sensors 2023, 23(4), 2341; https://doi.org/10.3390/s23042341 - 20 Feb 2023
Cited by 5 | Viewed by 2464
Abstract
The aim of fusing hyperspectral and multispectral images is to overcome the limitations of remote sensing hyperspectral sensors by improving their spatial resolution. This process, also known as hypersharpening, generates an unobserved high-spatial-resolution hyperspectral image. To this end, several hypersharpening methods have been developed; however, most of them do not consider the spectral variability phenomenon. Neglecting this phenomenon introduces errors that reduce the spatial and spectral quality of the sharpened products. Recently, new approaches have been proposed to tackle this problem, particularly those based on spectral unmixing and parametric models. Nevertheless, the reported methods need a large number of parameters to address spectral variability, which inevitably yields a higher computation time compared to standard hypersharpening methods. In this paper, a new hypersharpening method is introduced that addresses spectral variability by combining a spectra-bundle-based method, namely Automated Extraction of Endmember Bundles (AEEB), with a sparsity-based method called Sparse Unmixing by Variable Splitting and Augmented Lagrangian (SUnSAL). This new method, called Hyperspectral Super-resolution with Spectra Bundles dealing with Spectral Variability (HSB-SV), was tested on both synthetic and real data. Experimental results showed that HSB-SV provides sharpened products with higher spectral and spatial reconstruction fidelity and a much lower computational complexity than other methods dealing with spectral variability, which are the main contributions of the designed method. Full article
(This article belongs to the Special Issue Hyperspectral Sensors, Algorithms and Task Performance)
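The core unmixing step above, estimating nonnegative abundances of library endmembers for each pixel, can be illustrated with a simplified nonnegative least-squares solver. This projected-gradient sketch is a stand-in for the sparsity-constrained SUnSAL solver named in the abstract, not a reproduction of it, and the endmember signatures and abundances are toy values.

```python
import numpy as np

def unmix_nonneg(E, y, iters=5000):
    # projected gradient descent for min ||E a - y||^2 subject to a >= 0
    step = 1.0 / np.linalg.norm(E, 2) ** 2   # 1 / (largest singular value)^2
    a = np.zeros(E.shape[1])
    for _ in range(iters):
        a -= step * (E.T @ (E @ a - y))
        a = np.maximum(a, 0.0)               # project onto the nonnegative orthant
    return a

# toy library: three endmember signatures over six spectral bands
E = np.array([
    [0.9, 0.1, 0.2],
    [0.8, 0.2, 0.2],
    [0.3, 0.7, 0.1],
    [0.2, 0.8, 0.1],
    [0.1, 0.2, 0.9],
    [0.1, 0.1, 0.8],
])
true_abundances = np.array([0.6, 0.4, 0.0])  # third endmember absent (sparse)
pixel = E @ true_abundances
estimated = unmix_nonneg(E, pixel)
```

With a bundle library, most abundances come out at or near zero, which is the sparsity that SUnSAL additionally promotes with an explicit L1 penalty.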
Show Figures

Figure 1
<p>True-color image composite for the synthetic dataset. (<b>a</b>) Original Hyperspectral image; (<b>b</b>) Low-spectral-resolution multispectral image; (<b>c</b>) Low-spatial-resolution hyperspectral image.</p>
Figure 2
<p>True-color image composite for the real dataset. (<b>a</b>) Low-spatial-resolution hyperspectral image; (<b>b</b>) High-spatial-resolution pansharpened multispectral image.</p>
Figure 3
<p>Extracted spectral library from the synthetic data by AEEB.</p>
Figure 4
<p>Band-wise PSNR for the synthetic dataset.</p>
Figure 5
<p>True-color image composite for the synthetic dataset. (<b>a</b>) Original hyperspectral image; (<b>b</b>) Obtained HSB-SV sharpened hyperspectral image; (<b>c</b>) Obtained HMF-IPNMF sharpened hyperspectral image; (<b>d</b>) Obtained HySure sharpened hyperspectral image; (<b>e</b>) Obtained CNMF sharpened hyperspectral image; (<b>f</b>) Obtained FuVar sharpened hyperspectral image.</p>
Figure 6
<p>Spectral band in the <math display="inline"><semantics> <mrow> <mn>0.850</mn> <mo> </mo> <mrow> <mi mathvariant="sans-serif">μ</mi> <mi mathvariant="normal">m</mi> </mrow> </mrow> </semantics></math> region (<b>a</b>) Original hyperspectral image; (<b>b</b>) Obtained HSB-SV sharpened hyperspectral image; (<b>c</b>) Obtained HMF-IPNMF sharpened hyperspectral image; (<b>d</b>) Obtained HySure sharpened hyperspectral image; (<b>e</b>) Obtained CNMF sharpened hyperspectral image; (<b>f</b>) Obtained FuVar sharpened hyperspectral image.</p>
Figure 7
<p>Spectral library extracted from the real data by AEEB.</p>
Figure 8
<p>True-color image composite for fusion products derived for the real dataset. (<b>a</b>) Obtained HSB-SV sharpened hyperspectral image; (<b>b</b>) Obtained HMF-IPNMF sharpened hyperspectral image; (<b>c</b>) Obtained HySure sharpened hyperspectral image; (<b>d</b>) Obtained CNMF sharpened hyperspectral image; (<b>e</b>) Obtained FuVar sharpened hyperspectral image.</p>
Figure 9
<p>Spectral band in the <math display="inline"><semantics> <mrow> <mn>0.854</mn> <mo> </mo> <mrow> <mi mathvariant="sans-serif">μ</mi> <mi mathvariant="normal">m</mi> </mrow> </mrow> </semantics></math> region. (<b>a</b>) Obtained HSB-SV sharpened hyperspectral image; (<b>b</b>) Obtained HMF-IPNMF sharpened hyperspectral image; (<b>c</b>) Obtained HySure sharpened hyperspectral image; (<b>d</b>) Obtained CNMF sharpened hyperspectral image; (<b>e</b>) Obtained FuVar sharpened hyperspectral image.</p>
14 pages, 1989 KiB  
Article
Time Domain Transmissiometry-Based Sensor for Simultaneously Measuring Soil Water Content, Electrical Conductivity, Temperature, and Matric Potential
by Yuki Kojima, Manabu Matsuoka, Tomohide Ariki and Tetsuo Yoshioka
Sensors 2023, 23(4), 2340; https://doi.org/10.3390/s23042340 - 20 Feb 2023
Cited by 2 | Viewed by 2467
Abstract
Owing to the increasing popularity of smart agriculture in recent years, it is necessary to develop a single sensor that can measure several soil properties, particularly the soil water content and matric potential. Therefore, in this study, we developed a sensor that can simultaneously measure soil water content (θ), electrical conductivity (σb), temperature, and matric potential (ψ). The proposed sensor can determine θ and σb using time domain transmissiometry and can determine ψ based on the capacitance of the accompanying ceramic plate. A series of laboratory and field tests were conducted to evaluate the performance of the sensor. The sensor output values were correlated with the soil properties, and the temperature dependence of the sensor outputs was evaluated. Additionally, field tests were conducted to measure transient soil conditions over a long period. The results show that the developed sensor can measure each soil property with acceptable accuracy. The root-mean-square errors between the sensor and reference values were 1.7 for the dielectric constant (which is equivalent to θ), 62 mS m−1 for σb, and 0.05–0.88 for log ψ. The temperature dependence was not a problem, except when ψ was below −100 kPa. The sensor exhibited sufficient lifetime and performance for long-term measurements in agricultural fields. We believe that the developed sensor can contribute to smart agriculture and research on heat and mass transfer in soil. Full article
(This article belongs to the Section Smart Agriculture)
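For context on why the abstract treats the dielectric constant as "equivalent to θ": the conversion from bulk dielectric constant to volumetric water content is commonly done with the empirical Topp relation sketched below. This is the generic textbook relation, not the sensor-specific calibration developed in the paper.

```python
def topp_theta(eps_b):
    """Volumetric water content (m^3 m^-3) from bulk dielectric constant.

    Topp et al. (1980) empirical polynomial, valid roughly for mineral
    soils with eps_b between about 3 (air-dry) and 40 (near saturation).
    """
    return (-5.3e-2 + 2.92e-2 * eps_b
            - 5.5e-4 * eps_b ** 2 + 4.3e-6 * eps_b ** 3)

theta_moist = topp_theta(20.0)   # a moist soil
theta_dry = topp_theta(3.0)      # a dry soil
```

Because the mapping is monotonic over this range, calibrating the sensor in dielectric-constant units calibrates it in water-content units as well.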
Show Figures

Figure 1
<p>(<b>a</b>) Schematic and (<b>b</b>) photograph of the proposed sensor.</p>
Figure 2
<p>Time domain transmissiometry waveform analysis for (<b>a</b>) soil water content (dielectric constant) and (<b>b</b>) electrical conductivity.</p>
Figure 3
<p>(<b>a</b>) Relationship between dielectric constant (ε<sub>b</sub>) measured with a reference sensor and <span class="html-italic">t</span><sub>D</sub> measured with the new sensor, and (<b>b</b>) comparison between ε<sub>b</sub> measured with the reference sensor (ε<sub>b<span class="html-italic">, reference</span></sub>) and the new sensor (ε<sub>b<span class="html-italic">, new sensor</span></sub>). The different colored plots represent the different soil types, i.e., Toyoura sand, Gifu University experimental field soil (GU soil), and Andisol.</p>
Figure 4
<p>(<b>a</b>) Relationship between the soil matric potential (ψ) and the <span class="html-italic">C</span><sub>D</sub> measured using the reference and proposed sensors, respectively, and (<b>b</b>) comparison between the ψ measured with the reference sensor (ψ<span class="html-italic"><sub>reference</sub></span>) and the proposed sensor (ψ<span class="html-italic"><sub>new sensor</sub></span>). The different colored plots in panel (<b>a</b>) represent the Gifu University experimental field soil (GU soil) and the Andisol, and the solid lines indicate fitted models (Equation (4)) with the plots of either soil and with all plots. The different colored plots in panel (<b>b</b>) represent the GU soil and the Andisol with the sensor-specific (SS) or common (C) parameters of Equation (4).</p>
Figure 5
<p>(<b>a</b>) Relationship between the electrical conductivity (σ<sub>b</sub>) of potassium chloride (KCl) solution measured with the reference sensor and the <span class="html-italic">V</span><sub>D</sub> measured with the proposed sensor, and (<b>b</b>) comparison between the σ<sub>b</sub> of the Toyoura sand calculated with Equation (1) (σ<sub>b,<span class="html-italic">reference</span></sub>) and the σ<sub>b</sub> determined using the proposed sensor (σ<sub>b,<span class="html-italic">new sensor</span></sub>). The different colored plots represent different sensor numbers.</p>
Figure 6
<p>(<b>a</b>) The new sensor-measured dielectric constant (ε<sub>b</sub>), (<b>b</b>) digital values of the new sensor-measured capacitance (<span class="html-italic">C</span><sub>D</sub>), and (<b>c</b>) the new sensor-measured matric potential (ψ) as functions of the temperature. The colored plot indicates the different water content (θ) of the Andisol, i.e., 0.20 m<sup>3</sup> m<sup>−3</sup>, 0.40 m<sup>3</sup> m<sup>−3</sup>, and 0.60 m<sup>3</sup> m<sup>−3</sup>.</p>
Figure 7
<p>(<b>a</b>) Time series of soil properties measured with the new sensor in the Gifu University experimental field. The panels present (<b>a</b>) soil and air temperatures, (<b>b</b>) soil volumetric water content and precipitation, (<b>c</b>) soil matric potential, and (<b>d</b>) soil bulk electrical conductivity. The “LF” in panel (<b>d</b>) indicates liquid fertilizer application.</p>
Figure 8
<p>In situ water retention curves were obtained with the new sensors installed at depths of (<b>a</b>) 10 cm and (<b>b</b>) 20 cm. The different colored plots indicate the different periods: summer (from 1 July to 18 August 2021) and fall (from 12 October to 27 December 2021).</p>
13 pages, 1430 KiB  
Article
Spectroradiometer Calibration for Radiance Transfer Measurements
by Clemens Rammeloo and Andreas Baumgartner
Sensors 2023, 23(4), 2339; https://doi.org/10.3390/s23042339 - 20 Feb 2023
Cited by 7 | Viewed by 2193
Abstract
Optical remote sensing and Earth observation instruments rely on precise radiometric calibrations which are generally provided by the broadband emission from large-aperture integrating spheres. The link between the integrating sphere radiance and an SI-traceable radiance standard is made by spectroradiometer measurements. In this work, the calibration efforts of a Spectra Vista Corporation (SVC) HR-1024i spectroradiometer are presented to study how these enable radiance transfer measurements at the Calibration Home Base (CHB) for imaging spectrometers at the Remote Sensing Technology Institute (IMF) of the German Aerospace Center (DLR). The spectral and radiometric response calibrations of an SVC HR-1024i spectroradiometer are reported, as well as the measurements of non-linearity and its sensitivity to temperature changes and polarized light. This achieves radiance transfer measurements with the calibrated spectroradiometer with relative expanded uncertainties between 1% and 3% (k=2) over the wavelength range of 380 nm to 2500 nm, which are limited by the uncertainties of the applied radiance standard. Full article
(This article belongs to the Section Optical Sensors)
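The polarization-sensitivity measurement mentioned in the abstract (Figures 9 and 10) amounts to fitting Malus's law, S(φ) = a + b·cos²(φ − φ₀), to the signal versus polarizer angle. A linear least-squares version using the equivalent Fourier form is sketched below on synthetic data; the offset, modulation depth, and axis angle are illustrative, not measured values from the paper.

```python
import numpy as np

def polarization_sensitivity(angles_deg, signals):
    # fit S(phi) = a + b*cos^2(phi - phi0) via its exact Fourier form
    # S = c0 + c1*cos(2*phi) + c2*sin(2*phi) with linear least squares
    phi = np.radians(angles_deg)
    M = np.column_stack([np.ones_like(phi), np.cos(2 * phi), np.sin(2 * phi)])
    c0, c1, c2 = np.linalg.lstsq(M, signals, rcond=None)[0]
    amp = np.hypot(c1, c2)                     # = b / 2
    s_max, s_min = c0 + amp, c0 - amp
    phi0 = 0.5 * np.degrees(np.arctan2(c2, c1))
    return (s_max - s_min) / (s_max + s_min), phi0

# synthetic measurement: offset 95, modulation 20, axis at 30 degrees
angles = np.arange(0.0, 180.0, 15.0)
signals = 95.0 + 20.0 * np.cos(np.radians(angles - 30.0)) ** 2
sensitivity, axis_deg = polarization_sensitivity(angles, signals)
```

The returned sensitivity is the usual (Smax − Smin)/(Smax + Smin) figure, and φ₀ is the angle of maximum response with respect to the slit.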
Show Figures

Figure 1
<p>Radiometric response calibration setup of the spectroradiometer on the DLR radiance standard (RASTA). The calibrated radiance from a reflectance panel illuminated by an FEL lamp is monitored by five filter radiometers, but only one radiometer is shown in the sketch for clarity. The spectroradiometer is calibrated inside a temperature-controlled enclosure where the air temperature is maintained by a radiator and monitored by a Pt100 temperature sensor connected to a temperature-controller. The fiber optic bundle from the spectroradiometer is attached to an off-axis parabolic mirror that reduces the spectroradiometer’s field of view to approximately 4<math display="inline"><semantics> <msup> <mrow/> <mo>°</mo> </msup> </semantics></math>. The spectroradiometer’s field of view is aligned to the center of the RASTA reflectance panel with the off-axis parabolic mirror. This is aligned before the radiometric calibration by coupling the light from a diffused LED source into the fiber optic instead of the spectroradiometer.</p>
Figure 2
<p>Normalized spectral response functions of two adjacent channels in each of the spectroradiometer detector arrays with asymmetric Gaussian fits. The plotted channel <span class="html-italic">c</span> is number 190, 610 and 850 for the VNIR, SWIR-1, and SWIR-2 detector, respectively. The asymmetry of the SRF is most pronounced in the SWIR-2 channels.</p>
Figure 3
<p>Spectral calibration results from an asymmetric Gaussian spectral response fits for all spectroradiometer channels. (<b>top</b>) Difference between the center wavelength <math display="inline"><semantics> <msub> <mi>λ</mi> <mi>c</mi> </msub> </semantics></math> and the factory calibration from several years prior; (<b>middle</b>) full width at half maximum of the spectral response function; and (<b>bottom</b>) asymmetry parameter from the spectral response fits.</p>
Figure 4
<p>Temperature sensitivity measurements and linear fits of three channels in the VNIR detector array of the spectroradiometer. The spectroradiometer signals have been normalized to the response at a reference detector temperature of 32.3 °C. The labels indicate the center wavelength <math display="inline"><semantics> <msub> <mi>λ</mi> <mi>c</mi> </msub> </semantics></math> of the plotted channel.</p>
Figure 5
<p>Temperature sensitivity coefficients and expanded uncertainties of the spectroradiometer’s VNIR channels.</p>
Figure 6
<p>Spectroradiometer signals <math display="inline"><semantics> <msub> <mi>S</mi> <mi>c</mi> </msub> </semantics></math> of each detector array scaled by its integration time <math display="inline"><semantics> <msub> <mi>t</mi> <mi>int</mi> </msub> </semantics></math> and normalized to the reference signal <math display="inline"><semantics> <msub> <mi>S</mi> <mrow> <mi>c</mi> <mo>,</mo> <mi>ref</mi> </mrow> </msub> </semantics></math> at reference integration times <math display="inline"><semantics> <msub> <mi>t</mi> <mrow> <mi>int</mi> <mo>,</mo> <mi>ref</mi> </mrow> </msub> </semantics></math> = 80 ms, 40 ms, and 15 ms for the VNIR, SWIR-1, and SWIR-2 detectors, respectively.</p>
Figure 7
<p>Radiometric responsivity of the spectroradiometer from calibrations with either a 4<math display="inline"><semantics> <msup> <mrow/> <mo>°</mo> </msup> </semantics></math> FOV lens or the fiber bundle with an off-axis parabolic (OAP) mirror as applied in the radiance transfer measurements.</p>
Figure 8
<p>Contributions to the relative uncertainty in the radiometric calibration of the spectroradiometer. The DLR radiance standard (RASTA) has the most significant uncertainty over the spectroradiometer’s wavelength range.</p>
Figure 9
<p>Spectroradiometer response of three channels to linear-polarized light as a function of the polarization angle with respect to the spectroradiometer slit orientation. The polarization sensitive response follows Malus’s law, as shown by the fits with Equation (<a href="#FD6-sensors-23-02339" class="html-disp-formula">6</a>).</p>
Figure 10
<p>(<b>top</b>) Polarization sensitivity of the SVC HR-1024i spectroradiometer. (<b>bottom</b>) Angle of polarization with respect to the slit direction of the spectroradiometer where its signal is maximum.</p>
Figure 11
<p>Radiance transfer setup where the calibrated spectroradiometer measures the spectral radiance of a large-aperture integrating sphere. (<b>left</b>) Photograph of the integrating sphere aperture with the spectroradiometer and the fiber-optic bundle. (<b>right</b>) The field-of-view (FOV) of the spectroradiometer is aligned to the center of the back of the integrating sphere, such that it overlaps with the FOV of a device under test (DUT).</p>
Figure 12
<p>(<b>top</b>) Radiance from the integrating sphere at different lamp combinations as measured with the calibrated spectroradiometer. (<b>bottom</b>) Uncertainty contributions in the spectroradiometer measurements.</p>
29 pages, 19442 KiB  
Article
New Cognitive Deep-Learning CAPTCHA
by Nghia Dinh Trong, Thien Ho Huong and Vinh Truong Hoang
Sensors 2023, 23(4), 2338; https://doi.org/10.3390/s23042338 - 20 Feb 2023
Cited by 7 | Viewed by 4836
Abstract
CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart), or HIP (Human Interactive Proof), has long been utilized to prevent bots from manipulating web services. Over the years, various CAPTCHAs have been presented, primarily to enhance security and usability against new bots and cybercriminals carrying out destructive actions. Nevertheless, automated attacks supported by ML (Machine Learning), CNNs (Convolutional Neural Networks), and DNNs (Deep Neural Networks) have successfully broken all common conventional schemes, including text- and image-based CAPTCHAs. CNNs/DNNs have recently been shown to be extremely vulnerable to adversarial examples, which can consistently deceive neural networks by introducing noise that humans are incapable of detecting. In this study, the authors improve the security of CAPTCHA design by combining text-based, image-based, and cognitive CAPTCHA characteristics and applying adversarial examples and neural style transfer. Comprehensive usability and security assessments are performed to evaluate the efficacy of the improvement in CAPTCHA. The results show that the proposed CAPTCHA outperforms standard CAPTCHAs in terms of security while remaining usable. Our work makes two major contributions: first, we show that the combination of deep learning and cognition can significantly improve the security of image-based and text-based CAPTCHAs; and second, we suggest a promising direction for designing CAPTCHAs with the concept of the proposed CAPTCHA. Full article
(This article belongs to the Section Sensing and Imaging)
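The adversarial-example idea above can be illustrated with the classic Fast Gradient Sign Method (FGSM). The abstract does not specify which attack the authors adapt, so the function below is a generic sketch, and the random "gradient" is a stand-in for the true gradient of a solver network's loss with respect to the input image:

```python
import numpy as np

def fgsm_perturb(image, grad, epsilon=0.03):
    """FGSM sketch: shift each pixel by epsilon in the direction that
    increases the attacker network's loss, then clip back to [0, 1]."""
    adv = image + epsilon * np.sign(grad)
    return np.clip(adv, 0.0, 1.0)

# Toy demo with a synthetic "gradient"; a real pipeline would backpropagate
# the solver's loss to obtain dLoss/dImage.
rng = np.random.default_rng(0)
image = rng.random((32, 32))          # grayscale CAPTCHA tile in [0, 1]
grad = rng.standard_normal((32, 32))  # stand-in for dLoss/dImage

adv = fgsm_perturb(image, grad, epsilon=0.03)
print(np.abs(adv - image).max() <= 0.03 + 1e-9)  # True: perturbation bounded
```

Because the per-pixel shift is bounded by epsilon, the perturbation stays imperceptible to humans while still degrading an automated solver.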
Show Figures

Figure 1: zxCAPTCHA design.
Figure 2: Logic CAPTCHA architecture.
Figure 3: Overall network architecture.
Figure 4: Loss network VGG-16.
Figure 5: Style-transfer network.
Figure 6: Style-prediction network.
Figure 7: GA optimization steps.
Figure 8: Text-based CAPTCHA generation.
Figure 9: Grid-based CAPTCHA generation.
Figure 10: Cognitive-based types.
Figure 11: Security evaluation.
Figure 12: Usability evaluation.
Figure 13: Real example of zxCAPTCHA.
Figure 14: User distribution, (a) by gender, (b) by age, and (c) by education.
Figure 15: Relay attack steps.
13 pages, 23788 KiB  
Article
Deep-Learning-Based Context-Aware Multi-Level Information Fusion Systems for Indoor Mobile Robots Safe Navigation
by Yin Jia, Balakrishnan Ramalingam, Rajesh Elara Mohan, Zhenyuan Yang, Zimou Zeng and Prabakaran Veerajagadheswar
Sensors 2023, 23(4), 2337; https://doi.org/10.3390/s23042337 - 20 Feb 2023
Cited by 2 | Viewed by 2226
Abstract
Hazardous object detection (escalators, stairs, glass doors, etc.) and avoidance are critical functional safety modules for autonomous mobile cleaning robots. Conventional object detectors are less accurate at detecting low-feature hazardous objects and suffer from missed detections and a high false-classification ratio when the object is under occlusion. A missed detection or false classification of a hazardous object poses an operational safety issue for mobile robots. This work presents a deep-learning-based context-aware multi-level information fusion framework for autonomous mobile cleaning robots to detect and avoid hazardous objects with a higher confidence level, even if the object is under occlusion. First, an image-level contextual encoding module was proposed and incorporated into the Faster RCNN ResNet 50 object detector to improve the detection of low-featured and occluded hazardous objects in an indoor environment. Further, a safe-distance-estimation function was proposed to avoid hazardous objects. It computes the distance of the hazardous object from the robot's position and steers the robot into a safer zone using the detection results and object depth data. The proposed framework was trained on a custom image dataset using fine-tuning techniques and tested in real time with an in-house-developed mobile cleaning robot, BELUGA. The experimental results show that the proposed algorithm detected low-featured and occluded hazardous objects with a higher confidence level than the conventional object detector and scored an average detection accuracy of 88.71%. Full article
(This article belongs to the Special Issue Sensor Technology for Intelligent Control and Computer Visions)
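The safe-distance-estimation step described above can be sketched as follows. The abstract does not publish the exact formula, so the 1.5 m threshold, the use of the median depth inside a detection box, and the function names here are illustrative assumptions:

```python
import numpy as np

SAFE_DISTANCE_M = 1.5  # assumed safety threshold, not from the paper

def object_distance(depth_map, box):
    """Median depth (metres) inside a detection box (x1, y1, x2, y2).
    The median is robust to depth-sensor dropouts inside the box."""
    x1, y1, x2, y2 = box
    patch = depth_map[y1:y2, x1:x2]
    return float(np.median(patch))

def is_safe(depth_map, box, threshold=SAFE_DISTANCE_M):
    """True if the detected hazard is farther than the safety threshold."""
    return object_distance(depth_map, box) > threshold

# Toy depth map: an "escalator" 1.0 m away inside the detection box,
# against a background 4.0 m away.
depth = np.full((120, 160), 4.0)
depth[40:80, 60:100] = 1.0
box = (60, 40, 100, 80)
print(object_distance(depth, box))  # 1.0
print(is_safe(depth, box))          # False -> steer into a safer zone
```

A navigation loop would call `is_safe` for every detection each frame and trigger an avoidance manoeuvre whenever it returns False.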
Show Figures

Figure 1: Block diagram of the proposed system.
Figure 2: Context-aware DCNN-based object detection framework.
Figure 3: Experimental results of hazardous object detection.
Figure 4: Comparison of the context-aware object detection algorithm with conventional object detection schemes for the escalator and glass door: (a) YOLOv4; (b) Faster RCNN ResNet 50; (c) proposed system. From top to bottom, the occlusion conditions are low, medium, and high.
Figure 5: Experiment robot [3].
Figure 6: Environment: SUTD Mass Rapid Transit (MRT) station.
Figure 7: Environment: SUTD campus.
16 pages, 22548 KiB  
Article
Lensless Three-Dimensional Imaging under Photon-Starved Conditions
by Jae-Young Jang and Myungjin Cho
Sensors 2023, 23(4), 2336; https://doi.org/10.3390/s23042336 - 20 Feb 2023
Cited by 2 | Viewed by 1416
Abstract
In this paper, we propose lensless three-dimensional (3D) imaging under photon-starved conditions using a diffraction grating and a computational photon counting method. In conventional 3D imaging, with or without a lens, 3D visualization of objects under photon-starved conditions may be difficult due to a lack of photons. To solve this problem, our proposed method uses diffraction grating imaging as lensless 3D imaging and a computational photon counting method for 3D visualization of objects under these conditions. In addition, to improve the visual quality of 3D images under severely photon-starved conditions, a multiple-observation photon counting method with advanced statistical estimation, such as Bayesian estimation, is proposed. The multiple-observation photon counting method can estimate more accurate 3D images by remedying the random errors of photon occurrence, because it increases the number of photon samples. To prove the capability of our proposed method, we implement optical experiments and calculate the peak sidelobe ratio as the performance metric. Full article
(This article belongs to the Collection 3D Imaging and Sensing System)
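The computational photon counting model above can be sketched with a Poisson photon model: each pixel's count is Poisson-distributed with a mean proportional to its irradiance, a single observation is very noisy at a 1% photon ratio, and averaging N observations (the MLE) or adding a prior (a MAP/Bayesian estimate) reduces the error. The Gamma prior and its hyperparameters below are illustrative choices, not the paper's:

```python
import numpy as np

def mle(counts):
    # Maximum-likelihood rate estimate for Poisson data: the per-pixel mean
    # over the N observations.
    return counts.mean(axis=0)

def map_gamma(counts, alpha=2.0, beta=1.0):
    # MAP estimate under an (assumed) Gamma(alpha, beta) prior on the
    # Poisson rate: mode of the Gamma posterior.
    n = counts.shape[0]
    return (counts.sum(axis=0) + alpha - 1.0) / (n + beta)

rng = np.random.default_rng(1)
scene = rng.random((64, 64)) + 0.1              # normalized irradiance
lam = 0.01 * scene.size * scene / scene.sum()   # ~1% expected photon ratio
counts = rng.poisson(lam, size=(100,) + scene.shape)  # N = 100 observations

err_single = np.abs(counts[0] - lam).mean()
err_mle = np.abs(mle(counts) - lam).mean()
err_map = np.abs(map_gamma(counts) - lam).mean()
print(err_mle < err_single)  # True: averaging 100 frames remedies the noise
```

This mirrors the paper's finding qualitatively: multiple observations increase the photon samples per pixel and so suppress the random errors of photon occurrence.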
Show Figures

Figure 1: Geometric relations in diffraction grating imaging and examples of a diffraction image array (DIA). (a) Left: geometric relationship between a point object, the diffraction images (DIs), the diffraction grating, and the imaging lens; right: an example DIA. (b) Left: the spatial period of the DIA as a function of object depth; right: an example DIA.
Figure 2: Computational reconstruction through convolution of the DIA with δ-function arrays in diffraction grating imaging. (a) Reconstruction result when the spatial period of the δ-function array coincides with the spatial period at the object's depth; (b) reconstruction result when the two spatial periods do not match.
Figure 3: Physical photon counting detector.
Figure 4: Procedure of the computational photon counting model.
Figure 5: (a) Original image, (b) photon counting image by MLE, and (c) photon counting image by Bayesian estimation, where 157,361 photons are extracted from the original image (a).
Figure 6: Estimated images with different expected photon ratios (1%, 10%, and 50%). (a) Single-observation photon counting images, (b) N = 100 observation photon counting imaging by MLE, and (c) N = 100 observation photon counting imaging by Bayesian estimation.
Figure 7: Optical experiment setup for acquiring the DIA. (a) Configuration of the optical experiment; (b) the sizes of the objects used in the experiment and the distances between them; (c) diffraction image arrays (DIAs) and enlarged images of their 0th-order diffraction images.
Figure 8: EOA versus the distance between the diffraction grating and the object in this diffraction grating imaging system.
Figure 9: Diffraction images of HKNU objects by photon counting imaging with a 1% photon ratio: (a) single observation, (b) N = 100 observations with MLE, and (c) N = 100 observations with MAP.
Figure 10: Diffraction images of Men objects by photon counting imaging with a 1% photon ratio: (a) single observation, (b) N = 100 observations with MLE, and (c) N = 100 observations with MAP.
Figure 11: 3D images under normal illumination with various spatial periods at reconstruction depths from the original diffraction images of (a) HKNU objects and (b) Men objects.
Figure 12: 3D images of HKNU objects under photon-starved conditions with a 1% photon ratio and various spatial periods at reconstruction depths: (a) single-observation photon counting images, (b) N = 100 observation photon counting images by MLE, and (c) N = 100 observation photon counting images by MAP.
Figure 13: 3D images of Men objects under photon-starved conditions with a 1% photon ratio and various spatial periods at reconstruction depths: (a) single-observation photon counting images, (b) N = 100 observation photon counting images by MLE, and (c) N = 100 observation photon counting images by MAP.
Figure 14: Peak sidelobe ratio (PSR) results of HKNU objects over various spatial periods at reconstruction depths for the single-observation photon counting method, N = 100 observation MLE, and N = 100 observation MAP with a 1% photon ratio.
Figure 15: Peak sidelobe ratio (PSR) results of Men objects over various spatial periods at reconstruction depths for the single-observation photon counting method, N = 100 observation MLE, and N = 100 observation MAP with a 1% photon ratio.
16 pages, 6838 KiB  
Article
A Novel Method to Model Image Creation Based on Mammographic Sensors Performance Parameters: A Theoretical Study
by Nektarios Kalyvas, Anastasia Chamogeorgaki, Christos Michail, Aikaterini Skouroliakou, Panagiotis Liaparinos, Ioannis Valais, George Fountos and Ioannis Kandarakis
Sensors 2023, 23(4), 2335; https://doi.org/10.3390/s23042335 - 20 Feb 2023
Cited by 1 | Viewed by 1572
Abstract
Background: Mammographic digital imaging is based on X-ray sensors with solid image quality characteristics. These primarily include (a) a response curve that yields high contrast and wide image latitude, (b) a frequency response, given by the Modulation Transfer Function (MTF), that enables small-detail imaging, and (c) the Normalized Noise Power Spectrum (NNPS), which shows the extent of the noise effect on image clarity. Methods: In this work, a methodological approach is introduced and described for creating digital phantom images based on the measured image quality properties of the sensor. For this purpose, a mathematical phantom simulating breast tissue and lesions of blood, adipose, muscle, Ca, and Ca(50%)-P(50%) was created by considering the corresponding X-ray attenuation coefficients. The simulated irradiation of the phantom used four mammographic spectra and assumed exponential attenuation. Published data regarding the noise and blur of a commercial RadEye HR CMOS imaging sensor were used as input data for the resulting images. Results: The Ca and Ca(50%)-P(50%) lesions were visible under all exposure conditions. In addition, the W/Rh spectrum at 28 kVp provided more detailed images than the corresponding Mo/Mo spectrum. Conclusions: The presented methodology can act complementarily to image quality measurements, leading to an initial optimization of the X-ray exposure parameters per clinical condition. Full article
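The exponential attenuation assumed in the Methods can be sketched with the Beer-Lambert law for a monoenergetic beam: a lesion's contrast against the breast background depends only on the difference in linear attenuation coefficients times the lesion thickness. The coefficients below are placeholder values for illustration, not the paper's data:

```python
import numpy as np

def transmitted_fraction(mu, thickness_cm):
    """Beer-Lambert exponential attenuation: I/I0 = exp(-mu * t)."""
    return np.exp(-mu * thickness_cm)

# Illustrative linear attenuation coefficients (cm^-1) for a low-energy
# mammographic beam -- placeholder values, not the paper's.
mu_breast, mu_lesion = 0.80, 1.20
t_breast, t_lesion = 4.2, 0.5   # total phantom and lesion thicknesses (cm)

# Background ray: 4.2 cm of breast tissue only.
bg = transmitted_fraction(mu_breast, t_breast)
# Lesion ray: the lesion replaces part of the breast thickness.
ls = (transmitted_fraction(mu_breast, t_breast - t_lesion)
      * transmitted_fraction(mu_lesion, t_lesion))

# Contrast reduces to 1 - exp(-(mu_lesion - mu_breast) * t_lesion).
contrast = (bg - ls) / bg
print(round(contrast, 3))  # 0.181 with these placeholder values
```

This is why the 0.5 cm lesions in the figures are easier to see than the 0.1 cm ones: the exponent, and hence the contrast, scales with lesion thickness.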
Show Figures

Figure 1: (a) The locations of the different lesion types on the mathematical phantom [30]; (b) the linear X-ray attenuation coefficients of the lesions (Reference 30 is licensed under CC BY-NC-SA 4.0).
Figure 2: The mammographic X-ray spectra considered in this study.
Figure 3: A flowchart of the method described in Equations (1)-(6).
Figure 4: (a) PSF derived by rotating Equation (6); (b) published experimental MTF results and those theoretically calculated from the PSF of this work [27,30] (Reference 30 is licensed under CC BY-NC-SA 4.0).
Figure 5: A 4.2 cm phantom image for the Mo/Mo 5 mGy, 28 kVp spectrum with (a) 0.1 cm and (b) 0.5 cm lesion thickness.
Figure 6: A 4.2 cm phantom image for the Mo/Mo 3 mGy, 32 kVp spectrum with (a) 0.1 cm and (b) 0.5 cm lesion thickness.
Figure 7: A 4.2 cm phantom image for the W/Rh 5 mGy, 28 kVp spectrum with (a) 0.1 cm and (b) 0.5 cm lesion thickness.
Figure 8: A 4.2 cm phantom image for the W/Rh 5 mGy, 32 kVp spectrum with (a) 0.1 cm and (b) 0.5 cm lesion thickness.
Figure 9: A 6 cm phantom image for the Mo/Mo 5 mGy, 28 kVp spectrum with (a) 0.1 cm and (b) 0.5 cm lesion thickness.
Figure 10: A 6 cm phantom image for the Mo/Mo 3 mGy, 32 kVp spectrum with (a) 0.1 cm and (b) 0.5 cm lesion thickness.
Figure 11: A 6 cm phantom image for the W/Rh 5 mGy, 28 kVp spectrum with (a) 0.1 cm and (b) 0.5 cm lesion thickness.
Figure 12: A 6 cm phantom image for the W/Rh 5 mGy, 32 kVp spectrum with (a) 0.1 cm and (b) 0.5 cm lesion thickness.