Sensors, Volume 23, Issue 24 (December-2 2023) – 295 articles

Cover Story: A lightweight, compliant sensorised glove capable of detecting scratching with machine learning (ML), using data from flexible microtubular sensors and an inertial measurement unit (IMU), has been developed. The sensorised glove provides the user and clinicians with quantifiable information on scratching intensity, frequency, and duration as a proxy for classifying the itch severity caused by atopic dermatitis (AD). The paper describes the design of the sensorised glove, the training of the ML model to assay scratching objectively, and a pilot daytime clinical study to validate the device with patients. The glove detects 94.4% of scratching performed with the dominant hand on which it is worn.
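The scratch-detection task described in the cover story is, at its core, windowed classification of motion-sensor data. The sketch below is a minimal illustration of that pattern using scikit-learn, with synthetic accelerometer-like windows standing in for the glove's microtubular and IMU signals; the features, window length, and labels are all hypothetical, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def window_features(w: np.ndarray) -> np.ndarray:
    # Simple time-domain features per window: mean, std, peak-to-peak, RMS.
    return np.array([w.mean(), w.std(), np.ptp(w), np.sqrt((w ** 2).mean())])

# Synthetic stand-ins for 2 s sensor windows: scratching has higher motion energy.
labels = rng.integers(0, 2, size=400)            # 0 = calm, 1 = scratching
scales = np.where(labels == 1, 1.0, 0.2)
X = np.array([window_features(rng.normal(scale=s, size=100)) for s in scales])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```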
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; the PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
22 pages, 5889 KiB  
Article
Towards Minimizing the LiDAR Sim-to-Real Domain Shift: Object-Level Local Domain Adaptation for 3D Point Clouds of Autonomous Vehicles
by Sebastian Huch and Markus Lienkamp
Sensors 2023, 23(24), 9913; https://doi.org/10.3390/s23249913 - 18 Dec 2023
Cited by 2 | Viewed by 2366
Abstract
Perception algorithms for autonomous vehicles demand large, labeled datasets. Real-world data acquisition and annotation costs are high, making synthetic data from simulation a cost-effective option. However, training on one source domain and testing on a target domain can cause a domain shift attributed to local structure differences, resulting in a decrease in the model’s performance. We propose a novel domain adaptation approach to address this challenge and to minimize the domain shift between simulated and real-world LiDAR data. Our approach adapts 3D point clouds on the object level by learning the local characteristics of the target domain. A key feature involves downsampling to ensure domain invariance of the input data. The network comprises a state-of-the-art point completion network combined with a discriminator to guide training in an adversarial manner. We quantify the reduction in domain shift by training object detectors with the source, target, and adapted datasets. Our method successfully reduces the sim-to-real domain shift in a distribution-aligned dataset by almost 50%, from 8.63% to 4.36% 3D average precision. It is trained exclusively using target data, making it scalable and applicable to adapt point clouds from any source domain.
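The "downsampling to ensure domain invariance" mentioned above is realized with farthest point sampling (FPS), per the network description in Figure 2. Below is a minimal NumPy sketch of greedy FPS, shown as the general algorithm rather than the authors' implementation; the cloud size and sample count are arbitrary.

```python
import numpy as np

def farthest_point_sampling(points: np.ndarray, n_samples: int) -> np.ndarray:
    """Greedy FPS: repeatedly pick the point farthest from the set chosen so far."""
    selected = np.zeros(n_samples, dtype=int)   # index 0 is the arbitrary seed
    dist = np.full(points.shape[0], np.inf)     # distance to nearest selected point
    for i in range(1, n_samples):
        # Refresh each point's distance using the most recently selected point.
        d = np.linalg.norm(points - points[selected[i - 1]], axis=1)
        dist = np.minimum(dist, d)
        selected[i] = int(np.argmax(dist))      # farthest remaining point
    return points[selected]

# Example: downsample a random 2048-point "object" cloud to 512 points.
cloud = np.random.default_rng(0).normal(size=(2048, 3))
print(farthest_point_sampling(cloud, 512).shape)  # (512, 3)
```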
(This article belongs to the Special Issue Innovations with LiDAR Sensors and Applications)
Figures:
Figure 1: Overview of the object-based point cloud domain adaptation method: object point clouds O_{i,k}^S are extracted from a source scene point cloud X_i^S (here: simulated data, blue), adapted into target-style object point clouds O_{i,k}^{S,adapted} (here: real-world style, red), and placed back at their original positions to form the final output X_i^{S,adapted}.
Figure 2: Detailed structure of the domain adaptation network: during training, a point cloud O_{i,k}^T is downsampled by farthest point sampling (FPS) and reconstructed by the generator G, a point completion network, aided by a discriminator operating on patches of λ_patch points; during inference, G takes FPS-downsampled source point clouds O_{i,k}^S and adapts them locally into target-style outputs O_{i,k}^{S,adapted}.
Figure 3: 3D average precision (AP) at IoU 70% for PointRCNN trained with sim, sim-to-real, or real data and evaluated on real data (target); horizontal lines mark the mean AP over the five training runs per train-test pairing.
Figure 4: Single object point clouds of the sim, sim-to-real, and real datasets, with cropped detail views of the local structure; blue shades represent simulated and red shades real-world point clouds, with a 3D model and photograph of the object for reference.
Figure 5: Aggregated normalized object point clouds of the sim (blue), sim-to-real (blue), and real (red) datasets; each aggregate consists of 50 randomly selected individual point clouds.
Figure 6: t-SNE plot of the latent feature space of PointPillars trained on real, sim, or sim-to-real data; each point is a feature vector produced by inference on a single point cloud of the real test set.
Figure 7: 3D AP at IoU 70% for PointRCNN trained with sim, real, multiple sim-to-real variants, or sim-noise data, evaluated on real data (target) in close range [0.0 m, 33.3 m); the sim-to-real variants alter the downsampling factor δ from its default of 7 to 5 or 3, and include a variant without adversarial training (Sim-to-Real No-GAN).
17 pages, 14628 KiB  
Article
A Grating Interferometric Acoustic Sensor Based on a Flexible Polymer Diaphragm
by Linsen Xiong and Zhi-mei Qi
Sensors 2023, 23(24), 9912; https://doi.org/10.3390/s23249912 - 18 Dec 2023
Cited by 5 | Viewed by 1648
Abstract
This study presents a flexible-diaphragm acoustic sensor based on grating interferometry (GI), developed through design, fabrication, and experimental demonstration. A gold-coated polyethylene terephthalate diaphragm was used for the sensor prototype. The vibration of the diaphragm induces a change in GI cavity length, which is converted into an electrical signal by the photodetector. The experimental results show that the sensor prototype has a flat frequency response in the voice frequency band, and the minimum detectable sound pressure can reach 164.8 µPa/√Hz. The sensor prototype has potential applications in speech acquisition and the measurement of water content in oil. This study provides a reference for the design of high-performance optical interferometric acoustic sensors.
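For context on the parameter study summarized in Figure 2: in the membrane limit, where tensile pre-stress dominates bending stiffness, the first resonant frequency of a circular diaphragm follows the classical result below. This is the textbook relation offered as a hedged reference, not necessarily the authors' exact model.

```latex
f_1 \;=\; \frac{\alpha_{01}}{2\pi a}\,\sqrt{\frac{\sigma}{\rho}},
\qquad \alpha_{01} \approx 2.405
```

Here a is the diaphragm radius, σ the tensile pre-stress, and ρ the material density; in this idealized limit the thickness cancels, so any thickness dependence in the paper's simulations reflects effects beyond the pure-membrane model.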
(This article belongs to the Special Issue Acoustic and Ultrasonic Sensing Technology in Non-Destructive Testing)
Figures:
Figure 1: Schematic diagram of the grating interferometer (GI)-based acoustic sensor.
Figure 2: Theoretical analysis of the first resonant frequency and mechanical sensitivity for different diaphragm (a) thickness, (b) radius, (c) density, and (d) tensile pre-stress.
Figure 3: Sketch of the acoustic sensor including variables and coordinate system; the red dashed box indicates the modeled region.
Figure 4: Simulated deformation of the diaphragm at a sound pressure of 1 Pa at 1 kHz: (a) displacement distribution in the plane of the diaphragm; (b) displacement distribution along the radial direction (z-component).
Figure 5: Simulated mode shapes and corresponding modal frequencies of the diaphragm: (a) mode 1, (b) mode 2, (c) mode 6, (d) mode 17.
Figure 6: (a) Simulated frequency response curve of mechanical sensitivity; (b) simulated frequency responses of mechanical sensitivity at the diaphragm center for different materials.
Figure 7: Simulated frequency response of diaphragm (center point) mechanical sensitivity for different thicknesses of (a) the gold reflector and (b) the air gap.
Figure 8: (a) Optical interference curve of the GI-based acoustic sensor; (b) responses of the sensor at different operating points.
Figure 9: Schematic diagram of the fabrication process of the proposed flexible-diaphragm acoustic sensor chip.
Figure 10: (a) Fabrication process of the chromium grating on the glass substrate; (b) photograph of the grating after ion beam etching; (c) schematic of the combination of grating and spacer; (d) photograph of the "grating-spacer" structure.
Figure 11: (a) Photograph of the assembly of the PET membrane with the "grating-spacer" structure; (b) photograph of the sensor chip; (c) schematic of the experimental setup for testing the resonant frequency of the tensioned large-size membrane; (d) measured resonant frequency of the tensioned large-size membrane.
Figure 12: (a) Schematic of the packaged acoustic sensor; (b) photograph of the laser diode (LD) and photodetector (PD) used in this work; (c) measured wavelength range of the LD; (d) photograph of the packaged acoustic sensor.
Figure 13: Experimental setup for acoustic characterization.
Figure 14: Responses of the proposed sensor to acoustic signals at 250 Hz, 500 Hz, 1000 Hz, and 5000 Hz, with the corresponding frequency-domain results obtained by Fourier transform.
Figure 15: (a) Frequency response curve from 50 Hz to 6.4 kHz, compared with previous work (Ref. [13]); (b) frequency response test points over the voice frequency band.
35 pages, 6877 KiB  
Review
Recent Advancements in Graphene-Based Implantable Electrodes for Neural Recording/Stimulation
by Md Eshrat E. Alahi, Mubdiul Islam Rizu, Fahmida Wazed Tina, Zhaoling Huang, Anindya Nag and Nasrin Afsarimanesh
Sensors 2023, 23(24), 9911; https://doi.org/10.3390/s23249911 - 18 Dec 2023
Cited by 3 | Viewed by 6102
Abstract
Implantable electrodes represent a groundbreaking advancement in nervous system research, providing a pivotal tool for recording and stimulating human neural activity. This capability is integral for unraveling the intricacies of the nervous system’s functionality and for devising innovative treatments for various neurological disorders. Implantable electrodes offer distinct advantages over conventional methods of recording and stimulating neural activity. They deliver heightened precision, fewer associated side effects, and the ability to gather data from diverse neural sources. Crucially, the development of implantable electrodes necessitates key attributes: flexibility, stability, and high resolution. Graphene emerges as a highly promising material for fabricating such electrodes due to its exceptional properties. It boasts remarkable flexibility, ensuring seamless integration with the complex and contoured surfaces of neural tissues. Additionally, graphene exhibits low electrical resistance, enabling efficient transmission of neural signals. Its transparency further extends its utility, facilitating compatibility with various imaging techniques and optogenetics. This paper showcases noteworthy endeavors in utilizing graphene in its pure form and as composites to create and deploy implantable devices tailored for neural recordings and stimulation. It underscores the potential for significant advancements in this field. Furthermore, this paper delves into prospective avenues for refining existing graphene-based electrodes, enhancing their suitability for neural recording applications in in vitro and in vivo settings. These future steps promise to further revolutionize our capacity to understand and interact with the neural research landscape.
(This article belongs to the Special Issue Novel Field-Effect Transistor Gas/Chem/Bio Sensing)
Figures:
Figure 1: Extensive utilization of graphene-based materials in regenerative medicine and tissue engineering (reproduced with permission from [71]).
Figure 2: Graphene microelectrodes for in vitro recording of neural activity: (a) experimental arrangement with transparent graphene electrodes integrated with an inverted microscope; (b) fluorescence microscopy images of cultured neurons on graphene field-effect transistors (adapted from [58], copyright 2017, with permission from Frontiers).
Figure 3: Fabrication of the porous graphene electrode array: (a) laser pyrolysis patterning of the graphene; (b) establishing metal interconnects; (c) SU-8 encapsulation; (d) photograph of the finished 64-electrode array; (e) tilted SEM image of the 64-spot porous graphene array, with an inset of an individual spot; (f) impedance of all 64 electrodes at 1 kHz (reprinted with permission from [105]).
Figure 4: Manufacturing and visualizing LCGO brush electrodes: (a) electrodes of approximately 1 mm diameter bonded to polytetrafluoroethylene-insulated copper wires with a silver-bearing conductive epoxy; (b) protective Parylene C coating; (c) 250 mW laser ablation opening the electrode end to form a distinctive 'brush' electrode; (d) resulting amorphous electrode with exceptionally high surface irregularity and porosity (reprinted with permission from [106]).
Figure 5: Key fabrication steps of the clear micro-ECoG device: (a) initial metal patterning for traces and pads on a Parylene C-coated silicon wafer; (b) sequential stacking of four graphene monolayers; (c) precise graphene patterning to form electrode locations (reprinted with permission from [88]).
Figure 6: Multielectrode ERG recording with a soft, transparent graphene electrode array: (a) layered construction of the array; (b) optical transparency of the array over printed paper, with linearly arranged recording sites; (c) optical microscopy of graphene electrode sites and traces, including an insulated electrode; (d) stripped array over a dilated rabbit eye, with a schematic of the recording channel distribution from the temporal area to the nasal periphery (adapted from [117]).
Figure 7: In vivo cortical vasculature imaging with the CLEAR device: (a,c) bright-field images of the graphene electrode on the cerebral cortex beneath a cranial window; (b,d) corresponding fluorescence images; (e,f) cortical vasculature visible through the graphene electrode; (g) schematic of optical stimulation with 473 nm blue light through the transparent graphene MEA; (h) recording of the light-evoked neural signals (reproduced with permission from [88]).
Figure 8: Neuronal signals captured with a TGVH device: (a) optical images of a custom array of 35 patterned graphene electrodes (1 × 1 mm each) with a 2.9 mm² internal ground electrode on a Cr/Pt base, with a close-up of a single channel; (b) topographical AFM image of a two-layer graphene electrode; (c) the finalized TGVH device; (d,e) FE-SEM images of the vertically aligned carbon nanotube (VACNT) multielectrode array and a single VACNT electrode; (f) schematic of the TGVH device (reprinted with permission from [132]).
Figure 9: In vitro neural recording by graphene transistor arrays: (a) dense neuronal network cultured over a GMEA array; (b) time-series recording of spiking-bursting activity propagating through network channels [146]; (c) layout of 32 arrays on a GFET chip; (d) time-track recording of intrinsic neuronal bursting, with the average action potential (red) from 77 individual APs (grey) [151]; (e) 3D self-rolled biosensor array on a sacrificial layer (insets D and S: GFET drain and source); (f) 3D confocal image of the array (scale bar 50 µm); (g) field-potential recording with Ca2+ fluorescence intensity over time, with the average FP peak (red) from 100 peaks (gray) [154].
Figure 10: GFET devices for in vivo electrophysiological mapping: (a) SG-GFET array components; (b) synchronous local field potential (3–4 Hz) recorded from the cerebral cortex of WAG rats [155]; (c,d) crumpled GFET arrays before and after placement on the left cortical surface of a rat brain; (e) live monitoring of induced epilepsy in three phases, with the penicillin injection marked [162]; (f) SG-GFET array interfaced with the brain via a custom front-end amplifier; (g) recording of a cortical spreading depression (CSD) propagating front with 1–50 Hz and wide-band (0.001–50 Hz) activity and the corresponding spectrogram [159]; (h) SG-GFET array on the rat cortex; (i) SG-GFET prototype for in vivo biocompatibility assessment; (j) discrimination ratio from the novel object recognition test on various days after implantation [160].
Figure 11: (a) Equivalent circuit of the probe-neural tissue interface, with neurons acting as a voltage source (V_e); only the recording process is depicted, though an analogous stimulation circuit can be characterized; (b) an implantable neural device failure scenario and its equivalent circuit.
Figure 12: (a) Graphene electrode fabrication process; (b) electrocardiogram (ECG) of a zebrafish heart; graphene adhesion on the electrode was enhanced, a crucial factor for long-term implantation (reprinted with permission from [171]).
20 pages, 5650 KiB  
Article
Research on Pneumatic Control of a Pressurized Self-Elevating Mat for an Offshore Wind Power Installation Platform
by Junguo Cui, Qi Shi, Yunfei Lin, Haibin Shi, Simin Yuan and Wensheng Xiao
Sensors 2023, 23(24), 9910; https://doi.org/10.3390/s23249910 - 18 Dec 2023
Cited by 1 | Viewed by 1475
Abstract
Efficient deep-water offshore wind power installation platforms with a pressurized self-elevating mat are a new type of equipment used for installing offshore wind turbines. However, the unstable internal pressure of the pressurized self-elevating mat can cause serious harm to the platform. This paper studies the pneumatic control system of the self-elevating mat to improve the precision of its pressure control. Based on the structure of the pneumatic control system, a pneumatic model of the self-elevating mat is established, and a conventional PID controller and a fuzzy PID controller are designed. Simulink simulations show that the fuzzy PID controller achieves a shorter settling time and smaller overshoot, but its anti-interference ability is relatively weak. The membership functions and fuzzy rules of the fuzzy PID controller are therefore optimized using a neural network algorithm, and a fuzzy neural network PID controller based on BP neural network optimization is proposed. The simulation results show that the overshoot of the optimized controller is reduced by 9.71% and the settling time by 68.9% compared with the fuzzy PID controller. Finally, experiments verify that the fuzzy neural network PID controller has a faster response and smaller overshoot, which improves the pressure control accuracy and robustness of the self-elevating mat and provides a scientific basis for its engineering applications.
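As a rough illustration of the baseline being tuned: the fuzzy and neural-network layers described above adjust the PID gains online, while the underlying loop remains a conventional discrete PID. The sketch below runs such a PID on a toy first-order pressure plant; all gains and plant parameters are hypothetical, chosen only to make the loop converge.

```python
class PID:
    """Minimal discrete PID controller in positional form."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        e = setpoint - measurement
        self.integral += e * self.dt
        derivative = (e - self.prev_error) / self.dt
        self.prev_error = e
        return self.kp * e + self.ki * self.integral + self.kd * derivative

# Toy first-order pressure plant: dp/dt = (u - p) / tau (all values hypothetical).
pid = PID(kp=2.0, ki=0.8, kd=0.05, dt=0.01)
p, tau, dt = 0.0, 0.5, 0.01
for _ in range(1000):                       # simulate 10 s of closed-loop control
    u = pid.update(setpoint=1.0, measurement=p)
    p += dt * (u - p) / tau
print(f"pressure after 10 s: {p:.3f}")      # settles near the 1.0 setpoint
```

A fuzzy or neural-network supervisor would replace the fixed kp, ki, kd with values recomputed each step from the error e and its rate of change ec.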
(This article belongs to the Topic Advanced Energy Harvesting Technology)
Figures:
Figure 1: Structure diagram of the wind power installation platform.
Figure 2: Air supply flow chart of the air source device.
Figure 3: Schematic diagram of fuzzy PID control.
Figure 4: Membership functions of e and ec.
Figure 5: Simulation structure of the fuzzy PID controller.
Figure 6: Internal structure of the fuzzy controller.
Figure 7: Fuzzy PID simulation.
Figure 8: Fuzzy control parameter results.
Figure 9: Simulation analysis with interference introduced.
Figure 10: Schematic diagram of FNN-PID control.
Figure 11: Fuzzy neural network structure.
Figure 12: FNN-PID controller simulation structure.
Figure 13: FNN-PID controller.
Figure 14: Simulation results under interference.
Figure 15: Schematic diagram of the pneumatic experiment.
Figure 16: Experimental platform.
Figure 17: Experimental curve of pressure boost and pressure retention.
24 pages, 2204 KiB  
Article
Dynamic-Distance-Based Thresholding for UAV-Based Face Verification Algorithms
by Julio Diez-Tomillo, Jose Maria Alcaraz-Calero and Qi Wang
Sensors 2023, 23(24), 9909; https://doi.org/10.3390/s23249909 - 18 Dec 2023
Cited by 3 | Viewed by 1655
Abstract
Face verification, crucial for identity authentication and access control in our digital society, faces significant challenges when comparing images taken in diverse environments, which vary in terms of distance, angle, and lighting conditions. These disparities often lead to decreased accuracy due to significant resolution changes. This paper introduces an adaptive face verification solution tailored for diverse conditions, particularly focusing on Unmanned Aerial Vehicle (UAV)-based public safety applications. Our approach features an innovative adaptive verification threshold algorithm and an optimised operation pipeline, specifically designed to accommodate varying distances between the UAV and the human subject. The proposed solution is implemented based on a UAV platform and empirically compared with several state-of-the-art solutions. Empirical results have shown that an improvement of 15% in accuracy can be achieved.
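The core of the adaptive-threshold idea can be sketched compactly: compare embedding distances against a threshold selected by the current UAV-to-subject distance. The snippet below is a hedged illustration; the threshold values, distance bands, and 128-dimensional embeddings are hypothetical stand-ins, not the values learned in the paper.

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(emb_probe, emb_ref, subject_distance_m, thresholds):
    """Accept the pair if the embedding distance falls below the threshold
    assigned to the nearest configured UAV-to-subject distance band."""
    band = min(thresholds, key=lambda d: abs(d - subject_distance_m))
    return cosine_distance(emb_probe, emb_ref) < thresholds[band]

# Hypothetical per-distance thresholds; the paper derives its own per algorithm.
thresholds = {5: 0.40, 10: 0.48, 15: 0.55}
rng = np.random.default_rng(0)
probe, ref = rng.normal(size=128), rng.normal(size=128)    # stand-in embeddings
print(verify(probe, ref, subject_distance_m=12, thresholds=thresholds))
```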
(This article belongs to the Special Issue Advances in Intelligent Sensors and IoT Solutions)
Figures:
Figure 1: Simplified diagram of the face verification process.
Figure 2: Block diagram of the proposed face verification pipeline, composed of five stages: face detection, preprocessing, Siamese network, distance calculation, and decision making; the inputs are a video from a UAV and the face of the person to identify, and the output is the decision.
Figure 3: Schematic of the dataset recording distances.
Figure 4: Cropped faces at the eight recorded distances.
Figure 5: Scale of defined distances depending on the width of the cropped face.
Figure 6: Distribution of similarity indexes and accuracy by threshold at 5 m for ArcFace, SFace, Dlib, and VGG-Face, using cosine distance as the metric.
Figure 7: Distribution of similarity indexes and accuracy by threshold at 15 m for the four algorithms, using cosine distance as the metric.
Figure 8: Distribution of similarity indexes and accuracy by threshold at 5 m for the four algorithms, using Euclidean distance as the metric.
Figure 9: Distribution of similarity indexes and accuracy by threshold at 15 m for the four algorithms, using Euclidean distance as the metric.
Figure 10: Accuracy of the algorithms at fixed distances using the original and proposed thresholds, with cosine distance as the metric.
Figure 11: Accuracy of the algorithms at fixed distances using the original and proposed thresholds, with Euclidean distance as the metric.
Figure 12: Cumulative average of the inference time in milliseconds per frame for each algorithm.
14 pages, 3766 KiB  
Article
Identifying the Sweet Spot of Padel Rackets with a Robot
by Carlos Blanes, Antonio Correcher, Jaime Martínez-Turégano and Carlos Ricolfe-Viala
Sensors 2023, 23(24), 9908; https://doi.org/10.3390/s23249908 - 18 Dec 2023
Viewed by 1872
Abstract
Although the vibration of rackets and the location of the sweet spot are crucial for players when hitting the ball, manufacturers do not specify this behavior precisely. This article analyses padel rackets, provides a solution to determine the sweet spot position (SSP), quantifies its behavior, and determines the level of vibration transmitted along the racket handle. Previously proposed methods serve to locate the SSP without quantifying it. This article demonstrates the development of equipment capable of analyzing the vibration behavior of padel rackets. To do so, it employs a robot that moves along the surface of the padel racket, striking it along its central line. Accelerometers are placed on a movable cradle where the rackets are positioned and adjusted. A method for analyzing the accelerometer signals to quantify vibration severity is proposed. The SSP and the vibration behavior along the central line are determined and quantified. As a result of the study, 225 padel rackets are analyzed and compared. The SSP is independent of the racket's balance, weight, moment of inertia, and shape (teardrop, diamond, or round), and is not located at the same position as the center of percussion.
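The vibration-severity analysis rests on extracting the amplitude of the first sinusoid from each post-impact accelerometer signal via the FFT (see Figures 4 and 6). Below is a minimal sketch of that step on a synthetic decaying response; the sampling rate, mode frequency, and decay constant are invented for illustration, not taken from the paper.

```python
import numpy as np

fs = 5000                                   # sampling rate in Hz (hypothetical)
t = np.arange(0, 0.2, 1 / fs)
# Synthetic post-impact response: a decaying 140 Hz fundamental plus noise.
rng = np.random.default_rng(0)
signal = np.exp(-20 * t) * np.sin(2 * np.pi * 140 * t) + 0.05 * rng.normal(size=t.size)

windowed = signal * np.hanning(t.size)      # taper to reduce spectral leakage
spectrum = np.fft.rfft(windowed)
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
k = int(np.argmax(np.abs(spectrum[1:]))) + 1    # dominant bin, skipping DC
print(f"first-mode frequency ~ {freqs[k]:.0f} Hz, amplitude ~ {np.abs(spectrum[k]):.2f}")
```

Sweeping the impact point along the central line and locating the minimum of such an amplitude-based severity value is what identifies the SSP.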
Figures:
Figure 1: Robot, cradle motion, and devices.
Figure 2: Robot process flow and device communications for every hit.
Figure 3: Example of the accelerometer response when the padel racket is hit at different points.
Figure 4: Evolution of V, unfiltered and smoothed, and amplitude of the first sinusoid after the FFT for a round beechwood bar.
Figure 5: Accelerometer response in a round beechwood bar at 84, 98, and 124 mm.
Figure 6: Amplitude and phase of the first sinusoid for every impact on a round beechwood bar after the FFT.
Figure 7: Variation of the sweet spot position (SSP) and V_min value for the 225 padel rackets analyzed.
Figure 8: Variation of the SSP and balance value for the 104 padel rackets analyzed.
Figure 9: Variation of the SSP and weight value for the 104 padel rackets analyzed.
Figure 10: Variation of the SSP and moment of inertia around the X axis for the 104 padel rackets analyzed.
Figure 11: Variation of the SSP and padel racket shape for the 93 padel rackets analyzed.
Figure 12: Variation of the V value along the central line for the 225 padel rackets, with the standard deviation at every impact point.
Figure 13: Example of V values for six different padel rackets.
29 pages, 23702 KiB  
Article
UAV Photogrammetry for Estimating Stand Parameters of an Old Japanese Larch Plantation Using Different Filtering Methods at Two Flight Altitudes
by Jeyavanan Karthigesu, Toshiaki Owari, Satoshi Tsuyuki and Takuya Hiroshima
Sensors 2023, 23(24), 9907; https://doi.org/10.3390/s23249907 - 18 Dec 2023
Cited by 1 | Viewed by 3016
Abstract
Old plantations are iconic sites, and estimating stand parameters is crucial for valuation and management. This study aimed to estimate stand parameters of a 115-year-old Japanese larch (Larix kaempferi (Lamb.) Carrière) plantation at the University of Tokyo Hokkaido Forest (UTHF) in central Hokkaido, northern Japan, using unmanned aerial vehicle (UAV) photogrammetry. High-resolution RGB imagery was collected using a DJI Matrice 300 real-time kinematic (RTK) at altitudes of 80 and 120 m. Structure from motion (SfM) technology was applied to generate 3D point clouds and orthomosaics. We used different filtering methods, search radii, and window sizes for individual tree detection (ITD), and tree height (TH) and crown area (CA) were estimated from a canopy height model (CHM). Additionally, a freely available shiny R package (SRP) and manually digitalized CA were used. A multiple linear regression (MLR) model was used to estimate the diameter at breast height (DBH), stem volume (V), and carbon stock (CST). Higher accuracy was obtained for ITD (F-score: 0.8–0.87) and TH (R²: 0.76–0.77; RMSE: 1.45–1.55 m) than for other stand parameters. Overall, the flying altitude of the UAV and the selected filtering methods influenced the success of stand parameter estimation in old-aged plantations, with the UAV at 80 m generating more accurate results for ITD, CA, and DBH, while the UAV at 120 m produced higher accuracy for TH, V, and CST with Gaussian and mean filtering.
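The CHM used for tree detection is the pixel-wise difference between the photogrammetric digital surface model (DSM) and the LiDAR digital terrain model (DTM), as Figure 4 illustrates. A minimal sketch of that raster operation, with toy 2×2 arrays standing in for the co-registered grids:

```python
import numpy as np

def canopy_height_model(dsm: np.ndarray, dtm: np.ndarray) -> np.ndarray:
    """CHM = DSM - DTM, clipped at zero so bare-ground pixels read as 0 m."""
    return np.clip(dsm - dtm, 0.0, None)

# Toy 2x2 rasters standing in for the co-registered UAV DSM and LiDAR DTM.
dsm = np.array([[332.0, 318.5], [305.2, 331.9]])   # surface elevations (m)
dtm = np.array([[301.0, 302.1], [303.0, 304.5]])   # terrain elevations (m)
print(canopy_height_model(dsm, dtm))               # per-pixel canopy heights
```

In practice both rasters are georeferenced GeoTIFFs resampled to a common grid before the subtraction; treetops are then found as smoothed local maxima of the CHM.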
Figures:
Figure 1: Study area map (coordinate system: JGD2000 Japan-19 zone XII/GSIGEO 2000 geoid) of the larch plantation (43°13′ N, 142°23′ E) in forest management sub-compartment 87B of the University of Tokyo Hokkaido Forest (UTHF); red dots with values mark the spatial positions of numbered larch trees, and green, pink, and yellow areas represent compartment 87, sub-compartment 87B, and the larch stand area, respectively.
Figure 2: UAV photogrammetry in the field: (a) DJI M300 RTK UAV in the study area; (b) UAV flight plan at 80 m altitude with the base map; (c) UAV flight plan at 120 m altitude with the base map. Colors show the elevation of the study area above mean sea level; the values on the map give the side lengths of the pentagonal flight area, and the actual flight path had a 30 m buffer around the flight area. On-screen symbols correspond to the save, delete-selected-waypoint, start-flight, and clear-waypoints controls, and the two labeled locations are the cherry blossom park of the University of Tokyo arboretum and the arboretum.
Figure 3: Workflow of the study: field data collection; UAV photogrammetry; canopy height model (CHM) generation; feature extraction (treetops, TH, and CA); manual CA delineation; DBH, V, and CST estimation; and accuracy testing.
Figure 4: CHM derivation from the UAV DSM and LiDAR DTM at the two flight altitudes: (a) LiDAR DTM; (b) UAV DSM at 80 m; (c) UAV CHM at 80 m; (d) UAV DSM at 120 m; (e) UAV CHM at 120 m.
Figure 5: Scatter plots of field TH versus UAV TH with LM, SM, and SG filtering at 80 m and at 120 m; black lines mark the zero intercept of the trend line, and red lines mark the regression fit of the data.
Figure 6: Delineation of individual tree crowns: (a,b) manual crown delineation at 80 m and 120 m with the spatial positions of field trees (yellow and orange outlines: larch and other trees; red dots: field tree locations); (c,d) SRP crown delineation at 80 m and 120 m with SRP-detected UAV treetops (black outlines and white dots).
Figure 7: Scatter plots of manual CA versus SRP CA: (a) SRP mean CA at 80 m; (b) SG CA at 80 m; (c) SM CA at 120 m; (d) SG CA at 120 m; black lines mark the zero intercept and red lines the regression fit.
Figure 8: Scatter plots of predicted versus field-estimated values: (a) Gaussian DBH at 80 m; (b) SG volume at 120 m; (c) SG CST at 120 m; black lines mark the zero intercept and red lines the regression fit.
Figure A1: Field tree locations and stand area in the orthomosaics derived at (a) 80 m and (b) 120 m; red dots indicate tree locations with tree numbers in yellow, and two orange dots mark the ground control points GCP 1 and GCP 2.
Figure A2: UAV treetops versus field treetops for part of the stand, illustrating TP (correctly detected trees), FP (incorrectly detected trees), and FN (incorrectly undetected trees) for (a) LM treetops and (b) SRP treetops; TN is not applicable, denoting places where no tree exists and the model finds none.
17 pages, 4385 KiB  
Article
A Glove-Wearing Detection Algorithm Based on Improved YOLOv8
by Shichu Li, Huiping Huang, Xiangyin Meng, Mushuai Wang, Yang Li and Lei Xie
Sensors 2023, 23(24), 9906; https://doi.org/10.3390/s23249906 - 18 Dec 2023
Cited by 18 | Viewed by 5027
Abstract
Wearing gloves during machinery operation in workshops is essential for preventing accidental injuries, such as mechanical damage and burns. Ensuring that workers are wearing gloves is a key strategy for accident prevention. Consequently, this study proposes a glove detection algorithm called YOLOv8-AFPN-M-C2f based on YOLOv8, offering swifter detection speeds, lower computational demands, and enhanced accuracy for workshop scenarios. This research innovates by substituting the head of YOLOv8 with the AFPN-M-C2f network, amplifying the pathways for feature vector propagation, and mitigating semantic discrepancies between non-adjacent feature layers. Additionally, the introduction of a superficial feature layer enriches surface feature information, augmenting the model’s sensitivity to smaller objects. To assess the performance of the YOLOv8-AFPN-M-C2f model, this study conducted multiple experiments using a factory glove detection dataset compiled for this study. The results indicate that the enhanced YOLOv8 model surpasses other network models. Compared to the baseline YOLOv8 model, the refined version shows a 2.6% increase in mAP@50%, a 63.8% rise in FPS, and a 13% reduction in the number of parameters. This research contributes an effective solution for the detection of glove adherence.
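For readers wanting a starting point: the stock YOLOv8 models are available through the Ultralytics Python API, and a baseline glove detector can be fine-tuned as sketched below. The dataset file and test image names are hypothetical, and the paper's AFPN-M-C2f head is a custom modification that is not part of the stock package.

```python
from ultralytics import YOLO

# Fine-tune the stock YOLOv8 nano model as a glove-detection baseline.
# "gloves.yaml" is a hypothetical dataset config listing train/val image paths
# and class names (e.g., gloved_hand, bare_hand); "workshop.jpg" is likewise
# a hypothetical test image.
model = YOLO("yolov8n.pt")
model.train(data="gloves.yaml", epochs=100, imgsz=640)

metrics = model.val()                                # mAP@50 on the val split
results = model.predict("workshop.jpg", conf=0.25)   # run detection on one image
```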
(This article belongs to the Section Sensing and Imaging)
Figures:
Figure 1: Structure of the YOLOv8 module.
Figure 2: The model training procedure.
Figure 3: Structure of YOLOv8-AFPN-M-C2f.
Figure 4: Structure of AFPN-M-C2f.
Figure 5: The feature vector adjustment module: (a) downsampling model; (b) upsampling model.
Figure 6: ASFF model: (a) ASFF_n; (b) ASFF_2.
Figure 7: Original architecture of the feature extraction module in AFPN: (a) Blocks module; (b) BasicBlock module.
Figure 8: Architecture of the C2f module: (a) structure of C2f; (b) Bottleneck in C2f.
Figure 9: Feature fusion network architectures: (a) AFPN-M extracts features from the {P2, P3, P4, P5} layers of the main network; (b) traditional FPN, exemplified by AFPN, extracts features from the {P3, P4, P5} layers.
Figure 10: Unprocessed images from the glove dataset.
Figure 11: Predicted outcomes: (a) prediction results of YOLOv8-AFPN-M-C2f; (b) outcomes from the YOLOv8 baseline.
Figure 12: Comparison of FPS and parameter counts across different models.
Figure 13: Scatter plot of parameters versus FPS for different modules replacing Blocks.
21 pages, 9339 KiB  
Article
Influence of Tools and Cutting Strategy on Milling Conditions and Quality of Horizontal Thin-Wall Structures of Titanium Alloy Ti6Al4V
by Szymon Kurpiel, Bartosz Cudok, Krzysztof Zagórski, Jacek Cieślik, Krzysztof Skrzypkowski and Witold Brostow
Sensors 2023, 23(24), 9905; https://doi.org/10.3390/s23249905 - 18 Dec 2023
Cited by 1 | Viewed by 1535
Abstract
Titanium and nickel alloys are used in the creation of components exposed to harsh and variable operating conditions. Such components include thin-walled structures with a variety of shapes created using milling. The driving factors behind the use of thin-walled components include the desire to reduce the weight of the structures and to reduce the costs, which can sometimes be achieved by reducing the machining time. This situation necessitates, among other things, the use of new machining methods and/or better machining parameters. The available tools, geometrically designed for different strategies, allow working with similar and improved cutting parameters (increased cutting speeds or higher feed rates) without jeopardizing the necessary quality of finished products. This approach causes undesirable phenomena, such as the appearance of vibrations during machining, which adversely affect the surface quality, including the surface roughness. A search is underway for cutting parameters that will minimize the vibration while meeting the quality requirements. Therefore, researching and evaluating the impact of cutting conditions are justified and common in scientific studies. In our work, we have focused on the quality characteristics of horizontal thin-walled structures made from the Ti6Al4V titanium alloy in the milling process. Our experiments were conducted under controlled cutting conditions at a constant material removal rate (2.03 cm³/min), while an increased cut layer thickness was used and tested for use in finish machining. We used three different cutting tools: one for general-purpose machining, one for high-performance machining, and one for high-speed machining. Two strategies were adopted: adaptive face milling and adaptive cylindrical milling. The output quantities included the acceleration vibration amplitudes and selected surface topography parameters of waviness (Wa and Wz) and roughness (Ra and Rz). The lowest values of the pertinent quantities were found for a sample machined with a high-performance tool using adaptive face milling. Surfaces typical of chatter vibrations were seen for all samples.
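The reported waviness and roughness parameters (Wa, Wz, Ra, Rz) are standard profile statistics. The sketch below computes Ra and a simplified segment-averaged Rz from a synthetic milled profile; the profile itself and the simplified Rz definition (mean peak-to-valley over five segments, ignoring the ISO filtering chain) are illustrative assumptions.

```python
import numpy as np

def roughness_params(profile: np.ndarray, n_segments: int = 5):
    """Ra: mean absolute deviation from the mean line.
    Rz (simplified): mean peak-to-valley height over n_segments segments."""
    z = profile - profile.mean()
    ra = float(np.mean(np.abs(z)))
    rz = float(np.mean([s.max() - s.min() for s in np.array_split(z, n_segments)]))
    return ra, rz

# Synthetic milled profile in micrometres: waviness + roughness + noise.
x = np.linspace(0.0, 4.0, 4000)
profile = (0.8 * np.sin(2 * np.pi * x)          # long-wave waviness component
           + 0.15 * np.sin(2 * np.pi * 40 * x)  # short-wave roughness component
           + 0.05 * np.random.default_rng(0).normal(size=x.size))
ra, rz = roughness_params(profile)
print(f"Ra = {ra:.3f} um, Rz = {rz:.3f} um")
```

In standards-conformant practice the waviness parameters (Wa, Wz) come from the long-wave component and the roughness parameters (Ra, Rz) from the short-wave component after profile filtering.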
(This article belongs to the Special Issue Advanced Sensing and Evaluating Technology in Nondestructive Testing)
Show Figures

Figure 1: A 3D model of a thin-walled sample in the horizontal orientation.
Figure 2: Monolithic milling cutters used to prepare samples: (a) Tool 1: JS554100E2R050.0Z4-SIRA; (b) Tool 2: JS754100E2C.0Z4A-HXT; (c) Tool 3: JH730100D2R100.0Z7-HXT.
Figure 3: Milling paths: (a) adaptive face milling (large radial depth); (b) adaptive cylindrical milling (large depth of cut).
Figure 4: Experimental setup used during the milling of samples with horizontal thin walls: 1: sample, 2: tool, 3: adaptor, 4: vibration sensor.
Figure 5: The areas of measurement of surface topography parameters.
Figure 6: Vibration spectrogram of the acceleration signal for samples: (a) T1; (b) T2; (c) T3.
Figure 7: Vibration spectrogram of the acceleration signal for samples: (a) T4; (b) T5; (c) T6.
Figure 8: Values of (a) Wa in areas A, B, and C; (b) Wz in areas A, B, and C; (c) Ra in areas A, B, and C; (d) Rz in areas A, B, and C; (e) Wa in areas D, B, and E; (f) Wz in areas D, B, and E; (g) Ra in areas D, B, and E; (h) Rz in areas D, B, and E.
Figure 9: Results of acceleration vibration statistics for different samples.
Figure 10: Statistical results for the processing of different samples: (a) waviness Wa; (b) waviness Wz; (c) roughness Ra; (d) roughness Rz.
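An editorial aside: the chatter assessment above rests on time-frequency analysis of the accelerometer signal (Figures 6 and 7). A minimal Python sketch of such a vibration spectrogram, in which the sampling rate and the synthetic signal are placeholder assumptions standing in for the measured acceleration, might look like this:

# Illustrative only: short-time Fourier analysis of a milling vibration signal.
import numpy as np
from scipy.signal import spectrogram

fs = 25_000                                            # assumed sampling rate [Hz]
t = np.arange(0, 2.0, 1 / fs)
accel = np.random.default_rng(0).normal(size=t.size)   # stand-in for measured acceleration

# Chatter shows up as persistent frequency bands in the time-frequency plane.
f, tt, Sxx = spectrogram(accel, fs=fs, nperseg=2048, noverlap=1024)
Sxx_db = 10 * np.log10(Sxx + 1e-12)                    # power in dB for plotting
print(f.shape, tt.shape, Sxx_db.shape)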
12 pages, 9890 KiB  
Communication
Enhancing Short Track Speed Skating Performance through Improved DDQN Tactical Decision Model
by Yuanbo Yang, Feimo Li and Hongxing Chang
Sensors 2023, 23(24), 9904; https://doi.org/10.3390/s23249904 - 18 Dec 2023
Cited by 1 | Viewed by 2374
Abstract
This paper studies a tactical decision-making model for short track speed skating based on deep reinforcement learning, so as to improve the competitive performance of short track speed skaters. Short track speed skating, a traditional discipline in the Winter Olympics since its establishment in 1988, has consistently garnered attention. As artificial intelligence continues to advance, the utilization of deep learning methods to enhance athletes' tactical decision-making capabilities has become increasingly prevalent. Traditional tactical decision techniques often rely on the experience and knowledge of coaches, and on video analysis methods that require considerable time and effort. Consequently, this study proposes a scientific simulation environment for short track speed skating that accurately simulates the physical attributes of the venue, the physiological fitness of the athletes, and the rules of the competition. The Double Deep Q-Network (DDQN) model is enhanced and utilized, with improvements to the reward function and a distinct description of four tactics. This enables agents to learn optimal tactical decisions in various competitive states within the simulation environment. Experimental results demonstrate that this approach effectively enhances the competition performance and physiological fitness allocation of short track speed skaters. Full article
(This article belongs to the Section Sensor Networks)
Show Figures

Figure 1: The construction framework of the simulation environment.
Figure 2: Schematic diagram of the six stages.
Figure 3: The structure of the Double DQN used in this article.
Figure 4: Visualized trajectories of real short track speed skating competitions.
Figure 5: The reward chart and residual physiological fitness chart.
Figure 6: The skating trajectory of the agent and athletes in the simulation competition.
Figure 7: 3D simulation of short track speed skating competition data.
Figure 8: Comparison of the real crossing time between 16 groups of competition athletes and the corresponding agent.
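The key step in the Double DQN used above is decoupling action selection (online network) from action evaluation (target network) when forming the bootstrap target. A minimal sketch of that computation, with mocked network outputs in place of the paper's model and reward shaping:

# Sketch of the Double DQN target; arrays stand in for real network outputs.
import numpy as np

def double_dqn_targets(rewards, dones, q_online_next, q_target_next, gamma=0.99):
    """rewards, dones: (batch,); q_*_next: (batch, n_actions)."""
    best_actions = np.argmax(q_online_next, axis=1)                    # select with online net
    evaluated = q_target_next[np.arange(len(rewards)), best_actions]   # evaluate with target net
    return rewards + gamma * (1.0 - dones) * evaluated

batch = 4
rng = np.random.default_rng(1)
y = double_dqn_targets(rng.normal(size=batch), np.zeros(batch),
                       rng.normal(size=(batch, 3)), rng.normal(size=(batch, 3)))
print(y)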
23 pages, 5004 KiB  
Article
Cyber-WISE: A Cyber-Physical Deep Wireless Indoor Positioning System and Digital Twin Approach
by Muhammed Zahid Karakusak, Hasan Kivrak, Simon Watson and Mehmet Kemal Ozdemir
Sensors 2023, 23(24), 9903; https://doi.org/10.3390/s23249903 - 18 Dec 2023
Cited by 3 | Viewed by 2813
Abstract
In recent decades, there have been significant research efforts focusing on wireless indoor localization systems, with fingerprinting techniques based on received signal strength leading the way. The majority of the suggested approaches require challenging and laborious Wi-Fi site surveys to construct a radio map, which is then utilized to match radio signatures with particular locations. In this paper, a novel next-generation cyber-physical wireless indoor positioning system is presented that addresses the data collection challenges of fingerprinting techniques. The proposed approach not only facilitates an interactive digital representation that fosters informed decision-making through a digital twin interface, but also ensures adaptability to new scenarios, scalability, and suitability for large environments and evolving conditions during the construction of the radio map. Additionally, it reduces the labor cost and the burden of data collection while helping to increase the efficiency of fingerprint-based positioning methods through accurate ground-truth data collection. It is also convenient for remote environments, improving human safety in locations where access is limited or hazardous, and it addresses issues related to radio map obsolescence. The feasibility of the cyber-physical system design is successfully verified and evaluated with real-world experiments in which a ground robot autonomously obtains a radio map in real time in a challenging environment through an informed decision process. With the proposed setup, the results demonstrate the success of RSSI-based indoor positioning using deep learning models, including an MLP, LSTM Model 1, and LSTM Model 2, achieving an average localization error of 2.16 m in individual areas. Specifically, LSTM Model 2 achieves average localization errors as low as 1.55 m and 1.97 m, with 83.33% and 81.05% of the errors within 2 m, for individual and combined areas, respectively. These outcomes demonstrate that the proposed cyber-physical wireless indoor positioning approach, based on dynamic Wi-Fi RSS surveying through human feedback using autonomous mobile robots, effectively leverages the precision of deep learning models, resulting in localization performance comparable to the literature. Furthermore, they highlight its suitability for deployment in real-world scenarios and its practical applicability. Full article
(This article belongs to the Special Issue Machine Learning for IoT Applications and Digital Twins II)
Show Figures

Figure 1: A UML use-case diagram of the proposed system requirements.
Figure 2: Diagrammatic visual conceptual model and components of the cyber-physical wireless indoor positioning system, along with corresponding assets at the bottom level.
Figure 3: Elements of the physical twin.
Figure 4: RAICo1 physical real-world environment and corresponding 2D map.
Figure 5: The DT environment together with the user interface.
Figure 6: Flowcharts of interactive Wi-Fi site survey steps: (a) 1st stage: generating the map of the environment; (b) 2nd stage: generating RSS data at designated waypoints on the generated map autonomously.
Figure 7: Examples of interactive reference point generation for Wi-Fi site surveys, presented in various shapes and designs. (a) A triangle shape with three markers, one inner loop, and one reference point between two vertices. (b) A rectangular shape with four markers, two inner loops, and two reference points between two vertices. (c) A pentagon shape with five markers, three inner loops, and three reference points between two vertices. (d) A hexagon shape with six markers, four inner loops, and four reference points between two vertices.
Figure 8: Overview of RSS-based WLAN positioning.
Figure 9: Wi-Fi communication infrastructure.
Figure 10: Proposed cyber-physical system design implementation and data flow diagram.
Figure 11: Map of the experimental areas and designated measurement points, with white arrows indicating the position and orientation of reference points during both the training phases (a–c) and the testing phases (d–f).
Figure 12: Positioning performance of the algorithms used in the different mission areas.
Figure 13: Comparison of the individual positioning errors in meters obtained from various test points across different mission areas (a–c), comparing three models: MLP, LSTM Model 1, and LSTM Model 2.
Figure 14: Positioning RMSE heat map in meters for each positioning algorithm used; lower RMSE values are shown in blue, higher values in red.
Figure 15: CDFs of individual positioning errors in all areas for all models.
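The headline numbers in the abstract (average localization error and the fraction of errors within 2 m) follow directly from predicted and ground-truth positions. A small illustration with synthetic data, purely for reference:

# Sketch (our illustration, not the authors' code) of the two reported metrics.
import numpy as np

def localization_metrics(pred_xy, true_xy, threshold_m=2.0):
    errors = np.linalg.norm(pred_xy - true_xy, axis=1)   # Euclidean error per test point
    return errors.mean(), (errors <= threshold_m).mean()

rng = np.random.default_rng(0)
true_xy = rng.uniform(0, 10, size=(100, 2))               # synthetic ground truth [m]
pred_xy = true_xy + rng.normal(scale=1.2, size=(100, 2))  # synthetic predictions
mean_err, frac_2m = localization_metrics(pred_xy, true_xy)
print(f"mean error = {mean_err:.2f} m, within 2 m = {100 * frac_2m:.1f}%")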
18 pages, 4261 KiB  
Article
High-Precision Corrosion Detection via SH1 Guided Wave Based on Full Waveform Inversion
by Jiawei Wen, Can Jiang and Hao Chen
Sensors 2023, 23(24), 9902; https://doi.org/10.3390/s23249902 - 18 Dec 2023
Cited by 2 | Viewed by 1423
Abstract
Corrosion detection in industrial settings is crucial for safe and efficient operations. Due to its high imaging resolution, guided-wave full-waveform inversion tomography has significant potential for corrosion detection in metal plates. Limited by the long wavelengths of the A0 and S0 mode waves, however, this method exhibits inadequate detection resolution for early-stage shallow and small corrosion defects. Based on the relatively short wavelength of the SH1 mode wave, we propose a high-precision corrosion detection method via the SH1 guided wave using a full-waveform inversion algorithm. By conducting finite element simulations of ultrasonic guided waves in aluminum plates with varying corrosion defects, a comparison was made to assess the detection precision across the A0, S0, and SH1 modes. The comparison showed that, whether for regular or irregular defects, the SH1 mode wave always exhibited higher imaging accuracy than the A0 and S0 mode waves for shallow and small-sized defects. Corresponding experiments were conducted on an aluminum plate with simple or complex defects. The experimental results reconfirmed that the full-waveform inversion method using the SH1 guided wave can effectively reconstruct the shape and size of small and shallow corrosion defects within aluminum plates. Full article
(This article belongs to the Special Issue Ultrasound Imaging and Sensing for Nondestructive Testing)
Show Figures

Figure 1: Flow diagram of the GWT algorithm based on FWI.
Figure 2: Schematic of the geometric model of an aluminum plate with corrosion damage. There were a total of 120 transducers arranged in a square array on the upper surface.
Figure 3: The dispersion curves for an aluminum plate: (a) phase velocity; (b) group velocity.
Figure 4: Reconstructed thickness maps using the A0, S0, and SH1 mode waves for a circular defect (r = 45 mm). (a) True thickness map; the white dashed line indicates the location for extracting the profile thickness distributions in Figure 5g. (b) Thickness map reconstructed by the A0 mode wave. (c) Thickness map reconstructed by the S0 mode wave. (d) Thickness map reconstructed by the SH1 mode wave.
Figure 5: Reconstructed thickness distributions using the A0, S0, and SH1 mode waves for an axisymmetric round defect with radii ranging from 15 mm to 50 mm. The black line represents the true thickness distribution of the profile at the y = 0 mm axis, while the red, green, and blue lines correspond to the inversion results using the SH1, S0, and A0 mode waves, respectively. (a) r = 15 mm. (b) r = 20 mm. (c) r = 25 mm. (d) r = 30 mm. (e) r = 35 mm. (f) r = 40 mm. (g) r = 45 mm. (h) r = 50 mm.
Figure 6: Reconstructed thickness maps using the A0, S0, and SH1 mode waves for the irregular defect. (a) True thickness map; the white dashed line indicates the location for extracting the profile thickness distributions. (b) Thickness map reconstructed by the A0 mode. (c) Thickness map reconstructed by the S0 mode. (d) Thickness map reconstructed by the SH1 mode. (e) Thickness distribution along the white horizontal dashed line in (a); the black line represents the true thickness distribution of the profile, while the red, green, and blue lines correspond to the inversion results obtained using the SH1, S0, and A0 modes, respectively. (f) Thickness distribution along the white vertical dashed line in (a), with the same notation for the inversion results as in (e).
Figure 7: Experimental configuration and aluminum plate.
Figure 8: Excitation signal with a center frequency of 250 kHz: (a) the waveform in the time domain; (b) the corresponding normalized amplitude spectrum.
Figure 9: The waveform acquired from a transmitter–receiver pair in the experiment. The black line denotes the full-time trace, the red line denotes the Tukey window based on t1 and t2, and the blue line denotes the windowed SH1 signal.
Figure 10: Reconstructed thickness maps using the SH1 mode wave for the regular defect. (a) True thickness map; the white dashed line indicates the location for extracting the profile thickness distributions. (b) Thickness map reconstructed by the SH1 mode; the red dashed line represents the outer contour of the actual model, while the green dots denote the transducer locations. (c) Thickness distribution along the white horizontal dashed line in (a); the black line represents the true thickness distribution of the profile, while the red line corresponds to the reconstructed result obtained using the SH1 mode.
Figure 11: Reconstructed thickness maps using the SH1 mode for an irregular defect. (a) True thickness map; the white dashed line indicates the location for extracting the profile thickness distributions. (b) Thickness map reconstructed by the SH1 mode; the red dashed line represents the outer contour of the actual model, while the green dots indicate the transducer placement locations. (c) Thickness distribution along the white horizontal dashed line in (a); the black line represents the true thickness distribution of the profile, while the red line corresponds to the inversion result obtained using the SH1 mode. (d) Thickness distribution along the white vertical dashed line in (a), with the same notation for the inversion results as in (c).
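Figure 9 shows the SH1 arrival being isolated with a Tukey window between times t1 and t2 before inversion. A minimal sketch of that windowing step, in which the sampling rate, the toy trace, and the window bounds are all placeholder assumptions:

# Illustrative windowing of an SH1 wave packet with a Tukey taper.
import numpy as np
from scipy.signal.windows import tukey

fs = 5_000_000                      # assumed sampling rate [Hz]
t = np.arange(0, 200e-6, 1 / fs)    # 200 us trace
trace = np.sin(2 * np.pi * 250e3 * t) * np.exp(-((t - 80e-6) / 20e-6) ** 2)

t1, t2 = 60e-6, 100e-6              # assumed window bounds around the SH1 packet
i1, i2 = np.searchsorted(t, [t1, t2])
window = np.zeros_like(trace)
window[i1:i2] = tukey(i2 - i1, alpha=0.5)   # tapered edges suppress truncation artifacts
sh1_only = trace * window
print(sh1_only[i1:i1 + 5])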
15 pages, 13225 KiB  
Article
Application of p and n-Type Silicon Nanowires as Human Respiratory Sensing Device
by Elham Fakhri, Muhammad Taha Sultan, Andrei Manolescu, Snorri Ingvarsson and Halldor Gudfinnur Svavarsson
Sensors 2023, 23(24), 9901; https://doi.org/10.3390/s23249901 - 18 Dec 2023
Cited by 6 | Viewed by 1745
Abstract
Accurate and fast breath monitoring is of great importance for various healthcare applications, for example, medical diagnosis, studying sleep apnea, and early detection of physiological disorders. Devices meant for such applications tend to be uncomfortable for the subject (patient) and expensive. Therefore, there is a need for a cost-effective, lightweight, small, and non-invasive device whose presence does not interfere with the observed signals. This paper reports on the fabrication of a highly sensitive human respiratory sensor based on silicon nanowires (SiNWs) fabricated by the top-down method of metal-assisted chemical etching (MACE). Among other important factors, reducing the final cost of the sensor is of paramount importance. One of the factors that increases the final price of such sensors is the use of gold (Au) electrodes. Herein, we investigate the sensor's response using aluminum (Al) electrodes as a cost-effective alternative, considering that the electrode's work function is crucial in electronic device design, impacting device electronic properties and electron transport efficiency at the electrode–semiconductor interface. Therefore, a comparison is made between SiNWs breath sensors made from both p-type and n-type silicon to investigate the effect of the dopant and electrode type on the SiNWs respiratory sensing functionality. A distinct directional variation was observed in the samples' response with Au and Al electrodes. Finally, a qualitative study revealed that the electrical resistance across the SiNWs is more sensitive to breath than to dry air pressure. No definitive research demonstrating the mechanism behind these effects exists, which prompted our study of the underlying process. Full article
(This article belongs to the Special Issue Nanomaterials for Sensor Applications)
Show Figures

Figure 1: Schematic of the fabrication process of SiNWs using the MACE method.
Figure 2: Cross-sectional and top-view SEM micrographs of n-type SiNWs obtained by MACE. The blue-colored scale bar is 20 µm.
Figure 3: Schematic of the measurement setup for breath monitoring; the inset shows an example of long-cycle breathing in NB mode on the Au/p-SiNWs/Au sample after aging it for 6 months.
Figure 4: I–V characteristics of the four types of fabricated samples (p-type and n-type SiNWs with Al and Au electrodes).
Figure 5: Breath sensing test of p-type SiNWs with Au electrodes.
Figure 6: Breath sensing test of n-type SiNWs with Au electrodes.
Figure 7: Breath sensing test of p-type SiNWs with Al electrodes.
Figure 8: Breath sensing test of n-type SiNWs with Al electrodes.
Figure 9: Sensitivity of SiNWs samples; error bars show the range of 10 individual measurements.
Figure 10: Comparison between a commercial breath sensor (Nox A1s™) and a SiNWs breath sensor.
Figure 11: Humidity level changes, measured by a commercial humidity sensor.
Figure 12: Resistance changes on a p-type SiNWs sample with Au electrodes upon exposure to breath, N₂ gas, and compressed air.
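For resistive breath sensors of this kind, the response is commonly quantified as a relative resistance change; the paper's exact sensitivity definition is not reproduced here. A toy illustration of that conventional definition, with an assumed baseline resistance and a mocked breathing trace:

# Illustration only: relative response S = (R - R_baseline) / R_baseline.
import numpy as np

def relative_response(resistance, baseline):
    return (resistance - baseline) / baseline

baseline = 1.0e6                                   # assumed baseline resistance [ohm]
t = np.linspace(0, 30, 3000)                       # 30 s trace
# Mock resistance rising on each exhalation (~15 breaths per minute).
resistance = baseline * (1 + 0.15 * np.clip(np.sin(2 * np.pi * 0.25 * t), 0, None))
s = relative_response(resistance, baseline)
print(f"peak response: {100 * s.max():.1f}% at ~{t[s.argmax()]:.1f} s")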
18 pages, 4891 KiB  
Article
Particle Tracking and Micromixing Performance Characterization with a Mobile Device
by Edisson A. Naula Duchi, Héctor Andrés Betancourt Cervantes, Christian Rodrigo Yañez Espinosa, Ciro A. Rodríguez, Luis E. Garza-Castañon and J. Israel Martínez López
Sensors 2023, 23(24), 9900; https://doi.org/10.3390/s23249900 - 18 Dec 2023
Cited by 1 | Viewed by 1618
Abstract
Strategies to stir and mix reagents in microfluidic devices have evolved concomitantly with advancements in manufacturing techniques and sensing. While there is a large array of reported designs to combine and homogenize liquids, most of the characterization has focused on setups with two inlets and one outlet. While this configuration is helpful for directly evaluating the effects of features and parameters on the degree of mixing, it does not portray the conditions of experiments that require more than two substances to be combined in sequence. In this work, we present a mixing characterization methodology based on particle tracking as an alternative to the most common approach of measuring homogeneity using the standard deviation of pixel intensities from a grayscale image. The proposed algorithm is implemented in a free and open-source mobile application (MIQUOD) for Android devices, numerically tested in COMSOL Multiphysics, and experimentally tested on a two-dimensional split-and-recombine micromixer and a three-dimensional micromixer with sinusoidal grooves, for different Reynolds numbers and geometrical features, with samples of fluids seeded with red, blue, and green microparticles. The application uses concentration field data and particle track data to evaluate up to eleven performance metrics. Furthermore, with the insights from the experimental and numerical data, a mixing index for particles (mp) is proposed to characterize mixing performance in scenarios with multiple input reagents. Full article
(This article belongs to the Special Issue Optical Biosensors and Applications)
Show Figures

Graphical abstract
Figure 1: MIQUOD process schematic for processing the concentration data type.
Figure 2: MIQUOD process schematic for processing the particle-tracking data type.
Figure 3: Phase portraits of the particle simulation in COMSOL at (a) the inlet and (b) the outlet of the 3D mixer.
Figure 4: Three-inlet microdevices for phosphate concentration monitoring. (a) CAD design of the micromixer. (b) Manufactured microdevice compared with a dime. (c,d) Features of the microdevices.
Figure 5: MIQUOD application flow scheme for Android devices.
Figure 6: Concentration simulations. (a) ASAR micromixer with different stages and ∆. (b) ASAR micromixer with 5 stages. (c) Sinusoidal mixer with different diameters and grooves downstream. (d) Sinusoidal mixer performance example.
Figure 7: Pre-processing steps performed on an image: (a) generation of a mask according to the color; (b) bitwise operation with the selection target; (c) gray filtering for simplification of calculations; (d) basic mathematical morphology operations (erosion and dilation).
Figure 8: Testing with dyes and particles for the device. (a) Dye test in the device. (b) Original image and the target selection image with the particle detection feature for red particles (white dots) at the intersection of the inlets.
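The abstract contrasts particle tracking with the common homogeneity measure based on the standard deviation of grayscale pixel intensities. One widely used normalization of that measure (our choice for illustration, not necessarily the authors') is M = 1 − σ/σ0, with σ0 taken from a fully segregated reference:

# Sketch of an intensity-based mixing index; data are synthetic.
import numpy as np

def mixing_index(gray, sigma_unmixed):
    sigma = gray.std()
    return 1.0 - sigma / sigma_unmixed   # 0 = fully segregated, 1 = fully mixed

rng = np.random.default_rng(0)
unmixed = np.concatenate([np.zeros(5000), np.ones(5000)])   # two segregated streams
mixed = rng.uniform(0.4, 0.6, size=10000)                   # nearly homogeneous outlet
print(mixing_index(mixed, unmixed.std()))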
12 pages, 2062 KiB  
Article
Comparison of Fluidic and Non-Fluidic Surface Plasmon Resonance Biosensor Variants for Angular and Intensity Modulation Measurements
by Piotr Mrozek, Lukasz Oldak and Ewa Gorodkiewicz
Sensors 2023, 23(24), 9899; https://doi.org/10.3390/s23249899 - 18 Dec 2023
Viewed by 1208
Abstract
Fluidic and non-fluidic surface plasmon resonance measurements were performed for the same type of sensing layer and using the same mouse IgG antibody and anti-mouse IgG antibody biomolecular system. A comparison of the thicknesses of the anti-mouse IgG antibody layers bound to the ligand at increasing analyte concentrations, ranging from 0.0 μg mL−1 to 5.0 μg mL−1, showed that the thickness of the bound anti-mouse antibody layers in the fluidic variant was approximately 1.5–3 times larger than in the non-fluidic variant. The greater thicknesses of the deposited layers were also reflected in the larger increment of the resonant angle in the fluidic variant compared to the non-fluidic variant over the considered range of analyte concentrations. The choice between fluidic and non-fluidic surface plasmon resonance biosensors may be guided by the available analyte volume and the intended modulation technique. When working with a limited analyte volume, non-fluidic biosensors with intensity modulation are more advantageous. For larger analyte quantities, fluidic biosensors with angular modulation are recommended, primarily due to their slightly higher sensitivity in this measurement mode. Full article
(This article belongs to the Special Issue Surface Plasmon Resonance-Based Biosensor)
Show Figures

Figure 1: Schematic diagram of (a) the non-fluidic SPR biosensor and (b) the fluidic SPR biosensor.
Figure 2: Calibration curve of the mouse IgG antibody–anti-mouse IgG antibody biomolecular system for the non-fluidic sensor; resonant angle for an analyte concentration of 0.0 μg mL−1: θ0_n-f = 34.0°. Error bars are the standard deviation (SD) calculated from 3 measurement repetitions.
Figure 3: Calibration curve of the mouse IgG antibody–anti-mouse IgG antibody biomolecular system for the fluidic sensor; resonant angle for an analyte concentration of 0.0 μg mL−1: θ0_f = 74.5°. Error bars are the standard deviation (SD) calculated from 3 measurement repetitions.
Figure 4: SPR model curves fitted to experimental results for the non-fluidic sensor: circles, reflectance R measurement results for the Cr-Ag-Au chip; solid line, curve of the Cr-Ag-Au chip; 1a, 2a, 3a, 4a, curves corresponding to the parameters and measurement conditions included in Table 1.
Figure 5: SPR model curves fitted to experimental results for the fluidic sensor: 1b, 2b, 3b, 4b, curves corresponding to the parameters and measurement conditions included in Table 2.
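Calibration curves like those in Figures 2 and 3 relate the resonant-angle shift to the analyte concentration. Purely as a hedged sketch (the authors' fitting model is not stated in the abstract), a Langmuir-type binding isotherm can be fitted with SciPy; all numbers below are synthetic:

# Hypothetical fit of dTheta = dTheta_max * c / (K + c) to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, dtheta_max, K):
    return dtheta_max * c / (K + c)

conc = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 5.0])         # analyte [ug/mL]
dtheta = np.array([0.0, 0.21, 0.35, 0.52, 0.61, 0.70])  # angle shift [deg], synthetic

(dtheta_max, K), _ = curve_fit(langmuir, conc, dtheta, p0=(1.0, 1.0))
print(f"saturation shift = {dtheta_max:.2f} deg, K = {K:.2f} ug/mL")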
17 pages, 6263 KiB  
Article
Application of Machine Learning for Calibrating Gas Sensors for Methane Emissions Monitoring
by Ballard Andrews, Aditi Chakrabarti, Mathieu Dauphin and Andrew Speck
Sensors 2023, 23(24), 9898; https://doi.org/10.3390/s23249898 - 18 Dec 2023
Cited by 7 | Viewed by 2903
Abstract
Methane leaks are a significant component of greenhouse gas emissions and a global problem for the oil and gas industry. Emissions occur from a wide variety of sites with no discernible patterns, requiring methodologies to frequently monitor these releases throughout the entire production chain. To cost-effectively monitor widely dispersed well pads, we developed a methane point instrument to be deployed at facilities and connected to a cloud-based interpretation platform that provides real-time continuous monitoring in all weather conditions. The methane sensor is calibrated with the machine learning method of Gaussian process regression, and the results are compared with artificial neural networks. The machine learning approach incorporates environmental effects into the sensor response and achieves the accuracies required for methane emissions monitoring with a small number of parameters. The sensors achieve an accuracy of 1 part per million (ppm) of methane and can detect leaks at rates of less than 0.6 kg/h. Full article
Show Figures

Figure 1: (a) Typical system layout showing four methane point instruments (1–4) along the boundary of an O&G facility. Each sensor continuously reports methane concentrations and meteorological data to a cloud gateway, after which algorithms running in the cloud interpret these time-series concentrations and meteorological data to determine if a leak is occurring and, if so, establish its anticipated leak rate and location. The inset shows the concentration measured at each sensor location. (b) The methane point instrument consists of the methane sensor (MOx) and temperature and humidity sensors mounted inside a housing with filters to prevent accumulation of dust, water, and snow; an anemometer atop the pole; a wrap-around solar panel; and the battery and electronics inside its main body.
Figure 2: During a calibration run, the methane concentration (in parts per million, ppm) is ramped for each relative humidity (RH) and temperature (T) combination. The right axis shows the methane readings from the optical reference sensor.
Figure 3: (a) MOx resonant resistance circuit; the impedance analyzer is connected across VC (+) and VC (−). (b) Real (Z1) and imaginary (Z2) parts of the impedance for two different methane concentrations; red dots are the measured frequencies and blue lines are curve fits (Equation (2)).
Figure 4: 3D plot of sensor resistivity (R) vs. absolute humidity (AH) and measured methane concentration (ppm) of the reference methane gas analyzer.
Figure 5: (a–c) GPR model with a squared exponential kernel trained on an exemplary ppm-versus-R curve (fixed T and RH), with added noise decreasing from left to right. The dotted blue line is the analytic model (Equation (9)). The GPR fit (green line) passes through the mean at each observational data point. The gray-shaded areas show the 95% confidence interval of the variance of the model predictions.
Figure 6: Illustration of a shallow neural network with three inputs in the first layer (0), x1, x2, and x3; four nodes in the hidden layer (1); and one node in the output layer (2).
Figure 7: Predicted vs. measured methane concentrations (ppm) for training (blue) and test (red) datasets for one- and four-sensor GPR models with an isotropic exponential kernel. Table insets show the training and test MAE for 10 and 50 ppm.
Figure 8: Predicted vs. measured methane concentration (ppm) for training (blue) and test (red) datasets, with the green line showing the one-to-one relation, for one- and four-sensor ANN models with two inputs, two hidden layers, and 20 nodes, RELU activation. Table insets show the training and test MAE for 10 and 50 ppm.
Figure 9: MAE of training (blue) and test (orange) sets at 10 ppm concentration with predictors R, T, and RH and response ppm for one sensor. See Table 1 for an explanation of abbreviations.
Figure 10: MAE of training (blue) and test (orange) sets at 10 ppm concentration with predictors R, T, and RH and response ppm for four sensors. See Table 1 for an explanation of abbreviations.
Figure 11: (a) Overlay of the response of a reference optical analyzer (blue line) with the MOx sensor response calibrated with the GPR model (red dots) and the ANN model (green dots) from a controlled leak test. (b) Correlation plots for the reference analyzer and model calibrations for GPR (top) and ANN (bottom).
Figure 12: Response of eight sensors over a 3-month period. Grey-shaded regions show the methane releases indicated by arrows. One or more sensors detected each release at the tens-of-ppm level, determined by the prevailing wind direction among other factors.
Figure 13: Interpreted results from 20 h of 9.3 kg/h releases showing the estimated source location (green) compared to the actual location (blue), along with 95% confidence limits shown as the ellipse. The red region is the constrained source-location area based on wind directions where methane concentrations above the background are detected by a given sensor, as defined by the intersection of the orange constraints.
Figure 14: The 6 L spherical metal canister used for background gas collection is placed above the sensor unit to sample the background gases at the test site. Note that the background gas analysis was performed with our first generation of point instruments in the field, as reflected by the form factor of the unit in the picture.
Figure 15: Sensitivity of the calibration model to offsets in (a) the temperature readings, (b) the relative humidity readings, and (c) both. Apart from the edges of the calibration model, errors of +2 °C in temperature and +2.0% in RH cause ppm errors of less than 1 ppm.
Figure A1: MAE of training (blue) and test (orange) sets at 50 ppm concentration with predictors R and AH and response ppm for four sensors. Kernels for GPR models as labeled; the ANN models had 1–3 hidden layers, each with 10 nodes per layer. See Table 1 for an explanation of abbreviations.
Figure A2: MAE of training (blue) and test (orange) sets at 200 ppm concentration with predictors R and AH and response ppm for four sensors. Kernels for GPR models as labeled; ANN models labeled with 1–3 hidden layers, 10 nodes per layer. See Table 1 for an explanation of abbreviations.
Figure A3: MAE of training (blue) and test (orange) sets at 500 ppm concentration with predictors R and AH and response ppm for four sensors. Kernels for GPR models as labeled; ANN models labeled with 1–3 hidden layers, 10 nodes per layer. See Table 1 for an explanation of abbreviations.
Figure A4: MAE of training (blue) and test (orange) sets at 1200 ppm concentration with predictors R and AH and response ppm for four sensors. Kernels for GPR models as labeled; ANN models labeled with 1–3 hidden layers, 10 nodes per layer. See Table 1 for an explanation of abbreviations.
Figure A5: MAE of training (blue) and test (orange) sets at 2300 ppm concentration with predictors R and AH and response ppm for four sensors. Kernels for GPR models as labeled; ANN models labeled with 1–3 hidden layers, 10 nodes per layer. See Table 1 for an explanation of abbreviations.
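The calibration idea, mapping sensor resistance plus ambient conditions to a methane concentration with Gaussian process regression, can be sketched with scikit-learn. The squared-exponential (RBF) kernel mirrors the kernel family discussed in the paper, but the synthetic data, length scales, and toy ground-truth function below are placeholders:

# Minimal GPR calibration sketch; all data here are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
R = rng.uniform(1.0, 10.0, 200)          # MOx resistance (arbitrary units)
T = rng.uniform(-10, 40, 200)            # temperature [degC]
RH = rng.uniform(10, 90, 200)            # relative humidity [%]
ppm = 100.0 / R + 0.05 * RH + rng.normal(scale=0.5, size=200)  # toy ground truth

X = np.column_stack([R, T, RH])
kernel = RBF(length_scale=[1.0, 10.0, 10.0]) + WhiteKernel()   # anisotropic RBF + noise
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, ppm)
pred, std = gpr.predict(X[:5], return_std=True)                # mean and uncertainty
print(np.round(pred, 1), np.round(std, 2))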
21 pages, 11103 KiB  
Article
A PAD-Based Unmanned Aerial Vehicle Route Planning Scheme for Remote Sensing in Huge Regions
by Tianyi Shao, Yuxiang Li, Weixin Gao, Jiayuan Lin and Feng Lin
Sensors 2023, 23(24), 9897; https://doi.org/10.3390/s23249897 - 18 Dec 2023
Cited by 1 | Viewed by 1269
Abstract
Unmanned aerial vehicles (UAVs) have been employed extensively for remote-sensing missions. However, due to their energy limitations, UAVs have a restricted flight operating time and spatial coverage, which makes remote sensing challenging over huge regions that exceed UAV flight endurance and range. A PAD is an autonomous wireless charging station that can significantly extend the flying time of UAVs by recharging them during a mission. In this work, we introduce PADs to facilitate UAV-based remote sensing over a huge region, and then explore the UAV route planning problem once PADs have been predeployed throughout the region. A route planning scheme, named PAD-based remote sensing (PBRS), is proposed to solve the problem. The PBRS scheme first plans the UAV's round-trip routes based on the locations of the PADs and divides the whole target region into multiple PAD-based subregions. Between adjacent subregions, the UAV flight subroute is planned by determining piggyback points to minimize the total time for remote sensing. We demonstrate the effectiveness of the proposed scheme by conducting several sets of simulation experiments based on the digital orthophoto model of Hutou Village in Beibei District, Chongqing, China. The results show that, through effective planning, the PBRS scheme achieves excellent performance in three metrics (remote sensing duration, number of trips to charging stations, and data-storage rate) in UAV remote-sensing missions over huge regions with predeployed PADs. Full article
(This article belongs to the Section Vehicular Sensing)
Show Figures

Figure 1: The drawbacks of flying a single UAV over a huge region and the three main ways to address them: (a) the initial remote sensing setup with a single UAV and a lone base station; (b) the multi-flight approach with a single UAV; (c) multi-UAV cooperation for executing remote sensing missions; (d) a single UAV with increased battery capacity for executing remote sensing missions.
Figure 2: (a) The initial remote-sensing region. (b) The remote-sensing region after the introduction of PADs.
Figure 3: UAV turning schematic: (a) the UAV traveling from point i via point j to point l; (b) the UAV traveling from point i via point j to point k.
Figure 4: Three flight routes for UAVs in the PBRS scheme.
Figure 5: Complete graph (b) constructed based on (a); the purple lines are the edges newly added for G′.
Figure 6: Digital orthophoto model of Hutou Village in Beibei District, Chongqing, China.
Figure 7: Simulation results when varying the number of target points.
Figure 8: Simulation results when varying the size of the region.
Figure 9: Simulation results for the energy capacity of the UAV.
Figure 10: Simulation results for the storage capacity of the UAV.
Figure 11: Route results: (a) the route of the TBRS scheme; (b) the route of the PBRS scheme.
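The PBRS scheme plans round-trip routes from PAD locations and piggyback points, as detailed in the paper. As a much simpler point of reference, a nearest-neighbor ordering of target points within one subregion, the kind of baseline tour construction such planners are typically compared against, can be sketched as follows:

# Purely illustrative baseline, not the PBRS algorithm itself.
import numpy as np

def nearest_neighbor_route(points, start):
    """points: (n, 2) array; start: (2,) PAD location. Returns visiting order."""
    remaining = list(range(len(points)))
    route, pos = [], np.asarray(start, dtype=float)
    while remaining:
        dists = [np.linalg.norm(points[i] - pos) for i in remaining]
        nxt = remaining.pop(int(np.argmin(dists)))   # always fly to the closest target
        route.append(nxt)
        pos = points[nxt]
    return route

rng = np.random.default_rng(0)
targets = rng.uniform(0, 1000, size=(8, 2))          # target points in one subregion [m]
print(nearest_neighbor_route(targets, start=(0.0, 0.0)))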
18 pages, 805 KiB  
Article
Joint Task Offloading and Resource Allocation for Intelligent Reflecting Surface-Aided Integrated Sensing and Communication Systems Using Deep Reinforcement Learning Algorithm
by Liu Yang, Yifei Wei and Xiaojun Wang
Sensors 2023, 23(24), 9896; https://doi.org/10.3390/s23249896 - 18 Dec 2023
Viewed by 1966
Abstract
This paper investigates an intelligent reflecting surface (IRS)-aided integrated sensing and communication (ISAC) framework to cope with spectrum scarcity and poor wireless environments. The main goal of the proposed framework is to optimize the overall performance of the system, including sensing, communication, and computational offloading. We aim to achieve a trade-off between system performance and overhead by optimizing spectrum and computing resource allocation. On the one hand, the joint design of the transmit beamforming and phase shift matrices enhances the radar sensing quality and increases the communication data rate. On the other hand, task offloading and computation resource allocation optimize energy consumption and delay. Due to the coupled, high-dimensional optimization variables, the optimization problem is non-convex and NP-hard. Meanwhile, given the dynamic wireless channel conditions, we formulate the optimization design as a Markov decision process. To tackle this complex optimization problem, we propose two innovative deep reinforcement learning (DRL)-based schemes. Specifically, a deep deterministic policy gradient (DDPG) method is proposed to address the continuous high-dimensional action space, and prioritized experience replay is adopted to speed up the convergence process. Then, a twin delayed DDPG (TD3) algorithm is designed based on this DRL framework. Numerical results confirm the effectiveness of the proposed schemes compared with the benchmark methods. Full article
(This article belongs to the Special Issue Feature Papers in the 'Sensor Networks' Section 2023)
Show Figures

Figure 1: System model.
Figure 2: Proposed task offloading and resource allocation framework based on DDPG.
Figure 3: Proposed task offloading and resource allocation framework based on TD3.
Figure 4: Convergence performance under different learning rates.
Figure 5: Convergence performance under different discount factors.
Figure 6: The weighted achievable data rate versus the transmit power budget.
Figure 7: Sensing SINR versus the transmit power budget.
Figure 8: The total energy consumption versus the number of users.
Figure 9: The total average execution latency versus the number of users.
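The second scheme builds on TD3, whose distinguishing ingredient over DDPG is the clipped double-Q target taken as the minimum of two target critics (delayed policy updates and target-policy smoothing are omitted here). A minimal sketch with mocked critic outputs:

# Sketch of the TD3 clipped double-Q target; arrays stand in for real critics.
import numpy as np

def td3_targets(rewards, dones, q1_next, q2_next, gamma=0.99):
    """Bootstrap from the smaller of the two target critics to curb overestimation."""
    q_min = np.minimum(q1_next, q2_next)
    return rewards + gamma * (1.0 - dones) * q_min

rng = np.random.default_rng(0)
batch = 4
y = td3_targets(rng.normal(size=batch), np.zeros(batch),
                rng.normal(size=batch), rng.normal(size=batch))
print(y)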
13 pages, 4049 KiB  
Article
High-Performance SAW Resonator with Spurious Mode Suppression Using Hexagonal Weighted Electrode Structure
by Yulong Liu, Hongliang Wang, Feng Zhang, Luhao Gou, Shengkuo Zhang, Gang Cao and Pengcheng Zhang
Sensors 2023, 23(24), 9895; https://doi.org/10.3390/s23249895 - 18 Dec 2023
Cited by 2 | Viewed by 1998
Abstract
Surface acoustic wave (SAW) resonators are widely applied in electronics, communication, and other engineering fields. However, the spurious modes generally present in resonators can cause a deterioration in device performance. Therefore, this paper proposes a hexagonal weighted structure to suppress them. Through the construction of a finite element resonator model, the parameters of the interdigital transducer (IDT) and the area of the dummy finger weighting are determined. The spurious waves are confined within the dummy finger area, whereas the main mode is less affected by this structure. To verify the suppression effect observed in simulation, resonators with conventional and hexagonal weighted structures were fabricated using a micro-electromechanical systems (MEMS) process. S-parameter tests of the fabricated resonators show that the hexagonal weighted resonators achieve a high level of spurious mode suppression. Their properties are superior to those of the conventional structure, with a higher Q value (10,406), a higher minimum return loss (25.7 dB), and a lower peak sidelobe ratio (19%). This work provides a feasible solution for the design of SAW resonators with suppressed spurious modes. Full article
(This article belongs to the Section Sensor Materials)
Show Figures

Figure 1: (a) Schematic diagram of the resonator slice model. (b) Simulation results of the SAW resonator in Rayleigh mode. (c) Displacement of the model in the Rayleigh wave mode versus substrate depth.
Figure 2: Admittance curves for different electrode thicknesses: (a) electrode thickness from 0.010 λ to 0.025 λ; (b) electrode thickness from 180 nm to 220 nm.
Figure 3: S11 parameter curves for different numbers of IDT electrode pairs.
Figure 4: (a) Periodic strip 3D finite element model. (b) S11 parameter profiles of the IDT with different W.
Figure 5: (a) Relative amplitude displacement contour curves for unweighted modes 0, 1, and 2. (b) Sketch of the single-port hexagonal weighted resonator.
Figure 6: S11 curves of resonators with different weighting levels; the insets show the simulation models. (a) Weighted at 0%. (b) Weighted at 30%. (c) Weighted at 50%. (d) Weighted at 100%.
Figure 7: Device processing flow chart.
Figure 8: (a) Confocal microscope image of the unweighted resonator (Chip A). (b) Confocal microscope image of the hexagonal weighted resonator (Chip B).
Figure 9: (a) The two types of devices mounted on test PCBs. (b) Test platform.
Figure 10: Comparison of actual measurement and simulation for chips A and B.
Figure 11: (a) Enlargement of the actual measurement of device A. (b) Enlargement of the actual measurement of device B.
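A loaded quality factor such as the Q = 10,406 quoted above is often estimated from the resonance dip as Q = f0/FWHM; conventions vary, and the authors' exact procedure is not given in the abstract. An illustration on a synthetic Lorentzian |S11| dip with an assumed center frequency:

# Rough illustration of one Q-estimation convention; all values are synthetic.
import numpy as np

f = np.linspace(430e6, 440e6, 200001)        # assumed frequency sweep [Hz]
f0, fwhm, depth = 435e6, 42e3, 0.9
s11_mag = 1 - depth / (1 + (2 * (f - f0) / fwhm) ** 2)   # Lorentzian dip in |S11|

i_min = np.argmin(s11_mag)
half_level = 1 - depth / 2                   # halfway down the dip
inside = np.where(s11_mag <= half_level)[0]  # points deeper than the half-depth level
q_est = f[i_min] / (f[inside[-1]] - f[inside[0]])
print(f"f0 = {f[i_min] / 1e6:.3f} MHz, Q ~ {q_est:.0f}")   # expect roughly f0/fwhm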
16 pages, 4771 KiB  
Article
Self-Attention Mechanism-Based Head Pose Estimation Network with Fusion of Point Cloud and Image Features
by Kui Chen, Zhaofu Wu, Jianwei Huang and Yiming Su
Sensors 2023, 23(24), 9894; https://doi.org/10.3390/s23249894 - 18 Dec 2023
Cited by 2 | Viewed by 2096
Abstract
Head pose estimation serves various applications, such as gaze estimation, fatigue driving detection, and virtual reality. Nonetheless, achieving precise and efficient predictions remains challenging owing to the reliance on single data sources. Therefore, this study introduces a multimodal feature fusion technique to improve head pose estimation accuracy. The proposed method amalgamates data derived from diverse sources, including RGB and depth images, to construct a comprehensive three-dimensional representation of the head, commonly referred to as a point cloud. The noteworthy innovations of this method encompass a residual multilayer perceptron structure within PointNet, designed to tackle gradient-related challenges, along with spatial self-attention mechanisms aimed at noise reduction. The enhanced PointNet and ResNet networks are utilized to extract features from the point clouds and images, respectively, and the extracted features are then fused. Furthermore, the incorporation of a scoring module strengthens robustness, particularly in scenarios involving facial occlusion; this is achieved by preserving features from the highest-scoring point cloud. Additionally, a prediction module is employed, combining classification and regression methodologies to accurately estimate head poses. The proposed method improves the accuracy and robustness of head pose estimation, especially in cases involving facial obstructions. These advancements are substantiated by experiments conducted on the BIWI dataset, demonstrating the superiority of this method over existing techniques. Full article
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1: Top: RGB images; bottom: the head pose labels corresponding to the RGB images. (a) Discretized head pose results for a yaw angle of 38.27°. (b) Discretized head pose results for a yaw angle of −40.39°.
Figure 2: Head pose estimation network. Different colors represent the features of different stages. The network is divided into four modules: the feature function, fusion function, score function, and predict function.
Figure 3: Feedforward residual MLP module.
Figure 4: Spatial self-attention module.
Figure 5: Feature extraction module. The network consists of five residual point blocks, three attention blocks, and a feature transform.
Figure 6: Classification and regression module.
Figure 7: Data sources. (a) RGB image, showing the head position. (b) Head mask, in which the white region is the head and the black region is the background. (c) Point cloud, converted from the depth map.
Figure 8: Point clouds at different scales: (a) original point cloud; (b) the original point cloud downsampled to 1024 points; (c) the original point cloud downsampled to 512 points.
Figure 9: Comparison of model prediction accuracy, within the same dataset, with and without positional encoding: (a) yaw angles; (b) pitch angles; (c) roll angles.
Figure 10: Comparison between the model's predicted values and the ground truth when using the 11th and 12th instances as the dataset: (a) yaw angles; (b) pitch angles; (c) roll angles.
Figure 11: Mean absolute error between the model's predictions and the ground truth when each set of data in the dataset is used as a test set. The red dotted line represents the upper limit of the model's prediction accuracy.
Figure 12: Comparison of different methods on the BIWI dataset: prediction accuracy for (a) yaw, (b) pitch, and (c) roll angles.
Figure 13: Visualization of partial test set results. Blue, green, and red represent yaw, pitch, and roll angles, respectively. The top and bottom rows show the RGB and point cloud visualizations, respectively.
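The residual multilayer perceptron structure added to PointNet (Figure 3) follows the familiar skip-connection pattern. A minimal PyTorch sketch of one such block operating on per-point features, with placeholder layer widths:

# Sketch of a residual shared-MLP block for point features (widths assumed).
import torch
import torch.nn as nn

class ResidualMLPBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=1),  # shared MLP over points
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels, channels, kernel_size=1),
            nn.BatchNorm1d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):                  # x: (batch, channels, n_points)
        return self.act(x + self.net(x))   # skip connection eases gradient flow

x = torch.randn(2, 64, 1024)               # 2 clouds, 64 features, 1024 points
print(ResidualMLPBlock(64)(x).shape)        # torch.Size([2, 64, 1024])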
16 pages, 3388 KiB  
Article
CVII: Enhancing Interpretability in Intelligent Sensor Systems via Computer Vision Interpretability Index
by Hossein Mohammadi, Krishnaprasad Thirunarayan and Lingwei Chen
Sensors 2023, 23(24), 9893; https://doi.org/10.3390/s23249893 - 18 Dec 2023
Cited by 2 | Viewed by 1452
Abstract
In the realm of intelligent sensor systems, the dependence on Artificial Intelligence (AI) applications has heightened the importance of interpretability. This is particularly critical for opaque models such as Deep Neural Networks (DNNs), as understanding their decisions is essential not only for ethical and regulatory compliance, but also for fostering trust in AI-driven outcomes. This paper introduces the novel concept of a Computer Vision Interpretability Index (CVII). The CVII framework is designed to emulate human cognitive processes, specifically in tasks related to vision. It addresses the intricate challenge of quantifying interpretability, a task that is inherently subjective and varies across domains. The CVII is rigorously evaluated using a range of computer vision models applied to the COCO (Common Objects in Context) dataset, a widely recognized benchmark in the field. The findings establish a robust correlation between image interpretability, model selection, and CVII scores. This research makes a substantial contribution to enhancing interpretability both for human comprehension and within intelligent sensor applications. By promoting transparency and reliability in AI-driven decision-making, the CVII framework empowers stakeholders to effectively harness the full potential of AI technologies. Full article
(This article belongs to the Section Intelligent Sensors)
Show Figures

Figure 1: Computer Vision Interpretability Index calculation setup tailored for intelligent sensor data analysis in a real-world setting.
Figure 2: Illustration of the annotations used in the CVII (Computer Vision Interpretability Index), clarifying their respective meanings and purposes.
Figure 3: Model interpretability comparison setup. The overall COCO test set interpretability is calculated using the proposed approach and the annotations provided by the dataset (first row in the diagram). Model developers can substitute their model for one, two, or all three component tasks in computer vision and calculate the interpretability index based on the annotations their model provides. The results can then be compared with the benchmark provided in the first row.
Figure 4: Five image instances in the COCO test set, each with different characteristics based on the complexity of detecting each object in the image and distinguishing it from its surrounding objects.
Figure 5: An image example used for the case study that explains the rationale behind the CVII.
Figure 6: Scenario B: utilizing the CVII platform for a zebra detection and immunization mission using a camera-equipped sensory drone.
13 pages, 1712 KiB  
Communication
Guided Acoustic Waves in Polymer Rods with Varying Immersion Depth in Liquid
by Klaus Lutter, Alexander Backer and Klaus Stefan Drese
Sensors 2023, 23(24), 9892; https://doi.org/10.3390/s23249892 - 18 Dec 2023
Viewed by 1403
Abstract
Monitoring tanks and vessels play an important part in public infrastructure and several industrial processes. The goal of this work is to propose a new kind of guided acoustic wave sensor for measuring immersion depth. Common sensor types such as pressure sensors and airborne ultrasonic sensors are often limited to non-corrosive media, and can fail to distinguish between the media they reflect on or are submerged in. Motivated by this limitation, we developed a guided acoustic wave sensor made from polyethylene using piezoceramics. In contrast to existing sensors, low-frequency Hanning-windowed sine bursts were used to excite the L(0,1) mode within a solid polyethylene rod. The acoustic velocity within these rods changes with the immersion depth in the surrounding fluid. Thus, it is possible to detect changes in the surrounding media by measuring the time shifts of zero crossings through the rod after being reflected on the opposite end. The change in time of zero crossings is monotonically related to the immersion depth. This relative measurement method can be used in different kinds of liquids, including strong acids or bases. Full article
(This article belongs to the Special Issue Acoustic Sensors and Their Applications)
Show Figures

Figure 1
<p>Simulated phase velocities of longitudinal (L) and flexural (F) modes for HD-PE rods with varying diameters.</p>
Figure 2
<p>Simulated group velocities of longitudinal (L) and flexural (F) modes for HD-PE rods with varying diameters.</p>
Figure 3
<p>Simulation environment.</p>
Figure 4
<p>Influence of inner diameter of the piezo ring.</p>
Figure 5
<p>Influence of piezo height.</p>
Figure 6
<p>Influence of backing height with different piezo heights.</p>
Figure 7
<p>Schematics of the experimental setup.</p>
Figure 8
<p>Flow chart of the zero tracing algorithm.</p>
Figure 9
<p>Comparison of signal amplitudes with backing (20 dB amplification).</p>
Figure 10
<p>Amplitudes (peak to peak) over various signal frequencies.</p>
Figure 11
<p>Measured signals of a 40 mm rod (1.7 m length).</p>
Figure 12
<p>Overlap of dispersion graphs and 2D-FFT transform from laser Doppler vibrometer measurements.</p>
Figure 13
<p>Measured signals of different HD-PE rods.</p>
Figure 14
<p>Acoustic signal at two different immersion depths.</p>
Figure 15
<p>Zero tracing compared to pressure sensor voltages for rods of different diameter.</p>
Figure 16
<p>Nonlinear behaviour of traced zero crossing times at low immersion depths, showing a zoomed-in view of <a href="#sensors-23-09892-f015" class="html-fig">Figure 15</a>b.</p>
Figure 17
<p>L(0,1) reflection at different temperatures.</p>
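The zero-tracing idea in the abstract (see the algorithm of Figure 8) lends itself to a compact illustration: locate a zero crossing of the received burst by linear interpolation and compare its timing between two immersion depths. The following is a minimal sketch, assuming a synthetic 50 kHz Hanning-windowed burst and 1 MHz sampling; the actual excitation frequency, sampling rate, and choice of traced crossing in the paper may differ.

```python
import numpy as np

def zero_crossing_times(signal: np.ndarray, fs: float) -> np.ndarray:
    """Return linearly interpolated times of negative-to-positive zero crossings."""
    s0, s1 = signal[:-1], signal[1:]
    idx = np.where((s0 < 0) & (s1 >= 0))[0]
    frac = -s0[idx] / (s1[idx] - s0[idx])        # linear interpolation within a sample
    return (idx + frac) / fs

fs = 1e6                                          # 1 MHz sampling (assumed)
t = np.arange(0, 2e-3, 1 / fs)

def burst(delay, f0=50e3, cycles=5):
    """Hanning-windowed sine burst arriving `delay` seconds into the record."""
    tau = t - delay
    inside = (tau >= 0) & (tau < cycles / f0)
    env = 0.5 * (1 - np.cos(2 * np.pi * f0 * tau / cycles))
    return np.where(inside, env * np.sin(2 * np.pi * f0 * tau), 0.0)

ref = burst(delay=1.00e-3)                        # reference immersion depth
deeper = burst(delay=1.02e-3)                     # changed depth -> shifted arrival (sign assumed)
dt = zero_crossing_times(deeper, fs)[0] - zero_crossing_times(ref, fs)[0]
print(f"time shift of first traced zero crossing: {dt * 1e6:.1f} microseconds")
```

Tracing the same crossing across repeated measurements, as the paper does, turns this time shift into a relative measure of immersion depth.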
12 pages, 6511 KiB  
Article
Carbon Paste Electrodes Surface-Modified with Surfactants: Principles of Surface Interactions at the Interface between Two Immiscible Liquid Phases
by Ivan Švancara and Milan Sýs
Sensors 2023, 23(24), 9891; https://doi.org/10.3390/s23249891 - 18 Dec 2023
Cited by 2 | Viewed by 1559
Abstract
Carbon paste electrodes ex-situ modified with different surfactants were studied using cyclic voltammetry with two model redox couples, namely hexaammineruthenium (II)/(III) and hexacyanoferrate (II)/(III), in 0.1 mol L−1 acetate buffer (pH 4), 0.1 mol L−1 phosphate buffer (pH 7), and 0.1 [...] Read more.
Carbon paste electrodes ex-situ modified with different surfactants were studied using cyclic voltammetry with two model redox couples, namely hexaammineruthenium (II)/(III) and hexacyanoferrate (II)/(III), in 0.1 mol L−1 acetate buffer (pH 4), 0.1 mol L−1 phosphate buffer (pH 7), and 0.1 mol L−1 ammonia buffer (pH 9) at scan rates ranging from 50 to 500 mV s−1. Distinct effects of pH, ionic strength, and the composition of the supporting media, as well as of the amount of surfactant and its accumulation at the electrode surface, were observed and reflected in changes in double-layer capacitance and electrode kinetics. It was shown that, at the two-phase interface, the presence of surfactants gives rise to electrostatic interactions that dominate the transfer of the model substances, possibly accompanied by erosion effects at the carbon paste surface. The individual findings depend on the configurations investigated, which are also illustrated in numerous schemes of the actual microstructure at the respective electrode surface. Finally, the principal observations and results are highlighted and discussed with respect to the future development and possible applications of sensors based on surfactant-modified composite electrodes. Full article
(This article belongs to the Special Issue Electrochemical Sensors: Technologies and Applications)
Show Figures

Figure 1
<p>Schematic representation of the microstructure of two types of surfactant-modified CPEs.</p>
Figure 2
<p>Individual steps of the preparation of <span class="html-italic">ex-situ</span> surfactant-modified CPEs.</p>
Figure 3
<p>Repetitive cyclic voltammograms (five cycles) of 500 μmol L<sup>−1</sup> K<sub>3</sub>[Fe(CN)<sub>6</sub>] obtained at CPE-CTAB (<b>a</b>) and 500 μmol L<sup>−1</sup> K<sub>3</sub>[Ru(NH<sub>3</sub>)<sub>6</sub>] obtained at CPE-SDS (<b>b</b>), both obtained by measurements in 0.1 mol L<sup>−1</sup> phosphate buffer (pH 7) at 50 mV s<sup>−1</sup>.</p>
Figure 4
<p>Linear-sweep voltammograms of 0.1 mol L<sup>−1</sup> acetate buffer (pH 4; red), phosphate buffer (pH 7; green), and ammonia buffer (pH 10; blue curve) recorded on bare CPE (solid; <b>a</b>), CPE-CTAB (dashed; <b>b</b>), and CPE-SDS (dotted lines; <b>c</b>) at a potential step of 5 mV and a scan rate of 50 mV s<sup>−1</sup>.</p>
Figure 5
<p>Cyclic voltammograms of 1 mol L<sup>−1</sup> KOH obtained at CPE-SDS (<b>a</b>), CPE-SDBS (<b>b</b>), CPE-CTAB (<b>c</b>), and CPE-DDAB (<b>d</b>) at a potential step of 5 mV and scan rates of 50, 100, 200, 300, 400, and 500 mV s<sup>−1</sup>. Scheme of surfactant molecules extracted into the pasting liquid of the CPE.</p>
Figure 6
<p>Representative illustration of the accumulation of surfactants (via extractive anchoring of the molecules of CTAB and SDS into triacylglycerides). Blue and yellow represent an aqueous phase (working medium) and the non-aqueous phase of triacylglycerides (paste binder), respectively.</p>
Figure 7
<p>Cyclic voltammograms of 500 μmol L<sup>−1</sup> K<sub>3</sub>[Fe(CN)<sub>6</sub>] obtained at bare CPE (black curve) and differently prepared CPE-CTAB electrodes in 0.1 mol L<sup>−1</sup> phosphate buffer (pH 7) at 50 mV s<sup>−1</sup>. These modified CPEs differed only in the accumulation time of the surfactant from its 1 mmol L<sup>−1</sup> aqueous solution at a stirring speed of 400 rpm and room temperature. The corresponding scheme illustratively shows the different CPE coverage depending on the accumulation time of CTAB and the ion-paired [Fe(CN)<sub>6</sub>]<sup>3−</sup> anions (large violet particle).</p>
Figure 8
<p>Cyclic voltammograms of 500 μmol L<sup>−1</sup> K<sub>3</sub>[Fe(CN)<sub>6</sub>] recorded at bare CPE (<b>a</b>), CPE-CTAB (<b>b</b>), and CPE-SDS (<b>c</b>) in 1 (orange), 0.1 (blue), 0.05 (red), and 0.01 mol L<sup>−1</sup> KCl (green curve) at 50 mV s<sup>−1</sup>. Below and right: scheme of the electrostatic interaction between the extracted molecules of surfactant and the free ions in the working medium, where the large particles with negative charge (violet) represent the [Fe(CN)<sub>6</sub>]<sup>3−</sup> anion.</p>
Figure 9
<p>(<b>Left</b>) Cyclic voltammograms of 500 μmol L<sup>−1</sup> K<sub>3</sub>[Fe(CN)<sub>6</sub>] recorded at the bare CPE (black curve), CPE-CTAB (green), CPE-CTAB-SDS (orange line), and CPE-Triton X-100 (red) in 0.1 mol L<sup>−1</sup> phosphate buffer (pH 7) at 50 mV s<sup>−1</sup>. (<b>Right</b>) Scheme of the electrostatic interaction between the extracted molecules of surfactant and free ions in the working medium, where large negatively charged particles represent the [Fe(CN)<sub>6</sub>]<sup>3−</sup> anion.</p>
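As background to the scan-rate study above, the peak current of a reversible redox couple in cyclic voltammetry is commonly analyzed with the Randles–Ševčík equation. The relation below is the standard textbook form at 25 °C, offered as context only; it is not quoted from this paper.

```latex
% Randles--Sevcik equation for a reversible couple at 25 degrees C:
%   i_p : peak current (A),            n : electrons transferred,
%   A   : electrode area (cm^2),       D : diffusion coefficient (cm^2 s^{-1}),
%   C   : bulk concentration (mol cm^{-3}),  v : scan rate (V s^{-1})
i_p = 2.69 \times 10^{5}\, n^{3/2} A\, D^{1/2} C\, v^{1/2}
```

A linear plot of i_p versus v^{1/2} is the usual diagnostic of a diffusion-controlled process across a scan-rate series such as the 50–500 mV s−1 range used here.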
22 pages, 592 KiB  
Article
Empowering Participatory Research in Urban Health: Wearable Biometric and Environmental Sensors for Activity Recognition
by Rok Novak, Johanna Amalia Robinson, Tjaša Kanduč, Dimosthenis Sarigiannis, Sašo Džeroski and David Kocman
Sensors 2023, 23(24), 9890; https://doi.org/10.3390/s23249890 - 18 Dec 2023
Cited by 1 | Viewed by 1947
Abstract
Participatory exposure research, which tracks behaviour and assesses exposure to stressors like air pollution, traditionally relies on time-activity diaries. This study introduces a novel approach, employing machine learning (ML) to empower laypersons in human activity recognition (HAR), aiming to reduce dependence on manual [...] Read more.
Participatory exposure research, which tracks behaviour and assesses exposure to stressors like air pollution, traditionally relies on time-activity diaries. This study introduces a novel approach, employing machine learning (ML) to empower laypersons in human activity recognition (HAR), aiming to reduce dependence on manual recording by leveraging data from wearable sensors. Recognising complex activities such as smoking and cooking presents unique challenges due to specific environmental conditions. In this research, we combined wearable environment/ambient and wrist-worn activity/biometric sensors for complex activity recognition in an urban stressor exposure study, measuring parameters like particulate matter concentrations, temperature, and humidity. Two groups, Group H (88 individuals) and Group M (18 individuals), wore the devices and manually logged their activities at hourly and one-minute resolution, respectively. Prioritising accessibility and inclusivity, we selected three classification algorithms: k-nearest neighbours (IBk), decision trees (J48), and random forests (RF), based on (1) proven efficacy in the existing literature, (2) understandability and transparency for laypersons, (3) availability on user-friendly platforms like WEKA, and (4) efficiency on basic devices such as office laptops or smartphones. Accuracy improved with finer temporal resolution and more detailed activity categories. However, compared with other published human activity recognition research, our accuracy rates, particularly for less complex activities, were not as competitive. Misclassifications were higher for vague activities (resting, playing), while well-defined activities (smoking, cooking, running) had few errors. Including environmental sensor data increased accuracy for all activities, especially playing, smoking, and running. Future work should consider exploring other explainable algorithms available on diverse tools and platforms. Our findings underscore ML’s potential in exposure studies, emphasising its adaptability and significance for laypersons while also highlighting areas for improvement. Full article
(This article belongs to the Special Issue Sensors for Human Activity Recognition II)
Show Figures

Figure 1
<p>Schematic representation of the overall methodology and data flows used in this work.</p>
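Since the study names its three learners and motivates them by accessibility, a compact sketch may help readers reproduce the comparison. This uses scikit-learn stand-ins for the WEKA algorithms (IBk ≈ k-nearest neighbours, J48 ≈ a depth-limited decision tree, RF ≈ random forest); the synthetic features and labels below are placeholders, not the study's data.

```python
# Sketch of the three-classifier comparison described above, on synthetic data.
# Feature columns mimic combined biometric + environmental inputs; real data
# from the wearable sensors would replace the random arrays.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Columns: heart rate, wrist acceleration magnitude, PM2.5, temperature, humidity
X = rng.normal(size=(1000, 5))
y = rng.integers(0, 4, size=1000)   # e.g., rest / cook / smoke / run labels

for name, clf in [("IBk", KNeighborsClassifier(n_neighbors=5)),
                  ("J48", DecisionTreeClassifier(max_depth=8)),
                  ("RF", RandomForestClassifier(n_estimators=100))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```

Dropping the environmental columns from X and re-running the loop reproduces the kind of ablation the study uses to show that ambient data improve accuracy for activities like smoking and cooking.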
16 pages, 4264 KiB  
Article
Design and Development of an Imitation Detection System for Human Action Recognition Using Deep Learning
by Noura Alhakbani, Maha Alghamdi and Abeer Al-Nafjan
Sensors 2023, 23(24), 9889; https://doi.org/10.3390/s23249889 - 18 Dec 2023
Cited by 2 | Viewed by 1596
Abstract
Human action recognition (HAR) is a rapidly growing field with numerous applications in various domains. HAR involves the development of algorithms and techniques to automatically identify and classify human actions from video data. Accurate recognition of human actions has significant implications in fields [...] Read more.
Human action recognition (HAR) is a rapidly growing field with numerous applications in various domains. HAR involves the development of algorithms and techniques to automatically identify and classify human actions from video data. Accurate recognition of human actions has significant implications in fields such as surveillance, sports analysis, and health care. This paper presents a study on the design and development of an imitation detection system using an HAR algorithm based on deep learning. This study explores the use of deep learning models, such as a single-frame convolutional neural network (CNN) and pretrained VGG-16, for the accurate classification of human actions. The proposed models were evaluated using a benchmark dataset, KTH. The performance of these models was compared with that of classical classifiers, including K-Nearest Neighbors, Support Vector Machine, and Random Forest. The results showed that the VGG-16 model achieved higher accuracy than the single-frame CNN, with a 98% accuracy rate. Full article
Show Figures

Figure 1
<p>HAR algorithm.</p>
Figure 2
<p>KTH dataset.</p>
Figure 3
<p>Single-frame CNN.</p>
Figure 4
<p>CNN classification model structure.</p>
Figure 5
<p>VGG-16 architecture.</p>
Figure 6
<p>VGG-16 model structure.</p>
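A transfer-learning classifier of the kind evaluated above can be sketched in a few lines of Keras. The frozen VGG-16 base and the six-way softmax head (KTH contains six action classes) are reasonable assumptions; the paper's exact head architecture and training schedule are not specified in the abstract.

```python
# Sketch of a VGG-16 transfer-learning classifier for single-frame action
# recognition. Input size and the dense head are assumptions for illustration.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                      # freeze the pretrained convolutional base

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(6, activation="softmax"),  # KTH has six action classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```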
1 pages, 158 KiB  
Correction
Correction: Todorov et al. Electromagnetic Sensing Techniques for Monitoring Atopic Dermatitis—Current Practices and Possible Advancements: A Review. Sensors 2023, 23, 3935
by Alexandar Todorov, Russel Torah, Mahmoud Wagih, Michael R. Ardern-Jones and Steve P. Beeby
Sensors 2023, 23(24), 9888; https://doi.org/10.3390/s23249888 - 18 Dec 2023
Viewed by 878
Abstract
Mahmoud Wagih was not included as an author in the original publication [...] Full article
5 pages, 207 KiB  
Editorial
Colorimetric Sensors: Methods and Applications
by Feng-Qing Yang and Liya Ge
Sensors 2023, 23(24), 9887; https://doi.org/10.3390/s23249887 - 18 Dec 2023
Cited by 12 | Viewed by 5328
Abstract
Colorimetric sensors have attracted considerable attention in many sensing applications because of their specificity, high sensitivity, cost-effectiveness, ease of use, rapid analysis, simplicity of operation, and clear visibility to the naked eye [...] Full article
(This article belongs to the Special Issue Colorimetric Sensors: Methods and Applications)
27 pages, 2711 KiB  
Article
A Novel Hierarchical Security Solution for Controller-Area-Network-Based 3D Printing in a Post-Quantum World
by Tyler Cultice, Joseph Clark, Wu Yang and Himanshu Thapliyal
Sensors 2023, 23(24), 9886; https://doi.org/10.3390/s23249886 - 17 Dec 2023
Cited by 1 | Viewed by 1736
Abstract
As the popularity of 3D printing or additive manufacturing (AM) continues to increase for use in commercial and defense supply chains, the requirement for reliable, robust protection from adversaries has become more important than ever. Three-dimensional printing security focuses on protecting both the [...] Read more.
As the popularity of 3D printing or additive manufacturing (AM) continues to increase for use in commercial and defense supply chains, the requirement for reliable, robust protection from adversaries has become more important than ever. Three-dimensional printing security focuses on protecting both the individual Industrial Internet of Things (I-IoT) AM devices and the networks that connect hundreds of these machines together. Additionally, rapid improvements in quantum computing demonstrate a vital need for robust security in a post-quantum future for critical AM manufacturing, especially for applications in, for example, the medical and defense industries. In this paper, we discuss the attack surface of adversarial data manipulation on the physical inter-device communication bus, the Controller Area Network (CAN). We propose a novel, hierarchical tree solution for a secure, post-quantum-supported security framework for CAN-based AM devices. By using subnet hopping between isolated CAN buses, our framework maintains the ability to use legacy or third-party devices in a plug-and-play fashion while securing and minimizing the attack surface of hardware Trojans or other adversaries. The results of the physical implementation of our framework demonstrate 25% and 90% improvements in message cost for authentication compared to existing lightweight and post-quantum CAN security solutions, respectively. Additionally, we performed timing benchmarks on the normal communication (hopping) and authentication schemes of our framework. Full article
Show Figures

Figure 1
<p>Structure of our CAN-based 3D printing framework, based on a hierarchical root-of-trust structure. This provides a high-security network that 3D printers can utilize internally and externally to connect a large number of untrusted components.</p>
Figure 2
<p>The 3D printing security paradigm and taxonomy of each category of security. Our framework targets the Network Security and Internal Communication Security aspects to secure CAN communications inside and outside the 3D printer and network.</p>
Figure 3
<p>Depiction of an example CAN-based 3D printer structure. The components connect to a CAN bus to be directly addressable by the main board. Each printer in a printer network (or Farm) is connected to the Controller PC through various protocols, the CAN bus being a notable example.</p>
Figure 4
<p>Pipeline design of our framework from initialization to normal operation for a single subnet. The three distinct phases of our design are discovery, authentication, and communication (including capability and address requesting). Each subnet should perform these steps upon startup.</p>
Figure 5
<p>Addresses and routing between a pair of network nodes. The left side shows the route from the orange node (bottom left) to the green node (bottom center), while the right shows the reverse route. Each route is composed of a series of local network address “hops”. The current address is replaced with the return address after each hop, allowing the return route to be calculated.</p>
Figure 6
<p>General flow of standard communication depicting a sender transmitting through a router to reach a destination. The source (blue) creates the route and encrypted data to send to the first “hop” in the series. The router(s) (orange) decrypt with the previous bus’s session key, validate, and re-encrypt the data using the next unique CAN bus parent session key. The router process is repeated until the data reaches the destination (green). Addresses and TTL are modified after each hop.</p>
Figure 7
<p>The Controller Area Network Flexible Data-Rate (CAN-FD) frame with the data field divided into the sections required for our framework. Use of a header of at least 8 bytes is recommended, leaving the data and tag fields with 56 bytes to use for encrypted data.</p>
Figure 8
<p>Depiction of the expansiveness of our framework for both internal 3D printer communication and CAN networking. The tree structure allows printer trees to expand into network trees, as well as allowing legacy, non-CAN printers and peripherals (cameras) onto the network safely.</p>
Figure 9
<p>Full test bed of our framework implementation, with attached 3D printer components. Network nodes are implemented by SAM C21 micro-controllers with servo, temperature sensor, and fan components attached to the endpoints.</p>
Figure 10
<p>Node efficiency vs. number of client nodes for several values of <span class="html-italic">k</span> (number of nodes in subnet). A lower node efficiency percentage means more routers and infrastructure. This directly impacts the number of hops (latency), the number of branches, and the hardware cost of our framework.</p>
Figure 11
<p>Results of time to complete authentication, in Log(y), for the number of nodes in subnet <span class="html-italic">k</span>. Data points are extrapolated by a line of best fit for high values of <span class="html-italic">k</span>. Post-quantum performance gives a 10× overhead, but both are well within acceptable timing ranges for expected values of <span class="html-italic">k</span>.</p>
Figure 12
<p>Number of messages (cost) needed to perform authentication in our LWC and PQC frameworks compared to existing CAN frameworks. Our LWC and PQC frameworks have noticeably different message costs, but both still perform better than most other CAN frameworks [<a href="#B10-sensors-23-09886" class="html-bibr">10</a>,<a href="#B11-sensors-23-09886" class="html-bibr">11</a>,<a href="#B12-sensors-23-09886" class="html-bibr">12</a>].</p>
Figure 13
<p>Network latency (time taken for transmission) for lightweight and post-quantum framework designs by number of hops. Each hop requires a router to validate and reconstruct each packet. The maximum number of hops is usually limited to two times the depth of the tree.</p>
Figure 14
<p>Network configuration for measuring normal communication latency. The SAM C21 micro-controllers are connected in our tree-based network, with the computer connecting to both the root (solid line) and one endpoint node (dashed line). Latency tests are conducted at each message hop count (or depth) and extrapolated, as shown in <a href="#sensors-23-09886-f013" class="html-fig">Figure 13</a>.</p>
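Figure 7 specifies that the 64-byte CAN-FD data field is split into a header of at least 8 bytes plus 56 bytes for encrypted data and tag. The sketch below packs such a frame; the individual header fields (source, destination, TTL, sequence number) and their widths are illustrative assumptions beyond the stated 8/56 split.

```python
# Sketch of packing a 64-byte CAN-FD data field into an 8-byte routing header
# plus 56 bytes of ciphertext + authentication tag, per the split in Figure 7.
# Header field names and widths are hypothetical, not the paper's layout.
import struct

def pack_frame(src: int, dst: int, ttl: int, seq: int, enc_and_tag: bytes) -> bytes:
    """Return a 64-byte CAN-FD data field: 8-byte header + 56-byte body."""
    if len(enc_and_tag) != 56:
        raise ValueError("encrypted data + tag must fill exactly 56 bytes")
    # >HHBBH = src(2) + dst(2) + ttl(1) + reserved(1) + seq(2) = 8 bytes, big-endian
    header = struct.pack(">HHBBH", src, dst, ttl, 0, seq)
    return header + enc_and_tag

frame = pack_frame(src=0x0102, dst=0x0304, ttl=4, seq=7, enc_and_tag=bytes(56))
assert len(frame) == 64
print(frame.hex())
```

At each subnet hop, a router would decrement the TTL and rewrite the address fields before re-encrypting the 56-byte body under the next bus's session key, as described in the Figure 6 caption.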
19 pages, 599 KiB  
Article
Computation Offloading and Resource Allocation Based on P-DQN in LEO Satellite Edge Networks
by Xu Yang, Hai Fang, Yuan Gao, Xingjie Wang, Kan Wang and Zheng Liu
Sensors 2023, 23(24), 9885; https://doi.org/10.3390/s23249885 - 17 Dec 2023
Cited by 2 | Viewed by 2002
Abstract
Traditional low earth orbit (LEO) satellite networks are typically independent of terrestrial networks and develop relatively slowly due to on-board capacity limitations. By integrating emerging mobile edge computing (MEC) with LEO satellite networks to form the business-oriented “end-edge-cloud” multi-level computing architecture, some [...] Read more.
Traditional low earth orbit (LEO) satellite networks are typically independent of terrestrial networks and develop relatively slowly due to on-board capacity limitations. By integrating emerging mobile edge computing (MEC) with LEO satellite networks to form the business-oriented “end-edge-cloud” multi-level computing architecture, some computing-sensitive tasks can be offloaded by ground terminals to satellites, thereby satisfying more tasks in the network. Making computation offloading and resource allocation decisions in LEO satellite edge networks, however, poses challenges in tracking network dynamics and handling sophisticated actions. To cope with the discrete-continuous hybrid action space and time-varying networks, this work uses the parameterized deep Q-network (P-DQN) for joint computation offloading and resource allocation. First, the characteristics of time-varying channels are modeled, and both communication and computation models under three different offloading decisions are constructed. Second, the constraints on task offloading decisions, on remaining available computing resources, and on the power control of LEO satellites as well as the cloud server are formulated, followed by the problem of maximizing the number of satisfied tasks over the long run. Third, using the parameterized action Markov decision process (PAMDP) and P-DQN, joint computation offloading, resource allocation, and power control decisions are made in real time, to accommodate dynamics in LEO satellite edge networks and handle the discrete-continuous hybrid action space. Simulation results show that the proposed P-DQN method approaches the optimal control and outperforms other reinforcement learning (RL) methods designed for merely discrete or continuous action spaces, in terms of the long-term rate of satisfied tasks. Full article
(This article belongs to the Special Issue Integration of Satellite-Aerial-Terrestrial Networks)
Show Figures

Figure 1
<p>LEO satellite edge networks.</p>
Figure 2
<p>Flowchart for joint computation offloading and resource allocation with P-DQN.</p>
Figure 3
<p>Average reward under different learning rates.</p>
Figure 4
<p>Average return under different batch size settings.</p>
Figure 5
<p>Rate of satisfied tasks under different computing resource budgets in the cloud server.</p>
Figure 6
<p>Rate of satisfied tasks under different computing resource budgets of LEO satellites.</p>
Figure 7
<p>Rate of satisfied tasks under different maximum tolerance latency settings.</p>
Figure 8
<p>Rate of satisfied tasks under different terminal numbers.</p>
Figure 9
<p>Proportion of offloaded tasks under different approaches.</p>
Figure 10
<p>Rate of satisfied tasks in terminals, satellites, and the cloud server.</p>
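The core of P-DQN is action selection over a hybrid space: an actor network proposes continuous parameters (here, e.g., power-control values) for every discrete offloading option, and a Q-network scores each option conditioned on those parameters. The toy NumPy sketch below illustrates only this selection step with random linear "networks"; the dimensions and the three offloading options (local, satellite, cloud) are assumptions consistent with the abstract, not the paper's implementation.

```python
# Toy P-DQN action selection over a discrete-continuous hybrid action space.
# Real P-DQN uses trained neural networks; random linear maps stand in here.
import numpy as np

rng = np.random.default_rng(1)
STATE_DIM, N_ACTIONS, PARAM_DIM = 6, 3, 2   # 3 offloading choices, 2 params each

W_actor = rng.normal(size=(N_ACTIONS * PARAM_DIM, STATE_DIM)) * 0.1
W_q = rng.normal(size=(N_ACTIONS, STATE_DIM + N_ACTIONS * PARAM_DIM)) * 0.1

def select_action(state: np.ndarray):
    # 1) Actor: continuous parameters for each discrete action, squashed to (0, 1).
    params = 1.0 / (1.0 + np.exp(-(W_actor @ state)))
    # 2) Q-network: one Q-value per discrete action, conditioned on all parameters.
    q = W_q @ np.concatenate([state, params])
    k = int(np.argmax(q))                    # greedy discrete offloading choice
    return k, params.reshape(N_ACTIONS, PARAM_DIM)[k]

k, theta = select_action(rng.normal(size=STATE_DIM))
print(f"offload decision {k} with power-control parameters {theta}")
```

Training then updates the Q-network with the usual temporal-difference loss and the actor by ascending the Q-values, which is what lets the method track the time-varying channel state described above.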
16 pages, 4166 KiB  
Article
Fast Thermocycling in Custom Microfluidic Cartridge for Rapid Single-Molecule Droplet PCR
by Hirokazu Takahara, Hayato Tanaka and Masahiko Hashimoto
Sensors 2023, 23(24), 9884; https://doi.org/10.3390/s23249884 - 17 Dec 2023
Viewed by 1456
Abstract
The microfluidic droplet polymerase chain reaction (PCR), which enables simultaneous DNA amplification in numerous droplets, has led to the discovery of various applications that were previously deemed unattainable. Decades ago, it was demonstrated that the temperature holding periods at the denaturation and annealing [...] Read more.
The microfluidic droplet polymerase chain reaction (PCR), which enables simultaneous DNA amplification in numerous droplets, has led to the discovery of various applications that were previously deemed unattainable. Decades ago, it was demonstrated that the temperature holding periods at the denaturation and annealing stages in thermal cycles for PCR amplification could be essentially eliminated if a rapid change of temperature for the entire PCR mixture was achieved. Microfluidic devices facilitating the application of such fast thermocycling protocols have significantly reduced the time required for PCR. However, in microfluidic droplet PCR, the need to ensure successful amplification from single molecules within droplets has limited studies on accelerating assays through fast thermocycling. Our microfluidic cartridge, distinguished by its convenience in executing single-molecule droplet PCR with common laboratory equipment, features droplets positioned on a thin glass slide. We hypothesized that applying fast thermocycling to this cartridge would achieve single-molecule droplet PCR amplification. Indeed, the application of this fast protocol demonstrated successful amplification in just 22 min for 30 cycles (40 s/cycle). This breakthrough is noteworthy for its potential to expedite microfluidic droplet PCR assays, ensuring efficient single-molecule amplification within a remarkably short timeframe. Full article
(This article belongs to the Special Issue Portable Biosensors for Rapid Detection)
Show Figures

Figure 1
<p>(<b>a</b>) Photo of a polydimethylsiloxane (PDMS) microfluidic sheet. (<b>b</b>) Drawing of a microfluidic channel pattern with the terminal reservoirs on the PDMS sheet and an enlarged view of the T-channel junction. R1 and R2 represent reservoirs for oil and aqueous phase loading, respectively, whereas R3 indicates an outlet space into which droplets generated at the T-channel junction flow.</p>
Figure 2
<p>Illustration with a photographic image indicating the structure of the microfluidic cartridge, composed of four distinct layers: layer 1, thin glass (depicted in light green); layer 2, polyethylene terephthalate (PET) (depicted in light gray); layer 3, polycarbonate (PC) (depicted in light yellow); and layer 4, polydimethylsiloxane (PDMS) (depicted in light blue). Within layer 4, R1 and R2 denote reservoirs for oil and aqueous phase loading, respectively, whereas R3 represents an outlet space that is provided to establish a reduced-pressure environment by air suction for the fluid manipulation. The diagram at the lower left corner shows a cross-sectional view along (below) the dotted line I−II on the upper left 3D cartridge illustration.</p>
Figure 3
<p>Schematic of the current droplet PCR workflow. (<b>a</b>) Droplet preparation, (<b>b</b>) PCR, and (<b>c</b>) fluorescence imaging of thermocycled droplets.</p>
Figure 4
<p>(<b>a</b>) Illustration of the PCR chamber (i.e., R4) with the thermocouples (TCs: TC1 and TC2) arranged to monitor temperatures on the metal heat block surface and the glass slide surface (with the covering oil), respectively, during thermal cycles. (<b>b</b>) Temperature vs. time traces for the surfaces of the aluminum heat block and glass slide. The thermal cycling conditions applied to the droplet samples were as follows: an initial activation period lasting approximately 40 s within the range of 94 to 98 °C, 30 cycles of a two-step thermal profile consisting of ~1 s at 94 °C for denaturation and ~2 s at 63 °C for combined annealing and extension, and an additional final extension of approximately 1 min at 72 to 73 °C. We obtained the profiles for the metal block surface (blue solid line) and for the glass slide surface (red solid line) upon programming the instrument to denature at 99 °C for 0 s and perform annealing/extension at 58 °C for 0 s at the fastest temperature ramp rate for the 30 cycles. The right panel exhibits an enlarged view of the green-shaded area shown in the left panel.</p>
Figure 5
<p>Comparison of temperature vs. time traces on the surface of the thick glass plate (1.1 mm thick, indicated by the green solid line) with those on the surfaces of the metal blocks (indicated by the blue solid line) and the thin glass slide (0.15 mm thick, indicated by the red solid line). We obtained the temperature profile for the thick glass plate surface upon programming the instrument to denature at 99 °C for 0 s, followed by annealing/extension at 58 °C for 0 s using the fastest temperature ramp rate, similarly to the measurements shown in <a href="#sensors-23-09884-f004" class="html-fig">Figure 4</a>b.</p>
Figure 6
<p>Influence of varying Mg<sup>2+</sup> concentration on the degree of separation between FL(+) and FL(−) droplet populations in the standard and fast cycling protocols. We adjusted the ratio of the number of template DNA molecules to droplets to 7:10 for sample preparation. For each of (<b>a</b>–<b>h</b>), typical histograms obtained at each specified magnesium ion concentration ((<b>a</b>), 1.5 mM; (<b>b</b>), 2 mM; (<b>c</b>), 3 mM; (<b>d</b>), 4 mM; (<b>e</b>), 5 mM; (<b>f</b>), 6 mM; (<b>g</b>), 7 mM; and (<b>h</b>), 8 mM, respectively) are presented in the upper panel (depicted in red) for the standard cycling protocol and in the lower panel (depicted in blue) for the fast cycling protocol. The left cluster in each of the histograms indicates FL(−) droplets, and the right cluster indicates FL(+) droplets.</p>
Figure 7
<p>Influence of varying Mg<sup>2+</sup> concentration on the mean fluorescence intensities of FL(+) droplets in the standard and fast cycling protocols. We adjusted the ratio of the number of template DNA molecules to droplets to 7:10 for sample preparation. The error bars represent SEM values. Number of experiments: <span class="html-italic">n</span> = 5.</p>
Figure 8
<p>Histograms of hexachloro-fluorescein (HEX) fluorescence of thermocycled droplets for (<b>a</b>) no-template control (NTC) and (<b>b</b>) positive control experiments. The preset <span class="html-italic">λ</span> values, total droplet sample number, and percentage of fluorescence-positive FL(+) droplets in the distribution are indicated in the individual panels.</p>
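The reported runtime can be reassembled from the protocol quoted in the Figure 4 caption: an initial activation of approximately 40 s, 30 two-step cycles at 40 s per cycle, and a final extension of approximately 1 min.

```latex
% All durations taken from the Figure 4 caption above:
t_{\mathrm{total}} \approx \underbrace{40\ \mathrm{s}}_{\text{activation}}
 + \underbrace{30 \times 40\ \mathrm{s}}_{\text{30 cycles}}
 + \underbrace{60\ \mathrm{s}}_{\text{final extension}}
 = 1300\ \mathrm{s} \approx 21.7\ \mathrm{min} \approx 22\ \mathrm{min}
```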