Next Issue: Volume 24, February-2
Previous Issue: Volume 24, January-2
Sensors, Volume 24, Issue 3 (February-1 2024) – 328 articles

Cover Story:

This paper outlines a method for enhancing optical sensor sensitivity by combining self-image theory with a graphene oxide coating. The sensor, set to a length corresponding to the second self-image point (29.12 mm), was coated with an 80 µm/mL graphene oxide film using the Layer-by-Layer technique. Refractive index characterization of the sensor demonstrated a wavelength sensitivity of 200 ± 6 nm/RIU.

Comparisons between uncoated and graphene oxide-coated sensors measuring glucose concentrations from 25 to 200 mg/dL revealed an eightfold sensitivity improvement with one bilayer of Polyethyleneimine/graphene. The final graphene oxide-based sensor exhibited a sensitivity of 10.403 ± 0.004 pm/(mg/dL) with stability, indicated by a low standard deviation of 0.46 pm/min and a maximum theoretical resolution of 1.90 mg/dL. View this paper

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
12 pages, 1401 KiB  
Article
Optical Sensing Using Hybrid Multilayer Grating Metasurfaces with Customized Spectral Response
by Mahmoud H. Elshorbagy, Alexander Cuadrado and Javier Alda
Sensors 2024, 24(3), 1043; https://doi.org/10.3390/s24031043 - 5 Feb 2024
Cited by 1 | Viewed by 1283
Abstract
Customized metasurfaces allow for controlling optical responses in photonic and optoelectronic devices over a broad band. For sensing applications, the spectral response of an optical device can be narrowed to a few nanometers, which enhances its capabilities to detect environmental changes that shift the spectral transmission or reflection. These nanophotonic elements are key for the new generation of plasmonic optical sensors with custom responses and custom modes of operation. In our design, the metallic top electrode of a hydrogenated amorphous silicon thin-film solar cell is combined with a metasurface fabricated as a hybrid dielectric multilayer grating. This arrangement generates a plasmonic resonance on top of the active layer of the cell, which enhances the optoelectronic response of the system over a very narrow spectral band. Then, the solar cell becomes a sensor with a response that is highly dependent on the optical properties of the medium on top of it. The maximum sensitivity and figure of merit (FOM) are SB = 36,707 (mA/W)/RIU and ≈167 RIU−1, respectively, for the 560 nm wavelength using TE polarization. The optical response and the high sensing performance of this device make it suitable for detecting very tiny changes in gas media. This is of great importance for monitoring air quality and the composition of gases in closed atmospheres. Full article
(This article belongs to the Special Issue Optical Sensing and Technologies)
Figures:
Graphical abstract
Figure 1: (a) The layer structure of the aSiH solar cell with the top contact made of silver and a metasurface on top; (b) the operation mechanism of the device; (c) the electronic output of the sensor.
Figure 2: The absorption in the active layer as a function of the multilayer grating width and height for (a,c) a wavelength of 430 nm and (b,d) a wavelength of 560 nm. The numbers in red correspond to the designs presented in Table 2.
Figure 3: (a) The spectral response of the device for a wavelength of 430 nm, TE and TM; (b) the spectral response of the device for a wavelength of 560 nm, TE and TM.
Figure 4: Electric and magnetic field distributions at resonance wavelengths for the cases highlighted by red numbers in Figure 2. The wavelength and polarization type of each case are presented at the top of each map, while the case number is at the bottom of the map.
Figure 5: The spatial distribution of the normalized dissipated energy at the resonance wavelengths for the cases highlighted by red numbers in Figure 2. The wavelength and polarization type of each case are presented at the top of each map, while the case number is at the bottom of the map.
Figure 6: The responsivity of the device as a function of the refractive index change in the top medium for the selected optimized cases.
17 pages, 1414 KiB  
Article
Classifying Motorcyclist Behaviour with XGBoost Based on IMU Data
by Gerhard Navratil and Ioannis Giannopoulos
Sensors 2024, 24(3), 1042; https://doi.org/10.3390/s24031042 - 5 Feb 2024
Viewed by 1052
Abstract
Human behaviour detection is relevant in many fields. During navigational tasks it is an indicator for environmental conditions. Therefore, monitoring people while they move along the street network provides insights on the environment. This is especially true for motorcyclists, who have to observe aspects such as road surface conditions or traffic very carefully. We thus performed an experiment to check whether IMU data are sufficient to classify motorcyclist behaviour as a data source for later spatial and temporal analysis. The classification was done using XGBoost and proved successful for four out of the original five types of behaviour. A classification accuracy of approximately 80% was achieved. Only overtaking manoeuvres were not identified reliably. Full article
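As a rough illustration of the pipeline described above, the sketch below trains an XGBoost classifier on simple statistical features extracted from IMU windows. The window length, feature set, and synthetic labels are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch (not the authors' code): classifying riding behaviour from
# windowed IMU features with XGBoost. Features and labels are stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix
from xgboost import XGBClassifier

rng = np.random.default_rng(0)

def window_features(acc, gyr):
    """Aggregate one IMU window (N x 3 accelerometer, N x 3 gyroscope)."""
    stats = lambda x: np.concatenate([x.mean(0), x.std(0), x.min(0), x.max(0)])
    return np.concatenate([stats(acc), stats(gyr)])

# Synthetic stand-in data: 1000 windows of 2 s at 100 Hz, four behaviour
# classes (Cruise, Traffic, Fun, Wait), mirroring the reduced label set.
X = np.stack([window_features(rng.normal(size=(200, 3)), rng.normal(size=(200, 3)))
              for _ in range(1000)])
y = rng.integers(0, 4, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1,
                    objective="multi:softprob", eval_metric="mlogloss")
clf.fit(X_tr, y_tr)
print(accuracy_score(y_te, clf.predict(X_te)))
print(confusion_matrix(y_te, clf.predict(X_te)))
```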
(This article belongs to the Special Issue Advanced Sensing Technology for Intelligent Transportation Systems)
Figures:
Figure 1: Workflow of the experiment.
Figure 2: Route of the test rides: Start-1-2-3-4-5-6-3-4-1-End. Background: OpenStreetMap.
Figure 3: Snapshot from one of the videos.
Figure 4: Confusion Matrix (a) and Loss (b) with negative log-likelihood as a scoring function for all classes: Cruise (0), Traffic (1), Fun (2), Overtake (3), and Wait (4) using all available features.
Figure 5: Confusion Matrix (a) and Loss (b) with negative log-likelihood as a scoring function for the classes Cruise (0), Traffic (1), Fun (2), and Wait (3) using all available features.
Figure 6: SHAP feature importance.
19 pages, 1047 KiB  
Article
Assessment of Drivers’ Mental Workload by Multimodal Measures during Auditory-Based Dual-Task Driving Scenarios
by Jiaqi Huang, Qiliang Zhang, Tingru Zhang, Tieyan Wang and Da Tao
Sensors 2024, 24(3), 1041; https://doi.org/10.3390/s24031041 - 5 Feb 2024
Cited by 1 | Viewed by 1367
Abstract
Assessing drivers’ mental workload is crucial for reducing road accidents. This study examined drivers’ mental workload in a simulated auditory-based dual-task driving scenario, with driving tasks as the main task, and auditory-based N-back tasks as the secondary task. A total of three levels of mental workload (i.e., low, medium, high) were manipulated by varying the difficulty levels of the secondary task (i.e., no presence of secondary task, 1-back, 2-back). Multimodal measures, including a set of subjective measures, physiological measures, and behavioral performance measures, were collected during the experiment. The results showed that an increase in task difficulty led to increased subjective ratings of mental workload and a decrease in task performance for the secondary N-back tasks. Significant differences were observed across the different levels of mental workload in multimodal physiological measures, such as delta waves in EEG signals, fixation distance in eye movement signals, time- and frequency-domain measures in ECG signals, and skin conductance in EDA signals. In addition, four driving performance measures related to vehicle velocity and the deviation of pedal input and vehicle position also showed sensitivity to the changes in drivers’ mental workload. The findings from this study can contribute to a comprehensive understanding of effective measures for mental workload assessment in driving scenarios and to the development of smart driving systems for the accurate recognition of drivers’ mental states. Full article
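As an example of one of the physiological measures mentioned above, the sketch below estimates delta-band EEG power with Welch's method. The sampling rate, band edges, and synthetic signal are assumptions, not the authors' processing chain.

```python
# Minimal sketch (not the authors' pipeline): delta-band (0.5-4 Hz) power of
# one EEG channel via Welch's method. Sampling rate and band edges are assumed.
import numpy as np
from scipy.signal import welch

fs = 256                                               # assumed sampling rate, Hz
eeg = np.random.default_rng(0).normal(size=fs * 60)    # 1 min of stand-in data

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 4)         # 4 s segments
band = (freqs >= 0.5) & (freqs < 4.0)
delta_power = np.trapz(psd[band], freqs[band])         # integrate PSD over band
print(f"delta-band power: {delta_power:.3e} (arbitrary units^2)")
```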
Figures:
Figure 1: Experimental scenario and equipment for physiological signal acquisition.
Figure 2: Comparisons of sub-dimensions of NASA-TLX among three tasks with different difficulty levels. Error bars represent standard errors (* p < 0.05, ** p < 0.01).
Figure 3: Comparisons of the four EEG measures among three tasks with different difficulty levels. Error bars represent standard errors (* p < 0.05).
18 pages, 3370 KiB  
Article
Multi-Stage Learning Framework Using Convolutional Neural Network and Decision Tree-Based Classification for Detection of DDoS Pandemic Attacks in SDN-Based SCADA Systems
by Onur Polat, Muammer Türkoğlu, Hüseyin Polat, Saadin Oyucu, Hüseyin Üzen, Fahri Yardımcı and Ahmet Aksöz
Sensors 2024, 24(3), 1040; https://doi.org/10.3390/s24031040 - 5 Feb 2024
Cited by 5 | Viewed by 1582
Abstract
Supervisory Control and Data Acquisition (SCADA) systems, which play a critical role in monitoring, managing, and controlling industrial processes, face flexibility, scalability, and management difficulties arising from traditional network structures. Software-defined networking (SDN) offers a new opportunity to overcome the challenges traditional SCADA networks face, based on the concept of separating the control and data plane. Although integrating the SDN architecture into SCADA systems offers many advantages, it cannot address security concerns against cyber-attacks such as a distributed denial of service (DDoS). The fact that SDN has centralized management and programmability features causes attackers to carry out attacks that specifically target the SDN controller and data plane. If DDoS attacks against the SDN-based SCADA network are not detected and precautions are not taken, they can cause chaos and have terrible consequences. By detecting a possible DDoS attack at an early stage, security measures that can reduce the impact of the attack can be taken immediately, and the likelihood of being a direct victim of the attack decreases. This study proposes a multi-stage learning model using a 1-dimensional convolutional neural network (1D-CNN) and decision tree-based classification to detect DDoS attacks in SDN-based SCADA systems effectively. A new dataset containing various attack scenarios on a specific experimental network topology was created to be used in the training and testing phases of this model. According to the experimental results of this study, the proposed model achieved a 97.8% accuracy rate in DDoS-attack detection. The proposed multi-stage learning model shows that high-performance results can be achieved in detecting DDoS attacks against SDN-based SCADA systems. Full article
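A minimal sketch of the multi-stage idea (deep features from a 1D-CNN handed to a decision tree) is shown below. The layer sizes, input dimension, and synthetic traffic features are illustrative assumptions, not the proposed MS-LNet.

```python
# Minimal sketch (not the authors' MS-LNet): a 1D-CNN feature extractor whose
# flattened activations feed a decision tree, mimicking the multi-stage idea.
import numpy as np
import torch
import torch.nn as nn
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

class CNNFeatures(nn.Module):
    def __init__(self, n_features=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
            nn.Flatten(),
            nn.Linear(32 * (n_features // 4), 64), nn.ReLU(),
        )
    def forward(self, x):          # x: (batch, 1, n_features)
        return self.net(x)

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 1, 32)).astype("float32")   # stand-in flow features
y = rng.integers(0, 2, size=2000)                      # 0 = normal, 1 = DDoS

# Stage 1: extract deep features (here untrained, for illustration only; in
# practice the CNN would first be trained end-to-end on labelled traffic).
with torch.no_grad():
    feats = CNNFeatures().eval()(torch.from_numpy(X)).numpy()

# Stage 2: train a decision tree on the extracted features.
tree = DecisionTreeClassifier(max_depth=8, random_state=0).fit(feats[:1600], y[:1600])
print(accuracy_score(y[1600:], tree.predict(feats[1600:])))
```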
(This article belongs to the Special Issue Intelligent Solutions for Cybersecurity)
Figures:
Figure 1: Organization of this study.
Figure 2: SDN-based SCADA system architecture.
Figure 3: Experimental topology for network data collection of SDN-based SCADA system.
Figure 4: Confusion matrices of DT classifier.
Figure 5: MS-LNet model-implementation steps: (a) Training phase of the CNN model, (b) Training phase of DT classifier, (c) Testing phase of the proposed model.
Figure 6: Confusion matrices of 1D-CNN.
Figure 7: Confusion matrices of (a) Flatten, (b) FC1, and (c) FC2.
Figure 8: ROC diagram of (a) flatten, (b) FC1, and (c) FC2.
21 pages, 8321 KiB  
Article
Multilayer Perceptron-Based Error Compensation for Automatic On-the-Fly Camera Orientation Estimation Using a Single Vanishing Point from Road Lane
by Xingyou Li, Hyoungrae Kim, Vijay Kakani and Hakil Kim
Sensors 2024, 24(3), 1039; https://doi.org/10.3390/s24031039 - 5 Feb 2024
Viewed by 934
Abstract
This study introduces a multilayer perceptron (MLP) error compensation method for real-time camera orientation estimation, leveraging a single vanishing point and road lane lines within a steady-state framework. The research emphasizes cameras with a roll angle of 0°, predominant in autonomous vehicle contexts. The methodology estimates pitch and yaw angles using a single image and integrates two Kalman filter models with inputs from image points (u, v) and derived angles (pitch, yaw). Performance metrics, including avgE, minE, maxE, ssE, and Stdev, were utilized, testing the system in both simulator and real-vehicle environments. The outcomes indicate that our method notably enhances the accuracy of camera orientation estimations, consistently outpacing competing techniques across varied scenarios. This potency of the method is evident in its adaptability and precision, holding promise for advanced vehicle systems and real-world applications. Full article
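The error-compensation step can be sketched roughly as below: an MLP maps the vanishing-point offset from the image centre to pitch/yaw corrections. The network size and the synthetic error model are assumptions, not the authors' trained model.

```python
# Minimal sketch (not the authors' model): an MLP that maps the vanishing-point
# offset (Du, Dv) from the image centre to pitch/yaw error corrections (e_p, e_y).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
Duv = rng.uniform(-200, 200, size=(5000, 2))            # VP offset from CP, pixels
# Stand-in ground truth: small nonlinear residual error to be compensated.
err = 0.01 * Duv + 1e-5 * Duv**2 + rng.normal(scale=0.01, size=Duv.shape)

mlp = MLPRegressor(hidden_layer_sizes=(32, 32), activation="relu",
                   max_iter=2000, random_state=0).fit(Duv[:4000], err[:4000])

correction = mlp.predict(Duv[4000:])                    # predicted (e_p, e_y)
print("mean abs residual after compensation:",
      np.abs(correction - err[4000:]).mean(axis=0))
```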
(This article belongs to the Section Sensing and Imaging)
Figures:
Figure 1: The framework for the automatic on-the-fly camera orientation estimation system.
Figure 2: f(u,v), e_p for pitch performance with different camera FOVs.
Figure 3: f(u,v), e_p for yaw performance with different camera FOVs.
Figure 4: Camera orientation to road plane and coordinates definition for each coordinate system.
Figure 5: The MLP model to estimate the error with the VP: the two inputs are the distances from the CP of the image to the VP, denoted D_u and D_v in image coordinates; the two outputs are the pitch and yaw errors for error compensation, denoted e_p and e_y.
Figure 6: Loss curves: the training data comprise the distance from VP to CP and the related error; the validation data comprise different FOV data; networks are built with different layers.
Figure 7: Average inferencing time for each network.
Figure 8: The settings of the MORAI environment and camera.
Figure 9: Scene settings based on transformed angles and changing angles.
Figure 10: The settings of the real vehicle and camera.
Figure 11: Performance comparison of three algorithms with Scenario 1.
Figure 12: Performance comparison of three algorithms with Scenario 2.
Figure 13: Performance comparison of three algorithms with Scenario 3.
Figure 14: Performance comparison of three algorithms with Scenario 4.
Figure 15: Pitch performance comparison of three algorithms with Scenario 5.
Figure 16: Yaw performance comparison of three algorithms with Scenario 5.
Figure 17: Compensated pitch performance comparison with different camera FOVs.
Figure 18: Compensated yaw performance comparison with different camera FOVs.
Figure 19: Compensated pitch error comparison with different camera FOVs.
Figure 20: Compensated yaw error comparison with different camera FOVs.
Figure 21: Application sample image of the proposed method.
Figure 22: Distance estimation performance comparison between off-line calibration and the proposed method.
Figure 23: Limitations in terms of VP detection.
49 pages, 1329 KiB  
Review
A Survey on Open Radio Access Networks: Challenges, Research Directions, and Open Source Approaches
by Wilfrid Azariah, Fransiscus Asisi Bimo, Chih-Wei Lin, Ray-Guang Cheng, Navid Nikaein and Rittwik Jana
Sensors 2024, 24(3), 1038; https://doi.org/10.3390/s24031038 - 5 Feb 2024
Cited by 15 | Viewed by 4039
Abstract
The open radio access network (RAN) aims to bring openness and intelligence to the traditional closed and proprietary RAN technology and offer flexibility, performance improvement, and cost-efficiency in the RAN’s deployment and operation. This paper provides a comprehensive survey of the Open RAN development. We briefly summarize the RAN evolution history and the state-of-the-art technologies applied to Open RAN. The Open RAN-related projects, activities, and standardization are then discussed. We then summarize the challenges and future research directions required to support the Open RAN. Finally, we discuss some solutions to tackle these issues from the open source perspective. Full article
(This article belongs to the Special Issue Future Wireless Communication Networks (Volume II))
Figures:
Figure 1: RAN architecture per generation.
Figure 2: Comparison between a non-disaggregated and disaggregated network architecture.
Figure 3: Vertical and horizontal functional split of Open RAN.
Figure 4: Three types of RIC.
Figure 5: O-RAN Alliance’s WGs and their objectives.
Figure 6: OSC’s workflow.
Figure 7: Comparison between 3GPP and O-RAN network functions and interfaces.
Figure 8: Comparison between 3GPP and O-RAN architecture.
Figure 9: Logical architecture of O-RAN.
Figure 10: O-RAN’s SMO framework.
Figure 11: O-RAN’s OAM architecture.
Figure 12: Near-RT RIC internal architecture.
Figure 13: O-CU architecture.
Figure 14: O-DU architecture.
Figure 15: O-DU high architecture in H release.
Figure 16: O-DU low processing blocks.
Figure 17: PDSCH in lower layer split 7.2x.
Figure 18: Fronthaul data flows.
Figure 19: Open source 4G/5G software license types.
Figure 20: Logical resources of OSC Labs.
25 pages, 3321 KiB  
Article
Decentralized IoT Data Authentication with Signature Aggregation
by Jay Bojič Burgos and Matevž Pustišek
Sensors 2024, 24(3), 1037; https://doi.org/10.3390/s24031037 - 5 Feb 2024
Cited by 1 | Viewed by 1237
Abstract
The rapid expansion of the Internet of Things (IoT) has introduced significant challenges in data authentication, necessitating a balance between scalability and security. Traditional approaches often rely on third parties, while blockchain-based solutions face computational and storage bottlenecks. Our novel framework employs edge aggregating servers and Ethereum Layer 2 rollups, offering a scalable and secure IoT data authentication solution that reduces the need for continuous, direct interaction between IoT devices and the blockchain. We utilize and compare the Nova and Risc0 proving systems for authenticating batches of IoT data by verifying signatures, ensuring data integrity and privacy. Notably, the Nova prover significantly outperforms Risc0 in proving and verification times; for instance, with 10 signatures, Nova takes 3.62 s compared to Risc0’s 369 s, with this performance gap widening as the number of signatures in a batch increases. Our framework further enhances data verifiability and trust by recording essential information on L2 rollups, creating an immutable and transparent record of authentication. The use of Layer 2 rollups atop a permissionless blockchain like Ethereum effectively reduces on-chain storage costs by approximately 48 to 57 times compared to direct Ethereum use, addressing cost bottlenecks efficiently. Full article
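The headline numbers above can be sanity-checked with simple arithmetic; the inputs below are taken from the abstract and everything else is derived from them.

```python
# Back-of-the-envelope check of the figures quoted in the abstract.
nova_total_s, risc0_total_s, n_sigs = 3.62, 369.0, 10

print(f"Nova:  {nova_total_s / n_sigs:.3f} s per signature")    # ~0.36 s
print(f"Risc0: {risc0_total_s / n_sigs:.1f} s per signature")   # ~36.9 s
print(f"speed-up: ~{risc0_total_s / nova_total_s:.0f}x")        # ~102x

# On-chain storage: the paper reports that L2 rollups cut costs by roughly
# 48-57x versus writing the same data directly to Ethereum L1.
for factor in (48, 57):
    print(f"L2 cost is ~{100 / factor:.1f}% of the equivalent L1 cost (factor {factor}x)")
```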
(This article belongs to the Special Issue Recent Trends and Advances in Telecommunications and Sensing)
Figures:
Figure 1: Proof aggregation.
Figure 2: Incremental verifiable computation.
Figure 3: Fully on-chain storage.
Figure 4: Divided storage (on-chain + off-chain database).
Figure 5: Nova recursive prover.
Figure 6: Overview of the whole solution (smart contract + off-chain aggregation).
Figure 7: Proving time for Nova (recursive + compressed) and Risc0.
Figure 8: Nova proving time for the recursive and the compressed SNARK.
Figure 9: Verification time per signature for Nova (recursive + compressed) and Risc0.
Figure 10: Proving time per signature for Nova and Risc0.
Figure 11: Cost comparison between using Layer 1 only and Layered (L1 + L2) for writing batch authenticating data.
14 pages, 7711 KiB  
Article
Feasibility Study for Monitoring an Ultrasonic System Using Structurally Integrated Piezoceramics
by Jonas M. Werner, Tim Krüger and Welf-Guntram Drossel
Sensors 2024, 24(3), 1036; https://doi.org/10.3390/s24031036 - 5 Feb 2024
Viewed by 1027
Abstract
This paper presents a new approach to monitoring ultrasonic systems using structurally integrated piezoceramics. These are integrated into the sonotrode at different points and with different orientations. The procedure for integrating the piezoceramics into the sonotrode and their performance is experimentally investigated. We examine whether the measured signal can be used to determine the optimal operating frequency of the ultrasonic system, if integrating several piezoceramics enables discernment of the current vibration shape, and if the piezoceramics can withstand the high strains caused by the vibrations in a frequency range of approximately 20–25 kHz. The signals from the piezoceramic sensors are compared to the real-time displacement at different points of the sonotrode using a 3D laser scanning vibrometer. To evaluate the performance of the sensors, different kinds of excitation of the ultrasonic system are chosen. Full article
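Determining the optimal operating frequency from a sensor signal, as described above, amounts to locating a spectral peak in the 20–25 kHz band. The sketch below does this on a synthetic signal; the sampling rate and signal model are assumptions.

```python
# Minimal sketch (not the authors' processing): locating the resonance of the
# ultrasonic system from a piezoceramic sensor signal recorded during a sweep.
import numpy as np

fs = 200_000                                # assumed sampling rate, Hz
t = np.arange(0, 0.1, 1 / fs)
# Stand-in sensor signal: a resonance near 22 kHz buried in noise.
signal = (np.sin(2 * np.pi * 22_000 * t)
          + 0.5 * np.random.default_rng(0).normal(size=t.size))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

band = (freqs >= 20_000) & (freqs <= 25_000)      # range examined in the paper
f_res = freqs[band][np.argmax(spectrum[band])]
print(f"estimated resonance / operating frequency: {f_res / 1e3:.2f} kHz")
```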
Figures:
Figure 1: Manufacturing process for the integration of the piezoceramics and their dimensions.
Figure 2: Cross-section and polarization of the SIP.
Figure 3: Close-up of the SIP.
Figure 4: Numerical simulation of the ultrasonic system.
Figure 5: Experimental setup.
Figure 6: Impedance of the system.
Figure 7: Longitudinal vibration shape of the ultrasonic system.
Figure 8: Bending vibration shape of the ultrasonic system.
Figure 9: Response function of the 3D laser scanning vibrometer in all three spatial directions, specific vibration modes are marked.
Figure 10: (a) Transfer function of the SIP, specific vibration modes are marked; (b) close-up of the relevant frequency spectrum.
Figure 11: Noise of the sensors.
Figure 12: Comparison of four sensors.
Figure 13: Comparison of the rms value of the signals of several SIPs over time.
Figure 14: Signal of FVB at different points in time.
27 pages, 1891 KiB  
Article
Enhancing Security and Flexibility in the Industrial Internet of Things: Blockchain-Based Data Sharing and Privacy Protection
by Weiming Tong, Luyao Yang, Zhongwei Li, Xianji Jin and Liguo Tan
Sensors 2024, 24(3), 1035; https://doi.org/10.3390/s24031035 - 5 Feb 2024
Viewed by 1503
Abstract
To address the complexities, inflexibility, and security concerns in traditional data sharing models of the Industrial Internet of Things (IIoT), we propose a blockchain-based data sharing and privacy protection (BBDSPP) scheme for IIoT. Initially, we characterize and assign values to attributes, and employ a weighted threshold secret sharing scheme to refine the data sharing approach. This enables flexible combinations of permissions, ensuring the adaptability of data sharing. Subsequently, based on non-interactive zero-knowledge proof technology, we design a lightweight identity proof protocol using attribute values. This protocol pre-verifies the identity of data accessors, ensuring that only legitimate terminal members can access data within the system, while also protecting the privacy of the members. Finally, we utilize the InterPlanetary File System (IPFS) to store encrypted shared resources, effectively addressing the issue of low storage efficiency in traditional blockchain systems. Theoretical analysis and testing of the computational overhead of our scheme demonstrate that, while ensuring performance, our scheme has the smallest total computational load compared to the other five schemes. Experimental results indicate that our scheme effectively addresses the shortcomings of existing solutions in areas such as identity authentication, privacy protection, and flexible combination of permissions, demonstrating a good performance and strong feasibility. Full article
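The weighted threshold idea can be illustrated with a classic Shamir construction in which a participant of weight w simply holds w shares. The field size, participant names, weights, and threshold below are assumptions, and the paper's actual attribute-based scheme differs in its details.

```python
# Minimal sketch (not the paper's scheme): weighted threshold secret sharing
# built on Shamir's scheme over a prime field.
import random

P = 2**127 - 1                      # a Mersenne prime used as the field modulus

def make_shares(secret, threshold, n_shares):
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares):
    # Lagrange interpolation of the sharing polynomial at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

secret = 123456789
# Hypothetical participants with attribute-derived weights; threshold 4 of 6.
weights = {"plc": 3, "gateway": 2, "sensor": 1}
shares = make_shares(secret, threshold=4, n_shares=sum(weights.values()))

it = iter(shares)
held = {name: [next(it) for _ in range(w)] for name, w in weights.items()}

# plc + gateway together hold weight 5 >= 4, so they can reconstruct.
assert reconstruct(held["plc"] + held["gateway"]) == secret
# The sensor alone (weight 1) cannot: fewer than 4 shares reveal nothing.
```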
(This article belongs to the Section Internet of Things)
Figures:
Figure 1: Comparison between Interactive zero-knowledge proofs and Non-interactive zero-knowledge proofs.
Figure 2: The BBDSPP scheme’s system model.
Figure 3: Blockchain data storage structure.
Figure 4: Blockchain network creation process.
Figure 5: Comparative analysis of computational time in the key generation phase.
Figure 6: Comparative analysis of computational time in the authentication phase.
Figure 7: Comparative analysis of computational time in the encryption phase.
Figure 8: Comparative analysis of computational time in the decryption phase.
Figure 9: Comparative analysis of total calculation time for each scheme.
16 pages, 6928 KiB  
Article
Improving Vehicle Heading Angle Accuracy Based on Dual-Antenna GNSS/INS/Barometer Integration Using Adaptive Kalman Filter
by Hongyuan Jiao, Xiangbo Xu, Shao Chen, Ningyan Guo and Zhibin Yu
Sensors 2024, 24(3), 1034; https://doi.org/10.3390/s24031034 - 5 Feb 2024
Cited by 1 | Viewed by 1504
Abstract
High-accuracy heading angle is significant for estimating autonomous vehicle attitude. By integrating GNSS (Global Navigation Satellite System) dual antennas, INS (Inertial Navigation System), and a barometer, a GNSS/INS/Barometer fusion method is proposed to improve vehicle heading angle accuracy. An adaptive Kalman filter (AKF) is designed to fuse the INS error and the GNSS measurement. A random sample consensus (RANSAC) method is proposed to improve the initial heading angle accuracy applied to the INS update. The GNSS heading angle obtained by a dual-antenna orientation algorithm is additionally augmented to the measurement variable. Furthermore, the kinematic constraint of zero velocity in the lateral and vertical directions of vehicle movement is used to enhance the accuracy of the measurement model. The heading errors in the open and occluded environment are 0.5418° (RMS) and 0.636° (RMS), which represent reductions of 37.62% and 47.37% compared to the extended Kalman filter (EKF) method, respectively. The experimental results demonstrate that the proposed method effectively improves the vehicle heading angle accuracy. Full article
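For orientation, a single predict/update cycle of a (non-adaptive) linear Kalman filter fusing an INS-propagated heading with a GNSS dual-antenna heading measurement is sketched below. The one-dimensional state and all noise values are simplifying assumptions, not the paper's error-state adaptive filter.

```python
# Minimal sketch (not the authors' filter): one Kalman predict/update cycle.
import numpy as np

x = np.array([[10.0]])        # state: heading error estimate (deg)
P = np.array([[4.0]])         # state covariance
F = np.array([[1.0]])         # state transition (random-walk heading error)
Q = np.array([[0.01]])        # process noise
H = np.array([[1.0]])         # measurement model: GNSS observes the heading error
R = np.array([[0.25]])        # measurement noise (deg^2)

z = np.array([[9.2]])         # GNSS dual-antenna heading error measurement

# Predict
x = F @ x
P = F @ P @ F.T + Q

# Update (an adaptive KF would re-scale R and/or Q from the innovation statistics)
innovation = z - H @ x
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x = x + K @ innovation
P = (np.eye(1) - K @ H) @ P

print("fused heading error estimate:", x[0, 0], "deg; covariance:", P[0, 0])
```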
(This article belongs to the Special Issue Advanced Inertial Sensors, Navigation, and Fusion)
Figures:
Figure 1: The architecture of the dual-antenna GNSS/INS/Barometer integrated navigation system. *_G represents values measured by GNSS; δ represents the error value.
Figure 2: Diagram of the coordinate systems.
Figure 3: The experiment vehicle.
Figure 4: Acquisition of experimental data in the open and occluded environment: (a) Experiment with real-time scenario in the playground; (b) Test vehicle through the building tunnel; (c) Test vehicle trajectory in the open environment; (d) Test vehicle trajectory in the occluded environment.
Figure 5: HDOP during the open and occluded environment experiment: (a) The value of HDOP during the open environment experiment; (b) The value of HDOP during the occluded environment experiment.
Figure 6: Acquisition of original IMU data in the open and occluded environments: (a) Original triaxial accelerometer data in the playground; (b) Original triaxial gyroscope data in the playground; (c) Original triaxial accelerometer data in the occluded environment; (d) Original triaxial gyroscope data in the occluded environment.
Figure 7: The estimation of heading angle during the open and occluded environment experiments: (a) The estimation of heading angle in the open environment; (b) The estimation of heading angle in the occluded environment.
17 pages, 10850 KiB  
Article
Small and Micro-Water Quality Monitoring Based on the Integration of a Full-Space Real 3D Model and IoT
by Yuanrong He, Yujie Yang, Tingting He, Yangfeng Lai, Yudong He and Bingning Chen
Sensors 2024, 24(3), 1033; https://doi.org/10.3390/s24031033 - 5 Feb 2024
Cited by 1 | Viewed by 1263
Abstract
In order to address the challenges of small and micro-water pollution in parks and the low level of 3D visualization of water quality monitoring systems, this research paper proposes a novel wireless remote water quality monitoring system that combines the Internet of Things (IoT) and a 3D model of reality. To begin with, the construction of a comprehensive 3D model relies on various technologies, including unmanned aerial vehicle (UAV) tilt photography, 3D laser scanning, unmanned ship measurement, and close-range photogrammetry. These techniques are utilized to capture the park’s geographical terrain, natural resources, and ecological environment, which are then integrated into the three-dimensional model. Secondly, GNSS positioning, multi-source water quality sensors, NB-IoT wireless communication, and video surveillance are combined with IoT technologies to enable wireless remote real-time monitoring of small and micro-water bodies. Finally, a high-precision underwater, indoor, and outdoor full-space real-scene three-dimensional visual water quality monitoring system integrated with IoT is constructed. The integrated system significantly reduces water pollution in small and micro-water bodies and optimizes the water quality monitoring system. Full article
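On the IoT side, one reading from the multi-source water quality sensors plus a GNSS fix ultimately has to be serialised for upload over NB-IoT; a minimal sketch is below, with field names, units, and coordinates as illustrative assumptions rather than the authors' firmware format.

```python
# Minimal sketch (not the authors' firmware): packing one multi-sensor water
# quality reading and a GNSS fix into a JSON payload for NB-IoT upload.
import json
import time

def build_payload(device_id, lat, lon, ph, turbidity_ntu, temp_c, tds_ppm):
    return json.dumps({
        "device": device_id,
        "ts": int(time.time()),
        "gnss": {"lat": lat, "lon": lon},
        "water": {"ph": ph, "turbidity_ntu": turbidity_ntu,
                  "temp_c": temp_c, "tds_ppm": tds_ppm},
    })

payload = build_payload("wq-node-01", 24.4798, 118.0894, 7.2, 3.8, 21.5, 310)
print(payload)   # in the real system this string would be handed to the NB-IoT module
```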
(This article belongs to the Section Internet of Things)
Figures:
Figure 1: Water quality system architecture.
Figure 2: Water quality monitoring hardware terminal design block diagram.
Figure 3: Power circuit diagram.
Figure 4: Water quality acquisition circuit diagram.
Figure 5: NB-IoT module circuit diagram.
Figure 6: GNSS positioning module circuit diagram.
Figure 7: Main program flow diagram.
Figure 8: Cloud platform page diagram.
Figure 9: The technical flow chart of the full-space real 3D model construction.
Figure 10: Outdoor real 3D model.
Figure 11: Interior real 3D model.
Figure 12: Water quality monitoring device 3D model.
Figure 13: Underwater terrain real 3D model.
Figure 14: Results of full-space 3D model fusion.
Figure 15: Functional structure of the system.
Figure 16: Main interface of the system.
Figure 17: Three-dimensional module function diagram.
Figure 18: Function diagram of water quality monitoring module.
Figure 19: Function diagram of video surveillance and user management module.
0 pages, 2977 KiB  
Article
Estimation of Muscle Forces of Lower Limbs Based on CNN–LSTM Neural Network and Wearable Sensor System
by Kun Liu, Yong Liu, Shuo Ji, Chi Gao and Jun Fu
Sensors 2024, 24(3), 1032; https://doi.org/10.3390/s24031032 - 5 Feb 2024
Cited by 5 | Viewed by 1420
Abstract
Estimation of in vivo muscle forces during human motion is important for understanding human motion control mechanisms and joint mechanics. This paper combined the advantages of the convolutional neural network (CNN) and long-short-term memory (LSTM) and proposed a novel muscle force estimation method based on CNN–LSTM. A wearable sensor system was also developed to collect the angles and angular velocities of the hip, knee, and ankle joints in the sagittal plane during walking, and the collected kinematic data were used as the input for the neural network model. In this paper, the muscle forces calculated using OpenSim based on the Static Optimization (SO) method were used as the standard value to train the neural network model. Four lower limb muscles of the left leg, including gluteus maximus (GM), rectus femoris (RF), gastrocnemius (GAST), and soleus (SOL), were selected as the studied objects in this paper. The experiment results showed that compared to the standard CNN and the standard LSTM, the CNN–LSTM performed better in muscle forces estimation under slow (1.2 m/s), medium (1.5 m/s), and fast walking speeds (1.8 m/s). The average correlation coefficients between true and estimated values of four muscle forces under slow, medium, and fast walking speeds were 0.9801, 0.9829, and 0.9809, respectively. The average correlation coefficients had smaller fluctuations under different walking speeds, which indicated that the model had good robustness. The external testing experiment showed that the CNN–LSTM also had good generalization. The model performed well when the estimated object was not included in the training sample. This article proposed a convenient method for estimating muscle forces, which could provide theoretical assistance for the quantitative analysis of human motion and muscle injury. The method has established the relationship between joint kinematic signals and muscle forces during walking based on a neural network model; compared to the SO method to calculate muscle forces in OpenSim, it is more convenient and efficient in clinical analysis or engineering applications. Full article
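The model family can be sketched as below: a 1D-CNN front end followed by an LSTM and a linear head mapping joint kinematics to four muscle forces. The channel counts, window length, and layer sizes are assumptions, not the authors' trained network.

```python
# Minimal sketch (not the authors' network): a CNN-LSTM that maps a window of
# joint kinematics (hip/knee/ankle angles and angular velocities, 6 channels)
# to 4 muscle forces (GM, RF, GAST, SOL).
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, in_channels=6, n_muscles=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_muscles)

    def forward(self, x):            # x: (batch, 6, time)
        feats = self.cnn(x)          # (batch, 64, time)
        out, _ = self.lstm(feats.transpose(1, 2))   # (batch, time, 64)
        return self.head(out[:, -1])                # force estimate at window end

model = CNNLSTM()
window = torch.randn(8, 6, 100)      # batch of 8 one-second windows at 100 Hz
print(model(window).shape)           # torch.Size([8, 4])
```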
(This article belongs to the Section Wearables)
Figures:
Figure 1: Structure of convolutional neural network–long-short-term memory CNN–LSTM.
Figure 2: Experiment with the optical motion capture system (OMCS) and the self-developed wearable inertial sensor system.
Figure 3: A typical group of kinematic parameters, where (a) represented the joint angles and (b) represented the joint angular velocities. The signals acquired by the self-developed sensor system are represented by the dotted lines, and those acquired by the optical motion capture system (OMCS) are represented by the solid lines.
Figure 4: Predicted muscle forces at the slow walking speed of one gait cycle.
Figure 5: Predicted muscle forces at the medium walking speed of one gait cycle.
Figure 6: Predicted muscle forces at the fast walking speed of one gait cycle.
Figure 7: Root mean square percent (RMSE%) of the predicted muscle strength at slow speed using convolutional neural network (CNN), long-short-term memory (LSTM), and CNN–LSTM.
Figure 8: Root mean square percent (RMSE%) of the predicted muscle strength at medium speed using convolutional neural network (CNN), long-short-term memory (LSTM), and CNN–LSTM.
Figure 9: Root mean square percent (RMSE%) of the predicted muscle strength at the fast speed using convolutional neural network (CNN), long-short-term memory (LSTM), and CNN–LSTM.
Figure 10: RMSE% of muscle forces estimated by convolutional neural network–long-short-term memory (CNN–LSTM) when using dataset 1.
Figure 11: RMSE% of muscle forces estimated by convolutional neural network–long-short-term memory (CNN–LSTM) when using dataset 2.
12 pages, 6914 KiB  
Communication
Adaptive Segmentation Algorithm for Subtle Defect Images on the Surface of Magnetic Ring Using 2D-Gabor Filter Bank
by Yihui Li, Manling Ge, Shiying Zhang and Kaiwei Wang
Sensors 2024, 24(3), 1031; https://doi.org/10.3390/s24031031 - 5 Feb 2024
Cited by 1 | Viewed by 854
Abstract
In order to realize the unsupervised segmentation of subtle defect images on the surface of small magnetic rings and improve the segmentation accuracy and computational efficiency, here, an adaptive threshold segmentation method is proposed based on the improved multi-scale and multi-directional 2D-Gabor filter bank. Firstly, the improved multi-scale and multi-directional 2D-Gabor filter bank was used to filter and reduce the noise on the defect image, suppress the noise pollution inside the target area and the background area, and enhance the difference between the magnetic ring defect and the background. Secondly, this study analyzed the grayscale statistical characteristics of the processed image; the segmentation threshold was constructed according to the gray statistical law of the image; and the adaptive segmentation of subtle defect images on the surface of small magnetic rings was realized. Finally, a classifier based on a BP neural network is designed to classify the scar images and crack images determined by different threshold segmentation methods. The classification accuracies of the iterative method, the OTSU method, the maximum entropy method, and the adaptive threshold segmentation method are, respectively, 85%, 87.5%, 95%, and 97.5%. The adaptive threshold segmentation method proposed in this paper has the highest classification accuracy. Through verification and comparison, the proposed algorithm can segment defects quickly and accurately and suppress noise interference effectively. It is better than other traditional image threshold segmentation methods, validated by both segmentation accuracy and computational efficiency. At the same time, the real-time performance of our algorithm was evaluated on the advanced SEED-DVS8168 platform. Full article
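A rough sketch of the approach, using OpenCV's Gabor kernels and a simple mean-plus-k-standard-deviations threshold on the accumulated response, is given below. The kernel parameters, the input file name, and the threshold rule are assumptions rather than the paper's exact construction.

```python
# Minimal sketch (not the authors' algorithm): a multi-scale, multi-directional
# 2D-Gabor filter bank whose accumulated response is thresholded from the
# image's grey-level statistics.
import cv2
import numpy as np

img = cv2.imread("magnetic_ring.png", cv2.IMREAD_GRAYSCALE)   # assumed input file
img = img.astype(np.float32) / 255.0

accum = np.zeros_like(img)
for ksize in (9, 15, 21):                                # scales
    for theta in np.arange(0, np.pi, np.pi / 8):         # 8 orientations
        kern = cv2.getGaborKernel((ksize, ksize), sigma=ksize / 4.0,
                                  theta=theta, lambd=ksize / 2.0,
                                  gamma=0.5, psi=0)
        accum += cv2.filter2D(img, cv2.CV_32F, kern)

# Grey-level statistics of the filtered image drive the adaptive threshold.
thr = accum.mean() + 2.0 * accum.std()
defect_mask = (accum > thr).astype(np.uint8) * 255
cv2.imwrite("defect_mask.png", defect_mask)
```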
(This article belongs to the Section Physical Sensors)
Figures:
Figure 1: Flow diagram of image segmentation in this study.
Figure 2: Original magnetic ring image acquisition system.
Figure 3: Physical imaging comparison: (a) the ordinary camera; (b) the CCD camera; (c) area CCD camera + microscope.
Figure 4: Effect diagram of real part value of 2D-Gabor filter with different scales and directions.
Figure 5: Two-dimensional (2D)-Gabor filtering images with subtle defect at different scales and directions.
Figure 6: The initial image and processed images by 2D-Gabor filters: (a) initial image; (b) filter image accumulation of 4 × 8 filter banks; (c) filter image accumulation of improved filter banks.
Figure 7: The corresponding 3D grayscale images: (a) initial image; (b) filter image accumulation of 4 × 8 filter banks; (c) filter image accumulation of improved filter banks.
Figure 8: Improved 2D-Gabor filter bank processing image and gray histogram: (a) scar; (b) crack.
Figure 9: Segmentation results of traditional methods: (a) iterative; (b) OTSU; (c) maximum entropy.
Figure 10: Thresholding segmentation results of scar defect images: (a) adaptive threshold segmentation result; (b) area pick-up result; (c) defect image.
Figure 11: Classification accuracy under the number of neurons in different hidden layers.
Figure 12: Structural diagram of magnetic ring detection system.
Figure 13: Real-time display of processing results.
16 pages, 330 KiB  
Article
Armed with Faster Crypto: Optimizing Elliptic Curve Cryptography for ARM Processors
by Ruben De Smet, Robrecht Blancquaert, Tom Godden, Kris Steenhaut and An Braeken
Sensors 2024, 24(3), 1030; https://doi.org/10.3390/s24031030 - 5 Feb 2024
Viewed by 1457
Abstract
Elliptic curve cryptography is a widely deployed technology for securing digital communication. It is the basis of many cryptographic primitives such as key agreement protocols, digital signatures, and zero-knowledge proofs. Fast elliptic curve cryptography relies on heavily optimised modular arithmetic operations, which are often tailored to specific micro-architectures. In this article, we study and evaluate optimisations of the popular elliptic curve Curve25519 for ARM processors. We specifically target the ARM NEON single instruction, multiple data (SIMD) architecture, which is a popular architecture for modern smartphones. We introduce a novel representation for 128-bit NEON SIMD vectors, optimised for SIMD parallelisation, to accelerate elliptic curve operations significantly. Leveraging this representation, we implement an extended twisted Edwards curve Curve25519 back-end within the popular Rust library “curve25519-dalek”. We extensively evaluate our implementation across multiple ARM devices using both cryptographic benchmarks and the benchmark suite available for the Signal protocol. Our findings demonstrate a substantial back-end speed-up of at least 20% for ARM NEON, along with a noteworthy speed improvement of at least 15% for benchmarked Signal functions. Full article
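The limb representation that such back-ends parallelise can be illustrated in a few lines: a field element of GF(2^255 - 19) split into ten limbs of alternating 26/25 bits (radix 2^25.5), as in the well-known "ref10" layout. The exact packing of the paper's NEON vectors differs, so treat this only as background.

```python
# Minimal sketch (not the library's representation): splitting a field element
# into ten limbs with alternating 26/25-bit widths and recombining it.
P = 2**255 - 19
LIMB_BITS = [26, 25, 26, 25, 26, 25, 26, 25, 26, 25]   # sums to 255 bits

def to_limbs(x):
    limbs = []
    for bits in LIMB_BITS:
        limbs.append(x & ((1 << bits) - 1))
        x >>= bits
    return limbs

def from_limbs(limbs):
    x, shift = 0, 0
    for limb, bits in zip(limbs, LIMB_BITS):
        x |= limb << shift
        shift += bits
    return x % P

a = 0x1234567890ABCDEF1234567890ABCDEF1234567890ABCDEF1234567890ABCDEF % P
assert from_limbs(to_limbs(a)) == a        # round-trip check
print(to_limbs(a))
```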
(This article belongs to the Section Sensor Networks)
Figures:
Figure 1: Addition of points on an elliptic curve. The full line indicates the drawn line to get −R. The dashed line indicates the reflection along the x-axis to get R.
Figure 2: Distribution of one field element’s limbs over a vector: of the ten limbs of the first field element, two are put in specific sub-vectors in the first SIMD vector. These sub-vectors are made up of the limbs plus extra bits used to exceed the bound of the field order.
Figure 3: Distribution of field element limbs over vectors for the representation of an elliptic curve point. The first vector holds the first two limbs of field elements (A, B, C, D), the second vector the next two, and so on. The alternating limbs are distributed in such a way as to make operations such as multiplication easier.
Figure 4: Split of field element limbs over vectors, which prepares for a multiplication.
Figure 5: Splitting of vectors.
Figure 6: Split of 128-bit ARM NEON vector into 64-bit vectors in preparation for multiplication.
Figure 7: Constant-time variable-base scalar multiplication benchmarks.
Figure 8: Decrypt UUID benchmarks.
19 pages, 1426 KiB  
Article
Implementation of Automated Guided Vehicles for the Automation of Selected Processes and Elimination of Collisions between Handling Equipment and Humans in the Warehouse
by Iveta Kubasakova, Jaroslava Kubanova, Dominik Benco and Dominika Kadlecová
Sensors 2024, 24(3), 1029; https://doi.org/10.3390/s24031029 - 5 Feb 2024
Cited by 6 | Viewed by 2961
Abstract
This article deals with the implementation of automated guided vehicles (AGVs) in a selected company. The aim is to analyse the use of AGVs in our country and abroad and to provide information about the use of AGVs in other countries and operations other than ours. The result of the analysis was a literature review, which points out the individual advantages and disadvantages of the use of AGVs in companies. Within the review we also address the issue of AMR vehicles, due to the modernization of existing AGVs in the company, or the replacement of AMRs with AGVs in general. Our aim is to show why AGVs can replace human work. This is mainly because of the continuous increase in the wages of employees, because of safety, but also because of the modernization of the selected company. The company has positive experience of AGVs in other sites. We wanted to point out a higher form of automation, and how it would be possible to use AMR vehicles for the same work as AGVs. In the company, we have identified jobs where we would like to introduce AGVs or AMR vehicles. Consequently, we chose the AGV from CEIT operated by magnetic tape and the AMR from SEER as an example. Based on studies, the demand for AGVs is expected to increase by up to 17% in 2019–2024. Therefore, the company is looking into the issue of the implementation of AGVs at multiple sites. The question which remains is the economic return and the possibility of investing in the automation of processes in the company, which we discuss in more detail in the conclusion of the article and in the research. The article describes the exact processes for AGVs, their workload, and also the routes for AGVs, such as loading/unloading points, stopping points, checkpoints, junctions with other AGVs, charging stations, and field elements, as well as their speed, frequency and the possibility of collision with other AGVs. Our research shows that by applying the new technology, the company will save a large amount of money on employee wages. The purchase of two AGVs will cost the company EUR 49,000, while the original technology used in the company cost EUR 79,200 annually. The payback period for such an investment is 8 months. The benefits of implementing AGVs are evaluated in the last section of this paper, where both the economic and time requirements of the different proposals are included. This section also includes recommendations for improving specific parts of the enterprise. Full article
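The payback figure quoted above follows from simple arithmetic on the stated costs:

```python
# Back-of-the-envelope check of the payback figures quoted in the abstract
# (inputs are taken from the text; the arithmetic is the only thing added).
agv_investment_eur = 49_000        # purchase of two AGVs
annual_labour_cost_eur = 79_200    # annual cost of the technology/work replaced

payback_years = agv_investment_eur / annual_labour_cost_eur
print(f"payback: {payback_years:.2f} years = {payback_years * 12:.1f} months")
# ~0.62 years, roughly 7.4 months, consistent with the ~8 months stated.
```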
(This article belongs to the Special Issue Sensors and Systems for Automotive and Road Safety (Volume 2))
Figures:
Figure 1: Schematic diagram of material transport from the warehouse to the assembly line.
Figure 2: Automation at the selected location.
Figure 3: The workload.
Figure 4: AGV from SEER, type AMB-J [33].
Figure 5: Scheme for the introduction of AMR/AGV vehicles.
22 pages, 13534 KiB  
Article
Open-Circuit Fault Diagnosis of T-Type Three-Level Inverter Based on Knowledge Reduction
by Xiaojuan Chen and Zhaohua Zhang
Sensors 2024, 24(3), 1028; https://doi.org/10.3390/s24031028 - 5 Feb 2024
Cited by 2 | Viewed by 1094
Abstract
Compared with traditional two-level inverters, multilevel inverters have many solid-state switches and complex composition methods. Therefore, diagnosing and treating inverter faults is a prerequisite for the reliable and efficient operation of the inverter. Based on the idea of intelligent complementary fusion, this paper combines the genetic algorithm–binary granulation matrix knowledge-reduction method with the extreme learning machine network to propose a fault-diagnosis method for multi-tube open-circuit faults in T-type three-level inverters. First, the fault characteristics of power devices at different locations of T-type three-level inverters are analyzed, and the inverter output power and its harmonic components are extracted as the basis for power device fault diagnosis. Second, the genetic algorithm–binary granularity matrix knowledge-reduction method is used for optimization to obtain the minimum attribute set required to distinguish the state transitions in various fault cases. Finally, the kernel attribute set is utilized to construct extreme learning machine subclassifiers with corresponding granularity. The experimental results show that the classification accuracy after attribute reduction is higher than that of all subclassifiers under different attribute sets, reflecting the advantages of attribute reduction and the complementarity of different intelligent diagnosis methods, which have stronger fault-diagnosis accuracy and generalization ability compared with the existing methods and provides a new way for hybrid intelligent diagnosis. Full article
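The extreme learning machine used as a subclassifier has a particularly compact form: a random, fixed hidden layer whose output weights are solved in closed form by a pseudo-inverse. The sketch below shows the generic algorithm on synthetic data; the feature and class counts are assumptions, not the paper's reduced attribute set.

```python
# Minimal sketch (not the authors' classifier): a generic extreme learning
# machine trained by a single pseudo-inverse solve.
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y_onehot, n_hidden=200):
    W = rng.normal(size=(X.shape[1], n_hidden))      # random input weights
    b = rng.normal(size=n_hidden)                    # random biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ y_onehot              # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Stand-in data: 8 reduced fault features, 5 fault classes.
X = rng.normal(size=(1000, 8))
y = rng.integers(0, 5, size=1000)
Y = np.eye(5)[y]

W, b, beta = elm_train(X[:800], Y[:800])
acc = (elm_predict(X[800:], W, b, beta) == y[800:]).mean()
print(f"hold-out accuracy on synthetic data: {acc:.2f}")
```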
Figure 1: T-3L inverter topology.
Figure 2: Switching state circuits based on the current direction for phase A: switching states P, O, and N for i > 0 (a-c) and for i < 0 (d-f).
Figure 3: Single IGBT fault: (a) I_a > 0, S_a1 fails; (b) I_a < 0, S_a1 fails; (c) I_a > 0, S_a2 fails; (d) I_a < 0, S_a2 fails.
Figure 4: Two IGBT faults in the same phase: (a) I_a > 0, S_a1 and S_a2 fail; (b) I_a < 0, S_a1 and S_a2 fail; (c) I_a > 0, S_a1 and S_a4 fail; (d) S_a1 and S_a4 fail.
Figure 5: Flowchart of the binary granular matrix knowledge-reduction algorithm based on genetic algorithm optimization.
Figure 6: The proposed model structure.
Figure 7: MATLAB simulation results under normal operation.
Figure 8: MATLAB simulation results under an open-circuit fault in S_a1.
Figure 9: MATLAB simulation results under open-circuit faults in S_a1 and S_a2.
Figure 10: The capability to validate attribute reduction for three algorithms: (a) Neighbor-GrC, (b) Entropy-GrC, (c) GA-GrC.
Figure 11: Sub-classifier training error curves: (a) Neighbor-GrC-ELM, (b) Entropy-GrC-ELM, (c) GA-GrC-ELM.
Figure 12: Classification results of the data set for each method: (a) ELM, (b) SVM, (c) BP, (d) GrC-ELM, (e) GrC-SVM, (f) GrC-BP, (g) Neighbor-GrC-ELM, (h) Entropy-GrC-ELM, (i) GA-GrC-ELM.
25 pages, 497 KiB  
Article
Examination of Traditional Botnet Detection on IoT-Based Bots
by Ashley Woodiss-Field, Michael N. Johnstone and Paul Haskell-Dowland
Sensors 2024, 24(3), 1027; https://doi.org/10.3390/s24031027 - 5 Feb 2024
Cited by 2 | Viewed by 1510
Abstract
A botnet is a collection of Internet-connected computers that have been suborned and are controlled externally for malicious purposes. Concomitant with the growth of the Internet of Things (IoT), botnets have been expanding to use IoT devices as their attack vectors. IoT devices utilise specific protocols and network topologies distinct from conventional computers that may render detection techniques ineffective on compromised IoT devices. This paper describes experiments involving the acquisition of several traditional botnet detection techniques, BotMiner, BotProbe, and BotHunter, to evaluate their capabilities when applied to IoT-based botnets. Multiple simulation environments, using internally developed network traffic generation software, were created to test these techniques on traditional and IoT-based networks, with multiple scenarios differentiated by the total number of hosts, the total number of infected hosts, the botnet command and control (CnC) type, and the presence of aberrant activity. Externally acquired datasets were also used to further test and validate the capabilities of each botnet detection technique. The results indicated, contrary to expectations, that BotMiner and BotProbe were able to detect IoT-based botnets—though they exhibited certain limitations specific to their operation. The results show that traditional botnet detection techniques are capable of detecting IoT-based botnets and that the different techniques may offer capabilities that complement one another. Full article
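The evaluation figures for this article report true negative rates per scenario. Purely as a point of reference (not the authors' tooling), the sketch below shows how per-host detection results might be reduced to true positive and true negative rates; the host and label fields are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class HostResult:
    infected: bool   # ground truth: host is part of the botnet
    flagged: bool    # detector output: host was reported as a bot

def detection_rates(results):
    """Return (true positive rate, true negative rate) for a list of HostResult."""
    tp = sum(r.infected and r.flagged for r in results)
    fn = sum(r.infected and not r.flagged for r in results)
    tn = sum(not r.infected and not r.flagged for r in results)
    fp = sum(not r.infected and r.flagged for r in results)
    tpr = tp / (tp + fn) if (tp + fn) else float("nan")
    tnr = tn / (tn + fp) if (tn + fp) else float("nan")
    return tpr, tnr

# Example: 2 infected hosts (1 detected), 8 clean hosts (1 falsely flagged).
hosts = [HostResult(True, True), HostResult(True, False)] + \
        [HostResult(False, False)] * 7 + [HostResult(False, True)]
print(detection_rates(hosts))  # -> (0.5, 0.875)
```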
Figure 1: Network diagram of the IoT-based botnet simulation, including the infected IoT network, attacker network, and DDoS target network.
Figure 2: True negative rates of Table 1, grouped by total hosts, infected hosts, and aberrant hosts.
Figure 3: True negative rates of Table 2, grouped by total hosts, infected hosts, and aberrant hosts.
19 pages, 882 KiB  
Article
Multi-Dimensional Wi-Fi Received Signal Strength Indicator Data Augmentation Based on Multi-Output Gaussian Process for Large-Scale Indoor Localization
by Zhe Tang, Sihao Li, Kyeong Soo Kim and Jeremy S. Smith
Sensors 2024, 24(3), 1026; https://doi.org/10.3390/s24031026 - 5 Feb 2024
Cited by 3 | Viewed by 1334
Abstract
Location fingerprinting using Received Signal Strength Indicators (RSSIs) has become a popular technique for indoor localization due to its use of existing Wi-Fi infrastructure and Wi-Fi-enabled devices. Artificial intelligence/machine learning techniques such as Deep Neural Networks (DNNs) have been adopted to make location fingerprinting more accurate and reliable for large-scale indoor localization applications. However, the success of DNNs for indoor localization depends on the availability of a large amount of pre-processed and labeled data for training, the collection of which could be time-consuming in large-scale indoor environments and even challenging during a pandemic situation like COVID-19. To address these issues in data collection, we investigate multi-dimensional RSSI data augmentation based on the Multi-Output Gaussian Process (MOGP), which, unlike the Single-Output Gaussian Process (SOGP), can exploit the correlation among the RSSIs from multiple access points on a single floor, on neighboring floors, or in a single building by processing them collectively. The feasibility of MOGP-based multi-dimensional RSSI data augmentation is demonstrated through experiments using the hierarchical indoor localization model based on a Recurrent Neural Network (RNN)—i.e., one of the state-of-the-art multi-building and multi-floor localization models—and the publicly available UJIIndoorLoc multi-building and multi-floor indoor localization database. The RNN model trained with the UJIIndoorLoc database augmented in the mode of “by a single building”, where an MOGP model is fitted to the entire RSSI data of a building, outperforms the other two augmentation modes and yields a three-dimensional localization error of 8.42 m. Full article
(This article belongs to the Collection Sensors and Systems for Indoor Positioning)
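For orientation, the sketch below shows plain single-output GP regression with a Matérn 5/2 kernel used to interpolate the RSSI of one access point over reference-point coordinates; the paper's multi-output formulation additionally models correlations across access points, which this minimal example does not attempt. All array names and hyperparameter values are illustrative assumptions.

```python
import numpy as np

def matern52(X1, X2, lengthscale=5.0, variance=25.0):
    """Matérn 5/2 covariance between two sets of 2D reference-point coordinates."""
    d = np.linalg.norm(X1[:, None, :] - X2[None, :, :], axis=-1)
    s = np.sqrt(5.0) * d / lengthscale
    return variance * (1.0 + s + s**2 / 3.0) * np.exp(-s)

def gp_predict(X_train, y_train, X_new, noise=1.0):
    """Posterior mean RSSI at new locations (single-output GP regression)."""
    K = matern52(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = matern52(X_new, X_train)
    alpha = np.linalg.solve(K, y_train)
    return K_s @ alpha

# Illustrative usage: measured RSSIs (dBm) of one AP at a few reference points.
X_train = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
y_train = np.array([-45.0, -60.0, -58.0, -72.0])
X_new = np.array([[2.5, 2.5], [1.0, 4.0]])   # locations for augmented fingerprints
print(gp_predict(X_train, y_train, X_new))
```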
Figure 1: An overview of multi-dimensional fingerprint data augmentation based on MOGP.
Figure 2: Block diagrams of fingerprint data augmentation based on (a) SOGP and (b) MOGP.
Figure 3: Three different modes of data augmentation: (a) by a single floor, (b) by neighboring floors, and (c) by a single building.
Figure 4: Network architecture of the RNN indoor localization model with LSTM cells [12].
Figure 5: Spatial distribution of the RPs of the UJIIndoorLoc database over the buildings and the floors, where the green, blue, and red dots denote the RPs of Buildings 0, 1, and 2, respectively.
Figure 6: MOGP-based data augmentation of the RSSIs from WAP489 of the UJIIndoorLoc database based on the Matérn 5/2 kernel with the parameters in Table 3.
Figure 7: Spatial distribution of the original and augmented RSSIs for the corner of the fourth floor of Building 2 of the UJIIndoorLoc database, where the red circles indicate two potential problems: the lack of original RSSI data and insufficient RP coverage.
10 pages, 3037 KiB  
Article
Research on the Effect of Vibrational Micro-Displacement of an Astronomical Camera on Detector Imaging
by Bin Liu, Shouxin Guan, Feicheng Wang, Xiaoming Zhang, Tao Yu and Ruyi Wei
Sensors 2024, 24(3), 1025; https://doi.org/10.3390/s24031025 - 5 Feb 2024
Viewed by 743
Abstract
Scientific-grade cameras are frequently employed in industries such as spectral imaging technology, aircraft, medical detection, and astronomy, and are characterized by high precision, high quality, fast speed, and high sensitivity. Especially in the field of astronomy, obtaining information about faint light often requires long exposure with high-resolution cameras, which means that any external factors can cause the camera to become unstable and result in increased errors in the detection results. This paper aims to investigate the effect of displacement introduced by various vibration factors on the imaging of an astronomical camera during long exposure. The sources of vibration are divided into external vibration and internal vibration. External vibration mainly includes environmental vibration and resonance effects, while internal vibration mainly refers to the vibration caused by the force generated by the refrigeration module inside the camera during the working process of the camera. The cooling module is divided into water-cooled and air-cooled modes. Through the displacement and vibration experiments conducted on the camera, it is proven that the air-cooled mode will cause the camera to produce greater displacement changes relative to the water-cooled mode, leading to blurring of the imaging results and lowering the accuracy of astronomical detection. This paper compares the effects of displacement produced by two methods, fan cooling and water-circulation cooling, and proposes improvements to minimize the displacement variations in the camera and improve the imaging quality. This study provides a reference basis for the design of astronomical detection instruments and for determining the vibration source of cameras, which helps to promote the further development of astronomical detection. Full article
(This article belongs to the Section Sensing and Imaging)
Figure 1: Structural model of the camera.
Figure 2: Camera shift experiment.
Figure 3: Experimental principle of the nano-displacement platform.
Figure 4: Comparison of camera shift in different states.
Figure 5: Camera vibration experiment.
Figure 6: Camera vibration at rest.
Figure 7: Camera vibration in air-cooled mode.
Figure 8: Camera vibration in water-cooling mode.
Figure 9: Effect of changes in displacement in different directions.
13 pages, 1828 KiB  
Article
Microfluidic Paper-Based Device Incorporated with Silica Nanoparticles for Iodide Quantification in Marine Source Dietary Supplements
by Mafalda G. Pereira, Ana Machado, Andreia Leite, Maria Rangel, Adriano Bordalo, António O. S. S. Rangel and Raquel B. R. Mesquita
Sensors 2024, 24(3), 1024; https://doi.org/10.3390/s24031024 - 5 Feb 2024
Cited by 1 | Viewed by 1111
Abstract
Iodine is an essential micronutrient for humans due to its fundamental role in the biosynthesis of thyroid hormones. As a key parameter to assess health conditions, iodine intake needs to be monitored to ascertain and prevent iodine deficiency. Iodine is available from various food sources (such as seaweed, fish, and seafood, among others) and dietary supplements (multivitamins or mineral supplements). In this work, a microfluidic paper-based analytical device (μPAD) to quantify iodide in seaweed and dietary supplements is described. The developed μPAD is a small microfluidic device that emerges as quite relevant in terms of its analytical capacity. The quantification of iodide is based on the oxidation of 3,3′,5,5′-tetramethylbenzidine (TMB) by hydrogen peroxide in the presence of iodine, which acts as the catalyst to produce the blue form of TMB. Additionally, powder silica was used to intensify and uniformize the colour of the obtained product. Following optimization, the developed μPAD enabled iodide quantification within the range of 10–100 µM, with a detection limit of 3 µM, and was successfully applied to seaweeds and dietary supplements. The device represents a valuable tool for point-of-care analysis, can be used by untrained personnel at home, and is easily disposable, low-cost, and user-friendly. Full article
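As a generic illustration of the calibration arithmetic behind figures of merit such as the 3 µM detection limit quoted above, the sketch below fits a linear calibration curve to absorbance-versus-concentration data and estimates the limit of detection as 3·σ(blank)/slope. The numerical values are invented, and the 3σ criterion is a common convention rather than necessarily the exact procedure used in the paper.

```python
import numpy as np

# Hypothetical calibration data: iodide standards (µM) and measured absorbance.
conc = np.array([10, 25, 50, 75, 100], dtype=float)
absorbance = np.array([0.041, 0.098, 0.197, 0.291, 0.395])
blank_replicates = np.array([0.002, 0.004, 0.003, 0.001, 0.003])  # blank readings

# Linear least-squares calibration: A = slope * C + intercept.
slope, intercept = np.polyfit(conc, absorbance, 1)

# Limit of detection from the common 3-sigma convention.
lod = 3.0 * np.std(blank_replicates, ddof=1) / slope
print(f"sensitivity = {slope:.4f} AU/µM, LOD ~ {lod:.1f} µM")
```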
Figure 1: Iodide µPAD assembly: (A) schematic representation of the alignment (L1, laminating pouch containing the sampling holes; TL, top layer containing silica powder, hydrogen peroxide, and acetic acid; BL, bottom layer containing TMB; L2, laminating pouch); (B) a single reading unit, where sample loading is 20 µL; (C) image scan of the device for a calibration curve, where S_i represents the different iodide standards and B is the blank; (D) software RGB-treated image (ImageJ, version 1.53) for intensity measurements and absorbance calculation, with the respective calibration curve plotted; the dashed circle represents the selected area for intensity counts.
Figure 2: Influence of filter paper porosity and treatment on the calibration curve slope (sensitivity): (A) studies for the top layer to be loaded with hydrogen peroxide and acetic acid; (B) studies for the bottom layer to be loaded with TMB reagent; the dark blue column represents the chosen option.
Figure 3: Influence of the reagent concentrations on the calibration curve slope (sensitivity): (A) TMB concentration; (B) hydrogen peroxide concentration; the dark blue columns represent the chosen option and the error bars represent 5% deviation.
Figure 4: Tested configurations with respective image scans on both scanning sides: (A) H2O2 and CH3COOH in the top layer and TMB in the bottom layer; (B) TMB in the top layer with H2O2 and CH3COOH in the bottom layer.
Figure 5: Influence of µPAD storage conditions (temperature and presence of air) on the calibration curve slope (sensitivity); the horizontal line represents the average calibration slope of the freshly prepared devices, and the shaded area represents a 5% deviation of that average.
18 pages, 3663 KiB  
Review
Railway Catenary Condition Monitoring: A Systematic Mapping of Recent Research
by Shaoyao Chen, Gunnstein T. Frøseth, Stefano Derosa, Albert Lau and Anders Rönnquist
Sensors 2024, 24(3), 1023; https://doi.org/10.3390/s24031023 - 5 Feb 2024
Viewed by 1442
Abstract
In this paper, a different approach to the traditional literature review—literature systematic mapping—is adopted to summarize the progress in recent research on railway catenary system condition monitoring in terms of aspects such as sensor categories, monitoring targets, and so forth. Importantly, the deep interconnections among these aspects are also investigated through systematic mapping. In addition, the authorship and publication trends are examined. Compared to a traditional literature review, the literature mapping approach focuses less on the technical details of the research and instead reflects the research trends and focal points in a specific field by visualizing them with different plots and figures, which makes it more visually direct and comprehensible than the traditional literature review approach. Full article
(This article belongs to the Special Issue Sensors for Non-Destructive Testing and Structural Health Monitoring)
Figure 1: The process of systematic mapping.
Figure 2: The workflow of composing the search string.
Figure 3: The publication number per year.
Figure 4: The keywords' relationships across all the articles.
Figure 5: The authorship of all the articles.
Figure 6: The proportion of the top ten monitoring targets and yearly variation.
Figure 7: The proportion of the top ten sensors and yearly variation.
Figure 8: The proportion of platforms.
Figure 9: Plot showing the relationship among monitoring targets, sensor types, and platforms.
15 pages, 3140 KiB  
Article
Improving the Accuracy of Metatarsal Osteotomies in Minimally Invasive Foot Surgery Using a Digital Inclinometer: Preliminary Study
by Carlos Fernández-Vizcaino, Eduardo Nieto-García, Nadia Fernández-Ehrling and Javier Ferrer-Torregrosa
Sensors 2024, 24(3), 1022; https://doi.org/10.3390/s24031022 - 5 Feb 2024
Cited by 3 | Viewed by 1278
Abstract
Minimally invasive foot surgery (MIS) has become a common procedure to treat various pathologies, and accuracy in the angle of metatarsal osteotomies is crucial to ensure optimal results. This randomized controlled trial with 37 patients investigates whether the implementation of a digital inclinometer can improve the accuracy of osteotomies compared to traditional freehand techniques. Patients were randomly allocated to group A (n = 15) receiving inclinometer-assisted surgery or group B (n = 22) receiving conventional surgery. Osteotomies were performed and outcomes were evaluated using an inclinometer. The inclinometer group showed a significant decrease in plantar pressure from 684.1 g/cm2 pretreatment to 449.5 g/cm2 post-treatment (p < 0.001, Cohen’s d = 5.477). The control group decreased from 584.5 g/cm2 to 521.5 g/cm2 (p = 0.001, Cohen’s d = 0.801). The effect size between groups was large (Cohen’s d = −2.572, p < 0.001). The findings indicate a significant improvement in accuracy and reduction in outliers when using an inclinometer, suggesting that this technology has the potential to improve surgical practice and patient outcomes in minimally invasive metatarsal osteotomies. Full article
(This article belongs to the Section Biomedical Sensors)
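For readers unfamiliar with the effect sizes quoted in the abstract, the snippet below computes Cohen's d for a paired pre/post comparison (using the SD of the differences) and for a between-group comparison (using the pooled SD). The plantar-pressure arrays are invented placeholders, not study data.

```python
import numpy as np

def cohens_d_paired(pre, post):
    """Effect size of a within-group change, using the SD of the differences."""
    diff = np.asarray(pre) - np.asarray(post)
    return diff.mean() / diff.std(ddof=1)

def cohens_d_independent(a, b):
    """Effect size between two independent groups, using the pooled SD."""
    a, b = np.asarray(a), np.asarray(b)
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

# Invented pre-post pressure reductions (g/cm^2) for illustration only.
delta_inclinometer = np.random.normal(-235, 45, 15)   # group A differences
delta_control = np.random.normal(-63, 80, 22)         # group B differences
print(cohens_d_independent(delta_inclinometer, delta_control))
```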
Figure 1: Flow diagram of the selection process and analysis of the patients included in this study.
Figure 2: Sensor incorporated in the surgical motor.
Figure 3: It is placed at a 90-degree angle and the osteotomy is performed at a definitive 45-degree angle; the blue line is the diaphyseal axis of the metatarsal, and the green line, perpendicular to the diaphyseal axis (90°), is positioned at 45° on the inclinometer to make the metatarsal cut.
Figure 4: Static pressure measurements.
Figure 5: Comparison of scores between the groups when measuring the angle performed in cadaveric osteotomy.
Figure 6: Decrease in pre- and post-treatment pressures in each study group without the inclinometer.
Figure 7: Decrease in pre- and post-treatment pressures in each study group with the inclinometer.
Figure 8: Pressure diagram obtained from the difference between pre- and post-surgery pressures.
17 pages, 15436 KiB  
Article
Developing a Flying Explorer for Autonomous Digital Modelling in Wild Unknowns
by Naizhong Zhang, Yaoqiang Pan, Yangwen Jin, Peiqi Jin, Kewei Hu, Xiao Huang and Hanwen Kang
Sensors 2024, 24(3), 1021; https://doi.org/10.3390/s24031021 - 5 Feb 2024
Viewed by 980
Abstract
Digital modelling stands as a pivotal step in the realm of Digital Twinning. The future trend of Digital Twinning involves automated exploration and environmental modelling in complex scenes. In our study, we propose an innovative solution for robot odometry, path planning, and exploration in unknown outdoor environments, with a focus on Digital modelling. The approach uses a minimum cost formulation with pseudo-randomly generated objectives, integrating multi-path planning and evaluation, with emphasis on full coverage of unknown maps based on feasible boundaries of interest. The approach allows for dynamic changes to expected targets and behaviours. The evaluation is conducted on a robotic platform with a lightweight 3D LiDAR sensor model. The robustness of different types of odometry is compared, and the impact of parameters on motion planning is explored. The consistency and efficiency of exploring completely unknown areas are assessed in both indoor and outdoor scenarios. The experiment shows that the method proposed in this article can complete autonomous exploration and environmental modelling tasks in complex indoor and outdoor scenes. Finally, the study concludes by summarizing the reasons for exploration failures and outlining future focuses in this domain. Full article
(This article belongs to the Section Sensors and Robotics)
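Figure 6 of this article summarizes the candidate-filtering rules of the exploration strategy (discard candidate vertices near obstacles, near already-known space, or far from the current exploration direction). The following sketch is a schematic rendering of those three checks in Python; all thresholds and helper data structures (obstacle lists, explored-cell sets, grid resolution, robot at the origin) are illustrative assumptions rather than the authors' implementation.

```python
import math

def keep_candidate(p, heading, obstacles, known_cells, cell=0.5,
                   min_obstacle_dist=1.0, max_heading_dev=math.radians(90)):
    """Apply the three filtering rules sketched in Figure 6 to a candidate vertex p.

    p: candidate (x, y); heading: current exploration direction (unit vector).
    obstacles: list of obstacle points; known_cells: set of explored grid cells.
    """
    # Case A: too close to an obstacle.
    if any(math.dist(p, o) < min_obstacle_dist for o in obstacles):
        return False
    # Case B: candidate lies in already-explored space.
    if (round(p[0] / cell), round(p[1] / cell)) in known_cells:
        return False
    # Case C: direction to the candidate (robot assumed at the origin) deviates
    # too far from the exploration direction.
    norm = math.hypot(*p)
    if norm > 0:
        cos_dev = (p[0] * heading[0] + p[1] * heading[1]) / norm
        if math.acos(max(-1.0, min(1.0, cos_dev))) > max_heading_dev:
            return False
    return True

# Illustrative usage: keep candidates roughly ahead of the robot at the origin.
candidates = [(3.0, 0.5), (-2.0, 0.0), (1.0, 1.0)]
kept = [c for c in candidates if keep_candidate(c, (1.0, 0.0), [(1.0, 1.2)], set())]
print(kept)  # -> [(3.0, 0.5)]
```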
Figure 1: AREX: flight system designed for high durability, multi-scenario exploration, and environmental modelling.
Figure 2: Robot design: (a) AREX's frame structure and sensor layout, with the MID-360 LiDAR positioned at the front of the fuselage at a 45° angle to the horizontal z-axis, the Realsense D430 mounted above the MID-360 to fill in blind spots in the point cloud, and the front dual arms forming a 120° angle to minimize obstruction of the LiDAR's scanning range; (b) AREX's circuitry structure, showing the circuitry between the ESCs and motors as well as the communication interfaces and protocols among the modules.
Figure 3: Software architecture of AREX.
Figure 4: A graphic illustration of the VINS-Fusion framework: when the latest keyframe arrives it is kept and the visual and IMU measurements of the oldest frame are marginalized; the prior factor of the loss function is obtained from marginalization, the IMU propagation factor from IMU pre-integration, and the vision factor from the reprojection error between two keyframes.
Figure 5: System overview of the improved FAST-LIO, which can output high-frequency odometry.
Figure 6: Demonstration of the RRT exploration strategy. Case A: candidate vertices close to an obstacle are not considered. Case B: candidate vertices close to a known area are not considered. Case C: candidate vertices far from the exploration direction are not considered.
Figure 7: Comparison of visual and LiDAR odometry: (a) completion of odometry initialization; (b) comparison of global paths.
Figure 8: Effects of different parameters on motion planning: (a,b) single-frame LiDAR inflation grid map and raw point cloud; (c,d) inflation grid map and raw point cloud for the merged point cloud; (e,g) obstacle avoidance with inflation coefficients of 0.1 and 0.5; (f,h) the corresponding avoidance paths in the raw point cloud; (i) robot proximity to the obstacle point cloud; (j) robot within the inflated obstacle point cloud.
Figure 9: Instances of an autonomous exploration mission within the forest: (a) exploration results and on-site flight demonstration in the forest; (b) forest semantic map based on the exploration results.
Figure 10: Presentation of the underground parking garage exploration process: (a) exploration results and on-site flight demonstration in the underground parking garage; (b) global point cloud map of the underground parking garage.
15 pages, 1690 KiB  
Article
A Microwave Differential Dielectric Sensor Based on Mode Splitting of Coupled Resonators
by Ali M. Almuhlafi, Mohammed S. Alshaykh, Mansour Alajmi, Bassam Alshammari and Omar M. Ramahi
Sensors 2024, 24(3), 1020; https://doi.org/10.3390/s24031020 - 5 Feb 2024
Cited by 1 | Viewed by 1394
Abstract
This study explores the viability of using the avoided mode crossing phenomenon in the microwave regime to design microwave differential sensors. While the design concept can be applied to any type of planar electrically small resonators, here, it is implemented on split-ring resonators (SRRs). We use two coupled synchronous SRRs loaded onto a two-port microstrip line system to demonstrate the avoided mode crossing by varying the distance between the split of the resonators to control the coupling strength. As the coupling becomes stronger, the split in the resonance frequencies of the system increases. Alternatively, by controlling the strength of the coupling by materials under test (MUTs), we utilize the system as a microwave differential sensor. First, the avoided mode crossing is theoretically investigated using the classical microwave coupled resonator techniques. Then, the system is designed and simulated using a 3D full-wave numerical simulation. To validate the concept, a two-port microstrip line, which is magnetically coupled to two synchronous SRRs, is utilized as a sensor, where the inter-resonator coupling is chosen to be electric coupling controlled by the dielectric constant of MUTs. For the experimental validation, the sensor was fabricated using printed circuit board technology. Two solid slabs with dielectric constants of 2.33 and 9.2 were employed to demonstrate the potential of the system as a novel differential microwave sensor. Full article
(This article belongs to the Special Issue Toward Advanced Microwave Sensors)
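For context on how a coupling coefficient relates to the observed mode splitting, the relations below restate the standard coupled-resonator result for two identical (synchronous) resonators. They are a generic textbook reference rather than the paper's exact Equation (8); the notation (f_1, f_2, f_0, κ) is chosen here for illustration.

```latex
% Split eigenfrequencies of two identical coupled resonators with uncoupled
% resonance f_0 and coupling coefficient \kappa (f_2 > f_1):
f_{1,2} \;=\; \frac{f_0}{\sqrt{1 \pm \kappa}},
\qquad
\kappa \;=\; \frac{f_2^{2}-f_1^{2}}{f_2^{2}+f_1^{2}},
\qquad
\frac{f_2-f_1}{f_0} \;\approx\; \kappa \quad (\kappa \ll 1).
```

In the weak-coupling limit the fractional splitting is therefore a direct read-out of the coupling strength, which is what allows the split resonances to serve as a differential measure of the material loaded between the resonators.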
Figure 1: Schematic of two coupled synchronous SRRs where the inter-resonator coupling is based on electric coupling.
Figure 2: The circuit diagram of the coupled SRRs shown in Figure 1.
Figure 3: Schematic of two synchronous SRRs coupled to a two-port microstrip line (TL).
Figure 4: The system eigenmodes f_e, f_m, and f_u versus the variable d_s.
Figure 5: The electric coupling (κ_E) versus the variable d_s at b_2 = 0.5 mm, where d_s is varied from 0.05 to 5 mm with a step of 0.05 mm (some values of κ_E versus d_s are denoted by red dots).
Figure 6: The frequency split of the system versus κ_E: (a) the magnetic resonance f_m, the resonance frequency of a single resonator f_u, and the electric resonance f_e versus κ_E; (b) the quantified frequency splitting versus κ_E by coupled-mode theory using Equation (8) in the exact form, in the weak-coupling approximation, and from the eigenmode solver in HFSS.
Figure 7: The response of the one-SRR-based system in the form of the transmission and reflection coefficients (|S21| and |S11|) at b_2 = 0.5 mm.
Figure 8: The response of the two-synchronous-SRR-based system for d_s = 0.05 and 0.45 mm at b_2 = 0.5 mm.
Figure 9: The transmission coefficient (|S21|) in a 2D plane as a function of frequency and d_s.
Figure 10: Schematic of two synchronous coupled SRRs loaded with a dielectric slab (W_MUT = 3.2 mm, L_MUT = 13 mm, T_MUT = 3 mm).
Figure 11: The transmission coefficient (|S21|) of the system in the presence of a dielectric slab with relative permittivities of 1, 3, 6, and 9.
Figure 12: The system response versus the relative permittivity of the slab: (a) the sensitivity and the degree of frequency splitting of the system in the presence of the MUT; (b) the absolute value of the difference in the magnitude min{|S21|} between the split resonances in the presence of the MUT.
Figure 13: The fabricated two-port system: (a) top view, (b) bottom view, and (c) perspective view where the system is connected to a VNA.
Figure 14: The response of the system in free space.
Figure 15: The response of the system in the presence of a dielectric slab.
10 pages, 3157 KiB  
Article
Experimental In Vitro Microfluidic Calorimetric Chip Data towards the Early Detection of Infection on Implant Surfaces
by Signe L. K. Vehusheia, Cosmin I. Roman, Markus Arnoldini and Christofer Hierold
Sensors 2024, 24(3), 1019; https://doi.org/10.3390/s24031019 - 5 Feb 2024
Cited by 1 | Viewed by 1122
Abstract
Heat flux measurement shows potential for the early detection of infectious growth. Our research is motivated by the possibility of using heat flux sensors for the early detection of infection on aortic vascular grafts by measuring the onset of bacterial growth. Applying heat flux measurement as an infectious marker on implant surfaces is yet to be experimentally explored. We have previously shown the measurement of the exponential growth curve of a bacterial population in a thermally stabilized laboratory environment. In this work, we further explore the limits of the microcalorimetric measurements via heat flux sensors in a microfluidic chip in a thermally fluctuating environment. Full article
(This article belongs to the Special Issue Eurosensors 2023 Selected Papers)
Figure 1: (a) Sketch of a 2 + 1 channel microfluidic chip [19], reproduced with permission; the heat transfer coefficient of the top channel matches that of aortic heat transfer. (b) Microfluidic chip in a thermally stabilized environment able to resolve bacterial growth through heat flux measurement [1]. Schematics adapted from [1,19].
Figure 2: Left column (a,c,e): channels of the microfluidic chip in the thermally not-stabilized environment. Right column (b,d,f): channels of the chip in the thermally stabilized environment [1] with the additional thermally stabilizing copper block. (e,f) show the cross-section of the chip; the groove ensures a given PDMS thickness above the heat flux sensor. Sketches not to scale.
Figure 3: Schematic of the measurement setups of the thermally not-stabilized and thermally stabilized systems. In the thermally not-stabilized system, the chip is placed directly on an arbitrary surface in a temperature-controlled room without further thermal stabilization; the thermally stabilized system is a previously published setup with the chip inside a thermally stabilizing PMMA box and a copper block on the top side of the chip [1].
Figure 4: Fabrication steps of the microfluidic chip with two integrated heat flux sensors: an SU8-patterned silicon wafer is used as the channel mold, multiple PDMS curing steps create and control the layer thicknesses around the heat flux sensors, and after removing the cured PDMS the inlets and outlets are punched out and the chip is fused with a glass slide using oxygen plasma.
Figure 5: Temperature measurements in (a) the thermally not-stabilized and (b) the thermally stable environments: fluctuations of about 1 K from 1.0 h to 3.3 h in (a) versus 0.3 K over 4.2 h in (b). The data cover the whole experiment (calibration and bacterial phases); the temperature data in (b) are part of the previously published heat flux data [1].
Figure 6: Raw and differentially compensated heat flux values: (a) data during the calibration phase; (b) data after differential compensation; (c,d) heat flux data upon addition of E. coli. The blue signals belong to the channel to which the bacteria are subsequently added and the red signals to the calibration channel; peaks in the heat flux correspond to temperature peaks in Figure 5a. Raw data are shown as opaque, and 200-point moving averages as darker, thicker lines.
Figure 7: Differentially compensated heat flux measurements in the two systems: (a) thermally not-stabilized system, where a change in heat flux is distinguishable in the exponential growth phase (arrow at 7500 s) separating Regions 1 and 2; (b) comparison of the cumulative distribution functions of the calibration phase with those of Regions 1 and 2; (c) thermally stabilized system shown previously in [1], with t = 0 s the addition of bacteria; (d) semilogarithmic plot, where an exponential increase in both the measured heat flux and the bacterial population growth is identifiable in the same region. In all panels the opaque data are the raw heat flux signal and the dark lines 200-point moving averages.
22 pages, 8078 KiB  
Article
A Metamaterial Surface Avoiding Loss from the Radome for a Millimeter-Wave Signal-Sensing Array Antenna
by Inyeol Moon, Woogon Kim, Yejune Seo and Sungtek Kahng
Sensors 2024, 24(3), 1018; https://doi.org/10.3390/s24031018 - 5 Feb 2024
Viewed by 1269
Abstract
Radar systems are a type of sensor that detects radio signals reflected from objects located a long distance from transmitters. To cover a longer range with a higher resolution, a radar can adopt a high-frequency band and an array antenna. Given a limited size for the antenna aperture in the front end of the radar, the choice of a millimeter-wave band leads to a denser layout for the array antenna and a higher antenna gain. However, millimeter-wave signals tend to be attenuated faster by the larger loss of a covering material such as the radome, and this disadvantage offsets the advantage of high antenna directivity compared to C-band and X-band systems. As the radome is essential to the radar system to protect the array antenna from rain and dust, a metamaterial surface in the layer is suggested to meet multiple objectives. Firstly, the proposed electromagnetic structure serves as the protection layer for the source of radiation. Secondly, the metasurface does not disturb the millimeter-wave signal as it passes through the cover layer to the air. This electromagnetically transparent surface transforms the phase distribution of the incident wave into an equal phase in the transmitted wave, resulting in an increased antenna gain. The structure is fabricated and assembled with the array antenna held in a 3D-printed jig with harnessing accessories. It is examined in terms of S21, the transfer coefficient between two ports of the VNA, for the antenna alone and with the metasurface, followed by a far-field test to check the validity of the suggested structure and design. The bench test shows around a 7 dB increase in the transfer coefficient, and the anechoic-chamber field test gives about a 5 dB improvement in antenna gain for a 24 GHz-band array antenna. Full article
(This article belongs to the Special Issue Electromagnetic Sensing and Nondestructive Evaluation)
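The 1-bit phase map in Figure 2 of this article collapses the required phase compensation at each unit cell to one of two pixel types. A generic way to compute such a map for a collimating transmit surface is sketched below: the needed phase cancels the spherical path delay from an assumed phase-center location to each cell and is then quantized to 0° or 180°. The geometry values (frequency, stand-off height, cell pitch, grid size) are placeholders, not the fabricated design.

```python
import numpy as np

def one_bit_phase_map(nx=16, ny=16, pitch=3.0e-3, height=20e-3, freq=24e9):
    """Return a 0/1 map that collimates a spherical wavefront (illustrative only)."""
    lam = 3e8 / freq                                  # free-space wavelength
    k0 = 2 * np.pi / lam
    x = (np.arange(nx) - (nx - 1) / 2) * pitch        # cell centres in the surface plane
    y = (np.arange(ny) - (ny - 1) / 2) * pitch
    X, Y = np.meshgrid(x, y)
    # Extra path from an assumed phase centre located 'height' below the surface.
    path = np.sqrt(X**2 + Y**2 + height**2)
    required = np.mod(k0 * (path - height), 2 * np.pi)  # compensation phase per cell
    # 1-bit quantization: pick the pixel type (0° or 180°) closest to the needed phase.
    return (np.abs(required - np.pi) < np.pi / 2).astype(int)  # 1 -> 180° pixel

print(one_bit_phase_map(8, 8))
```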
Figure 1: The array antenna in use: (a) radiators on the front side; (b) feeder on the backside; (c) S11 as the reflection coefficient of the antenna; (d) far-field pattern of the antenna; (e) array antenna in a jig; (f) S11 as the reflection coefficient of the antenna in the jig; (g) far-field pattern of the antenna within the jig, expressed from red (strongest) through green (middle) to blue (weakest).
Figure 2: The metasurface for the array antenna in the jig: (a) phase map of the incident wave; (b) the phase map required by the metasurface; (c) 1-bit expression of the phase map of the metasurface; (d) two types of pixels for the 1-bit phase map; (e) top view of the metasurface comprising the pixels; (f) bird's-eye view of the metasurface; (g) S11 of the metasurface-combined antenna in the jig; (h) far-field pattern of the metasurface antenna within the jig, expressed from red (strongest) through green (middle) to blue (weakest).
Figure 3: The test bench observing S11 of the two specimens: (a) the prototype of the source antenna in the jig; (b) schematic of measuring S11 of the prototyped source antenna; (c) measured S11 of the prototyped source antenna; (d) the prototype of the metasurface-combined antenna in the jig; (e) schematic of measuring S11 of the prototyped metasurface-loaded antenna; (f) measured S11 of the prototyped metasurface-loaded antenna.
Figure 4: The test bench observing S21 of the two specimens: (a) test configuration for measuring S21 between the twin array antennas; (b) measured S21 between the array antennas; (c) test configuration for measuring S21 between the metasurface-loaded array antenna and the unloaded array antenna; (d) measured S21 between the metasurface-loaded and unloaded array antennas; (e) comparison of the S21 curves, including a reference.
Figure 5: The anechoic antenna chamber tests: (a) test setup for the source antenna in the jig; (b) measured antenna gain of the source antenna in the jig; (c) test setup for the metasurface-loaded antenna in the jig; (d) measured antenna gain of the metasurface-loaded antenna in the jig; (e) comparison of the antenna gain curves.
Figure 6: The analysis of phase maps from the steps to build the antennas: (a-c) observation planes and E-/H-plane phase profiles of the source antenna in the jig; (d-f) observation planes and E-/H-plane phase profiles of the metasurface alone; (g-i) observation planes and E-/H-plane phase profiles of the metasurface-loaded antenna in the jig.
Figure 7: The analysis of the refractive index of the wireless link: (a) test setup for viewing S21 without the metamaterial; (b) test setup for viewing S21 with the metamaterial; (c) comparison of the phases of the two cases.
13 pages, 3307 KiB  
Article
Smart pH Sensing: A Self-Sensitivity Programmable Platform with Multi-Functional Charge-Trap-Flash ISFET Technology
by Yeong-Ung Kim and Won-Ju Cho
Sensors 2024, 24(3), 1017; https://doi.org/10.3390/s24031017 - 4 Feb 2024
Viewed by 1098
Abstract
This study presents a novel pH sensor platform utilizing charge-trap-flash-type metal oxide semiconductor field-effect transistors (CTF-type MOSFETs) for enhanced sensitivity and self-amplification. Traditional ion-sensitive field-effect transistors (ISFETs) face challenges in commercialization due to low sensitivity at room temperature, known as the Nernst limit. To overcome this limitation, we explore resistive coupling effects and CTF-type MOSFETs, allowing for flexible adjustment of the amplification ratio. The platform adopts a unique approach, employing CTF-type MOSFETs as both transducers and resistors, ensuring efficient sensitivity control. An extended-gate (EG) structure is implemented to enhance cost-effectiveness and increase the overall lifespan of the sensor platform by preventing direct contact between analytes and the transducer. The proposed pH sensor platform demonstrates effective sensitivity control at various amplification ratios. Stability and reliability are validated by investigating non-ideal effects, including hysteresis and drift. The CTF-type MOSFETs’ electrical characteristics, energy band diagrams, and programmable resistance modulation are thoroughly characterized. The results showcase remarkable stability, even under prolonged and repetitive operations, indicating the platform’s potential for accurate pH detection in diverse environments. This study contributes a robust and stable alternative for detecting micro-potential analytes, with promising applications in health management and point-of-care settings. Full article
(This article belongs to the Special Issue Biosensors and Electrochemical Sensors)
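The platform described above programs its pH sensitivity by adjusting an amplification ratio through resistive coupling of CTF-type MOSFETs. As a minimal sketch of that idea only, the snippet below scales the ideal Nernstian slope (about 59.1 mV/pH near room temperature) by a programmed amplification ratio; the ×0.5, ×1, and ×5 settings mirror the figures below, the pH step is arbitrary, and the actual resistive-coupling circuit is not modeled.

```python
NERNST_MV_PER_PH = 59.1  # ideal Nernstian slope near 25 degrees C, in mV/pH

def programmed_response_mv(delta_ph, amplification_ratio):
    """Expected readout shift (mV) for a given pH change, assuming the
    platform simply scales the Nernstian response by the programmed ratio."""
    return NERNST_MV_PER_PH * delta_ph * amplification_ratio

# Amplification ratios matching the x0.5, x1, and x5 settings shown in the
# article's figures; the pH step of 2 is chosen purely for illustration.
for ratio in (0.5, 1.0, 5.0):
    shift = programmed_response_mv(delta_ph=2.0, amplification_ratio=ratio)
    print(f"x{ratio:g}: {shift:.1f} mV")
```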
Show Figures

Figure 1. (a) Optical microscope image and (b) schematic structure with a cross-sectional view of the thin-film layers of the CTF-type MOSFET.
Figure 2. Simplified equivalent circuit for the self-sensitivity programmable pH sensor platform.
Figure 3. Electrical characteristics of the CTF-type MOSFET: (a) transfer curve and (b) output curve.
Figure 4. Energy band diagram of the CTF-type MOSFET under (a) non-bias, (b) positive-bias, and (c) negative-bias states.
Figure 5. (a) Transfer curves, (b) hysteresis resistance, and (c) retention time according to the program/erase modes.
Figure 6. Channel resistance modulation with input pulse application for (a) 1, (b) 10, and (c) 50 pulses.
Figure 7. pH sensing operation of the self-sensitivity programmable pH sensor with an amplification ratio of (a) ×0.5, (b) ×1, and (c) ×5.
Figure 8. Programmed pH sensitivity values based on the amplification ratio.
Figure 9. Non-ideal effects of the self-sensitivity programmable pH sensor platform: (a) hysteresis voltage and (b) drift rate.
Figure 10. Comparative analysis of sensitivity and non-ideal effects in pH sensing operations.
13 pages, 3906 KiB  
Article
High-Precision Atom Interferometer-Based Dynamic Gravimeter Measurement by Eliminating the Cross-Coupling Effect
by Yang Zhou, Wenzhang Wang, Guiguo Ge, Jinting Li, Danfang Zhang, Meng He, Biao Tang, Jiaqi Zhong, Lin Zhou, Runbing Li, Ning Mao, Hao Che, Leiyuan Qian, Yang Li, Fangjun Qin, Jie Fang, Xi Chen, Jin Wang and Mingsheng Zhan
Sensors 2024, 24(3), 1016; https://doi.org/10.3390/s24031016 - 4 Feb 2024
Cited by 2 | Viewed by 1273
Abstract
A dynamic gravimeter with an atomic interferometer (AI) can perform absolute gravity measurements with high precision. AI-based dynamic gravity measurement is a joint measurement that combines an AI sensor with a classical accelerometer, and the coupling between the two sensors can degrade the measurement precision. In this study, we analyzed this cross-coupling effect and introduced a recovery vector to suppress it. In marine gravity measurements with an AI-based gravimeter, optimizing the recovery vector improved the phase noise of the interference fringe by a factor of 1.9. High gravity measurement precision was achieved: the external and inner coincidence accuracies were ±0.42 mGal and ±0.46 mGal after correcting for the cross-coupling effect, improvements by factors of 4.18 and 4.21 over the uncorrected cases. Full article
(This article belongs to the Collection Inertial Sensors and Applications)
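The abstract and the figure captions below describe recovering the vertical acceleration experienced by the atom interferometer as a weighted combination of the classical accelerometer's three axes (the recovery vector), followed by low-pass filtering. The sketch below illustrates only that combine-and-filter step; the sampling rate, cutoff frequency, and synthetic signals are invented for illustration, and the paper's fringe fitting and gravity-anomaly processing are not reproduced. The recovery vector {0.0060, −0.0034, 0.9860} is the optimized value quoted in the Figure 6 caption.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def recovered_vertical_acceleration(acc_xyz, recovery_vector, fs_hz, cutoff_hz):
    """Project the 3-axis classical accelerometer record onto the recovery
    vector, then low-pass filter the result (zero-phase Butterworth)."""
    a_rec = acc_xyz @ np.asarray(recovery_vector)   # weighted combination
    b, a = butter(4, cutoff_hz / (fs_hz / 2))       # 4th-order low-pass design
    return filtfilt(b, a, a_rec)

# Synthetic accelerometer data for illustration only.
fs = 100.0                                          # assumed sampling rate (Hz)
t = np.arange(0, 60.0, 1.0 / fs)
acc = 0.01 * np.random.randn(t.size, 3)             # sensor noise on all axes
acc[:, 2] += 9.8 + 0.05 * np.sin(2 * np.pi * 0.1 * t)  # mostly vertical signal

a_rec_z = recovered_vertical_acceleration(
    acc, recovery_vector=(0.0060, -0.0034, 0.9860), fs_hz=fs, cutoff_hz=0.5)
print(a_rec_z[:5])
```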
Show Figures

Figure 1. Principle of the joint gravity measurement and how the cross-coupling effect is introduced.
Figure 2. (Color online). AI-based dynamic gravimeter for the marine gravity measurement.
Figure 3. (Color online). Allan standard deviation of the measured gravity value at the National Geodetic Observatory in Wuhan for 2T = 30 ms.
Figure 4. (Color online). Gravity comparison with a shore-based gravity reference site under the mooring state.
Figure 5. (Color online). (a) Trajectory of the survey line during the marine gravity measurement. (b) Power spectral density amplitude of the measured acceleration in the z direction under the mooring state (black dashed line) and the sailing state (red solid line).
Figure 6. (Color online). (a) Calibration curves for the recovery vector components during the gravity survey; 0.99 is subtracted from the z component of the recovery vector for ease of display. (b) Recovered fringe when the recovery vector is set to {0, 0, 1}. (c) Recovered fringe when the recovery vector is set to its optimized value {0.0060, −0.0034, 0.9860}.
Figure 7. (Color online). Data processing of the gravity anomaly. The red solid line in (a–c) represents data from the three survey lines, and the blue dashed line represents the other data collected during the gravity survey. (a) Recovery acceleration a_rec,z(t). (b) Recovery acceleration a_rec,z(t) after the low-pass filter. (c) Calculated motion acceleration a_mot,z(t) after the low-pass filter. (d) Measured gravity anomaly of the AI-based gravimeter (red solid line) and of the classical shipborne strapdown gravimeter (black dotted line).
Figure 8. (Color online). Comparison of the gravity anomaly measurements from the three survey lines (a) before and (b) after deducting the gravity induced by the sea surface height. The inset shows the measured water depth along the three survey lines.
20 pages, 21127 KiB  
Article
Respecting Partial Privacy of Unstructured Data via Spectrum-Based Encoder
by Qingcai Luo and Hui Li
Sensors 2024, 24(3), 1015; https://doi.org/10.3390/s24031015 - 4 Feb 2024
Viewed by 851
Abstract
As Machine Learning as a Service (MLaaS) grows in popularity, users face the risk of exposing sensitive information that is not related to the task at hand, because the data they upload may contain information that is not useful for inference but can still lead to privacy leakage. A straightforward mitigation is to filter out task-independent information before upload. This is feasible for structured data with naturally independent entries, but it is challenging for unstructured data. We therefore propose a novel framework that employs a spectrum-based encoder to transform unstructured data into a latent space and a task-specific model to identify the information essential to the target task. The system was comprehensively evaluated on three benchmark visual datasets and compared with previous works. The results demonstrate that our framework offers superior protection of task-independent information while maintaining the usefulness of task-related information. Full article
(This article belongs to the Special Issue Cognitive Radio Networks: Technologies, Challenges and Applications)
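The framework summarized above encodes each sample into a latent representation, keeps only the dimensions that an indicator marks as relevant to the target task, and fills the remaining dimensions from an arbitrary carrier sample before decoding. The sketch below shows just that masking step on plain NumPy vectors; the encoder, decoder, and indicator are assumed to exist elsewhere, and the latent size and retained indexes are placeholders.

```python
import numpy as np

def filter_latent(z_sample, z_carrier, keep_indexes):
    """Keep the task-relevant latent dimensions of z_sample and take every
    other dimension from the latent code of an arbitrary carrier sample."""
    z_out = np.array(z_carrier, copy=True)
    z_out[keep_indexes] = np.asarray(z_sample)[keep_indexes]
    return z_out

# Illustration with a 10-dimensional latent code; in the paper these codes
# would come from a (beta-)VAE encoder and the indexes from the Indicator.
rng = np.random.default_rng(0)
z_user = rng.normal(size=10)
z_carrier = rng.normal(size=10)
kept = [2, 5, 7]                     # hypothetical indicator-marked dimensions
print(filter_latent(z_user, z_carrier, kept))
```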
Show Figures

Figure 1. (a) Structured data and (b) unstructured data.
Figure 2. The VAE-based encoder maps the raw data to the latent space, and the proposed indicator identifies the relevance of the latent code to the target task and removes irrelevant elements. The subsequent decoder reconstructs the data from the filtered code, preserving the target attributes while obfuscating the remaining attributes.
Figure 3. Different colours represent different attributes in the unstructured data, and the balls represent the factors that affect the attributes.
Figure 4. The workflow of our framework. The top row is the training stage, including the training of the encoder–decoder pair and the Indicator. The bottom row is the test stage. The Indicator recommends the indexes of the representation dimensions that need to be retained, while an arbitrary sample is used as a carrier to supplement the remaining dimensions.
Figure 5. Illustration of how the Indicator works. The Indicator searches for the maximum allowable oscillation range that preserves utility for the task model in the B representation dimensions.
Figure 6. Reconstructed-image visualization obtained by traversing the representation dimensions marked by the Indicator.
Figure 7. The representation dimensions marked by the Indicator are fixed, while the values of the remaining dimensions are replaced with 0. The illustrations show images reconstructed from these processed representations.
Figure 8. The training sets of dSprites and MNIST are each divided into 3 subsets. The illustration shows the parameter curves obtained by training the Indicator on these subsets; dimensions falling into the yellow area are considered more relevant to the task model. In rows 1 and 2, the Indicator marks the disentangled representations generated by β-VAE; in rows 3 and 4, by Factor-VAE; and in rows 5 and 6, by β-TCVAE.
Figure 9. The t-SNE visualization of the AM^(l−1) output. The first column shows the performance of the original data against the attack model. The second and third columns show the anonymously transformed reconstructions against the attack model, with “Eyeglasses” and “Gender” as the task-related attributes, respectively.
Figure 10. Privacy–utility comparison on CelebA. The y-axis shows exp(·) of the evaluation result.
Figure 11. Facial images whose task-independent attributes are confused. In the upper part, “Eyeglasses” is the task-related attribute; in the bottom part, “Gender” is the task-related attribute.
18 pages, 2614 KiB  
Article
Enhanced Noise-Resilient Pressure Mat System Based on Hyperdimensional Computing
by Fatemeh Asgarinejad, Xiaofan Yu, Danlin Jiang, Justin Morris, Tajana Rosing and Baris Aksanli
Sensors 2024, 24(3), 1014; https://doi.org/10.3390/s24031014 - 4 Feb 2024
Cited by 1 | Viewed by 1263
Abstract
Traditional systems for indoor pressure sensing and human activity recognition (HAR) rely on costly, high-resolution mats and computationally intensive neural network-based (NN-based) models that are prone to noise. In contrast, we design a cost-effective and noise-resilient pressure mat system for HAR, leveraging Velostat for intelligent pressure sensing and a novel hyperdimensional computing (HDC) classifier that is lightweight and highly noise resilient. To measure the performance of our system, we collected two datasets, capturing the static and continuous nature of human movements. Our HDC-based classification algorithm shows an accuracy of 93.19%, improving the accuracy by 9.47% over state-of-the-art CNNs, along with an 85% reduction in energy consumption. We propose a new HDC noise-resilient algorithm and analyze the performance of our proposed method in the presence of three different kinds of noise, including memory and communication, input, and sensor noise. Our system is more resilient across all three noise types. Specifically, in the presence of Gaussian noise, we achieve an accuracy of 92.15% (97.51% for static data), representing a 13.19% (8.77%) improvement compared to state-of-the-art CNNs. Full article
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2023)
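Figure 1 below summarizes the hyperdimensional training and inference loop: each sample is encoded into a hypervector, training hypervectors are accumulated into class hypervectors, and a query is assigned to the most similar class. The sketch below implements that loop with a simple random-projection encoder; the dimensionality, synthetic data, and sign-based encoding are illustrative choices, not the paper's exact windowed, ID-based scheme.

```python
import numpy as np

class SimpleHDC:
    """Minimal hyperdimensional classifier: encode, accumulate, compare."""

    def __init__(self, n_features, n_classes, dim=10_000, seed=0):
        rng = np.random.default_rng(seed)
        self.projection = rng.standard_normal((dim, n_features))
        self.class_hvs = np.zeros((n_classes, dim))

    def encode(self, x):
        # Random-projection encoding followed by bipolarization.
        return np.sign(self.projection @ x)

    def train(self, samples, labels):
        # Add each encoded training point to its class hypervector.
        for x, y in zip(samples, labels):
            self.class_hvs[y] += self.encode(x)

    def predict(self, x):
        # Cosine similarity against every class hypervector.
        hv = self.encode(x)
        norms = np.linalg.norm(self.class_hvs, axis=1) * np.linalg.norm(hv)
        sims = (self.class_hvs @ hv) / np.where(norms == 0, 1.0, norms)
        return int(np.argmax(sims))

# Toy usage with two synthetic "pressure map" classes, flattened to vectors.
rng = np.random.default_rng(1)
low = rng.normal(0.0, 1.0, size=(20, 64))
high = rng.normal(2.0, 1.0, size=(20, 64))
model = SimpleHDC(n_features=64, n_classes=2, dim=2000)
model.train(np.vstack([low, high]), [0] * 20 + [1] * 20)
print(model.predict(high[0]))   # expected: 1
```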
Show Figures

Figure 1. (a) HDC training: each training data point is encoded into a hypervector and added to the appropriate class hypervector based on its label. (b) HDC inference: the query data are encoded and compared with all the class hypervectors, and the class with the highest similarity is the prediction result.
Figure 2. Conceptual diagram of the hardware design, including a pressure mat, multiplexers (MUX), analog-to-digital converters (ADC), and a Raspberry Pi (RPi) for data processing and learning.
Figure 3. Voltage at the ADC for load resistors in series with Velostat (see the voltage-divider sketch after this figure list).
Figure 4. Proposed encoding: (a) splitting the original image into smaller windows, (b) encoding a single window using conventional random-projection encoding, (c) ID-based encoding of the original image, and (d) training.
Figure 5. Visualization of the time-series tiptoe data.
Figure 6. Accuracy comparison between 1 and 100 epochs of training across SVM [37], LR [38], MLP [17], CNN [7], baseline HDC [25], and the proposed method (static data on the left, dynamic data on the right).
Figure 7. t-SNE plots illustrating the encoded hypervectors learned by our method for static and continuous data compared to baseline HDC. The plots are generated with D = 10k and D = 200; KL denotes the Kullback–Leibler divergence. The bottom row is our method, where the clusters are more distinguishable.
Figure 8. (a) Comparison of execution time for static and dynamic data. (b) Average energy consumption across SVM [37], LR [38], MLP [17], CNN [7], baseline HDC [25], and the proposed method.
Figure 9. Different kinds of noise in our pressure mat system prototype: sensor, input, communication, and memory noise.
Figure 10. Effect of input noise, (a) shift-right, (b) blurriness, and (c) rotation, on the accuracy across SVM [37], LR [38], MLP [17], CNN [7], baseline HDC [25], and the proposed method for static data.
Figure 11. Effect of sensor noise (white and Gaussian noise on input data) on the accuracy for static data across SVM [37], LR [38], MLP [17], CNN [7], baseline HDC [25], and the proposed method.
Figure 12. Effect of memory and communication noise on accuracy across SVM [37], LR [38], MLP [17], CNN [7], baseline HDC [25], and the proposed method for (a) continuous data and (b) static data. The left, middle, and right columns show the results for packet loss, bitflip, and Gaussian noise, respectively.
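">
Figure 3 in the list above reports the ADC voltage for different load resistors placed in series with the Velostat sheet. That readout follows the standard voltage-divider relation, sketched below; the supply voltage and resistance values are arbitrary placeholders rather than values from the paper.

```python
def adc_voltage(v_supply, r_load, r_velostat):
    """Voltage across the load resistor in a series divider with the
    pressure-dependent Velostat resistance."""
    return v_supply * r_load / (r_load + r_velostat)

# Velostat resistance drops as pressure increases, so the ADC voltage rises.
for r_velostat in (30e3, 10e3, 3e3):   # illustrative resistances in ohms
    v = adc_voltage(v_supply=3.3, r_load=10e3, r_velostat=r_velostat)
    print(f"R_velostat = {r_velostat/1e3:.0f} kOhm -> V_adc = {v:.2f} V")
```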